At the same time, there is Dave Snowden, whose work around Cynefin, micro-narratives, and sense-making speaks about approaching culture change.
In one TEDx talk, Dave speaks about how culture can be shifted. His thesis is that big culture shifts are not controllable and reproducible; culture shifts little by little, changing slowly around the question “How do we get more stories like these and fewer stories like those?”
So, I now realise that culture change is hard and takes time, and all those organisational transformation projects aiming to change culture are, to say the least, difficult.
This made me think about software, #testing and automation, since there is much talk about #automation being either a miracle panacea or something not so good. In this context, I want to check the attributes associated with a shortcut. The obvious one, that of offering faster results in certain contexts, is I guess not up for debate, since computers proved a long time ago that they can be faster than humans at some tasks (maybe all …)
Repeatable – This is the first attribute of a good shortcut, seen as its potential to be used over and over again. On this check, automation in testing seems to match positively: if done right, the same test can be run over and over again. So, automation in testing is repeatable.
Non-harmful – This one means that the shortcut does not have downstream side effects, as in it does no harm. One example the author initially provided was that of tax evasion, which is harmful in the long run. Moving to software and testing, in the context of automation, this is a hard one, since automation is not without side effects when done poorly. This happens because many times people start believing automation alone is good enough, and thus other types of testing are no longer needed, with the end result being poorer-quality products. This is up for debate, as many times automation fails to deliver on its promise.
Additive – This one refers to the shortcut’s ability to provide value every time it is used. Again, this is up for debate: when done well, automation in testing provides good value every time, but when results are not reliable (false positives or false negatives), the value added is actually negative, decreasing confidence while also incurring a maintenance cost.
But let’s also stop and consider that software testing’s goal and reason to exist is providing decision-making information to stakeholders (Jerry Weinberg); a shortcut stops being additive when testing no longer delivers information useful for decision making.
Suitable for crowds – This one refers to whether or not a shortcut is suitable to be actively used by many, by everybody in an extreme case. This scenario is a clear winner for automation in testing, as a network effect is happening as more and more people join this trend. It is thanks to such effects that such a vibrant ecosystem is built, with tools maintained via the OSS model and things moving from independent tools to standards (see WebDriver as a W3C standard).
All in all, it seems automation in testing is a good thing, but it should be approached with caution, especially when it comes to being repeatable and non-harmful.
During the last couple of years I’ve been working (on & off) for a very creative and design-oriented organisation. This tenure helped me learn a lot about design, copywriting, typography and, of course, testing with the aim of delivering a beautiful experience.
Please note the distinction: a beautiful experience, not a mere beautiful website. This is relevant, as an experience also contains the not-so-happy scenarios, and other side aspects that make the user feel catered for, beyond the normal flow.
Since visual aspects are so important, what would a quick starter checklist for this area look like when it comes to #testing?
Layout (how things look and are arranged on the screen)
Is mobile shown OK? Did I check smaller screens (e.g. a 320 px wide viewport)?
Are desktop & large-screen scenarios OK?
Is there a desired orientation?
Is there an orientation lock? Should there be one (is the experience usable in landscape orientation)?
Are all elements visible and clear on all supported breakpoints?
How does it look in high-contrast / night mode on selected browsers?
How does it look when disabling CSS?
How does it look when disabling all scripts?
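The viewport questions above lend themselves to a data-driven check. Below is a minimal sketch of mapping a viewport width to the layout it should get; the breakpoint names and widths (320/768/1280 px) are illustrative assumptions, not from any particular design system, and a real suite would drive a browser (e.g. via WebDriver) at each width instead of just computing the bucket:

```python
# Hypothetical breakpoints, keyed by the minimum viewport width (px)
# at which each layout kicks in. Purely illustrative values.
BREAKPOINTS = {
    "mobile": 320,    # smallest viewport we claim to support
    "tablet": 768,
    "desktop": 1280,
}

def active_breakpoint(viewport_width: int) -> str:
    """Return the name of the widest breakpoint that still fits."""
    best = None
    for name, min_width in sorted(BREAKPOINTS.items(), key=lambda kv: kv[1]):
        if viewport_width >= min_width:
            best = name
    if best is None:
        raise ValueError(f"{viewport_width}px is below the smallest supported width")
    return best
```

A test run would then iterate these widths, resize the browser window to each, and assert the expected layout is the one actually rendered.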
Typography (fonts & how text renders)
Is the font typeface correct?
Is the font size & weight correct?
Are spacing & kerning according to the intended design?
Are special character modifiers shown correctly? (e.g. accents)
If needed, are RTL languages supported by the font family?
Copy (the words themselves)
Is the copy correct?
Is the copy adapted to all supported locales and regions? (e.g. localisation)
Is the copy flowing nicely on all layout viewports?
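The localisation check is one of the easiest to automate: every supported locale should provide every copy key. A small sketch, using made-up stand-in dictionaries (in practice these would be loaded from the project’s translation files):

```python
# Stand-in copy dictionaries; "fr" is deliberately missing "cta"
# so the check below has something to flag.
COPY = {
    "en": {"title": "Welcome", "cta": "Sign up"},
    "de": {"title": "Willkommen", "cta": "Registrieren"},
    "fr": {"title": "Bienvenue"},
}

def missing_copy_keys(copy_by_locale, reference_locale="en"):
    """Per locale, list keys present in the reference locale but absent there."""
    reference = set(copy_by_locale[reference_locale])
    return {
        locale: sorted(reference - set(strings))
        for locale, strings in copy_by_locale.items()
        if reference - set(strings)
    }
```

Run as part of the build, an empty result means every locale is complete; anything else names exactly which translations are missing.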
Media & assets
Are images clear & crisp?
Are videos playing on all supported browsers?
How about audio? Do we have an audio track, and is the video working as intended when autoplay is blocked?
Are assets of a decent size? (think of perceived performance)
How does it look when images cannot be loaded? (graceful fallback)
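The perceived-performance question can be pinned down with a size budget per asset kind. The budgets and asset list below are illustrative assumptions; real numbers would come from the build output or a CDN report:

```python
# Hypothetical budgets, in bytes, per asset kind.
BUDGET_BYTES = {
    "image": 200 * 1024,        # ~200 KB per image
    "video": 5 * 1024 * 1024,   # ~5 MB per video
}

def over_budget(assets):
    """Return the names of assets exceeding the budget for their kind.

    `assets` is a list of (name, kind, size_in_bytes) tuples;
    kinds without a budget are never flagged.
    """
    return [
        name
        for name, kind, size in assets
        if size > BUDGET_BYTES.get(kind, float("inf"))
    ]
```

Hooked into CI, a non-empty result fails the build before a heavy hero image quietly degrades the experience.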
Accessibility
Is the contrast OK for reading by visually impaired persons?
Do all images have a relevant alt-text description set for each locale?
How does the tested piece look in high-contrast mode?
How do things look when using the “large font” options on devices?
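The contrast question, at least, does not need a human eyeball: WCAG 2.x defines a contrast ratio you can compute from the two sRGB colours involved. A sketch of that formula (WCAG AA asks for at least 4.5:1 for normal body text):

```python
def relative_luminance(rgb):
    """Relative luminance of an (r, g, b) colour, channels 0-255,
    per the WCAG 2.x sRGB linearisation formula."""
    def linearise(channel):
        c = channel / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearise(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio between two colours, always >= 1."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)
```

Black on white scores the maximum 21:1; feeding this the actual colour pairs pulled from the rendered page turns “is the contrast OK?” into a pass/fail check.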