At the same time, there is Dave Snowden, whose work around Cynefin, micro-narratives, and sense-making speaks about approaching culture change.
In one TEDx talk, Dave speaks about how culture can be shifted. His thesis is that big culture shifts are not controllable and reproducible, and that culture shifts little by little, changing slowly around the question “How do we get more stories like these and fewer stories like those?”
So, I do now realise that culture change is hard and takes time, and all those organisational transformation projects aiming to change culture are difficult, to say the least.
This made me think about software, #testing and automation, since there is much talk about #automation being either the miracle panacea or something not so good. In this context, I want to check the attributes associated with a shortcut. The obvious one, that of offering faster results in certain contexts, is I guess not up for debate, since computers proved a long time ago that they can be faster than humans at some tasks (maybe all …)
Repeatable – This is the first attribute of a good shortcut, seen as its potential to be used over and over again. On this check, automation in testing seems to match positively: if done right, the same test can be run over and over again. So, automation in testing is repeatable.
Non-harmful – This one means that the shortcut does not have downstream side effects, as in it does no harm. One example the author initially provided was that of tax evasion, which is harmful in the long run. Moving to software and testing, in the context of automation, this is a hard one, since automation is not without side effects when done poorly. Many times people start believing automation alone is good enough and thus other types of testing are no longer needed, with the end result being poorer-quality products. This is up for debate, as many times automation fails to deliver on its promise.
Additive – This one refers to the shortcut’s ability to provide value every time it is used. Again, this is up for debate: when done well, automation in testing provides good value every time, but when results are not reliable (false positives or false negatives), the value added is actually negative, decreasing confidence while also incurring a maintenance cost.
But let’s also stop and consider that software testing’s goal and reason to exist is providing decision-making information to stakeholders (Jerry Weinberg); automation stops being an additive shortcut when testing no longer delivers information useful for decision making.
Suitable for crowds – This one refers to whether or not a shortcut is suitable to be actively used by many, by everybody in an extreme case. This scenario is a clear winner for automation in testing, as a network effect is happening as more and more people join this trend. It is thanks to such effects that such a vibrant ecosystem is built, with tools maintained via the OSS model and things moving from independent tools to standards (see WebDriver as a W3C standard).
All in all, it seems automation in testing is a good thing, but it should be approached with caution, especially when it comes to being non-harmful and additive.
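To make the “repeatable” attribute concrete, here is a minimal sketch of a deterministic automated check (the function under test and its expected value are hypothetical): given the same input, it produces the same verdict on every run.

```python
def discounted_price(price: float, percent: int) -> float:
    """Hypothetical function under test: apply a percentage discount."""
    return round(price * (100 - percent) / 100, 2)

def test_discount_is_applied() -> bool:
    # Deterministic input and output: this check is repeatable by design.
    return discounted_price(200.0, 15) == 170.0

# Run the same check many times; a repeatable test gives one stable verdict.
verdicts = {test_discount_is_applied() for _ in range(1000)}
assert verdicts == {True}
```

A check that depends on time, shared state or network responses would lose this property, which is exactly where the non-harmful and additive questions come back into play.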
During the last couple of years I’ve been working (on & off) for a very creative and design-oriented organisation. This tenure helped me learn a lot about design, copywriting, typography and, of course, testing with the aim of delivering a beautiful experience.
Please note the distinction: a beautiful experience, not a mere beautiful website. This is relevant, as an experience also contains the not-so-happy scenarios and other side aspects that make the user feel catered for, beyond the normal flow.
Since visual aspects are so important, what would a quick starter checklist for this area look like when it comes to #testing?
Layout (how things look and are arranged on the screen)
Is mobile shown OK? Did I check smaller screens? (e.g. a 320 px wide viewport)
Are desktop & large screens scenarios OK?
Is there a desired orientation?
Is there an orientation lock? Should it be (is the experience usable on landscape orientation)?
Are all elements visible and clear on all supported breakpoints?
How does it look in high-contrast / night mode on selected browsers?
How does it look when disabling CSS?
How does it look when disabling all scripts?
Is the font typeface correct?
Is the font size & weight correct?
Are spacing & kerning according to the intended design?
Are special char modifiers shown correctly? (e.g. accents)
If needed, are RTL languages supported by the font family?
Is the copy correct?
Is the copy adapted to all supported locales and regions? (e.g. localisation)
Is the copy flowing nicely on all layout viewports?
Are images clear & crisp?
Are videos playing on all supported browsers?
How about audio? Do we have an audio track, and is the video working as intended in auto-play-blocked scenarios?
Are assets of a decent size? (think of perceived performance)
How does it look when images cannot be loaded? (graceful fallback)
Is the contrast OK for reading by visually impaired persons?
Do all images have a relevant alt description set for each locale?
How does the tested piece look in high-contrast mode?
How do things look when using the “large font” options on devices?
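Part of the contrast questions above can even be scripted. Below is a sketch of the WCAG 2.x contrast-ratio computation, following the WCAG relative-luminance definition; WCAG AA asks for at least 4.5:1 for normal text.

```python
def _linear(channel: int) -> float:
    # sRGB channel (0-255) to linear value, per the WCAG relative-luminance formula
    c = channel / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    r, g, b = (_linear(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Black text on a white background gives the maximum possible ratio, 21:1
assert round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1) == 21.0
```

Fed with the colours extracted from the rendered page, such a helper can turn “Is the contrast OK?” into a pass/fail check.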
Recently someone shared this image with me via an instant messaging app, with some added comments. Later on, I realised there is another perspective to this shot: that of the learning process.
From a learning perspective, this pricing model makes perfect sense. Every “container” has a finite capacity, be it mind, organisation or anything accumulating something. At some point, there is no room left to add new things, and some or all of the old things need to be moved out.
In a mind and learning context, these two bottles are similar to two entities. One is already populated with some knowledge (useful or not so much) or behaviours. The other one is “blank”, ready to learn and accumulate new knowledge and new behaviours. In order for the one on the right to accumulate, from something new and potentially more useful, the same amount as the one on the left, at least some of the existing content needs to go away in some form or another.
There is also another aspect: the quality of the content that accumulates. The blank one is more prone to acquire and develop a higher quality from the new content put in, as opposed to the one on the right, where even after a flush, some residual elements still influence the new content.
I do now realise that such a metaphor is valid not only for knowledge, but also for behaviour and, on a more mundane level, even for any codebase or existing application.
What do I take away from this?
Consider the impact of unlearning things;
Become aware of the value of void or blank states;
Account for the influence of existing “content” on the newly added “content”;
Take care of the old code, tests and other artefacts, so that they do not become so stale that they spoil and turn for the worse anything new that is added.
In case one does not know: since May 2018, all EU citizens are covered by a new personal data regulation, defined as
Regulation (EU) 2016/679 of the European Parliament and of the Council, the European Union’s (‘EU’) new General Data Protection Regulation (‘GDPR’), regulates the processing by an individual, a company or an organisation of personal data relating to individuals in the EU.
The rules don’t apply to data processed by an individual for purely personal reasons or for activities carried out in one’s home, provided there is no connection to a professional or commercial activity. When an individual uses personal data outside the personal sphere, for socio-cultural or financial activities, for example, then the data protection law has to be respected.
For internet users, the #GDPR regulation most often takes the shape of the cookie consent tool (see below). As one can see, more clearly in the second one, these used a range of dark patterns to lure the user into accepting the cookies. Most times, the checkboxes were already ticked, nudging the user to accept (the easiest way out: implied consent).
This hidden default-consent practice was not clearly defined in the regulation’s text. Meanwhile, national regulators pushed, and a trial was launched in Germany. That trial’s ruling was escalated up to the EU’s Court of Justice. The court’s ruling is available on the CURIA website and states clearly the path forward.
Storing cookies requires internet users’ active consent. A pre-ticked checkbox is therefore insufficient.
As software testers, we should know what to expect in terms of behaviour from a web app or web page, even if the provided documentation does not state this clearly. Basically, a defect should be raised if the default option for the user is “Accept”. Also, a defect should be raised if the “Minimal only” path is not clearly visible. The EU’s Court of Justice site offers a good example (they, of all others, should set the example).
Down the line, one can perform usual tests for cookie handling.
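As a sketch of what such a check could look like, the snippet below scans consent-banner markup for pre-ticked checkboxes using Python’s standard-library HTML parser; the markup and field names are purely illustrative.

```python
from html.parser import HTMLParser

class PreTickedFinder(HTMLParser):
    """Collects the names of checkboxes rendered pre-ticked, which the
    CJEU ruling deems insufficient for cookie consent."""
    def __init__(self) -> None:
        super().__init__()
        self.pre_ticked: list[str] = []

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "input" and attributes.get("type") == "checkbox" and "checked" in attributes:
            self.pre_ticked.append(attributes.get("name", "<unnamed>"))

# Illustrative consent-banner markup (hypothetical field names)
banner = """
<form id="cookie-consent">
  <input type="checkbox" name="necessary">
  <input type="checkbox" name="analytics" checked>
  <input type="checkbox" name="marketing">
</form>
"""
finder = PreTickedFinder()
finder.feed(banner)
# A pre-ticked non-essential category like this is defect material
assert finder.pre_ticked == ["analytics"]
```

In a real test, the markup would come from the rendered page (e.g. the WebDriver page source) rather than a string literal.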
Almost 15 years have passed since I started working in the software testing field, and I believe some things remain pretty much the same. One of them is how a tester steps into a project.
I have come to believe that how a project is on-boarded can shape in many ways how the project will unfold, which is one more reason why investing in good project on-boarding is really important.
Since testing is at the core about asking questions and investigating answers, my prep list is in fact a list of questions that a tester should answer when stepping in a project.
Please bear with me 🙂 as I tend to use interchangeably product and project, as to me they both blend as a deliverable (a word that I really do not like)
What is the goal of the project?
Who are the intended beneficiaries of the project’s final deliverables? (or in other words, “Whose lives do we aim to impact?”)
What is the product intended to do?
What is the product intended NOT to do?
What deliverables are expected to come out of the project’s work?
Are there any other types of users not mentioned? (think operations, support, maintenance, administrators, special kind of users)
Where will the source code be stored?
Do I, as a tester, have access to that location (e.g. repository, shared folder)? If not, how can I get at least read access?
Can I point to the testing environments?
Is there more than one testing stage environment? (e.g. staging, UAT)
Where will it be hosted? (e.g. user’s machine, company’s infrastructure, cloud provider)
Where can I find the project’s requirements?
Where can I find the design references? (needed for visual testing)
Can I run the project on my local machine?
Where will we track the defects?
Are there other testers involved?
Do we test for accessibility? (#a11y)
Do we test for multiple locales? (#i18n)
What is the intended and preferred environment to use this product? (e.g. mobile, desktop, tablet, other)
What devices do we see the users using this product on?
What are the main user journeys?
Do we have error messages covered? Can I reproduce every error with a designed error message?
Are there any other competing products I need to reference against?
Where do we track and share our team’s work? (e.g. Team board)
Will there be security testing?
What data do I need to prepare for testing?
What personal information will the user put in, and what needs to be tracked later according to #GDPR?
Are there any legal aspects I should cover during testing? (e.g. ensure that they are mentioned)
To whom should I assign defects for triage?
Are there some other third parties to interface with?