Five for Friday

Moving Past Shallow Incident Data – A look at why we should dig deeper into incident data

The Art of Unlearning – A throwback to a topic that I also like to write & think about

Stop Trying to Raise Successful Kids – Good piece on an oblique path to reaching success

Software Quality, Bugs and SLAs – Quality takes many shapes, and this is a good read on some of them

Yes, You Should Estimate Software Projects – In all these #NoEstimates discussions, there is also a middle ground

Five for Friday

Reasons the latest Apple OS releases are so buggy – https://tidbits.com/2019/10/21/six-reasons-why-ios-13-and-catalina-are-so-buggy/

New reasons to worry about your smart speaker – https://www.technologyreview.com/f/614602/smart-speakers-can-be-hijacked-by-apps-that-spy-on-users/

How to read books when you believe you do not have time for it (inspired to share this by a recent study of how much people in my country read) – https://hbr.org/2019/04/8-ways-to-read-the-books-you-wish-you-had-time-for

Inspiring TDD checklist from one of the fathers of TDD – https://medium.com/@kentbeck_7670/test-desiderata-94150638a4b3

Initiative – https://seths.blog/2019/10/initiative/

Culture does not move fast (nor should it)

“People like us do things like this” is the definition of culture that Seth Godin puts forward in much of his material, or at least this is how I read it 🙂

At the same time there is Dave Snowden, whose work around Cynefin, micro-narratives, and sense-making speaks about how to approach culture change.

In one TEDx talk, Dave speaks about how culture can be shifted. His thesis is that big culture shifts are not controllable and reproducible, and that culture shifts little by little, changing slowly around the question “How do we get more stories like these and fewer stories like those?”

So, I now realise that culture change is hard and takes time, and all those organisational transformation projects aiming to change culture are difficult, to say the least.

Automation and #testing – Is it a shortcut?

I believe I mentioned this before around here, but I started following the work of Seth Godin and I particularly like his Akimbo podcast.

In an episode I recently stumbled upon, he was talking about shortcuts and put forward what he believes are the criteria for a good shortcut, beyond it simply being what the dictionary says:

shortcut

noun short·cut | \ ˈshȯrt-ˌkət also -ˈkət \

“a method or means of doing something more directly and quickly than and often not so thoroughly as by ordinary procedure”

https://www.merriam-webster.com/dictionary/shortcut
Shortcut through a secret bookshelf door – Photo by Stefan Steinbauer on Unsplash

This made me think about software, #testing and automation, since there is much talk about #automation being either the miracle panacea or something not so good. In this context, I want to check automation against the attributes associated with a shortcut. The obvious one, that of offering faster results in certain contexts, is I guess not up for debate, since computers proved a long time ago that they can be faster than humans at some tasks (maybe all …)

Repeatable – This is the first attribute of a good shortcut, seen as its potential to be used over and over again. On this check, automation in testing scores positively: if done right, the same test can be run over and over again. So, automation in testing is repeatable.
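
To make the repeatability point concrete, here is a minimal sketch of my own (not from the podcast) of a deterministic check: the same input produces the same verdict on every run. The function and values are hypothetical, for illustration only.

```python
# Hypothetical pricing function and a repeatable pytest-style check.

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    return round(price * (1 - percent / 100), 2)


def test_discount_is_applied():
    # Deterministic: same input, same result, no matter how often it runs.
    assert apply_discount(100.0, 20) == 80.0
```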

Non-harmful – This one means that the shortcut does not have downstream side effects, as in it does no harm. One example the author initially provided was tax evasion, which is harmful in the long run. Moving to software and testing, in the context of automation this is a hard one, since automation is not without side effects when done poorly. Many times people start believing automation alone is good enough and that other types of testing are no longer needed, with the end result being poorer quality products. This is up for debate, as many times the automation fails to deliver on its promise.

Additive – This one refers to the shortcut’s ability to provide value every time it is used. Again this is up for debate: when done well, automation in testing provides good value every time, but when results are not reliable (false positives or false negatives), the value added is actually negative, decreasing confidence while also incurring a maintenance cost.
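
As a hedged illustration of how the value can turn negative (a sketch I am adding, not something from the episode): a check tied to wall-clock timing passes or fails depending on the environment, producing false alarms that cost triage time instead of adding information.

```python
import random
import time


def test_response_is_fast():  # a flaky check: avoid this pattern
    start = time.time()
    time.sleep(random.uniform(0.05, 0.15))  # stands in for a real call
    # Fails intermittently even though nothing is broken: a false alarm
    # that erodes confidence on every red run.
    assert time.time() - start < 0.1
```

A more additive version would assert on behaviour (the response content) rather than on timing.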

But let’s also stop and consider that software testing’s goal and reason to exist is providing decision-making information to stakeholders (Jerry Weinberg); the shortcut stops being additive when testing no longer delivers information useful for decision making.

Suitable for crowds – This one refers to whether or not a shortcut is suitable to be actively used by many, by everybody in an extreme case. This scenario is a clear winner for automation in testing, as a network effect kicks in as more and more people join the trend. It is thanks to such effects that such a vibrant ecosystem has been built, with tools maintained via the OSS model and things moving from independent tools to standards (see WebDriver becoming a W3C standard).

All in all, it seems automation in testing is a good thing, but should be considered with caution, especially when it comes to being repeatable and non-harmful.

In the end, what’s your take on this?

A visual #testing checklist

During the last couple of years I’ve been working (on & off) for a very creative and design-oriented organisation. This tenure helped me learn a lot about design, copywriting, typography and, of course, testing with the aim of delivering a beautiful experience.

Please note the distinction: a beautiful experience, not a mere beautiful website. This is relevant, as an experience also contains the not-so-happy scenarios and other side aspects that make the user feel catered for, beyond the normal flow.

Since visual aspects are so important, what would a quick starter checklist for this area look like when it comes to #testing?

Layout (how things look and are arranged on the screen)

  • Is mobile shown OK? Did I check smaller screens (e.g. a 320 px wide viewport)?
  • Are desktop & large screen scenarios OK?
  • Is there a desired orientation?
  • Is there an orientation lock? Should there be one (is the experience usable in landscape orientation)?
  • Are all elements visible and clear on all supported breakpoints? (see the sketch after this list)
  • How does it look in high-contrast / night mode on selected browsers?
  • How does it look when disabling CSS?
  • How does it look when disabling all scripts?
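
For the breakpoints item above, here is a minimal automation sketch, assuming Selenium with Python; the URL and the "#main-nav" selector are placeholders, and the window size only approximates the viewport.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Example breakpoints; take the real ones from the project's design system.
BREAKPOINTS = [(320, 568), (768, 1024), (1366, 768), (1920, 1080)]

driver = webdriver.Chrome()
try:
    for width, height in BREAKPOINTS:
        driver.set_window_size(width, height)  # approximates the viewport
        driver.get("https://example.com")  # placeholder URL
        # "#main-nav" is a hypothetical key element to spot-check.
        nav = driver.find_element(By.CSS_SELECTOR, "#main-nav")
        assert nav.is_displayed(), f"Navigation hidden at {width}x{height}"
        # Rough check that the page does not overflow horizontally.
        scroll_width = driver.execute_script(
            "return document.documentElement.scrollWidth")
        assert scroll_width <= width, f"Horizontal overflow at {width}px"
finally:
    driver.quit()
```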

Typography

  • Is the font typeface correct?
  • Is the font size & weight correct? (see the sketch after this list)
  • Are spacing & kerning according to the intended design?
  • Are special char modifiers shown correctly? (e.g. accents)
  • If needed, are RTL languages supported by the font family?
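
For the typeface, size and weight items, a script can read the computed style of a rendered element and compare it with the design reference. A minimal sketch, again assuming Selenium with Python; the selector and the expected values are examples, not real project data.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com")  # placeholder URL
    heading = driver.find_element(By.CSS_SELECTOR, "h1")  # hypothetical element
    style = driver.execute_script(
        "var s = getComputedStyle(arguments[0]);"
        "return {family: s.fontFamily, size: s.fontSize, weight: s.fontWeight};",
        heading)
    # Expected values are examples; take them from the design reference.
    assert "Helvetica" in style["family"]
    assert style["size"] == "32px"
    assert style["weight"] in ("700", "bold")
finally:
    driver.quit()
```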

Copy

  • Is the copy correct?
  • Is the copy adapted to all supported locales and regions? (e.g. localisation)
  • Is the copy flowing nicely across all layout viewports?

Assets

  • Are images clear & crisp?
  • Are videos playing on all supported browsers?
  • How about audio? Do we have an audio track, and does the video work as intended when autoplay is blocked?
  • Are assets of a decent size? (think of perceived performance)
  • How does it look when images cannot be loaded? (graceful fallback; see the sketch after this list)
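
For the graceful fallback item, a quick way to catch broken images is to read each image’s intrinsic width: 0 usually means the asset failed to load (it can also mean the image has not finished loading, so run this after the page settles). A sketch assuming Selenium with Python and a placeholder URL.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com")  # placeholder URL
    for img in driver.find_elements(By.TAG_NAME, "img"):
        # naturalWidth is 0 when the image asset failed to load.
        natural_width = driver.execute_script(
            "return arguments[0].naturalWidth", img)
        assert natural_width > 0, f"Broken image: {img.get_attribute('src')}"
finally:
    driver.quit()
```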

Accessibility

  • Is the contrast OK for reading by visually impaired persons? (see the contrast sketch after this list)
  • Do all images have a relevant alt description set for each locale?
  • How does the tested piece look in high-contrast mode?
  • How do things look when using the “large font” options on devices?
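
The contrast item can be checked numerically: WCAG 2.x defines the contrast ratio as (L1 + 0.05) / (L2 + 0.05) over the relative luminance of the lighter and darker colours, with 4.5:1 the minimum for normal text at level AA. A small self-contained sketch (the example colours are my own):

```python
# WCAG 2.x contrast ratio between two sRGB colours.

def _linearise(channel: int) -> float:
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4


def relative_luminance(rgb) -> float:
    r, g, b = (_linearise(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b


def contrast_ratio(fg, bg) -> float:
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)


# Example: dark grey text (#444444) on a white background.
ratio = contrast_ratio((68, 68, 68), (255, 255, 255))
assert ratio >= 4.5, f"Contrast too low for normal text (AA): {ratio:.2f}:1"
print(f"Contrast: {ratio:.2f}:1")  # roughly 9.7:1, comfortably above 4.5:1
```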

How can I help make meetings more effective?

From the start, I have to say that I believe meetings can be both shorter and more effective.

One thing I can do for this is prepare better, and answering some questions beforehand helps.

Here are some of the ground rules I believe should be used in all meetings

Ground Rules:

  • The core point is to show respect for your fellow team members.
  • No email or surfing the web.
  • No side conversations (via IM etc)
  • No cellphones or blackberries
  • Join the meeting on time (Global Crossing issues aside)

And these are some of the questions that can build the agenda or guide us through the meeting.

Questions that can be used to build the agenda for Sprint closing meetings (Sprint Review and Sprint Retrospective)

  1. What was our commitment for the sprint?
  2. What was our final set of completed items?
  3. What is our demonstration of these items?
  4. What was our final burn-down chart?
  5. What did we learn about our estimates?
  6. What changes are there in the Product backlog items, priorities or estimates?
  7. What documented issues/concerns were we able to address? (i.e. from our last retrospective).
  8. What are our current issues/concerns?
  9. What worked well that we’d do again?
  10. What practices would we alter or drop?
  11. What new practices would we want to introduce next sprint?
  12. Based on our last two weeks’ work, what appreciations do we have for individuals? Someone in or outside the team who had a particularly valuable effect on our success.
  13. What is our Action Plan?
  14. Have we met the purpose we set out today?

Questions that can be used to build the agenda for Sprint opening meetings (Sprint Planning)

  1. What is the Sprint’s Goal?
  2. What is the backlog’s top item?
  3. Are the important backlog items clear?
  4. Do we have all the information, and are all the pieces in place, for us to start working on them right away?
  5. Do we know how to test it?
  6. What is the experiment to be run to prove this thing is not working?
  7. Is there any other entity that should be included in the conversation before we start work?
  8. Can we estimate this item?
  9. What’s the estimate or sizing for this item?
  10. Is the team’s capacity filled up? If not, how much of it is filled up?
  11. What events do we expect to happen during the upcoming sprint that might derail the team or steal some of its time and focus?

Things that inspired me? This article

Unlearning can be hard

Recently someone shared this image with me via an instant messaging app, with some added comments. Later on, I realised there is another perspective to this shot: that of the learning process.

Sometimes, unlearning something has a cost. An open, ready-to-learn mind is worth more!

From a learning perspective, this pricing model makes perfect sense. Every “container” has a finite capacity, be it a mind, an organisation or anything that accumulates something. At some point, there is no room left to add new things, and some or all of the old things need to be moved out.

In a mind and learning context, these two bottles are similar to two entities. One is already populated with some knowledge (useful or not so much) or behaviours. The other one is “blank”, ready to learn and accumulate new knowledge and new behaviours. In order for the populated one to take in the same amount of something new, and potentially more useful and better, as the blank one, at least some of the existing content needs to go away in some form or another.

There is also another aspect: the quality of the content that accumulates. The blank one is more apt to acquire and develop a higher quality from the new content that is put in, as opposed to the populated one, where even if one flushes it, some residual elements still influence the new content.

I now realise that such a metaphor is valid not only for knowledge, but also for behaviour and even, on a more mundane level, for any codebase or existing application.

What do I take away from this?

  • Consider the impact of unlearning things;
  • Become aware of the value of void or blank states;
  • Account for the influence of existing “content” on the newly added “content”
  • Take care of the old code, tests and other artefacts, so that they do not become so stale that they spoil and turn for the worse anything new that is added

Testing for #GDPR in the latest context

In case one does not know, since May 2018 all EU citizens are covered by a new personal data regulation, defined as follows:

Regulation (EU) 2016/679 of the European Parliament and of the Council, the European Union’s (‘EU’) new General Data Protection Regulation (‘GDPR’), regulates the processing by an individual, a company or an organisation of personal data relating to individuals in the EU.

https://ec.europa.eu/info/law/law-topic/data-protection/reform/what-does-general-data-protection-regulation-gdpr-govern_en

It doesn’t apply to the processing of personal data of deceased persons or of legal persons.

https://ec.europa.eu/info/law/law-topic/data-protection/reform/what-does-general-data-protection-regulation-gdpr-govern_en

The rules don’t apply to data processed by an individual for purely personal reasons or for activities carried out in one’s home, provided there is no connection to a professional or commercial activity. When an individual uses personal data outside the personal sphere, for socio-cultural or financial activities, for example, then the data protection law has to be respected.

https://ec.europa.eu/info/law/law-topic/data-protection/reform/what-does-general-data-protection-regulation-gdpr-govern_en

For internet users, the #GDPR regulation most often takes the shape of the cookie consent tool (see below). As one can see, more clearly in the second exhibit, these tools used a range of dark patterns to lure the user into consenting to the cookies. Most times, the checkboxes were already ticked, nudging the user to accept (the easiest way out of the dialog being this implied consent).

Cookie consent request – exhibit 1
Cookie consent request – exhibit 2

This hidden default consent practice was not clearly defined in the regulation’s text. Meanwhile, national regulators pushed, and a trial was launched in Germany. That trial’s ruling was escalated up to the EU’s Court of Justice. The court’s ruling is available on the CURIA website and clearly states the path forward.

Storing cookies requires internet users’ active consent
A pre-ticked checkbox is therefore insufficient

https://curia.europa.eu/jcms/upload/docs/application/pdf/2019-10/cp190125en.pdf

As a software tester, one should know what to expect in terms of behaviour from a web app or web page, even if the provided documentation does not state this clearly. Basically, a defect should be raised if the default option for the user is “Accept”. Also, a defect should be raised if the “Minimal only” path is not clearly visible. The EU’s Court of Justice site offers a good example (they, of all sites, should set the example).

Down the line, one can perform the usual tests for cookie handling.
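
As a starting point, the pre-ticked checkbox rule from the court’s ruling translates directly into an automated check. A minimal sketch, assuming Selenium with Python; the URL and the banner selector are hypothetical and need adapting to the actual consent tool under test.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com")  # placeholder URL
    # Hypothetical selector for non-essential consent checkboxes.
    boxes = driver.find_elements(
        By.CSS_SELECTOR, ".cookie-banner input[type='checkbox']")
    assert boxes, "No consent checkboxes found; adapt the selector"
    for box in boxes:
        # Active consent means nothing may be pre-ticked by default.
        assert not box.is_selected(), "Pre-ticked consent checkbox found"
finally:
    driver.quit()
```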

A Tester’s new project prep

Almost 15 years have passed since I started working in the software testing field, and I believe some things remain pretty much the same; one of them is how a tester steps into a project.

I have come to believe that, many times, how a project is on-boarded shapes in many ways how the project will unfold, which is one more reason why investing in good new-project on-boarding is really important.

Since testing is, at its core, about asking questions and investigating answers, my prep list is in fact a list of questions that a tester should answer when stepping into a project.

Please bear with me 🙂 as I tend to use product and project interchangeably, since to me they both blend into a deliverable (a word that I really do not like).

  • What is the goal of the project?
  • Who are the intended beneficiaries of the project’s final deliverables? (or, in other words, “Whose life do we aim to impact?”)
  • What is the product intended to do?
  • What is the product intended NOT to do?
  • What deliverables are expected to come out of the project’s work?
  • Are there any other types of users not mentioned? (think operations, support, maintenance, administrators, special kinds of users)
  • Where will the source code be stored?
  • Do I, as a tester, have access to that location (e.g. repository, shared folder)? If not, how can I get at least read access?
  • Can I point to the testing environments?
  • Is there more than one testing stage environment? (e.g. staging, UAT)
  • Where will it be hosted? (e.g. user’s machine, company’s infrastructure, cloud provider)
  • Where can I find the project’s requirements?
  • Where can I find the design references? (needed for visual testing)
  • Can I run the project on my local machine?
  • Where will we track the defects?
  • Are there other testers involved?
  • Do we test for accessibility? (#a11y)
  • Do we test for multiple locales? (#i18n)
  • What is the intended and preferred environment to use this product? (e.g. mobile, desktop, tablet, other)
  • What devices do we see the users using this product on?
  • What are the main user journeys?
  • Do we have error messages covered? Can I reproduce every error with a designed error message?
  • Are there any other competing products I need to reference against?
  • Where do we track and share our team’s work? (e.g. Team board)
  • Will there be security testing?
  • What data do I need to prepare for testing?
  • What personal information will the user put in, and what must we later track according to #GDPR?
  • Are there any legal aspects I should cover during testing? (e.g. ensure that they are mentioned)
  • To whom should I assign defects for triage?
  • Are there any other third parties to interface with?