QA Paradise
Written by: @Anna Kovácsová
When I first talked with Ondra, who hired me, about QA at Pricefx, it sounded to me like a QA paradise. So I was a little skeptical – of course, I'm in QA, I have to be skeptical! Honestly, it's even better than he described. And yes, there are negatives, which I will mention below. But let me start with the positives.
As @Florent Dotto mentioned in his post, we use the Agile approach, we have several teams, and QAs are respected members of those teams. I highlighted the word intentionally because I often hear other QAs saying that developers do not respect them, or developers saying that the QA role is useless because they can check what they developed by themselves, so QA is just not needed. We don't have this attitude at Pricefx at all; testing is a valuable part of the process, and I've never heard anyone here say anything like "hey, QA is not needed". It's perfect, and it's one of the reasons why I like working here.
Now let me describe the release process and the related infrastructure from a QA perspective. We have two major releases per year, plus a minor release every month. And we have four (or rather five) environments that can be used for testing and development: develop, staging, master, and oldstable. Master is the version used in production, the most recent stable version. Oldstable is an older, also stable, version (still used by some customers). Staging and develop are the versions used for development. What we do on staging is meant to go out in the next minor release, so usually bug fixes or very important features that some customer is waiting for. Develop is meant for the big new features that will ship in the next major release. Now, where is the fifth environment I mentioned? Well, this is part of the QA paradise – the fifth environment is called templates, and it's used as (simply put) storage for our data and configurations. We don't do any testing or development there, but when we want to store something permanently, we save it there, and every night everything from templates is propagated to all of the above-mentioned environments. This means we can do whatever testing, playing, and breaking we like, and we know it's safe because the next day everything will be "fixed" back to the default state. And of course, we can run the restore job manually if needed.
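The nightly restore can be pictured roughly like this. This is a minimal, self-contained sketch for illustration only – all paths, file names, and the demo setup are my assumptions, not Pricefx's actual tooling:

```shell
#!/bin/sh
# Hypothetical sketch of the nightly "restore from templates" job.
# All paths and names here are illustrative assumptions.
set -e
BASE="$(mktemp -d)"   # demo root; in reality this would be a fixed server path

# templates holds the permanently stored "golden" data and configuration
mkdir -p "$BASE/templates"
echo "default-config" > "$BASE/templates/config.txt"

# simulate a day of testing leaving develop in a broken state
mkdir -p "$BASE/envs/develop"
echo "broken-by-testing" > "$BASE/envs/develop/config.txt"

# nightly job: wipe each environment and restore it from templates
for env in develop staging master oldstable; do
  target="$BASE/envs/$env"
  rm -rf "$target"
  mkdir -p "$target"
  cp -R "$BASE/templates/." "$target/"
done

# develop is back to the default state
cat "$BASE/envs/develop/config.txt"
```

The key design point is that the restore is a blind overwrite: nothing done in the test environments survives the night unless it was deliberately saved to templates first.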
I do manual testing; that means I test mostly the front end, and mostly the way a user would use the application. Although manual testing can be considered a lengthy process, it's needed here because Pricefx is highly configurable software, and it usually takes time to understand the context of a bug/feature and the configuration needed to be able to test it. But we also do test automation; we have a whole team dedicated to it. We use http://cypress.io and, when it makes sense (e.g., for a commonly used configuration), we try to cover the use case with an automated test. When I am testing an issue and I think it should be covered by automation, I just add a label in JIRA, and the folks from the automation team will find the issue and can cover it.
Now for the negative part – as I already mentioned, Pricefx is highly configurable software. In practice, this means there is an endless number of combinations that could (and ideally should) be tested. Everybody knows there is no such thing as 100% test coverage. On the other hand, we have customers saying "hey, we have this specific configuration with these parameters, and it doesn't work!". How do we find a balance? How do we avoid going crazy and still satisfy the customers? Especially before a major release, when we do regression testing: with so many combinations, it can take a month of testing (during which we do nothing but go through the application, checking every single feature, clicking every single button, etc.) – that sounds terrible, I know!
Right now, we are trying to cover as many regular/common scenarios as possible with automation so that manual testers can concentrate on the configurations and don't have to spend time on "easy" use cases. It will take time, but we are getting there. I think one of the most valuable qualities of a good manual tester is intuition – we just somehow know (or feel, if you like) where the critical places that should be examined are. Of course, it's based on experience: we know which areas the developers have touched, so we know where to direct our attention.
The good thing is that we are aware of what is not ideal, and we work on it. I have been with Pricefx for almost three years, and we have already improved the process a lot. Florent started his article with F-words; I would like to end with them – when we see that something doesn't work, or it works but isn't perfect, we just change it to make it better. It's a continuous process, but that's the point: being Fast, Flexible and Friendly at the same time.