
This article debunks common myths and clarifies truths about software testing and quality assurance. It explains that software testing does not directly improve the quality of the software, but rather provides important information for stakeholders to make decisions. It also highlights the complexities and considerations of automated testing, the skills required to be a good tester, and the limitations of achieving zero bugs in software. The article emphasizes the need for testers on projects, the importance of testing after upgrades, and the role of software testing within the broader scope of quality assurance.

Quality Assurance (QA) testing is a critical component of the software development lifecycle, but there are several myths that persist about the practice. Here are some common misconceptions:

  1. QA Is Just Testing: Many people believe that QA is synonymous with testing, but QA is much broader. Testing is one aspect of QA, which encompasses all activities designed to ensure that a product meets the specified requirements and is free of defects. While testing focuses on identifying bugs in the software, QA aims to improve the processes to prevent defects from occurring in the first place [1].

  2. QA Is Expensive: There's a myth that QA is always expensive because it's thought to delay business processes or involve unnecessary roles and tools. In reality, QA should be seen as an investment; it can save money in the long run by catching defects early, which are much more costly to fix after deployment [2].

  3. Testing Can Be Done by Anyone: Some believe that testing doesn't require special skills and can be done by anyone. However, professional testers have a deep understanding of software development, testing methodologies, and tools. They are skilled at designing test cases that effectively find bugs and are critical thinkers who can anticipate where and how a software might fail.

NOTE: Being a good tester requires a set of skills and a mindset that not everybody has: analytical skills to understand the software, technical skills to grasp the details of the solution, thoroughness, and critical thinking combined with an eye for the big picture. It often also means understanding the customer's business and, on top of that, being a good communicator who understands the project's development process.

  4. Complete Testing Is Possible: It's a myth that software can be tested completely. Given the complexity of modern software, it's impossible to test every scenario and combination of inputs. Instead, risk-based approaches are used to focus on the most critical areas [3].

  5. Automated Testing Replaces Manual Testing: While automated testing is an efficient way to execute repetitive test cases, it cannot replace the insights and nuances that come with manual testing. Automated tests are written based on assumptions about how the software should work, but manual testers can explore the software in ways that automated tests cannot.

NOTE: Consider the bigger picture and ask some questions first:

  • Do we have a large data set to test, and is the application stable? If yes, we should seriously consider test automation, because it lets us run regression tests faster and with more test data than we could cover manually. However, someone will still be needed to evaluate the reports from automated tests, report bugs, and maintain the automated scripts.

  • Who will create those automated tests? Not every tester can create them, because they usually require special technical skills. Do we have a suitable resource?

  • Is the application under test stable? If not, automated testing is usually very expensive and manual testing is much cheaper, because unstable software forces the automated tests to be modified and upgraded over and over again.

  • Do we have the right automation tool, and are we ready to pay for licenses? Test automation needs a dedicated tool; some are free or very cheap, but others are very expensive. First we should try the tool and check whether it suits our software; then we must evaluate whether we can afford to buy it.

NOTE: Test automation is quite a complex topic. It can help, but it can also be very expensive, much more so than continuing with manual testing. So think about your project's situation.
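To make the trade-off above concrete, here is a minimal sketch of the kind of data-driven automated regression test that pays off when the application is stable and the data set is large. The function under test (`calculate_discount`) and its business rule are hypothetical, invented purely for illustration; a real project would test its own code with its own data.

```python
# Minimal sketch of a data-driven automated regression test using pytest.
# The function under test and its discount rule are hypothetical examples.
import pytest


def calculate_discount(order_total: float, is_member: bool) -> float:
    """Hypothetical business rule: members get 10% off orders over 100."""
    if is_member and order_total > 100:
        return round(order_total * 0.9, 2)
    return order_total


# One automated run covers many input combinations -- far more than a
# manual tester could re-check on every regression cycle.
@pytest.mark.parametrize("total,is_member,expected", [
    (50.0, True, 50.0),      # below threshold: no discount
    (150.0, True, 135.0),    # member over threshold: 10% off
    (150.0, False, 150.0),   # non-member: no discount
    (100.0, True, 100.0),    # boundary value: threshold is exclusive
])
def test_member_discount(total, is_member, expected):
    assert calculate_discount(total, is_member) == expected
```

Once written, such a suite runs in seconds on every build, but note the maintenance cost from the bullets above: someone still has to review the results and update the cases whenever the business rule changes.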

  6. QA Is Only Needed at the End: Some people believe that QA should only occur at the end of the development process. However, best practices suggest that QA should be integrated throughout the development lifecycle. This allows for continuous feedback and the early detection of issues, which aligns with Agile methodologies.

  7. A Bug-Free Product Is the Sole Objective of QA: While creating a product free of bugs is a goal, QA's objective is broader—it includes ensuring that the product is reliable, user-friendly, performs well under various conditions, and meets customer expectations. Quality is not just about being bug-free but also about delivering value to the user.

NOTE: Having zero bugs in the software is NEVER the goal of software testing, for two main reasons:

  1. Time is limited: The software is highly complex, with numerous inputs, outputs, and functionalities, making it impractical to test every possible combination and behavior within project timelines. Customers are often unwilling to allocate extensive time for testing, so we must be efficient and pragmatic. We employ test analysis techniques to prioritize the discovery of critical bugs and utilize tools to examine aspects of the software that are not visible through the user interface. While we strive to ensure that the software meets requirements, operates without bugs, and satisfies our customers, absolute certainty about the absence of bugs in the software is unattainable.

  2. We don’t control everything: Our software depends on components that are entirely beyond our control, from the operating system (like Windows or macOS) to internet browsers, CRM, ERP, and various other systems. Their updates are also beyond our control; in many cases, we can't even choose whether to install them. While we assume they will not impact the functionality of our software, we can't be certain. Essentially, it's similar to the previous point: we're doing our best, but a bug outside of our software can still cause our software to stop working.

  8. Software testing will improve the quality: Consider a scenario in which software testers identify a bug, report it, and assign it to the developer. However, with the software deployment to the customer scheduled for the following week, stakeholders decide to postpone the bug fix and deploy the software despite the reported bug. As a result, the bug remains unfixed, and the quality of the software is unchanged. This highlights that software testing does not directly improve the quality of the software. Instead, its purpose is to provide essential information about the software, including the identification of bugs, so that stakeholders can make timely and informed decisions.

  9. Testing after upgrades isn’t needed: Every piece of software undergoes upgrades periodically for various reasons, such as new features, bug fixes, security patches, and UI enhancements. It's important to acknowledge that software upgrades are a normal part of the process. Testing after upgrades is crucial, especially for business-critical software with extensive data and complex calculations.

IMPORTANT: While it's not necessary to test the entire software from scratch, it's ESSENTIAL to retest selected business-critical use cases to ensure everything works correctly after the upgrade. Test automation can greatly assist with testing after upgrades, particularly for stable business cases that don't change much over time.
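A post-upgrade retest of this kind can be as small as a named list of business-critical checks that is re-run after every upgrade. The sketch below illustrates the idea; the checked functions (`vat`, `monthly_invoice_total`) and their figures are hypothetical placeholders, not a real system's logic.

```python
# Minimal sketch of a post-upgrade smoke suite: re-run only the selected
# business-critical checks rather than the whole test set. The functions
# and their expected values are hypothetical, for illustration only.

def vat(net: float, rate: float = 0.21) -> float:
    """Hypothetical tax calculation that must survive every upgrade."""
    return round(net * rate, 2)


def monthly_invoice_total(items: list[float], vat_rate: float = 0.21) -> float:
    """Hypothetical business-critical calculation: net total plus VAT."""
    net = sum(items)
    return round(net + vat(net, vat_rate), 2)


# The short list of use cases deemed business-critical for this product.
CRITICAL_CHECKS = [
    ("VAT on 100.00", lambda: vat(100.0) == 21.0),
    ("Invoice total for 100 + 50", lambda: monthly_invoice_total([100.0, 50.0]) == 181.5),
]


def run_post_upgrade_smoke() -> bool:
    """Run the critical checks, print a PASS/FAIL line for each, return overall result."""
    ok = True
    for name, check in CRITICAL_CHECKS:
        passed = check()
        print(f"{'PASS' if passed else 'FAIL'}: {name}")
        ok = ok and passed
    return ok
```

Because these stable business cases rarely change, they are also good first candidates for automation, exactly as the note above suggests.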

  10. Developers can do unit testing: This concept often arises when there is significant pressure to reduce the project cost. At first glance, it may seem reasonable, right? In some cases, it can indeed be effective—especially when the customer is experienced with testing on IT projects and is willing to conduct thorough testing independently. However, in most cases, it does not work well.

WARNING: There are several differences between developers and testers:

  1. Testing coverage: The developer usually tests only the requirement they developed, typically using very simple scenarios. Testers, on the other hand, explore more complex scenarios, perform edge testing, and conduct regression testing for multiple requirements simultaneously. Therefore, when testers are not involved in the project, the customer can expect to encounter more bugs.

  2. Test management: Testers usually document their activities in a simple manner. They typically utilize a test management tool (such as X-Ray) to create test cases and record results, making it easy to review them later or reuse them if necessary. The test cases created can also serve as a reference for the customer's testing. In cases where testing is solely conducted by developers, test management is usually entirely absent from the project.
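The coverage difference in point 1 can be illustrated with a small sketch. The validator below and its rule are invented for illustration: the developer's check exercises one happy path, while a tester adds boundaries, empty input, and invalid characters for the same function.

```python
# Hypothetical sketch contrasting a developer's happy-path check with the
# edge cases a tester would add for the same (invented) username validator.
import re


def is_valid_username(name: str) -> bool:
    """Hypothetical rule: 3-20 characters, letters, digits, or underscore."""
    return bool(re.fullmatch(r"[A-Za-z0-9_]{3,20}", name))


# Developer's typical check: one simple, expected scenario.
assert is_valid_username("alice")

# Tester's additions: boundaries, empty input, whitespace, non-ASCII.
assert not is_valid_username("")             # empty input
assert not is_valid_username("ab")           # just below minimum length
assert is_valid_username("a" * 20)           # upper boundary accepted
assert not is_valid_username("a" * 21)       # just past the boundary
assert not is_valid_username("alice smith")  # space not allowed
assert not is_valid_username("žofie")        # non-ASCII rejected by the rule
```

The happy-path assertion passes for both roles; it is the boundary and invalid-input cases that tend to surface the bugs the customer would otherwise find during UAT.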

NOTE: As customers are typically not experienced software testers, they often need help with the UAT phase. When UAT approaches, a lot of questions arise, such as:

  • What do UAT test cases look like?

  • What should we cover during UAT?

  • Is there a tool we can use?

  • How should the UAT phase be organized?

While software testers are usually able to assist the customer with these questions and facilitate a smoother UAT phase, developers typically cannot offer the same level of support, and the customer must manage the UAT on their own. Given the critical importance of the UAT phase to the project, this can significantly elevate project risks.

These myths highlight a misunderstanding of the comprehensive nature of QA and its importance in producing high-quality software. Dispelling these myths helps organizations better understand the value of QA and integrate it effectively into their development processes.

Truth#1: Software testing will make the project more expensive

Well, to be honest, it can be true, especially when we look only at project team costs. When there are QA people or software testers on the project team, somebody must pay for them, right? But this way of thinking can lead to questioning other roles on the project team (like the PM and SA). Wouldn't the best solution be one super-experienced developer who leads the project, communicates with the customer, proposes the solution, develops it, tests it, and demos it for the customer? A kind of superman, right? Of course not. Every role on the project team has its own purpose and requires different skills. If we remove any role from the team, we increase the risk that the customer will find bugs during UAT and that the project will not finish on time. Is that risk really what we want, and are we ready for the consequences?

 

Truth#2: Software testing is part of QA

Here we run into a terminology issue, so you will probably find many definitions. In general, the difference between software testing and QA (quality assurance) is that software testing is focused only on testing the particular software. Software testers often work like this: here is the software specification, so I'll do the test analysis, prepare test cases, then test the software and report bugs. QA, on the other hand, covers not just software testing but the whole testing process on the project—from clarifying requirements and defining the test strategy to testing, creating reports for the customer, and often even giving software demonstrations and helping the customer with UAT. This can differ from company to company, so it's definitely a good idea to clarify expectations. Usually, expectations of a QA team are higher than of software testers, especially when it comes to communication skills and responsibility.
