
Here you can find general QA terminology that is often used on projects, with examples of when and how each term can be used effectively. It should make conversation between team members easier and ensure that everyone is on the same page (smile). When discussing terminology with partners or customers, please make sure you both understand the terms in the same way - that will eliminate misunderstandings later.

A general overview of QA terminology and of testing in general can be found here:

https://www.istqb.org/downloads/category/2-foundation-level-documents.html

/wiki/spaces/CUST/pages/1899496760

Bug/Defect - anything that is not working as it should (smile) (based on a requirement, general knowledge of the system, etc.)

Black box testing (including equivalence partitioning, boundary value analysis, decision tables, use case testing) - testing with no knowledge of the code or of how the functionality was developed. Testing is based on the requirement. This way of testing leaves space to question whether the implemented solution matches what the requirement stated, because the tester is not influenced by the developer's thinking.

Techniques often used in black box testing include boundary value analysis, state transition testing, and decision tables; a worked example follows this paragraph (please refer to https://www.istqb.org/downloads/category/2-foundation-level-documents.html or any other source to get familiar with these techniques).
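
A minimal sketch of boundary value analysis combined with equivalence partitioning, assuming a hypothetical rule that a discount is valid only between 0 and 50 percent (the function and the rule are illustrative, not part of PFX):

    # Boundary value analysis / equivalence partitioning sketch.
    # Hypothetical rule: a discount is valid only in the range 0-50.
    def is_valid_discount(pct):
        return 0 <= pct <= 50

    # Partitions: below the range, inside it, above it.
    # Boundaries: just outside, on, and just inside each edge.
    cases = [(-1, False), (0, True), (1, True),
             (49, True), (50, True), (51, False)]

    for value, expected in cases:
        assert is_valid_discount(value) == expected, value

Each partition contributes one representative value, and each boundary contributes the values on and around it, which keeps the case list small but systematic.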

White box testing - testing with some knowledge of the code or of the way the functionality was configured. It is often used on projects with incomplete or missing documentation, or when you join a running project and need to get involved in the process quickly.

Example: The requirement does not sufficiently describe which tables should be used, how the final price should be calculated, or what the base for the calculation is. In this case you need to ask the developer or check the code yourself. The ideal solution would be to ask the customer, but that is not always an option.
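
As a sketch of how reading the code drives the tests, assume a hypothetical two-branch price calculation (calculate_final_price is illustrative, not a PFX function); knowing the branches exist tells you that two tests are the minimum:

    # White box sketch: one test per branch found in the code.
    def calculate_final_price(base, customer_type):
        if customer_type == "KEY_ACCOUNT":   # branch 1: key-account discount
            return base * 0.90
        return base                          # branch 2: everyone else

    assert calculate_final_price(100.0, "KEY_ACCOUNT") == 90.0
    assert calculate_final_price(100.0, "STANDARD") == 100.0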

Experience-based testing - testing based on general experience with the PFX system.

Example: From experience you know that a quote can't be submitted while an error is present, so you verify this even when the requirement does not state it explicitly.

Exploratory testing - the practice of verifying the functionality of a system without using a predefined script.

Example: It is often used at PFX, as test cases are not always required and there is not always enough time to prepare them in advance. Once the user story/task is assigned to you, you can start testing directly and write down the test process afterwards (unless agreed otherwise).

UAT (User Acceptance Testing) - performed by the end user or the client to verify/accept the software system before the whole system (or a part of it) is moved to production. UAT is done in the final phase of testing, after functional, integration, and system testing are finished.

Example: Usually the customer performs UAT at the end of the sprint (or simply after the functionality is delivered) to confirm that everything was developed according to the requirement and works correctly. Before the final go-live there is another round of UAT with more end users to build confidence in the system.

Regression testing - after new functionality is introduced, it needs to be checked that everything related to it still works.

Example: A new calculation for quotes needs to be reflected in the quote outputs. We need to check that all other calculations still work and that the order of elements has not changed (see the sketch below).
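
One common regression approach, sketched with made-up names and values: keep a baseline of the existing results and assert that the current run still matches it.

    # Regression sketch: compare current results to a stored baseline
    # so that calculations unrelated to the new feature stay unchanged.
    baseline = {"list_price": 100.0, "margin": 0.25, "freight": 8.0}

    def current_results():
        # Stub standing in for the recalculated quote on the new build.
        return {"list_price": 100.0, "margin": 0.25, "freight": 8.0}

    results = current_results()
    for field, expected in baseline.items():
        assert results[field] == expected, field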

Retesting (confirmation testing) - after a bug is fixed, it needs to be verified that the functionality really works.

Smoke testing - covers the most important functionality of a component or system; it is used to assess whether the main functions of the software appear to work correctly.

Example: If feature testing is done on the DEV environment, a smoke test needs to be run after the deployment to QA to ensure that the deployment was successful (no PP or PX table was forgotten, the logic deployment did not fail, etc.). Use the happy flow for this test; a scripted sketch follows.
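
A minimal smoke-test sketch to run right after the deployment; the base URL and the paths are placeholders for whatever health checks the project exposes, not real PFX endpoints:

    # Smoke test sketch: only checks that the main functions respond.
    import requests

    BASE = "https://qa.example.com"   # hypothetical QA environment URL

    for path in ("/health", "/api/quotes", "/api/pricelists"):
        response = requests.get(BASE + path, timeout=10)
        assert response.status_code == 200, path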

Happy flow - the main scenario that passes end to end. It is often used for a quick verification that the functionality works.

Example: The user needs to create an LPG, add products, recalculate, change a value in a PP table, recalculate the LPG again, and check how the monitored field changed.
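
The same steps can be scripted; in the sketch below every function is a hypothetical stub standing in for the real UI or API action, so only the shape of the flow is real:

    # Happy-flow sketch: create, fill, recalculate, change, recalculate.
    def create_lpg(name): return {"name": name, "products": [], "result": None}
    def add_products(lpg, skus): lpg["products"] += skus
    def recalculate(lpg): lpg["result"] = len(lpg["products"]) * 10

    lpg = create_lpg("Happy flow LPG")
    add_products(lpg, ["SKU-1", "SKU-2"])
    recalculate(lpg)
    before = lpg["result"]
    add_products(lpg, ["SKU-3"])      # stands in for the PP table change
    recalculate(lpg)
    assert lpg["result"] != before    # the monitored value changed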

Epic - a large user story that cannot be delivered as defined within a single iteration, or one that is large enough to be split into smaller user stories.

Example: The customer wants to develop brand new calculation logic for the final price calculation in a Price list.

User story - a specific task within an epic.

Example: The epic states that brand new calculation logic for the Price list needs to be developed; the user stories define the individual parts of the process. For example, User story 1: Create the specific PP table needed for the calculation; User story 2: Create the calculation flow that triggers the logic.

Root cause - the main reason why the functionality is not working.

Test case - a set of actions performed on a system to determine whether it satisfies the software requirements and functions correctly. One possible structured form is sketched below.
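
There is no single required format, but a test case is often captured as structured data like the illustrative record here (the field names are one possible convention, not a standard):

    # Illustrative test case record: preconditions, steps, expected result.
    test_case = {
        "id": "TC-101",
        "title": "Quote cannot be submitted with a validation error",
        "preconditions": ["User is logged in", "Quote Q-1 exists"],
        "steps": [
            "Open quote Q-1",
            "Clear the mandatory 'Customer' field",
            "Click Submit",
        ],
        "expected": "Submit is blocked and a validation error is shown",
    }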

Use case - a typical way in which the customer will use the system.

Example: A user opens a contract, enters a predefined price list, changes a value on a line item, and saves the contract.

Scrum vs. Kanban

Scrum: teams commit to shipping working software in set intervals called sprints.

Kanban: Work items are represented visually on a kanban board, allowing team members to see the state of every piece of work at any time.

Performance - how a system performs in terms of responsiveness and stability under a particular workload.

Example: DataMarts tend to have performance issues when certain filters are applied (a simple way to measure this is sketched below).
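
A rough sketch of how such an issue can be made measurable; query_datamart is a stub standing in for the real filtered DataMart query, and the 500 ms budget is an assumed number, not a PFX requirement:

    # Performance sketch: average response time under a simulated workload.
    import time

    def query_datamart(filters):
        time.sleep(0.01)              # stub for the real query
        return []

    start = time.perf_counter()
    for _ in range(100):              # 100 calls as the workload
        query_datamart({"region": "EMEA"})
    elapsed = time.perf_counter() - start
    assert elapsed / 100 < 0.5, "average response above the 500 ms budget"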

Deployment - it basically means getting your software ready for actual use. In its strictest sense, software deployment refers to the release of the final version of the software or an app to a specific environment (QA, PROD…)

Example: After some functionality has been tested successfully on the DEV environment, it is deployed to QA for UAT testing.
