
Note

This document is for internal use. It describes the configuration process (admin rights for the project are required) and the expected usage on projects. This updated version contains detailed step-by-step guidelines on how to create all issue types, link them to the proper test cases, and record test results. Screenshots from a real Jira project will be added.

What is X-Ray

X-Ray is an add-on module for JIRA. It introduces new issue types for software testing to your project and allows you to organize and document test cases and test results directly in JIRA.

Why X-Ray

X-Ray allows us to keep testing in JIRA – Excel, Confluence and other tools will remain only as a backup solution.

...

When X-Ray is used on the project from the beginning, it brings us the following benefits:

  • Better traceability – test cases are directly linked to user stories, so we can easily see what is already covered by testing

  • Coverage of functionalities by sets of tests – the Test Set issue type can group test cases, so we can group several tests for one purpose (one functionality, one sprint, regression testing, …)

  • Test results are in JIRA – they are transparent to everybody on the project, so we can see problems with particular functionalities more easily

  • QA stays visible to the PM and all project roles, even to the customer

Installation and configuration

X-Ray is already installed – nothing needs to be done here. I’ve checked with our IT that it is not a trial version, so we can use it on our projects.

The configuration was tried on the Lightning project – the project started with a standard configuration, was used for several weeks, and then X-Ray was configured there. The previous project configuration was not changed or affected by X-Ray, so configuring X-Ray seems to be safe even on running projects.

Configuration has 4 steps (I asked Partik Sebek from IT to do this, but if you are an admin of the project, you should be able to do it yourself):

  • Add issue types to the project:

XRay Settings -> Summary

...

  • Activate all issue types for the project:

...

  • Test Coverage configuration:

Based on your project settings, add the issue types which will be covered by tests to the test coverage configuration

...

  • Defect Mapping Configuration:

Add Bug as an issue type we use for reporting incorrect behaviour:

...

And that’s all - configuration is finished.

Basic explanation of terminology

Test Plan

  • an “umbrella” for all tests on the project

  • all Tests should be linked to the Test Plan (to see all tests ready for testing)

  • Test Executions should be associated with Test Plan as well (to see all executed Tests and their results)

  • All Tests associated with the Test Plan are displayed in the Test Plan with their latest result (Passed / Failed), so the Test Plan gives a good overview of what was tested and what we still need to test

Test

  • Means “Test Case” according to standard methodology

  • Allows us to add test steps and clarify expected results of the test

  • Can be created either from a requirement (Task, Story, …) or directly as a new issue – if possible, it is recommended to create Tests from the requirement so there is a direct link between the specification and test coverage

Test Execution

  • Group of tests prepared for testing

  • Tests which should be executed must be linked to the Test Execution

  • One or more Test Executions can be linked to Test Plan

  • A Test Execution needs to be assigned to a tester

  • If a Test fails during execution, it is possible to create a bug directly from the Test Execution. The Bug will then be linked to the Test

Test Set

  • This issue type can be used for grouping tests together, for example according to different functionalities

  • Tests from a Test Set can be easily imported into a Test Execution – if new tests were added to the Test Set, only those new Tests are added to the Test Execution

  • The same can be done in the Testing Board – directories can be used to create a structure and organize tests so it is clear which functionality they are testing

Preconditions

  • A Precondition defines what needs to be fulfilled so the Test can be executed

  • One Precondition can be linked to multiple Tests

  • It seems to be useful for more complex tests; I recommend not using this issue type unless it is necessary, so we keep our testing simple

Sub Test Execution

  • Similar to a Test Execution; it seems we will not need this issue type for our testing
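
The relationships between these issue types can also be queried with JQL. The snippet below is only a sketch: the function names are assumptions based on Xray's JQL functions for Jira Server/Data Center, and the issue keys are placeholders, so verify both against the Xray version installed on our instance.

    # Hypothetical JQL snippets illustrating how the Xray issue types relate to
    # each other; function names and issue keys are placeholders - verify them
    # in your own Jira instance before relying on them.
    example_queries = {
        "Tests in a Test Plan": 'issue in testPlanTests("PROJ-10")',
        "Tests in a Test Execution": 'issue in testExecutionTests("PROJ-20")',
        "Tests in a Test Set": 'issue in testSetTests("PROJ-30")',
    }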

General recommendations for testing with X-Ray:

  • Create Tests from the requirement so there is a link between the Test and the requirement and we can check test coverage (see the coverage query sketch after this list)

  • Create the Test Plan at the beginning of the project, so it is ready for associating Tests and Test Executions

  • Test Executions should be created at least for every Feature Sprint and they must be linked to the Test Plan

  • During testing, create Bugs directly from the Test Execution so there is a correct link between the requirement, the Test and the Bug
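
To make the first recommendation concrete, the sketch below queries Jira's search REST API for the Tests covering one user story. It assumes Xray's requirementTests() JQL function is available on the instance; the base URL, credentials and issue key are placeholders, so treat this as a starting point to adapt rather than a definitive implementation.

    import requests

    JIRA_BASE = "https://jira.example.com"       # placeholder Jira base URL
    STORY_KEY = "PROJ-123"                       # placeholder user story key

    # requirementTests() is an Xray JQL function (availability depends on the
    # installed Xray version) that returns the Tests covering a requirement.
    jql = f'issue in requirementTests("{STORY_KEY}")'

    resp = requests.get(
        f"{JIRA_BASE}/rest/api/2/search",
        params={"jql": jql, "fields": "key,summary,status"},
        auth=("jira.user", "password-or-token"),  # placeholder credentials
        timeout=30,
    )
    resp.raise_for_status()
    for issue in resp.json()["issues"]:
        print(issue["key"], "-", issue["fields"]["summary"])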

Advantages:

  • Tests are created and documented in JIRA

  • Test results and test progress are visible

  • The Test Plan contains an overview of test results, and the overview is visible to the PM at any time

  • Reports from X-Ray are available and can be shared with the customer

  • Tests can be created directly from Tasks or Stories, so test coverage can be seen easily

  • Tests can be easily shared with the team – for review, or with the support team

  • When there is a change in a requirement – it is easy to know which Tests must be updated because they are linked to the changed requirement

  • Nothing can be lost – it is in JIRA

  • Regression tests or smoke tests can be easily grouped into Test Sets and their results will be visible to the team – currently it is not clear where to store them in JIRA because they are usually not linked to any task

Disadvantages:

  • As with all test management tools, X-Ray must be carefully configured so it is usable for practical testing – it can easily become too time-consuming

  • Think about making fields mandatory

  • Think about how much detail you describe in test steps

  • The default names of test cases are not clear; a naming convention should be used

  • Using X-Ray can be quite time-consuming if the QA Analyst doesn’t know how to use it effectively

  • Late changes in the structure of Tests can be difficult

Expected usage on projects

Keep a simple basic approach:

  • Create everything as soon as possible (at the beginning of your allocation on the project) – all administrative tasks and all test cases

  • Keep it simple – don’t create too much administration

  • Prioritize – remember that your time is limited; you will not have enough time for everything you would like to test, so do the most important things first

  • Check with the PM – but be ready to test officially according to the prepared test cases at the end of the Feature Sprint (not during development), and use Sub-tasks instead of Bugs during development

  • Help the customer use X-Ray; it will give us a better opportunity to communicate with the customer and have basic info about UAT in our JIRA

Detailed approach during the Project:

At the beginning of the Feature Sprint

  1. Create Test Plan for the Feature Sprint

  2. Create a structure of directories in the Test Repository according to the User Stories (press the + button in the Test Repository)

  3. Create test cases (issue type: Test) from the user story tasks – navigate to the User Story task and check the Test Coverage section. You can see the Create New Sub Test Execution and Create New Test buttons there, so press the Create New Test button and create a new test case. After the test is created, check that the user story task shows “NOTRUN” in the Test Coverage section instead of “Uncovered” – this is the sign that the Test and the User Story are linked together. Create test cases for all User Stories.

  4. Create test steps for all test cases – either manually (by pressing the Create Step button) or by importing from a CSV file (by pressing Import and choosing From csv… ). Note that test steps can only be added to test cases after the test cases were created in the previous step. Test steps should contain at least Action and Expected Result; Data should be filled in if necessary (a sample CSV layout is sketched after this list).

  5. Ask CE or SA to review your test cases (not mandatory). Since the test cases are now linked to user stories, they are easily visible to the team for review.

  6. Add test cases to the right directories in the Test Repository (select the directory – right-click – choose Add Tests – click the Search tab – select the right user story in Covering – click the Search button – click the Add Selected button). This will help you easily see how many test cases you created for the Feature Sprint and easily add all test cases to the Test Plan.

  7. Add test cases to the Test Plan for the Feature Sprint. This needs to be done so that the results of the test cases are shown in the Test Plan later. An easy way to add test cases to the Test Plan is to display the Test Plan and click Add – Tests in the Tests section. This will lead you to the Add Tests to Test Plan window; choose Folder from Test Repository. You can tick the Include Sub-folders check box to include all tests in sub-directories.

  8. Check that all test cases were correctly added to the Test Plan – you should see them in Overall Execution Status.

  9. Create Test Sets for every user story and add all relevant Tests to the Test Sets. To do this, press Add Test in the Tests section of the Test Set, then press Search and choose the right Folder (you can use the Include sub-folders checkbox). Press Search and the Add Selected (or Add All) button. All test cases should now be assigned to the right Test Set. (Test Sets should group test cases covering the same functionality.)

  10. Create a Test Execution for every Test Set (they should be created for all Test Sets you created in step 9) – you can use the + button and then choose “Test Execution” as the issue type.

  11. After the Test Executions are created, add the relevant tests to them – press Add – Tests from Test Sets and in the Select Issue list select the right Test Set. Press the Add Selected button to add all tests from the selected Test Set.

  12. Check that the test cases were added to the Test Execution – you should see them in the Tests section and the Overall Execution Status should also be displayed in the Test Execution. Repeat this for all Test Executions you created in the previous step.

  13. Link all Test Executions to the Test Plan for the Feature Sprint. You can do this by displaying the Test Plan and pressing Add Test Executions in the Test Executions section. In Select Issues you can select all the Test Executions you want to add to the Test Plan.

  14. Check that all Test Executions were correctly added and linked to the Test Plan – you should see them in the Test Executions section of the Test Plan.
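
As mentioned in step 4, test steps can be imported from a CSV file instead of being typed in manually. Below is a minimal sketch of what such a file could look like, written out with Python; the column names follow the Action / Data / Expected Result fields described above, but the exact headers and delimiter expected by the import dialog are assumptions to verify against your Xray version.

    import csv

    # Two sample test steps; the Action / Data / Expected Result columns mirror
    # the step fields described in step 4 - confirm the exact header names and
    # the delimiter in the "Import -> From csv..." dialog of your Xray version.
    steps = [
        {"Action": "Open the login page", "Data": "", "Expected Result": "Login form is displayed"},
        {"Action": "Log in with a valid test user", "Data": "user: qa1", "Expected Result": "Dashboard is displayed"},
    ]

    with open("test_steps.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["Action", "Data", "Expected Result"])
        writer.writeheader()
        writer.writerows(steps)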

During the Feature Sprint

  1. Test according to prepared test cases, but you don’t have to mark all test cases as passed or failed – since the user story is still under development, you can also log bugs as Sub-tasks

  2. When the functionality (user story) is developed, you can test officially according to the test cases and log all results – check with the PM whether you should log test results officially or wait until the end of the Feature Sprint. If the results should be logged officially, display the Test Execution and press the Run button on the right side. When the detailed test case is displayed, click the green, yellow or red button for every test step (depending on whether the step passed, is being executed, or failed). If a test step failed, click + to add a bug directly from the test step. The final test result is displayed automatically based on the results of all test steps. (A hedged REST sketch for logging results in bulk follows after this list.)

  3. Repeat the previous step for all Test Executions you need to test in the Feature Sprint (which are associated with the Test Plan)

  4. Check that the Test Plan contains the results of all test cases created for the Feature Sprint.
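
Results are normally logged through the Run screen as described in step 2 above, but if many results need to be recorded at once (for example from an automated run), Xray's REST API can import them in bulk. The sketch below assumes the Xray Server/DC import endpoint and JSON payload format; the endpoint path, status values, base URL, credentials and issue keys are all assumptions to verify against the Xray REST documentation for the installed version.

    import requests

    JIRA_BASE = "https://jira.example.com"        # placeholder Jira base URL

    # Assumed Xray Server/DC endpoint for importing execution results; verify the
    # path and payload fields against the REST documentation of your Xray version.
    payload = {
        "testExecutionKey": "PROJ-101",           # existing Test Execution (placeholder key)
        "tests": [
            {"testKey": "PROJ-55", "status": "PASS"},
            {"testKey": "PROJ-56", "status": "FAIL", "comment": "Step 3 failed, see linked bug"},
        ],
    }

    resp = requests.post(
        f"{JIRA_BASE}/rest/raven/1.0/import/execution",
        json=payload,
        auth=("jira.user", "password-or-token"),   # placeholder credentials
        timeout=30,
    )
    resp.raise_for_status()
    print("Imported results:", resp.json())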

At the end of the Feature Sprint

  1. Execute all Test Executions (including those already executed during the Feature Sprint) and log Bugs – display the Test Execution and press the Run button on the right side. When the detailed test case is displayed, click the green, yellow or red button for every test step (depending on whether the step passed, is being executed, or failed). If a test step failed, click + to add a bug directly from the test step. The final test result is displayed automatically based on the results of all test steps.

  2. Check that the results of all Test Executions are displayed in the Test Plan for the Feature Sprint.