Overview

What needs to be retested? And how can we rerun tests?

These are frequent questions that teams face.

Usually, the need to rerun tests comes from one of the following:

  • tests got blocked and could not be finished for some reason (e.g., test environment issues, dependencies, time or resource constraints)
  • tests failed because of some temporary issues (e.g., load on test environment, wrong data used)
  • we don't trust the previous recorded results for some reason
  • and most frequently, because a new code change was made and a new build was delivered to the target test environment; in this case, we are talking about regression testing


Sometimes, testers also want to run the same tests for different environments, for different configurations, or using different data. This is somewhat different, though, from rerunning the tests in the strict sense (i.e., using the same context, including applicable data).


How are tests usually run in the first place?

Xray is quite a flexible tool; that means testers can schedule a new execution for Tests in different ways:

  • from a story/requirement issue screen, creating a (Sub) Test Execution for all the Tests that cover it, or just for the Tests whose latest result was a given one (e.g., FAIL)
  • from a Test issue screen, creating a Test Execution for it; it can be ad hoc or planned (e.g., linked to an existing Test Plan)
  • using the Test Repository, creating a Test Execution for the selected Tests
  • using the Test Plan or its Board, creating a Test Execution for the selected Tests, or just for the Tests whose latest result was a given one in the scope of that Test Plan
  • using unplanned Test Executions, i.e., creating Test Executions with some Tests but without being linked to any Test Plan


Common usage

Most often, teams use Test Plans to plan and track their testing. Test Plans can be scoped to a sprint, to a release, to a certain type of tests, or to any criteria teams decide on.

In this scenario, Tests are added to the Test Plan and then Test Executions are created for some of the Tests within the Test Plan. Subsequent Test Executions are mostly created for the Tests whose last execution was unsuccessful (i.e., "failed"), or for all of them if a new code revision has been made meanwhile.


What's the process to retest in general?

The process to retest depends on the overall software development process, including testing, that is in place.

A typical flow would be as follows:

  1. A Test is executed in the context of a given Test Execution issue; in other words, a Test Run is created and executed.
  2. During the execution of the Test Run, a defect is reported (e.g., a Bug issue is created)
    1. the defect usually, and automatically, contains all the steps until the step that failed
    2. this defect is internally linked to the Test Run
    3. an issue link is created between the defect and the Test issue
    4. an issue link is created between the defect and the covered issue (e.g., "requirement", Story)
  3. Development makes a fix attempt, and marks the defect issue on a specific workflow status (e.g., "Fixed")
  4. The tester schedules another run of the previous Test by creating a Test Execution containing that Test. To achieve this, the tester would either create an ad hoc Test Execution right from the Test issue or a planned Test Execution (i.e., a Test Execution linked to an existing Test Plan)
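
A Test Execution is itself a Jira issue, so step 4 can in principle also be done programmatically through Jira's standard REST API (POST /rest/api/2/issue). The sketch below only builds the request payload; the project key, summary, and the "Test Execution" issue type name are assumptions that may differ per instance, and associating Tests with the execution typically requires Xray's own REST API, which is omitted here.

```python
import json

def build_test_execution_payload(project_key, summary, description=""):
    """Build the JSON payload for Jira's create-issue endpoint (POST /rest/api/2/issue).

    Assumes the Xray-provided issue type is named "Test Execution";
    the actual name may vary per Jira instance.
    """
    return {
        "fields": {
            "project": {"key": project_key},
            "summary": summary,
            "description": description,
            "issuetype": {"name": "Test Execution"},
        }
    }

# Hypothetical example: a new execution to retest after a fix attempt.
payload = build_test_execution_payload("CALC", "Retest after fix of CALC-123")
print(json.dumps(payload, indent=2))
```

Sending this payload (with proper authentication) would create an empty Test Execution; the Tests to run would then be added through Xray-specific means.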

What's the process to retest a defect?

Was the defect created (i.e., reported) during the execution of an Xray Test (i.e., on a Test Run), or was it created in Jira as you would create any other issue type?

Not all defects come out as a result of testing with Xray or another test management tool. That changes how we retest the defect, because the defect may or may not have the context we need to test it right away.

Retesting a defect reported while testing using Xray

In this scenario, we're assuming that a defect was created from the execution screen (i.e., the Test Run details page), usually of a manual scripted test case. In this case, defect(s) could either be linked globally to the Test Run or to a specific test step of that Test Run.

Depending on Xray configuration, defects can be linked automatically and explicitly (i.e., using issue links) to the related Test issue, Test Execution issue, and/or requirement issue.

Defects created during testing of manual scripted test cases (i.e., "Manual" Tests) will have all the steps of the related Test until the step that failed; this behaviour depends on a specific configuration.

With the information on the defect, it should be easy to check it by retesting it. To do that, we can perform the same test. We can find the linked Test, for example using the issue link mentioned earlier or using the Traceability report, and schedule a new execution for it.

Retesting a defect reported directly, without using Xray

In this scenario, we're assuming that someone created a defect (e.g., a Bug issue) directly in Jira, perhaps because a customer or the support team reported it.

Therefore, there is no Test or Test Execution linked to it.

Maybe initially the defect was found without a formal test. In any case, our goal is to assess whether the defect still exists or whether it was fixed in the meantime, and for that we need to test it.

In Xray, issue types configured to be handled as defects, such as Bug, can also be configured to be coverable by tests, like requirements (more info). We can then create a Test right from the defect issue screen, and proceed as usual.


Tips

Retest the same Tests

Context

  • we may have performed one execution of tests, using a Test Execution, for a given build, and now want to run the same tests for a newer build

How to

  • Clone an existing Test Execution issue; a new Test Execution will be created having the same Tests but without recorded test results

Retest the tests currently failing, or in a given status, for some requirement/covered item

Context

  • The tester aims to retest the tests that are currently failing, in the context of some requirement

How to

  1. From the requirement/covered item issue screen, on the Test Coverage panel select the Execute > With status action, and pick the status (e.g., "FAIL")
  2. Fill out the fields of the new Test Execution to be created

Retest the tests currently failing, or in a given status, in a given Test Plan

Context

  • The tester aims to retest the tests currently failing in the context of some Test Plan scoped for a Sprint, or a release, for example.

How to

  1. On the Test Plan issue screen, create a new Test Execution just for the Tests whose last result in that Test Plan was a given one.

Retest the tests that failed, or that were reported in a given status, in a given Test Execution

Context

  • The tester aims to retest the tests that failed on a previous Test Execution; this Test Execution might not be linked to a Test Plan.

How to

  1. Create a new Test Execution
  2. Add the Tests using the testExecutionTests JQL function

    1. Sample JQL
      issue in testExecutionTests("CALC-310","FAIL")
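
Such JQL queries can also be run outside the Jira UI. As a rough sketch (assuming Jira's standard issue search endpoint, /rest/api/2/search, and a hypothetical base URL; authentication is omitted), one could build the search request like this:

```python
from urllib.parse import urlencode

def build_search_url(base_url, jql, max_results=50):
    """Build a GET URL for Jira's issue search endpoint (/rest/api/2/search).

    base_url is the Jira server root, e.g. "https://jira.example.com"
    (hypothetical). Authentication headers are omitted from this sketch.
    """
    query = urlencode({"jql": jql, "maxResults": max_results})
    return f"{base_url}/rest/api/2/search?{query}"

# Tests that failed in Test Execution CALC-310, as in the sample above:
url = build_search_url(
    "https://jira.example.com",
    'issue in testExecutionTests("CALC-310","FAIL")',
)
print(url)
```

The same helper would work for the other Xray JQL functions mentioned in this page, since they are all regular JQL from Jira's point of view.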

Finding the tests that failed, for some requirement/covered item

Context

  • The tester aims to know which tests are failing, in the context of some requirement

How to

  1. From the requirement/covered item issue screen, we can filter by failing tests
  2. We can also use the Overall Requirement Coverage report, and filter by the requirements that are NOK, clicking on the respective bar.
    1. Tests that failed for the requirement we are interested in can be found in the table that appears below the chart. The count is a hyperlink that redirects us to the Issue navigator/search page.

Finding the tests that failed, or in a given status, in a given Test Plan

Context

  • Multiple iterations may have been performed (i.e., Test Executions) in the scope of a Test Plan, and a tester aims to find which tests are still currently failing.

How to

  1. Use the testPlanTests JQL function.

    Sample JQL
    issue in testPlanTests("DEMO-10","FAIL")
  2. ... or open the Test Plan issue, and filter by the status (e.g., "FAIL")
    1. Note: then you can easily schedule an execution for these Tests.
  3. ... or open the Test Plan Board and filter by the status (e.g., "FAIL").

Finding the tests that failed, or in a given status, in a given Test Execution

Context

  • The tester wants to obtain the Tests that failed on a previous "iteration" / Test Execution. 

How to

  1. Use the testExecutionTests JQL function. This will allow you to obtain the issues on the Issue navigator/search page.
    1. Sample JQL
      issue in testExecutionTests("DEMO-10","FAIL")
  2. ... or click on the totals by status, under the Overall Execution Status bar. This will allow you to obtain the issues on the Issue navigator/search page.
  3. ... or use the filter on the Tests panel within the Test Execution issue screen.

Finding the tests that failed and had defects created

Context

  • The tester wants to obtain the Tests that failed on a previous "iteration" / Test Execution, that had defects reported.

How to

  1. Use the testExecutionTests JQL function. This will allow you to obtain the issues on the Issue navigator/search page.
    1. Sample JQL
      issue in testExecutionTests("DEMO-10","FAIL",,true)

Finding defects linked to a Test or a list of Tests

Context

  • defects have been created while running Tests; a tester or a team member wants to analyze them.

How to

  1. On the Test issue screen, within the Links section.
  2. ... or from the execution screen of a given Test Run, within the Links section.
  3. ... or using the defectsCreatedDuringTesting JQL function. This will allow you to obtain the issues on the Issue navigator/search page.
    1. Sample JQL
      issue in defectsCreatedDuringTesting("TEST-123")

Finding defects linked to a story/requirement

Context

  • defects have been created while running Tests that cover a requirement; a tester or a team member wants to analyze them

How to

  1. Use the defectsCreatedForRequirement JQL function. This will allow you to obtain the issues on the Issue navigator/search page.
    1. Sample JQL
      issue in defectsCreatedForRequirement("REQ-123")

Finding prior results (i.e., Test Runs) of a Test

Context

  • The tester wants to see previous Test Runs of the Test, to look at past findings that may provide hints about a problem found recently, or about a problem that happened before and now no longer seems to occur.

How to

  1. Open the Test issue
  2. Go to the Test Runs panel
  3. Use columns and filters to see the relevant Test Runs

Finding the Test Run when a given Defect was created

Context

  • The tester wants to find the details of the Test Run in which a given defect was created.

How to

  1. Open the Traceability Report.
  2. In the filter, choose the covered issue (e.g., story) key.
  3. In the Defects column, look for the defect issue key and find the corresponding Test Run in the column on the left.

Creating specific status for retesting purposes

Context

  • While testing, some problems were found, and the tester wants to mark, in a visible way, that the test run needs to be retested.

How to

  1. On Xray global settings, create a custom test status such as "RETEST".
  2. Whenever running the tests, mark the respective Test Runs with the previous status if you aim to retest them later on.

Ensuring the requirement(s) are OK after retesting

Context

  • A tester reruns one or more tests and wants to assess if the related requirement(s) are OK.

How to

  1. On each Test, go to the related requirement using the issue link.
  2. On the requirement issue screen, analyze the coverage status, which takes into account the latest results of the related Tests.