Problem
When you trigger test cases on a CI system from Allure TestOps, or when you request a rerun of tests for an existing launch, tests hang in the In progress status while new results are added to the launch.
What causes the duplication
There can be several reasons for test result duplication:
- the test result is received from a job different from the initial one
- the test parameters differ from the expected ones
- the environment on which the tests are run differs from the expected one
Two of these attributes of a test result (parameters and environment) are used to calculate the test result's History ID, which Allure TestOps uses to match the expected test result with the one received from CI.
History ID
The History ID in a result file is used to match the expected and the actual results in a TestOps launch. To fix the duplication, you need to understand at which point this matching breaks.
What we need to know about History ID
historyId (aka historyKey) = md5(hash, sort(values(parameters)), sort(environment))
To match re-runs, Allure TestOps calculates a hash of the arguments passed to the test method plus a hash of the environment values. How this hash is calculated is not important; what matters is that the algorithm is stable and produces the same hash for the same set of attributes.
If parameter values or environment values change from run to run (dynamic values such as hashes or date/time stamps), the result is treated as a new result, not as a re-run of the initial one.
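For illustration only, here is a minimal Java sketch of such a stable digest. This is not the actual Allure TestOps implementation: the class and method names, the pipe-joined input, and the use of MD5 over it are assumptions made for the example.

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Illustrative sketch only: a stable digest over a test identity hash,
// the sorted parameter values, and the sorted environment values.
// The real Allure TestOps algorithm is internal; the point is stability:
// the same inputs always produce the same ID.
public final class HistoryIdSketch {

    static String historyId(String testHash, List<String> parameters, List<String> environment) {
        String input = Stream.concat(
                Stream.of(testHash),
                Stream.concat(parameters.stream().sorted(), environment.stream().sorted()))
            .collect(Collectors.joining("|"));
        return md5(input);
    }

    static String md5(String input) {
        try {
            byte[] digest = MessageDigest.getInstance("MD5")
                    .digest(input.getBytes(StandardCharsets.UTF_8));
            return String.format("%032x", new BigInteger(1, digest));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        // Stable inputs -> stable ID; change any value and the ID changes.
        System.out.println(historyId("com.example.LoginTest#login",
                List.of("iOS", "123"), List.of("stand=production")));
    }
}
```

A re-run matches the initial result only if it yields the same digest, which is why any dynamic parameter or environment value breaks the matching.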
Consequences
- if during a re-run the restarted test has N as a parameter value instead of X, it is treated as a new result, not a re-run: the result we are waiting for hangs In progress, and a new one is created that is not attached to it because their History IDs differ.
- there is also a possibility that the test simply did not start (rare, but it happens).
Same with environments
- a re-run is always triggered with the same set of parameters that the initial pipeline had. If some environment parameter changed on the rerun (say, it is a dynamic one), the result is treated as a new one, not as a retry and not as the expected result.
Examples
Example 1: Initial run or re-run
- There are child pipelines in GitLab.
- Allure TestOps triggers the main pipeline and passes the environment variables to it.
- The main pipeline starts the child pipelines and does NOT pass all the environment values to them.
- The requested result expects the environment variables OS=iOS, VER=123.
- However, a result with only OS=iOS arrives.
- The test (OS=iOS, VER=123) with HistoryID1 hangs In progress.
- As a result, one more result (OS=iOS) with HistoryID2 appears in the launch.
Example 2: Dynamic parameters
- In the initial run, a parameterized test is executed with the parameter value AABBCCDDEEFF, and HistoryID1 is calculated.
- We request its re-run.
- A new result with the same parameter but the value BBAACCDCDDEEFF arrives, and HistoryID2 is calculated.
- The expected test (with HistoryID1) hangs In progress.
- The new result (with HistoryID2) is added to the launch.
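A minimal sketch of how such a dynamic parameter can appear in a JUnit 5 project that uses allure-junit5 (the test class and the UUID-based value source are hypothetical):

```java
import java.util.UUID;
import java.util.stream.Stream;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.MethodSource;

class DynamicParameterTest {

    // A fresh value is generated on every run, so allure-junit5 records a
    // different parameter value (and thus a different History ID) for the
    // initial run and for each re-run.
    static Stream<String> sessionIds() {
        return Stream.of(UUID.randomUUID().toString());
    }

    @ParameterizedTest
    @MethodSource("sessionIds")
    void checkoutWorks(String sessionId) {
        // ... test steps that use sessionId ...
    }
}
```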
Solutions
Changing environments
Such environment variables need to be removed from the project configuration.
If you still need to see this information about your tests, it is better to add it as an attachment (not as a parameter) to the test result.
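For example, with allure-java the information can be attached from inside a test via Allure.addAttachment. A sketch, assuming a JUnit 5 test and a hypothetical dynamic variable BUILD_FINGERPRINT:

```java
import io.qameta.allure.Allure;
import org.junit.jupiter.api.Test;

class EnvironmentAsAttachmentTest {

    @Test
    void login() {
        // Attachments do not participate in History ID calculation, so a
        // dynamic value attached this way does not break re-run matching.
        Allure.addAttachment("build fingerprint",
                String.valueOf(System.getenv("BUILD_FINGERPRINT")));
        // ... actual test steps ...
    }
}
```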
Dynamic parameters
Dynamic parameters can be moved to test attachments, so they won't affect History ID calculation.
Some integrations with test frameworks allow marking a parameter as "excluded", so that it does not affect the History ID calculation either (see the sketch after the list below).
Examples of such test framework integrations:
- allure-junit5: https://github.com/allure-framework/allure-java/blob/master/allure-junit5/src/main/java/io/qameta/allure/junit5/AllureJunit5.java#L88
- allure-playwright: https://github.com/allure-framework/allure-js/blob/master/packages/allure-playwright/README.md#parameters-usage
- Check your framework here: https://github.com/orgs/allure-framework/repositories
- check if it supports excluded parameters
- request the functionality in the appropriate repository if it's absent
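As a sketch, recent allure-java versions expose an Allure.parameter overload with an excluded flag; verify that your allure-java version provides it, as the overload usage and the parameter name below are assumptions for this example:

```java
import java.util.UUID;

import io.qameta.allure.Allure;
import org.junit.jupiter.api.Test;

class ExcludedParameterTest {

    @Test
    void checkout() {
        String requestId = UUID.randomUUID().toString(); // dynamic value
        // The excluded flag keeps the parameter visible in the report but
        // skips it in History ID calculation, so re-runs still match.
        Allure.parameter("requestId", requestId, true);
        // ... actual test steps ...
    }
}
```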
Example 3: Different job
If a test was initially executed in and received from Job 1 (which corresponds to pipeline #1 on the CI side), and the re-run is executed in Job 2 (which corresponds to pipeline #2 on the CI side), Allure TestOps considers these test results different results, not an initial test and its retry.
Solution
If a different pipeline is technically required, or the pipeline is selected by some logic that results in a new pipeline execution when re-running tests, then the only solution is the following:
- You need a main pipeline that triggers the other pipelines.
- You need to pass the context of the main pipeline to the other pipelines, so that Allure TestOps treats them as the main pipeline.
How to pass the context of a pipeline
Jenkins plug-in. The minimum required version of the Allure TestOps plug-in for Jenkins is 3.29.2.
Inside the withAllureUpload routine, execute the shell command
printenv | grep ALLURE_
The command will output information like the following:
ALLURE_JOB_RUN_ID=43517
ALLURE_LAUNCH_ID=65606
ALLURE_JOB_RUN_UID=residents/java-junit5-allure-example#4617
ALLURE_ENDPOINT=https://allure.url.here
ALLURE_PROJECT_ID=1111
ALLURE_EXECUTION_ID=43517
ALLURE_JOB_UID=residents/java-junit5-allure-example
ALLURE_TOKEN=aaaa5352-3bbb-4c55-958a-556bac8cc07f
ALLURE_SERVER_ID=testing
This is the context of the current (main) pipeline. All these variables and their values need to be passed to the other pipeline(s), i.e. you need to export the variables so that they are available in the context of the other pipeline.
Then, in the other pipeline, you also need to use withAllureUpload and execute the tests inside it.
allurectl
The usage of allurectl for the same purpose is described in this article. The approach works the same way: you need to pack the context and then provide the packed context as an environment variable with its value, as described in the article.