Quality gate
The quality gate is one of the Squash components. It may be used to fine-tune the conditions for success or failure of a pipeline that contains a test execution stage. The quality gate makes it possible to define evaluation rules that specify an expected success threshold and a test evaluation scope. For instance, the user can cause the pipeline to fail at the test stage if less than 90% of tests of very high importance are successful. To do this, a definition file must be provided to the quality gate service.
Quality gate definition file
This file must be in YAML format (`.yaml` or `.yml`) and define at least one quality gate containing at least one rule. Here is an example of a basic definition:
```yaml
qualitygates:
- name: my.quality.gate
  rules:
  - name: JUnit tests
    rule:
      scope: (test.technology=='junit') && (test.importance=='VERY_HIGH')
      threshold: 95%
      failure-status: [failure]
```
In this example, only one quality gate, `my.quality.gate`, is specified. It contains one rule, `JUnit tests`, which applies to the JUnit tests of very high importance. The quality gate is passed if at least 95% of these tests are evaluated as successful.

Only the `.rules.name` and `.rule.failure-status` properties are optional. If a rule has no name, a UUID will be provided instead. If `failure-status` is omitted, it will default to `['failure', 'error', 'blocked']`.
A definition file may define several quality gates and a quality gate may contain as many rules as needed.
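For instance, a single file might look like the following sketch; the gate and rule names here are made up for illustration:

```yaml
qualitygates:
- name: smoke.gate
  rules:
  - name: All tests
    rule:
      scope: 'true'
      threshold: 100%
- name: regression.gate
  rules:
  - name: Very high importance
    rule:
      scope: test.importance=='VERY_HIGH'
      threshold: 95%
  - name: Other tests
    rule:
      scope: test.importance!='VERY_HIGH'
      threshold: 80%
```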
Threshold and scope definition
The rule threshold is a percentage between 0% and 100%.
The rule scope makes use of expressions and functions that are described in the OpenTestFactory documentation.
The quality gate uses the orchestrator `test` context. Its properties are detailed in the table below. In this context, users can also refer to test plan (iteration or test suite) properties through the `test.collection` property, as well as access datasets or CUFs through `test.data`.
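As an illustrative sketch (the iteration name `Sprint 12` and the value of the `TAG` custom field are made up), a rule scope may combine several of these properties:

```yaml
scope: (test.collection.type=='iteration') && contains(test.collection.path, 'Sprint 12') && (test.data.TC_CUF_TAG=='SMOKE')
```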
Older Versions of Squash TM
All of these properties are available for Squash TM 6.0 or later. For older versions, only `test.technology`, `test.uses`, `test.runs-on`, and `test.job` are usable.
Legend
💎 indicates an Ultimate component or feature. An overview of the Premium and Ultimate features is available here. To benefit from these or to ask for more information, check our website or contact us.
Property | Type | Description | Values
---|---|---|---
test | | |
`test.technology` | string | Test technology. | cucumber, cucumber5, cypress, junit, playwright, postman, robotframework, skf, soapui, agilitest💎, katalon💎, ranorex💎, uft💎
`test.uses` | string | Action used to execute the test. | See providers actions in the OpenTestFactory documentation.
`test.runs-on` | object | Execution environment tags. | Use contains() function to apply filtering.
`test.managed` | boolean | Indicator of a test managed by a test referential. | true if and only if the test is managed by Squash TM (i.e. it is launched during the execution of a Squash TM test case to which it is linked).
`test.job` | string | Name of the job running the test execution. |
`test.name` | string | Test case name in Squash TM. | See the respective field in the "Test Cases" workspace.
`test.technology-name` | string | Automated test technology, as displayed in Squash TM. | See the "Automated test technology" dropdown list in the test case "Automation" block.
`test.reference` | string | Automated test case reference in Squash TM. | See the "Automated test reference" field in the test case "Automation" block.
`test.importance` | string | Test case importance in Squash TM. | VERY_HIGH, HIGH, MEDIUM, LOW
`test.nature` | string | Test case nature in Squash TM. | NAT_UNDEFINED, NAT_FUNCTIONAL_TESTING, NAT_BUSINESS_TESTING, NAT_USER_TESTING, NAT_NON_FUNCTIONAL_TESTING, NAT_PERFORMANCE_TESTING, NAT_SECURITY_TESTING, NAT_ATDD
`test.type` | string | Test case type in Squash TM. | TYP_UNDEFINED, TYP_COMPLIANCE_TESTING, TYP_CORRECTION_TESTING, TYP_REGRESSION_TESTING, TYP_EVOLUTION_TESTING, TYP_END_TO_END_TESTING, TYP_PARTNER_TESTING
`test.path` | object | Test case path in Squash TM ("Test Cases" workspace). | Use contains() function to apply filtering.
test.collection | | |
`test.collection.path` | object | Test plan (iteration or test suite) path in Squash TM ("Campaigns" workspace). | Use contains() function to apply filtering.
`test.collection.type` | string | Test plan type (iteration or test suite) in Squash TM. | iteration, test suite
`test.collection.uuid` | string | Test plan UUID in Squash TM. | See the "UUID" field of the test iteration or the test suite "Information" block.
test.data | | |
`test.data.DSNAME` | string | Dataset name in Squash TM. | See the "NAME" field of the test case "Parameters and Datasets" block.
`test.data.DS_{param_name}` | string | Parameter name in Squash TM. | As defined in the test case "Parameters and Datasets" block.
`test.data.TC_EXECUTION_ID` 💎 | string | Execution ID in Squash TM. Only available with Squash TM 8.0 or later. |
`test.data.TC_REFERENCE` | string | Test case reference in Squash TM. | See the "Test case reference" field above the test case "Information" block.
`test.data.TC_UUID` | string | Test case UUID. |
`test.data.TC_CUF_{CUF_CODE}` | string | Test case custom user field in Squash TM. |
`test.data.TS_CUF_{CUF_CODE}` 💎 | string | Test suite custom user field in Squash TM. |
`test.data.IT_CUF_{CUF_CODE}` 💎 | string | Test iteration custom user field in Squash TM. |
`test.data.CPG_CUF_{CUF_CODE}` 💎 | string | Test campaign custom user field in Squash TM. |
Apply the quality gate to a workflow
Provide the definition file to the service
To apply a quality gate to a workflow, the user must first provide the definition file to the service. There are two ways to do it:
1) Load the file when launching the orchestrator. In this case, just add two parameters to the `run` command:

- the definition file mounting point on the orchestrator's docker image;
- the `QUALITYGATE_DEFINITIONS` environment variable, which must contain the definition file path on this image.
```bash
docker run ... \
  -v /path/to/qg_def/my_qualitygate.yaml:/app/qualitygate/my_qualitygate.yaml \
  -e QUALITYGATE_DEFINITIONS=/app/qualitygate/my_qualitygate.yaml \
  ...
```

```batch
docker run ... ^
  -v d:\path\to\qg_def\my_qualitygate.yaml:/app/qualitygate/my_qualitygate.yaml ^
  -e QUALITYGATE_DEFINITIONS=/app/qualitygate/my_qualitygate.yaml ^
  ...
```

```powershell
docker run ... `
  -v d:\path\to\qg_def\my_qualitygate.yaml:/app/qualitygate/my_qualitygate.yaml `
  -e QUALITYGATE_DEFINITIONS=/app/qualitygate/my_qualitygate.yaml `
  ...
```
Two quality gates are always provided when the service is launched. The `strict` quality gate evaluates all executed tests with a 100% threshold and the default `failure-status` values. The `passing` quality gate has a 0% threshold and will be successful regardless of test execution results (a usage sketch is given below, after the warning).
2) Specify the definition file via the `--using` option of the `opentf-ctl get qualitygate` command (detailed below).
Warning
If the definition file is provided using the `get qualitygate` command, only the quality gates defined in this file can be used. The service-level quality gates become unavailable.
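As a quick sketch, and assuming no user definition file is passed with `--using`, the two built-in gates can be requested by name with the `get qualitygate` command described in the next section:

```bash
# Evaluate a completed workflow against the built-in gates
# (the workflow UUID is the one used as an example further below)
opentf-ctl get qualitygate a13f0572-b23b-40bc-a6eb-a12429f0143c --mode strict
opentf-ctl get qualitygate a13f0572-b23b-40bc-a6eb-a12429f0143c --mode passing
```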
Evaluate the execution result
To evaluate a workflow execution result, you need to use the
opentf-ctl get qualitygate
command from the orchestrator tools.
The opentf-ctl get qualitygate {workflow_id} {options}
command evaluates a workflow using
all the rules of the quality gate specified by the user. This command returns
the evaluation result for each rule and the general workflow evaluation result:
```text
opentf-ctl get qualitygate a13f0572-b23b-40bc-a6eb-a12429f0143c --mode my.quality.gate
RULE,RESULT,TESTS_IN_SCOPE,TESTS_FAILED,TESTS_PASSED,SUCCESS_RATIO
JUnit tests,FAILURE,50,10,40,80.0%
Workflow a13f0572-b23b-40bc-a6eb-a12429f0143c failed the quality gate using mode my.quality.gate.
```
`{workflow_id}`, the UUID of the evaluated workflow, is the only mandatory parameter of the command.
The command has the following options:

- `--mode` or `-m`: the quality gate name.
- `--using` or `-u`: the user-provided definition file path.
- `--output wide` or `-o wide`: adds two columns to the command output, `THRESHOLD` and `SCOPE`.
Info
If you want to specify the columns displayed in the command output, you may use the `--output custom-columns` option: see the OpenTestFactory documentation.
When the definition file is provided via the `--using` option, the `--mode` option is mandatory. If the definition file is loaded at the orchestrator level and the `--mode` option is not specified, the `strict` quality gate will be applied by default.
Each rule returns `FAILURE`, `SUCCESS` or `NOTEST` (if no test matching the rule scope has been found). The general result is `FAILURE` ("workflow failed the quality gate") if at least one rule has failed, and `SUCCESS` ("workflow passed the quality gate") if all the rules return `SUCCESS`. If all the rules return `NOTEST`, the general result is also `NOTEST` ("workflow contains no test matching quality gate scopes").
The `get qualitygate` command return codes are:

- `0` if the workflow passed the quality gate or the general result is `NOTEST`;
- `101` if the specified workflow is still running;
- `102` if the quality gate failed for the specified workflow.
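A minimal shell sketch of acting on these return codes (it assumes `WORKFLOW_ID` has already been set, for instance by `opentf-ctl run workflow ... --watch` as in the GitLab examples further below):

```bash
opentf-ctl get qualitygate "$WORKFLOW_ID" --mode strict
case $? in
  0)   echo "Quality gate passed (or no test matched the scopes)" ;;
  101) echo "Workflow is still running, try again later" ;;
  102) echo "Quality gate failed" ; exit 1 ;;
esac
```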
Examples
Apply a user-defined quality gate
Let's take a pipeline that executes a Squash TM iteration containing Selenium and Robot Framework tests. We want the pipeline to succeed in the following case:
- at least 90% of `Selenium`-tagged tests of very high and high importance are successful, and
- at least 75% of Robot Framework tests that do not use the `API tests` dataset are successful.
This quality gate is not specified in the orchestrator-level quality gate definition file.
In this scenario, the user must first create a custom definition file:
```yaml
qualitygates:
- name: custom.quality.gate
  rules:
  - name: Selenium tests
    rule:
      scope: (test.data.TC_CUF_TAG=='Selenium') && ((test.importance=='VERY_HIGH') || (test.importance=='HIGH'))
      threshold: 90%
      failure-status: [failure]
  - name: RobotFramework
    rule:
      scope: (test.technology=='robotframework') && (test.data.DSNAME!='API tests')
      threshold: 75%
      failure-status: [failure]
```
Information
Instead of the `((test.importance=='VERY_HIGH') || (test.importance=='HIGH'))` condition, you may also use `contains(fromJSON('["HIGH", "VERY_HIGH"]'), test.importance)`.
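For instance, the `Selenium tests` rule above could equivalently be written as follows (same semantics, only the importance condition changes):

```yaml
- name: Selenium tests
  rule:
    scope: (test.data.TC_CUF_TAG=='Selenium') && contains(fromJSON('["HIGH", "VERY_HIGH"]'), test.importance)
    threshold: 90%
    failure-status: [failure]
```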
Assuming the user saves the file as `custom_quality_gate.yaml`, s/he must then pass its path to the `opentf-ctl get qualitygate` command:
```bash
opentf-ctl get qualitygate {workflow_id} --mode custom.quality.gate --using /home/user/custom_quality_gate.yaml --output wide
```
In this example, the `--output wide` option is used, so the command output includes not only the execution information but also the user-defined threshold and scope:
```text
RULE,RESULT,TESTS_IN_SCOPE,TESTS_FAILED,TESTS_PASSED,SUCCESS_RATIO,THRESHOLD,SCOPE
Selenium tests,SUCCESS,20,2,18,90.0%,90%,(test.data.TC_CUF_TAG=='Selenium') && ((test.importance=='VERY_HIGH') || (test.importance=='HIGH'))
RobotFramework,SUCCESS,40,5,35,87.5%,75%,(test.technology=='robotframework') && (test.data.DSNAME!='API tests')
Workflow {workflow_id} passed the quality gate using mode custom.quality.gate.
```
Apply a rule to all the workflow tests
To apply a rule to all the tests executed in a workflow, just set the scope value to `'true'`:
```yaml
qualitygates:
- name: my.quality.gate
  rules:
  - name: All tests
    rule:
      scope: 'true'
      threshold: 85%
```
Using paths in the rule scope
The quality gate allows the use of Squash TM test case paths, as well as iteration and/or test suite paths, within the rule scope. The `contains(search, item)` function must be applied in this case, `search` standing for the property and `item` for the respective path element name.
The following rule evaluates all the test cases in the `TNR_main` and `TNR_smoketest` folders of the `WebApp` project in the Squash TM "Test Cases" workspace:
```yaml
qualitygates:
- name: my.quality.gate
  rules:
  - name: TNR
    rule:
      scope: (contains(test.path, 'TNR_main') || contains(test.path, 'TNR_smoketest')) && contains(test.path, 'WebApp')
      threshold: 95%
```
To add an iteration and/or test suite path in the rule scope, the `contains()` function must be used with the `test.collection.path` property. For instance, the following rule will be applied to all the test cases of the `Dropdowns` iteration:
```yaml
qualitygates:
- name: my.quality.gate
  rules:
  - name: Dropdowns
    rule:
      scope: (test.collection.type=='iteration') && contains(test.collection.path, 'Dropdowns')
      threshold: 100%
```
Using the quality gate in a Jenkins pipeline
It is possible to apply the quality gate in a Jenkins pipeline using the `opentf-ctl get qualitygate` command. The user must have access to a Jenkins instance that is linked to the orchestrator by the Jenkins plugin. S/he must also make sure that the orchestrator tools (`opentf-tools`) are available in the Jenkins execution environment.
Info
See respective chapters of this documentation for plugin installation, plugin configuration, and calling the orchestrator from Jenkins.
When the communication between Jenkins and the orchestrator has been established, call the quality gate from the pipeline using the completed workflow UUID.
Here is an example of a pipeline that launches the `squash_sample_tests.yaml` workflow and applies the orchestrator-level quality gate `sample.quality.gate` to the completed workflow:
```groovy
pipeline {
    agent any
    environment {
        WORKFLOW_ID = ''
    }
    stages {
        stage('Run OTF workflow') {
            steps {
                script {
                    WORKFLOW_ID = runOTFWorkflow(
                        workflowPathName: 'squash_sample_tests.yaml',
                        workflowTimeout: '300S',
                        serverName: 'Orchestrator',
                        jobDepth: 2,
                        stepDepth: 3,
                        dumpOnError: true
                    )
                }
            }
        }
        stage('Apply quality gate') {
            steps {
                script {
                    def qg_command = "opentf-ctl get qualitygate $WORKFLOW_ID --mode sample.quality.gate"
                    def qg_result = sh(returnStdout: true, script: qg_command)
                    echo "$qg_result"
                }
            }
        }
    }
}
```
This pipeline uses the `WORKFLOW_ID` environment variable, which can be passed between the pipeline stages. Its value (the completed workflow UUID) is retrieved from the `runOTFWorkflow()` function and passed to the `get qualitygate` command. The command output is then passed to another variable (`qg_result`) and displayed in the console.
If the quality gate fails, a `102` error code is returned and the pipeline fails. If the workflow passes the quality gate or there are no tests matching the evaluation scopes, the command returns `0` and the pipeline successfully completes (or continues if there are other stages).
The user can choose not to stop the pipeline even if the workflow fails the quality gate. In this case, the shell script must be executed with the `returnStatus: true` option instead of the `returnStdout: true` option. The pipeline step will return the script return code, which can be handled in the following steps or stages.
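A minimal sketch of such a stage, assuming the same quality gate command as in the pipeline above (the messages are illustrative):

```groovy
stage('Apply quality gate') {
    steps {
        script {
            def qg_command = "opentf-ctl get qualitygate $WORKFLOW_ID --mode sample.quality.gate"
            // returnStatus: true makes sh() return the exit code instead of failing the step
            def qg_status = sh(returnStatus: true, script: qg_command)
            if (qg_status == 102) {
                echo "Quality gate failed, but the pipeline continues"
            } else if (qg_status == 101) {
                echo "Workflow is still running"
            }
        }
    }
}
```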
Using the quality gate in a GitLab pipeline
It is also possible (and even recommended) to apply the quality gate to a workflow executed in a GitLab pipeline. A complete guide on the Squash Orchestrator integration with the GitLab CI is available in the OpenTestFactory documentation.
Here is an example of a `.gitlab-ci.yml` file that runs the `my_workflow.yaml` workflow and applies the quality gate from a user-provided definition file each time the user makes changes to the project:
```yaml
default:
  image: python:3.12

stages:
  - test

opentf-workflow:
  stage: test
  script:
    - pip install opentf-tools
    - RESULT=$(opentf-ctl run workflow .opentf/workflows/my_workflow.yaml --watch)
    - WORKFLOW_ID=$(echo $RESULT | head -n 1 | awk -F ' ' '{print $2}')
    - opentf-ctl get qualitygate $WORKFLOW_ID --mode sample.quality.gate --using .opentf/qualitygates/my_quality_gate.yaml
```
The `pip install` command ensures that the most recent version of the orchestrator tools is available on the runner, and the `opentf-ctl run workflow` command runs the workflow with the `--watch` option to follow the execution. The UUID of the workflow is retrieved from the command output and stored in the `WORKFLOW_ID` variable.

Finally, the quality gate `sample.quality.gate`, which is defined in the `my_quality_gate.yaml` definition file, is applied to the workflow and the `get qualitygate` command output is displayed. If the workflow fails the quality gate, the command returns a `102` error code and the pipeline fails. Without the quality gate, the pipeline would always succeed regardless of the test results.
Publishing quality gate results as a merge request note
The quality gate results can be published in a GitLab merge request as a note.
You just need to pass the required `--plugin gitlab:...` parameters to the `get qualitygate` command. The related pipeline must be a merge request pipeline.
Here is an example of a `.gitlab-ci.yml` file that runs the `my_workflow.yaml` workflow and applies the service-level quality gate at each new commit on the merge request branch. The quality gate results are then published to the merge request as a note:
```yaml
default:
  image: python:3.12

stages:
  - test

opentf-workflow:
  stage: test
  script:
    - pip install opentf-tools
    - RESULT=$(opentf-ctl run workflow .opentf/workflows/my_workflow.yaml --watch)
    - WORKFLOW_ID=$(echo $RESULT | head -n 1 | awk -F ' ' '{print $2}')
    - opentf-ctl get qualitygate $WORKFLOW_ID --mode sample.quality.gate \
        --plugin gitlab:keep-history=true \
        --plugin gitlab:token={authentication token}
  rules:
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
```
The `--plugin gitlab:keep-history` parameter is mandatory. If you set it to `true`, the history of the specified quality gate results is kept in merge request notes. When it is set to `false`, only one note will be added to the merge request and it will be updated on each quality gate evaluation.

The `--plugin gitlab:token` parameter allows you to define the project authentication token when it is necessary.
The GitLab instance, project, and merge request are by default retrieved from the GitLab predefined environment variables (namely `CI_SERVER_URL`, `CI_MERGE_REQUEST_PROJECT_ID`, and `CI_MERGE_REQUEST_IID`). You may also specify your own GitLab instance, project, and merge request using the `gitlab:server`, `gitlab:project`, and `gitlab:mr` parameters.
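A sketch of overriding these defaults follows; the server URL, project identifier, and merge request IID are placeholders, and their exact expected formats are an assumption here:

```bash
# Placeholder values: replace with your own GitLab instance, project, and merge request
opentf-ctl get qualitygate $WORKFLOW_ID --mode sample.quality.gate \
  --plugin gitlab:keep-history=true \
  --plugin gitlab:server=https://gitlab.example.com \
  --plugin gitlab:project=1234 \
  --plugin gitlab:mr=42
```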
You can also add a label with the quality gate status to the merge request. First, in the related GitLab project, you need to create three labels containing the possible quality gate statuses: `{prefix}::Passed`, `{prefix}::Failed` and `{prefix}::No test`. It is up to you to choose the prefix. Next, you have to add the `--plugin gitlab:label={prefix}` parameter to your `get qualitygate` command.
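For instance, assuming the hypothetical prefix `QG` (so the labels `QG::Passed`, `QG::Failed` and `QG::No test` exist in the project), the command could look like this:

```bash
# "QG" is a made-up label prefix used for illustration
opentf-ctl get qualitygate $WORKFLOW_ID --mode sample.quality.gate \
  --plugin gitlab:keep-history=false \
  --plugin gitlab:label=QG
```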
You can also publish the quality gate results to the issue the merge request is related to. Here is an example of a pipeline that runs the `my_workflow.yaml` workflow, applies the quality gate to this workflow, and publishes the results to the related issue. The issue IID is retrieved from the merge request description, which should contain this IID by default:
```yaml
default:
  image: python:3.12

stages:
  - test

opentf-workflow:
  stage: test
  script:
    - pip install opentf-tools
    - RESULT=$(opentf-ctl run workflow .opentf/workflows/my_workflow.yaml --watch)
    - WORKFLOW_ID=$(echo $RESULT | head -n 1 | awk -F ' ' '{print $2}')
    - MR_DATA="$(curl --header "PRIVATE-TOKEN:{authentication token}" "$CI_API_V4_URL/projects/$CI_PROJECT_ID/merge_requests/$CI_MERGE_REQUEST_IID" | sed 's/\\"//g')"
    - ISSUE_IID=$(python -c "import json; data = json.loads('$MR_DATA'); print(data.get('description'))" | head -1 | sed 's/.*#//')
    - opentf-ctl get qualitygate $WORKFLOW_ID --mode sample.quality.gate \
        --plugin gitlab:keep-history=false \
        --plugin gitlab:issue=$ISSUE_IID
  rules:
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
```
The complete list of the GitLab options of the `opentf-ctl get qualitygate` command is available in the OpenTestFactory documentation.