Migration from on-premises deployment to SaaS instance

Foreword

Migration is a time-consuming process if you have a lot of data stored in your on-premises instance. Please plan the downtime with the QA team beforehand.

Prerequisites

Availability of SaaS instance

The migration process requires an existing cloud instance, so please request one before starting the migration procedure.

Provide the payment information for the created instance (Administration > License > Manage Subscription) to prevent any data loss.

On-premises instance release requirements

Data migration is possible from Allure TestOps version 24 onwards; preferably, have the most recent release deployed.

Data migration support request

Create a support request.

type: Support

subject: "Migration to SaaS <instance_name>.testops.cloud for <your_company_name>". 

Add information about the release version of the instance you have deployed on-premises.

Also include the URL of the created SaaS instance the data will be restored to.

Data to be provided in the support request

Generally, before starting the migration we need to evaluate the volume of data to be transferred and minimise it.

The size of the database

Please execute the following two queries, save the output to a file, and attach it to the said support request:

Current size of the database and tables

SELECT
    schemaname AS schema,
    relname AS table_name,
    n_live_tup AS row_count,
    pg_size_pretty(pg_total_relation_size(relid)) AS total_size,
    pg_size_pretty(pg_table_size(relid)) AS table_size,
    pg_size_pretty(pg_indexes_size(relid)) AS indexes_size,
    pg_total_relation_size(relid) / 1024 / 1024 AS total_size_mb,
    pg_table_size(relid) / 1024 / 1024 AS table_size_mb,
    pg_indexes_size(relid) / 1024 / 1024 AS indexes_size_mb
FROM
    pg_stat_user_tables
ORDER BY
    total_size_mb DESC;

General size of the database

SELECT pg_database.datname AS "Database",
       pg_database_size(pg_database.datname) AS "Size_in_bytes",
       (pg_database_size(pg_database.datname) / 1024) AS "Size_in_kilobytes",
       (pg_database_size(pg_database.datname) / (1024 * 1024)) AS "Size_in_megabytes",
       (pg_database_size(pg_database.datname) / (1024 * 1024 * 1024)) AS "Size_in_gigabytes"
FROM pg_database
ORDER BY pg_database_size(pg_database.datname) DESC;
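
A minimal sketch of running both queries and saving the output with psql, assuming the queries above are saved to a file named size_queries.sql (the file name and connection details are placeholders):

psql --host=DB_HOST --username=DB_USERNAME --dbname=DB_NAME \
--file=size_queries.sql --output=db_sizes.txt

The resulting db_sizes.txt is the file to attach to the support request.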

We also need the number of test results generated by your team during the last two weeks.

You need to replace the placeholders in the following query; instructions are given after the query.

select 
to_char(to_timestamp(created_date / 1000), 'YYYY-MM-DD') as tr_day,
count(*) as cnt
from test_result
where to_timestamp(created_date / 1000) >= 'YYYY-MM-DD 00:00:00'
and to_timestamp(created_date / 1000) <= 'YYYY-NN-XX 23:59:59'
group by to_char(to_timestamp(created_date / 1000), 'YYYY-MM-DD')
order by tr_day;

Please replace the placeholders as follows:

In the line

and to_timestamp(created_date / 1000) <= 'YYYY-NN-XX 23:59:59'

replace 'YYYY-NN-XX' as follows:

YYYY = current year, e.g. 2024,

NN = current month, e.g. 10 for October,

XX = the date of the closest past Sunday, e.g. if today is the 31st of October, replace XX with 27.

In the line

where to_timestamp(created_date / 1000) >= 'YYYY-MM-DD 00:00:00'

replace 'YYYY-MM-DD' as follows:

YYYY = current year, e.g. 2024,

MM = current month, e.g. 10 for October,

DD = the date 2 weeks back from the date selected in the previous step; in the example above, that would be the 14th of October.
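
For example, with the dates from the example above (the 31st of October 2024, closest past Sunday the 27th, two weeks back the 14th), the filled-in query would be:

select
to_char(to_timestamp(created_date / 1000), 'YYYY-MM-DD') as tr_day,
count(*) as cnt
from test_result
where to_timestamp(created_date / 1000) >= '2024-10-14 00:00:00'
and to_timestamp(created_date / 1000) <= '2024-10-27 23:59:59'
group by to_char(to_timestamp(created_date / 1000), 'YYYY-MM-DD')
order by tr_day;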

Please share the result of the executed query with us. 

Preparation for the migration

Cleaning the launches

Launches contain test result details in the database; these can consume a lot of space, and copying a database full of old results can be a very long process that increases the downtime.

Launches can be cleaned with the clean-launches tool from allure-testops-utils.

You can execute the cleaner tool on a VM or on your local workstation.

The following execution command is recommended.

docker run -e "ALLURE_ENDPOINT=http://localhost:8080" \
-e "ALLURE_USERNAME=REPLACE_WITH_YOUR_USERNAME" \
-e "ALLURE_PASSWORD=REPLACE_WITH_YOUR_PASSWORD" \
-e "PROJECT_ID=REPLACE_WITH_PROJECT_ID" \
-e "LAUNCH_FILTER=true" \
-e "LAUNCH_CREATEDBEFORE=180d 0h 0m" \
ghcr.io/eroshenkoam/allure-testops-utils clean-launches

You need to supply the username of a user with at least write permissions in the project, their password, and the project ID.

You need to execute the tool for each project (see the sketch below).

The parameter -e "LAUNCH_CREATEDBEFORE=180d 0h 0m" specifies that all launches older than 180 days must be deleted.
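
A minimal sketch of running the tool for several projects in a row, assuming hypothetical project IDs 101, 102 and 103 (replace them with your own, along with the credentials):

for PROJECT_ID in 101 102 103; do
  docker run -e "ALLURE_ENDPOINT=http://localhost:8080" \
    -e "ALLURE_USERNAME=REPLACE_WITH_YOUR_USERNAME" \
    -e "ALLURE_PASSWORD=REPLACE_WITH_YOUR_PASSWORD" \
    -e "PROJECT_ID=${PROJECT_ID}" \
    -e "LAUNCH_FILTER=true" \
    -e "LAUNCH_CREATEDBEFORE=180d 0h 0m" \
    ghcr.io/eroshenkoam/allure-testops-utils clean-launches
done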

Cleaning the test result artefacts

Transferring test result artefacts made up of a big quantity of small files can take a considerable amount of time and inevitably leads to longer downtime, during which your team won't be able to use either instance – the one you have deployed locally (source) or the one you want to move to (destination). So, a cleanup on the source system is required before the migration process.


Check out these scripts, which could help you save a little time on the creation of the cleanup rules.

  1. Enable aggressive cleanup rules, as the storage of a SaaS instance is limited to 60 gigabytes of data.
    • Consult with your team regarding their needs for historical data; if there are no specific requirements, then set up the following rules on the global level.
    • Rules must include the following (creating global rules for all projects would be sufficient):
      • Passed tests: scenarios, fixtures, attachments. Preferably keep the last 48 hours.
      • Failed tests: scenarios, fixtures, attachments. Preferably keep the last 48 hours.
      • Unknown tests: scenarios, fixtures, attachments. Preferably keep the last 24 hours.
      • Broken tests: scenarios, fixtures, attachments. Preferably keep the last 48 hours.
  2. Manually trigger the cleanup process by executing the following API calls in order:
    • access the Swagger page (ALLURE-URL/swagger-ui.html) and locate the section "Cleanup controller"
    • execute the three methods listed there one after another, in order

  • If you have a huge amount of artefacts, the first two commands could result in an error in the UI, but the process will still be completed on the backend.
  • You need to wait for each command to complete before executing the next one.
  • If the cleanup process has never been used before, it can trigger a huge workload on S3 and cripple performance, so we highly recommend running it during off-peak hours.
  • The deletion of the artefacts will take considerable time, especially if you haven't had any cleanup rules before. You can check the progress by executing the SQL command against the report service database, see the command here.

As soon as the queue is zero, you can proceed to the next steps.


Create the database dump for the migration

At this point, your Allure TestOps instance must already be upgraded to at least version 4.26.5, or preferably to version 5, to decrease the downtime.

Database dump creation

  1. Stop all activity on the Allure TestOps instance (stop the gateway service or load balancer, or turn off the routes towards Allure TestOps on your ingress controller).
  2. Stop the testops service.
  3. Create a dump of the testops database as follows (you need to use pg_dump version 16 or later):
    pg_dump --file=NAME_OF_DB.dump --host=DB_HOST \
      --username=DB_USERNAME --dbname=DB_NAME \
      --compress=9 --format=c --schema=public --verbose --blobs \
      --no-owner --no-privileges --no-comments -W
  4. Create an archive with a full backup of the artefacts storage – a full archive of the bucket. Please password-protect the data.
    • If the total size of the bucket is more than 4 gigabytes, it's advised to create a multi-volume archive to avoid re-uploading and re-downloading the whole archive in case of interruptions (see the sketch after this list).
    • Consult with support on how to locate the artefacts storage if you have any concerns.
  5. Upload the gathered data to a cloud storage of your choice.
  6. Add links to the uploaded files, along with the passwords (if any), to the created support request.
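
A minimal sketch of verifying the dump and creating the archive from steps 3 and 4, assuming the 7-Zip command-line tool (7z) is available and that ./bucket-data is a local copy of your artefacts bucket (the archive name and directory are placeholders):

# Sanity check: confirm the dump is readable before uploading it anywhere
pg_restore --list NAME_OF_DB.dump | head

# Create a password-protected archive split into 4-gigabyte volumes;
# the bare -p flag makes 7z prompt for the password interactively
7z a -p -v4g artefacts-backup.7z ./bucket-data

The resulting volumes (artefacts-backup.7z.001, artefacts-backup.7z.002, and so on) can then be uploaded individually in step 5.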

Data restoration

The data you provided will be used by our SaaS maintenance team to restore your instance to the cloud and then your instance will be upgraded to the most recent release of Allure TestOps. 

Timeline

The data restoration usually takes up to 24 hours, so you need to plan for full downtime of Allure TestOps.

In case of a big amount of artefacts, the upload and download process could take more than 24 hours, depending on the network speed.

As soon as the databases are restored and the files are copied, you will be informed in the created support request.

