Historical Launches data
- metadata for each test result
- metadata for each launch
- scenario data
- fixtures data
- links to the artefacts on S3
Built-in Cleanup procedures delete artefacts and the related records in the database
What historical data is not deleted
Launches data
The tool needs to be used with caution: deleting a launch is a heavy operation against the database, and during high-load periods it could dramatically degrade performance, potentially to the point of unresponsiveness.
Deleted data
After launches are deleted, the removed records remain in the PostgreSQL tables as dead rows until a vacuum reclaims the space. To check the table statistics, execute:
select schemaname, relname, n_live_tup, n_dead_tup,
       last_vacuum, last_autovacuum, last_analyze, last_autoanalyze
from pg_stat_all_tables
where schemaname = 'public'
order by n_dead_tup desc;
n_dead_tup - the number of dead rows
n_live_tup - the number of live (actually filled) rows
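For a quicker overview, a query along these lines can also show which tables carry the highest proportion of dead rows. This is only a sketch; the dead_pct alias is an arbitrary name, not a column of pg_stat_all_tables:

-- top 10 tables by dead-row count, with the dead-row percentage
select relname,
       n_live_tup,
       n_dead_tup,
       round(n_dead_tup::numeric / nullif(n_live_tup + n_dead_tup, 0) * 100, 1) as dead_pct
from pg_stat_all_tables
where schemaname = 'public'
order by n_dead_tup desc
limit 10;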
If the response shows a lot of dead rows, then you will need to perform a VACUUM operation:
Option 1. Autovacuum
Execute:
select name, setting from pg_settings where name in ('autovacuum_analyze_threshold', 'autovacuum_analyze_scale_factor');
Calculate the number of changed rows at which autovacuum will trigger an analyse.
Formula for determining when an analyse is needed:
autovacuum_analyze_threshold + (autovacuum_analyze_scale_factor * the number of rows in the table)
For example:
The values of the parameters are
autovacuum_analyze_scale_factor = 0.05, autovacuum_analyze_threshold = 50
50 + (0.05 * 13833401) = 691720 (5%)
This means that autovacuum will run an analyse after 691,720 rows have been changed.
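To compute this trigger point for each table directly from the live settings, a sketch like the following can be used (it assumes the global settings are not overridden per table through storage parameters):

-- estimated number of changed rows that will trigger autoanalyze, per table
select relname,
       n_live_tup,
       round(current_setting('autovacuum_analyze_threshold')::numeric
             + current_setting('autovacuum_analyze_scale_factor')::numeric * n_live_tup) as analyze_after_rows
from pg_stat_all_tables
where schemaname = 'public'
order by n_live_tup desc;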
Option 2. Manual vacuum
Execute:
vacuum <tablename>;
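A slightly more verbose variant reports progress and refreshes the planner statistics in the same pass. The launch table name below is only an assumption for illustration; substitute the table that accumulated dead rows in the previous step:

-- print per-table progress and update statistics in one pass
vacuum verbose analyze launch;
-- VACUUM FULL also returns disk space to the operating system, but it
-- takes an exclusive lock on the table, so avoid it during high load:
-- vacuum full launch;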