Database applications are living, [(sometimes) fire-]breathing systems that behave in unexpected ways. As a purveyor of the pgCraft, it’s important to understand how to interrogate a Postgres instance and learn about the workload. This is critical for lots of reasons:
- Understanding how the app is using the database
- Understanding what risks there are in the data model
- Designing a data lifecycle management plan (i.e. partitions, archiving)
- Learning how the ORM is behaving toward the database
- Building a VACUUM strategy
There are lots of other reasons this data is useful, but let’s look at some examples and get down to a few scripts you can use to pull it all together.
First, take a visit to the pgCraftsman’s toolbox to find an easy-to-use snapshot script. This script is designed to be completely self-contained. It will run at whatever frequency you’d like and will save snapshots of the critical monitoring tables right inside your database. There’s even a few reporting functions included to help you look at stats over time.
What to Watch
There are a number of critical tables and views to keep an eye on in the Postgres catalog. This isn’t an exhaustive list, but it’s a quick set that the toolbox script already watches:
- pg_stat_activity
- pg_locks
- pg_stat_all_tables
- pg_statio_all_tables
- pg_stat_all_indexes
- pg_stat_database
These tables and views provide runtime stats on how your application is behaving with regard to the data model. The problem with many of them is that they’re either point-in-time (like pg_stat_activity) or cumulative (pg_stat_all_tables.n_tup_ins contains the cumulative number of inserts since pg_stat_database.stats_reset). In order to glean anything useful from these runtime performance views, you should snapshot them periodically and save the results.
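For example, pg_stat_all_tables only tells you the totals since the last stats reset; to see a rate, you have to sample it twice and diff the numbers yourself. A minimal sketch using standard columns from that view:

-- cumulative counters: only meaningful when compared across two points in time
select now() as sampled_at, relname, n_tup_ins, n_tup_upd, n_tup_del, seq_scan, idx_scan
from pg_stat_all_tables
where schemaname = 'public';
-- wait a while, run it again, and subtract the first reading from the second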
I’ve seen (and built) lots of interesting ways to do this over the years, but the simplest way to generate some quick stats over time is with the pgCraftsman Toolbox script: pgcraftsman-snapshots.sql.
This approach is great, but as you can guess, a small SQL script doesn’t solve all the world’s database problems. True, this script does solve 80% of them, but that’s why it only took me 20% of the time.
Let’s say I have a workload that I know nothing about. Let’s use pgcraftsman-snapshots.sql to learn about the workload and determine the best way to deal with it:
Snapshots
In order to build actionable monitoring out of the cumulative or point-in-time monitoring views, we need to snapshot the data periodically and compare between those snapshots. This is exactly what the pgcraftsman-snapshots.sql script does. All of the snapshots are saved in appropriate tables in a new ‘snapshots’ schema.
The ‘snapshot’ function simply runs an INSERT ... SELECT from each of the monitoring views. Each row is associated with the id of the snapshot being taken (snap_id). When it’s all put together, we can easily see the number of inserts that took place in a given table between two snapshots, the growth (in bytes) of a table across snapshots, or the number of index scans against a particular index. In short, we can report on any data in any of the monitoring views we are snapshotting.
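To make the pattern concrete, here is a minimal, self-contained sketch of the idea using a throwaway demo schema. This is illustration only; the script’s own tables and its save_snap() function carry more columns and more views than this:

-- illustrative only: a stripped-down version of the snapshot pattern
create schema if not exists demo;
create table if not exists demo.snap (snap_id bigserial primary key, dttm timestamptz default now());
create table if not exists demo.snap_all_tables as
  select 0::bigint as snap_id, t.* from pg_stat_all_tables t where false;

-- register a new snap_id, then copy the view, tagging every row with it
with s as (insert into demo.snap default values returning snap_id)
insert into demo.snap_all_tables
select s.snap_id, t.* from s, pg_stat_all_tables t;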
1. Install pgcraftsman-snapshots.sql
❯ psql -h your.db.host.name -U postgres -d postgres -f pgcraftsman-snapshots.sql
SET
CREATE SCHEMA
SELECT 92
CREATE INDEX
SELECT 93
CREATE INDEX
SELECT 6
CREATE INDEX
SELECT 7
CREATE INDEX
CREATE INDEX
CREATE INDEX
SELECT 145
CREATE INDEX
SELECT 3
CREATE INDEX
SELECT 269
CREATE INDEX
CREATE INDEX
CREATE INDEX
SELECT 1
CREATE INDEX
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE INDEX
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
CREATE SEQUENCE
CREATE FUNCTION
 save_snap
-----------
         2
(1 row)

CREATE FUNCTION
CREATE TYPE
CREATE FUNCTION
CREATE TYPE
CREATE FUNCTION
CREATE TYPE
CREATE FUNCTION
In addition to installing the snapshot schema, this script takes two initial snapshots for you. You can monitor the snapshots by running:
postgres=# select * from snapshots.snap;
 snap_id |             dttm
---------+-------------------------------
       1 | 2020-10-15 10:32:54.31244-04
       2 | 2020-10-15 10:32:54.395929-04
(2 rows)
You can also get a good look at the schema:
postgres=# set search_path=snapshots;
SET
postgres=# \dt+
                                 List of relations
  Schema   |          Name          | Type  |  Owner   |    Size    | Description
-----------+------------------------+-------+----------+------------+-------------
 snapshots | snap                   | table | postgres | 8192 bytes |
 snapshots | snap_all_tables        | table | postgres | 96 kB      |
 snapshots | snap_cpu               | table | postgres | 8192 bytes |
 snapshots | snap_databases         | table | postgres | 8192 bytes |
 snapshots | snap_indexes           | table | postgres | 120 kB     |
 snapshots | snap_iostat            | table | postgres | 8192 bytes |
 snapshots | snap_load_avg          | table | postgres | 8192 bytes |
 snapshots | snap_mem               | table | postgres | 8192 bytes |
 snapshots | snap_pg_locks          | table | postgres | 16 kB      |
 snapshots | snap_settings          | table | postgres | 32 kB      |
 snapshots | snap_stat_activity     | table | postgres | 16 kB      |
 snapshots | snap_statio_all_tables | table | postgres | 72 kB      |
(12 rows)

postgres=# reset search_path;
RESET
postgres=#
There are a few tables here (snap_cpu, snap_load_avg, snap_mem) that seem interesting, eh? I’ll cover these in a future post; we can’t get that data from within a Postgres instance without a special extension installed or some external driver collecting it. For now, those tables will remain unused.
2. Take a snapshot
The snapshots.save_snap() function included with pgcraftsman-snapshots.sql does a quick save of all the metadata and assigns it all a new snap_id:
postgres=# select snapshots.save_snap();
 save_snap
-----------
         3
(1 row)

postgres=#
The output row is the snap_id that was just generated and saved.
Every time you want to create a snapshot, just call:
select snapshots.save_snap();
The easiest way to do this is via cron or a similar job scheduler (pg_cron). I find it best to schedule these before and after large workload windows. If you have a 24-hour workload, find the inflection points you’re looking to differentiate between.
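For example, with the pg_cron extension installed, an hourly schedule might look like this (the job name and frequency here are just placeholders; adjust them to your own workload windows):

-- take a snapshot at the top of every hour via pg_cron
select cron.schedule('pgcraft_snapshot', '0 * * * *', 'select snapshots.save_snap()');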
Snapshot Performance
Questions about the performance of a snapshot make lots of sense. If you look at save_snap() in the code, you’ll see that the runtime of the process depends on the number of rows in each of the catalog views, which in turn depends on the following (see the sketch after this list):
- pg_stat_activity <– Number of connections to the instance
- pg_locks <– Number of locks
- pg_stat_all_tables <– Number of tables in the database
- pg_statio_all_tables <– Number of tables in the database
- pg_stat_all_indexes <– Number of indexes in the database
- pg_stat_database <– Number of databases in the instance
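If you want to gauge that ahead of time, you can count the rows in those views yourself; a quick sketch:

-- approximate the number of rows each snapshot will copy
select 'pg_stat_activity'    as source, count(*) as row_count from pg_stat_activity
union all
select 'pg_locks',            count(*) from pg_locks
union all
select 'pg_stat_all_tables',  count(*) from pg_stat_all_tables
union all
select 'pg_stat_all_indexes', count(*) from pg_stat_all_indexes
union all
select 'pg_stat_database',    count(*) from pg_stat_database;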
For databases with thousands of objects, snapshots should be pruned frequently so that the snapshot mechanism itself does not cause performance problems.
Pruning old snapshots
Pruning old snapshots with this script is really easy. There is a relationship between the snapshots.snap table and all the others, so a simple ‘DELETE FROM snapshots.snap WHERE snap_id = x;’ will delete all the rows for the given snap_id.
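Assuming the cascading relationship described above (verify it in your installed copy of the script), an age-based prune is a one-liner:

-- drop every snapshot older than 30 days; related rows in the snap_* tables go with it
delete from snapshots.snap
where dttm < now() - interval '30 days';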
3. Let the workload run
Let’s learn a little bit about the workload that is running in the database. Now that we have taken a snapshot (snap_id = 3) before the workload, we’re going to let the workload run for a bit, then take another snapshot and compare the difference.
(Note: snapshots just read the few catalog views I noted above and save the data. They don’t start a process or run anything. The only thing that’ll make your snapshots run long is a large number of objects (schemas, tables, indexes) in the database.)
4. Take a ‘post-workload’ snapshot
After we’ve let the workload run for a while (5 minutes, 2 hours, 2 days… whatever you think will give the best approximation for your workload), take a new snapshot. This will save the new state of data and let us compare the before and after stats:
postgres=# select snapshots.save_snap();
 save_snap
-----------
         4
(1 row)

postgres=#
5. Analyze the report
There are two included functions for reporting across the workload:
select * from snapshots.report_tables(start_snap_id, end_snap_id);
select * from snapshots.report_indexes(start_snap_id, end_snap_id);
Both of these reports need a starting and ending snap_id. You can get this by examining the snapshots.snap table:
postgres=# select * from snapshots.snap;
 snap_id |             dttm
---------+-------------------------------
       1 | 2020-10-15 10:32:54.31244-04
       2 | 2020-10-15 10:32:54.395929-04
       3 | 2020-10-15 10:56:56.894127-04
       4 | 2020-10-15 13:30:47.951223-04
(4 rows)

postgres=#
Our pre-workload snapshot was snap_id = 3 and our post-workload snapshot was snap_id = 4.
Since we are reporting between two snapshots, we can see exactly what occurred between them: the number of inserts, updates, deletes, sequential scans, and index scans, and even table growth (in bytes and human-readable form). The key is that this is just what took place between the snapshots. You can take a snapshot at any time and report across any pair of them.
(Note: You may need to side-scroll to see the full output. I highly recommend it.)
postgres=# select * from snapshots.report_tables(3,4);
   time_window   |         relname         |  ins   |  upd   | del | index_scan | seqscan | relsize_growth_bytes | relsize_growth | total_relsize_growth_bytes | total_relsize_growth | total_relsize | total_relsize_bytes
-----------------+-------------------------+--------+--------+-----+------------+---------+----------------------+----------------+----------------------------+----------------------+---------------+---------------------
 02:33:51.057096 | pgbench_accounts        |      0 | 588564 |   0 |    1177128 |       0 |             22085632 | 21 MB          |                   22085632 | 21 MB                |    1590083584 | 1516 MB
 02:33:51.057096 | pgbench_tellers         |      0 | 588564 |   0 |     588564 |       0 |              1269760 | 1240 kB        |                    1597440 | 1560 kB              |       1720320 | 1680 kB
 02:33:51.057096 | pgbench_history         | 588564 |      0 |   0 |            |       0 |             31244288 | 30 MB          |                   31268864 | 30 MB                |      31268864 | 30 MB
 02:33:51.057096 | pgbench_branches        |      0 | 588564 |   0 |     587910 |     655 |              1081344 | 1056 kB        |                    1146880 | 1120 kB              |       1204224 | 1176 kB
 02:33:51.057096 | snap_indexes            |    167 |      0 |   0 |          0 |       0 |                49152 | 48 kB          |                      65536 | 64 kB                |        204800 | 200 kB
 02:33:51.057096 | snap_all_tables         |    111 |      0 |   0 |          0 |       0 |                40960 | 40 kB          |                      40960 | 40 kB                |        172032 | 168 kB
 02:33:51.057096 | snap_statio_all_tables  |    111 |      0 |   0 |          0 |       0 |                24576 | 24 kB          |                      24576 | 24 kB                |        114688 | 112 kB
 02:33:51.057096 | pg_statistic            |     23 |     85 |   0 |        495 |       0 |                16384 | 16 kB          |                      16384 | 16 kB                |        360448 | 352 kB
 02:33:51.057096 | snap_pg_locks           |     39 |      0 |   0 |          0 |       0 |                 8192 | 8192 bytes     |                      32768 | 32 kB                |         98304 | 96 kB
 02:33:51.057096 | snap_stat_activity      |      6 |      0 |   0 |          0 |       0 |                    0 | 0 bytes        |                          0 | 0 bytes              |         32768 | 32 kB
 02:33:51.057096 | snap                    |      1 |      0 |   0 |          0 |     324 |                    0 | 0 bytes        |                          0 | 0 bytes              |         57344 | 56 kB
 02:33:51.057096 | snap_settings           |      1 |      0 |   0 |          1 |       1 |                    0 | 0 bytes        |                          0 | 0 bytes              |        114688 | 112 kB
 02:33:51.057096 | snap_databases          |      1 |      0 |   0 |          0 |       0 |                    0 | 0 bytes        |                          0 | 0 bytes              |         24576 | 24 kB
 02:33:51.057096 | pg_class                |      0 |      1 |   0 |       1448 |     200 |                    0 | 0 bytes        |                          0 | 0 bytes              |        245760 | 240 kB
 02:33:51.057096 | pg_trigger              |      0 |      0 |   0 |          3 |       0 |                    0 | 0 bytes        |                          0 | 0 bytes              |         65536 | 64 kB
 02:33:51.057096 | sql_parts               |      0 |      0 |   0 |            |       0 |                    0 | 0 bytes        |                          0 | 0 bytes              |         49152 | 48 kB
 02:33:51.057096 | pg_event_trigger        |      0 |      0 |   0 |          0 |       0 |                    0 | 0 bytes        |                          0 | 0 bytes              |         16384 | 16 kB
 02:33:51.057096 | pg_language             |      0 |      0 |   0 |          1 |       0 |                    0 | 0 bytes        |                          0 | 0 bytes              |         73728 | 72 kB
 02:33:51.057096 | pg_toast_3381           |      0 |      0 |   0 |          0 |       0 |                    0 | 0 bytes        |                          0 | 0 bytes              |          8192 | 8192 bytes
 02:33:51.057096 | pg_partitioned_table    |      0 |      0 |   0 |          0 |       0 |                    0 | 0 bytes        |                          0 | 0 bytes              |          8192 | 8192 bytes
 02:33:51.057096 | pg_largeobject_metadata |      0 |      0 |   0 |          0 |       0 |                    0 | 0 bytes        |                          0 | 0 bytes              |          8192 | 8192 bytes
 02:33:51.057096 | pg_toast_16612          |      0 |      0 |   0 |          0 |       0 |                    0 | 0 bytes        |                          0 | 0 bytes              |          8192 | 8192 bytes
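Because report_tables() is just a set-returning function, you can also trim the output to the columns you care about; for example, the five busiest tables by inserts between the same two snapshots:

-- top 5 tables by inserts between snapshots 3 and 4
select relname, ins, upd, del, index_scan, seqscan, total_relsize
from snapshots.report_tables(3,4)
order by ins desc
limit 5;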
This script is a building-block. If you have a single database that you want stats on, it’s great. If you have dozens of databases in a single instance or dozens of instances, you’re going to quickly wish you had this data in a dashboard of some kind.
Hopefully this gets you started with metric building against your Postgres databases. Practice the pgCraft, and submit me a pull request!
Next time, we’ll look more into some of the insights we can glean from the information we assemble here.