EuroPython 2018: Postgres at any scale

Craig Kerstiens runs the database-as-a-service at Citus, curates Postgres Weekly, and previously ran several products at Heroku.

This talk had lots of helpful large queries I didn't even come close to typing down.

Why Postgres?

It's the emacs of databases. It has rich data types, lots of index types, full text search, JSONB (proper JSON data structures, indexable and queryable), the best geospatial support around via PostGIS, and extensions, which let you get new functionality without waiting for the next Postgres release.

Small Scale

Up to 10 GB of data.

Data types

  • Pay attention to data types: MySQL silently truncates too-long strings instead of raising an error.
  • Timestamp vs Timestamptz: Postgres is really, really good with timezones. A timestamptz is stored normalised to UTC, and Postgres does the timezone conversion for you on input and output.
  • JSON vs JSONB: JSON only validates that the stored text is correct JSON – JSONB, on the other hand, stores the parsed structure, so it understands the data and can be indexed.
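
A tiny sketch of both points (these queries are my illustration, not from the talk):

    -- timestamptz: the instant is stored in UTC and rendered in your session's zone
    SET TIME ZONE 'Europe/Berlin';
    SELECT now();                      -- shown with the Europe/Berlin offset
    SELECT now() AT TIME ZONE 'UTC';   -- the same instant as a plain timestamp in UTC

    -- JSONB understands structure, so containment queries (and GIN indexes) work
    SELECT '{"user": {"id": 42}}'::jsonb @> '{"user": {"id": 42}}'::jsonb;   -- true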

Constraints

  • Django/SQLAlchemy support is good by default.
  • Proper handling of string lengths.
  • Proper handling of IP addresses.
  • Proper handling of foreign keys on deletion (see the sketch below).

In Rails it's really common to do soft deletes (via a deleted_at field). Less so in Django+Python, but it's a really good pattern, provided you take the time to consider data retention guidelines.
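
A minimal table sketch pulling these points together (table and column names are made up for illustration):

    CREATE TABLE users (
        id          bigserial PRIMARY KEY,
        email       varchar(255) NOT NULL UNIQUE,   -- length is enforced, never truncated
        last_ip     inet,                           -- a real IP address type
        deleted_at  timestamptz                     -- soft delete marker, NULL = live row
    );

    CREATE TABLE orders (
        id       bigserial PRIMARY KEY,
        user_id  bigint NOT NULL REFERENCES users (id) ON DELETE RESTRICT
    );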

Tools

psql is built in; learn it. It's customisable – a sample ~/.psqlrc follows the list below.

  • \x auto for output formatting.
  • Save histories in per-database history files.
  • \timing shows the running time on every query you run.
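
For example, a possible ~/.psqlrc covering these points (the exact settings here are personal taste, not from the talk):

    \set QUIET 1
    -- expanded output only when a row doesn't fit on screen
    \x auto
    -- one history file per database
    \set HISTFILE ~/.psql_history- :DBNAME
    -- print the run time after every query
    \timing on
    \set QUIET 0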

Backups

Backups don't exist unless you test them. Try to restore your latest backup now. Then test them regularly.

Medium Size

10 GB to 100 GB of data.

Configure your system

Analyse

Figure out your cache hit rate (it's a pretty long-ish query). You want a cache hit ratio of about 99%. If it drops to 98% or 95%, performance falls off really, really quickly!
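
The exact query wasn't captured here; the commonly circulated version reads roughly like this:

    SELECT
        sum(heap_blks_hit)  AS cache_hits,
        sum(heap_blks_read) AS disk_reads,
        sum(heap_blks_hit) / nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0)::float
            AS cache_hit_ratio
    FROM pg_statio_user_tables;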

Figure out your index hit rate. Index hit rate should be >=95% on tables with more than 10k rows.
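
Again not the talk's exact query, but a common way to get a per-table index hit rate for tables above 10k rows:

    SELECT
        relname,
        n_live_tup,
        idx_scan::float / nullif(idx_scan + seq_scan, 0) AS index_hit_ratio
    FROM pg_stat_user_tables
    WHERE n_live_tup > 10000
    ORDER BY index_hit_ratio;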

Understand specific query performance. Use EXPLAIN ANALYZE, which a) tells you what the planner thinks it will do, and b) then runs the query and reports what actually happened. The output is pretty confusing, so you may want tooling to help read it.
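
Usage is simply prefixing the query (the users table here is hypothetical):

    EXPLAIN ANALYZE
    SELECT * FROM users WHERE email = 'someone@example.com';

Compare the estimated row counts against the actual ones; a big mismatch usually points at stale statistics or a missing index.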

Try to get common page response times under 100ms.

If you're not sure which query is slow, use pg_stat_statements, e.g. select * from pg_stat_statements where query ~ 'from users where email';. Aggregate over this view to find the slow queries.
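
For the aggregation, something along these lines works (column names are those of the extension around 2018; newer releases renamed total_time to total_exec_time):

    SELECT query,
           calls,
           total_time,
           total_time / calls AS avg_time_ms
    FROM pg_stat_statements
    ORDER BY total_time DESC
    LIMIT 10;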

Indexes

Postgres basically gets a new index type every release; there are, for example, B-Tree, GIN, GiST, SP-GiST and BRIN. An astrophysics professor from Russia who hacks on Postgres for fun occasionally digs the weirdest index types out of research papers, and they get implemented.

  • Usually you want B-Tree indexes, for stuff like email address lookups, filters based on salary, and so on.
  • GIN indexes are great for multiple values in one column, e.g. arrays.
  • GiST indexes are for full text search or geospatial searches, since they work for values that can overlap between rows.
  • SP-GiST and BRIN are for really, really large tables and are harder to grasp. SP-GiST is for things like phone numbers, maybe; BRIN (block range indexes) is for naturally ordered data such as time series.
  • Since there's GIN, there will also be VODKA soon. Yeah.
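
The syntax is the same for all of them; for instance (tables and columns made up):

    CREATE INDEX idx_posts_tags ON posts USING gin (tags);       -- multiple values per row (array column)
    CREATE INDEX idx_shops_geo  ON shops USING gist (location);  -- overlapping / geospatial values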

Be specific with your indexes: add composite indexes where meaningful, functional indexes (e.g. for lower-cased lookups), and conditional (partial) indexes.
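
For example (hypothetical tables and columns again):

    -- composite index for queries that always filter on both columns
    CREATE INDEX idx_orders_user_created ON orders (user_id, created_at);

    -- functional index for case-insensitive email lookups
    CREATE INDEX idx_users_email_lower ON users (lower(email));

    -- conditional (partial) index covering only the rows you actually query
    CREATE INDEX idx_users_active ON users (email) WHERE deleted_at IS NULL;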

Faster Reads vs Faster Writes

Figure out if you need faster reads or faster writes. Indexes slow writes down, since every write also has to update the indexes, wait for acks, etc. But you should still be generous with your indexes, since Postgres is pretty clever about using them. Also, find out if you have unused indexes. [Another huge query missing here]
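
A commonly used stand-in for the missing query, based on the statistics views (not necessarily what the talk showed):

    SELECT schemaname,
           relname,
           indexrelname,
           idx_scan,
           pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
    FROM pg_stat_user_indexes
    WHERE idx_scan = 0
    ORDER BY pg_relation_size(indexrelid) DESC;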

Database migrations

If you add a column with a default value and NOT NULL, Postgres will REWRITE THE ENTIRE TABLE, since it has to write that value into every row, locking the table while it does so. Instead, break the migration up:

  1. Allow nulls, and set a default value.
  2. Gradually backfill old data.
  3. Add your constraint.

This way, you have basically no downtime. You'll probably run into this once you move from a small to a large (~100 GB) database.
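
As a sketch of the three steps in SQL (users/plan are hypothetical names; on Postgres 11+ a constant default no longer forces the rewrite, but the pattern still avoids long locks):

    -- 1. add the column as nullable, then set the default for new rows
    ALTER TABLE users ADD COLUMN plan text;
    ALTER TABLE users ALTER COLUMN plan SET DEFAULT 'free';

    -- 2. backfill existing rows in batches
    UPDATE users SET plan = 'free' WHERE plan IS NULL AND id BETWEEN 1 AND 100000;
    -- ...repeat for the remaining id ranges...

    -- 3. add the constraint once everything is backfilled
    ALTER TABLE users ALTER COLUMN plan SET NOT NULL;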

Large scale

Index creation

CREATE INDEX CONCURRENTLY (not supported in Django, sorry): It's about 2-3x slower, but it doesn't lock the table.
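
In plain SQL (index and table names hypothetical):

    -- builds without a lock that blocks reads and writes, at the cost of a slower build
    CREATE INDEX CONCURRENTLY idx_orders_created ON orders (created_at);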

Connection pooling

Use a Postgres-side connection pooler (and also connection pooling in your application). Prefer PgBouncer over pgpool.
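
As an illustration, a minimal pgbouncer.ini might look roughly like this (all values are placeholders, not recommendations from the talk):

    [databases]
    appdb = host=127.0.0.1 port=5432 dbname=appdb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction
    default_pool_size = 20
    max_client_conn = 500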

Scale cache

Moar memory. At some point though, that's not an option any more.

Use wal-e/wal-g (faster) or barman for streaming replicas.

Use sharding. Split up large tables ("logs", "events", anybody?). Look into services like Citus (open source). Have a look at the Instagram engineering post on sharding. Don't shard as one database/schema per customer. Avoid sharding as long as possible.
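
With the Citus extension installed, distributing a table is roughly a one-liner (the events/customer_id names are made up):

    CREATE EXTENSION citus;
    SELECT create_distributed_table('events', 'customer_id');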

Backups

You'll have started out with pg_dump (human-readable SQL, portable, but it puts load on the database and won't work at TB scale). The alternative is physical backups: more initial setup and less portability, but much less load on the database.
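
For reference, the two approaches boil down to commands like these (flags as of roughly Postgres 10; database names are placeholders):

    # logical: a portable dump, restorable with pg_restore, but it reads the whole DB
    pg_dump --format=custom mydb > mydb.dump
    pg_restore --dbname=mydb_restored mydb.dump

    # physical: a base backup plus streamed WAL, the basis for PITR and replicas
    pg_basebackup --pgdata=/backups/base --wal-method=stream --progress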

Closing

Small: Leverage datatypes, test backups, master your tools.

Medium: Tune postgres, watch cache hit ratio above anything else, be generous with indexes (just try them out, and delete them later if in doubt).

Large: Move away from pg_dump, set up connection pooling with PgBouncer, and shard.