Accelerate your PostgreSQL system
PostgreSQL uses a Write-Ahead Log (WAL) to write data in a way that survives a database or hardware crash. This is similar to the log buffer or REDO log found in other databases. The database documentation covers the motivation and implementation of the WAL at http://www.postgresql.org/docs/current/static/wal.html
To quote from that introduction:
WAL’s central concept is that changes to data files (where tables and indexes reside) must be written only after those changes have been logged, that is, after log records describing the changes have been flushed to permanent storage.
This procedure ensures that if your application has received a COMMIT for a transaction, that transaction is on permanent storage, and will not be lost even if there is a crash. This satisfies the durability portion of the ACID (atomicity, consistency, isolation, durability) expectations databases aim to satisfy.
The CPUs and memory in your server are quite fast compared to its disk drives. Accordingly, making the rest of the system wait for the disks, particularly when things need to be written out, can drag overall performance down heavily. Systems that wait for the disks to complete their writes before moving into their next task are referred to as having a write-through cache. While the data may be stored temporarily in a memory cache, until it’s made it all the way through to the physical disk, any write an application requested isn’t considered complete.
The normal solution to making that faster is to introduce a different type of write cache between the program doing the writing and disks. A write-back cache is one where data is copied into memory, and then control returns to the application that requested the write. Those writes are then handled asynchronously, at some future time dictated by the design of the write-back cache. It can take minutes before the data actually makes it to disk.
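The danger described above is easiest to see in miniature. Here is a toy Python model (the class and names are illustrative, not any real cache API) of a write-back cache: writes return immediately at memory speed, and data only reaches permanent storage when a later flush happens. Anything still pending when a crash occurs is simply gone.

```python
class WriteBackCache:
    """Toy model of a write-back cache: write() returns immediately,
    and data reaches 'disk' only when flush() runs some time later."""
    def __init__(self):
        self.pending = []   # data held only in volatile memory
        self.disk = []      # data that has reached permanent storage

    def write(self, block):
        self.pending.append(block)   # returns at memory speed

    def flush(self):
        # The slow physical write happens here, asynchronously,
        # possibly long after the application moved on.
        self.disk.extend(self.pending)
        self.pending.clear()

cache = WriteBackCache()
cache.write("commit record for txn 42")
# The application believes the write is done, but a crash at this
# point would lose everything still sitting in cache.pending:
print(cache.disk)   # [] -- nothing durable yet
cache.flush()
print(cache.disk)   # only now is the data on permanent storage
```

The gap between `write()` returning and `flush()` completing is exactly the window in which a "lying" drive can lose a commit the application was told had succeeded.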
When PostgreSQL writes information to the WAL, and sometimes when it writes to the regular database files too, that information must be “flushed to permanent storage” in order for the database’s crash corruption defense mechanism to work. So what happens if you have a write-back cache that says the write is complete, but it really isn’t? People call these lying drives, and the result can be very bad:
If you have a system with a write-back cache, and a system crash causes the contents of that write-back cache to be lost, this can corrupt a PostgreSQL database stored on that drive and make it unusable. It can take expert intervention just to get the database to start again, and determining which data is damaged will be difficult.
Consider the case where you have committed a transaction. Details of that new transaction might be spread across two data blocks on the drive. Now, imagine that one of those made it to disk before the system crashed, but the other didn’t. You’ve now left the database in a corrupted state: one block refers to a transaction that doesn’t exist where it’s supposed to in the other block.
Had at least all of the data blocks related to the WAL been written properly, the database WAL could correct this error after the crash. But the WAL protection only works if it can get honest information about whether information has been written to the disks properly or not, and the “lying” write-back caches do not report that.
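The WAL rule quoted earlier can be sketched in a few lines: the log record describing a change is forced to permanent storage with fsync before the data files are touched, so a crash at any point leaves either a replayable log record or no visible change at all. A minimal sketch (the file names and record format are illustrative only):

```python
import os

def commit_change(wal_path, data_path, change):
    # 1. Append a log record describing the change...
    with open(wal_path, "a") as wal:
        wal.write(change + "\n")
        wal.flush()
        os.fsync(wal.fileno())  # ...and force it to permanent storage.
    # 2. Only now is it safe to touch the data file; if we crash
    #    between these two steps, recovery replays the WAL record.
    with open(data_path, "a") as data:
        data.write(change + "\n")

commit_change("wal.log", "table.dat", "txn 42: insert row")
```

Note that `os.fsync` only provides this guarantee if every cache layer beneath it honestly reports when data has hit the platters, which is precisely what a lying write-back cache fails to do.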
Servers are filled with write caches you need to be aware of: the operating system's write cache, the write cache on any RAID controller card, and the write caches built into the disk drives themselves.
How can you make sure you’re safe given all these write-back caches that might lose your data? There are a few basic precautions to take:
When you have a RAID controller card with a battery-backed write cache, you probably expect to monitor the card to determine when disks fail. But monitoring controller battery health is an equally critical aspect of maintaining a reliable database system when you're using this technology. If the battery fails while the card is in write-back mode, your writes are no longer safe.

Similarly, if your power fails, you should prefer shutting the database server down after a few minutes of power loss to trying to keep it going. Power monitoring via a UPS or similar mechanism should be integrated into your database server configuration, so that a short outage results in an orderly shutdown. Think of the controller battery as protection against truly unexpected outages, like someone tripping over the power cord.

Even if the manufacturer claims the controller battery will last through days of downtime, that's not a good reason to believe you'll actually be safe for an extended outage. Treat the battery as something that buys you minutes of protection, not days. That may well be the reality, particularly when the battery is older and has lost much of its capacity; some controller batteries don't start out with much capacity either. Be sure to run your own tests rather than blindly believing the manufacturer's specifications: your data depends on it.
Better RAID controllers will automatically disable write-back mode if their battery stops working normally. If performance suddenly drops on an older server, this is one potential cause.
Also don’t forget that every UPS has a battery that degrades over time as well. That’s all the more reason to arrange an orderly shutdown of your server during a power outage, rather than optimistically presuming you can keep it running until power returns.
If your card doesn’t disable all the drive write caches, or if you’re using a software RAID approach, you’ll need to turn the caches off yourself. The best way to do this is to see if it’s possible to change the default write cache state using the utilities provided by the drive manufacturer.
You should be able to do this through software as well. Here is a sample session from a Linux system checking the write cache, toggling it off, confirming that change took, and then toggling it on again:
# hdparm -I /dev/sda | grep "Write cache"
	*	Write cache
# hdparm -W 0 /dev/sda
/dev/sda:
 setting drive write-caching to 0 (off)
 write-caching = 0 (off)
# hdparm -I /dev/sda | grep "Write cache"
	Write cache
# hdparm -W 1 /dev/sda
/dev/sda:
 setting drive write-caching to 1 (on)
 write-caching = 1 (on)
Only the -W 0 configuration is completely safe for database use. The PostgreSQL WAL documentation suggests similar commands to use for other operating systems.
If you don’t have a battery-backed write cache, and therefore can’t use some memory-based cache to accelerate fsync writes, commit performance on your database can be quite bad. The worst case is a single client issuing a commit after every statement it executes. The mechanics of a spinning hard drive mean that an individual write can complete at most once per rotation of the platters. Here are the common drive rotation speeds available right now, with the computed maximum commit rate:
Rotation speed (rpm)    Rotation time (ms)    Max commits/second
 5400                   11.1                   90
 7200                    8.3                  120
10000                    6.0                  166
15000                    4.0                  250
It’s important to realize how limiting this can be:
If you have a common 7200 rpm hard drive, no single client can commit more than 120 transactions/second in any situation where all that’s available is a write-through cache.
It doesn’t matter how many disks you have in a RAID array, or how you configure your software. You must have hardware with a battery, enabling a non-volatile write-back cache, in order to safely exceed this limit.
Some PostgreSQL installs use a RAID controller card just for this purpose, to provide a BBWC, in Just a Bunch of Disks (JBOD) mode—where no RAID is being done on the controller at all. Sometimes the disks are then used directly, while other setups layer software RAID on top, which can have some advantages compared to hardware RAID.
If you have more than one client, you’ll get more done per commit. It’s normal to see >500 committed transactions per second if you have a larger number of clients all committing regularly, because each flushed disk write will include any queued up commit requests from other clients, too. The other common technique here is to batch commits into larger pieces, perhaps going 1000 records at a time rather than a single one, in order to reduce the average impact of commit latency.
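Back-of-the-envelope arithmetic shows why batching matters so much. Assuming the 7200 rpm drive from the table above and a hypothetical 10,000-row load, committing per row pays the rotational flush cost 10,000 times, while committing every 1,000 rows pays it only 10 times:

```python
ROTATION_MS = 8.3   # one commit flush on a 7200 rpm drive
ROWS = 10_000

# Committing after every row pays the flush cost 10,000 times:
per_row_ms = ROWS * ROTATION_MS

# Batching 1,000 rows per commit pays it only 10 times:
batched_ms = (ROWS // 1000) * ROTATION_MS

# Roughly 83 seconds of pure commit latency versus under 0.1 seconds:
print(f"{per_row_ms / 1000:.1f} s vs {batched_ms / 1000:.3f} s")
```

The data transfer time is the same in both cases; only the number of synchronous flushes changes, which is where nearly all of the latency lives.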
Another approach for accelerating systems that don’t have a usable write cache is asynchronous commit.
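In PostgreSQL this is controlled by the synchronous_commit parameter, which can be set per session or per transaction rather than server-wide. With it off, a crash can lose the last few moments of acknowledged commits, but it cannot corrupt the database the way a lying write cache can:

```sql
-- Applies only to this session; a crash may lose recently
-- acknowledged commits, but leaves the database consistent:
SET synchronous_commit TO OFF;
```

This makes it a reasonable trade for data you can afford to reload, while keeping fully synchronous commits for transactions that must never be lost.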
In this article we saw how accidentally using volatile write-back caching in disk controllers and drives can easily introduce database corruption.