
PostgreSQL 9.0 High Performance

Accelerate your PostgreSQL system

  • Learn the right techniques to obtain optimal PostgreSQL database performance, from initial design to routine maintenance
  • Discover the techniques used to scale successful database installations
  • Avoid the common pitfalls that can slow your system down
  • Filled with advice about what you should be doing, including how to build experimental databases to explore performance topics and then move what you’ve learned into a production database environment
  • Covers versions 8.1 through 9.0

PostgreSQL uses a Write-Ahead Log (WAL) to write data in a way that survives a database or hardware crash. This is similar to the log buffer or REDO log found in other databases. The database documentation covers the motivation and implementation of the WAL at http://www.postgresql.org/docs/current/static/wal.html

To quote from that introduction:

WAL’s central concept is that changes to data files (where tables and indexes reside) must be written only after those changes have been logged, that is, after log records describing the changes have been flushed to permanent storage.

This procedure ensures that if your application has received a COMMIT for a transaction, that transaction is on permanent storage, and will not be lost even if there is a crash. This satisfies the durability portion of the ACID (atomicity, consistency, isolation, durability) guarantees databases aim to provide.

Write-back caches

The CPUs and memory in your server are quite fast compared to its disk drives. Accordingly, making the rest of the system wait for the disks, particularly when things need to be written out, can drag overall performance down heavily. Systems that wait for the disks to complete their writes before moving on to their next task are referred to as having a write-through cache. While the data may be stored temporarily in a memory cache along the way, a write an application requested isn’t considered complete until it has made it all the way through to the physical disk.

The normal solution for making that faster is to introduce a different type of write cache between the program doing the writing and the disks. A write-back cache is one where data is copied into memory, and then control returns to the application that requested the write. Those writes are then handled asynchronously, at some future time dictated by the design of the write-back cache. It can take minutes before the data actually makes it to disk.
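
One way to get a feel for the difference between these two behaviors from a Linux command line is with GNU dd, which can optionally force its output to disk before finishing. This is only an illustrative sketch; the file name and sizes are arbitrary, and the timing gap you see depends entirely on your hardware:

# Write 100 MB through the OS write-back cache; dd returns as soon as the
# data has been copied into memory, not when it reaches the platters
dd if=/dev/zero of=/tmp/cache-test bs=8k count=12800

# The same write followed by a flush, so dd does not finish until the
# data has actually been pushed out to the disk
dd if=/dev/zero of=/tmp/cache-test bs=8k count=12800 conv=fsync

The second form is normally much slower, and the difference between the two gives a rough sense of how much work the write-back cache is deferring.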

When PostgreSQL writes information to the WAL, and sometimes when it writes to the regular database files too, that information must be “flushed to permanent storage” in order for the database’s crash corruption defense mechanism to work. So what happens if you have a write-back cache that says the write is complete, but it really isn’t? People call these lying drives, and the result can be very bad:

If you have a system with a write-back cache, and a system crash causes the contents of that write-back cache to be lost, this can corrupt a PostgreSQL database stored on that drive and make it unusable. It can take expert intervention even to get the database to start again, and determining which data has been damaged will be difficult.

Consider the case where you have committed a transaction. Details of that new transaction might be spread across two data blocks on the drive. Now, imagine that one of those made it to disk before the system crashed, but the other didn’t. You’ve now left the database in a corrupted state: one block refers to a transaction that doesn’t exist where it’s supposed to in the other block.

Had at least all of the data blocks related to the WAL been written properly, the database WAL could correct this error after the crash. But the WAL protection only works if it can get honest information about whether information has been written to the disks properly or not, and the “lying” write-back caches do not report that.

Sources of write-back caching

Servers are filled with write caches you need to be aware of:

  • Operating system write cache. This cache can easily be gigabytes in size. Typically you can flush data out of this cache by forcing a sync operation on the data that needs to be stored on disk. On POSIX systems (which includes all UNIX-like ones), this is done with the fsync or fdatasync calls. In some cases, it’s possible to write directly in a sync mode, which is effectively a write followed by an fsync. The postgresql.conf setting wal_sync_method controls which method is used, and it’s possible to disable this altogether to optimize for speed instead of safety (see the configuration example after this list).
  • Disk controller write cache. You’ll find a write cache on most RAID controller cards, as well as inside external storage such as a SAN. Common sizes right now are 128 MB to 512 MB for cards, while gigabytes are common on a SAN. Typically controllers can be changed to operate in a completely write-through mode, albeit a slower one. But by default, you’ll normally find them in write-back mode. Writes that fit in the controller’s cache are stored there, the operating system is told the write is complete, and the card writes the data out at some future time. To keep such a write from being lost if power is interrupted, the card must be configured with a battery. That combination is referred to as a battery-backed write cache (BBC or BBWC).
  • Disk drive write cache. All SATA and SAS disks have a write cache on them, which on current hardware is 8 MB to 32 MB in size. This cache is always volatile: if power is lost, any data stored in it will be lost, and these drive caches are always write-back caches if enabled.
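
As an illustration of the first item, here is roughly how the relevant WAL flush settings appear in postgresql.conf. The values shown are examples rather than recommendations; the default wal_sync_method depends on the operating system, and fsync should stay on for any data you care about:

# postgresql.conf (example values only)
fsync = on                     # turning this off trades crash safety for speed
wal_sync_method = fdatasync    # which flush call is used for WAL writes;
                               # the platform default is usually appropriate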

How can you make sure you’re safe given all these write-back caches that might lose your data? There are a few basic precautions to take:

  • Make sure whatever file system you’re using fully and correctly implements fsync calls, or whatever similar flush mechanism it relies on instead.
  • Monitor your disk controller battery. Some controller cards will monitor their battery health, and automatically switch from write-back to write-through mode when there is no battery or it’s not working properly. That’s a helpful safety measure, but performance is going to drop hard when this happens.
  • Disable any drive write caches. Most hardware RAID controllers will do this for you, preferring their own battery-backed caches instead.

Disk controller monitoring

When you have a RAID controller card with a battery-backed cache, you probably expect you’ll need to monitor the card to determine when disks fail. But monitoring controller battery health is an equally critical aspect of maintaining a reliable database system when you’re using this technology. If the battery fails and the cache is still in write-back mode, your writes are not safe.

Similarly, if your power fails, you should prefer shutting the database server down after a few minutes of power loss to trying to keep it going. Integrating power monitoring via a UPS or similar mechanism should be part of your database server configuration, so that a short outage results in an orderly shutdown. Consider the purpose of the controller battery to be protecting you from truly unexpected outages, like someone tripping over the power cord.

Even if the manufacturer claims the controller battery will last through days of downtime, that’s not a good reason to believe you’ll actually be safe during an extended outage. You should consider the battery as something you’d prefer to rely on for only some number of minutes of protection. That may be all it can really provide, particularly once the battery is older and has lost much of its capacity, and some controller batteries don’t start out with very much capacity. Be sure to run your own tests rather than blindly believing the manufacturer specifications: your data depends on it.

Better RAID controllers will automatically disable write-back mode if their battery stops working normally. If performance suddenly drops on an older server, this is one potential cause.
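
Checking the battery directly means using the management utility supplied by the controller vendor, and the exact commands vary from product to product. As one hedged example, on systems with an LSI MegaRAID card the MegaCli utility (sometimes installed as MegaCli64) can report the state of the battery-backed unit; check your vendor’s documentation for the equivalent on your hardware:

# Report battery-backed unit status on an LSI MegaRAID controller
MegaCli -AdpBbuCmd -GetBbuStatus -aALL

Whatever tool your hardware provides, wiring its output into your regular monitoring system is the important part, so that a failed battery is noticed before a crash puts it to the test.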

Also don’t forget that every UPS has a battery that degrades over time as well. That’s all the more reason to arrange an orderly shutdown of your server during a power outage, rather than optimistically presuming you can keep it running until power returns.

Disabling drive write caches

If your card doesn’t disable all the drive write caches, or if you’re using a software RAID approach, you’ll need to turn the caches off yourself. The best way to do this is to see if it’s possible to change the default write cache state using the utilities provided by the drive manufacturer.

You should be able to do this through software as well. Here is a sample session from a Linux system checking the write cache, toggling it off, confirming that change took, and then toggling it on again:

# hdparm -I /dev/sda | grep "Write cache"
 * Write cache
# sudo hdparm -W 0 /dev/sda

/dev/sda:
 setting drive write-caching to 0 (off)
 write-caching = 0 (off)
# hdparm -I /dev/sda | grep "Write cache"
 Write cache
# hdparm -W 1 /dev/sda

/dev/sda:
 setting drive write-caching to 1 (on)
 write-caching = 1 (on)


Only the -W 0 configuration is completely safe for database use. The PostgreSQL WAL documentation suggests similar commands to use for other operating systems.

Performance impact of write-through caching

If you don’t have a battery-backed write cache, and therefore can’t utilize some memory-based cache to accelerate fsync writes, commit performance on your database can be quite bad. The worst case here is where you have a single client that is issuing a commit after every statement it executes. The reality of how a hard drive works means that an individual write can only be flushed once each time the drive spins around, so the ceiling on commit rate is simply the rotation rate: rpm divided by 60, which for a 7200 rpm drive works out to 120 commits per second. Here are the figures for the common drive speeds available right now, along with the computed maximum commit rate:

Rotation speed (rpm)   Rotation time (ms)   Max commits/second
5400                   11.1                 90
7200                   8.3                  120
10000                  6.0                  166
15000                  4.0                  250

It’s important to realize how limiting this can be:

If you have a common 7200 rpm hard drive, no single client can commit more than 120 transactions/second in any situation where all that’s available is a volatile write-back cache, which cannot safely be left enabled.

It doesn’t matter how many disks you have in a RAID array, or how you configure your software. You must have hardware with a battery, enabling a non-volatile write-back cache, in order to safely exceed this limit.

Some PostgreSQL installs use a RAID controller card just for this purpose, to provide a BBWC, in Just a Bunch of Disks (JBOD) mode, where no RAID is being done on the controller at all. Sometimes the disks are then used directly, while other setups layer software RAID on top, which can have some advantages compared to hardware RAID.

If you have more than one client, you’ll get more than one transaction committed per physical disk flush. It’s normal to see >500 committed transactions per second if you have a larger number of clients all committing regularly, because each flushed disk write will include any queued up commit requests from other clients, too. The other common technique here is to batch commits into larger pieces, perhaps loading 1000 records per commit rather than a single one, in order to reduce the average impact of commit latency.
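
Here is what that batching looks like as plain SQL, using a hypothetical events table. The point is simply that every row inserted inside the transaction block shares a single COMMIT, and therefore a single WAL flush:

-- One commit, and one WAL flush, per row inserted (autocommit mode)
INSERT INTO events (payload) VALUES ('first');
INSERT INTO events (payload) VALUES ('second');

-- Batched: all of the rows share one commit and one WAL flush
BEGIN;
INSERT INTO events (payload) VALUES ('first');
INSERT INTO events (payload) VALUES ('second');
-- ...potentially hundreds or thousands more rows...
COMMIT;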

Another approach for accelerating systems that don’t have a usable write cache is asynchronous commit.
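
With asynchronous commit, the server acknowledges COMMIT before the matching WAL flush has made it to disk. A crash can then lose the last few transactions, but unlike a lying write cache it does not corrupt the database. It can be enabled for a single session, which makes it easy to experiment with:

-- Only this session acknowledges COMMIT before its WAL records reach disk
SET synchronous_commit = off;

The same setting can also be made the default in postgresql.conf for workloads where losing a short window of recent transactions after a crash is acceptable.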

Summary

In this article we saw how accidentally using volatile write-back caching in disk controllers and drives can easily introduce database corruption.

