
One of the reasons that working with open-source databases such as PostgreSQL can be so effective is that every dollar you save on software licensing can be put toward better hardware instead. The three main components you’ll need to balance in your budget are CPUs, memory, and disks, with the disk controller as a related and critical part too.

CPUs

Currently available processors bundle at least two, and possibly as many as eight, cores into each CPU, making core count the figure of merit for most database applications. There are two basic decisions you need to make when deciding which CPU solution best matches your database application:

  • Which processor family? Nowadays, this normally boils down to choosing among the various 64-bit product lines from Intel or AMD, although there are some other less popular choices still floating around (Itanium, SPARC, and so on).
  • Do you get more cores or faster cores?

These choices are sometimes more tied together than you might think. Currently, Intel has a lead in delivering the fastest individual processor cores, often due to faster transfers between the processor and system RAM, but its processors and related parts are more expensive too. AMD is more competitive at providing cores per dollar, and its server-class designs normally do a good job of making the best of the memory available to each core. So if what you want is a larger number of affordable cores rather than the fastest ones, AMD is the stronger choice. AMD also has a better history of making its fastest processors available in configurations with many sockets, for when you want to put more than two physical CPUs into a single server.

The best way to figure out which class of database application you have—more cores or faster cores—is to monitor an existing server using tools such as top. If a small number of processes are each saturating a single CPU, that’s the sort of workload where faster cores are better. That tends to happen when you have giant queries running in a batch fashion, for example when large quantities of data need to be sorted to deliver any single report. But if all the CPUs are busy with many more concurrent processes instead, you’d likely benefit more from additional cores. That’s normally what you’ll see in applications with a larger user count, such as databases backing web applications.
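
If you’d rather confirm this from inside the database than from top, counting the busy backends in pg_stat_activity gives a rough picture of how many queries are running at once. This is a minimal sketch; the state column assumes PostgreSQL 9.2 or later, while older releases expose a current_query column instead:

    -- Count backends currently executing a query (assumes the 9.2+ catalog layout).
    -- A handful of busy backends points toward faster cores;
    -- dozens of them points toward more cores.
    SELECT count(*) AS active_backends
    FROM pg_stat_activity
    WHERE state = 'active';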

If you don’t have the benefit of a working system to inspect, you can try to guess which type of situation you’re in by noting the limitations of the database. PostgreSQL does not split a single query across more than one core, a feature that databases which support it call parallel query. That means that if you have any one query, or a small number of queries, that must run as fast as possible, the only way to speed them up is to get faster cores.

Another situation where faster cores are the better choice is when you need to prioritize data loading or export. PostgreSQL’s best-performing data import method, COPY, can easily become (but isn’t always) limited by CPU performance, at which point the processor is the bottleneck for the whole operation. While it’s possible to split input files into pieces and load them in parallel, that’s something you’ll need to build or acquire yourself, rather than something the server knows how to do for you. Exporting a copy of the database using the pg_dump utility is another task that can become CPU limited on some systems.
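
To make that concrete, here’s a minimal sketch of the do-it-yourself parallel load described above; the measurements table, its columns, and the file paths are hypothetical, and the WITH option list assumes PostgreSQL 9.0 or later:

    -- Load one piece of a pre-split input file. COPY FROM a file path reads on the
    -- server and typically requires superuser rights; psql's \copy is the
    -- client-side equivalent.
    COPY measurements (recorded_at, sensor_id, reading)
        FROM '/data/load/measurements_part1.csv'
        WITH (FORMAT csv, HEADER true);

    -- Running the same statement for the other pieces in separate sessions at the
    -- same time is the parallel loading you have to arrange yourself:
    COPY measurements (recorded_at, sensor_id, reading)
        FROM '/data/load/measurements_part2.csv'
        WITH (FORMAT csv, HEADER true);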

Memory

How much to prioritize memory for your application really depends on the size of the working set of data needed to handle the most common operations. Generally, adding more RAM will provide a significant performance boost. There are a few situations where you’d be better served doing something else instead:

  • If your data set is small enough to fit into a smaller amount of RAM, adding more won’t help you much. You probably want faster processors instead.
  • When running applications that scan tables much larger than what you can feasibly purchase as RAM, such as in many data warehouse situations, you might be better served by getting faster disks rather than more memory.

The normal situation where more memory helps most is when the data you access frequently fits in the larger amount of RAM, but not in the smaller. This happens more often than you might think because of the way database B-tree indexes are stored. Even if you can’t fit the entirety of a table, or even of its index, in memory, being able to store a good-sized fraction of the index can mean that index-based data lookups are significantly sped up. Having the most “popular” blocks from the top of the tree structure cached is helpful even if you can’t fit all the leaves into memory too.

Once you have an application running, you can usually get a much better idea of how memory is being used by looking inside the PostgreSQL buffer cache (and potentially inside the operating system one as well) to see what data it prefers to keep around.
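
One way to do that is with the pg_buffercache contrib module. A query along the following lines, similar to the example in the module’s documentation, summarizes which relations in the current database are occupying the most space in shared_buffers; it assumes the default 8 KB block size, and on releases before 9.1 the module is installed by running its contrib SQL script rather than CREATE EXTENSION:

    -- Install the contrib module (PostgreSQL 9.1+ syntax).
    CREATE EXTENSION pg_buffercache;

    -- Which relations hold the most buffers in shared_buffers right now?
    SELECT c.relname,
           count(*) AS buffers,
           pg_size_pretty(count(*) * 8192) AS cached   -- assumes 8 KB blocks
    FROM pg_buffercache b
    JOIN pg_class c
      ON b.relfilenode = pg_relation_filenode(c.oid)
     AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                               WHERE datname = current_database()))
    GROUP BY c.relname
    ORDER BY buffers DESC
    LIMIT 10;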

Disks

While it’s always possible to run into situations where the CPU in your database server is its bottleneck, it’s downright likely that you’ll run into a disk bottleneck—particularly if you only have a drive or two in the system. A few years ago, the two basic choices in hard drives were the inexpensive ATA (also known as IDE) drives used in desktops versus the more serious SCSI drives aimed at servers.

Both technologies have marched forward, and the current choice you’re most likely to run into when configuring a database server is whether to use Serial ATA (SATA) or Serial Attached SCSI (SAS). It’s possible to find nearly identical drives available in both interfaces, and there are even drive controllers that allow attaching either kind of drive. Combined with a narrowing performance difference between the two, this makes picking between them harder than ever.

The broad parameters of each technology are straightforward to compare. Here’s the state of things as this is being written:

  • SAS disks:
    • The maximum available RPM is higher: 10,000 or 15,000
    • Not as much drive capacity: 73 GB to 1 TB are popular sizes
    • Cost per MB is higher
  • SATA disks:
    • Drives typically have a slower RPM: 7200 is standard, though some 10,000 RPM designs exist, such as the Western Digital VelociRaptor
    • Higher drive capacity: 2 TB available
    • Cost per MB is lower

Generally, you’ll find individual SAS disks to be faster even than SATA ones with similar specifications. In particular, you’re likely to see better seek performance on random I/O due to faster drive mechanics in SAS, and sometimes a faster transfer rate from the disk too. Also, because the SAS drives have supported advanced features such as command queuing for longer, it’s more likely your operating system will have matching support to take advantage of them.

RAID

The Redundant Array of Inexpensive Disks (RAID) approach is the standard way to handle both the performance and reliability limitations of individual disk drives. A RAID array puts many disks, typically of exactly the same configuration, into a set that acts like a single disk—but with enhanced performance, enhanced reliability, or both. In some cases the extra reliability comes from computing what’s called parity information for writes to the array. Parity is a form of checksum on the data, which allows reconstructing it even if some of the information is lost; for example, storing P = A XOR B alongside data blocks A and B lets a lost A be rebuilt as B XOR P. RAID levels that use parity are efficient from a space perspective at writing data in a way that will survive drive failures, but the parity computation overhead can be significant for database applications.

The most common basic forms of RAID arrays used are:

  • RAID 0: Also called striping. Multiple disks are used at the same time, spreading reads and writes over each of them in parallel. The improvement can be almost linear (two disks read roughly twice as fast as a single one), but a failure of any drive in the set loses all the data.
  • RAID 1: Also called mirroring. Copies of the same data are written to multiple disks. This can sometimes improve performance—a good RAID 1 implementation mirroring across two disks might handle two reads by sending one to each drive, and reads executed in parallel against both drives can effectively double the average seeks per second. But generally, the reason for RAID 1 is redundancy: if a single drive fails, the system continues operating using the other one.
  • RAID 10 or 1+0: This first takes pairs of disks and mirrors them using RAID 1. Then, the resulting set is striped using RAID 0. The result provides both high performance and the ability to tolerate any single disk failure, without as many ways for speed to suffer in the average and worst case as RAID 5/6. RAID 10 is particularly appropriate for write-heavy environments, where the parity computation overhead of RAID 5/6 can cause disk performance to suffer. Accordingly, it’s the preferred RAID level for high-performance database systems.
  • RAID 5: Also called striping with parity. This approach sits midway between 0 and 1. You stripe data across multiple drives similarly to RAID 0, which improves read performance, but redundant parity information is also written, distributed across the drives. If one of the disks in the array is lost, the missing data can be recomputed from the remaining ones using that parity information. While this is efficient in terms of how little space is given up relative to the tolerance for disk failures provided, write performance in particular can suffer in RAID 5.
  • RAID 6: Similar to RAID 5, except with more parity information, enabling survival even of two disk failures. It has the same fundamental advantages and disadvantages. RAID 6 is an increasingly common way to cope with the fact that rebuilding a RAID 5 array after a disk loss can take a very long time on modern, high-capacity drives. The array has no additional fault tolerance during that period, and seeing a second drive failure before the rebuild finishes is not that unlikely when it takes many hours of intense disk activity to complete. Disks manufactured in the same batch are surprisingly likely to fail in groups.
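
To make the space trade-offs concrete, consider a hypothetical array built from eight 1 TB drives: RAID 0 exposes the full 8 TB but loses everything if any single drive fails; RAID 10 exposes 4 TB and tolerates any single failure (and, at best, one failure per mirrored pair); RAID 5 exposes 7 TB, giving up one drive’s worth of capacity to parity, and tolerates one failure; and RAID 6 exposes 6 TB, giving up two drives’ worth, and tolerates two.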

To be fair in any disk performance comparison, you need to consider that most systems will get their net performance from several disks, such as in a RAID array. Since SATA disks are individually cheaper, you might be able to purchase considerably more of them for the same budget than if you had picked SAS instead. If you believe your application will get faster when spread over more disks, being able to buy more of them per dollar spent can result in an overall faster system. Note that the upper limit here will often be your server’s physical hardware: you only have so many storage bays or controller ports available, and larger enclosures can cost more both up front and over their lifetime. It’s easy to find situations where a smaller number of faster drives—which SAS provides—is the better way to go. This is why it’s so important to constantly benchmark both your hardware and your database application, to get a feel for how well performance improves as the disk count increases.
