Starting SQL: Compensating For Disk Waits In SQL Server

Repairer Of Reputations


CPU waits are relatively easy to figure out once you wrap your head around the settings that contribute to them, and the queries that cause them. There’s a pretty direct correlation between parallelism, thread usage, and CPU usage.

Compensating for disk waits is a bit of a different game, because there’s more to consider, from a few different angles. But first, let’s make a distinction.

Waits that commonly crop up when you’re waiting on disk (there’s a quick query to check for them right after this list):

  • PAGEIOLATCH_**
  • WRITELOG
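
If you want to see how much of those waits you’ve racked up since the last restart, sys.dm_os_wait_stats is the place to look. This is just a quick sketch, and the numbers are cumulative, so a big total on a server that’s been up for months may not mean much on its own:

    /*
       Cumulative disk-related waits since the last restart. The filter is just
       the two wait types mentioned above; signal wait time is the portion spent
       waiting to get back on a CPU after the I/O finished.
    */
    SELECT
        wait_type,
        waiting_tasks_count,
        wait_time_ms,
        signal_wait_time_ms,
        wait_time_ms * 1.0 / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_type LIKE N'PAGEIOLATCH%'
       OR wait_type = N'WRITELOG'
    ORDER BY wait_time_ms DESC;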

When people hear they’re waiting a lot on disk, their first inclination might be that they need faster disks. For WRITELOG waits, that can definitely be a factor. For PAGEIOLATCH waits, it probably shouldn’t be your first move.

Relatively Speaking


When SQL Server hits PAGEIOLATCH waits, it’s a signal that queries needed to read pages from disk into memory. If you just rebooted, this is inevitable. You start with a totally cold buffer cache.

But if you have enough memory, you’re not likely to see queries consistently waiting on it. Why? Because if data you need is already in memory, that’s where you go get it from. Why go to disk if you don’t have to? It’s icky out there.
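
If you’re curious what’s actually in your buffer pool right now, you can count cached pages per database. This is a rough sketch; sys.dm_os_buffer_descriptors returns one row per 8KB page in memory, so it can take a while to run on servers with a lot of RAM:

    /*
       What's currently cached in the buffer pool, per database.
       Each row in the DMV is one 8KB page, so 128 pages = 1MB.
    */
    SELECT
        DB_NAME(database_id) AS database_name,
        COUNT_BIG(*) * 8 / 1024 AS cached_mb
    FROM sys.dm_os_buffer_descriptors
    GROUP BY database_id
    ORDER BY cached_mb DESC;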

If you really want to compensate for this wait, you’re going to need to think about a few things (there’s a quick way to eyeball both right after this list), like

  • How much memory you have, and how much memory you’re allowed (non-Enterprise versions have limits; Standard Edition, for instance, caps the buffer pool at 128GB)
  • How much data you have, and how many indexes you have
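
One rough way to eyeball both is to compare the max server memory setting against how much data you have on disk. Nothing fancy here, and the numbers are approximate (both come out in MB):

    /*
       How much memory SQL Server is allowed to use, and how much data it has
       to work with. A max server memory of 2147483647 is the default, i.e. uncapped.
    */
    SELECT name, value_in_use
    FROM sys.configurations
    WHERE name = N'max server memory (MB)';

    /* Total data and log file size across all databases; size is in 8KB pages. */
    SELECT
        SUM(CASE WHEN type_desc = 'ROWS' THEN CAST(size AS bigint) END) * 8 / 1024 AS data_file_mb,
        SUM(CASE WHEN type_desc = 'LOG'  THEN CAST(size AS bigint) END) * 8 / 1024 AS log_file_mb
    FROM sys.master_files;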

Let’s say end user queries are consistently waiting on reads from disk. It doesn’t matter much whether the waits are fast or slow; what matters is that the data isn’t in memory. Sure, slow waits hurt more, but the first question is memory.

  • Do you have enough?
  • Can you add more?
  • Would what you have be enough if you had fewer indexes? (Unused/Duplicative; there’s a starting-point check right after this list)
  • Would what you have be enough if you had less data? (Purging/Archiving)
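
For the unused index angle, sys.dm_db_index_usage_stats is a reasonable place to start. Treat this as a sketch: the stats reset on restart and don’t capture everything, so a quiet index here is a suspect, not an automatic drop:

    /*
       Nonclustered indexes in the current database with no reads recorded.
       These still cost memory, disk, and write overhead even when nobody reads them.
    */
    SELECT
        OBJECT_NAME(i.object_id) AS table_name,
        i.name AS index_name,
        ISNULL(ius.user_updates, 0) AS writes_with_no_reads
    FROM sys.indexes AS i
    LEFT JOIN sys.dm_db_index_usage_stats AS ius
        ON  ius.object_id = i.object_id
        AND ius.index_id = i.index_id
        AND ius.database_id = DB_ID()
    WHERE i.type = 2 /* nonclustered */
      AND i.is_primary_key = 0
      AND i.is_unique_constraint = 0
      AND OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
      AND ISNULL(ius.user_seeks, 0) + ISNULL(ius.user_scans, 0) + ISNULL(ius.user_lookups, 0) = 0
    ORDER BY table_name, index_name;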

Judgement Night


Getting memory right is so crucial because of how much it’s responsible for.

Aside from caching all those thoughtfully crafted data pages, queries need it to sort and hash data, and there are all sorts of other lower level caches that rely on it. The plan cache is probably the most obvious.
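
If you want to see where that memory is actually going, the memory clerks DMV breaks it down. A quick sketch, grouping by clerk type; the buffer pool and plan cache clerks are usually near the top:

    /*
       Where memory is going, by clerk. MEMORYCLERK_SQLBUFFERPOOL is the data
       page cache; CACHESTORE_SQLCP and CACHESTORE_OBJCP hold ad hoc/prepared
       plans and stored procedure plans.
    */
    SELECT TOP (10)
        type,
        SUM(pages_kb) / 1024 AS memory_mb
    FROM sys.dm_os_memory_clerks
    GROUP BY type
    ORDER BY SUM(pages_kb) DESC;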

Once you realize that memory is a shared resource, you treat it a whole lot differently. Especially if you know just how much memory some things can take.


Costing


Yeah, memory is cheap. Unless you need so much that your next step is going to Enterprise Edition.

But there’s an intermediate step in the mix that not many people talk about. You can run 2-3 Standard Edition boxes with the data split out across them, and that can end up more cost effective than jumping to Enterprise Edition.

This is a better fit for applications/servers that use multiple databases, of course, but I’ve seen people do it with archival data too.

Of course, there are some disk things that you should fix. Like if you’re on a SAN and using <8Gb networking, or if you’re using a VM and not using PVSCSI disks.

The point though, is that if you have room to add memory, you should do that before fiddling with disks. It just plain goes further, because you’re not just helping queries read data into memory faster. You’re caching more pages overall, and you have more memory available for query memory grants (and other caching activities).
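
If you want a couple of quick numbers to keep an eye on while you weigh memory against disks, these two counters are a decent sanity check. Neither one has a magic threshold, so trend them over time rather than reacting to a single reading:

    /*
       Two counters that hint at memory pressure: page life expectancy (how long
       pages are expected to stay in the buffer pool, tracked per NUMA node) and
       memory grants pending (queries waiting on workspace memory before they can start).
    */
    SELECT
        [object_name],
        counter_name,
        cntr_value
    FROM sys.dm_os_performance_counters
    WHERE counter_name IN (N'Page life expectancy', N'Memory Grants Pending');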

Faster disks also won’t do anything to help the waits we’ll talk about tomorrow, which can for sure be a sign that SQL Server doesn’t have adequate memory.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.