One Metric Ton Of Indexes
Let’s say you hate your storage. Let’s say you hate it so much that you want your SQL Server to touch it as little as possible.
You’re most of the people I talk to. Congratulations.
But how do you do that?
Let’s talk about a few things.
How SQL Server Works With Data
It doesn’t matter if a query wants to read or modify data, all those itty-bitty little data pages need to end up in memory.
How much ends up in memory depends on how big your tables are, and how helpful your indexes are.
Likewise, the more indexes a modification needs to touch, the more of their pages need to be in memory for that to happen.
You need to design indexes so that you can support your queries by making it easy for them to locate data. That’s your where clause, and guess what?
Your modification queries have where clauses, too.
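As a sketch of what that looks like (table and index names here are made up for illustration), the same index can help both a read and a write locate rows:

```sql
/* Hypothetical table: an index keyed on the column both queries filter on. */
CREATE NONCLUSTERED INDEX ix_orders_customer_id
    ON dbo.Orders (customer_id)
    INCLUDE (order_date, total_due);

/* The read seeks into the index instead of scanning the table... */
SELECT order_date, total_due
FROM dbo.Orders
WHERE customer_id = 42;

/* ...and the modification uses the same index to find the rows it changes. */
UPDATE dbo.Orders
SET total_due = total_due * 0.9
WHERE customer_id = 42;
```

Without an index on `customer_id`, both of those queries would have to drag the whole table into memory just to find a handful of rows.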
How You Can Make Indexing Better
Make sure you’re reviewing your indexes regularly. Things that you need to keep an eye on:
- Duplicative indexes: even when indexes are defined on the same columns, they’re separate sets of pages within your data files. If you have indexes on very similar sets of columns, or supersets/subsets of columns, it’s probably time to start merging them.
- Under-utilized indexes: if you have indexes that just aren’t being read, or aren’t being read anywhere near as much as they’re written to, you should think about ditching them.
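One way to spot the write-heavy, read-light ones is the index usage DMV. A sketch (keep in mind these counters reset when SQL Server restarts, so judge them over a decent stretch of uptime):

```sql
/* Indexes that get written to far more than they get read from. */
SELECT
    OBJECT_NAME(s.object_id) AS table_name,
    i.name                   AS index_name,
    s.user_seeks + s.user_scans + s.user_lookups AS total_reads,
    s.user_updates                               AS total_writes
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
    ON  i.object_id = s.object_id
    AND i.index_id  = s.index_id
WHERE s.database_id = DB_ID()
  AND i.index_id > 1 /* skip heaps and clustered indexes */
ORDER BY s.user_updates - (s.user_seeks + s.user_scans + s.user_lookups) DESC;
```

Anything near the top of that list is costing you writes and memory without paying you back in reads.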
Cleaning up indexes like this gives you more breathing room to add in other indexes later.
It also gives you far fewer objects competing for space in memory.
That means the ones you have left stand a better chance of staying there, and your queries won’t have to go to disk for them.
How You Can Make Indexes Better
There are all sorts of things you can do to make indexes better, too. I don’t mean rebuilding them, either!
I mean getting smarter about what you’re indexing.
Things like filtered indexes and index compression can net you big wins when it comes to reducing the overall size of indexes.
My friend Andy Mallon has some Great Posts™ about compression over on his blog:
- Data Compression: How row compression works
- Data Compression: How page compression works
- Data Compression: Costs & benefits
And of course, computed columns can help if you’ve got a wonky schema.
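Here’s a sketch of all three ideas together, with made-up table and column names. A filtered index only stores rows matching its predicate, compression shrinks what’s left, and a computed column lets you index an expression you filter on a lot:

```sql
/* A filtered, page-compressed index: only 'open' orders, stored small. */
CREATE NONCLUSTERED INDEX ix_orders_open
    ON dbo.Orders (customer_id)
    WHERE status = 'open'
    WITH (DATA_COMPRESSION = PAGE);

/* A computed column for a common predicate, so it can be indexed
   instead of wrapping order_date in a function at query time. */
ALTER TABLE dbo.Orders
    ADD order_year AS YEAR(order_date);

CREATE NONCLUSTERED INDEX ix_orders_year
    ON dbo.Orders (order_year);
```

The filtered index is a fraction of the size of one covering every order ever, and the computed column index saves queries from scanning just because they filter on an expression.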
Smaller indexes that take up less space in memory make more efficient use of the space you have, which means you can fit more in there.
How You Can Make Tables Better
There are some obvious bits here, like being extra careful when choosing string lengths.
And of course, overly-wide, non-normalized tables can also lead to issues.
If you’re running an OLTP workload, you may also want to make sure that your critical tables aren’t heaps.
Those things tend to take up more space in memory than they need to.
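If you find a critical heap, the usual fix is to give it a clustered index. A sketch, with hypothetical names:

```sql
/* Turn a heap into a clustered table by adding a clustered primary key.
   This rebuilds the table in key order, and forwarded records go away. */
ALTER TABLE dbo.OrderStaging
    ADD CONSTRAINT pk_orderstaging
    PRIMARY KEY CLUSTERED (order_id);
```

That rebuild takes a maintenance window on a big table, but afterwards you’re not paying for forwarded-record lookups on every read.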
And of course, if you need any help fixing these types of issues, drop me a line!
Thanks for reading!