Does My SQL Server Need More Memory?

Work It


Memory is S-Tier crucial for most workloads to run reliably fast. It’s where SQL Server caches data pages, and it’s what it gives to queries to process Sorts and Hashes (among other things, but these are most common).

Without it, those two things, and many other caches, would be forced to live on disk. Horrible, slow disk. Talk about a good way to make databases less popular, eh?

With no offense to the SAN administrators of the world, I consider it my sworn duty to have databases avoid your domain as much as possible.

In this post, we’ll talk about how to figure out if your SQL Server needs more memory, and if there’s anything you can do to make better use of memory at the same time.

After all, you could be doing just fine.

(You’re probably not.)

Tale Of The Wait Stats


You can look at wait stats related to memory and query performance by running sp_PressureDetector.
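It’s a free script, and a bare call gets you started. A minimal sketch, assuming you’ve installed the procedure somewhere you can call it:

EXECUTE dbo.sp_PressureDetector; /* default run; it has optional parameters to narrow things down */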

It’ll give you some details about wait stats that relate to CPU and memory pressure. You wanna pay attention to the memory and disk waits here. I had to squish it a little, but if you’re unfamiliar, you can use the “description” column to figure out which ones matter.

[Screenshot: SQL Server wait stats]

Some important metrics to note here:

  • How do wait times relate to server uptime?
  • How long on average do we wait on each of these?

This won’t tell the whole story, of course, but it is a reasonable data point to start with. If your workload isn’t a 24×7 slog, though, you might need to spend more time analyzing waits for queries as they run.
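If you wanna see the raw numbers without any helper procedures, here’s a rough sketch against the wait stats DMV. The counters are cumulative since startup, so compare them to uptime:

SELECT
    uptime_hours = 
        DATEDIFF(HOUR, osi.sqlserver_start_time, SYSDATETIME()),
    dows.wait_type,
    wait_time_hours = 
        dows.wait_time_ms / (1000. * 60. * 60.),
    avg_wait_ms = 
        dows.wait_time_ms / (1. * NULLIF(dows.waiting_tasks_count, 0))
FROM sys.dm_os_wait_stats AS dows
CROSS JOIN sys.dm_os_sys_info AS osi
WHERE dows.wait_type IN 
(
    N'PAGEIOLATCH_SH', N'PAGEIOLATCH_EX',
    N'RESOURCE_SEMAPHORE', N'RESOURCE_SEMAPHORE_QUERY_COMPILE'
)
ORDER BY dows.wait_time_ms DESC;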

In this example, it’s my local SQL instance, so it hasn’t been doing much work since I restarted it. Sometimes, you gotta look at what currently running queries are waiting on.

For that, go grab sp_WhoIsActive. If you see queries constantly waiting on stuff like this, it might be a sign you need more memory, because you have to keep going out to disk for the data your queries need.
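A call like this pulls in query plans and task-level wait info, both of which are documented parameters:

EXECUTE dbo.sp_WhoIsActive
    @get_plans = 1,     /* grab query plans for running queries */
    @get_task_info = 2; /* show the full breakdown of task and wait info */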

[Screenshot: sp_WhoIsActive wait info]

It could also be a sign of other things, like queries and indexes that need tuning, but when the waits are sustained like this, tuning alone isn’t the likely culprit.

It’s much more likely a memory deficiency, but it’s up to you to investigate further on your system.

How Is SQL Server Using Memory Now?


Current memory utilization can be a good way to find out if other things are using memory and taking valuable space away from your buffer pool.

A lot of folks out there don’t realize how many different things SQL Server has to share memory across.
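If you wanna eyeball those consumers yourself, the memory clerks DMV is where they all report in (this sketch needs SQL Server 2012 or better, where pages_kb is a single column):

SELECT TOP (10)
    domc.type,
    domc.name,
    memory_used_gb = 
        SUM(domc.pages_kb) / (1024. * 1024.)
FROM sys.dm_os_memory_clerks AS domc
GROUP BY 
    domc.type,
    domc.name
ORDER BY SUM(domc.pages_kb) DESC;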

And, hey, yeah, sp_PressureDetector will show you that, too. Here’s a “normal” run:

[Screenshot: SQL Server memory clerks]

SQL Server’s buffer pool is uninfringed upon by other consumers! Great. But sometimes queries ask for memory grants, and that’s where things can get perilous.

You may sometimes see Ye Olde First Memory Bank Of Motherboard loan out a swath to one or more queries:

[Screenshot: SQL Server memory clerks, after a large memory grant]

The difference here? The buffer pool is reduced by ~9GB to accommodate a query memory grant.

sp_PressureDetector will show you the queries doing that, too, along with query plans.

[Screenshot: sp_WhoIsActive output]

It’ll also show you memory available in resource pools for granting out to queries. On this server, Max Server Memory is set to 50GB.

If you’re shocked that SQL Server is willing to give out 37GB of that to query memory grants, you haven’t been hanging around SQL Server long enough.
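You can check those numbers yourself, too. The resource semaphore DMV shows what each pool is targeting, what’s available, and what’s already been granted out:

SELECT
    deqrs.resource_semaphore_id,
    deqrs.pool_id,
    target_memory_gb = 
        deqrs.target_memory_kb / (1024. * 1024.),
    available_memory_gb = 
        deqrs.available_memory_kb / (1024. * 1024.),
    granted_memory_gb = 
        deqrs.granted_memory_kb / (1024. * 1024.),
    deqrs.grantee_count,
    deqrs.waiter_count
FROM sys.dm_exec_query_resource_semaphores AS deqrs;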

And you wonder why I worry about memory!

Does SQL Server Need More Memory?


If your queries are constantly waiting on:

  • Reading pages from disk (PAGEIOLATCH_…)
  • Query memory grants (RESOURCE_SEMAPHORE)
  • Compiling query plans (RESOURCE_SEMAPHORE_QUERY_COMPILE)

Then there’s a pretty good chance that it does, especially if data just plain outpaces memory by a good margin (like 3:1 or 4:1 or more).
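For the data-to-memory ratio, a rough sketch is comparing the size of your data files to max server memory:

SELECT
    data_size_gb = 
        SUM(mf.size * 8.) / (1024. * 1024.)
FROM sys.master_files AS mf
WHERE mf.type_desc = N'ROWS';

SELECT
    max_server_memory_gb = 
        CONVERT(bigint, c.value_in_use) / 1024.
FROM sys.configurations AS c
WHERE c.name = N'max server memory (MB)';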

You have some options for making better use of your current memory, too.

  • Check critical queries for indexing opportunities  (there may not always be a missing index request, but seasoned query tuners can spot ones the optimizer doesn’t)
  • Apply PAGE compression to existing row store indexes to make them smaller on disk and in memory (a sketch follows this list)
  • Check the plan cache for queries asking for large memory grants, but not using all of what’s granted to them
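On the compression option: it’s a one-line change per index. A minimal sketch, with hypothetical table and index names (ONLINE = ON needs Enterprise Edition; drop it otherwise):

/* dbo.Posts and PK_Posts are made-up names; point this at your own indexes */
ALTER INDEX PK_Posts
ON dbo.Posts
REBUILD WITH (DATA_COMPRESSION = PAGE, ONLINE = ON);

Compressed pages stay compressed in the buffer pool, which is where the memory savings come in.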

As for that third item, you can check the plan cache using a query like this. It’ll look for queries that ask for more than 5GB of memory and leave more than 1MB of the grant unused.

WITH 
    unused AS
(
    SELECT TOP (100)
        oldest_plan = 
            MIN(deqs.creation_time) OVER(),
        newest_plan = 
            MAX(deqs.creation_time) OVER(),
        deqs.statement_start_offset,
        deqs.statement_end_offset,
        deqs.plan_handle,
        deqs.execution_count,
        deqs.max_grant_kb,
        deqs.max_used_grant_kb,
        unused_grant = 
            deqs.max_grant_kb - deqs.max_used_grant_kb,
        deqs.min_spills,
        deqs.max_spills
    FROM sys.dm_exec_query_stats AS deqs
    WHERE (deqs.max_grant_kb - deqs.max_used_grant_kb) > 1024.
    AND   deqs.max_grant_kb > 5242880.
    ORDER BY unused_grant DESC
)
SELECT      
    plan_cache_age_hours = 
        DATEDIFF
        (
            HOUR,
            u.oldest_plan,
            u.newest_plan
        ),
    query_text = 
        (
            SELECT [processing-instruction(query)] =
                SUBSTRING
                (
                    dest.text, 
                    ( u.statement_start_offset / 2 ) + 1,
                    (
                        ( 
                            CASE u.statement_end_offset 
                                 WHEN -1 
                                 THEN DATALENGTH(dest.text) 
                                 ELSE u.statement_end_offset 
                            END - u.statement_start_offset 
                        ) / 2 
                    ) + 1
                )
                FOR XML PATH(''), 
                    TYPE
        ),
    deqp.query_plan,
    u.execution_count,
    u.max_grant_kb,
    u.max_used_grant_kb,
    u.min_spills,
    u.max_spills,
    u.unused_grant
FROM unused AS u
OUTER APPLY sys.dm_exec_sql_text(u.plan_handle) AS dest
OUTER APPLY sys.dm_exec_query_plan(u.plan_handle) AS deqp
ORDER BY u.unused_grant DESC
OPTION (RECOMPILE, MAXDOP 1);

This will get you the top (up to!) 100 plans in the cache that have an unused memory grant, ordered by the largest difference between grant and usage.

What you wanna pay attention to here:

  • How old the plan cache is: if it’s not very old, you’re not getting the full picture
  • How big the memory grants are: by default, the max is ~25% of max server memory

Controlling Memory Grants


If you’re looking for ways to control memory grants that don’t involve a bunch of query and index tuning, you have a few options:

  • Resource Governor: Enterprise Edition only, and usually applies to the whole workload
  • MIN_GRANT_PERCENT and MAX_GRANT_PERCENT query hints: You usually wanna use both to set a proper memory grant; setting just an upper limit isn’t always enough (example after this list)
  • Batch Mode Memory Grant Feedback: Requires Batch Mode/Columnstore, only helps queries between executions, usually takes a few tries to get right
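The hints go on per-query, which makes them a good fit for known-bad reports, or vendor code you can wrap in a stored procedure. A hypothetical example, with made-up numbers and a made-up table:

SELECT
    p.OwnerUserId,
    records = COUNT_BIG(*)
FROM dbo.Posts AS p
GROUP BY p.OwnerUserId
ORDER BY COUNT_BIG(*) DESC
OPTION (MIN_GRANT_PERCENT = 5.0, MAX_GRANT_PERCENT = 10.0);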

For Resource Governor, you’ll wanna do some analysis using the query in the previous section to see what a generally safe upper limit for memory grants is. The more memory you have, and the higher your max server memory is, the more insane 25% is.

[Screenshot: SQL Server Resource Governor settings]
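Once you’ve settled on a safe ceiling, the change itself is small. A sketch, assuming your analysis pointed at a 10 percent cap (that number is made up for illustration, don’t just copy it):

/* cap per-query grants for the default workload group; 10 is an assumed value */
ALTER WORKLOAD GROUP [default]
WITH (REQUEST_MAX_MEMORY_GRANT_PERCENT = 10);

ALTER RESOURCE GOVERNOR RECONFIGURE;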

Again, just be cautious here. If you change this setting based on a not-very-old plan cache, you’re not gonna have a clear picture of which queries use memory, and how much they use. If you’re wondering why I’m not telling you to use Query Store for this, it’s because it only logs how much memory queries used, not how much they asked for. It’s pretty ridiculous.

After you make a change like this, or start using those query hints, you’ll wanna do some additional analysis to figure out if queries are spilling to disk. You can change the query above to something like this to look at those:

WITH 
    unused AS
(
    SELECT TOP (100)
        oldest_plan = 
            MIN(deqs.creation_time) OVER(),
        newest_plan = 
            MAX(deqs.creation_time) OVER(),
        deqs.statement_start_offset,
        deqs.statement_end_offset,
        deqs.plan_handle,
        deqs.execution_count,
        deqs.max_grant_kb,
        deqs.max_used_grant_kb,
        unused_grant = 
            deqs.max_grant_kb - deqs.max_used_grant_kb,
        deqs.min_spills,
        deqs.max_spills
    FROM sys.dm_exec_query_stats AS deqs
    WHERE deqs.min_spills > (128. * 1024.) 
    ORDER BY deqs.max_spills DESC
)
SELECT      
    plan_cache_age_hours = 
        DATEDIFF
        (
            HOUR,
            u.oldest_plan,
            u.newest_plan
        ),
    query_text = 
        (
            SELECT [processing-instruction(query)] =
                SUBSTRING
                (
                    dest.text, 
                    ( u.statement_start_offset / 2 ) + 1,
                    (
                        ( 
                            CASE u.statement_end_offset 
                                 WHEN -1 
                                 THEN DATALENGTH(dest.text) 
                                 ELSE u.statement_end_offset 
                            END - u.statement_start_offset 
                        ) / 2 
                    ) + 1
                )
                FOR XML PATH(''), 
                    TYPE
        ),
    deqp.query_plan,
    u.execution_count,
    u.max_grant_kb,
    u.max_used_grant_kb,
    u.min_spills,
    u.max_spills,
    u.unused_grant
FROM unused AS u
OUTER APPLY sys.dm_exec_sql_text(u.plan_handle) AS dest
OUTER APPLY sys.dm_exec_query_plan(u.plan_handle) AS deqp
ORDER BY u.max_spills DESC
OPTION (RECOMPILE, MAXDOP 1);

Small spills aren’t a big deal here, but you’ll definitely wanna pay attention to larger ones. The filter is in 8KB pages, so 128 × 1024 pages works out to queries spilling over 1GB, which is still pretty small, but could be meaningful.

If you notice a lot more queries spilling in a substantial way, you may have capped the high end of query memory grants too low.

Recap


Memory is something that I see people struggle to right-size, forecast, and understand the physics of in SQL Server. The worst part is that hardly anything in this post applies to Standard Edition, which is basically dead to me.

The main things to keep an eye on are:

  • Wait stats overall, and for running queries
  • Large unused memory grants in the plan cache
  • Size of data compared to size of memory

If you need help with this sort of thing, hit the link below to drop me a line about consulting.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

Signs You Need More Memory In Your SQL Server

Guessotron


Most of these signs have to do with wait stats. One could venture to say that if you have way less memory than data, you need more memory, but… if the server is sitting around bored, who cares?

If we’re going to spend money on memory, let’s make sure it’ll get used. These are the top reasons that come up when I’m talking to people with performance problems that memory would solve.

You’re In The Cloud Where Storage Sucks

Okay, okay, storage can totally suck other places, too. I’ve seen some JBOD setups that would make you cry, and some of them weren’t in the cloud. The distinction worth making here is that memory won’t help slow writes directly. Indirectly, it might: if more reads come straight from the buffer pool, you free up storage and network bandwidth for writes.

Look, just avoid disk as much as possible and you’ll be happy.

You’re Using Column Store And/Or Batch Mode

Good column store compression often depends on adequate memory, but you also need to account for the much larger memory grants that batch mode queries ask for. As more and more workloads move to SQL Server 2019 and beyond, query memory needs are going to go up, because Batch Mode On Row Store will become more common.

You’re Waiting On RESOURCE_SEMAPHORE A Lot

This wait shows up when a bunch of queries are contending for memory grants, but SQL Server has given out all it can. If you run into these a lot, it’s a pretty good sign you need more memory. Especially if you’ve already tuned queries and indexes a bunch, or you’re dealing with a vendor app where they refuse to fix anything.

Other things that might help? The MAX_GRANT_PERCENT hint or Resource Governor.

You’re Waiting On RESOURCE_SEMAPHORE_QUERY_COMPILE A Lot

This is another “queue” wait, but it’s for query compilation rather than query execution. Having more memory can certainly help this quite a bit, but so can simplifying queries so that the amount of memory SQL Server has to throw at compiling them chills out a little. You can start by reconsidering those views nested 10 levels deep, and the schema design that leads you to needing a 23-table join to construct one row.

You’re Waiting On PAGEIOLATCH_SH Or PAGEIOLATCH_EX A Lot

These waits show up when data pages your query needs aren’t already in memory. The more you see these, the more latency you’re adding to your workload by constantly shuffling out to disk to get them. Of course, there’s other stuff you can do, like clean up unused and overlapping indexes, compress your indexes, etc. But not everyone is comfortable with or able to do that.

Thanks for reading!


You Probably Don’t Need To Offload SQL Server Reads To Another Server

Moneybags


Duplicating data for reporting, outside of moving it to a data warehouse where there’s some transformations involved, can be an expensive and perilous task.

Your options come down to a native solution like AGs, Replication, or Log Shipping. You can brew something up yourself that relies on native stuff too, like Change Data Capture, Change Tracking, Temporal Tables, or triggers.

Outside of that, you’re getting into the world of SSIS/ADF, other third party vendors, etc. to get data where it ought to go. That’s none of my business, and good luck to you.

As soon as you let people read that data, you have to fully license the SQL Server that it’s sitting on.

Alt Rock


I’ve talked to a few dozen people about this over the years, too. Most were shocked to learn about the licensing implications, thinking that having Software Assurance from Microsoft covered their needs.

The most frequent desire for offloading reads is real-time reporting, and the most frequently cited reason is that reporting queries put too much load on the current server hardware.

You can see where the conflict sits, here. People think they’re getting a free SQL Server to report off of, so they don’t need to up their current hardware and pay for it. Microsoft may be dumb, but it ain’t that dumb.

Once you get people past the licensing issues, tuning the current hardware becomes a more approachable subject.

Gestalt Block


Of course, the idea of tuning the reporting queries has occurred to most people, but the attempts at tuning are usually flailing attempts to solve problems that aren’t there.

  • Maybe this local variable will do a thing for parameter sniffing
  • NOLOCK is better for reports, anyway
  • Don’t forget to recompile, too
  • Add a DISTINCT, there’s too many rows
  • Throw up hands, rebuild indexes

While everything has its place, it’s rare that this combination will get you past all of your performance issues. Reporting queries are especially interesting to me, because of how few people embrace batch mode to any degree for them.

The cost of tuning queries is far cheaper than the cost of licensing a whole other server, especially for Enterprise Edition customers. If you’re on Standard Edition, you have many other problems standing in the way of scaling and performance. Many, many other problems.

If this is the kind of thing you need help with, drop me a line. This is exactly the kind of thing I do.

Thanks for reading!


Why Reporting Tables Aren’t A Great Idea In SQL Server Databases

Headway


I’ve seen a lot of crazy ways for people to run reports on live data, but one of the worst ideas is creating a reporting table based on the set of data a user wants, and then letting them query that data.

Here’s why:

  • You still have to run the query to populate them, which hits the server with the load you’re averse to
  • The buffer pool gets taken up by all these reporting tables, which only one person uses
  • It’s hard to index the reporting tables, and column store isn’t useful on Standard Edition
  • They get stale quickly, and refreshing them means re-running the original query, or a dynamic MERGE statement, because no two tables will have the same columns
  • Generic reporting tables will often use the wrong data types for columns and be very wide

Put all that together, and you’ve got a pretty bad solution for your server and your users.

Money Talks


I’m gonna say that there are better ways to do this, and I’m gonna be right, but they’re all gonna take work and/or money.

The ones that take money are reporting off a copy of the data, which you can do with Availability Groups, Replication, or Log Shipping. You have to fully license the other server that you’re reading from no matter which you choose.

I have no interest in deep-diving into any one of those here, but I’m gonna generalize a little bit to make sure you’re not totally in the dark:

  • Availability Groups are easy to set up, sometimes complicated to manage and patch, and require databases in full recovery model, but give you pretty up-to-date data on the secondary for users to get at.
  • Replication is a lot like the above, except it makes me way more nervous. Snapshot Replication gets bonus points for allowing you to create different indexes on the subscriber, which AGs and Log Shipping don’t let you do. The freshness of the data does depend on which type of Replication you choose, so choose wisely.
  • Log Shipping just ships transaction log backups, which is pretty dead simple unless you need to deal with servers that can’t directly see each other for whatever reason. Simple is nice, but simple means you’re gonna have stale data, since you have to kick users out to restore more transaction logs.

Manifesto


The solutions that require work are usually ones that you implement locally. I’m not going to talk about data warehouses here, because it’s a lot like the stuff up there, just with ETL in the middle. Or ELT. Whatever.

Getting reporting working from live data usually takes a few steps, depending on your situation:

  • Nonclustered column store indexes if you’re on Enterprise Edition (sketched after this list)
  • Indexed views if you’re on Standard Edition (I’ll talk more about this in a minute)
  • Enabling snapshot isolation just for reporting queries to not get blocked (also sketched below)
  • Tuning the hell out of your report code to not beat the server up too bad
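Here’s roughly what the first and third items look like, with hypothetical table, index, and database names:

/* a nonclustered column store index for reporting queries (Enterprise Edition) */
CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_Posts_Reporting
ON dbo.Posts (OwnerUserId, PostTypeId, Score, CreationDate);

/* let readers see committed row versions instead of waiting on writers */
ALTER DATABASE YourDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;

Reporting sessions then opt in with SET TRANSACTION ISOLATION LEVEL SNAPSHOT; before they run anything.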

Indexed views have a lot of rules, and if you’re lucky you can get some mileage out of them to speed up aggregations on Standard Edition. You can try using column store, but it’s really severely hobbled on Standard Edition. The DOP two limit is especially a killer in a lot of cases.

All of this is work though, and it at least partially assumes your server can come anywhere near being able to handle the additional workload. If it’s just limping along now, you probably don’t wanna keep throwing straw at Joe Camel.

Out There


What I left out a little bit are tables that everyone can report from, because every time I’ve seen them implemented, it’s been an absolute mess of triggers or periodic data synchronizations that left users unhappy.

Either because their transactional workload slowed way down to keep things up to date synchronously, or because the background process that started out running every 10 minutes beat the crap out of the server every 10 minutes, so now it runs every hour but there’s another Agent job to kill that job if it runs more than five minutes because that means something bad is about to happen.

This is why most of the time, I’m happy to do the work to report off the live data.

Thanks for reading!


How Do I Estimate How Much Memory My SQL Server Needs?

It’s Brand New


If you’re looking to scope out hardware for SQL Server, whether it’s in the cloud, or… not in the cloud, you’re gonna wanna get memory right.

Lots of people complain about SQL Server being a memory hog, without pausing to reflect on the fact that they’re data hogs.

You don’t like SQL Server taking up all that memory? Drop all those big databases.

Problem solved.

Considerate


There are a number of things you can look at on your current server that tie in to how much memory you’ll need in the future:

  • Wait Stats
  • Size of data
  • Current indexing
  • Data growth rate

Waits

I have wait stats there because the first thing you should look at to see if you have adequate memory is PAGEIOLATCH_… waits. If they’re more than 10% of your server’s uptime, you might be losing a whole bunch of query performance to getting the data you need into memory. Of course, from here you need to monitor when those waits are happening to figure out if they’re affecting critical user queries, or just happening overnight during maintenance.
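Here’s a back-of-the-napkin version of that 10% check. Again, the DMV is cumulative since startup, so the math only means something if the server has been up a while:

SELECT
    pageiolatch_pct_of_uptime = 
        CONVERT
        (
            decimal(18, 2),
            SUM(dows.wait_time_ms) * 100. /
            (DATEDIFF(SECOND, MAX(osi.sqlserver_start_time), SYSDATETIME()) * 1000.)
        )
FROM sys.dm_os_wait_stats AS dows
CROSS JOIN sys.dm_os_sys_info AS osi
WHERE dows.wait_type LIKE N'PAGEIOLATCH%';

Parallel queries rack up waits on many threads at once, so this can go north of 100%. That’s not a bug, that’s a cry for help.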

Sizes

The size of your data is going to depend on how big your tables are, and how many indexes you have on them. If you have a lot of unused or duplicative indexes, you’re going to have a much bigger database than is necessary. That’s going to inflate the amount of memory you currently “need”, because all of those cruft indexes are going to be competing for space in the buffer pool.

Indexes

An easy first step to making the best possible use of the RAM you currently have is to make sure you have only the most beneficial indexes in place. Once you’ve got that done, you can go even further by looking at compression. Compressed indexes squish things down on disk and in the buffer pool.
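Here’s a rough sketch for finding indexes that get written to but never read, using the index usage DMV. The counters reset on restart, so don’t let a freshly rebooted server slander perfectly good indexes:

SELECT
    table_name = OBJECT_NAME(ddius.object_id),
    index_name = i.name,
    reads = 
        ddius.user_seeks + ddius.user_scans + ddius.user_lookups,
    writes = ddius.user_updates
FROM sys.dm_db_index_usage_stats AS ddius
JOIN sys.indexes AS i
  ON  i.object_id = ddius.object_id
  AND i.index_id = ddius.index_id
WHERE ddius.database_id = DB_ID()
AND   i.index_id > 1 /* skip heaps and clustered indexes */
AND   ddius.user_seeks + ddius.user_scans + ddius.user_lookups = 0
ORDER BY ddius.user_updates DESC;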

Growths

Some databases have predictable growth patterns based on regular usage. It’s up to you to trend that stuff for the most part.

There’s a disk usage report under each database’s standard reports that you can get to with a right click, but it only shows growth events still in the default trace file. They could be aged out by the time you go looking for them.

Another idea would be to look at backup sizes over the past year or so to see how they’ve grown.
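msdb keeps backup history around until someone cleans it out, so a sketch like this can show the trend:

SELECT
    bs.database_name,
    backup_month = 
        DATEFROMPARTS(YEAR(bs.backup_start_date), MONTH(bs.backup_start_date), 1),
    avg_backup_gb = 
        AVG(bs.backup_size) / (1024. * 1024. * 1024.)
FROM msdb.dbo.backupset AS bs
WHERE bs.type = 'D' /* full backups only */
AND   bs.backup_start_date >= DATEADD(YEAR, -1, SYSDATETIME())
GROUP BY
    bs.database_name,
    DATEFROMPARTS(YEAR(bs.backup_start_date), MONTH(bs.backup_start_date), 1)
ORDER BY
    bs.database_name,
    backup_month;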

Other databases are unpredictable based on acquisitions, large customer onboarding, big sales, sudden success, etc.

Of course, if you haven’t tended to indexes or compression like I listed above, these trends may not hold up well compared to after doing them. This is something I’d do last, after taking care of the index stuff.

All This And More


An additional consideration that adds quite a bit of complication is query memory grants.

The difficulty here is twofold, because you may have query and index tuning to do to reduce the memory grants queries ask for, and when memory pressure arises on a server the plan cache (where most people wanna go look for queries asking for large memory grants) will be getting cleared out constantly. That can make tracking them hard.

I know, you’re thinking that Query Store is awesome for this, but it’s not. It only tracks memory that queries used, not what they were granted. A query can ask for a huge memory grant, only use a small fraction of it, and there’s no sign of that in Query Store. Brick and mortar, indeed.

Most people aren’t ready to truly speculate on hardware needs when I talk to them, but they can get there. If you want a basic guess? Try to get enough memory to cache 50-75% of your server’s data in memory. That should get you enough to cache what you need, deal with memory growth, and deal with memory grants.

And you know, you’ll probably wanna pad that a little bit if you’re going to the cloud, where storage gear and networking is basically electric porridge.

Thanks for reading!


How Many Threads Can One Query Use In SQL Server?

Overlook


Something that many people overlook about parallel plans is that while they are limited to DOP active CPUs, they can suck up way more threads.

How many more depends on how many branches can be active at the same time.

Plus the coordinator thread.
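The upper bound works out to (branches that can be active at once × DOP) + 1 for the coordinator. A DOP 8 plan with four concurrently active branches can reserve 33 threads, for example. If you wanna watch threads in use on a live server, tasks are countable per session:

SELECT
    dot.session_id,
    tasks = COUNT_BIG(*)
FROM sys.dm_os_tasks AS dot
WHERE dot.session_id IS NOT NULL
GROUP BY dot.session_id
ORDER BY tasks DESC;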

Big Query


In order to test these limits out, we need a rather large query.

Since I’m lazy, I’m using a bit of dynamic SQL to make the query I want. While figuring out how many joins I could gin up before things went amok, I learned some fun things.

For example, if I used Hash Joins, I’d get an error that the Query Processor ran out of stack space, and if I used Nested Loops joins I wouldn’t get the requisite parallel exchanges necessary to have multiple parallel zones.

And if I don’t use a force order hint, I’ll end up spending a really long time waiting for a query plan to compile. It wasn’t a good time.
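The generator itself isn’t glamorous. This isn’t the exact script, just a rough sketch of the shape of it, assuming a one-column dbo.Numbers table:

DECLARE
    @joins integer = 316,
    @i integer = 1,
    @sql nvarchar(max) = N'SELECT COUNT_BIG(*) FROM dbo.Numbers AS n0';

WHILE @i <= @joins
BEGIN
    SELECT
        @sql += N'
        JOIN dbo.Numbers AS n' + CONVERT(nvarchar(11), @i) +
            N' ON n' + CONVERT(nvarchar(11), @i) + N'.n = n' +
            CONVERT(nvarchar(11), @i - 1) + N'.n',
        @i += 1;
END;

SELECT @sql += N'
OPTION (FORCE ORDER, MAXDOP 2);';

EXECUTE sys.sp_executesql @sql;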

There’s also a tipping point with the number of joins, where if I go over a certain number, my query’s DOP gets downgraded to one.


Outcoming


After finding my sweet spot at 316 joins, I still had to fiddle with DOP a little.

On my home server, I have 706 worker threads available.

With my 316 join query, I was able to reserve 634 worker threads.


But that’s where I topped out. After that, the query would get downgraded to DOP 1 and only ask for 1 thread.

Qurious


I’m not sure if there’s some built-in cap on how many threads a query can use, or if the limit is global before downgrades start happening.

What I found more interesting was that even though the query reserves 634 workers, those workers weren’t immediately subtracted from available workers on the server.

Technically, reserved threads don’t exist yet. They haven’t been created or attached to a task. All the reservation does is prevent other queries from reserving threads beyond the configured limit.

For example, if I run two copies of the big query at the same time, one is downgraded to DOP 1, likely because it’s hinted to DOP 2 and that’s the next lowest DOP, and it can’t run at DOP 2 and reserve 634 more threads that the first query has already reserved.

Thanks for reading!


SQL Server Queries Go Faster When They Don’t Touch Disk

A Rears



GitHub scripts

Thanks for watching!


Tuning I/O Is Often About Tuning Indexes In SQL Server

One Metric Ton Of Indexes


Let’s say you hate your storage. Let’s say you hate it so much that you want your SQL Server to touch it as little as possible.

You’re most of the people I talk to. Congratulations.

But how do you do that?

Let’s talk about a few things.

How SQL Server Works With Data


It doesn’t matter if a query wants to read or modify data, all those itty-bitty little data pages need to end up in memory.

How much ends up in memory depends on how big your tables are, and how helpful your indexes are.

Likewise, the more indexes you need to modify, the more need to be in memory for that to happen.

You need to design indexes so that you can support your queries by making it easy for them to locate data. That’s your where clause, and guess what?

Your modification queries have where clauses, too.
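As a quick hypothetical, this update’s WHERE clause needs index support just like any SELECT would:

/* made-up table and column names */
CREATE INDEX ix_Posts_LastActivityDate
ON dbo.Posts (LastActivityDate);

UPDATE p
SET p.Score += 1
FROM dbo.Posts AS p
WHERE p.LastActivityDate >= '20240101';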

How You Can Make Indexing Better


Make sure you’re reviewing your indexes regularly. Things that you need to keep an eye on:

  • Duplicative indexes
  • Under-utilized indexes

Even when indexes are defined on the same columns, they’re separate sets of pages within your data files.

  • If you have indexes that are on very similar sets of columns, or supersets/subsets of columns, it’s probably time to start merging them (a sketch follows this list)
  • If you have indexes that just aren’t being read, or aren’t being read anywhere near as much as they’re written to, you should think about ditching them
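For the merging case, here’s a hypothetical pair:

/* suppose both of these exist:
   CREATE INDEX ix_Users_DisplayName ON dbo.Users (DisplayName);
   CREATE INDEX ix_Users_DisplayName_CreationDate ON dbo.Users (DisplayName, CreationDate);
   the wider one can answer every query the narrower one can, so: */
DROP INDEX ix_Users_DisplayName ON dbo.Users;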

Cleaning up indexes like this gives you more breathing room to add in other indexes later.

It also gives you far fewer objects competing for space in memory.

That means the ones you have left stand a better chance of staying there, and your queries not having to go to disk for them.

How You Can Make Indexes Better


There are all sorts of things you can do to make indexes better, too. I don’t mean rebuilding them, either!

I mean getting smarter about what you’re indexing.

Things like filtered indexes and index compression can net you big wins when it comes to reducing the overall size of indexes.

My friend Andy Mallon has some Great Posts™ about compression over on his blog.

And of course, computed columns can help if you’ve got a wonky schema.
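Here’s a hypothetical sketch of the filtered index and computed column ideas:

/* only index the rows reports actually ask for */
CREATE INDEX ix_Posts_Unanswered
ON dbo.Posts (CreationDate, Score)
WHERE AcceptedAnswerId = 0;

/* index an expression instead of burying it in every query */
ALTER TABLE dbo.Posts
ADD TitleLength AS LEN(Title);

CREATE INDEX ix_Posts_TitleLength
ON dbo.Posts (TitleLength);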

Smaller indexes that take up less space in memory make more efficient use of the space you have, which means you can fit more in there.

How You Can Make Tables Better


There are some obvious bits here, like being extra careful with choosing string length.

LOB data can lead to weird locking, and mess with memory grants.

And of course, overly-wide, non-normalized tables can also lead to issues.

If you’re running an OLTP workload, you may also want to make sure that your critical tables aren’t heaps.

Those things tend to take up more space in memory than they need to.

And of course, if you need any help fixing these types of issues, drop me a line!

Thanks for reading!


The Great Memory Heist: How Queries Steal Memory From SQL Server’s Buffer Pool

Best Buy



Thanks for watching!


If You’re Using Under 20% CPU, You’re Wasting Money On SQL Server Licensing

Sensational


CPUs aren’t normally expensive, but ask them to run a query and the cost skyrockets. It doesn’t matter where you are in the world or in the cloud, prepare to get gouged.

You’d think SQL Server was an Hermes purse.

Life is a bit worse in the cloud, where CPU count and RAM amount are inextricably tied together. Sure, Azure offers constrained vCPU instances that help with that, but still.

Money is expensive, and it never goes on sale.

Slacker


If your CPU load stays consistently under 20%, and you don’t need a bunch of cushion for regularly scheduled tasks like ETL or ELT or LET me write a sentence before changing the acronym, then what’s the point of all those CPUs?

I’m not saying they have to run at 100% and you should be using them to air fry your Hot Pockets, but what’s wrong with running at 40-60%? That still leaves you a good bit of free ticks and cycles in case a parameter gets poorly sniffed or some other developer disaster befalls your server.

When a workload is pretty well-tuned, you should focus on right-sizing hardware rather than staring at your now-oversized hardware.

Bragging Rights


It’s often quite an achievement to say that you tuned this and that and got CPU down from 80% to 20%, but now what?

Can you give back some cores to the VM host? Consolidate workloads? Move to a smaller instance size?

Fashion models are great, but they’re not great role models for servers. CPUs should not be sitting around being expensive and bored.

Thanks for reading!
