Query Store Improvements I’d Like To See For SQL Server: Choose Queries To Ignore

Spirited Away


This week’s posts are a series of short, and admittedly lazy, ones about improvements I’d like to see in Query Store.

I’m away from home for the Holiday Week, and I don’t have a lot of time to write at the moment. Normally I’d put these all into one post. I do hope you, dear reader, will forgive me. In the spirit of the holidays, or something.

Problem Four


A lot of dumb things end up in Query Store that you can’t do anything about, but which skew the graphs you have to look at and make things hard to decipher.

For example:

  • Query store queries from the GUI
  • Index rebuilds
  • Statistics updates
  • Index creation
  • Some dumb thing that runs at 3am that no one cares about

But there they are, acting like the biggest, baddest thing to ever touch SQL Server.

Except you know they’re not. And they have no business showing up when you know better. For as many options to tweak things as are getting added to Query Store, being able to quarantine some stuff would be a nice option.
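
In the meantime, the best you can do is filter the noise out yourself when you query the Query Store views directly. Here’s a rough sketch of the kind of thing I mean (the LIKE patterns are just examples, and the GUI’s own queries are harder to pin down than this):

SELECT
    qsq.query_id,
    qsqt.query_sql_text
FROM sys.query_store_query AS qsq
JOIN sys.query_store_query_text AS qsqt
    ON qsq.query_text_id = qsqt.query_text_id
WHERE qsq.is_internal_query = 0 /* skips some system activity, but not everything */
AND   qsqt.query_sql_text NOT LIKE N'ALTER INDEX%'
AND   qsqt.query_sql_text NOT LIKE N'UPDATE STATISTICS%'
AND   qsqt.query_sql_text NOT LIKE N'CREATE%INDEX%';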

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

Query Store Improvements I’d Like To See For SQL Server: Forced Plan Association

Spirited Away


This week’s posts are a series of short, and admittedly lazy, ones about improvements I’d like to see in Query Store.

I’m away from home for the Holiday Week, and I don’t have a lot of time to write at the moment. Normally I’d put these all into one post. I do hope you, dear reader, will forgive me. In the spirit of the holidays, or something.

Problem Three


Not every application is designed well. This may be shocking news to some of you.

One area where I consistently see application developers go wrong is with parameterization. Either their ORM doesn’t use parameters at all, or it only partially parameterizes queries.

When this happens, you can end up with a lot of duplicate plans both in your plan cache and in Query Store.

For queries that have no parameterization, the Forced Parameterization database setting can be very useful, but it doesn’t help with queries that are only partially parameterized.

It would be absolutely stellar if Query Store had a mechanism for marking queries with the same query or plan hashes as “morally equivalent,” even if they’re not parameterized, and for forcing them all to use the same query plan when you have a good one.

This would solve many application problems that I see out there, where the developers refuse to lift a finger to solve basic problems that end users end up feeling.
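
For reference, here’s a minimal sketch of what exists today: forced parameterization at the database level, which helps fully unparameterized queries, and plan forcing that’s still tied to a single query_id and plan_id pair (the IDs below are made up):

/* helps queries with no parameterization at all */
ALTER DATABASE CURRENT SET PARAMETERIZATION FORCED;

/* plan forcing today is one query_id, one plan_id at a time,
   so partially parameterized lookalikes each need their own */
EXEC sys.sp_query_store_force_plan
    @query_id = 138, /* hypothetical */
    @plan_id = 42;   /* hypothetical */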

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

Query Store Improvements I’d Like To See For SQL Server: Memory Grant Columns

Spirited Away


This week’s posts are a series of short, and admittedly lazy, ones about improvements I’d like to see in Query Store.

I’m away from home for the Holiday Week, and I don’t have a lot of time to write at the moment. Normally I’d put these all into one post. I do hope you, dear reader, will forgive me. In the spirit of the holidays, or something.

Problem Two


When you run a query, you might get a memory grant, for example to hash or sort data. In the query plan, there are a whole bunch of columns that give you information about the memory grant:

  • SerialRequiredMemory         
  • SerialDesiredMemory          
  • RequiredMemory               
  • DesiredMemory                
  • RequestedMemory              
  • GrantWaitTime                
  • GrantedMemory                
  • MaxUsedMemory                
  • MaxQueryMemory               
  • LastRequestedMemory          
  • IsMemoryGrantFeedbackAdjusted

And if you read the query plan documentation, you’ll get some explanation of them, too.

Provide memory grant estimate as well as actual runtime memory grant information. Serial required/desired memory attributes are estimated during query compile time for serial execution. The rest of attributes provide estimates and counters for query execution time considering actual degree of parallelism.

  • SerialRequiredMemory: Required memory in KB if the query runs in serial mode. The query will not start without this memory.
  • SerialDesiredMemory: Memory estimated to fit intermediate results in KB if the query runs in serial mode.
  • RequiredMemory: Required memory in KB for the chosen degree of parallelism. If the query runs in serial mode, this is the same as SerialRequiredMemory.
  • DesiredMemory: Memory estimated to fit intermediate results in KB for the chosen degree of parallelism. If the query runs in serial mode, this is the same as SerialDesiredMemory.
  • RequestedMemory: Memory in KB which the query requests the memory manager to grant. This can be smaller than sum of RequiredMemory and DesiredMemory if it exceeds the maximum allowed for single query.
  • GrantWaitTime: Time in seconds if the query has to wait for successful memory grant.
  • MaxUsedMemory: Maximum memory in KB used by the query.
  • MaxQueryMemory: Maximum memory in KB allowed for single query.
  • LastRequestedMemory: Memory in KB which was requested by the query from the memory manager during the last execution.
  • IsMemoryGrantFeedbackAdjusted: Information regarding if the memory grant in this plan is adjusted based on memory grant feedback.

In Query Store, you get limited memory grant information in query_store_runtime_stats. But if you look in dm_exec_query_stats, there’s a whole lot more.

[Screenshot: the memory grant columns in sys.dm_exec_query_stats (“grim”)]

It’s a much more complete view of the memory grant, and it’s a damn shame that you have to rely on the plan cache for this, because it’s so unreliable. Clearing out, dropping plans, recompiles, etc. etc. etc.

It would be marvelous if Query Store retained information like the size of the grant, so you can figure out if the amount of memory that it used was anywhere near reality. The ideal columns are not very useful to me, but the total and used are wonderful to compare.
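
To make the comparison concrete, here’s a sketch of what each side gives you. The column names are real; just remember that Query Store reports memory in 8KB pages rather than KB:

/* the plan cache side: granted, used, and ideal memory per plan */
SELECT TOP (10)
    deqs.last_grant_kb,
    deqs.max_grant_kb,
    deqs.last_used_grant_kb,
    deqs.max_used_grant_kb,
    deqs.last_ideal_grant_kb
FROM sys.dm_exec_query_stats AS deqs
ORDER BY deqs.max_grant_kb DESC;

/* the Query Store side: only the used portion survives */
SELECT TOP (10)
    qsrs.plan_id,
    avg_used_memory_kb = qsrs.avg_query_max_used_memory * 8.,
    max_used_memory_kb = qsrs.max_query_max_used_memory * 8.
FROM sys.query_store_runtime_stats AS qsrs
ORDER BY qsrs.max_query_max_used_memory DESC;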

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

Query Store Improvements I’d Like To See For SQL Server: Recent Actual Plans

Spirited Away


This week’s posts are a series of short, and admittedly lazy, ones about improvements I’d like to see in Query Store.

I’m away from home for the Holiday Week, and I don’t have a lot of time to write at the moment. Normally I’d put these all into one post. I do hope you, dear reader, will forgive me. In the spirit of the holidays, or something.

Problem One


Query Store plans are equivalent to cached plans, which are equivalent to estimated plans. You get what SQL Server thought was gonna happen.

Sometimes, that can have very little in common with reality.

Other times, it can be misleading because of parameter sniffing.

Right now, SQL Server has a plan cache view in SQL Server 2019+ (sys.dm_exec_query_plan_stats) that holds the last known actual plan. It would be really cool if Query Store tracked a configurable number of recent actual plans.

That’d really help folks out where the performance problem isn’t easy to deduce or reproduce from what they’ve got available.

Actual plans are incredibly valuable these days, with operator times included, and a whole bunch of other stuff that isn’t surfaced in estimated plans, like spills.
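
If you haven’t played with the 2019 bits yet, here’s a minimal sketch of pulling last known actual plans from the plan cache side. The database scoped configuration is off by default, and you only ever get one last actual plan per cached plan, which is exactly the limitation I’d like Query Store to improve on:

/* SQL Server 2019+ */
ALTER DATABASE SCOPED CONFIGURATION SET LAST_QUERY_PLAN_STATS = ON;

SELECT TOP (10)
    deqs.execution_count,
    last_known_actual_plan = deqps.query_plan
FROM sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_query_plan_stats(deqs.plan_handle) AS deqps
ORDER BY deqs.total_worker_time DESC;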

In real life, I see all sorts of weird things popping up as the bottleneck in query plans:

  • TOP above a SCAN
  • Repartition Streams
  • Compute Scalar

You only see the pain when operator times are involved, and Query Store needs to start collecting those to make query tuning easier.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

SQL Server Features I Still Need To Talk People Into Using

Booze Schmooze


As a counterpart to yesterday’s post, I have a list of Great Ideas™ that sometimes it’s hard to get people on board with, for some reason.

Don’t get me wrong, some people can’t jump on this stuff fast enough — I’ve had people take “unscheduled maintenance” during engagements to flip the right switches — but other times there’s a hem and a haw and a whole lot of ’em to go along with the plan.

Somehow people who have auto close and Priority Boost turned on and haven’t checked for corruption since 30 Rock went off the air want a full-bore fisking of every change and every assurance that no edge case exists that could ever cross their path.

Okay pal. You go on with your bad 2012 RTM self.

Lock Pages In Memory


“Please don’t pop my balloon animals.”

To say that this setting only lets SQL Server hang onto memory is a grand injustice. It also lets SQL Server use completely different APIs to access memory through Windows, including skipping over virtual memory space. That can be an awesome benefit for servers with gobs of memory.

What are people worried about, here? Usually some article they read about balloon drivers in 2011, or something.

But the same people aren’t afraid to set min server memory to max server memory, and then wonder why they have no plan cache.

I love this setting though, and if you can also get away with turning on trace flag 834, there are some nice additional benefits.
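
If you want to check whether it’s actually in effect, a quick look at sys.dm_os_sys_info (SQL Server 2016 SP1 and up) tells you which memory model you’re running under:

/* CONVENTIONAL = no LPIM, LOCK_PAGES = LPIM,
   LARGE_PAGES = LPIM plus trace flag 834 */
SELECT
    dosi.sql_memory_model_desc
FROM sys.dm_os_sys_info AS dosi;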

DBCC CHECKDB


“Well, our index maintenance job runs for 9 hours, so we don’t have time for this. Besides, won’t it cause a lot of blocking?”

Lord have mercy, the foot dragging on this. Part of your job as a DBA is to keep data safe and backed up. Running CHECKDB is part of that.

No DBA got fired over fragmented indexes. More than a few have for data going corrupt.

Granted, this can get a little more complicated for really big databases. Some people break it up into different steps, and other people offload the process.
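
As a rough sketch of the split-it-up approach (the database name and schedule here are made up, so adjust to taste):

/* cheaper, more frequent pass: page-level checks only */
DBCC CHECKDB (N'YourDatabase') WITH PHYSICAL_ONLY, NO_INFOMSGS;

/* full logical checks when you have a bigger window */
DBCC CHECKDB (N'YourDatabase') WITH NO_INFOMSGS;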

Some third party backup tools from vendors like Quest and Red Gate allow you to automate processes like that, too. Full backup, restore to new server, run CHECKDB, tell us what happened.

How nice, you get a tested restore, too.

Query Store


“Won’t this catch my server on fire and leak PII to hackers?”

If you’re too cheap to spring for a proper monitoring tool, Query Store makes a pretty okay pseudo replacement. Especially in 2017 and up where you can track wait stats too, you can get some pretty good insights out of it.

Microsoft has also gotten pretty smart about better default settings for this thing, and in 2019 you have more knobs to set smarter standards for which plans get tracked in there.
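
If you’re curious what those knobs look like, here’s a sketch of turning Query Store on with a custom capture policy in 2019. The thresholds are just starting points, not gospel:

ALTER DATABASE [YourDatabase]
SET QUERY_STORE = ON
    (
        OPERATION_MODE = READ_WRITE,
        MAX_STORAGE_SIZE_MB = 1024,
        QUERY_CAPTURE_MODE = CUSTOM, /* SQL Server 2019+ */
        QUERY_CAPTURE_POLICY =
        (
            STALE_CAPTURE_POLICY_THRESHOLD = 24 HOURS,
            EXECUTION_COUNT = 30,
            TOTAL_COMPILE_CPU_TIME_MS = 1000,
            TOTAL_EXECUTION_CPU_TIME_MS = 100
        )
    );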

It’d be really nice if you could choose to ignore queries, too, but you know. Can’t always get what you want, there.

I’d much rather look at Query Store than that unreliable old plan cache, too.

Read Committed Snapshot Isolation


“Why do I want tempdb to be full of old data?”

Remember yesterday? Me either. Nothing good happened, anyway. Do you remember what that row looked like before the update started? No?

Read Committed Snapshot Isolation does. And it wants you to, too. This setting solves so many dumb problems that people run headlong into because Microsoft committed to a garbage isolation level.

One complaint I hear all the time is that a particular application runs a lot better on Oracle with no other changes. This setting is usually the reason why: Oracle behaves that way by default, and on SQL Server it’s not turned on.

Once you turn it on, reader/writer blocking and deadlocking go away, and you don’t need to add an SSMS keyboard shortcut that inserts WITH NOLOCK.
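
Turning it on is a one-liner. The WITH ROLLBACK IMMEDIATE bit kicks other sessions out so the change can take effect, so pick your moment:

ALTER DATABASE [YourDatabase]
    SET READ_COMMITTED_SNAPSHOT ON
    WITH ROLLBACK IMMEDIATE;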

Changing Indexes


“They’re fine the way they are, trust me. That burning smell is another server.”

Index tuning needs to be a process that starts with cleaning up indexes, and ends with adding in better ones.

What makes an index bad? When it’s unused, and/or duplicative.

What makes an index good? When it gets read from more than it’s written to, and it’s a usefully different way for queries to access data.

There are other index anti-patterns that are good to look for too, like lots of single key column indexes, but they usually get cleaned up when you start merging duplicates.
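
Here’s a rough sketch of the read-versus-write check, using sys.dm_db_index_usage_stats. Those counters reset on restart, so treat the output as a hint, not a verdict:

SELECT
    table_name = OBJECT_NAME(i.object_id),
    index_name = i.name,
    reads = ISNULL(ius.user_seeks + ius.user_scans + ius.user_lookups, 0),
    writes = ISNULL(ius.user_updates, 0)
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS ius
    ON  ius.object_id = i.object_id
    AND ius.index_id = i.index_id
    AND ius.database_id = DB_ID()
WHERE i.type = 2 /* nonclustered */
AND   OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
ORDER BY writes DESC;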

There’s a near fully eclipsed Venn Diagram of people who are worried about having too many indexes and people who have never dropped an index in their career.

Talk, Talk


These are the kinds of changes and processes people should be comfortable with making when they work with SQL Server.

Sure, there are a ton of others, but some of them have become part of the installer and get a little more leeway — parallelism settings, instant file initialization, tempdb etc. — I only wish that more of this stuff would follow suit.

One wonders quite loudly why setting MAXDOP made it into the installer, but setting Cost Threshold For Parallelism did not.
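
For the record, it’s a two-minute change. 50 is a common starting point, not a magic number:

EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sys.sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;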

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

Finding Queries With Multiple Plans In Query Store

Cached Out


There are lots of examples of how to do this with the plan cache, but, well, the plan cache can be an awfully unstable place.

Query Store being a bit more historically reliable, we can use some of the same tricks to track them down there, too.

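/* query hashes with more than one distinct plan, plus their queries and plans */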
SELECT
    x.total_query_plans,
    qsq.query_hash,
    qsp.query_plan_hash,
    query_plan =
        TRY_CONVERT
        (
            xml,
            qsp.query_plan
        )
FROM 
(
    SELECT
        qsq.query_hash,
        distinct_query_plans = 
            COUNT_BIG(DISTINCT qsp.query_plan_hash),
        total_query_plans = 
            COUNT_BIG(qsp.query_plan_hash)
    FROM sys.query_store_query AS qsq
    JOIN sys.query_store_plan AS qsp
        ON qsq.query_id = qsp.query_id
    GROUP BY 
        qsq.query_hash
    HAVING  COUNT_BIG(DISTINCT qsp.query_plan_hash) > 1
    AND     COUNT_BIG(DISTINCT qsp.query_plan_hash)
                <= COUNT_BIG(qsp.query_plan_hash)
) AS x
CROSS APPLY
(
    SELECT TOP (x.total_query_plans)
        qsq.*
    FROM sys.query_store_query AS qsq
    WHERE x.query_hash = qsq.query_hash
) AS qsq
CROSS APPLY
(
    SELECT
        qsp.*
    FROM sys.query_store_plan AS qsp
    WHERE qsp.query_id = qsq.query_id
) AS qsp
ORDER BY 
    x.total_query_plans DESC;

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

Finding Single Use Queries In Query Store

Caching The Line


This metric gets looked at a lot in the plan cache to see how effective plan caching is. The main problem is that with high enough churn, you might not catch all the queries involved in the problem. Here are a couple of ways to look at this in Query Store.

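/* what share of runtime stats rows recorded a single execution? */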
WITH x AS
(
    SELECT
        single_use_queries = 
            SUM
            (
                CASE 
                    WHEN qsrs.count_executions = 1 
                    THEN 1 
                    ELSE 0 
                END
            ),
        total_queries = 
            COUNT_BIG(*)
    FROM sys.query_store_runtime_stats AS qsrs
)
SELECT
    x.*,
    percent_single_use_plans = 
        CONVERT
        (
            decimal(5,2),
            single_use_queries / 
                (
                    1. * 
                    NULLIF
                    (
                        x.total_queries, 
                        0
                    )
                ) * 100.
        )
FROM x;

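/* which query texts have a plan with a single recorded execution? */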
SELECT
    qsqt.query_sql_text
FROM sys.query_store_query_text AS qsqt
WHERE EXISTS
      (
          SELECT 
              1/0
          FROM sys.query_store_query AS qsq
          JOIN sys.query_store_plan AS qsp
              ON qsq.query_id = qsp.query_id
          JOIN sys.query_store_runtime_stats AS qsrs
              ON qsp.plan_id = qsrs.plan_id
          WHERE qsqt.query_text_id = qsq.query_text_id
          AND   qsrs.count_executions = 1
      );

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

Why You Should Stop Looking At Query Costs In SQL Server

Spare No Expense


Over the years, there have been a lot of requests to get sp_BlitzCache to sort results by query cost. I understand why. It’s assumed that the optimizer is never wrong and that cost is directly associated with poor performance.

There are also rather misguided efforts to figure out parallelism settings based on plan costs. The main problem with that is that if you currently have a lot of parallel queries, all it means is that the estimated cost of the serial plan was higher than your current Cost Threshold For Parallelism setting, and the cost of the parallel plan was less than the cost of the serial plan.

If you increase Cost Threshold For Parallelism, you may very well still end up with a parallel plan, because the serial version was still more expensive. If you eventually change Cost Threshold For Parallelism to the point where some queries are no longer eligible for parallelism, you may eventually find yourself unhappy with the performance of the serial version of the query plan.

Albeit with less overall wait time on CX* doodads.

Next you’ll be complaining about all the SOS_SCHEDULER_YIELD waits you’ve got.

Insteads


Rather than look at estimated metrics, you should be looking at how queries actually perform. For most servers I look at, that means looking at queries with high average CPU time, and large memory grants. Those metrics typically represent tunable aspects of the query.

In other cases, you might look at wait stats to direct the type of queries you want to go after. Reads, writes, and executions are also valuable metrics at times.

One danger of looking at totals rather than averages is that you may find things that do a little bit of something a whole lot of times, and there’s no real way to tune the small bit of activity they generate other than to run the query less.
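
If you want a starting point for that in Query Store itself, here’s a sketch ordered by average CPU time. Times in the runtime stats views are microseconds, and memory is in 8KB pages, hence the conversions:

SELECT TOP (10)
    qsp.query_id,
    qsrs.plan_id,
    qsrs.count_executions,
    avg_cpu_ms = qsrs.avg_cpu_time / 1000.,
    avg_used_memory_kb = qsrs.avg_query_max_used_memory * 8.
FROM sys.query_store_runtime_stats AS qsrs
JOIN sys.query_store_plan AS qsp
    ON qsp.plan_id = qsrs.plan_id
ORDER BY qsrs.avg_cpu_time DESC;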

What’s A Cost For?


In general, I only tend to look at costs to figure out plan choices within a query, or when comparing two different plans for “the same” query.

This is where experimenting with hints to change the plan shapes and choices can show you why you got the plan you did, and what you might have to do to get the plan you want naturally.

Let’s say you want to figure out why you got a specific join type. You hint the type of join you want, and there’s a missing index request now. Adding the index gets you the plan shape you want without the hint. Everyone lived happily ever after.

Until the index got fragmented ???
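
A hypothetical example of that experiment, with made-up table names:

/* hint the join you want; if a missing index request appears,
   that index is often what stands between you and the better shape */
SELECT
    u.Id,
    p.Score
FROM dbo.Users AS u
JOIN dbo.Posts AS p
    ON p.OwnerUserId = u.Id
WHERE u.Reputation > 100000
OPTION (LOOP JOIN);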

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

No THREADPOOL Waits In Query Store

Yep Nope


Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

Wait Stats SQL Server’s Query Store Doesn’t Actually Track

Miss Leading


There are some oddities in the documentation for query store wait stats.

One is that RESOURCE_SEMAPHORE_QUERY_COMPILE is listed as a collected wait, but with an asterisk that says it’s not actually collected. I’ve tested workloads that generate lots of that wait, and just like the docs say, it doesn’t end up there.

Of course, since I added wait stats recently to sp_QuickieStore, I wanted to make sure other waits that I care about actually show up in there.

THREADPOOL Doesn’t


I ran a workload where 747 queries waited on THREADPOOL before running and completing.

[Screenshot: THREADPOOL waits piling up (“el threado”)]

But nothing shows up in Query Store wait stats. The stored procedure I ran to create the THREADPOOL waits shows up, and it has other waits — CPU and stuff — but nothing about workers. My guess as to why is that it’s a pre-compilation wait: when those waits pile up, there’s no execution context.

They’re quite not-figuratively spidless.

[Screenshot: no THREADPOOL rows in Query Store wait stats (“lezz than zero”)]

So, uh, I had to pull filtering for that wait out.

RESOURCE_SEMAPHORE Does


Running a similar workload, I can see where queries start to rack up RESOURCE_SEMAPHORE waits. At least, I’m pretty sure they do.

See, those waits are buried in a bunch of memory waits that are usually insignificant — and hell, on a well-functioning server so is RESOURCE_SEMAPHORE — but I see the time spent on that, plus some other memory-related waits.

[Screenshot: memory-category wait time in Query Store (“carry the um”)]
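
Here’s a sketch of how I poke at that. Query Store only stores the wait category, so “Memory” is as granular as it gets:

SELECT
    qsws.plan_id,
    qsws.wait_category_desc,
    total_wait_ms = SUM(qsws.total_query_wait_time_ms)
FROM sys.query_store_wait_stats AS qsws
WHERE qsws.wait_category_desc = N'Memory'
GROUP BY
    qsws.plan_id,
    qsws.wait_category_desc
ORDER BY total_wait_ms DESC;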

And that’s probably good enough for hand grenades.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.