CTEs Don’t Make Queries More Readable, Formatting Does

Kākāpō!


One line I see over and over again — I’ve probably said it too when I was young and needed words to fill space — is that CTEs make queries more readable.

Personally, I don’t think they make queries any more readable than derived tables, but whatever. No one cares what I think, anyway.

Working with clients I see a variety of query formatting styles, ranging from quite nice ones that have influenced the way I format things, to completely unformed primordial blobs. Sticking the latter into a CTE does nothing for readability even if it’s commented to heck and back.

[image: nope nope nope]

There are a number of options for formatting code.

Good Standing


Formatting your code nicely doesn’t just help others read it, it can also help people understand how it works.

Take this example from sp_QuickieStore that uses the STUFF function to build a comma-separated list the crappy way.

If STRING_AGG were available in SQL Server 2016, I’d just use that. Darn legacy software.

[image: parens]

The text I added probably made things less readable, but formatting the code this way helps me make sure I have everything right.

  1. The opening and closing parens for the STUFF function
  2. The first input to the function is the XML generating nonsense
  3. The last three inputs to the STUFF function that identify the start, length, and replacement text

I’ve seen and used this specific code a million times, but it wasn’t until I formatted it this way that I understood how all the pieces lined up.
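Here’s a simplified sketch of the same pattern (building a list of table names, not the actual sp_QuickieStore code), formatted so those three pieces stand out:

```sql
/* 1. The opening and closing parens for the STUFF function */
SELECT
    table_list = 
        STUFF
        (
            /* 2. The XML generating nonsense is the first input */
            (
                SELECT
                    ', ' + t.name
                FROM sys.tables AS t
                FOR XML 
                    PATH(''), 
                    TYPE
            ).value('text()[1]', 'nvarchar(max)'),
            /* 3. Start, length, and replacement text */
            1,  /* start at the first character */
            2,  /* replace two characters: the leading ", " */
            ''  /* with nothing */
        );
```

With the parens stacked vertically, it’s obvious which arguments belong to STUFF and which belong to the inner XML subquery.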

Compare that with another time I used the same code fragment in sp_BlitzCache. I wish I had formatted a lot of the stuff I wrote in there better.

[image: carry the eleventy]

With things written this way, it’s really hard to understand where things begin and end, and which arguments belong to which part of the code.

Maybe someday I’ll open an issue to reformat all the FRK code 😂

Thanks for reading!

The Three Kinds Of Memory Contention In SQL Server

Savings and Loans


Whoever decided to give “memory bank” its moniker was wise beyond their years, or maybe they just made a very apt observation: all memory is on loan.

Even in the context we’ll be talking about, when SQL Server has lock pages in memory enabled, the pages that are locked in memory may not have permanent residency.

If your SQL Server doesn’t have enough memory, or if various workload elements are untuned, you may hit one of these scenarios:

  • Query Memory Grant contention (RESOURCE_SEMAPHORE)
  • Buffer Cache contention (PAGEIOLATCH_XX)
  • A mix of the two, where both are fighting over finite resources
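If you want a rough server-level look at the two wait types above, something like this works. It’s a sketch against sys.dm_os_wait_stats; the numbers are cumulative since the last restart, so compare samples over time rather than trusting a single snapshot.

```sql
/* Check accumulated waits for the two contention signals above */
SELECT
    dows.wait_type,
    dows.waiting_tasks_count,
    wait_time_seconds = 
        dows.wait_time_ms / 1000.
FROM sys.dm_os_wait_stats AS dows
WHERE dows.wait_type = N'RESOURCE_SEMAPHORE'
OR    dows.wait_type LIKE N'PAGEIOLATCH%'
ORDER BY dows.wait_time_ms DESC;
```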

It’s probably fair to note that not all query memory grant contention will result in RESOURCE_SEMAPHORE waits. There are times when you’ll have just enough queries asking for memory grants to knock a significant number of pages out of the buffer pool, causing an over-reliance on disk, without ever hitting the point where you’ve exhausted the amount of memory that SQL Server will loan out to queries.

To help you track down any of these scenarios, you can use my stored procedure sp_PressureDetector to see what’s going on with things.
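A basic check doesn’t need any parameters. This assumes a recent version with sensible defaults; check the documentation for your copy.

```sql
/* Default run: reports memory and disk pressure, and the queries involved */
EXEC dbo.sp_PressureDetector;
```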

Black Friday


Most servers I see have a mix of the two issues. Everyone complains about SQL Server being a memory hog without really understanding why. Likewise, many people are very proud about how fast their storage is without really understanding how much faster memory is. It’s quite common to hear someone say that they recently got a whole bunch of brand new shiny flashy storage but performance is still terrible on their server with 64GB of RAM and 1TB of data.

I recently had a client migrate some infrastructure to the cloud, and they were complaining about how queries got 3x slower. As it turned out, the queries were accruing 3x more PAGEIOLATCH waits with the same amount of memory assigned to SQL Server. Go figure.

If you’d like to see those waits in action, and how sp_PressureDetector can help you figure out which queries are causing problems, check out this video.

Market Economy


The primary driver of how much memory you need is how much control you have over the database. The less control you have, the more memory you need.

Here’s an example: One thing that steals control from you is using an ORM. When you let one translate code into queries, Really Bad Things™ can happen. Even with Perfect Indexes™ available, you can get some very strange queries and subsequently very strange query plans.

One of the best ways to take some control back isn’t even available in Standard Edition.

If you do have control, the primary drivers of how much memory you need are how effective your indexes are, and how well your queries are written to take advantage of them. You can get away with less memory in general because your data footprint in the buffer pool will be a lot smaller.

You can watch a video I recorded about that here.

Thanks for watching!

How SQL Server 2019 Helps You Link Queries To Missing Index Requests

Not Another Upgrade


When dm_db_missing_index_group_stats_query got documented, I was really happy. After all, this has been a peeve of mine for ages.

“Wow look at all these missing index requests. Where’d they come from?”

So this is neat! And it’s better than nothing, but there are some quirks.

And what’s a quirk, after all, but a twerk that no one enjoys.

Columnar


The first thing to note about this DMV is that there are two columns purporting to have sql_handles in them. No, not that sql_handle.

One of them can’t be used in the traditional way to retrieve query text. If you try to use last_statement_sql_handle, you’ll get an error.

SELECT
    ddmigsq.group_handle,
    ddmigsq.query_hash,
    ddmigsq.query_plan_hash,
    ddmigsq.avg_total_user_cost,
    ddmigsq.avg_user_impact,
    query_text = 
        SUBSTRING
        (
            dest.text, 
            (ddmigsq.last_statement_start_offset / 2) + 1,
            (
                ( 
                    CASE ddmigsq.last_statement_end_offset 
                        WHEN -1 
                        THEN DATALENGTH(dest.text) 
                        ELSE ddmigsq.last_statement_end_offset 
                    END
                    - ddmigsq.last_statement_start_offset 
                ) / 2 
            ) + 1
        )
FROM sys.dm_db_missing_index_group_stats_query AS ddmigsq
CROSS APPLY sys.dm_exec_sql_text(ddmigsq.last_statement_sql_handle) AS dest;

Msg 12413, Level 16, State 1, Line 27
Cannot process statement SQL handle. Try querying the sys.query_store_query_text view instead.
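Following the error’s advice, one way to get the query text is through Query Store instead, joining on query_hash. This is a sketch; it only returns rows in databases where Query Store is enabled and has captured the queries.

```sql
/* Get missing index request text via Query Store instead */
SELECT
    ddmigsq.group_handle,
    ddmigsq.query_hash,
    ddmigsq.avg_total_user_cost,
    ddmigsq.avg_user_impact,
    query_text = qsqt.query_sql_text
FROM sys.dm_db_missing_index_group_stats_query AS ddmigsq
JOIN sys.query_store_query AS qsq
    ON qsq.query_hash = ddmigsq.query_hash
JOIN sys.query_store_query_text AS qsqt
    ON qsqt.query_text_id = qsq.query_text_id;
```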

Is Vic There?


One other “issue” with the view is that entries are evicted from it if they’re evicted from the plan cache. That means that queries with recompile hints may never produce an entry in the table.

Is this the end of the world? No, and it’s not the only index-related DMV that behaves this way: dm_db_index_usage_stats does something similar with regard to cached plans.

As a quick example, if I execute these two nearly-identical queries, the DMV only records one potential use of the index:

SELECT 
    COUNT_BIG(*)
FROM dbo.Posts AS p
WHERE p.OwnerUserId = 22656
AND   p.Score < 0;
GO 

SELECT 
    COUNT_BIG(*)
FROM dbo.Posts AS p
WHERE p.OwnerUserId = 22656
AND   p.Score < 0
OPTION(RECOMPILE);
GO

[image: grizzly]

Italic Stallion


You may have noticed that may was italicized when talking about whether or not plans with recompile hints would end up in here.

Some of them may, if they’re part of a larger batch. Here’s an example:

SELECT 
    COUNT_BIG(*)
FROM dbo.Posts AS p
WHERE p.OwnerUserId = 22656
AND   p.Score < 0
OPTION(RECOMPILE);

SELECT
    COUNT_BIG(*)
FROM dbo.Users AS u
JOIN dbo.Posts AS p
   ON p.OwnerUserId = u.Id
JOIN dbo.Comments AS c
    ON c.PostId = p.Id
WHERE u.Reputation = 1
AND   p.PostTypeId = 3
AND   c.Score = 0;

Most curiously, if I run that batch twice, the missing index request for the recompile plan shows two uses.

[image: computer]

Multiplicity


You may have noticed something odd in the above screenshot, too. One query has produced three entries. That’s because…

The query has three missing index requests. Go ahead and click on that.

[image: lovecraft, baby]

Another longstanding gripe with SSMS is that it only shows you the first missing index request in green text, and that it might not even be the “most impactful” one.

That’s the case here, just in case you were wondering. Neither the XML, nor the SSMS presentation of it, attempt to order the missing indexes by potential value.

You can use the properties of the execution plan to view all missing index requests, like I blogged about here, but you can’t script them out easily like you can for the green text request at the top of the query plan.

[image: something else]

At least this way, it’s a whole heck of a lot easier for you to order them in a way that may be more beneficial.

EZPZ


Of course, I don’t expect you to write your own queries to handle this. If you’re the type of person who enjoys Blitzing things, you can find the new 2019 goodness in sp_BlitzIndex, and you can find all the missing index requests for a single query in sp_BlitzCache in a handy-dandy clickable column that scripts out the create statements for you.
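If you want to try those, the calls look something like this. Parameter names are from recent First Responder Kit builds; treat this as a sketch and check the documentation for your version.

```sql
/* Missing index requests, including the 2019 query-level details */
EXEC dbo.sp_BlitzIndex
    @DatabaseName = N'StackOverflow';

/* Plan cache analysis, with the clickable missing index column */
EXEC dbo.sp_BlitzCache
    @SortOrder = 'cpu';
```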

Thanks for reading!

A Bug With Recursive UDFs When Inlined In SQL Server 2019

Enough Already


I see people do things like this fairly often with UDFs. I don’t know why. It’s almost like they read a list of best practices and decided the opposite was better.

This is a quite simplified function, but it’s enough to show the bug behavior.

While writing this, I learned that you can’t create a recursive (self-referencing) scalar UDF with the schemabinding option. I don’t know why that is either.

Please note that this behavior has been reported to Microsoft and will be fixed in a future update, though I’m not sure which one.

Swallowing Flies


Let’s take this thing. Let’s take this thing and throw it directly in the trash where it belongs.

CREATE OR ALTER FUNCTION dbo.how_high
(
    @i int,
    @h int
)
RETURNS int
WITH
    RETURNS NULL ON NULL INPUT
AS
BEGIN

    SELECT 
       @i += 1;
    
    IF @i < @h
    BEGIN
        SET 
            @i = dbo.how_high(@i, @h);
    END;

    RETURN @i;

END;
GO

Seriously. You’re asking for a bad time. Don’t do things like this.

Unless you want to pay me to fix them later.

Froided


In SQL Server 2019, under compatibility level 150, this is what the behavior looks like currently:

/*
Works
*/
SELECT 
    dbo.how_high(0, 36) AS how_high;
GO 

/*
Fails
*/
SELECT 
    dbo.how_high(0, 37) AS how_high;
GO

The first execution returns 36 as the final result, and the second query fails with this message:

Msg 217, Level 16, State 1, Line 40
Maximum stored procedure, function, trigger, or view nesting level exceeded (limit 32).

A bit odd that it took 37 loops to exceed the nesting limit of 32.

This is the bug.

Olded


With UDF inlining disabled, a more obvious number of loops is necessary to encounter the error.

/*
Works
*/
SELECT 
    dbo.how_high(0, 32) AS how_high
OPTION (USE HINT('DISABLE_TSQL_SCALAR_UDF_INLINING'));
GO 

/*
Fails
*/
SELECT 
    dbo.how_high(0, 33) AS how_high
OPTION (USE HINT('DISABLE_TSQL_SCALAR_UDF_INLINING'));
GO

The first run returns 32, and the second run errors out with the same error message as above.

Does It Matter?


It’s a bit hard to imagine someone relying on that behavior, but I found it interesting enough to ask some of the nice folks at Microsoft about, and they confirmed that it shouldn’t happen. Again, it’ll get fixed, but I’m not sure when.

Thanks for reading!