SQL Server 2022: Introduces the DATETRUNC Function, So You Can Truncate Your Dates And Stuff

Underscoring The Importance


When I first sat down to write about this, I made a funny mistake: I kept writing DATE_TRUNC over and over again.

In SQL Server it’s DATETRUNC.

Why? Because DATE_TRUNC is how it’s spelled in Postgres and DB2. Oracle, of course, just calls it TRUNC.

So, while it’s nice to have (what appears to be) the same behavior, it doesn’t exactly help migrations that the name isn’t a 1:1 match with other platforms.

I assume most of these additions to T-SQL are for cross-platform development and migrations.

Of course, Microsoft being so gosh darn late to this game means folks have probably been rolling their own versions of these functions for years.

If they went and called their system function DATE_TRUNC or even TRUNC, they might have some object naming issues to contend with.

Well, okay. But how does it work?

Childish Games


Here are some quick examples of how you call it.

SELECT TOP (10)
    u.DisplayName,
    year = 
        DATETRUNC(YEAR, u.LastAccessDate),
    quarter = 
        DATETRUNC(QUARTER, u.LastAccessDate),
    month = 
        DATETRUNC(MONTH, u.LastAccessDate),
    dayofyear = 
        DATETRUNC(DAYOFYEAR, u.LastAccessDate),
    day = 
        DATETRUNC(DAY, u.LastAccessDate),
    week = 
        DATETRUNC(WEEK, u.LastAccessDate),
    iso_week = 
        DATETRUNC(ISO_WEEK, u.LastAccessDate),
    hour = 
        DATETRUNC(HOUR, u.LastAccessDate),
    minute = 
        DATETRUNC(MINUTE, u.LastAccessDate),
    second = 
        DATETRUNC(SECOND, u.LastAccessDate),
    millisecond = 
        DATETRUNC(MILLISECOND, u.LastAccessDate),
    microsecond = 
        DATETRUNC(MICROSECOND, u.LastAccessDate) /*Doesn't work with datetime because there are no microseconds*/
FROM dbo.Users AS u;

And here are the results:

[Image: SQL Server query results, “workin’”]

The thing to note here is that there’s no rounding logic involved. You just go to the start of whatever unit of time you choose. Of course, this doesn’t seem to do anything to the millisecond portion of DATETIME, because it’s not precise enough.

But for anyone out there who was hoping for a SOMONTH function to complement the EOMONTH function, you get this instead.
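
Here’s a quick sketch of pairing the two, with GETDATE() standing in for whatever column you’d actually use. Note that EOMONTH hands back a date, while DATETRUNC hands back whatever type you gave it:

SELECT
    start_of_month = 
        DATETRUNC(MONTH, GETDATE()), /*first moment of the month, same type as the input*/
    end_of_month = 
        EOMONTH(GETDATE());          /*last day of the month, as a date*/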

Works well enough!

But does it perform, Darling?

UnSARGable?


To make any test like this worthwhile, we need an index to make data searchable.

CREATE INDEX
    v
ON dbo.Votes
    (CreationDate)
WITH
(
    SORT_IN_TEMPDB = ON,
    DATA_COMPRESSION = PAGE
);

You know, because that’s what they do. To make searching faster. Hello.

So look, under these perfect circumstances, everything performs well. But we have to do a lot of typing.

SELECT
    c = COUNT_BIG(*)
FROM dbo.Votes AS v
WHERE v.CreationDate >= DATETRUNC(YEAR, CONVERT(datetime, '20130101 00:00:00.000'));

Note here that we’re working on a literal value, not a column value, and we have to tell DATETRUNC which type we want via CONVERT so that we get a simple seek plan:

[Image: SQL Server query plan, “grin”]

Without all that, we get a dynamic seek plan:

SELECT
    c = COUNT_BIG(*)
FROM dbo.Votes AS v
WHERE v.CreationDate >= DATETRUNC(YEAR, '20130101 00:00:00.000');

This has some… obvious performance issues compared to the above plan with correct data types.

[Image: SQL Server query plan, “query pan”]

Query Pranks


Frequent blog readers will not be surprised that wrapping a table column in the new DATETRUNC function yields old performance problems:

SELECT
    c = COUNT_BIG(*)
FROM dbo.Votes AS v
WHERE DATETRUNC(YEAR, v.CreationDate) >= CONVERT(datetime, '20130101 00:00:00.000');

This is particularly annoying because we’re truncating the column to the beginning of the year, which should be searchable in the index since that’s the sort order of the data in the index.

[Image: SQL Server query plan, “shined out”]
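
If you can rewrite the query, the usual fix applies: leave the column alone and put the math on the literal side. Truncating the column to the start of the year and asking for >= is logically the same as comparing the raw column to that year-start boundary, so a sketch of the seekable version looks like this:

SELECT
    c = COUNT_BIG(*)
FROM dbo.Votes AS v
WHERE v.CreationDate >= CONVERT(datetime, '20130101 00:00:00.000');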

Like most functions, these are fine in the presentation layer, but terrible in the relational layer. There’s no warning about this performance degradation in the documentation, either at the example of using the function in a where clause, or in the final closing remarks.

But that’s par for the course with any of these built-in functions.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that, and need to solve database performance problems quickly.

Even If SQL Server Table Variables Were Always In Memory, It Wouldn’t Make Them Better Than Temp Tables

Past Perfect


You know all those pesky developer myths that never go away?

  • CTEs and Views can’t use indexes: WRONG
  • Subqueries are slower than joins: WRONG
  • Temp tables are bad for performance: WRONG
  • Table Variables are always in memory: WRONG

Okay, that’s not a complete list, but a complete list would fill the internet to its very brim and cause all sorts of overflow errors.

My job as a performance tuning consultant is to teach folks when they’ve been led astray.

To confuse things a bit further, Microsoft allows for in-memory @table variables, but you have to do a lot of work to get set up to use them.

They’re not the ones you get out of the box.
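
If you’re curious what that work looks like, here’s a minimal sketch. The filegroup name, file path, and type name are all hypothetical, and your database needs a MEMORY_OPTIMIZED_DATA filegroup before any of the fun stuff works:

/*One-time setup: a memory-optimized filegroup and container (hypothetical names and paths)*/
ALTER DATABASE StackOverflow2013
ADD FILEGROUP imoltp CONTAINS MEMORY_OPTIMIZED_DATA;

ALTER DATABASE StackOverflow2013
ADD FILE
(
    NAME = 'imoltp_data',
    FILENAME = 'D:\SQL\imoltp_data' /*this is a directory, not a file*/
)
TO FILEGROUP imoltp;
GO

/*A memory-optimized table type; @table variables declared with it live in memory*/
CREATE TYPE dbo.InMemoryStuff AS TABLE
(
    Id int NOT NULL,
    INDEX i NONCLUSTERED (Id) /*at least one index is required*/
)
WITH (MEMORY_OPTIMIZED = ON);
GO

DECLARE @InMemory dbo.InMemoryStuff;
INSERT @InMemory (Id) VALUES (1);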

Future Mediocre


Let’s say for the sake of argument that @table variables were always more “in-memory” than #temp tables (they’re not). They’d still have a lot of the same fundamental problems:

  • No column-level statistics, so the optimizer has to guess how many rows match your predicates
  • Lowball cardinality estimates (deferred compilation in 2019 fixed the table-level guess, not the column-level ones)
  • Queries that modify them can’t use parallel plans
  • No real indexing options beyond constraints and inline index definitions

With all that stuff in mind: who cares if they’re only in memory? There are so many downsides for most query patterns, they could all happen in quantum space and performance would still stink.

I know, they have their uses.

But most of y’all aren’t using them that wisely.

And here’s the other thing: Most of y’all don’t have enough memory to begin with, and the memory you do have you misuse in a million ways. Why do you wanna make that worse?

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that, and need to solve database performance problems quickly.

Why You Can’t Always Rely On Estimated Query Plans In SQL Server

Simple Enough


Most of the time, if you get the estimated plan for a query, the actual execution plan will look about the same.

You’ll get the additional information of course, but the shape and most of the other attributes will match up.

Other times, things will change between the two.

Here’s a silly example I came across recently while working on demos for some other stuff.

SELECT TOP (100)
     p.Id,
     vs.UpVotes,
     vs.DownVotes
FROM dbo.VoteStats() AS vs
JOIN dbo.Posts AS p
    ON vs.PostId = p.Id
WHERE vs.DownVotes > vs.UpMultipier
AND   p.CommunityOwnedDate IS NULL
AND   p.ClosedDate IS NULL
ORDER BY vs.UpVotes DESC;

Estimated Execution Plan


Here’s the estimated plan I get for this query:

[Image: SQL Server query plan, “estimations”]

SQL Server estimates that a serial execution plan will work best, here.

Actual Execution Plan


Here’s the actual execution plan for the same query:

[Image: SQL Server query plan, “actuality”]

At runtime, SQL Server decides that a parallel plan is the better solution.

Wild Nights


SQL Server can totally estimate parallel plans. I’ve seen it a million times.

In this case, though, it doesn’t do that. If you get the estimated plan after executing the query, it’ll come back with the parallel version.
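
If you want to catch the difference yourself, the session-level settings for grabbing each flavor of plan look something like this (with a stand-in query against the Posts table):

/*Estimated plan: compiles only, nothing executes.
  SET SHOWPLAN_XML has to be alone in its batch.*/
SET SHOWPLAN_XML ON;
GO

SELECT
    c = COUNT_BIG(*)
FROM dbo.Posts AS p;
GO

SET SHOWPLAN_XML OFF;
GO

/*Actual plan: the query runs, and runtime details come back with the plan*/
SET STATISTICS XML ON;

SELECT
    c = COUNT_BIG(*)
FROM dbo.Posts AS p;

SET STATISTICS XML OFF;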

Why does this happen? Well, as SQL Server adds features under the Intelligent Query Processing umbrella, more and more decisions get made when a query executes, rather than when it compiles.

Stuff like this can make estimated and actual plans look much different from each other:

  • Adaptive joins
  • Interleaved execution
  • Table variable deferred compilation
  • Batch mode on row store

Wonders of modern query optimizers, eh?

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

Let’s Stop Calling Queries “Expensive”

Bad Names


When we talk about finding queries to tune, there’s an unfortunate term that gets thrown around: Expensive Queries.

Why is it unfortunate? Well, it reinforces the wrong mindset when it comes to query tuning, and leads people down the wrong path, looking at the wrong metrics.

SQL Server does have a cost-based optimizer, but those costs don’t mean anything to your hardware, or even to how long a query runs for.

Those costs are all estimates, based on two-decade-old computer specs. There are many times when they’re wrong, or not properly aligned with reality.

Worse, it leads people to want to do crazy things like sort the plan cache by query cost to find things to tune.

Worse than that, they’ll look at “query cost relative to batch” to compare two queries for efficiency.

Ethically Sourced


There are many sources to find queries eating up your server hardware.

The point of this post isn’t to teach you how to use any of those things, but to teach you how to be smarter about using them in whatever way you’re comfortable.

My two favorite metrics to look at when looking for queries to tune are CPU and RAM. I explain why in the post, but the short story is that they’re reliable, real-life metrics that can be directly measured before and after to gauge progress.
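
If you want a place to start, here’s a minimal sketch against the plan cache DMVs. total_worker_time covers the CPU side in microseconds, and the grant columns (available since SQL Server 2016) cover the memory side:

SELECT TOP (10)
    qs.total_worker_time, /*total CPU, in microseconds*/
    qs.max_grant_kb,      /*largest memory grant handed out*/
    qs.execution_count,
    st.text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;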

I don’t look at things like reads, because those might go up or down while your query runtime doesn’t change at all.

They’re also pretty misleading if you’re looking at STATISTICS IO in a lot of circumstances, like with lookups.

Sortie


A while back I recorded a bunch of videos that show how cached/estimated plans can be really misleading when it comes to costs and all that stuff.

You can find it here:

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

How SQL Server 2022’s Parameter Sensitive Plan Feature Can Make Query Store Confusing

Psssssp



Thanks for watching!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

SQL Server 2017 CU 30: The Real Story With SelOnSeqPrj Fixes

I am a heading



Thanks for watching! Demo scripts below.

Demo Scripts


USE StackOverflow2013;
EXEC dbo.DropIndexes;
SET NOCOUNT ON;
DBCC FREEPROCCACHE;
GO 


CREATE INDEX 
   chunk 
ON dbo.Posts 
    (OwnerUserId, Score DESC) 
INCLUDE  
    (CreationDate, LastActivityDate)
WITH
    (MAXDOP = 8, SORT_IN_TEMPDB = ON, DATA_COMPRESSION = PAGE);
GO 

CREATE OR ALTER VIEW 
    dbo.PushyPaul
WITH SCHEMABINDING
AS
    SELECT 
        p.OwnerUserId,
        p.Score,
        p.CreationDate,
        p.LastActivityDate,
        PostRank = 
            DENSE_RANK() OVER
            ( 
               PARTITION BY 
                  p.OwnerUserId 
               ORDER BY     
                  p.Score DESC 
            )
    FROM dbo.Posts AS p;
GO 

SELECT 
    p.* 
FROM dbo.PushyPaul AS p
WHERE p.OwnerUserId = 22656;
GO 

CREATE OR ALTER PROCEDURE 
    dbo.StinkyPete 
(
    @UserId int
)
AS 
SET NOCOUNT, XACT_ABORT ON;
BEGIN
    SELECT 
        p.* 
    FROM dbo.PushyPaul AS p
    WHERE p.OwnerUserId = @UserId;
END;
GO 

EXEC dbo.StinkyPete 
    @UserId = 22656;



/*Start Here*/

ALTER DATABASE 
    StackOverflow2013 
SET PARAMETERIZATION SIMPLE;

DBCC TRACEOFF
(
    4199, 
    -1
);

ALTER DATABASE SCOPED CONFIGURATION 
    SET QUERY_OPTIMIZER_HOTFIXES = OFF;


SELECT 
    p.* 
FROM dbo.PushyPaul AS p
WHERE p.OwnerUserId = 22656
AND   1 = (SELECT 1); /*Avoid trivial plan/simple parameterization*/


/*Let's cause a problem!*/
ALTER DATABASE 
    StackOverflow2013 
SET PARAMETERIZATION FORCED;

SELECT 
    p.* 
FROM dbo.PushyPaul AS p
WHERE p.OwnerUserId = 22656
AND   1 = (SELECT 1); /*Avoid trivial plan/simple parameterization*/


/*Can we fix the problem?*/
DBCC TRACEON
(
    4199, 
    -1
);


SELECT 
    p.* 
FROM dbo.PushyPaul AS p
WHERE p.OwnerUserId = 22656
AND   1 = (SELECT 1); /*Avoid trivial plan/simple parameterization*/


/*That's kinda weird...*/
DBCC FREEPROCCACHE;


SELECT 
    p.* 
FROM dbo.PushyPaul AS p
WHERE p.OwnerUserId = 22656
AND   1 = (SELECT 1); /*Avoid trivial plan/simple parameterization*/


/*Turn Down Service*/
DBCC TRACEOFF
(
    4199, 
    -1
);

SELECT 
    p.* 
FROM dbo.PushyPaul AS p
WHERE p.OwnerUserId = 22656
AND   1 = (SELECT 1); /*Avoid trivial plan/simple parameterization*/


/*Okay then.*/


/*I'm different.*/
ALTER DATABASE SCOPED CONFIGURATION 
    SET QUERY_OPTIMIZER_HOTFIXES = ON;


SELECT 
    p.* 
FROM dbo.PushyPaul AS p
WHERE p.OwnerUserId = 22656
AND   1 = (SELECT 1); /*Avoid trivial plan/simple parameterization*/



/*Cleanup*/
ALTER DATABASE 
    StackOverflow2013 
SET PARAMETERIZATION SIMPLE;

ALTER DATABASE SCOPED CONFIGURATION 
    SET QUERY_OPTIMIZER_HOTFIXES = OFF;

DBCC TRACEOFF
(
    4199, 
    -1
);

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

SQL Server 2017 CU 30 Doesn’t Actually Fix The Problem With Views And Parameters

I am a heading



In the release notes for SQL Server 2017 CU30, there’s a note that it fixes a problem where parameters can’t be pushed past Sequence Project operators:

“In Microsoft SQL Server 2017, running parameterized queries skips the SelOnSeqPrj rule. Therefore, pushdown does not occur.” But it doesn’t actually fix that.

Paul White Original Post: The Problem with Window Functions and Views

The Problem In The Plan


Here are the good and bad plans, comparing a literal value against a parameterized value:

[Image: SQL Server query plans, “dunksville”]
  • In the plan with a literal value, the predicate is applied at the index seek, and the filtering is really quick.
  • In the plan with a parameterized value, the whole index is scanned, and the predicate is applied at a Filter operator much later in the plan.

This is where the SelOnSeqPrj rule comes in: The parameter can’t be pushed past the Sequence Project operator like the literal value can.
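
If you’re stuck on a build where this isn’t fixed, the usual workaround from Paul’s post is a statement-level recompile, which enables the parameter embedding optimization so the parameter gets treated like a literal. A sketch against the demo procedure from the last post:

CREATE OR ALTER PROCEDURE 
    dbo.StinkyPete 
(
    @UserId int
)
AS 
SET NOCOUNT, XACT_ABORT ON;
BEGIN
    SELECT 
        p.* 
    FROM dbo.PushyPaul AS p
    WHERE p.OwnerUserId = @UserId
    OPTION (RECOMPILE); /*parameter embedding kicks in here*/
END;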

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

Fixing Ordered Column Store Sorting In SQL Server 2022

Groove Is In The Heart


When Brent posted about the availability of, and disappointment with, ordered column store indexes in SQL Server 2022, I got to work.

I can’t have my dear friend Brent being all distraught with all those fast cars around. That’s how accidents happen, and I fear he might leave the Blitz scripts to me in his will or something.

Anyway, I decided to dig in and see what was going on behind the scenes. Which, of course, means query plans, and bothering people who are really good at debuggers.

Most of the problems that you’ll run into in SQL Server will come from sorting data.

Whenever I have to think about Sorts, I head to this post about all the different Sorts you might see in a query plan.

More on that later, though.

Cod Piece


In Paul’s post, he talks about using undocumented trace flag 8666 to get additional details about Sort operators.

Let’s do that. Paul is smart, though he is always completely wrong about which season it is.

DROP TABLE IF EXISTS
    dbo.Votes_CCI;

SELECT
    v.*
INTO dbo.Votes_CCI
FROM dbo.Votes AS v;

I’m using the Votes table because it’s nice and narrow and I don’t have to tinker with any string columns.

Strings in databases were a mistake, after all.

DBCC TRACEON(8666);
CREATE CLUSTERED COLUMNSTORE INDEX
    vcci
ON dbo.Votes_CCI
ORDER (Postid);
DBCC TRACEOFF(8666);

Here’s what we get back in the query plan:

[Image: SQL Server query plan, “Tainted Sort”]

We’ve got a Soft Sort! What does our seasonally maladjusted friend say about those?

A “soft sort” uses only its primary memory grant and never spills. It doesn’t guarantee fully-sorted output. Each sort run using the available memory grant will be sorted. A soft sort represents a best effort given the resources available. This property can be used to infer that a Sort is implemented with CQScanPartitionSortNew without attaching a debugger. The meaning of the InMemory property flag shown above will be covered in part 2. It does not indicate whether a regular sort was performed in memory or not.

Well, with that attitude, it’s not surprising that there are so many overlapping buckets in the column store index. If it’s not good enough, what can you do?

Building the index with the Soft Sort here also leads to things being as bad as they were in Brent’s post.

Insert Debugging Here


Alas, there’s (almost) always a way. Microsoft keeps making these trace flag things.

There are a bunch of different ways to track them down, but figuring out the behavior of random trace flags that you may find just by enabling them isn’t easy.

One way to tie a trace flag to a behavior is to use WinDbg to step through different behaviors in action, and see if SQL Server checks to see if a trace flag is enabled when that behavior is performed.

If you catch that, you can be reasonably sure that the trace flag will have some impact on the behavior. Not all trace flags can be enabled at runtime. Some need to be enabled as startup options.

Sometimes it’s hours and hours of work to track this stuff down, and other times Paul White (b|t) already has notes on helpful ones.

The trace flag below, 2417, is present going back to SQL Server 2014, and can help with the Soft Sort issues we’re seeing when building ordered clustered column store indexes today.

Here’s another one:

DBCC TRACEON(8666, 2417);
CREATE CLUSTERED COLUMNSTORE INDEX
    vcci
ON dbo.Votes_CCI
ORDER (Postid)
WITH(MAXDOP = 1);
DBCC TRACEOFF(8666, 2417);

The MAXDOP 1 hint isn’t strictly necessary. With a parallel plan, you may see up to DOP overlapping row groups.

[Image: SQL Server query plan, “community service”]

That’s why it was a popular maneuver to emulate this behavior by creating a clustered row store index, and then creating a clustered column store index over it with DROP_EXISTING and a MAXDOP 1 hint.
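
That maneuver looked something like this, sketched against the demo table from earlier:

/*First, a clustered row store index to get the data sorted*/
CREATE CLUSTERED INDEX
    vcci
ON dbo.Votes_CCI
    (PostId)
WITH (SORT_IN_TEMPDB = ON);

/*Then swap it for a clustered column store index, serially, to preserve the order*/
CREATE CLUSTERED COLUMNSTORE INDEX
    vcci
ON dbo.Votes_CCI
WITH (DROP_EXISTING = ON, MAXDOP = 1);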

At DOP 1, you don’t see that overlap. It takes a lot longer, of course: 3 minutes instead of 30 or so seconds, which is a real bummer. But without it, you could see up to DOP overlapping rowgroups.

If you want All The Pretty Little Rowgroups, this is what you have to do.

Anyway, the result using sp_BlitzIndex looks a lot better now:

EXEC sp_BlitzIndex
    @TableName = 'Votes_CCI';
[Image: SQL Server query results, “capture the flag”]

How nice.

You can also use undocumented and unsupported trace flag 11621, which is

[A] feature flag for the ‘partition sort on column store order’ so the end result is similar, but via a different mechanism to 2417.
A partition sort is useful in general to prevent unnecessary switching between partitions. If you sort the stream by partition, you process all the rows for one before moving on to the next. A soft sort is ok there because it’s just a performance optimization. Worst case, you end up switching between partitions quite often because the sort ran out of memory, but correct results will still occur.

Chain Gang


A “reasonable” alternative to trace flags may be to adjust the index create memory configuration option. If we set it down to the minimum value, we get a “helpful” error message:

EXEC sys.sp_configure 
    'index create memory', 
    704;

RECONFIGURE;

As promised:

Msg 8606, Level 17, State 1, Line 31

This index operation requires 123208 KB of memory per DOP.

The total requirement of 985800 KB for DOP of 8 is greater than the sp_configure value of 704 KB set for the advanced server configuration option “index create memory (KB)”.

Increase this setting or reduce DOP and rerun the query.

If you get the actual execution plan for the clustered column store index create or rebuild with the Soft Sort disabled and look at the memory grant, you get a reasonable estimate for what to set index create memory to.

Changing it does two things:

  • Avoids the very low memory grant that Soft Sorts receive, which is what causes the uneven row groups
  • Lets the Soft Sort keep the index create from going above that index create memory number

Setting index create memory for this particular index creation/rebuild to 5,561,824 gets you the nice, even row groups (at MAXDOP 1) that we saw when disabling the Soft Sort entirely.
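
For reference, that’s the same sp_configure call from earlier, just with a grown-up number:

EXEC sys.sp_configure 
    'index create memory', 
    5561824;

RECONFIGURE;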

The bottom line here is that uneven row groups happen with column store indexes when there’s a:

  • Parallel create/rebuild
  • Low memory grant create/rebuild

If this sort of thing is particularly important to you, you could adjust index create memory to a value that allows the Soft Sort adequate memory.

But that’s a hell of a lot of work, and I hope Microsoft just fixes this in a later build.

Reality Bites


The cute thing here is that, while this syntactical functionality has been available in Azure Cloud Nonsense© for some time, no one uses that, so no one cares.

The bits for this were technically available in SQL Server 2019 as well, but I’m not telling you how to do that. It’s not supported, and bad things might happen if you use it.

I mean, bad things happen in SQL Server 2022 where it’s supported unless you use an undocumented trace flag, but… Uh. I dunno.

This trace flag seems to set things back to how things worked in the Before Times, though, which is probably how they should have stayed.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

SQL Server 2022 Parameter Sensitive Plan Optimization: Does Not Care To Fix Your Local Variable Problems

–To Fix Parameter Sniffing


There are some code comments you see that really set the stage for how tuning a query is going to go.

Usually one misgiving about how SQL Server works gives way to a whole levee-breaking bevy of other ones and three days later you can’t feel your legs but dammit it’s done.

Okay, maybe it was three hours, but it felt like three days. Something about the gravitational pull of these black hole queries.

One fix I’ve been wishing for, or wish I’ve been fixing for, is a cure for local variables. I’d even be cool if Forced Parameterization was that cure, but you know…

Time will tell.

Husk


Let’s say we’ve got this stored procedure, which does something similar to the “I’m gonna fix parameter sniffing with a local variable hey why is everything around me turning to brimstone before my very eyes?” idea, but with… less of an end-of-times vibe.

CREATE OR ALTER PROCEDURE 
    dbo.IndexTuningMaster
( 
    @OwnerUserId int,
    @ParentId int, 
    @PostTypeId int 
)
AS
BEGIN
SET NOCOUNT, XACT_ABORT ON;

    /*Someone passed in bad data and we got a bad query plan,
      and we have to make sure that doesn't happen again*/
    
    DECLARE 
        @ParentIdFix int = 
            CASE 
                WHEN @ParentId < 0 
                THEN 0 
                ELSE @ParentId 
            END;
    
    SELECT TOP (1) 
        p.*
    FROM dbo.Posts AS p
    WHERE p.ParentId = @ParentIdFix
    AND   p.PostTypeId = @PostTypeId
    AND   p.OwnerUserId = @OwnerUserId
    ORDER BY 
        p.Score DESC, 
        p.Id DESC;

END;

How bad could a top 1 query be, anyway?

Fortune Teller


When we run this query like so and so:

EXEC dbo.IndexTuningMaster 
    @OwnerUserId = 22656, 
    @ParentId = 0, 
    @PostTypeId = 1;

EXEC dbo.IndexTuningMaster 
    @OwnerUserId = 22656, 
    @ParentId = 184618, 
    @PostTypeId = 2;

We come up with zip zero zilch none nada:

[Image: SQL Server query plan, “still playing”]

We get a super low guess for both. Obviously, that guess hurts a large set of matched data far worse than a small one, but the important thing here is that both queries receive the same bad guess.

This is a direct side effect of the local variable’s poor estimate, which PSP isn’t quite yet ready to go up against.
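
The old fixes are still the fixes, too. If you can’t get rid of the local variable, a statement-level recompile lets the optimizer sniff its runtime value instead of guessing off the density vector. A sketch of the select from the procedure above:

    SELECT TOP (1) 
        p.*
    FROM dbo.Posts AS p
    WHERE p.ParentId = @ParentIdFix
    AND   p.PostTypeId = @PostTypeId
    AND   p.OwnerUserId = @OwnerUserId
    ORDER BY 
        p.Score DESC, 
        p.Id DESC
    OPTION (RECOMPILE); /*the runtime value of @ParentIdFix gets used for estimates*/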

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

SQL Server 2022 Parameter Sensitive Plan Optimization: How PSP Can Help Some Queries With IF Branches

Time Served


I’ve spent a bit of time talking about how IF branches can break query performance really badly in SQL Server.

While the Parameter Sensitive Plan (PSP) optimization won’t fix every problem with this lazy coding habit, it can fix some of them in very specific circumstances, assuming:

  • The parameter is eligible for PSP
  • The parameter is present across IF branches

We’re going to use a simple one parameter example to illustrate the potential utility here.

After all, if I make these things too complicated, someone might leave a question in the comments.

The horror

IFTTT


Here’s the procedure we’re using. The point is to execute one branch if the @Reputation parameter is equal to one, and another branch if it equals something else.

In the bad old days, both queries would get a plan optimized at compile time, and neither one would get the performance boost that you hoped for.

In the good new days that you’ll probably get to experience around 2025, things are different!

CREATE OR ALTER PROCEDURE 
    dbo.IFTTT 
(
    @Reputation int
)
AS 
BEGIN
SET NOCOUNT, XACT_ABORT ON;

SET STATISTICS XML ON;  

    IF @Reputation = 1
    BEGIN
        SELECT 
            u.Id, 
            u.DisplayName, 
            u.Reputation, 
            u.CreationDate
        FROM dbo.Users AS u
        WHERE u.Reputation = @Reputation;
    END;

    IF @Reputation > 1
    BEGIN
        SELECT 
            u.Id, 
            u.DisplayName, 
            u.Reputation, 
            u.CreationDate
        FROM dbo.Users AS u
        WHERE u.Reputation = @Reputation;
    END;

SET STATISTICS XML OFF; 

END;
GO 

Johnson & Johnson


If we execute these queries back to back, each one gets a new plan:

EXEC dbo.IFTTT 
    @Reputation = 1;
GO 

EXEC dbo.IFTTT 
    @Reputation = 2;
GO
[Image: SQL Server query plan, “psychic driving”]

Optimize For You


The reason why is in the resulting queries, as usual. The Reputation column has enough skew present to trigger the PSP optimization, so executions with differently-bucketed parameter values end up with different plans. The boundary values in the predicate_range hints below (100 and 1,000,000) are the estimated-row-count buckets that decide which plan variant a given parameter value gets.

option (PLAN PER VALUE(QueryVariantID = 3, predicate_range([StackOverflow2013].[dbo].[Users].[Reputation] = @Reputation, 100.0, 1000000.0)))

option (PLAN PER VALUE(QueryVariantID = 2, predicate_range([StackOverflow2013].[dbo].[Users].[Reputation] = @Reputation, 100.0, 1000000.0)))

And of course, each plan has different compile and runtime values:

[Image: SQL Server query plan, “care”]

If I were to run this demo in a compatibility level under 160, this would all look totally different.
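
That’s easy enough to try, since PSP needs compatibility level 160. Knock the database down a notch and both executions share one sad plan again:

ALTER DATABASE StackOverflow2013
SET COMPATIBILITY_LEVEL = 150; /*SQL Server 2019: no PSP optimization*/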

This is one change I’m sort of interested to see play out.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.