Why Spills In Parallel Plans Can Be Weird

Go Along, Get Along


This is a somewhat complicated topic. There’s not a great TL;DR here, but I don’t want anyone to walk away from this post thinking that parallelism or indexes are “bad”.

What I do want to show is how uneven parallelism can exacerbate existing plan quality issues, and how some indexing can be unhelpful.

Query The First


This is the query we’ll be working with.

SELECT      u.Id, u.DisplayName, u.Reputation, ca.*
FROM        dbo.Users AS u
CROSS APPLY (   SELECT c.*, 
                       ROW_NUMBER() 
                           OVER ( PARTITION BY c.PostId 
                                  ORDER BY c.Score DESC ) AS PostScoreRank
                FROM   dbo.Comments AS c
                WHERE  u.Id = c.UserId
                AND    c.Score > 0 ) AS ca
WHERE       u.Reputation >= 100000
AND         ca.PostScoreRank < 1
ORDER BY    u.Reputation DESC
OPTION(RECOMPILE);

I’m using cross apply because the optimizer is likely to pick a Nested Loops join plan. These plans are unlikely to see a Redistribute Streams on the inner side of the join.

Within the apply, I’m making SQL Server do significantly more work than outside of it. This will make more sense later on.

Outside of the apply, I’m doing a little bit of work against a few columns in the Users table, columns that would probably make good candidates for indexing.

The index that I currently have on the Comments table looks like this:

    CREATE INDEX kerplop 
    ON dbo.Comments(UserId, PostId, Score DESC) 
    WHERE Score > 0;

Anyway, the query plan for this run looks like this:

Stevenage overspill

The part I want to focus on is the spills.

Goes on…

What you should keep in mind is that while all 4 threads spill, they all spill pretty evenly.

Thread distribution is pretty good across parallel workers. Not perfect, but hey.

All together now

If you want perfect, go be disappointed in what you get for $47k per .75 cores of Oracle Enterprise Edition.

Query The Second


Knowing what we know about stuff, we may wanna add this index:

    CREATE UNIQUE INDEX hey_scully
    ON dbo.Users (Id, Reputation DESC) 
    INCLUDE(DisplayName);

But when we do, performance gets much worse.

If only.

Zooming back in on the Sorts…

Happening in mine.

Each spill was about 2x as bad, because thread distribution got much worse.

Fall down

Poor thread 4 got stuck with ~534k rows. The problem here is that each thread in a parallel plan gets an even cut of the memory grant. That doesn’t rebalance if parallelism is skewed. Threads may rebalance if a Redistribute Streams operator appears, but we don’t have one of those here. We will sometimes get one on the outer side of nested loops joins, if the optimizer decides it’s needed.
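
If you want to watch that skew happen live, here’s one way to do it — a sketch that assumes SQL Server 2014 or later, and that the session running the query has an actual plan request or lightweight profiling enabled. Peek at sys.dm_exec_query_profiles from another window while the query runs:

/* Run from a second session while the query executes.
   Session id 55 is a placeholder for the spid running the query. */
SELECT node_id,
       physical_operator_name,
       thread_id,
       row_count
FROM   sys.dm_exec_query_profiles
WHERE  session_id = 55
ORDER BY node_id, thread_id;

The row counts per thread_id under the Nested Loops and Sort operators show exactly who got stuck with the big pile.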

But since we don’t, things get all screwy.

Underage

Thread 2, which had only 63k rows assigned to it, didn’t use the full amount of memory it got, though it still apparently spilled. Same with thread 3, but to a lesser extent (get it?).

But why did this happen when we added an index?

Paper Boy


Reading the plan from right to left, top to bottom, we start with a scan of the Users table. This is when something called the parallel page supplier kicks in and starts handing out rows as threads ask for them. Its job is to make sure that parallel workers get rows when they ask for them, and that different threads don’t get the same rows. To do that, it uses key ranges from the statistics histogram.

It makes for a rather dull screenshot, but both histograms are identical for the clustered and nonclustered indexes in this demo. It’s not a statistical issue.
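
If you want to check that for yourself, here’s a sketch (assuming SQL Server 2016 SP1 CU2 or later for the DMF) that lays the histogram steps for every statistics object on Users side by side:

SELECT s.name,
       h.step_number,
       h.range_high_key,
       h.range_rows,
       h.equal_rows
FROM   sys.stats AS s
CROSS APPLY sys.dm_db_stats_histogram(s.object_id, s.stats_id) AS h
WHERE  s.object_id = OBJECT_ID('dbo.Users')
ORDER BY s.name, h.step_number;

DBCC SHOW_STATISTICS with the HISTOGRAM option gets you the same information on older builds.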

Nor are indexes fragmented, so, like, don’t get me started.

According to my Dear Friend, the parallel page supplier aims for 64k chunks. The smaller index just happens to end up with a more unfortunate key range distribution across its fewer pages.

Feuer

What About A Different Index?


Let’s switch our indexes up and add this one:

    CREATE UNIQUE INDEX spooky_mulder
    ON dbo.Users (Reputation DESC, Id) 
    INCLUDE(DisplayName);

The plan no longer goes parallel, and it runs for about 4 seconds.

First Resort, Last Resort

We’re doing the same amount of work on the inner side of the nested loops join. The only part of the plan that changed is on the outer side.

Monkey Bread

This is more of an aside than anything, but in parallel nested loops plans, the optimizer only considers if parallelism will reduce the cost of the outer side of the join.

The plan changing to use a cheaper seek with no need to sort data means the outer side is rather cheap to execute, but the inner side is just as expensive.

Not to brag but

The DOP 1 plan is only slightly cheaper, here. You may expect a plan that “costs” this much to go parallel, but alas, it was not meant to be.
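
If you’re curious whether a parallel version of this plan would actually help, one test-only experiment is the undocumented ENABLE_PARALLEL_PLAN_PREFERENCE hint (the USE HINT cousin of trace flag 8649). This is a sketch for comparing plans on a test box, not something to ship:

SELECT      u.Id, u.DisplayName, u.Reputation, ca.*
FROM        dbo.Users AS u
CROSS APPLY (   SELECT c.*, 
                       ROW_NUMBER() 
                           OVER ( PARTITION BY c.PostId 
                                  ORDER BY c.Score DESC ) AS PostScoreRank
                FROM   dbo.Comments AS c
                WHERE  u.Id = c.UserId
                AND    c.Score > 0 ) AS ca
WHERE       u.Reputation >= 100000
AND         ca.PostScoreRank < 1
ORDER BY    u.Reputation DESC
OPTION(RECOMPILE, USE HINT('ENABLE_PARALLEL_PLAN_PREFERENCE'));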

Thanks for reading!

Spills Week: Exchange Spill Excruciation

Many Have Entered, Few Have Left


For some background on Exchange Spills, check out this Great Post™ by Joe.

The root of this demo was trying to show people silly things about CTEs, how TOP can fence things off, and how TOP introduces a serial zone in plans unless it’s used inside the APPLY operator.
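
The full demo query isn’t reproduced here, but as a rough, hypothetical sketch of what “TOP inside APPLY” means: a TOP evaluated per outer row inside CROSS APPLY can run on the inner side of a parallel Nested Loops join, while a single TOP over the whole result has to sit above a Gather Streams and run serially.

/* Hypothetical sketch only, not the demo query from the post. */
SELECT u.Id, ca.Score
FROM dbo.Users AS u
CROSS APPLY ( SELECT TOP (1) c.Score
              FROM dbo.Comments AS c
              WHERE c.UserId = u.Id
              ORDER BY c.Score DESC ) AS ca
WHERE u.Reputation >= 100000;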

The result was this magnificent beast.

Long Mane


Why is this magnificent?

Because we have the trifecta. We have a spill on all three types of parallel exchanges.

It’s not gonna work out

Let’s take a closer look at those beauties.

*slaps hood*

Why Did That Happen?


This plan has a Merge Join, which requires ordered input.

That means the Repartition and Gather Streams operators preserve the order of the Id column in the Users table.

News Of The World

They don’t actually order by that column, they just keep it in order.

But what about Distribute Streams? GREAT QUESTION!

Legalize Burberry

It has the same Partition Column as Repartition Streams. They both have to respect the same order going into the Merge Join, because it’s producing ordered output to the Gather Streams operator.

In short, there’s a whole lot of buffers filling up while waiting for the next ordered value.

Were Parallel Merge Joins A Mistake?


[Probably] not, but they always make me nervous.

Especially when exchange operators are the direct parent or child of an order preserving operator. This also goes for stream aggregates.

I realize that these things are “edge cases”. It says so in the documentation.

The Exchange Spill event class indicates that communication buffers in a parallel query plan have been temporarily written to the tempdb database. This occurs rarely and only when a query plan has multiple range scans… Very rarely, multiple exchange spills can occur within the same execution plan, causing the query to execute slowly. If you notice more than five spills within the same query plan’s execution, contact your support professional.

Well, shucks. We only have three spills. It looks like we don’t qualify for a support professional.
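
If you’d rather catch these yourself than wait for a support professional, the Extended Events counterpart of that trace event class is sqlserver.exchange_spill. A minimal sketch of a session, with placeholder session and file names:

    CREATE EVENT SESSION exchange_spills ON SERVER
    ADD EVENT sqlserver.exchange_spill
    (   ACTION ( sqlserver.sql_text ) )
    ADD TARGET package0.event_file ( SET filename = N'exchange_spills' );

    ALTER EVENT SESSION exchange_spills ON SERVER STATE = START;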

Thanks for reading!

Spills Week: Hash Join Humiliation

Thirsty Thursday


If you’ve made it this far, you’ve learned a few things:

  • Not all spills are worth trying to fix
  • The more columns you select, the worse spills get
  • The larger your string datatypes are, the worse spills get

Today’s post won’t prove much beyond those things, but follow along if you’re interested.

Batter Up


Our first example looks like this:

SELECT v.*
FROM dbo.Votes AS v
LEFT  JOIN dbo.Comments AS c
ON v.PostId = c.PostId
WHERE ISNULL(v.UserId, c.UserId) > 2147483647;

We’re joining Votes to Comments with kind of a funny where clause, again.

This’ll force us to join both tables fully together, and then filter things out at the end.

Maximum Bang For Maximum Buck.

With no restrictions, this query runs for about 18 seconds with a 4.6GB memory grant.

Stolen wine

If we restrict the memory grant to 10MB, it runs for around 30 seconds. The spill is fairly large, too: 600k pages.
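
The screenshots don’t show the mechanism, but one way to cap a grant like this is the MAX_GRANT_PERCENT query hint, which limits the grant to a percentage of the maximum workspace memory. A sketch — the exact percent that lands on ~10MB depends on your memory settings:

SELECT v.*
FROM dbo.Votes AS v
LEFT  JOIN dbo.Comments AS c
ON v.PostId = c.PostId
WHERE ISNULL(v.UserId, c.UserId) > 2147483647
OPTION(MAX_GRANT_PERCENT = 0.1);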

Paul White Likes Cowbells

Dropping it down to 4.5MB follows a similar pattern. I told you. No surprises. Easy reading.

Slightly less good, eh?

Spill level 6. 1.4mm pages. Runs for a minute eighteen.

It’s almost like memory is kind of a big deal for SQL Server, huh?

That might be something to consider the next time you look at the size of your data in relation to the amount of memory that pesky VM admin swears is “enough” for SQL Server.

Home Team


Our first query was selecting all the columns from the Votes table.

This time, we’re gonna select everything from the Comments table, including that pesky NVARCHAR(700) column.

SELECT c.*
FROM dbo.Votes AS v
LEFT  JOIN dbo.Comments AS c
ON v.PostId = c.PostId
WHERE ISNULL(v.UserId, c.UserId) > 2147483647;
Get in loser

About 22 seconds, with a 9.7GB memory grant.

If you recall up a little further, when we just selected the columns from Votes, the grant was 4.6GB.

Still big, but look at those string columns inflating things again. Golly and gosh.

With a 10MB grant, we shoot right to nearly 2 minutes.

DEAR LORD

If you’re keeping score at home, bloggers are very patient people.

My, my, my

That’s 4:32 of my life that I’m never getting back. And I have to waste it again because I forgot to look at the hash bailout extended event for this.

There we are.

I’m bailing out of finishing this post tonight.

That represents a significant performance degradation.

Ahem.

Tomorrow, we’ll look at Exchange Spills, which represent an even worse one.

Thanks for reading!

Spills Week: Hash Match Aggravation

Quite Different


Hash spills are nothing like sort spills, in that even with (very fast) disks, there’s no immediate benefit to breaking the hash down into pieces via a spill.

In fact, there are many downsides, especially when memory is severely constrained.

The query that I’m using looks about like this:

SELECT   v.PostId, 
         COUNT_BIG(*) AS records
FROM     dbo.Votes AS v
GROUP BY v.PostId
HAVING   COUNT_BIG(*) > 2147483647;

The way this is written, we’re forced to count everything, and then only filter out rows at the end.

The idea is to spend no time waiting on rows to be displayed in SSMS.

Just One Int


To get an idea of what performance looks like, I’m starting with one integer column.

With no spills and a 776 MB memory grant, this runs for about 15 seconds.

Hello

If we drop the grant down to about 10 MB, we spill a bunch, but runtime doesn’t go up too much.

Hurts A Little

And if we drop it down to 4.5 MB, things go absolutely, terribly, pear shaped.

Hurts A Lot

The difference in both the number of pages spilled and the spill level are pretty dramatic.

TWO THOUSAND!

Expansive


If we expand the query a bit to look like this, memory starts to matter more:

SELECT   v.PostId, 
         v.UserId, 
         v.BountyAmount, 
         v.VoteTypeId, 
         v.CreationDate, 
         COUNT_BIG(*) AS records
FROM     dbo.Votes AS v
GROUP BY v.PostId, 
         v.UserId, 
         v.BountyAmount, 
         v.VoteTypeId, 
         v.CreationDate
HAVING   COUNT_BIG(*) > 2147483647;
Extra Extra

With more columns, the first spill escalates to a higher level faster, and the second spill absolutely wipes out.

It runs for almost 2 minutes.

EATS IT

As a side note, I really hate how long that Repartition Streams operator runs for.

Predictably


When we get the Comments table involved, that string column beats us right up.

Love On An Escalator

The first query asks for the largest possible grant on my laptop: 9.7GB. The second query gets 10MB.

The spill is godawful.

When we reduce the memory grant to 4.5MB, the spill runs another 1:20, for a total of 3:31.

Crud

Those spills are the root cause of why these queries run longer than any we’ve seen to date in this series.

Something quite funny happens when Hashes of any variety spill “too much” — which you can read about in more detail here.

There’s an Extended Event called “hash warning” that we can use to track recursion and bailout.
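
A minimal sketch of a session that captures it (session and file names are placeholders):

    CREATE EVENT SESSION hash_warnings ON SERVER
    ADD EVENT sqlserver.hash_warning
    (   ACTION ( sqlserver.sql_text ) )
    ADD TARGET package0.event_file ( SET filename = N'hash_warnings' );

    ALTER EVENT SESSION hash_warnings ON SERVER STATE = START;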

Here’s the final output aggregated:

[outdated political joke]
What happens when a Hash Aggregate bails out?

GOOD QUESTION.

In Which I Belabor The Point Anyway, Despite Saying…


Not to belabor the point too much, but if we select and group all the columns in the Comments table, things get a bit worse.

Not fond

Three minutes of spills. What a time to be alive.

But, yeah, the bulk of the trouble here is caused by the string column.

Adding in some numbers and a date on top doesn’t have a profound effect.

Taking Up


While Sort Spills certainly dragged query performance down a bit when memory was severely limited, Hash Spills were far more detrimental.

If I had to choose between which one to investigate first, it’d be Hash spills.

But again, small spills are often not worth the effort, and in some cases, you may always see spills.

If your server is totally under-provisioned from a memory perspective, or if there are multiple concurrent memory-consuming operations (i.e. they can’t share intra-query memory), it may not be possible for a large enough grant to be given to satisfy all of them.

This is part of why writing very large queries can be perilous, and it’s usually worth splitting them up.
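
If you want to see that fight for workspace memory as it happens, a quick look at current grants and waits does the trick:

SELECT session_id,
       requested_memory_kb,
       granted_memory_kb,
       used_memory_kb,
       wait_time_ms
FROM   sys.dm_exec_query_memory_grants
ORDER BY requested_memory_kb DESC;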

In tomorrow’s post, we’ll look at hash joins.

Thanks for reading!

Spills Week: When Sort Spills Start To Matter

Imbalance


In yesterday’s post, we looked at a funny situation where a query that spilled was about 5 seconds faster than one that didn’t.

Here’s what the query looked like:

SELECT x.PostId
FROM (
SELECT v.PostId, 
       ROW_NUMBER() OVER ( ORDER BY v.PostId DESC ) AS n
FROM dbo.Votes AS v
) AS x
WHERE x.n = 1;

Now, I can add more columns in, and the timing will hold up:

SELECT x.Id, 
       x.PostId, 
       x.UserId, 
       x.BountyAmount, 
       x.VoteTypeId, 
       x.CreationDate
FROM (
SELECT v.Id, 
       v.PostId, 
       v.UserId, 
       v.BountyAmount, 
       v.VoteTypeId, 
       v.CreationDate,
       ROW_NUMBER() OVER ( ORDER BY v.PostId DESC ) AS n
FROM dbo.Votes AS v
) AS x
WHERE x.n = 1;
Gas Pedal

They both got slower, the non-spill plan by about 2.5s, and the spill plan by about 4.3s.

But the spill plan is still 3s faster. With fewer columns it was 5s faster, but hey.

No one said this was easy.

Fully comparing things from yesterday, when memory is capped at 0.0, the query takes much longer now, with more columns:

Killing Time

To compare the “fast” spills, here are yesterday’s and today’s warnings.

More Pages, More Problems

With one integer column, we spilled 100k pages.

With five integer columns and one datetime column, we spill 450k pages.

That’s a non-trivial amount. That’s like every column adding 75k pages to the spill.

If you’re really worried about spills: STOP SELECTING SO MANY COLUMNS.

For The Worst


I promised to show you things going quite downhill, and for the spill query to no longer be faster.

To do that, we need a different table.

I’m going to use the Comments table, because it has a column called Text in it, which is an NVARCHAR(700).

Very few comments are 700 characters long. The majority are < 120 or so.
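
If you want to check that claim against your copy of the database, a quick length histogram does it:

/* Bucket comments by Text length in 100-character steps. */
SELECT ( LEN(c.Text) / 100 ) * 100 AS length_bucket,
       COUNT_BIG(*) AS comments
FROM   dbo.Comments AS c
GROUP BY ( LEN(c.Text) / 100 ) * 100
ORDER BY length_bucket;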

5-7-9

This query looks about like so:

SELECT x.Id, 
       x.CreationDate, 
       x.PostId, 
       x.Score, 
       x.Text, 
       x.UserId
FROM (
SELECT c.Id, 
       c.CreationDate, 
       c.PostId, 
       c.Score, 
       c.Text, 
       c.UserId,
       ROW_NUMBER() 
           OVER ( ORDER BY c.PostId DESC ) AS n
FROM dbo.Comments AS c
) AS x
WHERE x.n = 1;

And the results are… icky.

Gigs To Spare?

The top query asks for 9.7GB of RAM. That’s as much as my laptop can give out.

It still spills. Nearly 10GB of memory grant, and it still spills.

If you care about spills: STOP OVERSIZING STRING COLUMNS:

Billy Budd

Apparently only spilling 1mm pages is a lot faster than spilling 2.5mm pages.

But still much slower than not spilling string columns.

Who knew?
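
To see how much the declared size alone matters, here’s a hedged experiment that isn’t from the original demo: cast the column down to something closer to its real contents and compare the grant. Sort memory is estimated from declared column sizes (variable-length columns are assumed to be about half full), so the narrower cast should ask for a much smaller grant, at the cost of silently truncating any comment longer than 120 characters.

/* Hypothetical experiment: same sort, narrower declared string type. */
SELECT x.Id, 
       x.ShortText
FROM (
SELECT c.Id, 
       CONVERT(NVARCHAR(120), c.Text) AS ShortText,
       ROW_NUMBER() 
           OVER ( ORDER BY c.PostId DESC ) AS n
FROM dbo.Comments AS c
) AS x
WHERE x.n = 1;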

Matters of Whale


I was using the Stack Overflow 2013 database for that, which is fairly big relative to the 64GB of RAM my laptop has.

If I go back to using the 2010 version, we can get a better comparison, because the first query won’t spill anymore.

It’s like what all those query tuners keep telling you.

Some points to keep in mind here:

  • I’m testing with (very fast) local storage
  • I don’t have tempdb contention

But still, it seems like spilling out non-string columns is significantly less painful than spilling out string columns.

Ahem.

“Seems.”

I’ll reiterate two points:

  • Stop selecting so many columns
  • Stop oversizing string columns

In the next two posts, we’ll look at hash match and hash join spills under similar circumstances.

Thanks for reading!

Spills Week: When Sort Spills Might Not Matter

Pre-faced


Every post this week is going to be about spills. No crazy in-depth, technical, debugger type stuff.

Just some general observations about when they seem to matter more for performance, and when you might be chasing nothing by fixing them.

The queries I use are sometimes a bit silly looking, but the outcomes are ones I see.

Sometimes I correct them and it’s a good thing. Other times I correct them and nothing changes.

Anyway, all these posts started because of the first demo, which I intended to be a quick post.

Oh well.

Intervention


Spills are a good thing to make note of when you’re tuning a query.

They often show up as a symptom of a bigger problem:

  • Parameter sniffing
  • Bad cardinality estimates

My goal is generally to fix the larger problem rather than hem and haw over the spill.

It’s also important to keep spills in perspective.

  • Some are small and inconsequential
  • Some are going to happen no matter what

And some spills… Some spills…

Can’t Hammer This


Pay close attention to these two query plans.

Completely unfounded

Not sure where to look? Here’s a close up.

Grounded

See that, there?

Yeah.

That’s a Sort with a Spill running about 5 seconds faster than a Sort without a Spill.

Wild stuff, huh? Here’s what it looks like.

Still got it.

Not inconsequential. >100k 8KB pages.

Spill level 2, too. Four threads.

A note from future Erik: if I run this with the grant capped at 0.0 rather than 0.1, the spill plan takes 12 seconds, just like the non-spill plan.

There are limits to how efficiently a spill can be handled when memory is capped at a level that increases the number of pages spilled without increasing the spill level.

Z to the Ero

But it’s still funny that the spill and non-spill plans take about the same time.

Why Is This Faster?


Well, the first thing we have to talk about is storage, because that’s where I spilled to.

My Lenovo P52 has some seriously fast SSDs in it. Here’s what they give me, via CrystalDiskMark:

Girlfriend In A Cartoon

If you’re on good local storage, you might see those speeds.

If you’re on a SAN, I don’t care how much anyone squawks about how fast it is: you’re not gonna see that.

(But seriously, if you do get those speeds on a SAN, tell me about your setup.)

(If you think you should but you don’t, uh… Operators are standing by.)

With that out of the way, let’s hit some reference material.

Kiwis & Machanics


First, Paul White:

Multiple merge passes can be used to work around this. The general idea is to progressively merge small chunks into larger ones, until we can efficiently produce the final sorted output stream. In the example, this might mean merging 40 of the 800 first-pass sorted sets at a time, resulting in 20 larger chunks, which can then be merged again to form the output. With a total of two extra passes over the data, this would be a Level 2 spill, and so on. Luckily, a linear increase in spill level enables an exponential increase in sort size, so deep sort spill levels are rarely necessary.

Next, Paul White showing an Adam Machanic demo:

Well, okay, I’ll paraphrase here. It’s faster to sort a bunch of small things than one big thing.

If you watch the demo, that’s what happens with using the cross apply technique.

And that’s what’s happening here, too, it looks like.

On With It


The spills to (very fast) disk work in my favor here, because we’re sorting smaller data sets, then reading those small data sets back from (very fast) disk, and sorting/merging them together into a final finished product.

Of course, this has limits, and is likely unrealistic in many tuning scenarios. I probably should have led with that, huh?

But hey, if you ever fix a Sort Spill and have a query slow down, now you know why.

In tomorrow’s post, you’ll watch my luck run out (very fast) with different data.

Thanks for reading!