There are lots of great options out there for figuring out what’s happening on a server. If you’re the kind of person still running sp_who, or sp_who2, or sp_who8045, it’s time to stop.
You’ve got much better options, like sp_WhoIsActive, or the First Responder Kit. They’re both great, but sometimes I needed this very specific information very quickly without all the bells and whistles.
I’ve had to do some pretty weird troubleshooting in my life, where a server was totally jammed up, and these help me figure out what’s going on.
I call it the pressure detector. It’s four simple DMV queries that will give you different levels of detail about memory and CPU usage currently taking place on a server. Like a lot of these scripts, when a server is Baltimoring, you’re better off running them using the DAC. Otherwise, you’re kinda stuck in the same place all your other queries are.
So what does this help you with? Situations where you’re running out of worker threads, or running out of memory.
When you hit those, you’ll get smacked with a couple nasty waits: THREADPOOL when worker threads run out, and RESOURCE_SEMAPHORE when memory grants do.
The thing is, sometimes it’s hard to see what’s happening, what’s causing problems, and what’s on the receiving end.
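For a rough idea of the kind of thing the proc looks at, here’s a minimal sketch using two standard DMVs. The columns here are my picks, not the proc’s output, and on a truly jammed-up server you’d run this over the DAC (prefix the server name with ADMIN: when you connect).

--Worker thread pressure: threads in use vs. the configured maximum
SELECT dosi.max_workers_count,
       workers_used = (SELECT COUNT_BIG(*) FROM sys.dm_os_workers AS dow)
FROM sys.dm_os_sys_info AS dosi;

--Memory pressure: who got big grants, and who's stuck waiting for one
SELECT deqmg.session_id,
       deqmg.requested_memory_kb,
       deqmg.granted_memory_kb,
       deqmg.wait_time_ms
FROM sys.dm_exec_query_memory_grants AS deqmg
ORDER BY deqmg.requested_memory_kb DESC;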
Check out this video to walk you through the proc results.
Thanks for reading!
This was originally posted by me as an answer here. I’m re-posting it locally for posterity.
The two reasons that I find the most compelling not to use SELECT * in SQL Server are memory grants and index usage.
When queries need to Sort, Hash, or go Parallel, they ask for memory for those operations. The size of the memory grant is based on the size of the data, both row- and column-wise.
String data especially has an impact on this, since the optimizer guesses half of the defined length as the ‘fullness’ of the column. So for a VARCHAR(100), it’s 50 bytes * the number of rows.
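To put rough numbers on that (my arithmetic, not something the engine prints out):

/* The 50% 'fullness' guess, per row:
   VARCHAR(100)  -> 50 characters * 1 byte  =  50 bytes
   NVARCHAR(100) -> 50 characters * 2 bytes = 100 bytes
   Sort 1,000,000 rows, and that single NVARCHAR(100) column
   adds ~100 MB to the size-of-data estimate, before overhead. */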
Using Stack Overflow as an example, if I run these queries against the Users table:
SELECT TOP 1000 u.DisplayName
FROM dbo.Users AS u
ORDER BY u.Reputation;

SELECT TOP 1000 u.DisplayName, u.Location
FROM dbo.Users AS u
ORDER BY u.Reputation;
DisplayName is NVARCHAR(40), and Location is NVARCHAR(100).
Without an index on Reputation, SQL Server needs to sort the data on its own.
But the memory grant nearly doubles.
DisplayName:
DisplayName, Location:
This gets much worse with SELECT *, asking for 8.2 GB of memory:
It does this to cope with the larger amount of data it needs to pass through the Sort operator, including the AboutMe column, which has a MAX length.
If I have this index on the Users table:
CREATE NONCLUSTERED INDEX ix_Users
ON dbo.Users (CreationDate ASC, Reputation ASC, Id ASC);
And I have this query, with a WHERE clause that matches the index, but doesn’t cover/include all the columns the query is selecting…
SELECT u.*, p.Id AS PostId
FROM dbo.Users AS u
JOIN dbo.Posts AS p
    ON p.OwnerUserId = u.Id
WHERE u.CreationDate > '20171001'
AND   u.Reputation > 100
AND   p.PostTypeId = 1
ORDER BY u.Id;
The optimizer may choose not to use the narrow index with a key lookup, in favor of just scanning the clustered index.
You would either have to create a very wide index, or experiment with rewrites to get the narrow index chosen, even though using the narrow index results in a much faster query.
CX (clustered index scan):
SQL Server Execution Times: CPU time = 6374 ms, elapsed time = 4165 ms.
NC (narrow nonclustered index with key lookups):
SQL Server Execution Times: CPU time = 1623 ms, elapsed time = 875 ms.
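If a wider index is off the table, one rewrite sketch that sometimes gets the narrow index chosen is to filter down to just the keys first, then join back to Users for the remaining columns. No promises; the optimizer is free to flatten this right back out.

SELECT u2.*, p.Id AS PostId
FROM
(
    SELECT u.Id
    FROM dbo.Users AS u
    WHERE u.CreationDate > '20171001'
    AND   u.Reputation > 100
) AS k
JOIN dbo.Users AS u2
    ON u2.Id = k.Id
JOIN dbo.Posts AS p
    ON p.OwnerUserId = u2.Id
WHERE p.PostTypeId = 1
ORDER BY u2.Id;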
Thanks for reading!
This was originally posted as an answer by me here. I’m re-posting it locally for posterity.
For SQL Server queries that require additional memory, grants are derived for serial plans. If a parallel plan is explored and chosen, memory will be divided evenly among threads.
Memory grant estimates are based on the number of rows the optimizer expects, and the size of the data in them (more on size-of-data math below).
If a parallel plan is chosen, there is some memory overhead to process parallel exchanges (Distribute, Redistribute, and Gather Streams), but their memory needs are not calculated the same way as Sorts and Hashes.
The most common operators that ask for memory are Sorts and Hashes (both joins and aggregates).
A less common operation that requires memory is inserting into columnstore indexes. These also differ in that their memory grants are currently multiplied by DOP.
Memory needs for Sorts are typically much higher than for hashes. Sorts will ask for at least estimated size of data for a memory grant, since they need to sort all result columns by the ordering element(s). Hashes need memory to build a hash table, which does not include all selected columns.
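If you want to see what cached queries have been asking for, and actually using, sys.dm_exec_query_stats exposes grant columns on 2016+ (and 2012 SP3):

SELECT TOP (10)
       qs.max_grant_kb,
       qs.max_used_grant_kb,
       st.text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.max_grant_kb DESC;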
If I run this query, intentionally hinted to DOP 1, it will ask for 166 MB of memory.
SELECT *
FROM
(
    SELECT TOP (1000) u.Id
    FROM dbo.Users AS u
    ORDER BY u.Reputation
) AS u
OPTION(MAXDOP 1);
If I run this query (again, DOP 1), the plan will change, and the memory grant will go up slightly.
SELECT *
FROM
(
    SELECT TOP (1000) u.Id
    FROM dbo.Users AS u
    ORDER BY u.Reputation
) AS u
JOIN
(
    SELECT TOP (1000) u.Id
    FROM dbo.Users AS u
    ORDER BY u.Reputation
) AS u2
    ON u.Id = u2.Id
OPTION(MAXDOP 1);
There are two Sorts, and now a Hash Join. The memory grant bumps up a little bit to accommodate the hash build, but it does not double because the Sort operators cannot run concurrently.
If I change the query to force a nested loops join, the grant will double to deal with the concurrent Sorts.
SELECT *
FROM
(
    SELECT TOP (1000) u.Id
    FROM dbo.Users AS u
    ORDER BY u.Reputation
) AS u
INNER LOOP JOIN --Force the loop join
(
    SELECT TOP (1000) u.Id
    FROM dbo.Users AS u
    ORDER BY u.Reputation
) AS u2
    ON u.Id = u2.Id
OPTION(MAXDOP 1);
The memory grant doubles because Nested Loop is not a blocking operator, and Hash Join is.
This query selects string data of different combinations. Depending on which columns I select, the size of the memory grant will go up.
The way size of data is calculated for variable-length string data is rows * 50% of the column’s declared length. This is true for VARCHAR and NVARCHAR, though NVARCHAR estimates are doubled, since it stores double-byte characters. This does change in some cases with the new Cardinality Estimator, but the details aren’t documented.
Size of data also matters for hash operations, but not to the same degree that it does for Sorts.
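The per-row guesses behind the grants annotated in the next query look roughly like this (my math; the engine layers overhead on top):

/* NVARCHAR(40)  -> 20 chars * 2 bytes  =  40 bytes per row
   NVARCHAR(100) -> 50 chars * 2 bytes  = 100 bytes per row
   NVARCHAR(200) -> 100 chars * 2 bytes = 200 bytes per row
   NVARCHAR(MAX) -> treated like a very long string, which is
   why AboutMe drags the grant all the way up to ~9GB. */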
SELECT *
FROM
(
    SELECT TOP (1000)
           u.Id          -- 166MB (INT)
         , u.DisplayName -- 300MB (NVARCHAR 40)
         , u.WebsiteUrl  -- 900MB (NVARCHAR 200)
         , u.Location    -- 1.2GB (NVARCHAR 100)
         , u.AboutMe     -- 9GB (NVARCHAR MAX)
    FROM dbo.Users AS u
    ORDER BY u.Reputation
) AS u
OPTION(MAXDOP 1);
If I run this query at different DOPs, the memory grant is not multiplied by DOP.
SELECT *
FROM
(
    SELECT TOP (1000) u.Id
    FROM dbo.Users AS u
    ORDER BY u.Reputation
) AS u
INNER HASH JOIN
(
    SELECT TOP (1000) u.Id
    FROM dbo.Users AS u
    ORDER BY u.Reputation
) AS u2
    ON u.Id = u2.Id
ORDER BY u.Id, u2.Id -- Add an ORDER BY
OPTION(MAXDOP ?);
There are slight increases to deal with more parallel buffers per exchange operator, and perhaps there are internal reasons that the Sort and Hash builds require extra memory to deal with higher DOP, but it’s clearly not a multiplying factor.
Thanks for reading!
We’ve got no supporting indexes right now. That’s fine.
The optimizer is used to not having helpful indexes. It can figure things out.
SELECT p.*
FROM dbo.Posts AS p
JOIN dbo.Votes AS v
    ON p.Id = v.PostId
WHERE p.PostTypeId = 2
AND   p.CreationDate >= '20131225'
ORDER BY p.Id;
So uh. We got a merge join here. For some reason. And a query that runs for 27 seconds.
The optimizer was all “no, don’t worry, we’re good to sort 52 million rows. We got this.”
[You don’t got this — ED]
Since we have an order by on the Id column of the Posts table, and that column is the Primary Key and Clustered index, it’s already in order.
The optimizer chose to sort the PostId column from the Votes table, and preserve the index order of the Id column from Posts.
Merge Joins expect ordered input on both sides, don’tcha know?
It could have chosen a Hash Join, but then the order of the Id column from the Posts table wouldn’t have been preserved on the other side.
Merge Joins are order preserving, Hash Joins aren’t. If we use a Hash Join, we’re looking at ordering the results of the join after it’s done.
But why?
Going into the Merge Join, we have a Good Guess™
Coming out of the Merge Join, we have a Bad Guess™
Thinking back to the Sort operator, it only has to order the PostId column from the Votes table.
That matters.
To compare, we need to see what happens with a Hash Join.
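Here’s the same query from before, hinted into a Hash Join (a query-level OPTION(HASH JOIN) hint works just as well):

SELECT p.*
FROM dbo.Posts AS p
INNER HASH JOIN dbo.Votes AS v
    ON p.Id = v.PostId
WHERE p.PostTypeId = 2
AND   p.CreationDate >= '20131225'
ORDER BY p.Id;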
Okay, ignore the fact that this one runs for 2.6 seconds, and the other one ran for 27 seconds.
Just, like, put that aside.
Here’s why:
This Sort operator is different. We need to sort all of the columns in the Posts table by the Id column.
Remember that the Id column is now out of order after the Hash Join.
Needing to sort all those columns, including a bunch of string columns, along with an NVARCHAR(MAX) column — Body — inflates the ever-weeping-Jesus out of the memory grant.
The Hash Join plan is not only judged to be more than twice as expensive, but it also asks for a memory grant that’s ~3x the size of the Merge Join plan.
Let’s tally up where we’re at.
Both queries have identical estimated rows.
The optimizer chooses the Merge Join plan because it’s cheaper.
That choice has real consequences for reliability: the cheaper plan runs for 27 seconds, while the more expensive plan runs for 2.6.
In a world where memory grants adjust between executions, I’ll take the Hash Join plan any day of the week.
But this is SQL Server 2017, and we don’t get that without Batch Mode, and we don’t get Batch Mode without playing some tricks.
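For reference, the usual trick of that era is a filtered nonclustered columnstore index with a contradictory predicate, so it can never hold rows but still makes Batch Mode available to the optimizer. A sketch (the index name is mine):

CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_batch_mode
ON dbo.Posts (Id)
WHERE Id = -1 AND Id = -2;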
There are lots of solutions if you’re allowed to tune queries or indexes, but not so much otherwise.
In the next couple posts, I’ll look at different ways to approach this.
Thanks for reading!
I have two queries. They return the same number of rows.
The only difference is one column in the select list.
This query has the Id column, which is the primary key and clustered index of the Posts table.
SELECT DISTINCT
       p.Id, p.AcceptedAnswerId, p.AnswerCount, p.Body,
       p.ClosedDate, p.CommentCount, p.CommunityOwnedDate,
       p.CreationDate, p.FavoriteCount, p.LastActivityDate,
       p.LastEditDate, p.LastEditorDisplayName, p.LastEditorUserId,
       p.OwnerUserId, p.ParentId, p.PostTypeId, p.Score,
       p.Tags, p.Title, p.ViewCount
FROM dbo.Posts AS p
JOIN dbo.Votes AS v
    ON p.Id = v.PostId
WHERE p.PostTypeId = 2
AND   p.CreationDate >= '20131225'
ORDER BY p.Id;
The query plan for it looks like this:
Notice that no operator in this plan performs any kind of aggregation.
There’s no Hash Match Aggregate, no Stream Aggregate, no Distinct Sort, NADA!
It runs for ~1.9 seconds to return about 25k rows.
Watch how much changes when we remove that Id column from the select list.
SELECT DISTINCT
       p.AcceptedAnswerId, p.AnswerCount, p.Body,
       p.ClosedDate, p.CommentCount, p.CommunityOwnedDate,
       p.CreationDate, p.FavoriteCount, p.LastActivityDate,
       p.LastEditDate, p.LastEditorDisplayName, p.LastEditorUserId,
       p.OwnerUserId, p.ParentId, p.PostTypeId, p.Score,
       p.Tags, p.Title, p.ViewCount
FROM dbo.Posts AS p
JOIN dbo.Votes AS v
    ON p.Id = v.PostId
WHERE p.PostTypeId = 2
AND   p.CreationDate >= '20131225';
This is what the query plan now looks like:
Zooming in a bit…
After we Scan the Posts table, we sort about 47k rows.
After the join to Votes, we aggregate data twice. There are two Stream Aggregate operators.
What do we sort?
We Sort every column in the table by every column in the table.
In other words, we order by every column we’ve selected.
What do we aggregate?
Everything. Twice.
When selecting distinct rows, it can be beneficial to include a column that the optimizer can guarantee is unique in the set of selected columns. Think of a primary key, or another column with a uniqueness constraint on it.
Without that, you can end up doing a lot of extra work to create a distinct result set.
Of course, there are times when that changes the logic of the query.
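When the join is only there as a filter, another option is skipping both the join and the DISTINCT with EXISTS. A sketch, assuming you don’t need any columns from Votes; note it returns one row per post, which isn’t strictly identical to the DISTINCT version:

SELECT p.AcceptedAnswerId, p.AnswerCount, p.Body,
       p.ClosedDate, p.CommentCount, p.CommunityOwnedDate,
       p.CreationDate, p.FavoriteCount, p.LastActivityDate,
       p.LastEditDate, p.LastEditorDisplayName, p.LastEditorUserId,
       p.OwnerUserId, p.ParentId, p.PostTypeId, p.Score,
       p.Tags, p.Title, p.ViewCount
FROM dbo.Posts AS p
WHERE p.PostTypeId = 2
AND   p.CreationDate >= '20131225'
AND   EXISTS (SELECT 1 FROM dbo.Votes AS v WHERE v.PostId = p.Id);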
Thanks for reading!
During my (sold out, baby!) Madison precon, one attendee asked a great question while we were talking about memory grants.
Turns out, if you use the SQL Variant datatype, the memory grants function a lot like they do for any long string type.
From the documentation, which hopefully won’t move or get deleted:
sql_variant can have a maximum length of 8016 bytes. This includes both the base type information and the base type value. The maximum length of the actual base type value is 8,000 bytes.
Since the optimizer needs to plan for your laziness/indecisiveness/lack of respect for human life/inexperience, you can end up getting some rather enormous memory grants, regardless of the type of data you store in variant columns.
Here’s a table with a limited set of columns from the Users table.
CREATE TABLE dbo.UserVariant
(
    Id SQL_VARIANT,
    CreationDate SQL_VARIANT,
    DisplayName SQL_VARIANT,
    Orderer INT IDENTITY
);

INSERT dbo.UserVariant WITH(TABLOCKX)
    (Id, CreationDate, DisplayName)
SELECT u.Id, u.CreationDate, u.DisplayName
FROM dbo.Users AS u;
In all, about 2.4 million rows end up in there. In the real table, the Id column is an INT, the CreationDate column is a DATETIME, and the DisplayName column is an NVARCHAR(40).
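Not that it helps the grants any, but you can confirm what actually landed in those columns with SQL_VARIANT_PROPERTY:

SELECT TOP (1)
       SQL_VARIANT_PROPERTY(uv.Id, 'BaseType') AS id_base_type,
       SQL_VARIANT_PROPERTY(uv.CreationDate, 'BaseType') AS date_base_type,
       SQL_VARIANT_PROPERTY(uv.DisplayName, 'MaxLength') AS name_max_length
FROM dbo.UserVariant AS uv;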
Sadly, no matter which column we select, the memory grant is the same:
SELECT TOP (101) uv.Id
FROM dbo.UserVariant AS uv
ORDER BY uv.Orderer;

SELECT TOP (101) uv.CreationDate
FROM dbo.UserVariant AS uv
ORDER BY uv.Orderer;

SELECT TOP (101) uv.DisplayName
FROM dbo.UserVariant AS uv
ORDER BY uv.Orderer;

SELECT TOP (101) uv.Id, uv.CreationDate, uv.DisplayName
FROM dbo.UserVariant AS uv
ORDER BY uv.Orderer;
It’s also the maximum memory grant my laptop will allow: about 9.6GB.
As if there aren’t enough reasons to avoid sql_variant, here’s another one.
Thanks for reading.