CHAR vs VARCHAR Memory Grants

Strings Were A Mistake


While working with a client recently, we found that someone, at some point in time, probably during the original migration from Access, had chosen CHAR columns rather than VARCHAR columns.

Okay, fine. How bad could it be?

Bad enough that… a whole bunch of columns that only ever had a single character in them were stored in CHAR(1000) columns.

Which was wildly inflating memory grants, and causing all sorts of nasty issues.

Table create script is at the end of the post!

Del Granto


To show you how different the grants are for char vs varchar, I need to give you a little background on sort memory grant estimates.

When the optimizer estimates how much memory it’ll need, the calculation is based on the number and width of the rows that will have to get sorted.

For variable length string columns, it estimates that half the number of bytes will be filled. So if you have a varchar(100) column it’ll estimate 50 bytes will be filled for every row, and for an nvarchar(100) column it’ll estimate that 100 bytes will be filled for every row, because unicode characters are stored as double-byte to account for accent marks, etc.

So, yes, identically sized varchar and nvarchar columns will have different memory grant requirements.

And yes, identically sized (n)char and (n)varchar columns will also have different memory grant requirements.
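Some back-of-napkin math for the 5 million row demo table below (my arithmetic, not anything SQL Server reports):

-- char(1000):    5,000,000 rows * 1,000 bytes (always full) = ~4.7GB to sort
-- varchar(1000): 5,000,000 rows * 500 bytes (the 50% guess) = ~2.3GB estimated
-- varchar(1000) actual: 5,000,000 rows * 1 byte = ~5MB, which is why memory
-- grant feedback can shrink the varchar grant so dramatically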

Granto Loco


Let’s take these two queries, and let memory grant feedback right-size the grants for these two queries:

DECLARE 
    @c char(1000);
SELECT
    @c = m.char_col
FROM dbo.murmury AS m
ORDER BY m.some_date DESC;

DECLARE 
    @v varchar(1000);
SELECT
    @v = m.varchar_col
FROM dbo.murmury AS m
ORDER BY m.some_date DESC;
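If you want to check where the grants land after a few executions, recent versions expose grant columns in the plan cache (a sketch; filter however you like):

SELECT TOP (10)
    qs.execution_count,
    qs.last_grant_kb,
    qs.last_used_grant_kb,
    st.text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.text LIKE N'%murmury%'
ORDER BY qs.last_grant_kb DESC;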

Here are the fully adjusted and stabilized grants:

hangin’ tough

Around 9GB vs 441MB. All because those CHAR columns are padded out with 999 empty characters.

camp nightmare

So hey, if you need a reason to review schema for char column sanity, this just might be it.
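If you want a quick inventory of candidates to review, the catalog views will cough them up (a sketch; the 20-byte cutoff is arbitrary, so adjust to taste):

SELECT
    s.name AS schema_name,
    t.name AS table_name,
    c.name AS column_name,
    c.max_length
FROM sys.columns AS c
JOIN sys.tables AS t
    ON t.object_id = c.object_id
JOIN sys.schemas AS s
    ON s.schema_id = t.schema_id
JOIN sys.types AS ty
    ON ty.user_type_id = c.user_type_id
WHERE ty.name IN (N'char', N'nchar')
AND   c.max_length > 20
ORDER BY c.max_length DESC;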

Thanks for reading!

Scripto


DROP TABLE IF EXISTS dbo.murmury;

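-- ~5 million rows; LEFT(x.text, 1) means both string columns
-- only ever hold a single character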
SELECT
    ISNULL
    (
        x.n, 
        0
    ) AS id,
    DATEADD
    (
        MINUTE,
        x.n,
        '20210101'
    ) AS some_date,
    CONVERT
    (
        char(1000),
        LEFT(x.text, 1)
    ) AS char_col,
    CONVERT
    (
        varchar(1000),
        LEFT(x.text, 1)
    ) AS varchar_col
INTO dbo.murmury
FROM 
(
SELECT TOP (1000 * 5000)
    ROW_NUMBER() OVER
    (
        ORDER BY 
            1/0
    ) AS n,
    m.text
FROM sys.messages AS m
CROSS JOIN sys.messages AS m2
) AS x;

ALTER TABLE dbo.murmury
ADD CONSTRAINT pk_mr PRIMARY KEY CLUSTERED(id);


The Three Kinds Of Memory Contention In SQL Server

Savings and Loans


Whoever decided to give “memory bank” its moniker was wise beyond their years, or maybe they just made a very apt observation: all memory is on loan.

Even in the context we’ll be talking about, when SQL Server has lock pages in memory enabled, the pages that are locked in memory may not have permanent residency.

If your SQL Server doesn’t have enough memory, or if various workload elements are untuned, you may hit one of these scenarios:

  • Query Memory Grant contention (RESOURCE_SEMAPHORE)
  • Buffer Cache contention (PAGEIOLATCH_XX)
  • A mix of the two, where both are fighting over finite resources

It’s probably fair to note that not all query memory grant contention will result in RESOURCE_SEMAPHORE waits. There are times when you’ll have just enough queries asking for memory grants to knock a significant number of pages out of the buffer pool, causing an over-reliance on disk, without ever hitting the point where you’ve exhausted the amount of memory that SQL Server will loan out to queries.

To help you track down any of these scenarios, you can use my stored procedure sp_PressureDetector to see what’s going on with things.
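If you just want a raw look at the headline waits before reaching for the proc, the wait stats DMV works (a sketch; these numbers are cumulative since startup, so trend them rather than eyeball them once):

SELECT
    ws.wait_type,
    ws.waiting_tasks_count,
    ws.wait_time_ms
FROM sys.dm_os_wait_stats AS ws
WHERE ws.wait_type = N'RESOURCE_SEMAPHORE'
OR    ws.wait_type LIKE N'PAGEIOLATCH%'
ORDER BY ws.wait_time_ms DESC;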

Black Friday


Most servers I see have a mix of the two issues. Everyone complains about SQL Server being a memory hog without really understanding why. Likewise, many people are very proud of how fast their storage is without really understanding how much faster memory is. It’s quite common to hear someone say that they recently got a whole bunch of brand new shiny flashy storage but performance is still terrible on their server with 64GB of RAM and 1TB of data.

I recently had a client migrate some infrastructure to the cloud, and they were complaining about how queries got 3x slower. As it turned out, the queries were accruing 3x more PAGEIOLATCH waits with the same amount of memory assigned to SQL Server. Go figure.

If you’d like to see those waits in action, and how sp_PressureDetector can help you figure out which queries are causing problems, check out this video.

Market Economy


The primary driver of how much memory you need is how much control you have over the database. The less control you have, the more memory you need.

Here’s an example: One thing that steals control from you is using an ORM. When you let one translate code into queries, Really Bad Things™ can happen. Even with Perfect Indexes™ available, you can get some very strange queries and subsequently very strange query plans.

One of the best ways to take some control back isn’t even available in Standard Edition.

If you do have control, the primary drivers of how much memory you need are how effective your indexes are, and how well your queries are written to take advantage of them. You can get away with less memory in general because your data footprint in the buffer pool will be a lot smaller.

You can watch a video I recorded about that here:

Thanks for watching!

Quickly Detecting CPU and Memory Pressure On A SQL Server

Options Ahoy


There are lots of great options out there for figuring out what’s happening on a server. If you’re the kind of person still running sp_who, or sp_who2, or sp_who8045, it’s time to stop.

You’ve got much better options, like sp_WhoIsActive, or the First Responder Kit. They’re both great, but sometimes I needed this very specific information very quickly without all the bells and whistles.

I’ve had to do some pretty weird troubleshooting in my life, where a server was totally jammed up, and these queries have helped me figure out what’s going on.

Sir-Procs-A-Lot


I call it the pressure detector. It’s four simple DMV queries that will give you different levels of detail about memory and CPU usage currently taking place on a server. Like a lot of these scripts, when a server is Baltimoring, you’re better off running them using the DAC. Otherwise, you’re kinda stuck in the same place all your other queries are.
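If you’ve never used the DAC, it’s just a prefix on the server name, e.g. sqlcmd -S admin:YourServerName -E (the server name is a placeholder; -E uses Windows auth, swap in -U and -P for SQL auth). Only one DAC connection is allowed at a time, so don’t hog it.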

So what does this help you with? Situations where you’re running out of worker threads, or when you’re running out of memory.

When you hit those, you’ll get smacked with a couple nasty waits:

  • THREADPOOL: Running out of worker threads
  • RESOURCE_SEMAPHORE: Running out of query memory

The thing is, sometimes it’s hard to see what’s happening, what’s causing problems, and what’s on the receiving end.
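For the worker thread side of things, a quick headroom check looks something like this (a sketch; it compares workers in use to the configured maximum):

SELECT
    osi.max_workers_count,
    SUM(s.current_workers_count) AS current_workers
FROM sys.dm_os_schedulers AS s
CROSS JOIN sys.dm_os_sys_info AS osi
WHERE s.status = N'VISIBLE ONLINE'
GROUP BY osi.max_workers_count;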

Check out this video to walk you through the proc results.

Video Killed


Thanks for reading!

All The Problems With Select *

This was originally posted by me as an answer here. I’m re-posting it locally for posterity.

The two reasons that I find the most compelling not to use SELECT * in SQL Server are

  1. Memory Grants
  2. Index usage

Memory Grants


When queries need to Sort, Hash, or go Parallel, they ask for memory for those operations. The size of the memory grant is based on the size of the data, both row- and column-wise.

String data especially has an impact on this, since the optimizer guesses half of the defined length as the ‘fullness’ of the column. So for a VARCHAR 100, it’s 50 bytes * the number of rows.

Using Stack Overflow as an example, if I run these queries against the Users table:

SELECT TOP 1000 
       u.DisplayName 
FROM dbo.Users AS u 
ORDER BY u.Reputation;


SELECT   TOP 1000
         u.DisplayName,
         u.Location
FROM     dbo.Users AS u
ORDER BY u.Reputation;


DisplayName is NVARCHAR 40, and Location is NVARCHAR 100.
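Running the 50% guess against those types (my arithmetic, before any per-row overhead):

-- DisplayName NVARCHAR(40):  20 characters estimated * 2 bytes = 40 bytes per row
-- Location NVARCHAR(100):    50 characters estimated * 2 bytes = 100 bytes per row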

Without an index on Reputation, SQL Server needs to sort the data on its own.

NUTS

But the memory grant nearly doubles.

DisplayName:

NUTS

DisplayName, Location:

NUTS

This gets much worse with SELECT *, asking for 8.2 GB of memory:

NUTS

It does this to cope with the larger amount of data it needs to pass through the Sort operator, including the AboutMe column, which has a MAX length.

NUTS

Index Usage


If I have this index on the Users table:

CREATE NONCLUSTERED INDEX ix_Users ON dbo.Users ( CreationDate ASC, Reputation ASC, Id ASC );


And I have this query, with a WHERE clause that matches the index, but doesn’t cover/include all the columns the query is selecting…

SELECT   u.*,
         p.Id AS PostId
FROM     dbo.Users AS u
JOIN     dbo.Posts AS p
    ON p.OwnerUserId = u.Id
WHERE    u.CreationDate > '20171001'
AND      u.Reputation > 100
AND      p.PostTypeId = 1
ORDER BY u.Id;

The optimizer may choose not to use the narrow index with a key lookup, in favor of just scanning the clustered index.

NUTS

You would either have to create a very wide index, or experiment with rewrites to get the narrow index chosen, even though using the narrow index results in a much faster query.

NUTS

CX (clustered index scan):

SQL Server Execution Times: CPU time = 6374 ms, elapsed time = 4165 ms.


NC (narrow nonclustered index):

SQL Server Execution Times: CPU time = 1623 ms, elapsed time = 875 ms.
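One rewrite that can coax the narrow index out (a sketch, not a guarantee; the optimizer is free to expand it right back): grab the matching Id values first, then join back to Users for the wide columns.

SELECT   u2.*,
         p.Id AS PostId
FROM
(
    SELECT   u.Id
    FROM     dbo.Users AS u
    WHERE    u.CreationDate > '20171001'
    AND      u.Reputation > 100
) AS k
JOIN     dbo.Users AS u2
    ON u2.Id = k.Id
JOIN     dbo.Posts AS p
    ON p.OwnerUserId = u2.Id
WHERE    p.PostTypeId = 1
ORDER BY u2.Id;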

Thanks for reading!

Does Parallelism Change Memory Grants?

This was originally posted as an answer by me here. I’m re-posting it locally for posterity.

Sup?


For SQL Server queries that require additional memory, grants are derived for serial plans. If a parallel plan is explored and chosen, memory will be divided evenly among threads.

Memory grant estimates are based on:

  • Number of rows (cardinality)
  • Size of rows (data size)
  • Number of concurrent memory consuming operators

If a parallel plan is chosen, there is some memory overhead to process parallel exchanges (distribute, redistribute, and gather streams); however, their memory needs are still not calculated the same way.

Memory Consuming Operators


The most common operators that ask for memory are

  • Sorts
  • Hashes (joins, aggregates)
  • Optimized Nested Loops

Less common operators that require memory are inserts to columnstore indexes. These also differ in that memory grants are currently multiplied by DOP for them.

Memory needs for Sorts are typically much higher than for hashes. Sorts will ask for at least the estimated size of the data for a memory grant, since they need to sort all result columns by the ordering element(s). Hashes need memory to build a hash table, which does not include all selected columns.

Examples


If I run this query, intentionally hinted to DOP 1, it will ask for 166 MB of memory.

SELECT *
FROM 
     (  
        SELECT TOP (1000) 
               u.Id 
        FROM dbo.Users AS u
        ORDER BY u.Reputation
     ) AS u
OPTION(MAXDOP 1);

NUTS

If I run this query (again, DOP 1), the plan will change, and the memory grant will go up slightly.

SELECT *
FROM (  
        SELECT TOP (1000) 
               u.Id
        FROM dbo.Users AS u
        ORDER BY u.Reputation
     ) AS u
JOIN (
        SELECT TOP (1000) 
               u.Id
        FROM dbo.Users AS u
        ORDER BY u.Reputation
     ) AS u2
ON u.Id = u2.Id
OPTION(MAXDOP 1);

NUTS

There are two Sorts, and now a Hash Join. The memory grant bumps up a little bit to accommodate the hash build, but it does not double because the Sort operators cannot run concurrently.

If I change the query to force a nested loops join, the grant will double to deal with the concurrent Sorts.

SELECT *
FROM (  
        SELECT TOP (1000) 
               u.Id
        FROM dbo.Users AS u
        ORDER BY u.Reputation
     ) AS u
INNER LOOP JOIN ( --Force the loop join
        SELECT TOP (1000) 
               u.Id
        FROM dbo.Users AS u
        ORDER BY u.Reputation
     ) AS u2
ON u.Id = u2.Id
OPTION(MAXDOP 1);

NUTS

The memory grant doubles because a Nested Loops Join is not a blocking operator, and a Hash Join is: with the loop join, both Sorts can be active at the same time, so each needs its own memory.

Size Of Data Matters


This query selects string data of different combinations. Depending on which columns I select, the size of the memory grant will go up.

The way size of data is calculated for variable string data is rows * 50% of the column’s declared length. This is true for VARCHAR and NVARCHAR, though NVARCHAR columns are doubled since they store double-byte characters. This does change in some cases with the new CE, but details aren’t documented.

Size of data also matters for hash operations, but not to the same degree that it does for Sorts.

SELECT *
FROM 
     (  
        SELECT TOP (1000) 
                 u.Id          -- 166MB (INT)
               , u.DisplayName -- 300MB (NVARCHAR 40)
               , u.WebsiteUrl  -- 900MB (NVARCHAR 200)
               , u.Location    -- 1.2GB (NVARCHAR 100)
               , u.AboutMe     -- 9GB   (NVARCHAR MAX)
        FROM dbo.Users AS u
        ORDER BY u.Reputation
     ) AS u
OPTION(MAXDOP 1);

But What About Parallelism?


If I run this query at different DOPs, the memory grant is not multiplied by DOP.

SELECT *
FROM (  
        SELECT TOP (1000) 
               u.Id
        FROM dbo.Users AS u
        ORDER BY u.Reputation
     ) AS u
INNER HASH JOIN (
        SELECT TOP (1000) 
               u.Id
        FROM dbo.Users AS u
        ORDER BY u.Reputation
     ) AS u2
ON u.Id = u2.Id
ORDER BY u.Id, u2.Id -- Add an ORDER BY
OPTION(MAXDOP ?);

NUTS

There are slight increases to deal with more parallel buffers per exchange operator, and perhaps there are internal reasons that the Sort and Hash builds require extra memory to deal with higher DOP, but it’s clearly not a multiplying factor.
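If you want to watch the grants live while you flip DOPs around, there’s a DMV for that too (a sketch; run it from another session while the query executes):

SELECT
    mg.session_id,
    mg.dop,
    mg.requested_memory_kb,
    mg.granted_memory_kb,
    mg.used_memory_kb
FROM sys.dm_exec_query_memory_grants AS mg;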

Thanks for reading!

Bad Guesses and Bad Choices

Let’s Run A Cruddy Query


We’ve got no supporting indexes right now. That’s fine.

The optimizer is used to not having helpful indexes. It can figure things out.

    SELECT   p.*
    FROM     dbo.Posts AS p
    JOIN     dbo.Votes AS v
        ON p.Id = v.PostId
    WHERE    p.PostTypeId = 2
    AND      p.CreationDate >= '20131225'
    ORDER BY p.Id;
Snap City.

So uh. We got a merge join here. For some reason. And a query that runs for 27 seconds.

The optimizer was all “no, don’t worry, we’re good to sort 52 million rows. We got this.”

YOU’VE GOT THIS.

[You don’t got this — ED]

Choices, Choices


Since we have an order by on the Id column of the Posts table, and that column is the Primary Key and Clustered index, it’s already in order.

The optimizer chose to order the PostId column from the Votes table, and preserve the index order of the Id column.

Merge Joins expect ordered input on both sides, don’tcha know?

It could have chosen a Hash Join, but then the order of the Id column from the Posts table wouldn’t have been preserved on the other side.

Merge Joins are order preserving, Hash Joins aren’t. If we use a Hash Join, we’re looking at ordering the results of the join after it’s done.

But why?

Good Guess, Bad Guess

Going into the Merge Join, we have a Good Guess™

Coming out of the Merge Join, we have a Bad Guess™

Thinking back to the Sort operator, it only has to order the PostId column from the Votes table.

That matters.

Hash It Up


To compare, we need to see what happens with a Hash Join.
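Here’s the same query with the join hinted (a sketch; a query-level OPTION (HASH JOIN) would get you there too):

SELECT   p.*
FROM     dbo.Posts AS p
INNER HASH JOIN dbo.Votes AS v
    ON p.Id = v.PostId
WHERE    p.PostTypeId = 2
AND      p.CreationDate >= '20131225'
ORDER BY p.Id;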

smh

Okay, ignore the fact that this one runs for 2.6 seconds, and the other one ran for 27 seconds.

Just, like, put that aside.

Here’s why:

Neither Nor

This Sort operator is different. We need to sort all of the columns in the Posts table by the Id column.

Remember that the Id column is now out of order after the Hash Join.

Needing to sort all those columns, including a bunch of string columns, along with an NVARCHAR(MAX) column — Body — inflates the ever-weeping-Jesus out of the memory grant.

I see.

The Hash Join plan is not only judged to be more than twice as expensive, but it also asks for a memory grant that’s ~3x the size of the Merge Join plan.

Finish Strong


Let’s tally up where we’re at.

Both queries have identical estimated rows.

The optimizer chooses the Merge Join plan because it’s cheaper.

  • The Merge Join plan runs for 27 seconds, asks for 3.3GB of RAM, and spills to disk.
  • The Hash Join plan runs for 3 seconds, asks for 9.7GB of RAM and doesn’t spill, but it only uses 188MB of the memory grant.

That has impacted the reliability.

In a world where memory grants adjust between executions, I’ll take the Hash Join plan any day of the week.

But this is SQL Server 2017, and we don’t get that without Batch Mode, and we don’t get Batch Mode without playing some tricks.
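The classic trick (a sketch, and dbo.batch_mode_bait is a made-up name): an empty table with a columnstore index, left joined on a condition that can never be true, purely to make the query eligible for Batch Mode.

CREATE TABLE dbo.batch_mode_bait
(
    i int NULL,
    INDEX cs CLUSTERED COLUMNSTORE
);

SELECT   p.*
FROM     dbo.Posts AS p
JOIN     dbo.Votes AS v
    ON p.Id = v.PostId
LEFT JOIN dbo.batch_mode_bait AS b
    ON 1 = 0
WHERE    p.PostTypeId = 2
AND      p.CreationDate >= '20131225'
ORDER BY p.Id;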

There are lots of solutions if you’re allowed to tune queries or indexes, but not so much otherwise.

In the next couple posts, I’ll look at different ways to approach this.

Thanks for reading!

Distinct Rows, And The Columns You Choose

Footnote


I have two queries. They return the same number of rows.

The only difference is one column in the select list.

This query has the Id column, which is the primary key and clustered index of the Posts table.

    SELECT   DISTINCT
             p.Id, p.AcceptedAnswerId, p.AnswerCount, p.Body, p.ClosedDate,
             p.CommentCount, p.CommunityOwnedDate, p.CreationDate,
             p.FavoriteCount, p.LastActivityDate, p.LastEditDate,
             p.LastEditorDisplayName, p.LastEditorUserId, p.OwnerUserId,
             p.ParentId, p.PostTypeId, p.Score, p.Tags, p.Title, p.ViewCount
    FROM     dbo.Posts AS p
    JOIN     dbo.Votes AS v
        ON p.Id = v.PostId
    WHERE    p.PostTypeId = 2
    AND      p.CreationDate >= '20131225'
    ORDER BY p.Id;

The query plan for it looks like this:

Eligible

Notice that no operator in this plan performs any kind of aggregation.

There’s no Hash Match Aggregate, no Stream Aggregate, no Distinct Sort, NADA!

It runs for ~1.9 seconds to return about 25k rows.

Lessen


Watch how much changes when we remove that Id column from the select list.

    SELECT   DISTINCT
             p.AcceptedAnswerId, p.AnswerCount, p.Body, p.ClosedDate,
             p.CommentCount, p.CommunityOwnedDate, p.CreationDate,
             p.FavoriteCount, p.LastActivityDate, p.LastEditDate,
             p.LastEditorDisplayName, p.LastEditorUserId, p.OwnerUserId,
             p.ParentId, p.PostTypeId, p.Score, p.Tags, p.Title, p.ViewCount
    FROM     dbo.Posts AS p
    JOIN     dbo.Votes AS v
        ON p.Id = v.PostId
    WHERE    p.PostTypeId = 2
    AND      p.CreationDate >= '20131225';

This is what the query plan now looks like:

What’s wrong with you.

Zooming in a bit…

Woof.

After we Scan the Posts table, we sort about 47k rows.

After the join to Votes, we aggregate data twice. There are two Stream Aggregate operators.

What do we sort?

Boogers.

We Sort every column in the table by every column in the table.

In other words, we order by every column we’ve selected.

What do we aggregate?

Everything. Twice.

What Does It All Mean?


When selecting distinct rows, it can be beneficial to include a column that the optimizer can guarantee is unique in the set of selected columns. Think of a primary key, or another column with a uniqueness constraint on it.

Without that, you can end up doing a lot of extra work to create a distinct result set.

Of course, there are times when that changes the logic of the query.
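For this particular query, where the join to Votes only filters rows, there’s also the EXISTS rewrite (a sketch; it’s only equivalent if what you really want is one row per post, since it won’t deduplicate identical posts the way DISTINCT does):

SELECT   p.AcceptedAnswerId, p.AnswerCount, p.Body, p.ClosedDate,
         p.CommentCount, p.CommunityOwnedDate, p.CreationDate,
         p.FavoriteCount, p.LastActivityDate, p.LastEditDate,
         p.LastEditorDisplayName, p.LastEditorUserId, p.OwnerUserId,
         p.ParentId, p.PostTypeId, p.Score, p.Tags, p.Title, p.ViewCount
FROM     dbo.Posts AS p
WHERE    p.PostTypeId = 2
AND      p.CreationDate >= '20131225'
AND      EXISTS
         (
             SELECT 1/0
             FROM dbo.Votes AS v
             WHERE v.PostId = p.Id
         );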

Thanks for reading!

Memory Grants For SQL Variant

Great Question, You


During my (sold out, baby!) Madison precon, one attendee asked a great question while we were talking about memory grants.

Turns out, if you use the SQL Variant datatype, the memory grants function a lot like they do for any long string type.

From the documentation, which hopefully won’t move or get deleted:

sql_variant can have a maximum length of 8016 bytes. This includes both the base type information and the base type value. The maximum length of the actual base type value is 8,000 bytes.

Since the optimizer needs to plan for your laziness… indecisiveness… lack of respect for human life… inexperience, you can end up getting some rather enormous memory grants, regardless of the type of data you store in variant columns.

Ol’ Dirty Demo


Here’s a table with a limited set of columns from the Users table.

CREATE TABLE dbo.UserVariant 
( 
    Id SQL_VARIANT, 
    CreationDate SQL_VARIANT, 
    DisplayName SQL_VARIANT,
    Orderer INT IDENTITY
);

INSERT dbo.UserVariant WITH(TABLOCKX)
( Id, CreationDate, DisplayName )
SELECT u.Id, u.CreationDate, u.DisplayName
FROM dbo.Users AS u;

In all, about 2.4 million rows end up in there. In the real table, the Id column is an integer, the CreationDate column is a DATETIME, and the DisplayName column is an NVARCHAR 40.

Sadly, no matter which column we select, the memory grant is the same:

SELECT TOP (101) uv.Id
FROM dbo.UserVariant AS uv
ORDER BY uv.Orderer;

SELECT TOP (101) uv.CreationDate
FROM dbo.UserVariant AS uv
ORDER BY uv.Orderer;

SELECT TOP (101) uv.DisplayName
FROM dbo.UserVariant AS uv
ORDER BY uv.Orderer;

SELECT TOP (101) uv.Id, uv.CreationDate, uv.DisplayName
FROM dbo.UserVariant AS uv
ORDER BY uv.Orderer;

It’s also the maximum memory grant my laptop will allow: about 9.6GB.
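If you’re taking inventory before a cleanup, SQL_VARIANT_PROPERTY will tell you what’s actually hiding in a variant column (a sketch):

SELECT TOP (10)
    SQL_VARIANT_PROPERTY(uv.Id, 'BaseType') AS base_type,
    SQL_VARIANT_PROPERTY(uv.Id, 'MaxLength') AS max_length,
    SQL_VARIANT_PROPERTY(uv.Id, 'TotalBytes') AS total_bytes
FROM dbo.UserVariant AS uv;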

Large Marge

Get’em!


As if there aren’t enough reasons to avoid sql_variant, here’s another one.

Thanks for reading.