Things SQL Server vNext Should Address: Going From INT To BIGINT

Historical


I’ve had to help people with this a few times recently, and it’s always a mess. If you’re lucky, you can use a technique like Andy’s to do it, but even this can be complicated by foreign keys, schemabound objects, etc. If your scenario is one big lonely table though, that can be great.

Michael J. Swart has a post where he talks about some of the things you can run into, and Keeping It Canadian™, Aaron Bertrand has a four part series on the subject, too.

Going fully international, Gianluca Sartori and Paul White have also written about the subject.

Now that we have all that covered, let’s talk about where everything falls short: If the identity column is in the primary key, or any other indexes, you still have to drop those to modify the column even if they all have compression enabled.

El Tiablo


For example, if we have this table:

DROP TABLE IF EXISTS dbo.comp_test;

CREATE TABLE dbo.comp_test
(
    id   int PRIMARY KEY CLUSTERED 
        WITH (DATA_COMPRESSION = ROW) , 
    crap int, 
    good date, 
    bad  date,
    INDEX c (crap, id) 
        WITH (DATA_COMPRESSION = ROW),
    INDEX g (good, id) 
        WITH (DATA_COMPRESSION = ROW),
    INDEX b (bad,  id) 
        WITH (DATA_COMPRESSION = ROW)
);

And we try to alter the id column:

ALTER TABLE dbo.comp_test
    ALTER COLUMN id BIGINT NOT NULL 
WITH 
(
    ONLINE = ON
);

We get all these error messages:

Msg 5074, Level 16, State 1, Line 22
The object 'PK__comp_tes__3213E83FF93312D6' is dependent on column 'id'.
Msg 5074, Level 16, State 1, Line 22
The index 'b' is dependent on column 'id'.
Msg 5074, Level 16, State 1, Line 22
The index 'g' is dependent on column 'id'.
Msg 5074, Level 16, State 1, Line 22
The index 'c' is dependent on column 'id'.
Msg 4922, Level 16, State 9, Line 22
ALTER TABLE ALTER COLUMN id failed because one or more objects access this column.

Odds R Us


The chances of you having an identity column that isn’t the PK seem pretty low to me, based on every single database I’ve ever looked at.

The chances of you being able to drop the Primary Key on a table running over 2 billion rows, alter the column, and then add it back without causing some congestion aren’t so hot. If your database is in an AG or synchronizing data in some other way, you’re in for a bad time with that, too.

Sure, if you’re on Enterprise Edition, you can drop the Primary Key with ONLINE = ON, but you can’t do that with the nonclustered indexes.

ALTER TABLE dbo.comp_test
    DROP CONSTRAINT PK__comp_tes__3213E83FF93312D6
WITH (ONLINE = ON);

That works fine, but this does not:

DROP INDEX c ON dbo.comp_test WITH (ONLINE = ON);

This error makes our issue clear:

Msg 3745, Level 16, State 1, Line 33
Only a clustered index can be dropped online.

Adding them back with ONLINE = ON is also available in Enterprise Edition, but all the queries that used those indexes are gonna blow chunks because those 2 billion row indexes were probably pretty important to performance.
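
For reference, putting one of the demo indexes back (after dropping it offline) looks something like this, Enterprise Edition assumed:

CREATE INDEX c
    ON dbo.comp_test (crap, id)
    WITH (DATA_COMPRESSION = ROW, ONLINE = ON);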

Partitioning Is Useless


I know, I know. It probably feels like I’m picking on partitioning here. It really wasn’t made for this sort of thing, though.

CREATE PARTITION FUNCTION pf_nope(datetime) 
    AS RANGE RIGHT 
    FOR VALUES ('19990101');

CREATE PARTITION SCHEME ps_nope 
    AS PARTITION pf_nope 
    ALL TO ([PRIMARY]);

CREATE TABLE dbo.one_switch
(
    id integer, 
    e datetime
) ON ps_nope(e);

CREATE TABLE dbo.two_switch
(
    id bigint, 
    e datetime
) ON ps_nope(e);

In the first table, our id column is an integer; in the second table, it’s a big integer.

ALTER TABLE dbo.two_switch 
    SWITCH PARTITION 1 
    TO dbo.one_switch PARTITION 1;

Leads us to this error message:

Msg 4944, Level 16, State 1, Line 65
ALTER TABLE SWITCH statement failed because column 'id' has data type bigint 
in source table 'Crap.dbo.two_switch' which is different from its type int in 
target table 'Crap.dbo.one_switch'.

No match, no switch.

What a drag it is getting old.

FIXFIX


Moving from INT to BIGINT is not fun, especially for a change that realistically only needs to apply to new pages.

Changes to old pages (if ever necessary) could be deferred until those pages are actually touched, but in the case of columns based on identities or sequences, I can’t think of a realistic scenario where old values would ever need to change.

It would be really nice to have other options for making this change that didn’t seemingly trade complexity for uptime.

Thanks for reading!


Things SQL Server vNext Should Address: Optional Parameters

Lamp


This issue is one that could be linked to other times when the optimizer defers certain portions of optimization to later stages. It’s also something that could lead to complications, because the end result is multiple execution plans for the same query.

But it goes back to a couple of basic approaches to query writing that I think people need to keep in mind: write single-purpose queries, and remember that things that make your job easier make the optimizer’s job harder.

A good example of a multi-purpose query is a merge statement. It’s like throwing SQL Server a knuckleball.

Fiji


Another example of a knuckleball is this knucklehead pattern:

SELECT 
    p.*
FROM dbo.Posts AS p
WHERE (p.OwnerUserId   = @OwnerUserId OR @OwnerUserId IS NULL)
AND   (p.CreationDate >= @CreationDate OR @CreationDate IS NULL);

SELECT 
    p.*
FROM dbo.Posts AS p
WHERE (p.OwnerUserId   = ISNULL(@OwnerUserId, p.OwnerUserId))
AND   (p.CreationDate >= ISNULL(@CreationDate, p.CreationDate));
GO 

SELECT 
    p.*
FROM dbo.Posts AS p
WHERE (p.OwnerUserId   = COALESCE(@OwnerUserId, p.OwnerUserId))
AND   (p.CreationDate >= COALESCE(@CreationDate, p.CreationDate))
ORDER BY p.Score DESC;
GO

I hate seeing this pattern, because I know how many bad things can happen as a result.

One example I love is creating these two indexes and running the first query up there.

CREATE INDEX onesie ON dbo.Posts(OwnerUserId, Score, CreationDate);
CREATE INDEX threesie ON dbo.Posts(ParentId, OwnerUserId);

The optimizer chooses the wrong index — the one that starts with ParentId — even though the query is clearly looking for a potential equality predicate on OwnerUserId.

[Query plan screenshot: 180]

Deferential


It would be nice if the optimizer did more to sniff out NULL values here to come up with more stable plans for the non-NULL values, essentially doing the job that dynamic SQL does by only adding predicates to the where clause when they’re not NULL.

It doesn’t have to look further at actual values on compilation, because that’s essentially a RECOMPILE hint on every query.
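
For comparison, here’s a minimal sketch of that dynamic SQL approach, appending predicates only for the parameters that aren’t NULL (the parameter declarations and types are assumed):

DECLARE @sql nvarchar(MAX) = N'
SELECT
    p.*
FROM dbo.Posts AS p
WHERE 1 = 1';

IF @OwnerUserId IS NOT NULL
    SET @sql += N'
AND   p.OwnerUserId = @OwnerUserId';

IF @CreationDate IS NOT NULL
    SET @sql += N'
AND   p.CreationDate >= @CreationDate';

EXEC sys.sp_executesql
    @sql,
    N'@OwnerUserId int, @CreationDate datetime',
    @OwnerUserId,
    @CreationDate;

Each combination of supplied parameters compiles and caches its own plan, which is exactly the stability we’re after.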

Thanks for reading!


Things SQL Server vNext Should Address: Lookup Placement

All Looked Up


Lookups are interesting. On the one hand, I think the optimizer should be less biased against them, and on the other hand they can cause a lot of issues.

They’re probably the most common issue in queries that suffer from parameter sniffing that I see, though far from the only unfortunate condition.

Under the read committed isolation level, lookups can cause readers to block writers, and even cause deadlocks between readers and writers.

This isn’t something that happens under optimistic isolation levels, which may or may not have something to do with my earlier suggestion to make new databases use RCSI by default and work off the local version store associated with accelerated database recovery.

Ahem.

Leafy Greens


One thing that would make lookups less aggravating would be giving the optimizer the ability to move them around.

But that really only works depending on what the lookup is doing. For example, some Lookups just grab output columns, and some evaluate predicates:

[Query plan screenshot: all one word]
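
To make that concrete, here’s a hypothetical index and pair of queries (my own illustration, not the queries behind the plans above). In the first query, the Key Lookup only fetches DisplayName as an output column; in the second, it also has to evaluate the Age predicate:

CREATE INDEX u
    ON dbo.Users (Reputation);

SELECT
    u.DisplayName
FROM dbo.Users AS u
WHERE u.Reputation = 100001;

SELECT
    u.DisplayName
FROM dbo.Users AS u
WHERE u.Reputation = 100001
AND   u.Age > 30;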

Further complicating things is when one of the columns being output is used in a join.

[Query plan screenshot: bad movie]

Outside Chance


There are likely other circumstances where decoupling the lookup and moving the join to another part of the plan would be impossible or maybe even make things worse. It might even get really weird when dealing with a bunch of left joins, but that’s the sort of thing the optimizer should be allowed to explore during, you know, optimization.

Thanks for reading!


Things SQL Server vNext Should Address: Table Variable Modification Performance

Canard


People still tell me things like “I only put 100 rows in table variables”, and think that’s the only consideration for their use.

There are definitely times when table variables can be better, but 100 rows is meaningless.

Even if you put one row in a table variable it can fudge up performance because SQL Server doesn’t know what’s in your table variable. That’s still true in SQL Server 2019, even if the optimizer knows how many rows are in your table variable.

The problem that you can run into, even with just getting 100 rows into a table variable, is that it might take a lot of work to get those 100 rows.

Bernard


I’ve blogged before about workarounds for this problem, but the issue remains that inserts, updates, and deletes against table variables aren’t naturally allowed to go parallel.

The reason why is a bit of a mystery to me, since table variables are all backed by temp tables anyway. If you run this code locally, you’ll see what I mean:

SET NOCOUNT ON;
SET STATISTICS IO ON;
DECLARE @t table(id int);
SELECT * FROM @t AS t;
SET STATISTICS IO OFF;

Over in the messages tab you’ll see something like this:

Table '#B7A53B3E'. Scan count 1, logical reads 0, physical reads 0, page server reads 0, read-ahead reads 0, page server read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob page server reads 0, lob read-ahead reads 0, lob page server read-ahead reads 0.

Now, look, I’m not asking for the update or delete portions of the query plan to go parallel, but it might be nice if other child operators could. That’s how things work with regular tables and #temp tables. It would be nice if inserts could go parallel too, but hey.
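
If you want to see the difference yourself, here’s a minimal sketch, assuming a Posts table big enough to make parallelism attractive:

DECLARE @t table (id int NOT NULL);

/* the entire plan under this insert stays serial */
INSERT @t (id)
SELECT p.Id
FROM dbo.Posts AS p
WHERE p.Score > 10;

CREATE TABLE #t (id int NOT NULL);

/* here, the select portion of the plan is allowed to go parallel */
INSERT #t (id)
SELECT p.Id
FROM dbo.Posts AS p
WHERE p.Score > 10;

DROP TABLE #t;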

Ardbeg


The problem this solves is one that I see often, usually from vendor code where the choice of which temporary object to use was dependent on individual developer preference, or they fell for the meme that table variables are “in memory” or something. Maybe the choice was immaterial at first with low data volume, and over time performance slowly degraded.

If I’m allowed to change things, it’s easy enough to replace @table variables with #temp tables, or to use a workaround like the one in the post linked above to improve performance. But when I’m not, clients are often left begging vendors to make changes, and vendors usually aren’t receptive.

Thanks for reading!


Things SQL Server vNext Should Address: Common Table Expression Materialization

Repetition Is Everything


I know what you’re thinking: this is another post that asks for a hint to materialize CTEs.

You’re wrong. I don’t want another hint that I can’t add to queries to solve a problem because the code is coming from a vendor or ORM.

No, I want the optimizer to smarten up about this sort of thing, detect CTE re-use, and use one of the New And Improved Spools™ to cache results.

Let’s take a look at where this would come in handy.

Standalone


If we take this query by itself and look at the execution plan, it conveniently shows one access of Posts and Users, and a single join between the two.

SELECT
    u.Id AS UserId,
    u.DisplayName,
    p.Id AS PostId,
    p.AcceptedAnswerId,
    TotalScore = 
        SUM(p.Score)
FROM dbo.Users AS u
JOIN dbo.Posts AS p
    ON p.OwnerUserId = u.Id
WHERE u.Reputation > 100000
GROUP BY 
    u.Id, 
    u.DisplayName,
    p.Id, 
    p.AcceptedAnswerId
HAVING SUM(p.Score) > 1;

[Query plan screenshot: invitational]

Now, let’s go MAKE THINGS MORE READABLE!!!

Ality


WITH spool_me AS
(
    SELECT
        u.Id AS UserId,
        u.DisplayName,
        p.Id AS PostId,
        p.AcceptedAnswerId,
        TotalScore = 
            SUM(p.Score)
    FROM dbo.Users AS u
    JOIN dbo.Posts AS p
        ON p.OwnerUserId = u.Id
    WHERE u.Reputation > 100000
    GROUP BY 
        u.Id, 
        u.DisplayName,
        p.Id, 
        p.AcceptedAnswerId
    HAVING SUM(p.Score) > 1
)
SELECT
    a.UserId,
    a.DisplayName,
    a.PostId,
    a.AcceptedAnswerId,
    a.TotalScore,
    q.UserId,
    q.DisplayName,
    q.PostId,
    q.AcceptedAnswerId,
    q.TotalScore
FROM spool_me AS a
JOIN spool_me AS q
    ON a.PostId = q.AcceptedAnswerId
ORDER BY a.TotalScore DESC;

Wowee. We really done did it. But now what does the query plan look like?

[Query plan screenshot: oh, you]

There are now two accesses of Posts and two accesses of Users, and three joins (one Hash Join isn’t in the screen cap).

Detection


Obviously, the optimizer knows it has to build a query plan that reflects the CTE being joined.

Since it’s smart enough to do that, it should be smart enough to use a Spool to cache things and prevent the additional accesses.

Comparatively, using a #temp table to simulate a Spool is about twice as fast. Here’s the CTE plan:

[Query plan screenshot: double]

Here’s the Spool Simulator Plan™

[Query plan screenshot: professionals]
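
For reference, the simulator is just the CTE’s innards dumped into a #temp table once, then self-joined; a minimal sketch:

SELECT
    u.Id AS UserId,
    u.DisplayName,
    p.Id AS PostId,
    p.AcceptedAnswerId,
    TotalScore =
        SUM(p.Score)
INTO #spool_me
FROM dbo.Users AS u
JOIN dbo.Posts AS p
    ON p.OwnerUserId = u.Id
WHERE u.Reputation > 100000
GROUP BY
    u.Id,
    u.DisplayName,
    p.Id,
    p.AcceptedAnswerId
HAVING SUM(p.Score) > 1;

SELECT
    a.UserId,
    a.DisplayName,
    a.PostId,
    a.AcceptedAnswerId,
    a.TotalScore,
    q.UserId,
    q.DisplayName,
    q.PostId,
    q.AcceptedAnswerId,
    q.TotalScore
FROM #spool_me AS a
JOIN #spool_me AS q
    ON a.PostId = q.AcceptedAnswerId
ORDER BY a.TotalScore DESC;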

Given the optimizer’s penchant for spools, this would be another chance for it to shine on like the crazy diamond it is.

Thanks for reading!


Things SQL Server vNext Should Address: Handling Disjunctive Predicates

Bite Sized Gripes


I sat down to write this blog post, and I got distracted. I got distracted for two hours.

[Query plan screenshot: take two]

So, pretty obviously, we have a query performance issue.

What’s the cause of this malady? OR. Just one little OR.

Aware


It’s not like I don’t have indexes. They’re fabulous.

CREATE INDEX c 
ON dbo.Comments
    (PostId, UserId);

CREATE INDEX v 
ON dbo.Votes
    (PostId, UserId);

CREATE INDEX cc
ON dbo.Comments
    (UserId, PostId);

CREATE INDEX vv
ON dbo.Votes
    (UserId, PostId);

Look at those things. Practically glowing.

But this query just wrecks them:

SELECT
    records = 
        COUNT_BIG(*)
FROM dbo.Comments AS c
JOIN dbo.Votes AS v
    ON c.UserId = v.UserId
    OR c.PostId = v.PostId;

That’s the plan up there that ran for a couple hours.

Unrolling


A general transformation that the optimizer can apply in this case is to union two result sets together.

SELECT
    records = 
        COUNT_BIG(*)
FROM 
(
    SELECT
        n = 1
    FROM dbo.Comments AS c 
    JOIN dbo.Votes AS v 
        ON  c.UserId = v.UserId
        AND c.PostId <> v.PostId
    
    UNION ALL
    
    SELECT
        n = 1
    FROM dbo.Comments AS c
    JOIN dbo.Votes AS v 
        ON  c.PostId = v.PostId
        AND c.UserId <> v.UserId
) AS x;

The following are two execution plans for this transformation. One is in compatibility level 150, where Batch Mode On Rowstore has kicked in. The second is in compatibility level 140, in regular old Row Mode. Though the Row Mode plan is much slower, it’s still a hell of a lot faster than however much longer than two hours the original query would have run for.

[Query plan screenshot: light of day]

The reason the Row Mode plan is so slow is a god-awful Repartition Streams.

[Query plan screenshot: dunked]

Abstract


This is one of those “easy for you, hard for the optimizer” scenarios that really should be easy for the optimizer by now.

I don’t even care if it can be applied to every instance of this — after all there may be other complicating factors — it should at least be available for simple queries.

Thanks for reading!


Things SQL Server vNext Should Address: Default Isolation Level

You’re Gonna Miss Me


I deal with blocking problems all the time. All the damn time. Deadlocks, too.

Why do I have to deal with these problems? Read Committed is the default isolation level in SQL Server.

It is an utterly broken isolation level, and it shouldn’t be the default anymore.

Can’t Fix It


Any isolation level that lets:

  • Readers block writers
  • Writers block readers
  • Readers and writers deadlock with each other

Shouldn’t be the default in a major database. No self-respecting database would do that to itself, or to end users.

Imagine you’re some poor developer with no idea what SQL Server is doing, just trying to get your application to work correctly, and your queries end up in asinine blocking chains because of this dimwitted default.

I’d switch to Postgres, too. Adding in NOLOCK hints and maybe getting wrong data is probably the least of their concerns.

Ahzooray


In Azure SQL DB, the default isolation level is Read Committed Snapshot Isolation (RCSI). Optimism for me but not for thee is the message here.

Imagine a problem so goofy that Microsoft didn’t want to deal with it in its cloud product. You’ve got it right here.

For the next version of SQL Server, the default isolation level for new databases should also be RCSI.

Especially because databases can have a local version store via Accelerated Database Recovery. Why not make the most of it?

And solve the dumbest problem that most databases I see deal with.
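
Until that happens, you can opt in yourself. A minimal sketch, with the database name assumed:

ALTER DATABASE YourDatabase
    SET READ_COMMITTED_SNAPSHOT ON
    WITH ROLLBACK IMMEDIATE; /* boots in-flight sessions so the change can take effect */

ALTER DATABASE YourDatabase
    SET ACCELERATED_DATABASE_RECOVERY = ON; /* gives the database a local version store */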

Thanks for reading!


Things SQL Server vNext Should Address: Spools

Per Capita


There are two angles to this. One is that spools are just crappy temp tables. The mechanism used for loading data, particularly into Eager Spools, is terribly inefficient, especially when compared to the various advances in tempdb efficiency over the years.

The second angle is that Eager Index Spools should provide feedback to users that there’s a potentially useful index that could be created.

There are checks for Eager Table and Index Spools in sp_BlitzCache. Heck, it’ll even unravel them and tell you which index to create in the case of Eager Index Spools.

It still bothers me that there’s no built-in tooling or warning about queries that rely on Spools. Not because the Spools themselves are necessarily bad, but because they’re usually a sign that something is deficient with the query or indexing. Spools in modification queries are often necessary and not as worthy of your scorn, but can often be substituted with manual phase separation.
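
To show what I mean by that last bit, here’s a hypothetical sketch of manual phase separation: do the reading in one statement and the writing in another, so the optimizer doesn’t need an Eager Spool to protect the update from its own reads.

/* read phase: stage the keys */
SELECT
    p.Id
INTO #to_update
FROM dbo.Posts AS p
WHERE p.Score < 0;

/* write phase: modify by key */
UPDATE p
    SET p.Score = 0
FROM dbo.Posts AS p
JOIN #to_update AS t
    ON t.Id = p.Id;

DROP TABLE #to_update;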

You may have lots of very fast queries with small Spools in them. You may have them right now.

You may also have lots of very slow queries with Spools in them, and there’s nothing telling you about it.

Spool Building


In a perfect world, when spools are being built, they’d emit a specific wait stat. Having that information would help all of us who do performance tuning work know what to look for and focus in on during investigations.

Right now you can get part of the way there by looking at EXECSYNC waits, but even that’s unreliable. They show up in parallel plans with Eager Index Spools, but they show up from other things too. Usually, your best bet is to just look for top resource consuming queries and examine the plans for Spools.
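
One rough way to do that hunting today is digging through the plan cache XML for Spool operators; a minimal sketch, and not a cheap query, so treat it as a diagnostic rather than something you schedule:

WITH XMLNAMESPACES
    (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT TOP (10)
    qs.total_worker_time,
    qs.execution_count,
    qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE qp.query_plan.exist('//RelOp[contains(@PhysicalOp, "Spool")]') = 1
ORDER BY qs.total_worker_time DESC;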

[Query plan screenshot: syncer]

Step 1: Give us better waits to identify Spools being built

Index Building


The optimizer will complain about missing indexes all the live-long day. Even in cases where an index would barely help.

[Query plan screenshot: banjo]

I don’t know about you, but in general ~200ms isn’t a performance emergency.

But like, three minutes?

[Query plan screenshot: bank account]

I’d wanna know about this. I’d wanna create an index to help this query.

Step 2: Give us missing index requests when Eager Index Spools get built.

Buy A Telescope


This kind of stuff is already super-important now, and will become even more important as Scalar Function Inlining becomes more widely used.

End users need better feedback when new features get turned on and performance gets worse.

Sure, if you read this blog and know what to look for, you can find and fix things quickly. But there are a whole lot of people who don’t have things that easy, and I get a lot of calls from them to fix this sort of issue.

Thanks for reading!


Things SQL Server vNext Should Address: ISNULL SARGability

Preferential


Of course, I’d love it if everyone lived in the same ivory tower as me and always wrote perfect queries with clear predicates that the optimizer understands and lovingly embraces.

But in the real world, out in the nitty gritty, queries are awful. It doesn’t matter if it’s un- or under-trained developers writing SQL, or that exact same person designing queries in an ORM. They turn out horrible and full of nonsense, like drunk stomachs at IHOP.

One of the most common problems I see is people getting lazy with checking for NULLs, or overly-protective about it. In “””””real””””” programming languages, NULLs get you errors. In databases, they’re just sort of whatever.

Thing Is


When you create a row store index on a column, whether it’s ascending or descending, clustered or nonclustered, the data is put in order. In SQL Server, that means NULLs are sorted together. Despite that, ISNULL still creates problems.

DROP TABLE IF EXISTS #t;

SELECT
    x.n
INTO #t
FROM
(
    SELECT
        CONVERT(int, NULL) AS n

    UNION ALL

    SELECT TOP (10)
        ROW_NUMBER() OVER
        (
            ORDER BY
                1/0
        ) AS n
    FROM sys.messages AS m
) AS x;


CREATE UNIQUE CLUSTERED INDEX c 
ON #t (n) WITH (SORT_IN_TEMPDB = ON);

In this table we have 11 rows. One of them is NULL, and the other 10 are the numbers 1-10.

Odor By


If we select an ordered result, we get a simple query plan that scans the clustered index and returns 11 rows with no Sort operator.

[Query plan screenshot: pleased to meet you]

However, if we want to replace that NULL with a 0, things get goofy.

[Query plan screenshot: dammit]
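
The queries behind those plans aren’t shown here, but a minimal repro looks something like this:

/* ordered output straight off the clustered index, no Sort */
SELECT
    t.n
FROM #t AS t
ORDER BY t.n;

/* wrapping the column in ISNULL buys us a Sort */
SELECT
    ISNULL(t.n, 0) AS n
FROM #t AS t
ORDER BY ISNULL(t.n, 0);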

Wheredo


Something similar occurs when ISNULL is applied to the where clause.

[Query plan screenshot: happy]
[Query plan screenshot: unhappy]
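
Again, a minimal repro of what the screenshots show:

/* a seek straight to the NULL */
SELECT
    t.n
FROM #t AS t
WHERE t.n IS NULL;

/* ISNULL hides the predicate from the index, so we scan */
SELECT
    t.n
FROM #t AS t
WHERE ISNULL(t.n, 0) = 0;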

There’s one NULL. We know where it is. But we still have to scan 10 other rows. Just in case.

Conversion


The optimizer should be smart enough to figure out simple uses of ISNULL, like in both of these cases.

I’m sure wiser people can figure out deeper cases, too, and even apply them to more functions that involve some types of date math, etc.

Thanks for reading!


Things SQL Server vNext Should Address: Parameter Sniffing

Olympus


In my session on using dynamic SQL to defeat parameter sniffing, I walk through how we can use what we know statistically about data to execute slightly different queries to get drastically better query plans. What makes the situation quite frustrating is that SQL Server has the ability to do the same thing.

In fact, it does it every time it freshly compiles a plan based on parameter values. It makes a whole bunch of estimates and comes up with what it thinks is a good plan based on them.

But it sort of stops there.

Why am I talking about this now? Well, since Greatest and Least got announced for Azure SQL, it kind of got my noggin’ joggin’ that perhaps a new build of the box-product might make its way to the CTP phase soon.

I Dream Of Histograms


When SQL Server builds execution plans, the estimates (mostly) come from statistics histograms, and those histograms are generally pretty solid even with low sampling rates. I know that there are times when they miss crucial details, but that’s a different problem than the one I think could be solved here.

You see, when a plan gets generated, the entire histogram is available. It would be neat if there were a process to do one of a few things:

  • Mark objects with significant skew in them for additional attention
  • Bucket values by similar populations
  • Explore other plan choices for values per bucket

If you team this up with other features in the Intelligent Query Processing family, it could really help solve some of the most basic parameter sniffing issues. That’s what you want these features for: to take care of the low-hanging nonsense. Just like the cardinality estimation feedback that got announced at PASS Summit.

Take the VoteTypeId column in the Votes table. Lots of these values could safely share a plan. Lots of others have catastrophe in mind when plans are shared, like 2 and… well, most others.

[Results screenshot: ribbit]
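
If you want to see the skew for yourself, it only takes a group by, Stack Overflow database assumed:

SELECT
    v.VoteTypeId,
    records =
        COUNT_BIG(*)
FROM dbo.Votes AS v
GROUP BY v.VoteTypeId
ORDER BY records DESC;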

Sort of like how optimistic isolation levels take care of basic reader/writer blocking that sucks to deal with and leads to all those crappy NOLOCK hints. Save the hard problems for young handsome consultants.

Ahem.

Overboard


I know, this sounds crazy-ambitious, and it could get out of hand quickly. Not to mention confusing! We’re talking about queries having multiple cached and usable plans, here. Who used what and when would be crucial information.

You’d need a lot of metadata about something like this, so you can tweak:

  • The number of plans
  • The number of buckets
  • Which plan is used by which buckets
  • Which code and parameters should be considered

I’m fine with auto-pilot for this most of the time, just to get folks out of the doldrums. Sort of like how Query Store was “good enough” with a bunch of default options. I think you’d find a lot of preconceived notions about what works best would pretty quickly be relieved of their position.

Anyway, I have a bunch of other posts about similar solvable problems. I have no idea when the next version of SQL Server will come out, or what improvements or features might be in it, but I hear that blog posts are basically wishes that come true. I figure I might as well throw some coins in the fountain.

Thanks for reading!
