Is RECOMPILE Really That Bad For SQL Server Query Performance?

To Re Or Not To Re


It’s been a while since SQL Server has had a real RECOMPILE problem. And if you put it up against the performance problems that you can hit with parameter sniffing, I’d have a hard time telling someone strapped for time and knowledge that it’s the worst idea for them.

Obviously, you can run into problems if you (“you” includes Entity Framework, AKA the Database Demolisher) author the kind of queries that take a very long time to compile. But as I list them out, I’m kinda shrugging.

Here are some problems you can hit with recompile. They’re not necessarily caused by recompile itself, but by not re-using plans.

  • Long compile times: Admittedly pretty rare, and plan guides or forced plans are likely a better option.
  • No plan history in the cache (only the most recent plan): Sucks if you’re looking at the plan cache. Sucks less if you have a monitoring tool or Query Store.
  • CPU spikes for high-frequency execution queries: Maybe time for caching some stuff, or getting away from the kind of code that executes like this (scalar functions, cursors, etc.). There’s a sketch for spotting these queries right after this list.
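
If you want numbers before you decide, Query Store keeps compile statistics around. Here’s a minimal sketch for surfacing compile-heavy queries, assuming Query Store is enabled on the database (durations come back in microseconds):

/* Find queries that compile often or compile slowly.
   avg_compile_duration is in microseconds, hence the division. */
SELECT TOP (20)
    qsq.query_id,
    qsq.count_compiles,
    qsq.avg_compile_duration / 1000. AS avg_compile_ms,
    qsq.last_compile_start_time,
    qsqt.query_sql_text
FROM sys.query_store_query AS qsq
JOIN sys.query_store_query_text AS qsqt
    ON qsqt.query_text_id = qsq.query_text_id
ORDER BY qsq.avg_compile_duration DESC;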

But for everything in the middle: a little RECOMPILE probably won’t hurt that bad.

Thinking of the problems it can solve:

  • Parameter sniffing
  • Parameter embedding (lack of)
  • Local variable estimates
  • Catch-all queries (sketch below)
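
Here’s a minimal sketch of the catch-all case, with made-up table and parameter names. With the hint, the optimizer embeds the runtime values at compile time, so a NULL parameter’s predicate folds away instead of dragging estimates down for everyone:

/* Hypothetical catch-all search proc. OPTION (RECOMPILE) lets the
   optimizer treat the parameters like literals on every run, so the
   predicates for NULL parameters disappear from the plan entirely. */
CREATE OR ALTER PROCEDURE dbo.SearchOrders
    @CustomerID int  = NULL,
    @OrderDate  date = NULL
AS
BEGIN
    SELECT o.OrderID, o.CustomerID, o.OrderDate
    FROM dbo.Orders AS o
    WHERE (o.CustomerID = @CustomerID OR @CustomerID IS NULL)
    AND   (o.OrderDate  = @OrderDate  OR @OrderDate  IS NULL)
    OPTION (RECOMPILE);
END;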

Those are very real problems that I see on client systems pretty frequently. And yeah, sometimes there’s a good tuning option for these, like changing or adding an index, moving parts of the query around, sticking part of the query in a temp table, etc.

But all that assumes that those options are immediately available. For third party vendors who have somehow developed software that uses SQL Server for decades without running into a single best practice even by accident, it’s often harder to get those changes through.

There’s More Than One Way To Recompile


Sure, you might be able to sneak a recompile hint somewhere in the mix even if it’d make the vendor upset. You can always yoink it out later.

But you have alternatives, too.

  • DBCC FREEPROCCACHE: No, not the whole cache. You can pass a plan handle to single out troublesome queries and remove specific plans (first sketch below).
  • Plan Guides: An often overlooked detail of plan guides is that you can attach hints to them, including recompile (second sketch below).
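
The targeted FREEPROCCACHE version goes something like this. The text filter and the plan handle are placeholders; you’d plug in whatever the first query returns:

/* Step one: find the plan handle for the troublesome query.
   The LIKE filter is a placeholder for something recognizable. */
SELECT qs.plan_handle, st.text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.text LIKE N'%some recognizable query text%';

/* Step two: evict just that plan. The handle below is a made-up
   example; use the one the query above hands back. */
DBCC FREEPROCCACHE (0x060005001B1F0B40C0215D05000000000000000000000000);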

Using a plan guide doesn’t interfere with that precious vendor IP that makes SQL Server unresponsive every 15 minutes. Or whatever. I’m not mad.
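
Creating one with a recompile hint attached looks something like this, assuming the statement arrives via sp_executesql. Everything here is invented for illustration, and the @stmt text has to match what the application sends character for character:

/* Attach OPTION (RECOMPILE) to a query you can't edit.
   Names and statement text are made up; @params describes the
   parameters exactly as the app declares them. */
EXEC sys.sp_create_plan_guide
    @name            = N'RecompileVendorSearch',
    @stmt            = N'SELECT * FROM dbo.VendorTable WHERE VendorColumn = @p1',
    @type            = N'SQL',
    @module_or_batch = NULL,
    @params          = N'@p1 int',
    @hints           = N'OPTION (RECOMPILE)';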

And yeah, there are advances in SQL Server 2017 and 2019 that start to address some issues here, but they’re still imperfect.

I like ’em, but you know. They’re not quite there yet.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.



12 thoughts on “Is RECOMPILE Really That Bad For SQL Server Query Performance?”

  1. Heh… on the DBCC FREEPROCCACHE thing, and with the understanding that my server probably isn’t anywhere near a place where it would cause a real concern: we have a heavy mix between OLTP and large processing runs, especially during the daytime. It takes way too long to isolate every single stored procedure that needs a recompile, so I DO use DBCC FREEPROCCACHE on my production box from time to time with no serious ill effects, and it cleans up a wealth of sins immediately. YMMV but, for me, using DBCC FREEPROCCACHE is not the no-no that everyone makes it out to be.

    As for recompiles, I agree. They aren’t necessarily the sin that a lot of people would have you believe. They’re especially useful in some of those large processing runs where you need a recompile to occur to actually get minimal logging to work.

    It’s been a while since I’ve looked at our recompile situation at work, but we had a proc that “only” took 100ms to execute, which I also thought was taking too long but couldn’t get management to budge on, even though I explained the tens of thousands of times it was being hit each hour. Then I did a recompile analysis and the damned thing wasn’t only recompiling EVERY time it was called; it was taking between 2 and 22 seconds to recompile every time, with the average coming in at 20 seconds. Once I did my tweaks to the rather short code in the proc, not only did the recompiles go away, but the execution is now measured in single-digit ms. I didn’t know it, but this also solved a major pain point in return times on the floor for a particular screen in the application that used the proc.

    As for the “advances” they came up with in this area for 2017 and 2019, I’m scared to death to move off 2016. I’m still suffering from some of the “advances” they came up with in 2014 and 2016. They don’t give us much choice when it comes to upgrading, though. One such “advance” is “Fast Inserts”. Most people don’t know about it because their index maintenance routines hide the fault it has of allocating a full extent, without first looking for partially empty existing extents, even if you’re inserting a proverbial “1 byte” row. Thank goodness for TF 692.
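
    For anyone curious, turning the flag on looks like this (the -1 makes it global; it can also go into the startup parameters as -T692, and it’s worth testing first, since disabling fast inserts can slow down genuinely big bulk loads):

    /* Disable "fast inserts" server-wide via trace flag 692. */
    DBCC TRACEON (692, -1);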

    The changes they made for TempDB are pretty good, but not allowing even a temporary excursion to unbalance file sizes has killed a whole lot of stuff we were doing. That’s because of another fault, embedded in SET IDENTITY_INSERT ON, where the entire data transfer gets sorted in TempDB even though it’s not needed, given the presence of a Clustered Index and Minimal Logging.

    I wish MS would stop making “improvements” and start fixing-in-earnest the “improvements” they’ve already made.

    1. Thanks, Jeff. Always appreciate your comments. I’m terribly interested in what was going on in that proc that took 20 seconds on average to compile, hahaha.

  2. I have been espousing rampant use of OPTION (RECOMPILE) for over a decade now.

    The VAST majority of report queries/sprocs written have at least one date (with comparison) or two of them (start and end). Those should ALL be RECOMPILEd. You NEVER want the same plan for one day as for 15 years of data, or vice versa. That’s the generic case of what I call “widely varying input parameters”, and there are quite a few different scenarios where it comes into play.

    Another situation that REQUIRES OPTION (RECOMPILE) is data value skew. Is one of your clients 35% of all data? Best RECOMPILE anything with @ClientID then. Almost every client I come across has this situation, often in several important tables. It can also take place in IN clauses, so don’t forget those.

    The recompilation lock issue I see others mention is an EXTREME rarity on sufficiently powered hardware. Think VISA on Black Friday frequency to hit it.

    Long-compiling queries? Those are often the very reports that NEED RECOMPILE. And if you are struggling with long recompiles and using an ORM, well, then you have other problems (and can perhaps switch to sprocs to better manage some of your ORM-induced badness, of which the types are numerous). CPU-killing recompiles? I’ll point at sufficiently powered hardware again. And if you have cursors/UDFs in play then you have pretty simple solutions, namely, get rid of them. 😀
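
    The pattern I keep pushing looks something like this, with names invented for illustration:

    /* A typical report proc with "widely varying input parameters".
       One day vs. 15 years needs a very different plan, and a skewed
       @ClientID needs its own estimate too, so recompile every time. */
    CREATE OR ALTER PROCEDURE dbo.ClientSalesReport
        @ClientID  int,
        @StartDate date,
        @EndDate   date
    AS
    BEGIN
        SELECT s.SaleID, s.SaleDate, s.Amount
        FROM dbo.Sales AS s
        WHERE s.ClientID = @ClientID
        AND   s.SaleDate >= @StartDate
        AND   s.SaleDate <  @EndDate
        OPTION (RECOMPILE);
    END;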

    Hey Jeff!!! I need to dig into your two issues with 2016. I didn’t know about them and can think of some situations where they could be causing issues with clients!

  3. Hey Erik,

    I had forgotten to click on the “Notify me” box when I posted and so I went to the “Subscribe without commenting” link. Be advised that the field for the email address has white lettering on a white background.

  4. Hey Kevin,

    Thanks for the feedback.

    Yeah, the “Fast Inserts” TF 692 thing was a real pain for me. The best documentation for it is at the following URL. Do a search for “692” when you get to the page. I didn’t add the WhatsUp Gold database to my index maintenance and it exploded. It was consuming over 50GB and when I went to check on why, it showed that about 43GB was “allocated but unused”. I got lucky with finding the fix because it’s not a well known issue.

    https://docs.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-traceon-trace-flags-transact-sql?view=sql-server-ver15

    I’ve lost track of the “Azure Feedback” item about SET IDENTITY INSERT ON. I’ll see if I can dig it out of my links when I get home. I did verify that it’s what happened to me when I tried to copy that table I mentioned.

  5. Using it is not that bad an idea, since it can sometimes be the right solution. The problem is the warning you have to give along with it: it doesn’t apply everywhere as a default answer. A few years back we had a lead DB developer use it in a specific place, and since the sproc was otherwise a decent example of code, the non-DB developers picked up on that and assumed that a recompile was the default way we should be writing queries. At that company we did somewhere around 25k batches/sec almost constantly throughout the day. It got noticed quite quickly when a deploy rolled out with it used all over. Definitely one for the ‘use wisely’ folder and not ‘never use’, though.

    1. Ooh yeah, that’s one of the points I make about when you shouldn’t use it. Like I always say: code is culture, you gotta set a good example for people who don’t know any better.

      Thanks!
