Shame Shame Shame
One of the very first performance problems that developers will run into when using SQL Server is locking, and often deadlocks.
Though deadlocks have their root in (b)locking behavior, not all blocking leads to deadlocks.
Some (b)locking will just go on forever, ha ha ha.
The reason there are so many SQL Server blog posts about NOLOCK hints, and so much confusion about what it does, is because of defaults.
The worst part is that it’s a default that shouldn’t be, and… somehow people have this sunny view of what NOLOCK does, but a very negative view of better solutions to the problem.
What NOLOCK Really Does
I have this conversation at least twice a week, despite having written about it plenty of times.
And other people writing about it many times.
Think of modifications sort of like an Infinite Improbability Drive.
As soon as the drive reaches infinite Improbability, it passes through every conceivable point in every conceivable universe simultaneously. An incredible range of highly improbable things can happen due to these effects.
Perhaps not quite that eccentric, but you get the idea. While a modification is in flight, a query with a NOLOCK (or READ UNCOMMITTED) hint may read those changes while they’re happening.
- Incomplete inserts
- Incomplete deletes
- Incomplete updates
Inserts and deletes are a bit more straightforward. Say you’re inserting or deleting 10 rows, and either one is halfway done when your select query, running with flaming knives and scissors (a NOLOCK hint), comes along.
You would read:
- The first five inserted rows
- The remaining five rows to be deleted
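To make that concrete, here’s a minimal sketch of the kind of collision we’re talking about. The table and column names are made up for illustration:

```sql
/* Session 1: a delete in flight, transaction still open */
BEGIN TRANSACTION;

DELETE TOP (10)
FROM dbo.Orders
WHERE OrderDate < '20200101';
-- halfway done: some rows gone, some not

/* Session 2: a dirty read, running at the same time */
SELECT COUNT(*) AS order_count
FROM dbo.Orders WITH (NOLOCK)
WHERE OrderDate < '20200101';
-- may count rows that are mid-delete, returning a
-- number that was never true at any committed point
```

If Session 1 rolls back, Session 2 has reported on data that never existed at all.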
For updates, things are a little trickier, because you might end up with an in-place update or a per-index update.
You can read:
- Partially changed rows
- From an index that hasn’t been modified yet
- Something in between
This is not what you want to happen.
Even if you have a million excuses as to why it works okay for you (it’s just a mobile app, they can refresh; we only need close-enough reports; users make changes and then read them later), I promise you that it’s not something you want to happen, because you can’t fully predict the ramifications of many concurrent modifications and selects all running together.
What You Really Want To Happen Instead
The utter beauty of these solutions is that they give you reliable results. They may not be perfect for every situation, but for probably 99% of the cases where you’re using NOLOCK hints everywhere anyway, they’re exactly what you need.
Rather than futz about with the Infinite Improbabilities that could be read from modifications that are neither here nor there but are certainly not completed, you read the last known good version of a row or set of rows that are currently locked.
There you have it! No more uncertainty, puzzled users, additional database requests to refresh wonky-looking data, or anything like that.
It’s just you and your crisp, clean data.
If you’re on SQL Server 2019 and using Accelerated Database Recovery, the known-good versions of your precious data will be stored locally, per-database.
In all other scenarios, the row versioning goes off to tempdb.
Your select queries can read that data without being impeded by locks, and without all of the incorrectness.
What options do you have to take advantage of these miraculous functionalities?

- Read Committed Snapshot Isolation (RCSI)
- Snapshot Isolation (SI)

Those sound pretty close, but let’s talk a little bit more about them.
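Both Read Committed Snapshot Isolation and Snapshot Isolation are database-level settings. A minimal sketch of turning them on, with a placeholder database name:

```sql
/* RCSI: changes the behavior of the default read committed
   level for every query in the database, no code changes */
ALTER DATABASE YourDatabase
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;

/* Snapshot Isolation: only permits it; each session has to
   opt in by setting the isolation level explicitly */
ALTER DATABASE YourDatabase
SET ALLOW_SNAPSHOT_ISOLATION ON;
```

The WITH ROLLBACK IMMEDIATE clause is there because the RCSI change needs exclusive access to the database, and will otherwise sit waiting behind open transactions.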
Isolation Levels, Not In Depth
It’s difficult to cover every potential reservation or concern someone may have about isolation levels. If you have an application that depends on certain locking guarantees to correctly process certain events, you may need read committed, or something beyond read committed (like repeatable read or serializable) to hold the correct locks.
If that’s your app design, then your job becomes query and index tuning to make sure that your queries run as quickly as possible to reduce the locking surface area of each one. This post is not for you.
This post is largely geared towards folks who have NOLOCK everywhere like some sort of incantation against performance problems, who hopefully aren’t totally stuck in their ways.
Here are some of the potential downsides of optimistic isolation levels:
- Prior to SQL Server 2019, you’re going to add some load to tempdb
- You’re going to add a 14-byte pointer to each row for tracking in the version store (but that happens on any table where there’s a trigger anyway)
- You need to remove local locking hints on queries you want to use row versioning
- You need to set the isolation level to allow queries to use Snapshot Isolation
- Read query performance may slow down if the version store gets particularly large
- You need to more closely monitor long-running modifications to make sure they don’t fill tempdb
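Once Snapshot Isolation is allowed on the database, sessions opt in explicitly, and you can keep an eye on version store growth in tempdb. A sketch, with hypothetical table names; the DMV shown here is available in more recent builds of SQL Server:

```sql
/* Opt this session into Snapshot Isolation */
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

SELECT o.OrderId, o.Total
FROM dbo.Orders AS o;
-- reads the last known good version of locked rows,
-- without blocking or being blocked by writers

/* Check how much space row versioning is using, per database */
SELECT DB_NAME(database_id) AS database_name,
       reserved_space_kb
FROM sys.dm_tran_version_store_space_usage;
```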
But this stuff is all worth it, because you can get around lots of weird, transient application issues:
- Incorrect results
- Incorrectly processed events
- Readers blocking writers
- Readers deadlocking with writers
If you’re currently using NOLOCK everywhere, or if someone starts suggesting you use it everywhere for better performance, know that you have better options out there.
Thanks for reading!
If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.