Software Vendor Mistakes With SQL Server: Lowering Fill Factor For Every Index

Times And Places

I can’t remember the last time I saw someone lower fill factor for a good reason. Perhaps those just aren’t the type of folks who need my help, or perhaps no one has done it for a good reason since Windows 2003 dropped and part of Any Good Consultation™ was checking to make sure disks were aligned properly.

What a bad time for disks, that was. Thank the Magnetic Fields that SSD and Flash came along to sit at the other end of a SAN with a shoestring and some Dixie cups between them.

But anyway, the story used to go something like this:

We have a lot of page splits

Are they good or bad?

Aren’t they all bad?

No, but we should look more closely at page density to figure out…



I set Fill Factor to 70 for everything. We’re cool.

This is, of course, wrong-headed in the same way that applying anything that works to fix a specific thing across the board is.

What Fill Factor Does

When you change Fill Factor, whether it’s the server-wide default or set on an individual index, SQL Server fills each data page at the leaf level of an index to your chosen percentage, leaving the rest as free space. But only when you rebuild or reorganize the index. At no point in between does SQL Server care about that percentage.
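If it’s been a while, here’s roughly what setting it looks like, with hypothetical object names; the thing to notice is that the setting only takes effect when the index is built or rebuilt:

```sql
-- Hypothetical table and index names.
-- Fill factor is applied when the index is (re)built, and never again
-- until the next rebuild; SQL Server doesn't maintain the free space.
ALTER INDEX ix_example
    ON dbo.SomeTable
    REBUILD WITH (FILLFACTOR = 70);

-- The server-wide default that this post is warning you about:
EXEC sys.sp_configure 'fill factor (%)', 70;
RECONFIGURE;
```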

At the very best, you’re only buying yourself some time until you have “page splits” again. Depending on how busy a bottom your table is, you might need to do index maintenance quite frequently in order to get that fill factor back.

And you know what? That’s a real bad time. It’s what I refer to as spinning disk mentality.

I can’t imagine how anyone would track bad page splits in a meaningful way, and apply fill factor in a way that would permanently keep them at bay.

The worst part about Fill Factor is that it gets applied to all pages — even ones that are in no danger of facing a split — and every time you apply it, your indexes get bigger as you add free space back to new pages.

Since people always seem to want to shrink the ever lovin’ beet juice out of their databases, I assume they hate big databases that take up a lot of disk space. One way to get a big database really fast is to add a bunch of empty space to all of your tables and indexes.

What Fill Factor Doesn’t Do

Fill Factor doesn’t make read queries faster, especially if you’ve designed them mostly to accommodate Seeks in an OLTP environment. Seeks do not fall victim to these things the way scans do, because they navigate to individual rows.

They do just about the same amount of work no matter what, unless you add more levels to your indexes, but that tends to happen as they get larger, anyway.
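If you’re curious whether an index has picked up an extra level, you can check with `INDEXPROPERTY` (the names here are hypothetical):

```sql
-- Hypothetical names. A seek reads roughly one page per level, so it
-- only gets meaningfully slower if the index gains a level.
SELECT INDEXPROPERTY(
           OBJECT_ID(N'dbo.SomeTable'),
           N'ix_example',
           'IndexDepth'
       ) AS index_depth;
```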

And, good news, lowering Fill Factor will make Big Scans way slower. Why? They have to read more pages, because you decided to add a bunch of empty space to them. You’re really racking up the wins here, boss.
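The arithmetic is easy enough to check: at a fill factor of 70, each leaf page holds about 70% of what a full page holds, so the same rows need roughly 100/70 times as many pages, and a scan does that many more reads:

```sql
-- Back-of-the-envelope math, assuming pages were ~100% dense before
-- the rebuild at fill factor 70.
SELECT 100.0 / 70.0                AS page_count_multiplier, -- ~1.43x
       (100.0 / 70.0 - 1) * 100.0 AS extra_reads_percent;    -- ~42.86%
```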

Not only is your database artificially huge, but all those reporting queries you’re worried about bringing your transactional queries to a halt take longer and do the dreaded “more reads” 😱 😱 😱

I often call Fill Factor “silent fragmentation”, because it reduces the density of your data pages dramatically, depending on what you lower it to. And it’s the actual bad kind of fragmentation — physical fragmentation — not the stuff your index maintenance scripts look at.
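If you want to see what that free space actually looks like, page density is right there in `sys.dm_db_index_physical_stats`. The table name below is hypothetical, and you need SAMPLED or DETAILED mode, because LIMITED leaves the density column NULL:

```sql
-- Hypothetical table name. index_level = 0 filters to the leaf pages,
-- which is where fill factor applies.
SELECT ips.index_id,
       ips.page_count,
       ips.avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats
     (DB_ID(), OBJECT_ID(N'dbo.SomeTable'), NULL, NULL, 'SAMPLED') AS ips
WHERE ips.index_level = 0;
```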

Thanks for reading!

Going Further

If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

7 thoughts on “Software Vendor Mistakes With SQL Server: Lowering Fill Factor For Every Index”

  1. “When you change Fill Factor, whether it’s at the database level, or index level, is leave your chosen percent as free space […]”
    I thought it will leave (100 – your_percent) as free space.

  2. Great post series Erik as usual.

    Here in Brazil we had a huge ERP company called ……. that has more clients than SAP in my country! Whenever the vendor joins the conversation, “Fill Factor” is the big magic fix :D

  3. To quote the first reply on this thread…
    happydba says:
    January 12, 2022 at 10:50 am

    I predict a comment from Jeff Moden incoming.

    Wait for it….

    BWAAAA-HAAAA-HAAAA!!! You don’t have to be a skilled data analyst to know that was going to happen! 😀

    First, I totally agree with Erik on the subject of any “blanket” Fill Factor on all tables of less than 100/0. People that do that are just wasting a whole lot of space, because there’s a special purpose for Fill Factors lower than 100. That purpose is to help prevent “bad” page splits, which are one of the causes of both Logical fragmentation (pages out of physical order compared to the logical order identified by the “double-linking”) and Physical fragmentation (low page density).

    As a bit of a side bar, let’s say that after a rebuild of a given index at 100%, it has a page density of 98% because of the row lengths. Now you want to rebuild that index at 70%. How much will it cost you in extra disk and buffer pool space? Most folks would say 98-70, or 28% more. It’s actually much worse than that, because you now have to account for the original data and the extra space ON EACH PAGE. The correct formula is (100/70-1)*100, or 42.86% more. Please see the following article for the proof of that.

    Now, the really big waste comes from all those tables that don’t actually suffer any fragmentation. Those include STATIC reference tables and a ton of ever-increasing indexes that never suffer “ExpAnsive” updates, which usually include the largest tables in your entire database (Log, Audit, History, and similar tables). What good does it do to waste nearly 43% of the disk space there (except to help justify buying more disk space or keeping it as a hidden reserve)?

    Shifting gears a bit, how can you find out if an index that is fragmenting would benefit from lowering the Fill Factor? You could do a deep dive on all that goes on in that index, including determining what is doing “ExpAnsive” updates and out-of-order inserts, measuring “bad” page splits in the log file, etc., etc., etc., but there’s a much easier way to figure out if lowering the Fill Factor will help prevent fragmentation… lower the Fill Factor for THAT index to 70% and wait a couple of days for small indexes, or a week for big indexes. If fragmentation occurs (>1%), then that Fill Factor isn’t doing a thing for you. You either need to figure out why the index is fragmenting and fix it, or rebuild it at 100% and wait until page density drops below, say, 80% before you rebuild it again to recover the disk/buffer pool space. (I rebuild these types of indexes at 97% because the “7” looks like a footless “2” and it reminds me that “I have something TO do” on that index to stop it from fragmenting.)

    To wit, Erik said in one of his previous articles ( ) that people are mostly measuring the wrong thing (Logical Fragmentation) to determine whether an index needs maintenance. He’s mostly correct.

    The game changes a whole lot if you use Random GUIDs. The reason why they fragment so badly isn’t because of their random insert positions. They’re actually the epitome of how most people think an index operates. If you have Random GUID indexes that are fragmenting, then you need to watch the following presentation, because you’re maintaining them incorrectly. When you do, remember that Random GUID indexes aren’t the only indexes that behave in such a fashion, and that if you’re using REORGANIZE on fragmenting indexes that you’ve lowered the Fill Factor on, then you’re maintaining those indexes incorrectly (the “Best Practice” index maintenance that people have adopted has been horribly wrong for more than 2 decades). The irony here is that this is one place where you need to check for Logical Fragmentation >1% in order to prevent fragmentation for literally months at a time once you rebuild them. Again, see the following presentation.

    But, getting back to this fine article, Erik is absolutely correct. Don’t ever use a lower Fill Factor as a panacea, don’t ever apply it to all indexes, and don’t ever change your system or database default Fill Factor to anything less than 100%… not even if ALL your indexes are based on Random GUIDs. By the same token, don’t be afraid to use a lower Fill Factor BY INDEX to prevent “Bad” page splits if and when a lower Fill Factor actually works as expected.
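The test Jeff describes above (lower one index’s fill factor, wait, and see whether fragmentation still shows up) can be sketched roughly like this, with hypothetical names:

```sql
-- Hypothetical names. Step 1: rebuild just the suspect index at 70.
ALTER INDEX ix_suspect
    ON dbo.SomeTable
    REBUILD WITH (FILLFACTOR = 70);

-- Step 2: come back in a few days (or a week for a big index) and see
-- whether it fragmented anyway. Match the index_id against sys.indexes.
SELECT ips.index_id,
       ips.avg_fragmentation_in_percent,
       ips.avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats
     (DB_ID(), OBJECT_ID(N'dbo.SomeTable'), NULL, NULL, 'SAMPLED') AS ips
WHERE ips.index_level = 0;
```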
