There are a lot of things I skip over when I look at sp_BlitzIndex. Not because they’re unhelpful, but because they need a lot of context.
Diagnosing index issues is tough, because server uptime, workload variability, and optimizer sleight of hand make things more difficult than they should be.
For example, say you have a unique nonclustered index on one column. It has zero reads, so it's not helping queries go any faster, and it has millions of writes. Can you drop it? Is the optimizer using the statistics attached to it to make better guesses?
It’s really hard to know. So you keep this index around because you don’t want to make things worse.
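To ground that decision, here's a hedged sketch of how you might check reads against writes for a specific index using sys.dm_db_index_usage_stats. The table and index names are made up for illustration, and note that these counters reset at every restart and say nothing about whether the optimizer is leaning on the attached statistics:

```sql
/* Reads vs. writes for one index, from sys.dm_db_index_usage_stats.
   Counters reset at every SQL Server restart, so check uptime first.
   dbo.Users and ix_Users_AccountId are made-up names for illustration. */
SELECT
    ius.user_seeks + ius.user_scans + ius.user_lookups AS total_reads,
    ius.user_updates AS total_writes
FROM sys.dm_db_index_usage_stats AS ius
WHERE ius.database_id = DB_ID()
  AND ius.object_id = OBJECT_ID(N'dbo.Users')
  AND ius.index_id = INDEXPROPERTY(OBJECT_ID(N'dbo.Users'), N'ix_Users_AccountId', 'IndexID');
```

Even a clean zero in the reads column only tells you queries aren't seeking or scanning the index. It doesn't tell you whether its statistics are feeding cardinality estimates.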
SQL Server tells you a lot about indexes, and almost nothing about statistics.
Most of what I want to do with sp_BlitzIndex is figure out:
- Which indexes I can safely clean up
- Which tables might need clustered indexes
- Which tables have scalar UDFs in computed columns or check constraints
- Which missing index requests I can correlate to long running queries
To do that, you can run either of these commands:
EXEC dbo.sp_BlitzIndex @DatabaseName = N'StackOverflow2013', @Mode = 0; --Lighter detail
EXEC dbo.sp_BlitzIndex @DatabaseName = N'StackOverflow2013', @Mode = 4; --Heavier detail
Which one you use depends on how much detail you want to get into. Usually, Mode 0 gives you enough to get boogying on.
To get the stuff I want out of sp_BlitzIndex, I need to do some additional validation.
Let’s say for instance I have:
- 50 unused indexes
- 20 duplicate indexes
- 150 borderline duplicate indexes
I might drop the 50 unused indexes — assuming the server has been up for long enough for me to make that call with confidence — and that might also get rid of a bunch of duplicate and borderline duplicate indexes, too.
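One quick sanity check before trusting any usage numbers is how long the server has actually been up. A minimal sketch:

```sql
/* Usage stats in sys.dm_db_index_usage_stats only accumulate since the
   last restart. If this comes back as a few days, "unused" means very little. */
SELECT
    osi.sqlserver_start_time,
    DATEDIFF(DAY, osi.sqlserver_start_time, SYSDATETIME()) AS days_up
FROM sys.dm_os_sys_info AS osi;
```

If the uptime doesn't cover at least one full business cycle, including things like month-end reporting, hold off on dropping anything that only looks unused.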
The same thing goes for duplicate indexes: most of the time everyone is just trying to solve today’s performance problem by adding an index. Not many people are looking back at all the other indexes that aren’t so useful anymore.
If you drop and/or merge a bunch of duplicate indexes, you’re very likely going to take a chunk out of the borderline duplicate indexes, too.
Index tuning is a very iterative process. There are entire chunks of data in sp_BlitzIndex that I won’t touch until I’ve taken care of other things.
This is somewhat dependent on the number of indexes that get flagged in each section, but in general I don’t want to spend a bunch of time working through hundreds of indexes that are going to disappear when I get through an earlier chunk of changes.
That’s why I always stress that you’re not just gonna be able to call fixing indexes a one-and-done procedure. You need to work your way through and plan changes in chunks.
Some tables are likely going to get prioritized in the process based on how important they are to the workload, too.
Of course, you can zoom in to a single table with sp_BlitzIndex to do really focused analysis, but a lot of the time there’s just so much darn blood everywhere that you need to save this step for last.
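When you do get to that step, sp_BlitzIndex takes table-level parameters. For example, swapping in your own database, schema, and table names:

```sql
EXEC dbo.sp_BlitzIndex
    @DatabaseName = N'StackOverflow2013',
    @SchemaName   = N'dbo',
    @TableName    = N'Posts';
```

That gives you a focused view of one table's indexes, usage, and missing index requests instead of the whole database's worth of carnage.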
So what I usually do is:
- Get rid of totally unused indexes
- Come back and see what duplicate indexes are left
- Merge those together
- Come back and see what borderline duplicate indexes are left
- Merge those together
- Come back and see if there are any indexes with a really bad write to read ratio
- Decide which of those are safe to drop
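For the write-to-read ratio step, here's a hedged sketch of the kind of query I'd run against the current database. The 10x ratio is an arbitrary starting point, not a rule, and the same restart caveat applies:

```sql
/* Nonclustered indexes with lots of writes and few reads in the current
   database. Tune the ratio to your workload; this is a starting point. */
SELECT
    OBJECT_NAME(ius.object_id) AS table_name,
    i.name AS index_name,
    ius.user_seeks + ius.user_scans + ius.user_lookups AS total_reads,
    ius.user_updates AS total_writes
FROM sys.dm_db_index_usage_stats AS ius
JOIN sys.indexes AS i
  ON  i.object_id = ius.object_id
  AND i.index_id  = ius.index_id
WHERE ius.database_id = DB_ID()
  AND i.index_id > 1 /* skip heaps and clustered indexes */
  AND ius.user_updates > 10 * (ius.user_seeks + ius.user_scans + ius.user_lookups)
ORDER BY ius.user_updates DESC;
```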
I almost never wanna try to script all those changes in one go, because you're really likely going to end up doing double and triple work on the same indexes.
One section in particular that I used to pay a lot of attention to is missing indexes, but I don't do that anymore.
Why, you might ask?
They’re based on sort of weird things:
- Average query cost
- Assumed reduction in query cost
- Number of executions
I’ve blogged about what missing index requests really mean in SQL Server before, but:
- Missing index request impact is based on the assumed cost reduction of a single operator
- That operator’s cost is based on totally ancient assumptions about CPU and I/O hardware characteristics
The number of executions is a pretty stable metric, but if the query attached to the request is already running quickly enough, why bother?
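If you want to see the raw ingredients yourself, they live in the missing index DMVs. A sketch, with the ordering expression mimicking the common "improvement measure" convention rather than anything official:

```sql
/* The raw inputs behind missing index requests. avg_total_user_cost and
   avg_user_impact come straight from the optimizer's cost model, with all
   the ancient hardware assumptions that implies. */
SELECT
    mid.statement AS table_name,
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns,
    migs.avg_total_user_cost,  /* average cost of the queries that asked */
    migs.avg_user_impact,      /* assumed percent cost reduction */
    migs.user_seeks + migs.user_scans AS executions
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig
  ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
  ON migs.group_handle = mig.index_group_handle
ORDER BY migs.avg_total_user_cost * migs.avg_user_impact
         * (migs.user_seeks + migs.user_scans) DESC;
```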
Nowadays I skip over the missing index request stuff until I find a query asking for one. Then I figure out how important that query is, and whether the index will change it in a meaningful way.
There’s almost no good reason to go and add every (or even any) missing index request based solely on what’s in these views, especially if you have a bunch of indexes you really need to clean up first.
Thanks for reading!
If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that, and need to solve database performance problems quickly.