I’ve had quite a bit of luck with filtered indexes over the years. When they work well, they’re absolute aces.
But there’s a bit of trickiness to them, both in getting the optimizer to match the expression in your query to the filter on the index (they have to be an exact match), and in getting it to choose that index at all (parameterization is a nuisance for this, since a parameter isn’t guaranteed to match the filter’s literal value).
You also need to define them correctly, which means any columns that you’re filtering on need to reside in the key or included columns as well. Not doing that risks the index not being used.
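To make those points concrete, here’s a minimal sketch. The `dbo.Users` table, `IsDeleted` column, and index name are all hypothetical, assumed for illustration only:

```sql
/* Hypothetical table: IsDeleted is the filter column.
   Note it also appears in INCLUDE, per the advice above. */
CREATE INDEX ix_users_active
ON dbo.Users (Reputation)
INCLUDE (IsDeleted)
WHERE IsDeleted = 0;

/* Matches: the literal predicate exactly matches the index filter */
SELECT u.Reputation
FROM dbo.Users AS u
WHERE u.IsDeleted = 0
AND   u.Reputation > 100000;

/* May not match: @IsDeleted stands in for a procedure parameter.
   The optimizer can't assume it will always be 0, so a cached
   parameterized plan can't safely use the filtered index.
   OPTION (RECOMPILE) embeds the runtime value so it can match. */
DECLARE @IsDeleted bit = 0;

SELECT u.Reputation
FROM dbo.Users AS u
WHERE u.IsDeleted = @IsDeleted
AND   u.Reputation > 100000
OPTION (RECOMPILE);
```

The trade-off with `OPTION (RECOMPILE)` is compilation cost on every execution, so it’s best reserved for queries where the filtered index matters enough to pay for it.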
One thing I hear quite a bit — I’ve probably even said it myself at times — is that unless the index filters the number of rows down an appreciable amount, the filter might be more trouble than it’s worth.
But what if…
I see queries that sometimes have a lot of predicates. Where clause, join clause, the works.
When you have queries that reliably filter on literal values (bits, a status column, something equivalent), consider filtering your index on them even when they’re not selective, in order to accomplish two things:
- Reduce the number of candidate key columns
- Reduce the effort of applying residual predicates
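As a sketch of what that looks like, using a hypothetical `dbo.Orders` table with an `IsShipped` bit that most rows share (names and schema are assumptions, not from the post):

```sql
/* Even if most orders are shipped, filtering on IsShipped = 1
   means it never needs a key slot, and rows in the index never
   have to be re-checked against that predicate at runtime. */
CREATE INDEX ix_orders_shipped
ON dbo.Orders (CustomerID, OrderDate)
INCLUDE (IsShipped)
WHERE IsShipped = 1;

/* The key columns are free to serve the selective predicates */
SELECT o.OrderDate
FROM dbo.Orders AS o
WHERE o.IsShipped = 1
AND   o.CustomerID = 12345
AND   o.OrderDate >= '20240101';
```

Without the filter, `IsShipped` either eats a key column position or gets applied as a residual predicate against every row the seek returns.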
Designing effective indexes is tough, and it’s something I see a lot of people struggle with. Particularly with complex queries that filter, join, group by, order by, maybe even throw a window function in there for good measure…
You see what I’m getting at. For most OLTP queries, I want my indexes to be where clause centric. The faster I can filter rows down to find the ones I care about, the fewer problems I have to deal with later on.
Filtered indexes are just another way of doing that.
Thanks for reading!
If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.