This post is rather short, because it’s more of a link round-up than anything.
I think that index compression is so generally useful that I’d start off any new system with it enabled, just to avoid the hassle of applying it later. It’s particularly useful on systems where data is plentiful and memory is scarce.
Having index structures that are much smaller both on disk and in memory is quite useful. It’s also nice when you’re on Standard Edition and need to make the most of the 128GB cap on the buffer pool.
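If you’ve never flipped the switch, it’s a single option at rebuild time. Here’s a minimal sketch, with hypothetical table and index names; the estimate proc is the built-in way to sanity-check before you commit:

EXEC sys.sp_estimate_data_compression_savings
    @schema_name = N'dbo',
    @object_name = N'SomeTable',
    @index_id = NULL, /* NULL = all indexes on the table */
    @partition_number = NULL,
    @data_compression = N'PAGE';

ALTER INDEX ix_example ON dbo.SomeTable
REBUILD WITH (DATA_COMPRESSION = PAGE);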
For some great information and detail on index compression, check out My Friend Andy™, who has very helpful blog posts about it.
Going Further
If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.
It’s hard to believe that this is still a lively debate, given how far technology has come since the original conversations about if and when fragmentation is a problem, but here we are in 2020, for the third year straight.
I’ll give people some credit where it’s due, though. I’ve seen index maintenance habits change a bit over the years:
Reducing frequency to once a week or once a month from every night
Bumping the reorg and rebuild thresholds way higher than 5% and 30%, to something like 50% and 80% (sketched below)
Abandoning it altogether when using AGs or other data synchronization technologies
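If you use Ola Hallengren’s IndexOptimize, that second bullet translates to something like this sketch (parameter names come from the stock scripts, where the defaults are 5 and 30):

EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationLevel1 = 50, /* reorganize threshold, up from 5 */
    @FragmentationLevel2 = 80; /* rebuild threshold, up from 30 */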
It’s a good start, but people still ascribe far too many benefits to doing it. Rather than rehash everything I’ve ever said about it, here’s a video of Erin Stellato (b|t) and me discussing the pros, cons, whens, wheres, whys, and hows:
Thanks for reading (and watching)!
Going Further
If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.
I can’t remember the last time I saw someone lower fill factor for a good reason. Perhaps those just aren’t the type of folks who need my help, or perhaps no one has done it for a good reason since Windows 2003 dropped and part of Any Good Consultation™ was checking to make sure disks were aligned properly.
What a bad time for disks, that was. Thank the Magnetic Fields that SSD and Flash came along to sit at the other end of a SAN with a shoestring and some Dixie cups between them.
But anyway, the story used to go something like this:
We have a lot of page splits
Are they good or bad?
Aren’t they all bad?
No, but we should look more closely at page density to figure out…
FIXED IT!
What?
I set Fill Factor to 70 for everything. We’re cool.
This is, of course, wrong-headed in the same way that taking something that fixed one specific problem and applying it across the board always is.
What Fill Factor Does
What changing Fill Factor does, whether it’s at the database level or the index level, is fill each data page at the leaf level of an index to your chosen percentage, leaving the rest as free space, but only when you rebuild or reorganize them. At no point in between does SQL Server care about that percentage.
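To make that concrete, here’s a minimal sketch with hypothetical names. The setting only takes effect when the rebuild runs:

ALTER INDEX ix_example ON dbo.SomeTable
REBUILD WITH (FILLFACTOR = 70);
/* Leaves 30% of each leaf page empty right now.
   SQL Server won't maintain that 70% as data changes. */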
At the very best, you’re only buying yourself some time until you have “page splits” again. Depending on how busy a bottom your table is, you might need to do index maintenance quite frequently in order to get that fill factor back.
I can’t imagine how anyone would track bad page splits in a meaningful way, and apply fill factor in a way that would permanently keep them at bay.
The worst part about Fill Factor is that it gets applied to all pages — even ones that are in no danger of facing a split — and every time you apply it, your indexes get bigger as you add free space back to new pages.
Since people always seem to want to shrink the ever lovin’ beet juice out of their databases, I assume they hate big databases that take up a lot of disk space. One way to get a big database really fast is to add a bunch of empty space to all of your tables and indexes.
What Fill Factor Doesn’t Do
Fill Factor doesn’t make read queries faster, especially if you’ve designed them mostly to accommodate Seeks in an OLTP environment. Seeks do not fall victim to these things the way scans do, because they navigate to individual rows.
They do just about the same amount of work no matter what, unless you add more levels to your indexes, but that tends to happen as they get larger, anyway.
And, good news, lowering Fill Factor will make Big Scans way slower. Why? They have to read more pages, because you decided to add a bunch of empty space to them. You’re really racking up the wins here, boss.
Not only is your database artificially huge, but all those reporting queries you’re worried about bringing your transactional queries to a halt take longer and do the dreaded “more reads” 😱 😱 😱
I often call Fill Factor “silent fragmentation”, because it reduces the density of your data pages dramatically, depending on what you lower it to. And it’s the actual bad kind of fragmentation — physical fragmentation — not the stuff your index maintenance scripts look at.
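If you want to see what your current page density looks like, here’s a rough sketch against the physical stats DMV. SAMPLED mode populates avg_page_space_used_in_percent without reading every page:

SELECT
    OBJECT_NAME(ips.object_id) AS table_name,
    ips.index_id,
    ips.avg_page_space_used_in_percent, /* lower = less dense pages */
    ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ips
WHERE ips.index_level = 0 /* leaf level only */
ORDER BY ips.page_count DESC;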
Thanks for reading!
Going Further
If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.
In yesterday’s post, we covered some of the basics of designing nonclustered indexes to make queries go faster, but sometimes those aren’t quite enough.
In today’s post, I’m going to give you more of my paid training about filtered indexes and indexed views.
What I cover here is how to use them correctly, and some of the things they just don’t work well with. Again, if you like what you see, hit the link at the end of the post for 75% off.
Filtered Indexes
Here’s the intro to filtered indexes:
Here are the demos:
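If you’d rather skim than watch, the basic shape is just a WHERE clause on the index. A minimal sketch, using the Stack Overflow database’s Posts table as a stand-in:

CREATE INDEX ix_Posts_Score_filtered
ON dbo.Posts (CreationDate, Score)
WHERE Score > 100; /* only rows passing this predicate get stored in the index */

One gotcha the demos dig into: parameterized queries often can’t use filtered indexes, because the optimizer has to build a plan that’s safe for any parameter value.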
Indexed Views
Here’s the intro to indexed views:
Here are the demos for indexed views:
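Again, if you want the shape without the video: indexed views require SCHEMABINDING, two-part names, and COUNT_BIG(*) when you aggregate. A minimal sketch against the Stack Overflow Votes table:

CREATE VIEW dbo.VoteCountsByPost
WITH SCHEMABINDING
AS
SELECT
    v.PostId,
    COUNT_BIG(*) AS VoteCount
FROM dbo.Votes AS v
GROUP BY v.PostId;
GO

/* The unique clustered index is what actually materializes the view */
CREATE UNIQUE CLUSTERED INDEX cx_VoteCountsByPost
ON dbo.VoteCountsByPost (PostId);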
Going Further
If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.
There’s a bit of magic to index tuning, once you move beyond the basics. But we do have to start with the basics. In order to do that quickly, I’m putting a couple videos from my paid training in this post. If you like what you see, hit the link at the end of the post to get 75% off everything.
Going Further
If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.
Without fail, I seem to have this conversation every week:
Why is this a heap?
The vendor doesn’t like clustered indexes.
Ah, okay, why?
I don’t know.
There’s a nonclustered primary key, though…
Isn’t that the default?
No, you have to go out of your way to choose that.
Wann Ist Es
Heaps aren’t always bad, but you have to be careful when you choose to use them, because the problems that sneak up with them are tricky to detect and annoying to fix. If you never run into them, great.
But if you do, try to keep an open mind. Clustered indexes work wonderfully for a great many people, and it’s unlikely that you fall far out of that category, especially if you have anything within a horseshoe or a hand grenade of an OLTP workload.
When I look at client workloads that have problems with heaps, the main things that I call out are either:
Forwarded fetches that result in uneven I/O patterns
Captive pages from tiny deletes that don’t remove empty pages
Rather than go on and on about this stuff, here are a couple videos where I discuss the downsides and upsides of heaps. Normally they’re part of my paid training, but I’m making them available here to you for free:
To find tables that might need clustered indexes, a good, free tool is sp_BlitzIndex. It’s part of an open source project that I’ve spent a lot of time on. It’ll warn you about heaps that have forwarded records in them, and that have been deleted from. From there, it’s up to you to figure out if the table is big and critical enough to warrant adding a clustered index to.
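If you’d rather eyeball the raw numbers yourself first, forwarded fetches show up in the operational stats DMV. A rough sketch (the counters reset when SQL Server restarts):

SELECT
    OBJECT_NAME(ios.object_id) AS table_name,
    ios.forwarded_fetch_count
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS ios
WHERE ios.index_id = 0 /* index_id 0 = heap */
AND   ios.forwarded_fetch_count > 0
ORDER BY ios.forwarded_fetch_count DESC;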
Going Further
If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.
Like many things, applications tend to evolve over time. New and improved functionality, Bug Fixes And Performance Improvements™, and ever-expanding schema usually lead to new queries, or tweaks to existing ones.
When designing new queries, creating or tweaking indexes to help them along is perhaps a bit more intuitive, depending on how comfortable you are with such things. If you’re starting from near-zero there, check the link at the end of my post for 75% off of my video training. I’ll teach you how to design effective indexes.
One of the more common issues I see is that someone tweaked a query to support new functionality, and it just happened to use indexes well enough in a development environment that’s much smaller than real customer deployments.
In these cases, a better index may not be recommended by SQL Server. If it’s not obvious to the optimizer, it may not be obvious to you, either. No offense.
All Devils
Let’s say we have this query. It’s nothing magnificent, but it’s enough to prove a couple points.
CREATE INDEX c ON dbo.Comments(CreationDate, UserId);
SELECT TOP (1000)
    C.CreationDate,
    C.Score,
    C.Text,
    DisplayName =
    (
        SELECT
            U.DisplayName
        FROM dbo.Users AS U
        WHERE U.Id = C.UserId
    )
FROM dbo.Comments AS C
WHERE C.CreationDate >= '20131215'
ORDER BY C.CreationDate DESC;
Our index serves three purposes:
The predicate on CreationDate
The order by on CreationDate
The correlated subquery on UserId
It’s important to keep things like this in mind, because sorted data is useful for making all three of those operations more efficient.
Let’s Go
The query plan for this is quite simple and efficient.
It’s hard to ask for anything faster, here, even if I am running on a VM. Two seeks, a small lookup, and everything done in 5 milliseconds.
Wolves At
But then one day a pull request comes along that changes the query slightly, to let us also filter and order by the Score column.
It looks like this now:
SELECT TOP (1000)
    C.CreationDate,
    C.Score,
    C.Text,
    DisplayName =
    (
        SELECT
            U.DisplayName
        FROM dbo.Users AS U
        WHERE U.Id = C.UserId
    )
FROM dbo.Comments AS C
WHERE C.CreationDate >= '20131215'
AND   C.Score >= 8
ORDER BY
    C.CreationDate DESC,
    C.Score DESC;
In real life, if you’re smart, your queries will be parameterized. In this blog post, the values are hard-coded to show you what can happen even with small changes to a query. The query plan looks like this now:
Arf Arf
We still seek into our nonclustered index to search for CreationDates that pass our predicate, but now we need to evaluate the Score predicate when we do our key lookup.
Rather than just getting 1000 rows out immediately, we need to keep finding rows that pass the CreationDate predicate, but that also pass the Score predicate.
Since that’s judged by the optimizer to be a much more “expensive” task, a parallel plan is chosen. Despite that, it still takes 231 milliseconds of duration, with 844 milliseconds of CPU time.
This could have many effects on the workload, depending on how frequently a query executes within the workload. Parallel queries use more CPU threads, which can get tricky under high concurrency, since they’re a finite resource based on CPU count.
We can save a lot of the problems here with a slightly adjusted index, like this:
CREATE INDEX c ON dbo.Comments(CreationDate, Score, UserId);
Delicious
With that index in place, we get a much more efficient plan back that doesn’t need to go parallel to stay competitively fast. It’s not quite as fast as the original query, but it’s Good Enough™.
Cheese Wants
I know you’re all smart folks out there, and you’re going to have questions about the index I created, and why the columns are in the order they’re in.
If you have a copy of the StackOverflow2013 database, you might do some poking around and find that there are 374,865 rows that pass our CreationDate predicate, and only 122,402 that pass our Score filter, making Score more selective for this version of the query.
But that’s just this one execution, and things could be a lot different depending on what users filter on. The big advantage to keeping the columns in this order is that the order by remains supported. Without that, the optimizer goes back to choosing a parallel plan, and asking for a memory grant.
Thanks for reading!
Going Further
If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.
Building on yesterday’s post about cleaning up old indexes: Once you’ve gotten rid of indexes that aren’t used anymore, you’re gonna have some more work to do. I know, it sucks, but hopefully it won’t be too difficult or confusing. If it is, hit the link at the end of the post to drop me a line for some consulting help.
The next thing I usually do is look for nonclustered indexes that have overlapping columns in the key to merge together.
Here are four basic patterns to look at, in order of how useful they are to us currently:
Key columns are an exact match
Key columns are super/sub-sets of other indexes
Key columns match to a point, but then differ
Key columns are the same, but in a different order
For the last two, I put those aside at first. Remember that we already got rid of indexes that aren’t being used to make queries go faster, so now we’re left with indexes that do get used (though how much will vary dramatically from database to database).
Order Is Everything
Key column order matters to us, because it defines how queries can access data in the index. Let’s take an imaginary index keyed on columns (A, B, C).
If you want to search on:
A: Fast
A, B: Fast
A, B, C: Fast
B: Slower
B, C: Slower
C: Slower
Column A being the leading key column in the index means that index data is sorted first by column A. If there are any duplicates in there, then column B will be sorted within that group.
One way to visualize that is like this:
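A    B    C
-----------
1    1    2
1    2    1
1    2    3
2    1    1
2    3    2
3    2    2

(Made-up values: rows are sorted by A first, then by B within ties on A, then by C within ties on B.)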
Finding any value(s) in column A is easy, because they’re in sorted order. But finding values in any combination of B/C means we have to scan through all the values to find ones we care about, if we’re not also searching on A.
If we have indexes on
A, B, C
B, A, C
Are they identical? Maybe sorta kinda. This is where domain knowledge about your application comes into play: knowing whether queries most often filter on A or B, and which queries are more important to the workload. If you’re not sure, leave ’em both alone for now.
Definitional
Let’s say you have a bunch of indexes from the first two categories, where the key columns might look something like this:
Duplicates:
Key: A, B Includes (D, E)
Key: A, B Includes (D, E, F, G)
Super/sub-sets:
Key: D, E Includes (A, B)
Key: D, E, F Includes (A, B, C)
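To make the duplicate pair concrete, here’s a sketch with hypothetical names. The wider index covers everything the narrower one does, so the narrower one can usually go:

/* Instead of keeping both of these... */
CREATE INDEX ix_narrow ON dbo.SomeTable (A, B) INCLUDE (D, E);
CREATE INDEX ix_wide   ON dbo.SomeTable (A, B) INCLUDE (D, E, F, G);

/* ...keep the wider one and drop the narrower one */
DROP INDEX ix_narrow ON dbo.SomeTable;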
There are some other things we need to consider about the indexes:
Included columns: Can be merged safely; order doesn’t matter
Only stored at the leaf level, not ordered
Uniqueness: Can only be preserved for exact key matches
Unique D, E is different from D, E, F
Filters: Look at usage metrics to figure these out
A filtered index may not be useful to all queries, especially when parameterized
Practical
In theory, wider indexes are better indexes, because they’re more useful to more queries.
With indexes that fully cover all the columns our queries reference, we don’t need to worry about the optimizer sometimes choosing our index and sometimes choosing the clustered index depending on how many rows it thinks it’s going to have to deal with.
That’s generally a positive, but there are some caveats:
Indexing columns that are updated frequently can exacerbate locking and deadlocking
Indexing max columns over and over again can really bloat out our database
Indexing to include every column in the table creates whole copies of the table
You have to strike a careful balance with indexes. Enough, but not too many. Covering, but not in a counterproductive way.
To find indexes that can be removed because they’re overlapping, a good, free tool is sp_BlitzIndex. It’s part of an open source project that I’ve spent a lot of time on. It’ll warn you about indexes that are exact and borderline duplicates, so that you can start looking at which are safe to merge together.
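A typical call looks something like this; check the First Responder Kit docs for the full parameter list in your installed version:

EXEC dbo.sp_BlitzIndex
    @DatabaseName = N'StackOverflow2013';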
In some posts to follow, we’ll cover index design strategies that work, and how you can improve on SQL Server’s missing index requests.
Thanks for reading!
Going Further
If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.
The longer your application has been around, the more developers and queries it has seen. There are a lot of ways the indexes could look, depending on how you approach performance issues with customers.
If you’re the type of shop that:
Creates specific indexes for clients experiencing problems or
Packages indexes into patches that everyone gets or
Allows clients to manage indexes on their own
You could be dealing with a lot of stray indexes depending on which path you’ve chosen. If no one is going back and looking at how all those indexes get used, you could be missing a lot of performance optimizations.
Three Ways
Let’s talk about the three ways that not cleaning up indexes can hurt performance, which I see most often while working with clients:
Buffer Pool
SQL Server doesn’t work with pages while they’re stored on disk. It’s architected to work with pages stored in RAM, an area called the buffer pool, and that’s that. The more data you have, by way of rows stored in tables and indexes that make copies of data (except the clustered index, which is the base copy of your table), the more objects you have contending for space in the buffer pool.
It doesn’t matter if an index hasn’t been used in 10 years to help a query go faster: if you need to load or modify data in the base table, the relevant index pages need to be read into memory for those changes to occur. If your data is larger than memory, or if you’re on a version of SQL Server with a cap on the buffer pool, you could be hitting serious performance problems going out to disk all the time to fetch data into memory.
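A rough way to see who’s squatting in your buffer pool right now, per database (a sketch; this DMV can be slow on servers with a lot of memory):

SELECT
    DB_NAME(bd.database_id) AS database_name,
    COUNT_BIG(*) * 8 / 1024 AS cached_mb /* pages are 8KB each */
FROM sys.dm_os_buffer_descriptors AS bd
GROUP BY bd.database_id
ORDER BY cached_mb DESC;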
How to tell if this is a problem you’re having: Lots of waiting on PAGEIOLATCH_XX
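One rough way to check, keeping in mind these counters are cumulative since the last restart (swap the filter to WRITELOG or LCK% for the sections below):

SELECT
    ws.wait_type,
    ws.waiting_tasks_count,
    ws.wait_time_ms
FROM sys.dm_os_wait_stats AS ws
WHERE ws.wait_type LIKE N'PAGEIOLATCH%'
ORDER BY ws.wait_time_ms DESC;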
Transaction Logging
The transaction log is SQL Server’s primitive blockchain. It keeps track of all the changes that happen in your database so they can be rolled back or committed during a transaction. It doesn’t keep track of things like who did it, or other things that Change Tracking, Change Data Capture, or Auditing get for you.
It also doesn’t matter (for the most part) which recovery model you’re in. Aside from a narrow scope of minimally logged activities, like bulk loads and index rebuilds, everything gets fully logged. The big difference is who’s responsible for clearing the log: under FULL and BULK_LOGGED, that’s you, via log backups; under SIMPLE, SQL Server does it at checkpoints.
Just like with the buffer pool needing to read objects in from disk to make changes, the changes to those various objects need to be written to the transaction log, too. The larger those changes are, and the more objects get involved in those changes, the more you have to write to the log file.
There’s a whole layer of complication here that is way more than I can cover in this post — entire books are written about it — but the idea I want you to understand is that SQL Server is a good dog, and it’ll keep all your indexes up to date, whether queries use them to go faster or not.
How to tell if this is a problem you’re having: Lots of waiting on WRITELOG
Lock Escalation
The more indexes you have, the more locking you’ll likely have to do in order to complete a write. For inserts and deletes, you’ll have to hit every index (unless they’re filtered to not include the data you’re modifying). For updates, you’ll only have to lock indexes that have columns being changed in them. The story gets a little more complicated under other circumstances where things like foreign keys, indexed views, and key lookups get involved, but for now let’s get the basics under control.
When you start making changes to a table, SQL Server has a few different strategies:
Row locks, with a Seek plan
Page locks, with a Scan plan
Object locks, with a Scan plan
Because SQL Server has a set amount of memory for managing locks, it’ll attempt to make the most of it by taking a bunch of row or page locks and converting them to object locks. That threshold is around the 5,000 mark. The number of indexes you have, and whether the plan is parallel, both contribute to it.
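Escalation attempts are tracked per index in the operational stats DMV. A sketch to see where it’s happening:

SELECT
    OBJECT_NAME(ios.object_id) AS table_name,
    ios.index_id,
    ios.index_lock_promotion_attempt_count,
    ios.index_lock_promotion_count /* attempts that actually succeeded */
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS ios
WHERE ios.index_lock_promotion_attempt_count > 0
ORDER BY ios.index_lock_promotion_count DESC;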
How to tell if this is a problem you’re having: Lots of waiting on LCK_XX
Sprung Cleaner
In this video, which is normally part of my paid training, I discuss how over-indexing can hurt you:
To find indexes that can be removed because they’re not used, a good, free tool is sp_BlitzIndex. It’s part of an open source project that I’ve spent a lot of time on. It’ll warn you about indexes that are unused by read queries, and even ones that have a really lopsided ratio of writes to reads.
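If you want to peek at the kind of raw numbers tools like sp_BlitzIndex look at, here’s a sketch against the usage stats DMV. Be careful: these counters reset at restart (and on index rebuilds in some versions):

SELECT
    OBJECT_NAME(i.object_id) AS table_name,
    i.name AS index_name,
    us.user_seeks,
    us.user_scans,
    us.user_lookups,
    us.user_updates /* writes the index had to keep up with */
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS us
    ON  us.object_id = i.object_id
    AND us.index_id = i.index_id
    AND us.database_id = DB_ID()
WHERE i.index_id > 1 /* skip heaps and clustered indexes */
AND   OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
ORDER BY us.user_updates DESC;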
Those are a great place to start your clean-up efforts, because they’re relatively low-risk changes. If you have indexes that are sitting around taking hits from modification queries and not helping read queries go faster, they’re part of the problem, not part of the solution.
Thanks for reading!
Going Further
If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.
If your developers could intuit every possible way that your application could be used, and were really good at performance tuning SQL Server, you probably wouldn’t be reading things like this.
But here I am, and here you are, and here we are, staring at each other desperately searching for a reason to go on.
Many times when I’m trying to help people achieve better performance, indexes are a big part of my analysis. Unless you have a brood of queries that are just so awful they’d thwart any and every index you could possibly throw under them, you can’t avoid this inevitability.
At the end of these calls, what I often get met with is: This is great, we just have to run it by our vendor.
Lemme explain why this is wrong, because I’ve seen that end result. Things don’t get better when you make them wait.
Sycamore Tree
You are a vendor. You vend software. The way clients use that software is up to them. They may customize it in some weird way, or they may have way more data stuck in their local site than other people.
It’s sort of like being a parent: you vend life to your kids, and you can teach them how you think they should use it, but there’s a really good chance they’re gonna make different choices.
When this happens, you need to be okay with the fact that your definition of how the software gets used is no longer applicable. This goes double for vendors who don’t bring much SQL Server performance tuning expertise to the table to begin with, because the indexes you had for your ideal usage pattern probably weren’t great either.
Running the new indexes they need by you for their usage of the software is a lot like you running what you wanna have for breakfast by me. I have no idea how you plan to spend your day. A feedbag of eggs and bacon might be totally reasonable.
The one exception to this is that the vendor might be able to take my suggestions and apply them to other installations — but this stinks of a different problem than hyper-specific customization — it means you have a lot of unhappy customers out there and you just got lucky that one was willing to pay for real help.
Plastic Surgeon
I’m going to spend a number of posts talking about index follies I see all the time when looking at software vendor design patterns. For now, watch this video from my paid training to learn more about why you need nonclustered indexes:
Those follies include things like not cleaning up old indexes, not adding sufficient new indexes, general index design patterns, clustered indexes, and all the silly index settings I see people changing hoping to make a difference.
This post is to prepare you for the fact that indexes need to change over time, as your application grows. Index tuning is something you need to stay actively engaged with, otherwise you’re leaving a lot of performance on the table.
Especially for folks in the cloud, where hardware size is a monthly bill, this can be an expensive situation to end up in.
Thanks for reading!
Going Further
If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.