User Experience Under Different Isolation Levels In SQL Server




Thanks for watching!

Video Summary

In this video, I explore the nuances of isolation levels in SQL Server, specifically focusing on read committed and read committed snapshot (RCSI) isolation modes. Given my recent foray into unexpected hot weather, it’s clear that even my brain can get a bit foggy when things heat up! However, today’s content is anything but lukewarm. I delve into how these isolation levels behave differently under various conditions, particularly highlighting the challenges and inconsistencies you might encounter with read committed. Through a series of demonstrations, I show how queries running under read committed can be blocked or return jumbled results due to changes in the underlying data, even for short durations. By contrast, RCSI provides a more consistent snapshot view but still doesn’t reflect intermediate changes within transactions. This video aims to help you understand the trade-offs and choose the most suitable isolation level based on your application’s needs, ensuring better performance and fewer surprises.

Full Transcript

Erik Darling here with Darling Data on an unexpectedly hot and sticky day here in New York. So unexpectedly hot and sticky that I think parts of my brain have just ceased functioning. It’s just shut down. Nap time. We will siesta. We will siesta you tomorrow. That was bad. That was real bad. I should have thought about that before I said it out loud. I have that problem, though. I do have that problem. So today’s video is sort of about isolation levels. I’ve talked about them in a number of videos, and I like talking about them, because exploring isolation levels is an important part of any SQL Server journey. Eventually you will hit strange concurrency phenomena that you can’t explain or reproduce or even find any real evidence of. And what I’ve said in those videos is that almost no isolation level is universally perfect for every workload. It’s more about you making a choice about how you want your application to behave under certain circumstances. So we’re going to compare and contrast a little bit between read committed, the pessimistic isolation level, and read committed snapshot isolation. I know that I recorded a video sort of about this before, but I came up with this demo that I like because it highlights a lot more of the weird changes that can take place while a query under read committed is unable to make progress, or is even just making slow progress for some reason. In the case that I’m using, there’s some blocking involved. And we’re going to compare and see what happens.

And in real life, you might have a query that just has to read from a big table, or scan along or seek along a big index. And maybe that index isn’t in memory, and maybe it takes you a couple of seconds. A lot of stuff can change in a couple of seconds. The point of this isn’t, you know, begin tran, do a few things, and look at all the stuff that changed in 10 seconds. It’s that a lot of this stuff can change very quickly, within a few hundred milliseconds, and your query would still return just jacked up looking results. So I’m going to hit execute here to reload this table. I’ve got a table called read committed stinks. You know, perhaps a little overkill in the table name. I’m willing to give read committed a little bit of credit. I’m getting a haircut tomorrow, and this patch here is just annoying the crap out of me. But if I had the wherewithal, I would probably just shave my head at this point. Nothing but trouble. So this table has some standard account information stuff.

You’ve got an ID and an account ID, how much money you have in the account, your first name, your last name, when the account was created, and the last time you were active in your account. And I couldn’t think of any good people names, so I just stuck a bunch of brunch menu items in the table. The prices do not reflect my respect for, adoration of, or preferences for these brunch items in any way.

I just rattled them off the top of my head. So please don’t try to infer anything about my brunch habits based on this table. And what I want to do is just give you a quick view of what this table currently looks like.

It pretty much looks like what I did up there, except now it’s in a nice Excel format. All right. So last activity is all null right now. We got first names, last names, values, everything.

Everything that you could want in a table. Really. Except maybe throw a nice XML or JSON column in there that just concatenates all that stuff together so that you can pull that out with your application instead.

I don’t know. People do dumb things all the time. So I want to make sure that read committed snapshot isolation is off for the first run through. And what I’m going to do is, well, I’m not going to do it quite yet.
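If you’re following along, flipping that setting looks something like this. This is just a minimal sketch; WITH ROLLBACK IMMEDIATE kicks other sessions out of their open transactions, so save that for a demo database.

ALTER DATABASE CURRENT
SET READ_COMMITTED_SNAPSHOT OFF
WITH ROLLBACK IMMEDIATE;

/*Double check where things landed*/
SELECT
    d.name,
    d.is_read_committed_snapshot_on
FROM sys.databases AS d
WHERE d.name = DB_NAME();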

I’m going to have to hold on to those horses of yours for a couple seconds. So over in this window, I have a couple queries. This first query is doing a select from our knee-jerk table name.

And the idea is to look for anything with an account value greater than or equal to 1,000. You know what’s kind of funny about currency in SQL Server? You could put any currency symbol in front of that 1,000, and it would still find anything over 1,000.

SQL Server does not do currency conversion for you. So if I made that a pound sign, or whatever the euro symbol is, it would just look for 1,000. It wouldn’t be like, oh, well, the pound is worth an extra 50 cents or something, so we’re going to look for anything that’s over 950.

No, it just looks for anything over 1,000. You can put any currency symbol you want in there.

You could put, like, I don’t know, whatever they use in Zimbabwe. If Zimbabwe is still a country, I’m not sure. You could put, like, 1,000 Zimbabwean dollars on there, and it would just be like, yeah, it’s over 1,000.

No problem. SQL Server does not do currency exchange rates for you. So you’d have to write a CLR function to go do a web call and check exchange rates and bring that back and then do some local conversion.
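If you want to see that for yourself, here’s a quick standalone sketch, separate from the demo scripts. All of these literals parse to the exact same money value, so the symbol out front is purely decorative as far as comparisons go.

SELECT
    dollars = $1000,
    pounds  = £1000,
    euros   = €1000;

/*Same value, same comparison result, no exchange rates involved*/
SELECT
    is_equal =
        CASE WHEN $1000 = £1000 THEN 1 ELSE 0 END;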

I don’t want to give you any worse ideas, though. So this query goes and looks for anything with an account value over 1,000. And if we think about what I put into the table up here, everything is 1,000 or greater, right, 1,000 through 10,000.

And I’ve got this little decoder column because I’m going to make changes to the table that are not going to be reflected or might be extra reflected in the final results.

And I have GO 2 after this, so this is going to execute this one query twice. I want the first go to get blocked, to show you what happens when a query gets blocked.

Even for, again, a couple hundred milliseconds that all this stuff could change in. And then it runs the query again afterwards, because what I want you to see is that read committed, this pessimistic isolation level, is not a point-in-time snapshot of your data, and that running the same query twice in a row can get you very different results.
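The first window is shaped roughly like this. The table and column names are my best guesses at what’s on screen, not the verbatim demo script, and GO 2 just runs the batch above it twice in a row.

SELECT
    rcs.id,
    rcs.account_id,
    rcs.account_value,
    rcs.first_name,
    rcs.last_name,
    rcs.last_activity
FROM dbo.read_committed_stinks AS rcs
WHERE rcs.account_value >= $1000
ORDER BY rcs.id;
GO 2

/*Then one last select of everything in the table, to compare against*/
SELECT
    rcs.*
FROM dbo.read_committed_stinks AS rcs;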

And then I just have a query down at the end that just gives a select of everything in the table, right? So let’s come back up here a little bit, and let’s switch tabs, and let’s go and begin a transaction.

And what we’re going to do is we’re going to update the account value for ID 7, and we’re going to leave that hanging for a second. Now, again, just as I start this one off, this isn’t stuff that has to go on for a long time in order for you to get mangled results from read committed, the pessimistic isolation level. All of these queries that I’m going to run make changes to the table. For instance, this one is going to update three rows to set account value to 999.

If you remember, our original query is looking for anything with an account value greater than or equal to 1,000. And so these rows would no longer qualify.
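The blocking window looks something like this sketch. The specific IDs and dollar amounts here are stand-ins for whatever the demo script actually uses, and in the real demo the commit only happens after all the other changes described below have run.

BEGIN TRANSACTION;

/*This grabs the lock that the reader gets stuck behind at ID 7*/
UPDATE rcs
    SET rcs.account_value = $5001
FROM dbo.read_committed_stinks AS rcs
WHERE rcs.id = 7;

/*While the reader is stuck, knock a few rows under the filter line*/
UPDATE rcs
    SET rcs.account_value = $999
FROM dbo.read_committed_stinks AS rcs
WHERE rcs.id IN (2, 3, 4);

/*...primary key swaps, deletes, and re-inserts go here, and only then:*/
COMMIT TRANSACTION;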

We’re going to mess with a couple primary key values. And I realize that primary keys don’t often change in a database, but you might be working with other data that has more volatile keys.

You might be working with data that does change fairly often, like the key of an index that changes fairly often, right? Certain relational values might change from time to time.

And so you might see weird things where rows get thrown around and just juggled all over results. Think about, like, the context of Stack Overflow, where you might get a hot network question, and your score might jump tremendously, very quickly.

Or you might get downvoted into oblivion very quickly if you give a bad answer or ask a bad question or something. All this stuff can change, rapid fire. And you could move around in results: you could be missing from query results that you should have been in, or show up in query results that you shouldn’t be in.

It’s terrible, right? So I’m going to delete some rows, and then I’m going to reinsert a couple rows. Now, what I’m doing here is, you know, again, kind of tricky. I’m just switching French toast and steak frites around, right?

So they’re going to both get values of 1,000, but I’m going to flip their primary key values and their user account values, I think. Yeah, that’s all flipping.

So, yeah, made a bunch of changes, right? And again, I don’t know how long I’ve been talking about this for, 10, 20, 30 interminable, incalculable numbers of seconds. But now I’m going to commit all those changes.

So, bloop, you go and do something. And now this query is finished. So what we got in our results, this first chunk of results up here, and if this SSMS would be so kind as to let me drag things around, the first set of results, remember, we paused on ID 7, right?

ID 7 was where we paused, because that’s where we got blocked as we were setting the account value to 5,001. So we made a whole bunch of changes to this table that are either reflected because we moved things past 7 or not reflected because we made changes before the ID 7, right?

So we read up to ID 7, got stuck, and we made a bunch of changes all around ID 7 that made these results weird. So like ID 9 didn’t change. ID 11 used to be ID 6.

ID 6 got set to ID 11. ID 7 was where we were blocked. This row got deleted. This value, this value, and this value all got changed to 999.

This got deleted and reinserted with different values. This got deleted and reinserted with different values. And now we have some weird stuff in the table. So coming back over here, right? Remember the table definition where I don’t only have a primary key on ID, but I also have a unique constraint on account ID.

And if we come over and look at the initial set of results, right? We have stuff that’s kind of all over the place.

We have two account IDs, 1006. We have two scotch eggs in a row with different IDs, but the same account ID. We have two steak frites, right?

If we look here, where’s that other? Whoa, that wasn’t what I wanted to do. Yeah, we got stuff all over the place. Like here’s, zoom back in there. Come on, zoom it, work with me.

We got one steak frites here. We got another steak frites here. We got a bunch of rows with, you know, account values that changed, right? Like these got set down. This one got set down.

This is the first run. This is just the blocked query, right? So a bunch of stuff changed around this read query while it was stuck. It did a bunch of reads and then went, oh, I don’t know. I don’t care if things change all around me.

All I know is that I’m stuck here, right? We’re just stuck on ID 7 and we made changes all around ID 7. And really the results should have looked like this. We should have only gotten six rows back and we should have gotten six rows back with these values, right?

Like this is the actual state of the table for accounts that have a value greater than or equal to 1,000 after all that stuff goes through, right? And this is just what the table looks like as a whole.

So you can see like this is all just janky. Like I think one of the things that like really sticks out is in the first result set, this last activity column, even though we made a bunch of changes and updated this for a bunch of rows, only two of the rows actually ended up with a last activity date, right?

Well, the rest of them are null. So this looks bad and weird, okay? Like it’s not great. Like if you think about like repeatable read or serializable, like sure you wouldn’t have necessarily those problems.

You would just have a fun deadlocking problem, because, you know, if you tried to run an update in the transaction and then selected data with serializable and then you ran another update, it would just deadlock, right?

Just same with repeatable read. It’d be over with, through, kaput, finito. So let’s contrast that with, of course, RCSI. Now I’m not saying that the RCSI results are necessarily better because the RCSI results are, well, they’re more consistent.

I’ll give it that. But you’re still not getting back like the intermediate changes within all those things, right?

So if we, I think I have to rerun this because it’s going to tell me that, oh no, it didn’t work. Okay, good. All right. So let’s reload the table. And for some reason, something weird opened up on my monitor.

So forget that. So let’s reload the table. And now we have the table set back to its original state. We have RCSI turned on. And now if we do this and we look at the table, we come over here and we run this, we’re just getting a snapshot of the table back before this change to 5001, right?

Because that first update that we run over here, we’re saying set account value equals 5001, where ID equals seven.

So we’re still getting back the last known good row for ID seven, right? This did not change to 5001. We did not have a last activity date. And the same will go for all the other stuff that happens within this transaction, right?

If we update these things and we come back over here and we run these queries again, nothing in this table has changed, right? Like everything is exactly the same as before things ended up being committed.

And if we run this and do the inserts and we run this, like we can still see that we’re not reflecting any changes, right? We’re not blocked, but we’re still not showing the changes from within this transaction yet.

If we commit this, of course, let’s just make sure that’s extra committed, and we run this. Now we see all the changes. Compare that with the first demo, when the query got blocked up and showed a bunch of wrong stuff, inconsistent state data from the table, as things changed around it.

And then this one down here, which, in the first demo, was essentially the right results. We just had to run the query again after the blocking resolved to get them.

And this is sort of like the full state of the table. So yeah, basically, coming back to my original point, no isolation level is exactly perfect.

What you want to ask yourself when you’re designing applications and when you’re trying to choose things like isolation levels is what you would prefer. With read committed, the pessimistic isolation level, if you remember that first set of queries that we did, we got blocked and our query returned bad results anyway.

So being blocked in SQL Server is not a pleasant user experience. It’s not fun for anybody. So our query got blocked and then still gave us crappy results at the end.

Under RCSI, we didn’t get blocked. We didn’t see all of the reflected changes just yet, but we didn’t get blocked, right? The query results returned immediately. I’m not saying that users would necessarily be satisfied with that, because they got back that snapshot of the table where ID 7 still hadn’t changed, right?

We did all that stuff around ID 7 and we didn’t see any of those changes until the transaction committed. That might be right for you, but that might not be exactly what end users are expecting either.

So really, it’s just a matter of you asking yourself the question, do I want users to get, you know, like committed data back very quickly or do I want to risk users getting blocked and like getting like crappy data back after getting blocked for a long time?

For me, I would much, much rather prefer, I don’t think I need to rather prefer. I would much rather or I would much prefer, not much rather prefer, that’s absurd, I should smack myself, self-flagellate for using English so dumbly.

I would prefer that users get consistent data back until a transaction commits and then see the final result of everything that happened in that transaction rather than get blocked, get back some plum weird results and then have to rerun it again anyway.

So, you might not want that though. For some reason, you might hate your end users and you might want them to get blocked and you might want them to get janky results back and have to run the query again.

Maybe you charge them by the query, maybe you charge them by the CPU tick, maybe every month, at the beginning of every month, you record the number of CPU ticks and at the end of the month, you record the number of CPU ticks and whatever the difference is, you charge like a dollar a tick or something.

Right? That’s a pretty good pricing model, I think. Wait, isn’t that a DTU? That’s interesting. Anyway, yeah, these are the application design questions that you should ask yourself.

Now, again, something that I think obviously bears repeating even though many of you, if you’ve been watching my videos, are already well aware of this. If we used NOLOCK, everything would have been terrible.

Right? We would have seen not good stuff along the way. So, we’re not NOLOCKing. We’re not even going to demo it, because I don’t want people to turn off the sound and just be like, oh, look, Erik used NOLOCK in the demo and it didn’t get blocked, and maybe that looks cool.

I don’t, I don’t, I don’t think that’s all right either. So, we’re going to not do that. What we’re going to do is take this, take this time to reflect upon the applications we’re developing and ask ourselves, what would we prefer users experience with our application?

You know, it might be okay for users to just hit refresh a few times and get eventually the right results, eventual consistency, or you might just want them to sit there and get blocked for a while and then get weird results back.

Start threatening to sue you or something. I don’t know. I don’t know how all that works. So, take this time, take this lovely, well, I mean, it’s Monday afternoon here.

I don’t know when you’re going to be watching this. So, take this, take this moment now, this moment, while you have it. This time is ticking and precious and fleeting and you’re never going to have this time back.

So, take this time to really think carefully about how you want your applications to function and how you want your end users to experience your applications. RCSI is usually a better choice for most applications and most application developers.

It takes away a lot of the headaches that come with having to troubleshoot blocking and deadlocking between readers and writers and it just generally gives you more consistent results without the concerns of no lock or read uncommitted.

So, yeah. Yeah. I think that’s about good there. Like I said, it’s hot and my brain is fried.

So, we’re going to end this one here. Thank you for watching. I hope you enjoyed yourselves. I hope you learned something. I hope you all go turn RCSI on immediately. I kid, but I don’t kid.

If you liked this video, there’s a little thumbs-up-looking button, I’m told, on YouTube that you can say thank you with. Comments are another nice way to say thank you.

And if you would like to see more videos like this immediately, hot off the presses as soon as Beer Gut Magazine finishes cutting me my check and shooting me up with my adrenochrome, I can talk through this, and I record the video, then you can subscribe to the channel and experience me in almost real time.

I guarantee you I’m quite a real time experience. You should do that. You and me. Subscribey buddies.

I’m going to go open windows and take a cold shower. Maybe not in that order, but in some order. Eventually that’ll be consistent too.

Anyway, thank you for watching.

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. Blog readers get 25% off the Everything Bundle — over 100 hours of performance tuning content. Need hands-on help? I offer consulting engagements from targeted investigations to ongoing retainers. Want a quick sanity check before committing to a full engagement? Schedule a call — no commitment required.

Loops, Transactions, and Transaction Log Writes In SQL Server




Thanks for watching!

Video Summary

In this video, I delve into an interesting aspect of SQL Server transaction management by exploring how loops and transactions interact in a practical scenario. I demonstrate three different approaches for handling modifications within a loop—allowing auto-commit transactions to handle each operation individually, wrapping all operations in a single explicit transaction, and conditionally committing based on specific criteria. By using sp_PressureDetector to monitor the transaction log activity, I show that optimizing the transaction commit strategy can significantly reduce the load on the transaction log and speed up the overall process. Whether you’re dealing with cursors or other looping constructs, this video offers valuable insights into how to tune your SQL Server operations for better performance.

Full Transcript

Erik Darling here with Darling Data on my tenth take at starting this thing. Usually recording these videos in the morning is against my religion, because I am not suitably or favorably dispositioned to be presentable for this sort of work before like 3pm. So this video is about loops, in terms of the work that you can do, transactions, and the transaction log in SQL Server, and how you can use transactions in loops to be more favorable to the transaction log and speed those loops up considerably. Well, I don’t often recommend looping code. Sometimes it is unavoidable and you’ve got to do it because it’s unavoidable, and the two are synonymous. So what we’re going to do is look at three different options you have for writing to, or doing modifications in, a loop. If these were select queries, we wouldn’t care, because select queries don’t do anything to the transaction log and this would have no impact. It would just be about tuning the select queries in the loop, you know, tune your cursor queries or whatever. So, yeah, we have three options here, and we’re going to look at the first one, which is allowing SQL Server to behave as it normally does and use implicit, well, I mean, automatic transactions. It’s not implicit transactions, which are a completely different thing. But implicitly, each one of these statements is a transaction, right? So this insert is a transaction. This update is a transaction. This delete is a transaction. And what we’re going to do is highlight this and go over to this window, this tab, this tabulature. And we’re going to use a newer addition to sp_PressureDetector that allows you to sample a server for a number of seconds and look at what happened in that number of seconds. So I’m going to kick this off. And I’m going to run this. And if this demo lives up to prior executions, it should finish in about seven or eight seconds. Look at that, seven seconds. Look at that professional presenter, even at whatever time it is in the morning.

Nailed it, right? Not like I didn’t just run through this or anything. So looking at sp_PressureDetector’s results, the stuff that I want to focus on is first up here. This second line is going to be writes to the transaction log for our database. Looks a bit stranger now that I see it on the screen. But if we look over here at that second line, we wrote about 235 megs to the transaction log over 60,000 total writes. And that is backed up by mostly correct numbers if we look at perfmon counters as well. So, let’s frame this up a little bit more nicely.

If we look at log bytes flushed, there were 247 million total, or about 24 million a second. And if we look at the log flushes, we have about 60,000 total flushes, or about 6,000 flushes per second. And that lines up pretty well: the 247 million bytes is pretty close to 235 megs, and 60,000 log flushes is pretty close to 60,000 total writes to the transaction log. So that might be fine, right? You might be doing this at a time when it doesn’t matter if your loop runs for seven seconds. It just might not be a big deal.
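Before moving on, here’s a rough sketch of what option one looks like in code. The dbo.loop_junk table and the shape of the loop are stand-ins I made up for this example, not the actual demo script; the point is just that every statement auto-commits on its own.

/*A made-up scratch table and loop, just to show the shape of option one*/
CREATE TABLE
    dbo.loop_junk
(
    id integer PRIMARY KEY,
    stuff nvarchar(20) NOT NULL
);

DECLARE
    @i integer = 1;

WHILE @i <= 10000
BEGIN
    /*one auto-commit*/
    INSERT
        dbo.loop_junk (id, stuff)
    VALUES
        (@i, N'new');

    /*another auto-commit*/
    UPDATE lj
        SET lj.stuff = N'changed'
    FROM dbo.loop_junk AS lj
    WHERE lj.id = @i;

    /*and another*/
    DELETE lj
    FROM dbo.loop_junk AS lj
    WHERE lj.id = @i - 1;

    SELECT @i += 1;
END;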

That’s okay. But if you’re like me and you often need to tune processes like this, you might be looking at other ways to improve upon this. One way you can do that is by wrapping all three of the modifications into an explicit transaction. So up here we have a begin tran, and down here we have a commit. So rather than having each insert, update, and delete auto-commit when it runs, we’re going to make them commit as a group when each pass through the loop finishes.
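Against that same made-up dbo.loop_junk table, option two just moves the commit boundary, something like this:

DECLARE
    @i integer = 1;

WHILE @i <= 10000
BEGIN
    BEGIN TRANSACTION;

        INSERT
            dbo.loop_junk (id, stuff)
        VALUES
            (@i, N'new');

        UPDATE lj
            SET lj.stuff = N'changed'
        FROM dbo.loop_junk AS lj
        WHERE lj.id = @i;

        DELETE lj
        FROM dbo.loop_junk AS lj
        WHERE lj.id = @i - 1;

    /*one commit, roughly one log flush, per pass instead of three*/
    COMMIT TRANSACTION;

    SELECT @i += 1;
END;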

So let’s highlight this code. And let’s come over here and kick off SP pressure detector. And let’s come back over here and run, oh, not you, run this. And this should finish in about between two and four seconds. We got two seconds on that one. Things seem to be finishing on the low end when I run them here. And what we’re going to see is that we cut everything down to about a third of what it was before.

So coming up here and looking at the total megs written, that’s just about 80, which is just about 30% of 250, whatever it was before. And the total write count is about 20,000, which is about 30% of the 60,000 that it was before. Something like that.

20, 40, 30, 33 and a third. 20, 40, 60, 33, something. 33 point infinite threes. And if we look down in the perfmon stats section at the same counters that we looked at before, if you look at log flush bytes, it’s about 1.6 bajillion.

And really what we’re looking at over here is the total difference, which is about 83 million, and about 8.3 million a second. So that’s all coming down to about a third of what it was. And the same thing we’re going to see here for the log flushes a second, where that’s at about 20,000 total and about 2,000 per second.

So before this was 60,000 and 6,000. Before this was, you know, 240, whatever bajillion. So that’s one way of doing it.

One way that I’ve found of making this even better is by not making every single loop through a transaction that commits, but conditionally committing the transactions based on something. And in this case, excuse me, the something that I’m using, it looks like this.

So we have, at the very top, a begin transaction. And at the very bottom, we have a commit transaction. But in the middle, every time, get in there, every time the ID value modulo 1,000 is equal to 0, so basically every 1,000 loops through, we’re going to commit the transaction and begin a new transaction.
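Still against the made-up dbo.loop_junk table, a sketch of that shape looks like this. Note the commit and fresh begin every 1,000 passes, plus the final commit after the loop is done.

DECLARE
    @i integer = 1;

BEGIN TRANSACTION;

WHILE @i <= 10000
BEGIN
    INSERT
        dbo.loop_junk (id, stuff)
    VALUES
        (@i, N'new');

    UPDATE lj
        SET lj.stuff = N'changed'
    FROM dbo.loop_junk AS lj
    WHERE lj.id = @i;

    DELETE lj
    FROM dbo.loop_junk AS lj
    WHERE lj.id = @i - 1;

    /*only commit every 1,000 passes, then start a fresh transaction*/
    IF @i % 1000 = 0
    BEGIN
        COMMIT TRANSACTION;
        BEGIN TRANSACTION;
    END;

    SELECT @i += 1;
END;

/*commit whatever the last partial batch left open*/
COMMIT TRANSACTION;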

All right, so it’s very important that you do this, and it’s very important that you do this and this. All right, cool. So let’s get this highlighted.

It’s a little bit more verbosity to the code. Let’s start this running, and now let’s run this. And that didn’t take two seconds.

That took no seconds. That was very fast, right? Pretty good, I think, anyway, at least if you’re into that sort of thing. And if you look at what happened with regards to Perfmon, we wrote a total of 10 megs to the transaction log, over 180 writes.

That’s a little cut off over there. And if we look down at the Perfmon counters, and let’s frame this up a little bit more nicely, we have for log flushed bytes a second, we are down to 10 million there, or 1 million a second.

And if we look at the log flushes a second for our database, we are down to 180, or about 18 per second. So that lines up, the 180 there lines up with the 180 there, and the 10 megs there lines up with the total difference there.

So everything kind of agrees that SQL Server writes to the log more efficiently when you, you know, A, like, don’t use, like, the auto-commit transactions if you’re doing multiple modifications in a loop.

And if you do an explicit transaction that encompasses all of the modifications in the loop, you’ll do better. And then if you change your code a little bit so that you control exactly the sort of cadence of commits to the transaction log, you can do even better.

Again, this is kind of a rare thing, but I do see it often enough that I find this to be a useful tactic to speed things up. So if you’re ever looking at code that’s running in a loop, whether it’s a cursor or a while loop or any other sort of construct that might loop over things, and you’re like, there are just not enough hours remaining in my life for me to write this as a set-based solution, you might consider using one of these techniques, either wrapping all of the modifications into a single transaction or controlling the cadence of transaction commits and begins and stuff to speed that up.

So, an admittedly quick video this morning because I have stuff to do soon, and I got to do those things. So, thank you for watching.

I hope you learned something. I hope you enjoyed yourselves. If you like this video, even early in the morning, I am clearly a bleary-eyed individual, thumbs up is a good way to say thank you.

Leaving a comment that says thank you is even more verbosity. And, of course, if you like this sort of SQL Server performance tuning training content, you should subscribe to my channel so that you get notified every single time I post one of these whiz-bang things, and I promise you that most of them will not be in the morning.

I prefer to work in the dark or something. Anyway, thank you for watching. I need to leave.

Bye.

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. Blog readers get 25% off the Everything Bundle — over 100 hours of performance tuning content. Need hands-on help? I offer consulting engagements from targeted investigations to ongoing retainers. Want a quick sanity check before committing to a full engagement? Schedule a call — no commitment required.

Who Made That Change? Low Rent User Auditing Using Temporal Tables

I Don’t Find This Stuff Fun


ED: I moved up this post’s publication date after Mr. O posted this question. So, Dear Brent, if you’re reading this, you can consider it my humble submission as an answer.

It’s really not up my alley. I love performance tuning SQL Server, but occasionally things like this come up.

Sort of recently, a client really wanted a way to figure out if support staff was manipulating data in a way that they shouldn’t have. Straight away: this method will not track if someone is inserting data, but inserting data wasn’t the problem. Data changing or disappearing was.

The upside of this solution is that not only will it detect who made the change, but also what data was updated and deleted.

It’s sort of like auditing and change data capture or change tracking rolled into one, but without all the pesky stuff that comes along with auditing, change tracking, or change data capture (though change data capture is probably the least guilty of all the parties).

Okay, so here are the steps to follow. I’m creating a table from scratch, but you can add all of these columns to an existing table to get things working too.

Robby Tables


First, we create a history table. We need to do this first because there will be computed columns in the user-facing tables.

/*
Create a history table first
*/
CREATE TABLE
    dbo.things_history
(
    thing_id int NOT NULL,
    first_thing nvarchar(100) NOT NULL,
    original_modifier sysname NOT NULL, 
        /*original_modifier is a computed column below, but not computed here*/
    current_modifier sysname NOT NULL, 
        /*current_modifier is a computed column below, but not computed here*/
    valid_from datetime2 NOT NULL,
    valid_to datetime2 NOT NULL,
    INDEX c_things_history CLUSTERED COLUMNSTORE
);

I’m choosing to store the temporal data in a clustered columnstore index to keep it well-compressed and quick to query.

Next, we’ll create the user-facing table. Again, you’ll probably be altering an existing table to add the computed columns and system versioning columns needed to make this work.

/*Create the base table for the history table*/
CREATE TABLE
    dbo.things
(
  thing_id int
      CONSTRAINT pk_thing_id PRIMARY KEY,
  first_thing nvarchar(100) NOT NULL,
  original_modifier AS /*a computed column, computed*/
      ISNULL
      (
          CONVERT
          (
              sysname,
              ORIGINAL_LOGIN()
          ),
          N'?'
      ),
  current_modifier AS /*a computed column, computed*/
      ISNULL
      (
          CONVERT
          (
              sysname,
              SUSER_SNAME()
          ),
          N'?'
      ),
  valid_from datetime2
      GENERATED ALWAYS AS
      ROW START HIDDEN NOT NULL,
  valid_to datetime2
      GENERATED ALWAYS AS
      ROW END HIDDEN NOT NULL,
  PERIOD FOR SYSTEM_TIME
  (
      valid_from,
      valid_to
  )
)
WITH
(
    SYSTEM_VERSIONING = ON  
    (
        HISTORY_TABLE = dbo.things_history,
        HISTORY_RETENTION_PERIOD = 7 DAYS
    )
);

A couple things to note: I’m adding the two computed columns as non-persisted, and I’m adding the system versioning columns as HIDDEN, so they don’t show up in user queries.

The WITH options at the end specify which table we want to use as the history table, and how long we want to keep data around for. You may adjust as necessary.

I’m tracking both the ORIGINAL_LOGIN() and the SUSER_SNAME() details in case anyone tries to change logins after connecting to cover their tracks.

Inserts Are Useless


Let’s stick a few rows in there to see how things look!

INSERT
    dbo.things
(
    thing_id,
    first_thing
)
VALUES
    (100, N'one'),
    (200, N'two'),
    (300, N'three'),
    (400, N'four');

Okay, like I said, inserts aren’t tracked in the history table, but they are tracked in the main table.

If I do this:

EXECUTE AS LOGIN = N'ostress';
INSERT
    dbo.things
(
    thing_id,
    first_thing
)
VALUES
    (500, N'five'),
    (600, N'six'),
    (700, N'seven'),
    (800, N'eight');

And then run this query:

SELECT
    table_name =
        'dbo.things',
    t.thing_id,
    t.first_thing,
    t.original_modifier,
    t.current_modifier,
    t.valid_from,
    t.valid_to
FROM dbo.things AS t;

The results won’t make a lot of sense. Switching back and forth between the sa and ostress users, the original_modifier column will always say sa, and the current_modifier column will always show whichever login I’m currently using.

You can’t persist either of these columns, because the functions are non-deterministic. In this way, SQL Server is protecting you from yourself. Imagine maintaining those every time you run a different query. What a nightmare.

The bottom line here is that you get no useful information about inserts, nor do you get any useful information just by querying the user-facing table.

Updates And Deletes Are Useful


Keeping my current login as ostress, let’s run these queries:

UPDATE 
    t
SET 
    t.first_thing =
        t.first_thing +
        SPACE(1) +
        t.first_thing
FROM things AS t
WHERE t.thing_id = 100;

UPDATE 
    t
SET 
    t.first_thing =
        t.first_thing +
        SPACE(3) +
        t.first_thing
FROM things AS t
WHERE t.thing_id = 200;

DELETE
    t
FROM dbo.things AS t
WHERE t.thing_id = 300;

DELETE
    t
FROM dbo.things AS t
WHERE t.thing_id = 400;

Now, along with looking at the user-facing table, let’s look at the history table as well.

To show that the history table maintains the correct original and current modifier logins, I’m going to switch back to executing this as sa.
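The query against the history table mirrors the earlier one against dbo.things, just pointed at dbo.things_history instead; it looks roughly like this:

/*Undo the earlier EXECUTE AS so we're back to sa*/
REVERT;

SELECT
    table_name =
        'dbo.things_history',
    th.thing_id,
    th.first_thing,
    th.original_modifier,
    th.current_modifier,
    th.valid_from,
    th.valid_to
FROM dbo.things_history AS th;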

sql server query results
peekaboo i see you!

Alright, so here’s what we have now!

In the user-facing table, we see the six remaining rows (we deleted 300 and 400 up above), with the values in first_thing updated a bit.

Remember that the _modifier columns are totally useless here, because they’re calculated on the fly every time.

We also have the history table with some data in it finally, which shows the four rows that were modified as they existed before, along with the user as they logged in, and the user as the queries were executed.

This is what I would brand “fairly nifty”.

FAQ


Q. Will this work with my very specific login scenario?

A. I don’t know.

 

Q. Will this work with my very specific set of permissions?

A. I don’t know.

 

Q. But what about…

A. I don’t know.

I rolled this out for a fairly simple SQL Server on-prem setup with very little insanity as far as login schemes, permissions, etc.

You may find edge cases where this doesn’t work, or it may not even work for you from the outset because it doesn’t track inserts.

With sufficient testing and moxie (the intrinsic spiritual spark, not the sodie pop) you may be able to get it to work under your spate of local factors that break the peace of my idyllic demo.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. Blog readers get 25% off the Everything Bundle — over 100 hours of performance tuning content. Need hands-on help? I offer consulting engagements from targeted investigations to ongoing retainers. Want a quick sanity check before committing to a full engagement? Schedule a call — no commitment required.

Join me In Boston May 10 For A Full Day Of SQL Server Performance Tuning Training

Spring Training


This May, I’ll be presenting my full day training session The Foundations Of SQL Server Performance Tuning.

All attendees will get free access for life to my SQL Server performance tuning training. That’s about 25 hours of great content.

Get your tickets here for this event, taking place Friday, May 10th 2024 at the Microsoft Offices in Burlington.

Here’s what I’ll be presenting:

The Foundations Of SQL Server Performance Tuning

Session Abstract:

Whether you want to be the next great query tuning wizard, or you just need to learn how to start solving tough business problems at work, you need a solid understanding of not only what makes things fast, but also what makes them slow.

I work with consulting clients worldwide fixing complex SQL Server performance problems. I want to teach you how to do the same thing using the same troubleshooting tools and techniques I do.

I’m going to crack open my bag of tricks and show you exactly how I find which queries to tune, indexes to add, and changes to make. In this day long session, you’re going to learn about hardware, query rewrites that work, effective index design patterns, and more.

Before you get to the cutting edge, you need to have a good foundation. I’m going to teach you how to find and fix performance problems with confidence.

Event Details:

Get your tickets here for this event!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. Blog readers get 25% off the Everything Bundle — over 100 hours of performance tuning content. Need hands-on help? I offer consulting engagements from targeted investigations to ongoing retainers. Want a quick sanity check before committing to a full engagement? Schedule a call — no commitment required.

Indexing SQL Server Queries For Performance: Equality vs. Inequality Searches

Big, Bold Flavor


Since I first started reading about indexes, general wisdom has been to design the key of your indexes to support the most restrictive search predicates first.

I do think that it’s a good starting place, especially for beginners, to get acceptable query performance. The problem is that many databases end up designed with some very non-selective columns that are required for just about every query:

  • Soft deletes, where most rows are not deleted
  • Status columns, with only a handful of potential entries

Leaving the filtered index question out for the moment, I see many tables indexed with the “required” columns as the first key column, and then other (usually) more selective columns further along in the key. While this by itself isn’t necessarily a bad arrangement, I’ve seen many local factors lead to it contributing to bad performance across the board, with no one being quite sure how to fix it.

In this post, we’ll look at both an index change and a query change that can help you out in these situations.

Schema Stability


We’re going to start with two indexes, and one constraint.

CREATE INDEX
    not_posts
ON dbo.Badges
    (Name, UserId)
WITH
    (SORT_IN_TEMPDB = ON, DATA_COMPRESSION = PAGE);

CREATE INDEX
    not_badges
ON dbo.Posts
    (PostTypeId, OwnerUserId)
INCLUDE
    (Score)
WITH
    (SORT_IN_TEMPDB = ON, DATA_COMPRESSION = PAGE);

ALTER TABLE
    dbo.Posts
ADD CONSTRAINT
    c_PostTypeId
CHECK
(
      PostTypeId > 0 
  AND PostTypeId < 9
);
GO

The index and constraint on the Posts table are the most important. In this case, the PostTypeId column is going to play the role of our non-selective leading column that all queries “require” be filtered to some values.

You can think of it mentally like an account status, or payment status column. All queries need to find a particular type of “thing”, but what else the search is for is up to the whims and fancies of the developers.

A Reasonable Query?


Let’s say this is our starting query:

SELECT
    DisplayName =
        (
            SELECT
                u.DisplayName
            FROM dbo.Users AS u
            WHERE u.Id = b.UserId
        ),
    ScoreSum = 
        SUM(p.Score)
FROM dbo.Badges AS b
CROSS APPLY
(
    SELECT
        p.Score,
        n =
            ROW_NUMBER() OVER
            (
                ORDER BY
                    p.Score DESC
            )
    FROM dbo.Posts AS p 
    WHERE p.OwnerUserId = b.UserId
    AND   p.PostTypeId < 3
) AS p
WHERE p.n = 0
AND   b.Name IN (N'Popular Question')
GROUP BY
    b.UserId;

Focusing in on the CROSS APPLY section where the Posts table is queried, our developer has chosen to look for PostTypeIds 1 and 2 with an inequality predicate. Doing so yields the following plan, featuring an Eager Index Spool as the villain.

sql server query plan
i came to drop crumbs

SQL Server decided to scan our initial index and create a new one on the fly, putting the OwnerUserId column first, and the Score column second in the key of the index. That’s the reverse of what we did.

Leaving aside all the icky internals of Eager Index Spools, one can visually account for about 20 full seconds of duration spent on the effort.

Query Hints To The Rescue?


I’ve often found that SQL Server’s query optimizer is just out to lunch when it chooses to build an Eager Index Spool, but in this case it was the right choice.

If we change the query slightly to use a hint (FROM dbo.Posts AS p WITH(FORCESEEK)) we can see what happens when we use our index the way Codd intended.

It is unpleasant. I allowed the query to execute for an hour before killing it, not wanting to run afoul of my laptop’s extended warranty.

The big problem of course is that for each “seek” into the index, we have to read the majority of the rows across two boundaries (PostTypeId 1 and PostTypeId 2). We can see that using the estimated plan:

sql server query plan
in this case, < 3 is not a heart.

Because our seek crosses range boundaries, the predicate on OwnerUserId can’t be applied as an additional Seek predicate. We’re left applying it as a residual predicate, once for PostTypeId 2, and once for PostTypeId 1.

The main problem is, of course, that those two ranges encompass quite a bit of data.

+------------+------------+
| PostTypeId |    count   |
+------------+------------+
|          2 | 11,091,349 |
|          1 |  6,000,223 |
|          4 |     25,129 |
|          5 |     25,129 |
|          3 |        167 |
|          6 |        166 |
|          7 |          4 |
|          8 |          2 |
+------------+------------+

11 million rows for 2, and 6 million rows for 1.

Changing The Index


If you have many ill-performing queries, you may want to consider changing the order of key columns in your index to match what would have been spooled:

CREATE INDEX
    not_badges_x
ON dbo.Posts
    (OwnerUserId, PostTypeId)
INCLUDE
    (Score)
WITH
    (SORT_IN_TEMPDB = ON, DATA_COMPRESSION = PAGE);
GO

This gets rid of the Eager Index Spool, and also the requirement for a FORCESEEK hint.

sql server query plan
satisfaction

At this point, we may need to contend with the Lazy Table Spool in order to get across the finish line, but we may also consider getting a query from ~30 seconds down to ~4 seconds adequate.

Of course, you may just have one query suffering this malady, so let’s look at a query rewrite that also solves the issue.

Optimizer Inflexibility


SQL Server’s query optimizer, for all its decades of doctors and other geniuses working on it, heavily laden with intelligent query processing features, still lacks some basic capabilities.

With a value constraint on the table telling the optimizer that all data in the column falls between the number 1 and 8, it still can’t make quite a reasonable deduction: Less than 3 is the same thing as 1 and 2.

Why does it lack this sort of simple knowledge that could have saved us so much trouble? I don’t know. I don’t even know who to ask anymore.

But we can do it! Can’t we? Yes! We’re basically optimizer doctors, too.

With everything set back to the original two indexes and check constraint, we can rewrite the where clause from PostTypeId < 3 to PostTypeId IN (1, 2).

If we needed to take extraordinary measures, we could even use UNION ALL with two queries against the Posts table, each with a single equality predicate, one for 1 and one for 2.
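Here’s a simplified, standalone sketch of both rewrites. The hard-coded owner id is just a placeholder to keep the example self-contained; in the real query, the predicate correlates on b.UserId inside the CROSS APPLY.

/*A placeholder user id, just for illustration*/
DECLARE
    @OwnerUserId integer = 22656;

/*Rewrite one: two equality values instead of a range*/
SELECT
    p.Score
FROM dbo.Posts AS p
WHERE p.OwnerUserId = @OwnerUserId
AND   p.PostTypeId IN (1, 2);

/*Rewrite two: UNION ALL, with a single equality predicate per branch*/
SELECT
    p.Score
FROM dbo.Posts AS p
WHERE p.OwnerUserId = @OwnerUserId
AND   p.PostTypeId = 1

UNION ALL

SELECT
    p.Score
FROM dbo.Posts AS p
WHERE p.OwnerUserId = @OwnerUserId
AND   p.PostTypeId = 2;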

Doing this brings query performance to just about equivalent with the index change:

sql server query plan
good and able

The main upside here is the ability for the optimizer to provide us a query plan where there are two individual seeks into the Posts table, one for PostTypeId 1, with an additional seek to match OwnerUserId, and then one additional seek for PostTypeId 2, with an additional seek to match OwnerUserId.

sql server query plan
coveted

This isn’t always ideal, of course, but in this case it gets the job fairly well done.

Plan Examiner


Understanding execution plans is sometimes quite a difficult task, but learning what patterns to look for can save you a lot of standing about gawping at irrelevancies.

The more difficult challenge is often taking what you see in an execution plan, and knowing what options you have available to adjust them for better performance.

In some cases, it’s all about establishing better communication with the optimizer. In this post, I used a small range (less than 3) as an example. Many dear and constant readers might find the idea that someone would write that over a two value IN clause ridiculous, but I’ve seen it. I’ve also seen it in more reasonable cases for much larger ranges.

It’s good to understand that the optimizer doesn’t have infinite tricks available to interpret your query logic into the perfect plan. Today we saw that it was unable to change < 3 to = 1 OR = 2, and you can bet there are many more such reasonable simplifications that it can’t apply, either.

Anyway, good luck out there. If you need help with these things, the links in the below section can help you get it from me.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. Blog readers get 25% off the Everything Bundle — over 100 hours of performance tuning content. Need hands-on help? I offer consulting engagements from targeted investigations to ongoing retainers. Want a quick sanity check before committing to a full engagement? Schedule a call — no commitment required.

Why Read Committed Queries Can Still Return Bad Results In SQL Server




Thanks for watching!

Video Summary

In this video, I delve into some fascinating insights about the read committed isolation level in SQL Server, which I wish I had known much earlier in my career. Specifically, I explore how this isolation level can lead to inconsistent query results and even violate unique constraints, all while running under what seems like a simple “read committed” setting. By walking through practical demos, I highlight these quirks using tables and stored procedures that mimic real-world scenarios. Through these examples, you’ll see firsthand how queries can get blocked on certain rows, leading to incomplete or misleading results when the transaction finally commits. Additionally, I explain why snapshot isolation levels are generally a better fit for most workloads, emphasizing the importance of choosing the right isolation level based on your specific needs.

Full Transcript

Erik Darling here with Darling Data, slightly off screen when I do this, with my big hands. And today’s video… Bum, bum, bum, bum, bum, bum, bum. I got a drum solo on that one. Today’s video, I’m going to teach you about a couple things that I learned. Admittedly, I wish I had learned them much earlier in my SQL Server career, because they would have answered a lot of questions that I had about weird stuff that I saw in query results. Not query execution plans, query results. I don’t know how many of you actually care, but I started off my SQL Server career at a market research company. And my first SQL Server task was sort of automating loading Excel files of data into SQL Server for an auto dialer. Well, actually, not an auto dialer. You had to physically dial the numbers. But just a list of people who companies wanted to contact to see how satisfied they were with things, or if they’d be interested in this new product, usually a credit card. And from there, I ended up writing reports and stuff. And people would run reports. They’d be like, data in these reports is wrong. They’d be like, don’t know how. They’d also be like, this report ran for kind of a long time. And what it turned out to be a lot of the time was that the databases in question were using read committed, the pessimistic isolation level. And project managers, you know, demanding squirrel brains that they are, would be running reports constantly while people were dialing and getting respondent results in. So not only were report queries getting blocked, but they were getting blocked in weird places that would make results look weird, wrong, incorrect. They’d be like, this doesn’t tally up to that. And this doesn’t tally up over here. We’re not confident in here. Our confidence interval is very low. And it wasn’t until I had moved on from there to other DBA developer type jobs where the applications were just… I didn’t use NOLOCK. I would never. But I ran into other applications that used NOLOCK quite heavily, which had its own set of problems. But at least, you know, no one was just like, my query’s blocked. Which, you know, I guess, I guess you take what you can get. But, you know, people would still complain about result inconsistency. And I’d be like, well, the application uses NOLOCK. Talk them out of it, right?

Like call up support and be like, stop the damn NOLOCKs. Stop the count. So what I’m going to show you today is a couple funny things that can happen under read committed, the pessimistic isolation level, that I learned from, I forget if I already said this, reading blog posts by a lovely fellow named Craig Freedman, who, I don’t know, I think in a weird way, I don’t want to make him feel old if he ever sees this, but he’s like the grandfather of SQL Server blogs.

Because he works at Microsoft and he worked closely on, worked, worked, worked, I’m not sure what he currently does, but worked, at least as far as I know, maybe still works, closely on SQL Server, and would write a lot about it because he knew a lot about it. And sharing is caring.

So let’s look at a couple funny things that I learned from Mr. Freedman about read committed, the pessimistic isolation level, that I have turned into sort of my own brand of demo. All right. So the first thing we’re going to do is create a table, well, a couple of tables.

First, we’re going to drop if exists a couple of tables. We’re going to get rid of a couple of these things. Then we’re going to create a table called consultants that has a primary key on consultant ID, a first name and a last name.

And we’re going to insert one row for me, Mr. Erik Darling. That’s me, Darling Data. And then we’re going to create a table called clients that has an invoice ID, a consultant ID with a foreign key that references the consultant table.

Consultants table. Even though there’s one, it’s not more than one. You know, hope burns eternal.

Perhaps someday I will grow and hire and contribute meaningfully to this American economy. For now, I mostly just contribute to bartenders’ rent. So we’re going to create this table.

We’re going to put two rows in there. And as, you know, typical consultants do, we have reached nearly the integer max for this invoice amount. So this is how much these invoices are worth, which is clearly why I hang out in an Adidas t-shirt all day, because I make this much money in invoices.

Why I still have to record these videos. With all my largesse around, I choose to record YouTube videos that a few hundred people appreciate. So we’re going to create this table, put a couple rows in there.
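If you want to follow along, the tables look roughly like this. The column names and the exact invoice amounts are my reconstruction from what’s described above, not the verbatim demo script.

CREATE TABLE
    dbo.consultants
(
    consultant_id integer
        CONSTRAINT pk_consultants PRIMARY KEY,
    first_name nvarchar(40) NOT NULL,
    last_name nvarchar(40) NOT NULL
);

INSERT
    dbo.consultants (consultant_id, first_name, last_name)
VALUES
    (1, N'Erik', N'Darling');

CREATE TABLE
    dbo.clients
(
    invoice_id integer
        CONSTRAINT pk_clients PRIMARY KEY,
    consultant_id integer NOT NULL
        CONSTRAINT fk_clients_consultants
            REFERENCES dbo.consultants (consultant_id),
    invoice_amount integer NOT NULL
);

INSERT
    dbo.clients (invoice_id, consultant_id, invoice_amount)
VALUES
    (1, 1, 2147483606),
    (2, 1, 2147483606);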

And just to show you what this looks like initially, these are the query results. So we hit control and one. There we go.

Zoom it is responsive. This is what we have. Two rows for me, Erik Darling. Two invoice IDs. My consultant ID, because it’s in both tables. So you see all the columns from both tables in there.

And an invoice amount. And I didn’t spell anything wrong. That’s great. All right. So let’s get out of here. Zoom it is, will become responsive again.

There we go. Okay. Booper reel.

And now what we’re going to do is make sure that we have the right thing in here. We do. We have the right query in there. We are highly skilled, trained professionals here at Darling Data. And if I run this query, we’re going to see the same results that we just saw.

So this is all good here. Now, let’s say that we have a store procedure or a batch of crap stored some other way that runs. It begins a transaction.

It does an update. And, you know, it’s hard to replicate concurrency as one person. So begin tran is a casual exercise, and it sort of stands in for two people at two computers trying to do two things at the same time.

All right. So begin tran is a good way to say, hey, I started doing something, and you just started doing something at exactly the same time. Even though it’s not exactly precisely the same time, because I’ve been babbling for a little bit.

But if you come over here and run this query now. And, again, this is read committed, the pessimistic isolation level. This query is going to get blocked. And it’s going to get blocked because the second invoice in this table is locked, because we’re updating it.

Now, the query in the other window got blocked on this invoice ID. It’s already read the first row, the first invoice ID. All right.
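In rough sketch form, against those reconstructed tables, the two windows look like this:

/*Window one: start a transaction, update the second invoice, and
  leave the transaction open for now*/
BEGIN TRANSACTION;

UPDATE c
    SET c.invoice_amount += 1
FROM dbo.clients AS c
WHERE c.invoice_id = 2;

/*Window two: under read committed, this reads invoice 1, then blocks
  on invoice 2 until the transaction above finally commits*/
SELECT
    c.invoice_id,
    c.consultant_id,
    c.invoice_amount,
    con.first_name,
    con.last_name
FROM dbo.clients AS c
JOIN dbo.consultants AS con
    ON con.consultant_id = c.consultant_id;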

So now let’s say that something else happened. And Erik Darling, wedding bells rang. And Erik Darling married the data and became Mr. Darling data. All right.

So, you know, don’t congratulate me yet. We’re still working out the prenup. But let’s just say this happened. All right. I’m now a Mr. Erik Darling data. And run that update.

And if we look at the results after the update for a query that’s not going to be blocked, we’re going to see correct results. All right.

So we’re going to see that this nice client over here decided to give me a tip. All right. They gave me an extra dollar. All right. So this ends in 07. This ends in 06. And Erik Darling’s recent marriage to the data has gone through.

We worked out the prenup. Everything’s good. All right. So that’s over here. Now, if we hit commit on this, all right, and I’m going to hit this twice just to make sure, right?
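Putting the rest of the writer session together, it looks something along these lines. Again, this is reconstructed from the description; whether the name change runs inside the same transaction is an assumption on my part.

/* Session 1, still inside the open transaction: the marriage goes through */
UPDATE dbo.consultants
SET last_name = N'Darling Data'
WHERE consultant_id = 1;

/* Reading from this session shows the new name and the tipped invoice,
   because a session can always see its own uncommitted changes */

COMMIT TRANSACTION;

/* Once the commit releases the locks, the blocked query in session 2
   finishes. With a plan that reads the consultant row once per invoice,
   the result shows invoice 1 with 'Darling' (read before the update)
   and invoice 2 with 'Darling Data' and the new amount. */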

So that says we actually committed it. And this says, oh, you’re all out of transactions now. Your ATM card is declined. And we come look over here. We’re going to see a conflicting vision of the world.

It is going to look like Erik Darling might be trifling a little bit. He might be pretending to be single with some clients and pretending to be married with other clients.

What happened? Are you a cheater, Erik Darling? So this is one type of inconsistency that can happen under read committed, the pessimistic isolation level.

Your query can read some rows, get blocked, not read some other rows. And then when your query is done, it’s going to look a bit half finished, isn’t it? Right?

Now, I suppose something like this could also happen under NOLOCK, right? Under read uncommitted, you could catch that transaction in mid-flight and maybe not see everything that you’re supposed to see, or see too much, or see too little.

But this is just read committed. This is what SQL Server databases operate under all day, every day. This is not like read committed snapshot isolation, which you have to enable. Also, this wouldn’t happen under read committed snapshot isolation, the optimistic isolation level, because you would have just read the last known good version of the row.

So both rows, in this case, would have just said Darling, which is probably at least a little bit less suspicious than having an Erik Darling and an Erik Darling slash Data. If you saw this, you might think someone did some wrong data entry. Damn your eyes.
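Since RCSI is a database-level option you have to turn on, here’s what flipping it looks like, with a made-up database name. You’d want to do this during a quiet moment, since it needs to clear out other active sessions to take effect.

ALTER DATABASE YourDatabase
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE; /* rolls back open transactions so the option can flip right away */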

So that’s one thing that can happen. Another thing that can happen, and this could happen with a single table too.

I could do a demo of that, but I think what’s more interesting is that under read committed, the pessimistic isolation level, it can look like SQL Server did not honor a unique constraint.

So if we look at this table definition here, let me unhighlight this. That looks terribly ugly. I hate when I do that.

If you look at this table here, butthead is a unique column. It has a unique constraint on it. Right? So we’re not allowed to have duplicate values in butthead. Only one of each value in butthead at a time.

So let’s do this. And let’s load three rows into Beavis and butthead that would not violate that unique constraint. And just to kind of show you a little bit, just to sort of validate that this is a truly unique constraint, let’s try to insert this row.

And we’ll get an error there. Right? So we’re not allowed to have a duplicate value here. Right?

Our unique key constraint. So this fails. Right? So let’s make a little note in here. Oh, no. Oh, no. This failed. What are we going to do?

All right? That’s not really the point of the demo. The point of the demo is if we stick this query over in this window and we run this, we’re going to see three complete rows in here, each with its own unique little noise in butthead. Very funny.
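Here’s the shape of that setup. The table name, column names, and the three values are stand-ins I’ve made up to match the description.

DROP TABLE IF EXISTS dbo.beavis;

CREATE TABLE dbo.beavis
(
    id integer NOT NULL PRIMARY KEY,
    butthead varchar(10) NOT NULL,
    CONSTRAINT only_one_of_each_butthead UNIQUE (butthead)
);

/* three rows that don't violate the unique constraint */
INSERT dbo.beavis (id, butthead)
VALUES (1, 'ha'), (2, 'ho'), (3, 'hi');

/* this one fails with a duplicate key error, proving the constraint is real */
INSERT dbo.beavis (id, butthead)
VALUES (4, 'ha');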

Very funny. All right. So what we’re going to do is we’re going to come over here, or we’re going to come down here, rather.

We’re going to begin a transaction and we’re going to run that. And then we’re going to come back over here and we’re going to run this. And then we’re going to play an old whoopsie-daisy on these other two rows. So we’re going to set butthead to he for row one.

And we’re going to set butthead to ha for row three. All right? So if we come look at this, all is right with the world.

We got a he, a ho, and a ha in butthead. All unique values. Our unique key is alive and well. The world is at peace.

There’s no more hunger, no more crime, no more want for anything. And then we’re going to do what we did before, and we’re going to commit this transaction fully here.
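End to end, the writer session looks something like this. The statement that takes the initial lock is my guess at how the demo keeps the reader waiting; the switcheroo updates match what’s described above.

/* Writer session */
BEGIN TRANSACTION;

/* lock a row the reader hasn't gotten to yet, so the reader's scan
   reads id 1 ('ha') and then waits here */
UPDATE dbo.beavis
SET butthead = butthead
WHERE id = 2;

/* now the switcheroo */
UPDATE dbo.beavis SET butthead = 'he' WHERE id = 1;
UPDATE dbo.beavis SET butthead = 'ha' WHERE id = 3;

/* from this session the table reads he, ho, ha: still unique */

COMMIT TRANSACTION;

/* the blocked reader now carries on, reads rows 2 and 3 after the
   commit, and its result set shows 'ha' twice, even though the column
   was unique at every single point in time */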

And we’ll just make sure that really, really committed. And now we’re going to come look at the results in this window. And we are going to see that butthead somehow has two ha’s. How did butthead get two ha’s in there?

It’s a unique constraint. The answer is, the data just did a little switcheroo. Because this query, under read committed, read this row, found a ha, got blocked here, and waited until that transaction committed, it saw the table at two different points in time.

Now, this goes back to something that I have to keep explaining to a lot of people about read committed, the pessimistic isolation level, and that it is not a point in time snapshot of your data.

And you can gauge that mentally by the fact that it does not have snapshot in the name. Snapshot isolation has snapshot in the name. Check.

Read committed snapshot isolation has snapshot in the name. Check. Read committed? No snapshot. X. Your queries chug through data, get blocked, and they can read stale data, miss rows, double count rows, and do all sorts of other wacky stuff.

Right? Lots of bad things can happen under read committed that a lot of people aren’t aware of. So whenever I’m talking to someone about using an optimistic isolation level, I have to explain these things: that read committed is not the ultimate promise of pure data that it seems to be.

Read committed is not this bastion of correctness that everyone thinks it is. It is not a snapshot of your data at a point in time. It is the most recent version of your data as of when your query read those rows. Now, granted, that’s what you need sometimes.

Right? If you want to make sure you read the most recent version of a row, fine: get blocked, and then go read it. Just be aware that while you’re waiting on that one row, other rows can get deleted, and other rows can get updated, all around your query.

Everything moving all around this query can happen while it’s blocked. And then when this thing finishes, it’ll go try to read the rest of the data.

The data you read before the blocking might be out of date by then, and the data you read after the blocking might include changes, or miss rows, that a true point-in-time view of the table would never show.

So the moral of the story is that the snapshot isolation levels, the optimistic isolation levels, are generally a better fit for most workloads. Getting blocked should be reserved for special occasions.

And even if you need your queries to get blocked, read committed is not a very good guarantee of data consistency for a point in time. Right?

So something like repeatable read or serializable, depending on the requirements of the query, might be the necessary isolation level to prevent these kinds of anomalies.
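For reference, those stricter levels are set per session (or per query, with table hints); a minimal sketch:

/* ask for stricter guarantees when a query truly needs them */
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
/* or, stricter still: */
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

BEGIN TRANSACTION;
    /* under repeatable read, rows you've read can't change until you
       commit; serializable also locks the ranges around them, so new
       rows can't sneak into your results either */
    SELECT cl.invoice_id, cl.invoice_amount
    FROM dbo.clients AS cl
    WHERE cl.consultant_id = 1;
COMMIT TRANSACTION;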

So with that being said, I hope you enjoyed yourselves. I hope you learned something. If you like this video, these videos are available to you for a limited time for free. Don’t tell anyone it’s free.

There’s a thumbs up button that you can push that will increase my happiness. Also, there’s a subscribe button, which, for an even shorter amount of time, will make us best friends.

But, you know, I’m already a married man. I’m already Erik Darling Data, so don’t get frisky. All right? And if you subscribe to my channel, you’ll get notifications when I drop these wonderful gems of learned wisdom upon your freckled brow.

So, I think I’ve said “so” 15 times in trying to end this thing. But anyway, thank you for watching. I will see you again in another video, probably tomorrow.

Next week, I am on vacation, so there will not be any recording. But thankfully, due to the magic of WordPress blog scheduling, you will see regular blogging from me all week.

So, let’s all pause and thank the spirits that came before us for inventing WordPress blog scheduling. Maybe whoever invented it is still alive.

I’m not sure. I don’t keep track of WordPress development that closely, because they use MySQL behind the scenes, and I’ll be damned if I’m ever going to care about that one.

It’s a walking Superfund site of a database. But anyway, I’m going to go work on the next set of demos to show you. So, stay tuned.

Keep your pants on. All that other stuff. Okay. We’re good. Thank you for watching.

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. Blog readers get 25% off the Everything Bundle — over 100 hours of performance tuning content. Need hands-on help? I offer consulting engagements from targeted investigations to ongoing retainers. Want a quick sanity check before committing to a full engagement? Schedule a call — no commitment required.

Join Me And Kendra Little At PASS For Two Days Of SQL Server Performance Tuning Precons

Last Year


Kendra and I both taught solo precons, and got to talking about how much easier it is to manage large crowds when you have a little helper with you, and decided to submit two precons this year that we’d co-present.

Amazingly, they both got accepted. Cheers and applause. So this year, we’ll be double-teaming Monday and Tuesday with a couple pretty cool precons.

You can register for PASS Summit here, taking place live and in-person November 4-8 in Seattle.

Here are the details!

Day One: A Practical Guide to Performance Tuning Internals


Whether you’re aiming to be the next great query tuning wizard or you simply need to tackle tough business problems at work, you need to understand what makes a workload run fast– and especially what makes it run slowly.

Erik Darling and Kendra Little will show you the practical way forward, and will introduce you to the internal subsystems of SQL Server with a practical guide to their capabilities, weaknesses, and most importantly what you need to know to troubleshoot them as a developer or DBA.

They’ll teach you how to use your understanding of the database engine, the storage engine, and the query optimizer to analyze problems and identify what is a nothingburger best practice and what changes will pay off with measurable improvements.

With a blend of bad jokes, expertise, and proven strategies, Erik and Kendra will set you up with practical skills and a clear understanding of how to apply these lessons to see immediate improvements in your own environments.

Day Two: Query Quest: Conquer SQL Server Performance Monsters


Picture this: a day crammed with fun, fascinating demonstrations for SQL Server and Azure SQL.

This isn’t your typical training day; this session follows the mantra of “learning by doing,” with a good dose of the unexpected. Think of this as a SQL Server video game, where Erik Darling and Kendra Little guide you through levels of weird query monsters and performance tuning obstacles.

By the time we reach the final boss, you’ll have developed an appetite for exploring the unknown and leveled up your confidence to tackle even the most daunting of database dilemmas.

It’s SQL Server, but not as you know it—more fun, more fascinating, and more scalable than you thought possible.

Going Further


We’re both really excited to deliver these, and have BIG PLANS to have these sessions build on each other so folks who attend both days have a real sense of continuity.

Of course, you’re welcome to pick and choose, but who’d wanna miss out on either of these with accolades like this?

pretty, pretty, pretty, pretty good

You can register for PASS Summit here, taking place live and in-person November 4-8 in Seattle.

See you there!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. Blog readers get 25% off the Everything Bundle — over 100 hours of performance tuning content. I’m also available for consulting if you just don’t have time for that, and need to solve database performance problems quickly. Want a quick sanity check before committing to a full engagement? Schedule a call — no commitment required.

Updates To sp_QuickieStore, sp_HumanEventsBlockViewer, and sp_PressureDetector

Chasing Perfection


We here at Darling Data strive to get things right the first time, but sometimes late nights and tired eyes conspire against us.

The nice thing about using these on a wide variety of SQL Servers in various states of disrepair is that bugs get spotted and sorted pretty quickly.

You can download all of the main SQL Server troubleshooting procedures I use in one convenient file.

Here’s a breakdown of changes you can find in the most recent releases!

sp_QuickieStore


Here’s what got fixed and added in this round of changes:

  • Fixed a bug where execution count and other metrics were being underreported
  • Fixed a bug where checking for AG databases would throw an error in Azure SQLDB and Managed Instance (Thanks to AbhinavTiwariDBA for reporting)
  • Added the IGNORE_DUP_KEY option to a couple temp table primary keys that could potentially see duplicate values when certain parameter combinations are used (Thanks to ReeceGoding for reporting)
  • Added support for displaying plan XML when plans have > 128 nested nodes of XML in them (you can’t open them directly, but you can save and reopen them as graphical plans)
  • Added underscores to the “quotable” search characters, so they can be properly escaped

So now we don’t have to worry about any of that stuff. How nice. How nice for us.

sp_PressureDetector


Here’s what got fixed and added in this round of changes:

  • Fixed an issue in the disk metrics diffing where some data types weren’t explicit
  • Fixed a bunch of math issues in the disk diff, too (it turns out I was missing a useful column, doh!)
  • Fixed a bug in the “low memory” XML where I had left a test value in the final query
  • Added information about memory grant caps from Resource Governor (with a small hack for Standard Edition)

Turns out I’m not great at math, and sometimes I need to think a wee bit harder. Not at 4am, though.

sp_HumanEventsBlockViewer


Here’s what got fixed and added in this round of changes:

  • Added a check for COMPILE locks to the analysis output
  • Added a check for APPLICATION locks to the analysis output
  • Improved the help section to give blocked process report and extended event commands
  • Improved indexing for the blocking tree code recursive CTE
  • Moved contentious object name resolution to an update after the initial insert

The final one was done because when there’s a lot of data in the blocked process report, this query could end up being pretty slow. Why, you might ask? Because calling OBJECT_ID() in the query forces a serial plan.
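The general shape of that change looks something like this. It’s a sketch of the pattern with made-up temp table and column names, not the procedure’s actual code.

/* grab the raw ids first, without calling any system functions,
   so the big insert can still get a parallel plan */
INSERT #blocking (database_id, object_id, blocking_desc)
SELECT bpr.database_id, bpr.object_id, bpr.blocking_desc
FROM #blocked_process_report AS bpr;

/* then resolve names in a separate, much smaller update;
   OBJECT_NAME() and friends force a serial zone, but here they only
   touch the rows we kept */
UPDATE b
SET b.contentious_object = OBJECT_NAME(b.object_id, b.database_id)
FROM #blocking AS b;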

Fun stuff.

Issues and Contributions


If you’d like to report an issue, request or contribute a feature, or ask a question about any of these procedures, please use my GitHub repo. Specifically, check out the contributing guide.

As happy as I am to get emails about things, it makes support difficult for the one-man party that I am. Personal support costs money. Public support is free. Take your pick, there!

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. Blog readers get 25% off the Everything Bundle — over 100 hours of performance tuning content. Need hands-on help? I offer consulting engagements from targeted investigations to ongoing retainers. Want a quick sanity check before committing to a full engagement? Schedule a call — no commitment required.

Join me In Boston May 10 For A Full Day Of SQL Server Performance Tuning Training

Spring Training


This May, I’ll be presenting my full day training session The Foundations Of SQL Server Performance Tuning.

All attendees will get free access for life to my SQL Server performance tuning training. That’s about 25 hours of great content.

Get your tickets here for this event, taking place Friday, May 10th 2024 at the Microsoft Offices in Burlington.

Here’s what I’ll be presenting:

The Foundations Of SQL Server Performance Tuning

Session Abstract:

Whether you want to be the next great query tuning wizard, or you just need to learn how to start solving tough business problems at work, you need a solid understanding of not only what makes things fast, but also what makes them slow.

I work with consulting clients worldwide fixing complex SQL Server performance problems. I want to teach you how to do the same thing using the same troubleshooting tools and techniques I do.

I’m going to crack open my bag of tricks and show you exactly how I find which queries to tune, indexes to add, and changes to make. In this day long session, you’re going to learn about hardware, query rewrites that work, effective index design patterns, and more.

Before you get to the cutting edge, you need to have a good foundation. I’m going to teach you how to find and fix performance problems with confidence.

Event Details:

Get your tickets here for this event!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. Blog readers get 25% off the Everything Bundle — over 100 hours of performance tuning content. Need hands-on help? I offer consulting engagements from targeted investigations to ongoing retainers. Want a quick sanity check before committing to a full engagement? Schedule a call — no commitment required.

Lookup Costing Is Really Weird In SQL Server

Lookup Costing Is Really Weird In SQL Server



Thanks for watching!

Video Summary

In this video, I delve into a peculiar aspect of SQL Server’s query optimizer behavior related to lookup costing between heaps and clustered indexes. Erik Darling from Darling Data shares his insights on why the cost estimates for these operations are identical despite their structural differences. I explore how the query plan costs remain consistent, even though the underlying data structures differ significantly. Through practical examples, I demonstrate that while a clustered index might result in more logical reads due to key lookups, SQL Server does not penalize this operation in terms of cost estimation. This video aims to clarify common misconceptions about clustered indexes and heaps, emphasizing their appropriate use cases in different scenarios. Whether you’re a seasoned DBA or just starting out with SQL Server, understanding these nuances can greatly enhance your query tuning skills.

Full Transcript

Erik Darling here with Darling Data, and in this video we’re going to talk about something that I just think is weird. Not good or bad or ugly, just kind of strange to me. We’ll talk about some reasons why things might be the way they are, and I will try to remain not complaining about this, just explaining. I don’t want to complain, I want to explain. Sometimes I really need to complain, and you’ll hear it, but this one is just a nice walkthrough of one aspect of lookup costing that I think is really weird. Now, something that I will normally complain about is that SQL Server is very much stuck in the spinny disk era of I/O costing generally. This is different from that, so you don’t have to worry about that. This is just about the difference, well, the lack of a difference, really, in lookup costing between heaps and clustered indexes. Alright, so let’s go with that. So I’ve created a temp table already ahead of time, because, well, you can’t see it because I’m in the way, but it takes about six seconds, and I’ve wasted enough of your time babbling.

So, watching me, like, move data around is probably not on your top ten list of ways you’re going to spend your life. So, I’ve already created this table called votes. It is a heap. There is no indexing on it currently. We’re going to look at how indexes change things in a minute. But, yeah, right now there’s just no order to this thing. It is just a heap of nonsense.

And if we run this query right here to look at the sort of page or index structure stuff for the heap, we’re going to have this big, long, confusing table name. But we’re going to collapse that a little bit, because no one needs to see that. Absolute barrage of underscores.
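That structure query is sys.dm_db_index_physical_stats in DETAILED mode, something along these lines. I’m assuming the votes table here is a temp table living in tempdb, which is where that barrage of underscores comes from.

SELECT
    ips.index_type_desc,
    ips.index_depth,
    ips.index_level,
    ips.page_count,
    ips.record_count
FROM sys.dm_db_index_physical_stats
(
    DB_ID(N'tempdb'),             /* temp tables live in tempdb */
    OBJECT_ID(N'tempdb..#votes'), /* hence the underscore-padded name */
    NULL,                         /* all indexes */
    NULL,                         /* all partitions */
    'DETAILED'                    /* per-level page and record counts */
) AS ips;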

And if we zoom in here, so this is a heap. So it has an index depth of one and an index level of zero to contain everything. This is just how heaps look. All 242,798 pages are just flat across, kind of.

And you can see the record count is about almost 53 million rows. So, a fairly impressive row-to-page ratio, I think. You know, sort of.

Don’t worry, we’re not going to look at fragmentation. Oh. So, if we go ahead, come down here, and run this query, it has to scan the whole table, necessarily, because it is a heap and we have no index with which we can find this data. If we look at the I/O output, you’re just going to have to, like, trust me.

Oops, that was the wrong Windows key. You’re just going to have to trust me that this is the votes table, because I’m going to just knock this whole thing out. So, it’s not in the way.

But, yeah, we’ve got to read all 242,798 pages. We’ve got to scan that whole thing. Right. And, of course, this has a scan count of nine, because it is a parallel query plan at DOP 8. And, for some reason, that gets counted as nine scans.

Right. So, the coordinator thread also apparently gets included as a scan. But, this whole thing just ran at DOP8.

Right. So, look at this. DOP8. If we come over here. Look. No, you can’t really see much there. But, if you look at the number of executions, it is also eight here. But, stats.io counts it as nine.

So, I’m only using it for convenience. I don’t use statistics time or I/O when I’m tuning queries generally. I look at the stuff in the query plans, because it’s there and it’s easier.

I don’t like switching over to a messages tab to look at stuff. So, there. We got that. Anyway. Let’s go add a filtered nonclustered index to this table.

So, this filtered nonclustered index is going to get us exactly to the data that we care about in our where clause. Alright. So, good for us.

We’ve done it. I’m going to create this index. And, we’re going to look at what changes. Alright.
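The index and the test query look roughly like this. The filter value is a guess; the point is just that the index covers the predicate, but not the other columns the query asks for.

/* filtered nonclustered index that matches the where clause exactly */
CREATE INDEX bounty
ON #votes (BountyAmount)
WHERE BountyAmount = 500;

/* the test query: the seek finds ~2,286 rows, and the other columns
   have to come back via a lookup, one execution per row */
SELECT v.*
FROM #votes AS v
WHERE v.BountyAmount = 500;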

So, first, let’s look at the index structure again. And, now we’re going to have a couple extra rows in there for the new index that we just created. Alright. Let’s get rid of this stuff. And, we will have…

We still have the heap with its 242,000 pages. But, now we have a nonclustered index with an index depth of two. Alright.

Now, 2,286 rows in this index are on six pages. Which is also a very… I think it’s a very, very good row-to-page ratio.

It’s excellent. Good job, SQL Server. He nailed it. Alright. And, just because the BountyAmount column is an integer and there are only 2,000-and-change rows, we just don’t need a lot of data pages to fit that one column, filtered down to that one little bit of data.

So, if we go run this query now… Let me see. Hello. And, we look at the execution plan. We have a rid lookup.

Right? An RID lookup row identifier. Because, that’s how SQL Server, like, keeps track of unique rows in a heap. We don’t have a…

We don’t have, like, a clustered index. We get a row identifier. We’re not going to talk about, you know, when heaps are good or bad. This is just something that I find kind of interesting. So, if we come over here and we look at the stats.io output… Let’s delete this line because there’s nothing useful in there.

And, once again, we’re going to get rid of this incredible amount of underscores. And, we’re just going to zoom in on this. We’re going to see that we did 2,294 logical reads.

Right? So, it’s like the 2,286 rows. Like, we just did a thing and we had to do some extra reads and some navigating. So, close enough to, like, the number of rows that are actually in the nonclustered index.

It’s fine. Right? But, if we go over and look at the query plan. And, we look at the costing.

Eh, come on, buddy. Help me out here. I do that so that the tooltip shows up in a way that I don’t block it or it doesn’t, like… It’s not, like, cut off or anything annoying. So, if we look at the costing for this.

We have 2,286 executions. And, then we have our estimated costs. Right? Remember, costs are not actual, real-life anythings. They are unitless, meaningless metrics that the optimizer uses to come up with an execution plan.

Right? It’s just… It’s internal stuff so that SQL Server can, like, figure out what it thinks the cheapest plan will be. Right?

So, we have 2,286 executions. We have the total operator cost of the lookup is 7.46-something query bucks. And, then the estimated IO and CPU costs are very, very low.

But, if you add those together and then multiply them by 2,286, you actually get a slightly higher number. It was, like, 7.5-something. So, I’m guessing that there’s, like, some cost reduction that happens after, like, the initial read.
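To spell out the arithmetic with the numbers from the plan: 2,286 executions × (0.003125 I/O + 0.0001581 CPU) comes to roughly 7.51 query bucks, a touch higher than the 7.46856 total the operator actually shows, so a little cost does seem to come off somewhere after the first read.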

Because SQL Server does assume a cold cache. Right? It doesn’t assume anything is in memory when a query starts up. So, it assumes all the IO is going to have to be done on disk.

Which, you know, we’ve talked about this. But, that’s how things start up. So, maybe there’s some cost reduction after the initial, like, lookup thing. This is, like, oh, well, now the data is going to be in memory.

So, these might be a little cheaper. So, close enough, though. Right? Like, just remember those numbers. 0.003125, 0.001581, and the 7.46856. Right?

So, all very easy numbers to remember. No problems there. Now, let’s backtrack a little bit. And let’s touch our table one more time with a unique clustered index. Now, when I add this unique clustered index, two things are going to happen.

The table is now going to be a clustered table. Right? Now, “clustered table” is a good term that I heard from Tim Chapman, a smart fellow who works at Microsoft. He uses that term rather than “clustered index” because some folks do get confused when they hear clustered index.

They think there’s, like, this magical heap structure and also a clustered index sitting on top of it. But there isn’t; you just cluster the table. Right? So, it’s a good term to use there. Right?

Good thing to remember. So, that’s going to happen. And the other thing that’s going to happen is this index is going to get rebuilt using the clustered index key column ID as the row identifier rather than the internal rid that a heap uses.

This took a lot longer when I wasn’t using a filtered index because you basically had to create one 52 million row unique index to cluster the table. And then you had to rebuild the 52 million row nonclustered index. And that was not a good use of leisure time.

That would get marked incomplete in kindergarten. Very, very bad use of leisure time. So, now that we have a clustered index and a nonclustered index, let’s look at how that table and index stuff is laid out now.

So, the first thing you’re going to notice most likely is that we no longer have that single, let’s, I forgot to do that. I’m so forgetful.

Well, let’s zoom in on this now. So, the first thing you might notice is that we have, oh, ZoomIt, be my friend today. This changed a little bit, didn’t it?

Right? We no longer have that one row for the heap and the two rows for the nonclustered index. Now, we have three rows for the clustered index. So, there’s one page up at the very top of the index, like the root page. And there’s 395 records in there.

And then there’s 395 pages with 242,000 records. And then there’s 242,000 pages for 52 million records. And, like, we have a much deeper index now, don’t we?

We don’t just have that flat heap structure. We have three rows, for the three levels of the index. So, that’s interesting.

But what’s even more interesting, I think, is that when we run this, and we look at the messages tab, again, we’re going to get rid of this work table line because it’s useless. And we’re going to delete all these underscores.

And we’re just going to zoom in and look at the logical reads. So, now, we have 7,015 logical reads. Interesting, huh?

We did a lot. We did about 3x the logical reads compared to when we just had the heap. Because now we have to navigate all that clustered index stuff. And now, I’m not saying clustered indexes are bad.

Because, I think, in SQL Server, for transactional tables, they’re pretty great. In SQL Server, heaps are pretty great for staging tables and stuff. But I think one thing that probably happened a long time ago is someone might have been testing clustered indexes versus heaps and come across maybe a plan with lookups in it.

And they might have seen something like, oh, maybe it’s a little faster with the lookups because we don’t have to navigate the whole clustered index thing. And maybe we did fewer logical reads with a clustered index.

My goodness. You’ve got to have those low logical reads or else. So, someone might have been comparing them. You might have been like, well, clustered indexes are going to make everything slow.

Why would we want them? This sounds terrible. Why would we ever have a clustered index? We do all these logical reads now with a clustered index. Bummer.

But, oh, now if we look at the query plan, what’s interesting, right, is now we have a key lookup rather than that rid lookup. But, if we zoom in on this and we look, we have the same number of executions, but we also have the exact same costing, right?

So, SQL Server doesn’t actually cost key or rid lookups any differently. It’s the same numbers in there. 7.46856, 0.003125, 0.0001581.

It’s the same numbers. So, there’s no cost difference to SQL Server when it’s looking at either heaps or clustered indexes. And these queries, I mean, there’s so few rows that the timing on these is almost indistinguishable.

So, this isn’t like a performance test. This is just something to show you how the costing is the same. Now, you might be wondering why the costing is the same.

And I was, too. And I was gently reminded by a client in New Zealand that in early versions of SQL Server query plans, there was no distinguishing between key and RID lookups.

Everything was just called a bookmark lookup. It was one single operator. I think it was around SQL Server 2005 when that changed, when you got some distinguishing between key and RID lookups. So, that’s probably why there’s just one unified costing strategy for key and RID lookups, despite the fact that there is a physical difference in structure between clustered tables and heap tables.

So, it is kind of weird. And it might be kind of misleading, depending on how you approach query tuning, and depending on how you design your nonclustered indexes as well. Say you were doing a basic sanity test of whether a table should have a clustered index on it, which, I mean, I don’t know how many people are starting brand new databases from scratch these days.

Most of the databases I see have been around since, I don’t know, forever. Like, if you cut them open, there’s a lot of rings. But it’s one of those things where, if you were around in the very early days of SQL Server, or you’re building a new database now and maybe your knowledge is not so expansive about clustered indexes, primary keys, nonclustered indexes, query plans, and all this other stuff.

You might, you know, test having a heap versus having a clustered index and think, wow, look at this query. It does all these extra logical reads of the clustered index. It’s three nanoseconds slower.

Better just leave that clustered index out. But that’s a pretty big mistake, I think, for most SQL Server workloads. So you should probably not make that part of your strategy. Heaps are great for staging tables; clustered indexes are very good for transactional stuff.

Anyway, that’s about enough of this. Thank you for watching. I love you.

I don’t know. Maybe I do. Maybe I’m, maybe, maybe I do, maybe I don’t. Maybe I’m indifferent. Maybe we just haven’t met yet. Maybe someday I will love you. When I, if I do love you, I will always love you.

Darling data promise. Anyway, thank you for watching. I hope you enjoyed yourselves. I hope you learned something. I hope you’ll watch more videos. If you, if you like this video, the thumbs up button lets me know that you liked it.

Otherwise, I just see views and I don’t see thumbs ups. And I think, wow, what happened? If you like this sort of SQL Server content, I do try to publish as often as possible, both my blog and the videos. So you should subscribe to my channel if you want to get more of this stuff.

Because I don’t know where else people get it from these days. So yeah, thank you for watching and I will see you in the next video, whenever, whenever that may be. I love you.

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. Blog readers get 25% off the Everything Bundle — over 100 hours of performance tuning content. Need hands-on help? I offer consulting engagements from targeted investigations to ongoing retainers. Want a quick sanity check before committing to a full engagement? Schedule a call — no commitment required.