Loops, Transactions, and Transaction Log Writes In SQL Server


Video Summary

In this video, I delve into an interesting aspect of SQL Server transaction management by exploring how loops and transactions interact in a practical scenario. I demonstrate three different approaches for handling modifications within a loop — allowing automatic transactions to handle each operation individually, wrapping all operations in a single explicit transaction, and conditionally committing based on specific criteria. By using sp_PressureDetector to monitor transaction log activity, I show that optimizing the transaction commit strategy can significantly reduce the load on the transaction log and speed up the overall process. Whether you’re dealing with cursors or other looping constructs, this video offers valuable insights into how to tune your SQL Server operations for better performance.

Full Transcript

Erik Darling here with Darling Data, on my tenth take at starting this thing. Usually recording these videos in the morning is against my religion, because I am not suitably or favorably dispositioned to be presentable for this sort of work before like 3pm. So this video is about loops, transactions, and the transaction log in SQL Server, and how you can use transactions in loops to be more favorable to the transaction log and speed those loops up considerably. Now, I don’t often recommend looping code, but sometimes it is unavoidable and you’ve got to do it, because it’s unavoidable and the two are synonymous. What we’re going to do is look at three different options you have for doing modifications in a loop. If these were select queries, we wouldn’t care, because select queries don’t do anything to the transaction log and this would have no impact. It would just be about tuning the select queries in the loop, you know, tune your cursor queries or whatever. So, yeah, we have three options here, and we’re going to look at the first one, which is allowing SQL Server to behave as it normally does and use automatic transactions. Not implicit transactions, mind you; implicit transactions are a completely different thing. But implicitly, each one of these statements is its own transaction, right? So this insert is a transaction. This update is a transaction. This delete is a transaction. What we’re going to do is highlight this and go over to this window, this tab, this tabulature. And we’re going to use a newer addition to sp_PressureDetector that allows you to sample a server for a number of seconds and look at what happened in that time. So I’m going to kick this off, and I’m going to run this. And if this demo lives up to prior executions, it should finish in about seven or eight seconds.
Look at seven seconds. Look at that professional presenter, even at whatever time it is in the morning.
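The demo script itself isn’t reproduced in the transcript, but the shape of this first option is roughly the following. The table, column, and iteration count are placeholders I’ve made up for illustration, not the actual demo objects:

```sql
/* Option 1: let every modification auto-commit on its own.
   Each statement is its own transaction, so each one forces
   a separate hardened write to the transaction log.
   dbo.LoopTarget and the loop count are stand-ins, not the demo's. */
DECLARE @i integer = 1;

WHILE @i <= 20000
BEGIN
    INSERT dbo.LoopTarget (id, stuff)
    VALUES (@i, N'new row');          /* transaction 1 */

    UPDATE dbo.LoopTarget
    SET    stuff = N'changed row'
    WHERE  id = @i;                   /* transaction 2 */

    DELETE dbo.LoopTarget
    WHERE  id = @i;                   /* transaction 3 */

    SET @i += 1;
END;
```

Three statements per iteration, each committing separately, is what produces the roughly three log flushes per loop pass measured below.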

Nailed it, right? Not like I didn’t just run through this or anything. So, looking at sp_PressureDetector’s results, the stuff that I want to focus on is first up here. This second line is going to be writes to the transaction log for our database. Looks a bit strange now that I see it on the screen, but if we look over here at that second line, we wrote about 235 megs to the transaction log over 60,000 total writes. And those are backed up by mostly, you know, mostly correct numbers if we look at perfmon counters as well. Let’s frame this up a little bit more nicely.

If we look at log bytes flushed, there were 247 million total, or about 24 million a second. And if we look at the log flushes, we have about 60,000 total flushes, or about 6,000 flushes per second. And that lines up pretty well: the 247 million bytes is pretty close to 235 megs, and 60,000 log flushes is pretty close to 60,000 total writes to the transaction log. So that might be fine, right? You might be doing this at a time when it doesn’t matter if your loop runs for seven seconds. It just might not be a big deal.

That’s okay. But if you’re like me and you often need to tune processes like this, you might be looking at other ways to improve upon it. One way you can do that is by wrapping all three of the modifications into an explicit transaction. So up here we have a begin tran, and down here we have a commit. Rather than having each insert, update, and delete auto-commit when it runs, we’re going to make them commit as a group when each loop iteration finishes.
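Sketched out with the same made-up object names as before, the second option wraps each iteration’s three modifications in one explicit transaction, so they share a single commit:

```sql
/* Option 2: one explicit transaction per loop iteration.
   The insert, update, and delete commit as a group, so the
   log flushes drop to roughly a third of the auto-commit version.
   dbo.LoopTarget and the loop count are stand-ins, not the demo's. */
DECLARE @i integer = 1;

WHILE @i <= 20000
BEGIN
    BEGIN TRANSACTION;

    INSERT dbo.LoopTarget (id, stuff)
    VALUES (@i, N'new row');

    UPDATE dbo.LoopTarget
    SET    stuff = N'changed row'
    WHERE  id = @i;

    DELETE dbo.LoopTarget
    WHERE  id = @i;

    COMMIT TRANSACTION;  /* one log flush for all three statements */

    SET @i += 1;
END;
```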

So let’s highlight this code. And let’s come over here and kick off SP pressure detector. And let’s come back over here and run, oh, not you, run this. And this should finish in about between two and four seconds. We got two seconds on that one. Things seem to be finishing on the low end when I run them here. And what we’re going to see is that we cut everything down to about a third of what it was before.

So coming up here and looking at the total megs written, that’s just about 80, which is just about 30% of 250, whatever it was before. And the total write count is about 20,000, which is about 30% of the 60,000 that it was before. Something like that.

Well, 20 out of 60 is 33 and a third, something like that. 33 point infinite threes. And if we look down in the perfmon stats section at the same counters that we looked at before, if you look at log flush bytes, it’s about 1.6 bajillion.

And really what we’re looking at over here is the total difference, which is about 83 million, and about 8.3 million a second. So that’s all coming down to about a third of what it was. And we see the same thing here for the log flushes a second, where that’s at about 20,000 total and about 2,000 per second.

So before this was 60,000 and 6,000. Before this was, you know, 240, whatever bajillion. So that’s one way of doing it.

One way that I’ve found of making this even better is by not making every single loop through a transaction that commits, but conditionally committing the transactions based on something. And in this case, excuse me, the something that I’m using, it looks like this.

So at the very top we have a begin transaction, and at the very bottom we have a commit transaction. But in the middle, every time the ID value modulo 1,000 is equal to 0, so basically every 1,000 loops through, we’re going to commit the transaction and begin a new transaction.

All right, so it’s very important that you do that commit, and it’s just as important that you immediately begin a new transaction after it. All right, cool. So let’s get this highlighted.
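Using the same placeholder objects as the earlier sketches, the conditional-commit pattern being described looks roughly like this:

```sql
/* Option 3: open one transaction up front, then commit and
   immediately begin a fresh one every 1,000 iterations.
   Commits (and therefore log flushes) drop from one per
   iteration to one per thousand iterations.
   dbo.LoopTarget and the loop count are stand-ins, not the demo's. */
DECLARE @i integer = 1;

BEGIN TRANSACTION;

WHILE @i <= 20000
BEGIN
    INSERT dbo.LoopTarget (id, stuff)
    VALUES (@i, N'new row');

    UPDATE dbo.LoopTarget
    SET    stuff = N'changed row'
    WHERE  id = @i;

    DELETE dbo.LoopTarget
    WHERE  id = @i;

    IF @i % 1000 = 0
    BEGIN
        COMMIT TRANSACTION;  /* flush the last 1,000 iterations' work */
        BEGIN TRANSACTION;   /* critical: start the next batch right away */
    END;

    SET @i += 1;
END;

COMMIT TRANSACTION;  /* commit any remainder from the final partial batch */
```

The trade-off, as the comment below also points out, is that fewer commits mean longer-held locks, so the batch size is a knob to balance log throughput against blocking.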

It’s a little bit more verbosity to the code. Let’s start this running, and now let’s run this. And that didn’t take two seconds.

That took no seconds. That was very fast, right? Pretty good, I think, anyway, at least if you’re into that sort of thing. And if you look at what happened in sp_PressureDetector, we wrote a total of 10 megs to the transaction log, over 180 writes.

That’s a little cut off over there. And if we look down at the perfmon counters, and let’s frame this up a little bit more nicely, we have for log flush bytes a second, we are down to 10 million there, or 1 million a second.

And if we look at the log flushes a second for our database, we are down to 180, or about 18 per second. So that lines up, the 180 there lines up with the 180 there, and the 10 megs there lines up with the total difference there.

So everything kind of agrees that SQL Server writes to the log more efficiently when you, you know, A, like, don’t use, like, the auto-commit transactions if you’re doing multiple modifications in a loop.

And if you do an explicit transaction that encompasses all of the modifications in the loop, you’ll do better. And then if you change your code a little bit so that you control exactly the sort of cadence of commits to the transaction log, you can do even better.

Again, this is kind of a rare thing, but I do see it often enough that I find this to be a useful tactic to speed things up. So if you’re ever looking at code that’s running in a loop, whether it’s a cursor or a while loop or any other sort of construct that might loop over things, and you’re like, there are just not enough hours remaining in my life for me to write this as a set-based solution, you might consider using one of these techniques, either wrapping all of the modifications into a single transaction or controlling the cadence of transaction commits and begins and stuff to speed that up.

So, an admittedly quick video this morning because I have stuff to do soon, and I got to do those things. So, thank you for watching.

I hope you learned something. I hope you enjoyed yourselves. If you liked this video, even early in the morning when I am clearly a bleary-eyed individual, a thumbs up is a good way to say thank you.

Leaving a comment that says thank you is even more verbosity. And, of course, if you like this sort of SQL Server performance tuning training content, you should subscribe to my channel so that you get notified every single time I post one of these whiz-bang things, and I promise you that most of them will not be in the morning.

I prefer to work in the dark or something. Anyway, thank you for watching. I need to leave.

Bye.

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 25% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that, and need to solve database performance problems quickly. You can also get a quick, low cost health check with no phone time required.



2 thoughts on “Loops, Transactions, and Transaction Log Writes In SQL Server”

  1. This was very eye-opening. The decrease in log writes makes sense, but the reduction in run time was more than I expected. Of course, the trade-off with not committing transactions so often is the increased chance of blocking, so, as with most things, it’s a balancing act.

Comments are closed.