SQL Server Performance Office Hours Episode 29

To ask your questions, head over here.

In Brent’s post about maintenance he said updating statistics causes parameter sniffing. Should I stop doing that? It’s very confusing.
Hi Erik. You discussed index compression in a YouTube video about 10 months ago. I do not see a video on table compression. What is your professional opinion on table compression, and do you ever use it or recommend it to clients? Thank you sir.
Hi Erik, what are your thoughts regarding AI and its long-term impact on the DBA world? I used Claude for the first time a month ago and I’m now considering retraining to be a plumber… Cheers, Oli the DBA.
Which metric best reflects actual execution time – the “Last Total Elapsed Time” in the Messages tab via SET STATISTICS TIME ON? In production, a query ran slowly despite a low time shown in the execution plan (guessing low CPU time); what factors might cause this, and how can I find and resolve them? It wasn’t just slow in SSMS but the app too. So I guess we can rule out high Client time?
I’ve been watching your T-SQL beginner content and it’s really great. I thought I was more advanced but I’m learning a lot of surprising things. I can’t imagine what the advanced content will look like. How do you remember all this stuff?

Video Summary

In this video, I dive into a series of Office Hours questions from viewers, providing insights and advice on topics ranging from parameter sniffing and statistics maintenance to table compression and the long-term impact of AI on the DBA world. Whether you’re dealing with performance issues or curious about future trends in technology, there’s something for everyone. I also share my thoughts on the T-SQL beginner content that has been well-received by many, emphasizing the importance of continuous learning and practice in the tech field. If you have your own questions or want to support the channel, make sure to check out the links provided in the video description.

Full Transcript

Erik Darling here with Darling Data, and you guessed it, you smart devil you. We are going to do an Office Hours episode. Why? Because I have five questions to answer. So I’m going to answer them. If you want to ask your own question, there’s a link right there. It’s going to look very much like a link down in the video description where you can go ask your very own questions. It’s all very nice and streamlined for you. If you want to support the channel, there’s also a link down there to do that. It’s just a nice way to say thank you for the endless hours of content that I produce. If you’re into that sort of thing. If you’re into the whole gratitude trip. If you need consulting help, and boy, do you need consulting help. I’ve seen your servers. They’re messes. I am available for all of these things and more. And as always, my rates are reasonable. If you want to get your hands on my performance tuning training, you can get all 24 hours of it for about 150 US buckaroos. That will last you for the rest of the day. The rest of your life. Another link down in the video description to do all that with. And if you want to get in on my T-SQL course, while it is still at the pre-sale price of $250, you can. 23 hours of beginner content is out and available. And this will go up to $500 when the advanced material drops after the summer. And so please buy it now before you send an email to me saying, hey, can I get it for the old price? And I say, no, no, you can’t. Anyway, the nice folks at Redgate, they’re making me leave the house this summer. So August, September and October, I will be in New York, Dallas and Utrecht with the Pass on Tour extravaganza, all leading up to the Pass Data Community Summit taking place in Seattle November 17th to 21st.

And now let’s go answer some office hours questions here. Let’s zoom this in to a reasonable degree so that everyone can see exactly what’s happening here. All right. In Brent’s post about maintenance, he said updating statistics causes parameter sniffing. Should I stop doing that? It’s very confusing. I know the post you’re talking about. I’m not sure that I would have ended the sentence so abruptly, but to each their own. It’s hard to write a whole book in every post. But when you look at Brent, one of the first things you notice is that he is a fitness enthusiast.

What may not be obvious is that his fitness icon is a lady named Susan Powter, who was big in, I guess, the 80s and 90s. Her catchphrase was “stop the insanity.” So what I presume was meant by that is: if you are running nightly statistics maintenance, and you come in in the morning facing some sort of server meltdown because all your queries got bad, different, weird plans, then maybe you should examine the way that you are updating statistics nightly. Perhaps rein that in a little bit, perhaps change your technique a bit. I can’t tell you exactly what to do because I don’t know if you are facing these things.

But when you update statistics, you, of course, invalidate a lot of execution plans. The execution plans become uncached, or however you want to put it, and then SQL Server compiles a new plan based on whatever comes in next. So if you are facing that issue, you might consider some things, like figuring out which queries are susceptible to it.

You could skip the statistics maintenance for tables involved in those queries. You could also use a plan guide, or force a plan with Query Store that doesn’t result in your server melting down. Or, after maintenance, you could run a stored procedure or query with the parameters that get you the execution plan that you want.

There are lots of ways to do that that don’t involve stopping updating statistics wholesale, because if you wanted to stop that entirely, you would actually have to look at turning auto update stats off, and some other stuff, which could also cause parameter sniffing. My general feeling and my general experience is that statistics that are a good representation of the data in your tables and indexes are a good thing for SQL Server’s query optimizer. So it’s a little tough to balance that with not updating statistics.
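If post-maintenance plan regressions are the problem, the Query Store option above looks roughly like this. A minimal sketch, assuming Query Store is enabled; the `@query_id` and `@plan_id` values are placeholders you would look up yourself, not real identifiers:

```sql
/* Find plans and their average duration, so you can spot a query
   that picked up a slower plan after statistics maintenance. */
SELECT
    qsq.query_id,
    qsp.plan_id,
    qsrs.avg_duration
FROM sys.query_store_query AS qsq
JOIN sys.query_store_plan AS qsp
    ON qsp.query_id = qsq.query_id
JOIN sys.query_store_runtime_stats AS qsrs
    ON qsrs.plan_id = qsp.plan_id
ORDER BY qsrs.avg_duration DESC;

/* Force the known-good plan. The ids here are placeholders. */
EXEC sys.sp_query_store_force_plan
    @query_id = 42,
    @plan_id  = 7;
```

Forcing a plan is a pin, not a fix: the forced plan stays in place until you unforce it, so it’s worth revisiting after the underlying issue is addressed.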

So, I don’t know, perhaps there will be a clarifying post at some point in the future. Sorry you were confused by something someone else said. Anyway, hi, Erik.

Hi, you discussed index compression in a YouTube video about 10 months ago. I do not see a video on table compression. What is your professional opinion on table compression? And do you ever use it or recommend it to clients?

Thank you, sir. Sure. So, when you say table compression, I assume you mean that you are talking about applying compression to a heap, since index compression would involve clustered and nonclustered indexes. Both, since they are indexes.

And a clustered index makes for a clustered table, not a copy of the table, but the table. So, the only thing left is a heap. That is a table without a clustered index. And the answer there is no.

And the reason why is that with index compression, just remembering the details of this, when you apply compression to an index, assuming page compression, row compression is applied as rows get added in, and then when a page is full, page compression is applied. That could be off; it’s a little foggy for me because I haven’t looked at it in a long time.

But the thing with heaps is that data is not compressed on insert to a heap. You have to rebuild the heap in order to apply it. So, usually when I’m talking about heaps with clients, it’s not should I add page compression to this heap.

It’s should this table be a heap? Check yes or check no. If no, then depending on the size of the heap, perhaps we should explore page compression for when we decide on and add a clustered index to that table to make it a clustered table and take away its heapness.
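In code, the distinction above looks something like this. A sketch only; `dbo.SomeHeap` and its key column are hypothetical names standing in for your actual table:

```sql
/* You CAN page-compress a heap, but only the rebuild compresses
   existing rows; later inserts into the heap are not page-compressed
   automatically, so the benefit erodes over time. */
ALTER TABLE dbo.SomeHeap
    REBUILD WITH (DATA_COMPRESSION = PAGE);

/* The usual answer instead: decide on a clustering key and create
   the clustered index compressed from the start. */
CREATE CLUSTERED INDEX cx_SomeHeap
    ON dbo.SomeHeap (SomeKeyColumn)
    WITH (DATA_COMPRESSION = PAGE);
```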

So, there we go. All right. All right.

Oh, another hi, Erik. Look at these polite people. Hi, Erik. Hi, it’s me. Oh, that was not a good square; that was covering up all sorts of stuff. All right. What are your thoughts regarding AI and its long-term impact on the DBA world? I used Claude for the first time a month ago.

Jeez. Late to the game. And I’m now considering retraining to be a plumber. Cheers, Oli the DBA. Well, maybe we should start calling you Oli the plumber. So, I’ve talked about this stuff a couple of times in the past.

And again, with the AI and LLM thing, you don’t really want to watch where the ball is; you want to watch where the ball is going. Right now, if you are a moderate to highly skilled DBA, AI is not going to replace you immediately. Depending on the trajectory of AI (there are all sorts of variables and factors that will play into that trajectory), it could get very good and end up being enough to replace moderate to highly skilled people.

Or other factors may kick in and it may have less of an impact. Right now, the thing a lot of AI people worry about, as far as things progressing, is whether LLMs on the current trajectory will just keep getting infinitely better while people stay about the same.

What a lot of people point out is the high energy expenditure: it costs a lot of money to do all this stuff, and there’s not a lot of return on investment for it. So there are things that could make AI stop infinitely getting better and sort of crater off.

But then there also might be some things that happen that improve the energy costs and such, and make it so that there is some return on investment. I think the biggest problem people are facing right now, though, is that there are a lot of idiots in the world, and a lot of the idiots in the world are in charge of people. They are managers.

They are executives. They are C-levels. And they are not very highly technical people. And they think that AI and LLMs right now can replace people. And I’m sure that there are some replaceable people in the world.

To some degree, everyone’s replaceable. But I think the biggest thing right now is they don’t realize how not-good LLMs are at deeper-level thought. And I know that because when I try to ask an LLM to do deep research, it ends up quoting my blog posts back to me.

And I’m like, there has to be something better out there. Why are you coming back to me with my own blog posts?

So it depends on a lot of stuff. It could, of course, be a real nightmare for people. Or it could just end up being, I don’t know, a slightly better Siri or something.

I don’t know. So, you know, it does depend a bit on a lot of things. But really, the worst part about it is the higher-ups thinking that it’s way more capable than it is.

They’re under a lot of pressure to use it and keep up with technology trends and stuff. And I use AI, or I try to use AI, for a lot of things, and I just find it severely wanting for most of the things that I try to do with it.

Anyway, let’s see what we got here. Which metric best reflects actual execution time? The “Last Total Elapsed Time” in the Messages tab via SET STATISTICS TIME ON?

In production, a query ran slowly, despite a low time shown in the execution plan, guessing low CPU time.

No. What factors might cause this, and how can I find and resolve them? It wasn’t just slow in SSMS, but the app, too.

So I guess we can rule out high client time. Well, I don’t know. It depends on how many rows your query was returning. But I don’t really use SET STATISTICS TIME ON anymore.

Now that query plans have the operator times and such in them, I just don’t see much point in it. You can get all that information from various parts of the query plan without having to switch over to the Messages tab and look at a bunch of nonsense. But there are easy ways to figure that out.

So, assuming that you are looking at the right thing in the actual execution plan: sometimes looking at the last operator is not a great way of figuring out how long a query took, because if it was running in either mixed row and batch mode, or all in batch mode, then there could have been some longer-running operators off to the side that you didn’t see.

So if you right click on the root of any query plan, any actual execution plan, rather, there will be an actual time statistics thing in the properties window. And that will tell you the CPU and elapsed time of the query.

That’s also a good place to look, because if you are returning a lot of rows, it will, of course, also include the time that it took for SSMS to receive and render the results. And that can take a long time.

It might be a very easy thing for you to test: just dump the results into a temp table, assuming that it’s not millions and billions of rows, and see if the elapsed time drops, because you’re sort of discarding the results. You could also try discarding the results via SSMS, or assigning columns to variables or something.
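A minimal sketch of that temp-table test; the table and filter here are hypothetical stand-ins for your actual slow query:

```sql
/* Baseline: run the query as usual, with SSMS receiving
   and rendering every row it returns. */
SELECT o.*
FROM dbo.Orders AS o          /* hypothetical table */
WHERE o.OrderDate >= '20240101';

/* Test: dump the results into a temp table instead, so SSMS
   never renders them. If elapsed time drops a lot, much of the
   "slow query" was really result transfer and rendering. */
SELECT o.*
INTO #discard
FROM dbo.Orders AS o
WHERE o.OrderDate >= '20240101';

DROP TABLE #discard;
```

Compare the elapsed times from the actual execution plans of the two runs, not from the Messages tab.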

But that can have an impact on the query plan overall, so I don’t usually recommend it. I wish you had shared the plan or something with me, because then I could give you a more definite answer.

But like I said, I don’t really use SET STATISTICS TIME anymore. I generally rely on what’s in the execution plan. And what you see for the operators in actual execution plans is, of course, wall clock time, not CPU. For CPU, you have to look at the operator properties to determine that.

All right. Last question here. I’ve been watching your T-SQL beginner content, and it’s really great.

Well, thank you. What a sweetie you are. These polite people with their “hi Erik”s and their compliments. I thought I was more advanced, but I’m learning a lot of surprising things.

Well, surprises abound with T-SQL. It’s actually what the S in SQL is. Surprise. Surprise.

I can’t imagine what the advanced content will look like. How do you remember all this stuff? Well, most of the time I don’t. Like a lot of people who work in tech, I have no internal dialogue or biographical memories.

So I write a lot of stuff down. And that’s generally how I am able to remind myself of things if perhaps I have forgotten something. But, you know, a lot of it is just the fact that I do it all day, every day.

And the more you do stuff all day, every day, the more you remember, kind of. It sticks with you, becomes kind of muscle memory. And it becomes harder to forget.

Or I don’t know. I don’t want to say it becomes harder to forget because, man, there’s always a way to forget. But, you know, it does become a bit easier to have these things in the front of your mind rather than having them fly out of your ears. Anyway, that’s five questions.

Thank you for watching. I hope you enjoyed yourselves. I hope you learned something. And I will see you in the next video tomorrow. All right. Good night.

I love you.

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 25% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that, and need to solve database performance problems quickly. You can also get a quick, low cost health check with no phone time required.