A Little About Why Parallelism Doesn’t Make Every SQL Server Query Faster


Video Summary

In this video, I delve into an intriguing scenario where parallelism in SQL Server doesn’t always yield the expected performance benefits. Starting off with a view called “parallel but serial,” which gets amusingly abbreviated as PBS by Redgate’s SQL Prompt tool (a nod to Sesame Street), I run through its execution plans side-by-side to highlight the nuances of parallel query processing. The key takeaway is the concept of skewed parallelism, where all rows end up on a single thread despite running in parallel, leading to suboptimal performance. This video serves as both an educational exploration and a reminder that while SQL Server’s parallelism can be powerful, it’s crucial to monitor row distribution across threads to ensure optimal query execution.

Full Transcript

Erik Darling here with Darling Data. Exciting news on top of Scotch Sniffer Magazine wanting to partner up with me: apparently Wine Wanker Magazine also watches my YouTube channel, and they've put in a competing offer to partner with me on future endeavors. This video was of course sponsored by Fiji Water. Well, not really, because I don't know if this is actual Fiji Water or the Fiji Water bottle that I refilled with New York tap water. Maybe it says something about my palate that I can't tell the difference between a fresh bottle of Fiji Water and New York tap water, but quite frankly, water doesn't get me drunk, so I'm not going to invest that much time in figuring it out. This video is a little bit about why parallelism does not help every query in SQL Server. What I have set up over here is a view called parallel but serial, which, oddly enough, gets aliased by Redgate's SQL Prompt as PBS, which I find amusing because this is supposed to be a very informative and entertaining Sesame Street video.

But what I'm going to do is actually run these. I've got query plans turned on so I won't waste anyone's time, and while that happens, it's going to take around 45 or so seconds. So I've got a lot of dead space to kill here. Maybe I should have talked about the magazines and whatnot while these things were running. Might have been smart of me. Poor time management skills, I guess you could put in my yearly review. But while these things are running, this is the definition of the view. I'm actually going to turn that off for a moment so that the code is a little bit more clear and easier to read; reading a wall of green text is no easier than reading a wall of red text. Anyway, what we're doing is flattening out dates into years and months, because that's all we care about. We are joining the Posts table to itself to find accepted answers and whatnot. We're also doing a third, kind of weird cross join to find the max creation date of all posts where the score is greater than zero. And we've got a little bit of filtering at the end to find dates that are less than the max creation date in the table, which I know looks a little bit silly. But I deal with a lot of reporting queries that hit date dimension tables and calendar tables

that do things like this and end up with weird execution plans. And then of course we have to group by, because we are getting a count, so we have to aggregate this expression as well. But anyway, our queries finished. It took about 41 seconds; my guess was between 40 and 45, so I was close on the low end. That's a nice change of pace for me. If we look at the execution plans, we have the parallel plan up top, which takes 21.525 seconds, and the serial plan down at the bottom, which takes 20.170 seconds. So the parallel plan takes about a second longer, for some reason. I'm not going to invest too much time in figuring out where that extra second goes. But what I do want to point out is that the plans have the same basic shape, right?
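As a rough sketch, a view like the one described might look something like this. The table is the StackOverflow Posts table, but the column names and exact join predicate are my guesses from the narration, not the actual view definition:

```sql
CREATE OR ALTER VIEW dbo.parallel_but_serial
AS
SELECT
    YEAR(p.CreationDate)  AS post_year,   -- flatten dates into years...
    MONTH(p.CreationDate) AS post_month,  -- ...and months
    COUNT_BIG(*)          AS records      -- the count we aggregate
FROM dbo.Posts AS p
JOIN dbo.Posts AS a                       -- self join to find accepted answers
    ON a.Id = p.AcceptedAnswerId
CROSS JOIN
(
    SELECT MAX(mp.CreationDate) AS max_date  -- the "weird" third join:
    FROM dbo.Posts AS mp                     -- max creation date of all
    WHERE mp.Score > 0                       -- posts with a score > 0
) AS m
WHERE p.CreationDate < m.max_date         -- the silly-looking filter
GROUP BY
    YEAR(p.CreationDate),
    MONTH(p.CreationDate);
```

The cross join to a one-row aggregate is the part that tends to produce the odd plan shapes discussed below.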

They both start with an index seek into Posts down here. We have our index scan of Posts that goes into a loop join to seek into Posts, which is the same thing that happens here. We have a key lookup, we have a key lookup, and we have another nested loops join back to Posts again. The only things that are really different are some of the Repartition Streams operators that come along with a parallel plan. Now, this is an important thing that you need to check when you have a parallel plan and you're thinking, wow, this parallel plan is running a whole lot more slowly than I think it should; what the heck is wrong with this parallel plan? One thing that you want to check and make sure is not happening to you is skewed parallelism. So if we right-click on this nested loops join and look at the actual number of rows, we'll see that absolutely all of the rows in this query end up on a single thread. And this will continue throughout the query.

If we click on different operators, all of the rows will be on one thread going throughout. Clearly the worst of it is in here, where we're dealing with a lot of rows. The story does get better as rows get filtered out, but it doesn't get any better as far as row distribution goes: even after the number of rows gets narrowed down, they're still all on one single thread.

Now, the way that SQL Server assigns pages to parallel threads in a query is via a mechanism called the parallel page supplier, which uses a sort of hashing function, something like modulus DOP, so modulus eight for this query, since we ran it at MAXDOP 8. It starts assigning pages using that, saying: this row goes to this thread, this row goes to this thread, this row goes to that thread. But in this case, we get incredibly unlucky, and all our rows end up on a single thread.

Now, there are various ways to solve this. Sometimes forcing loop joins can be a good way to solve it. Sometimes forcing different join types, like hash or merge joins, can solve it, especially when that changes the plan shape. But a lot of the time, using something like a temp table to pre-stage some of the data is useful. Maybe instead of cross joining to this, if we just stuck the max creation date into a temp table and then used that to filter things, that would have been more appropriate.
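A sketch of that temp table rewrite, again using assumed StackOverflow table and column names:

```sql
-- Pre-stage the scalar we need, so the aggregate
-- doesn't get wedged into the main query's plan.
SELECT MAX(p.CreationDate) AS max_date
INTO #max_date
FROM dbo.Posts AS p
WHERE p.Score > 0;

-- Then filter against the temp table instead of cross joining.
SELECT
    YEAR(p.CreationDate)  AS post_year,
    MONTH(p.CreationDate) AS post_month,
    COUNT_BIG(*)          AS records
FROM dbo.Posts AS p
JOIN dbo.Posts AS a
    ON a.Id = p.AcceptedAnswerId
WHERE p.CreationDate <
(
    SELECT md.max_date
    FROM #max_date AS md
)
GROUP BY
    YEAR(p.CreationDate),
    MONTH(p.CreationDate);
```

Materializing the aggregate first gives the optimizer a known, single-row source to filter against, which often produces a simpler plan shape.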

This is something that I actually ran across twice in the last... well, not this month, because it's the first of the month. I ran across this exact thing a couple of times last month, which would be April, for anyone counting. And the stuff that I just talked about was how I ended up solving the problems in those cases.

But yeah, that's just an unfortunate side effect of parallelism: not all row distributions are created equal across your parallel threads. There have been a couple of times in the past where using an odd DOP number like seven or three has helped with this a little bit. But in other cases, you do have to use different methods of solving the problem.
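Hedged sketches of the hints mentioned above. Both change the plan shape and how rows land on threads, so treat them as experiments rather than fixes to commit blindly; the view name matches this demo, but your query will differ:

```sql
-- An odd DOP sometimes shuffles rows onto threads more evenly.
SELECT pbs.*
FROM dbo.parallel_but_serial AS pbs
OPTION (MAXDOP 7);

-- Forcing a join type changes the plan shape, which can also
-- change how the parallel exchanges distribute rows.
SELECT pbs.*
FROM dbo.parallel_but_serial AS pbs
OPTION (HASH JOIN);
```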

Anyway, this is a short video because I just need today to be done with so I can start resting my brain for tomorrow’s recording. So I’m going to say goodnight here. I’m going to say thank you for watching.

I hope you learned something. I hope you enjoyed yourselves. And of course, a reminder to like and subscribe. Like this video and subscribe to my YouTube channel so that you can keep learning all sorts of fun and interesting things about SQL Server. Anyway, that’s about it.

Again, thank you for watching.

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. Blog readers get 25% off the Everything Bundle — over 100 hours of performance tuning content. Need hands-on help? I offer consulting engagements from targeted investigations to ongoing retainers. Want a quick sanity check before committing to a full engagement? Schedule a call — no commitment required.

A Little About Date Math In SQL Server Queries


Video Summary

In this video, I delve into the nuances of proper date math in SQL Server queries and demonstrate how it can significantly impact query performance. After a rather unconventional breakup with BeerGut magazine, I've found myself in an interesting new partnership with Scotch Sniffer magazine. While this venture might be more refined and elegant than my previous endeavors, it does come with its own set of challenges, like dealing with fancy Scotch sniffers who, despite their title, may or may not actually drink the stuff. Focusing on practical examples, I show how moving date math operations to the correct position in your queries can drastically improve performance by allowing SQL Server to utilize indexes more effectively and avoid unnecessary scans. This video is a reminder that writing well-structured queries isn't just about getting results; it's also about making sure those results come quickly and efficiently, leaving you with more time to enjoy a well-deserved glass of Scotch, or at least the virtual equivalent!

Full Transcript

Erik Darling here with Darling Data. After my prolonged, painful breakup with BeerGut magazine, I've been approached by Scotch Sniffer magazine about forming a partnership. It's going to be a little bit harder on my liver, but maybe a little bit more refined and elegant, just like my T-SQL. Finally, I get to work with a higher echelon of individual, not lowly beer drinkers; I'm going to hang out with a bunch of fancy, pinky-out Scotch sniffers. Actually, it's unclear to me if they actually drink the Scotch, or if they just sniff it and tell you what they think it smells like. All things I'm sure I'll learn during our conversations. So, this video, like the helpful ASCII text is telling you, is about where we do date math, and how doing date math in our where clauses, and occasionally in our join clauses, matters for performance. Right now (and that's a weird bit of formatting; I don't quite know what possessed me to do that, I think maybe I just got drunk and hit the wrong button) the only index that I have on the Posts table at the moment is on the Id column, with the OwnerUserId column included.

And what this query... well, I have two queries, one of them written the wrong way and one of them written the right way. In this join, I am doing date math the wrong way. Let's frame that nicely. There we go. Let's see, I can draw a pink box around my head, kind of. Ah, the things you can do with computers. So, in this one, we are doing date math on the column that we care about: we have v.CreationDate wedged in the middle of our dateDiff function.

And in this query, where we're doing things the right way, we are comparing the CreationDate column to a date math expression. So, I'm going to run both of these queries. We've got query plans turned on, so that's a good thing.
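A hedged sketch of the two patterns being compared. The tables are from the StackOverflow database, but the exact predicate is my invention; note that DATEDIFF counts calendar-year boundaries, so the two predicates are only approximately equivalent. The point is the shape, not the semantics:

```sql
-- Wrong way: the column is wedged inside DATEDIFF, so the
-- predicate is non-SARGable and must be evaluated per row.
SELECT COUNT_BIG(*) AS records
FROM dbo.Votes AS v
WHERE DATEDIFF(YEAR, v.CreationDate, GETDATE()) >= 5;

-- Right way: the bare column is compared to an expression that
-- SQL Server can evaluate once, so an index on CreationDate
-- could support a seek instead of a full scan.
SELECT COUNT_BIG(*) AS records
FROM dbo.Votes AS v
WHERE v.CreationDate <= DATEADD(YEAR, -5, GETDATE());
```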

And the reason why this makes a big difference, of course, is that when SQL Server has to perform date math on a column directly, it has to do a lot more work than if it can do the date math on an expression and apply that to the data in our table. We don't even have a useful index on the Votes table right now to find data in the CreationDate column.

But even without that, we are able to make our query go nearly twice as fast. Well, actually, a little better than that: we shave about two seconds off the whole thing.

We go from 3.6 seconds when we write our query the wrong way to about 1.4 seconds when we write it the correct way. So there's about a 2.2 second difference just from making that change: pushing CreationDate out of the dateDiff function and comparing the CreationDate column to the expression instead.

Now, if we go back and look at the query plans: when you use non-sargable expressions, like when you put a column in a function and compare it to something, SQL Server doesn't even bother to tell you that an index might help, because you're going to have to scan that entire index anyway. SQL Server's like, well, what's the difference?

I have to do the same amount of work either way, so I'm not even going to bother with that. It's not a cleanly written comparison, so you don't really get anything useful out of an index there.

Like, you might have a smaller index to scan, but it’s not going to be like a seek or anything. You’re not going to be able to seek to the data that you care about. So, you know, if you go look at this, uh, we have this predicate here where we have all sorts of stuff going on.

We compare... ooh, look at that. That's a fun one. We have a CONVERT_IMPLICIT to datetimeoffset(3).

That's fun, right? So that's going to contribute a bit to our woes. And then... there's a helicopter going by.

I've been told by the ever-vigilant Randolph West that the New York City noises in my videos are part of the charm. I wonder if part of the charm is also the fact that I'm standing in front of the thing I want to show you.

So, let's reframe that a little bit. Let's move that over. Actually, let's bring up the tooltip first, and let's get that where we want it, where we can mostly see it. Good enough.

Nope. Stay. Stay. Good boy. Sit, Ubu, sit. Is anyone old enough to remember that reference? I owe you an Adidas t-shirt and a high five. So, in this one, even though we scan the same index in the same way, the predicate we apply doesn't have any implicit conversions. We are able to get a bit more performance out of our query just by moving the date math a little bit.

And, of course, SQL Server in this one helpfully suggests an index on CreationDate that includes PostId. Right there.

Up in the pink. Oh, fragile green screen. But that is sort of secondary. I mean, anyone who's been working with SQL Server, or databases in general, for any amount of time could probably figure out that an index on a where clause column is a helpful thing.

Remember: when we put data in order, it gets really easy to find. The only thing that I would probably disagree with, and another big reason why I tend to dissuade folks from listening to missing index requests, is that if we go back to the missing index request, SQL Server only wants to include PostId in the index.

It does not think that our join column would be useful as a key column, which is, in most cases, a fair bit of malarkey. So, I'm going to be honest with you: those missing index requests are not terribly valuable things.
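Here is what that disagreement might look like as actual index definitions, with names I made up:

```sql
-- What the missing index request asks for:
-- the join column demoted to an include.
CREATE INDEX v_creation_date_include_post_id
ON dbo.Votes (CreationDate)
INCLUDE (PostId);

-- What usually serves both the where clause and the join better:
-- the join column promoted to a key column.
CREATE INDEX v_creation_date_post_id
ON dbo.Votes (CreationDate, PostId);
```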

Not terribly valuable at all. Kind of, kind of dislike them. Um, used to like them a lot. Now I hate them.

Anyway, they're okay if you're looking at the query itself, and if you're looking at the actual execution plan and the operator, in this case the clustered index scan, is the longest running thing in the query. And you might say, oh, well, it might be useful.

Might not. You know, uh, shaving 863 milliseconds off here. Not, not sure that that’s going to be, you know, the, the big winner.

Uh, you know, it might help a little bit more. But, uh, you know, this one up here where it’s, you know, three seconds of execution time, that might be a little bit more compelling. But, you know, um, also, if you write your queries correctly, uh, they tend to run faster anyway.

So, this is about all I had to say about this one. This is me urging you to write date math in your queries correctly and improve performance. And, I don't know, maybe you'll get to see more of your friends or family, or just get to do the things you enjoy, rather than sitting around scratching your head about why your SQL Server queries are all slow when you have poorly written date math in queries like this.

So, I don’t know. You’re welcome. Um, you’re welcome in advance, I guess.

For all that. Uh, I’m going to, I don’t know. Maybe I’m going to go do something I enjoy now. Maybe I’m going to go sniff some scotch.

Sounds like a pretty decent idea. It's 6:28 p.m. here at the moment, and I'm sitting here recording videos instead of smelling delicious Scotch. That seems like a bad choice.

Maybe you shouldn’t listen to me about anything. Maybe I’m wrong. Maybe, maybe I’ve got my life entirely wrong. Oh, anyway. I’m going to kind of go re-evaluate some things. Thank you for watching.

Please like, please subscribe. It’s the thumbs up and the bell. Uh, if you don’t, I’ll just have to sniff scotch even harder. I might sniff it so hard that it goes right up my nose.

And who knows what will happen then. Get scotch in my brain or something. Anyway, uh, thank you for watching. I hope you learned something.

Hope you enjoyed yourselves. Uh, and I hope that you start writing date math correctly in your where clauses so that you don’t end up with poorly performing queries. Thank you for watching.


Formatting, Debugging, and Troubleshooting Performance Of sp_QuickieStore


Video Summary

In this video, I delve into some useful features of sp_QuickieStore, a handy tool for extracting data from SQL Server's Query Store DMVs and exporting it to formats like Excel. I highlight how sp_QuickieStore can help you format output in a more readable way by adding commas to large numbers, making it easier to understand the scale of your query statistics. Additionally, I explain how to use parameters like `debug` and `troubleshoot_performance`, which provide detailed information about parameter values, temporary tables used, and execution plans for each query run through sp_QuickieStore. These features are invaluable for debugging issues or troubleshooting performance problems when working with Query Store data.

Full Transcript

Erik Darling back again, still with Darling Data, at least for the time being. Who knows? Maybe I'll be with MongoDB in six months. See what happens. Just kidding. I've heard it's not a real database, despite all their best efforts. Just kidding. My dear friend Joe Sack works at MongoDB, and I would never besmirch the product he works on, because I did enough of that when he worked at Microsoft on SQL Server. So in this video, what I would like to talk about is some different ways you can use sp_QuickieStore if you need to put the data into something like Excel, or if you run into problems with sp_QuickieStore itself. Again, all of this stuff is detailed under the help parameter. If you scroll down a little bit, we're going to be dealing with things toward the end, around formatting output, debugging issues, and troubleshooting performance. So, what you get if you run sp_QuickieStore by default, without any interventions whatsoever, is something startling.

That's just the real deal, if you want to see something like that. Now, if the people who create and work on SQL Server Management Studio cared about you in the least, they would put commas into numbers,

because when you put commas in numbers, you can very easily judge their scale when they get really large. And that can be pretty important if you’re the type of person who deals with big problems. So let’s scroll over to the right a little bit, and let me start showing you some numbers with commas in them.

Starting here with wait stats, we can see very easily that there were 186,915 milliseconds of parallelism waits for this query. If this were all smushed together, it might be a little bit harder. This happens whenever a number breaks the thousand mark.

You can see all that in here. Where things are in the thousands, we have a comma, so we can see that very easily. Where things are under the thousand mark, we don’t need a comma, because there’s no such thing as 3,49 unless you’re dealing with weird currency issues. But we are not. We don’t talk about query cost because query cost is a meme, and query cost should be ignored to the fullest extent of the law.

So, scrolling over a little bit further, we have lovely, lovely commas in all of these columns, so that we can see exactly what kind of numbers we’re dealing with very easily. Right over here, this logical read, look at this, 1.1 million logical read, very easy to tell what that is. Now, I’m going to tell you something about milliseconds.

It’s going to bake your noodle. If you chop off the last three digits, that’s how many seconds it was. Pretty impressive, huh?

Average duration, 24 seconds. Total duration, 48 seconds. Because there were two executions. Pretty sweet. So, while we wait for the lovely, hardworking, intelligent people who keep SSMS afloat for us to put commas into numbers, SP Quickie Store is out there in the world doing it for you.

You’re welcome. All of you are welcome. Now, let’s say you run into a problem with SP Quickie Store. Let’s say you run into some issue executing it, and you get an error, and you want to figure out what’s going on.

A great way to do that is to use the debug parameter. So, if we use the debug parameter, up front, we’re going to get a bunch of stuff. We are going to get all of the parameter values that were passed in to the stored procedure.

All right, it’s going to show us what the starting value was for every parameter. It’s going to show us what the declared variables were set to inside of the stored procedure, which can be very helpful for figuring out if anything got set incorrectly along the way.
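If I have the parameter name right (sp_QuickieStore's `@help = 1` output lists them all, so verify against your installed version), a debug run looks like this; the database name is a stand-in:

```sql
EXEC dbo.sp_QuickieStore
    @database_name = N'StackOverflow',
    @debug = 1; /* returns parameter values, internal variables,
                   and temp table contents, and prints every
                   dynamic SQL statement to the Messages tab */
```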

All right, lots of good stuff in here. Lots of helpful, useful things in here. And then if we go down a little bit further, we're going to get the contents of all of the temporary tables that were used to filter, join, and get data out of our Query Store DMVs.

So, this distinct plans table is the one that drives most of what we pick up out of Query Store. I use a lot of temp tables in this stored procedure because I found that querying the Query Store DMVs directly led to a lot of performance issues.

I can still run into performance issues querying them in this way, but I run into far fewer of them. And I’m going to tell you something here. There is almost nothing more embarrassing than talking a client into turning on Query Store because it’s nice, lightweight, almost no overhead.

You won’t even notice it. And then you open up Query Store, and like the first three queries you see in it, there’s Query Store querying itself to populate the Query Store GUI.

It’s real tough. It’s a real egg on face moment. So, if we scroll down, we’re going to see all of the temp tables that got used.

Maintenance plans is what I use to screen out, you know, non-query activity that can be harsh on a server. You know, create table, alter table, not really create table, but like alter table, index maintenance, stats maintenance, stuff like that. I use that to filter out stuff here.

I use this to figure out what the Query Store options are for a database. This is sort of the raw query store data or some of the raw data that I output and format in different ways and other result sets. You know, just sort of like how each temp table was populated along the way.

So, there’s lots of useful stuff in here to figure out. Like, wait, well, this should have been in there. Why wasn’t it in there? You can kind of track down and figure out why things didn’t end up where they should have or why you’re seeing what you’re seeing in the results.

That's the first part of it. The second part of it, over in the Messages tab, prints out every single query that gets run by dynamic SQL. It'll print it out here.

Before each one of these queries is a number. This number signifies the number of characters in the query. So, if you see something that is cut off, or some dynamic SQL that did not concatenate cleanly...

Perhaps there was some implicit conversion that happened that caused the string to truncate. This is a good way to troubleshoot some of that stuff. There isn't a lot of terribly long dynamic SQL in sp_QuickieStore.

But there was enough for me to care about exactly what was going on in here. So this one is around 2,800 characters, and this is the query that got executed.

And this is all the stuff that it did. This can also be very useful because if you hit an error, it will tell you which query threw the error. Then you can rerun that query to see if you can reproduce it, and figure out exactly what's going on.

That’s what I do with a lot of this stuff when I’m working with, when I’m trying to debug things. A lot of this was in here just during development just to make things easy. But I kept the debug thing in because, you know, if you run into anything while you’re using it, you know, I want to make it easy to troubleshoot for you.

So if you go ahead and open up an issue for me on GitHub, you can tell me exactly where the problem is and what you ran into, and I can try to help you fix all of that stuff. Or you can help me fix all of the stuff that I need to fix in my query.

So, good stuff there. Because of the way I print things out, some of the strings are quite long and they don't print in one piece. PRINT is a little more forgiving than RAISERROR, which cuts strings off sooner. So you might see some points where the query text is not exactly the way it should be. That's not me being bad at my job; that's just me not having any control over how PRINT sticks things into the Messages tab.

So, excuse me. That’s a good way to troubleshoot if you run into any issues with SP Quickie Store. If you start running into performance issues, you can use the troubleshoot performance parameter.

And this will do some interesting stuff. So if we say troubleshoot performance, the first set of output is going to be every query that runs, that hits the query store DMVs, and the actual execution plan for that query.
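Again assuming the parameter name from the `@help` output (verify on your version), a performance troubleshooting run looks like this; the database name is a placeholder:

```sql
EXEC dbo.sp_QuickieStore
    @database_name = N'StackOverflow',
    @troubleshoot_performance = 1; /* returns the actual execution
                                      plan and timing for each internal
                                      query that hits the Query Store DMVs */
```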

So that’s all this section in here. I’m going to talk about what’s in there in a second. But you can see there’s a whole bunch of queries in here. This thing finished in zero seconds because, you know, I can use temp tables and the GUI can’t, I guess.

But anyway, what comes out of here is kind of cool. So there’s a pair for just about every line here.

We’ll get a query plan back that’ll show us exactly what the query was, what the execution plan was. I mean, this all finished in zero everything seconds.

So we don't need to worry about this one. So we have that, which is cool. But then, for every query that ran and produced a plan, we'll also have this information, which is very, very useful.

We have how long it took. We have what the current activity is, so we can Ctrl+F for it in the script. And then we have the query that executed, so that we can look at it and say, hmm, maybe I could have done better here.

I don’t know. Maybe I could, maybe I couldn’t. I just don’t know. It’s impossible to tell. Pretty sweet. Huh? All right. So, and you’ll get that for every query that executes.

And, you know, you’ll see, again, the query information about how long it ran for, how long the dynamic SQL was. So you’ll see some of this, some of this information is a little repetitive, but it is all very useful when trying to figure out if there’s a performance issue with one of these queries that is fixable.

There are a lot of times, querying these DMVs, especially on very busy, very active servers, or servers with a lot of Query Store data in them, where this thing can slow down a little bit, because there are just some things that do not perform well on those types of servers.

Specifically, hitting some of these table valued functions that are sort of masked by the dynamic management views.

A lot of the time, when trying to get stuff out of there, you can't just filter it directly. You can't push predicates into some of these table valued functions, so you end up scanning a whole lot of rows and then filtering stuff out later.

That's what this kind of combo is showing you, and it gives you an idea of the kind of stuff that gets filtered in here. Sometimes it's not even a predicate that I pass in. Sometimes it's just something like this, where I didn't ask for that.

I didn't ask for this; it's just part of the built-in view definitions or the table valued function definition. So, all good stuff to be aware of, all good stuff to help you if you run into performance issues while using sp_QuickieStore to query the Query Store DMVs.

If you find anything in here that you think, you know, you can, you can show me and it’s something that is fixable by me. I’d love to look, but a lot of the times it is background stuff that I just can’t do much about.

Anyway, I think that's probably good for this one. Again, formatting output... you know what? I don't think I ever talked about what happens if you set format output to zero: it takes the commas out. So if you need to paste this into Excel or something, you can turn format output off and get the numbers without commas.
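If the parameter is named the way I remember (check `@help` to be sure), that looks like this, with a placeholder database name:

```sql
EXEC dbo.sp_QuickieStore
    @database_name = N'StackOverflow',
    @format_output = 0; /* raw numbers, no commas:
                           easier to paste into Excel */
```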

So yeah, there was that. I got so excited about debugging. I forgot to tell you that. So anyway, that’s my video. I do, I do hope you enjoyed yourselves.

I do hope that you learned something. I do hope that you will like and subscribe to my cough-free channel. Not, not throat-clearing-free channel because there’s still pollen in the air and my face feels like a brick, but that’s okay.

Do it all for you anyway. Thank you for watching. Take care of yourselves or else, or else you won’t be around to watch more videos from me.

What would be the point of life if you were dead? Thank you.


Getting More Details About Queries With sp_QuickieStore



Video Summary

In this video, I delve into the expert mode feature of sp_QuickieStore, a tool that provides detailed insights into query performance in SQL Server. By enabling expert mode, you get access to numerous result sets that are typically omitted from the default output due to potential information overload. These results include memory grant feedback and compilation stats, which can be invaluable when troubleshooting complex queries or optimizing performance. I also discuss the new DMVs introduced in SQL Server 2022, such as query store hints and the query variant view, offering a deeper understanding of how these features can aid in diagnosing and resolving issues related to query optimization and parameter sniffing.

Full Transcript

Erik Darling here with the wondrous, illustrious, eutiferous Darling Data. Everything but cruciferous. We do not aid in digestion. We are not a fibrous vegetable in the least. We’re going to talk today, continuing in the same vein as some of the other videos that we’ve recorded recently, or that I’ve recorded recently. There is no “we” in Darling Data. It’s just me alone doing Darling Data things. We’re going to talk about how you can get more detail about the queries that come back in the default output for sp_QuickieStore. So what we have is a parameter called expert mode. And what expert mode will do is return a whole bunch of result sets to you that it would not have made sense to put in the default output, because it would have been information overload. And this is really only stuff that you want to dig into when you start dealing with other problems, deeper problems with queries. If I put this in the default output, it would mostly go unnoticed. You would have a scroll bar with a slider like this big and you would scroll for a country mile till you lose your hand in a green screen and it wouldn’t be enjoyable for anyone. I promise. It would not be enjoyable in the least. All right. So when we use this parameter and we say expert mode, please, I’m going to get back some results that you might not get back, because some of these results only show up in SQL Server 2022. SQL Server 2022 had a bunch of new DMVs (dynamic management views, whatever you want to call them) added to the query store family of views that help you figure out other stuff. So we have the default output up here. All right. Down here I show if there is any feedback happening via various intelligent query processing features, passed on to the optimizer from query store.
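The call being described might look something like this; a minimal sketch, assuming the `@expert_mode` parameter name from the procedure’s help output and a hypothetical database name:

```sql
/* Expert mode adds the extra result sets discussed in this video:
   feedback, query store hints, variants, compile stats, wait stats. */
EXEC dbo.sp_QuickieStore
    @database_name = N'StackOverflow2013',
    @expert_mode = 1;
```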
So in this one that right now we only have some memory grant feedback, so SQL Server is adjusting the size of memory grants between runs, we can get some information about the feedback data here, which is kind of nice to have if you want to dig into exactly what the feedback is telling the optimizer to do. We see here if the feedback is good or bad, right? It could say feedback valid, it could say feedback invalid.

And then we have the created and last update times. Now there are a couple DMVs that are included in the results that right now I don’t have any data for. So there’s the query store hints DMV. So if you apply any hints to queries via query store, which is a new feature in 2022, pretty cool. Actually, I think I love this feature because I can finally fix Entity Framework queries. And then there’s also the query store query variant view. And if you have queries that are getting parameter sensitive plan optimization feedback, those results would show up in here. Down a little bit lower are results that you will see in most any version of SQL Server, as long as it has Query Store in it.

And that is going to be stuff around compilation stats. So compilation stats is going to tell you about everything that happened during query compilation. How many times it compiled, how long the compilation took, memory bind and all this other crazy stuff.

You can kind of understand why this is not part of the default output, but can be kind of important stuff to look at if you’re dealing with, say, long compile times, other stuff like that. Now, this is one of those bum out things that I’ve talked about in other videos. Query Store does not store memory grant information the same way that the plan cache does.

Query Store only tells you how much memory a query actually consumed when it ran. It does not tell you what the full memory grant was. So there’s no way for you to say, oh, you asked for 30 gigs of memory, but you only used like 2 megs of memory.

So maybe we ought to fix something in there. There’s perhaps some poor cardinality estimation happening in there. Perhaps some parameter sniffing issue.
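The gap being described is visible in the query store catalog views themselves. A hedged sketch: `sys.query_store_runtime_stats` exposes `avg_query_max_used_memory` (reported in 8 KB pages per the documentation), but there is no requested/granted counterpart to compare it against:

```sql
/* Query store records memory *used*, not granted: there is an
   avg_query_max_used_memory column, but no "grant requested" column,
   which is the limitation described above. */
SELECT TOP (10)
    rs.plan_id,
    rs.avg_query_max_used_memory * 8. AS avg_used_memory_kb
FROM sys.query_store_runtime_stats AS rs
ORDER BY rs.avg_query_max_used_memory DESC;
```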

Who knows? Could be anything. So for this, unfortunately, there’s nothing in my plan cache. So I can’t tell you all these other juicy details because the plan cache is a horrible place. It’s volatile.

It clears frequently. I don’t know what happened to it. I didn’t change anything. My plan cache was gone. Real sad story. There’s some other stuff which I actually think is pretty cool, which is getting wait stats back for specific queries. So this section in here will tell you, per plan ID, what the top waits were.

So in here we have a lot of parallelism and CPU waits. That’s probably going to be the story across most of these things. Now, one thing that I do want to point out, and I’m going to probably skip over this because this is just the total waits for all queries.

And then the next one down is your current query store settings. So this will tell you, however you configure query store, all the configuration options will be in here. So one thing I want to point out is that query store wait stats, the stuff you see here, is aggregated.

Like, there’s a wait category called parallelism. We don’t know if it was CXCONSUMER, CXPACKET, CXSYNC_PORT, CX whatever. We don’t know what it was in there.

We only have a parallelism grouping. So if we go to the help section, down at the bottom, there is a little decoder ring for which waits are included in each one of those categories. I may need to update this a little bit for 2022.

I think I checked in on that the last time I did an update on this. But now that I’m looking at it, I’m going to check it out again to make sure I did my work right. So in here is where you’ll see the types of waits that are included in each one of those.

So if you have a question or you’re unsure about what waits might be involved when you see them show up in the wait section, then this is the place to look, because this will tell you which waits are included in all of the roll-ups. So that’s just a quick way to get more detail about the queries that you find in the top 10 in Query Store.
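The aggregation being described comes straight from the catalog view. A sketch of what the raw data looks like, using `sys.query_store_wait_stats`, which only exposes a category (e.g. “Parallelism”), never the individual wait types that rolled up into it:

```sql
/* Per-plan wait categories: the view gives you the bucket name,
   not the underlying wait types (CXPACKET, CXCONSUMER, ...). */
SELECT TOP (10)
    ws.plan_id,
    ws.wait_category_desc,
    SUM(ws.total_query_wait_time_ms) AS total_wait_ms
FROM sys.query_store_wait_stats AS ws
GROUP BY
    ws.plan_id,
    ws.wait_category_desc
ORDER BY total_wait_ms DESC;
```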

It can be very useful for troubleshooting deeper issues with a query, getting more information back about queries, especially when the plan cache is being a useless sack of potatoes. So there you go.

Anyway, good video. Good video all around. Didn’t cough once. It’s been zero days since I didn’t cough in a video. And I guess, you know what? I’m going to record one more on Query Store.

I just have to decide what it’s going to be on. It’s going to be a surprise to everyone, myself included. If I could tell the future, I would tell you what it was going to be about. But I had to say something, didn’t I?

Anyway, like, subscribe, send me cough drops, something. Seasonal allergies. It’s a terrible time of year here in New York.

Anyway, that’s enough of me. I’m going to go figure out what to do next. And I’ll record that video and upload that video. And you’ll watch that video and you’ll like that video and you’ll subscribe to my channel. Because even if you mute me and you just want to like turn this into a weird OnlyFans thing, I don’t mind.

I don’t mind. That’s how much I love you. I don’t mind what you turn this experience into.

All right. Cool. Thanks for watching. Hope you learned something. Hope you enjoyed yourselves. I hope my allergies go away.

I hope for many things. I hope someone signs a contract for a million dollars. You know, just manifest things into our lives. All right.

See you next time.


Changing The Default Output Of sp_QuickieStore



Video Summary

In this video, I delve into the customizable output of sp_QuickieStore, a powerful tool for SQL Server database professionals. I demonstrate how to leverage the `help` parameter to uncover detailed information about the stored procedure’s capabilities and parameters, making it easier to tailor your queries according to specific needs. Additionally, I explore ways to modify the default query results by adjusting the number of returned queries with the `top` parameter and changing the sort order using various metrics like executions, memory usage, and duration. By sharing these tips, I aim to help you more effectively analyze and optimize your SQL Server performance.

Full Transcript

Erik Darling here, all by myself. We’re going to talk a little bit about how you can change some of the default output of sp_QuickieStore. Along the way, I’m sure that we’ll see lots of fabulous, fantastic things. So the first thing that I want to show you is how you, as a data professional, can learn the full capabilities of sp_QuickieStore. And most of it is going to be by using the help parameter. I know that a lot of us in the tech world have trouble asking for help, unless it’s Google. But I put this in here for a reason. It’s because we hold these truths to be self-documenting. Just like PowerShell code is self-documenting. And if you hit the help parameter (those P’s are really going to tell me how far off the audio sync is on this thing), you’ll get a little bit of information, a little greeting up here. Tells you a little bit about what Quickie Store is, what it can do. And then the second output is all of the parameters that you can put into sp_QuickieStore: the data type of the parameters, a description, valid inputs, and what the default value for them all is. So if there’s stuff that you want to filter on specifically, you can do that. We’re going to use some of them today. We’re not going to use all of them today. Some of them will be in a different video. Specifically, I think the ones about looking for a procedure and specific plan IDs and stuff will be the next video. But this one here is just a little bit about how you can change the default output. So obviously, passing in a database name is a pretty critical step, since query store is enabled database by database. I recently added a get all databases parameter to sp_QuickieStore. So if you want to scavenge all of your query store databases, you can do that. So the first thing I’m going to show you is how you can change the number of queries that get sent back to you.
And that is by using the top parameter. By default, top will just give you the top 10. I try to keep it to a reasonable number, because often I find that if you start bringing back 20, 30, 40, 50 lines, the further down the results are, the less valuable they get. It’s a little bit overwhelming. There are only so many stored procedures you can tune at a time.

Anyway, if you want to get more back (and this can actually be helpful if you have longer stored procedures, where if you get the top 10 back and you’re like, well, there are like 30 queries in the stored procedure, looking at the top 10 is good, but we kind of want to see all of them), you can get a more expansive result set for that. But I’ll show you about this stored procedure thing in the next video. So we get the top 10, or sorry, we changed top to 20, so we will now get 20 results back. So I’m going to go back here and you can see that I’ve really been pounding on kind of the same query over and over again. Another thing that you can change is the sort order. Now I’m going to go back to the help parameter real quick, because I want to paste in and show you all of the different sort orders that you can actually use when you’re searching through query store.

So you can use CPU, logical reads, physical reads, writes, duration, memory, tempDB, and executions. These will all sort by the average, not the total. Only because what I find in my day to day consulting is that if you sort by totals, you tend to get stuff that executes a lot, but doesn’t take a very long time to run. And usually I want to find the stuff that does the worst in the average execution. You might find some outliers where they had like one thing where it did like a billion reads or something or used a billion CPU ticks.

And then the rest of the time it’s pretty quick. But, you know, these are good things to sort of figure out. So these are all the possible sort orders you can use. Again, that’s all available. If you use the help parameter, you can see all the valid inputs for everything.
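The two knobs just described can be sketched like this; parameter names are as they appear in the help output, and the database name is hypothetical:

```sql
/* Bring back 20 rows instead of the default 10, sorted by
   average executions instead of the default average CPU. */
EXEC dbo.sp_QuickieStore
    @database_name = N'StackOverflow2013',
    @top = 20,
    @sort_order = 'executions';
```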

But this time we’re just going to use executions as an example. Remember, by default, it uses average CPU. And if we look by executions, this might actually prove my point a little bit where we have a lot of executions for some of these things. Well, not even like a lot for what we’re looking at, but the average duration and the average CPU time is pretty high.

What I find the executions sort order is good at showing you is stuff like scalar UDFs, because scalar UDFs don’t execute once per query unless they’re inlined (2019, 2022, Froid, fun stuff). If they’re inlined, then they technically don’t execute as often.

But if they are not inlined, or not inlineable, then scalar UDFs will execute once per row. You know, you put one in a select list with, say, a TOP 10,000, and that UDF is going to have to execute 10,000 times. If you put it in a where clause or something, and let’s say your table has a million rows in it and all million rows come out, that UDF is going to have to run a million times, once per row, produce a result, and then have the predicate applied to it.
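A made-up scalar UDF illustrates the per-row behavior; table, column, and function names here are hypothetical (loosely Stack Overflow-shaped), and this is a sketch of the problem pattern, not anything from the video:

```sql
/* With a million qualifying rows, this function body runs a million
   times before the predicate is applied, unless Froid inlines it
   on SQL Server 2019+. */
CREATE OR ALTER FUNCTION dbo.CommentCount (@UserId integer)
RETURNS bigint
AS
BEGIN
    RETURN
    (
        SELECT COUNT_BIG(*)
        FROM dbo.Comments AS c
        WHERE c.UserId = @UserId
    );
END;
GO

/* Executes once per row in dbo.Users, then > 100 is applied. */
SELECT u.DisplayName
FROM dbo.Users AS u
WHERE dbo.CommentCount(u.Id) > 100;
```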

So those can really rack up executions very quickly, and that’s usually what I use the executions sort order for. Probably the other sort order that I use the most day-to-day is memory, because I think I said in the last video that finding queries that use a lot of memory is often a good way to find queries that are ripe for tuning.

Usually you’ll find like a big sort or something in there, and you can, you know, index your data more appropriately to make the sort not require memory. You know, index order, match sort order, sometimes you can fix that.

So that’s another good thing to look at. Another thing that actually has been kind of cropping up a bit in consulting work is if you have a query that has a lot of derived tables, joins, subqueries, other things like that, sometimes you’ll find that SQL Server has a cap for the memory grant, right?

Typically, unless you have resource governor enabled or you’re using the min or max grant percent hints, SQL Server has a hard cap of like 20 to 25% of your max server memory setting for a single query. And so if your query has a lot of subqueries in it, whether it’s multiple CTEs, multiple derived tables, joins, subqueries, stuff like that (things where you could break up those parts of the query), you can often get the memory grant as a whole to be smaller, because you have a bunch of small queries asking for memory grants rather than contributing to one giant memory grant in one big query.

I’ve always found myself quite against big monolithic queries, but this is another good reason to avoid them: if query operators in a big plan are unable to share memory from one operator to another (sometimes that happens if you don’t have any stopping points between them, like sorts, hashes, other things like that), then queries will ask for very, very large memory grants to do things where, if you break them up, you can get the memory grant reduced quite a bit.
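A minimal sketch of the break-it-up idea: stage intermediate results in a temp table so each statement asks for its own, smaller grant instead of everything contributing to one giant grant. All table and column names below are hypothetical, not from the video:

```sql
/* Step 1: a small aggregate with its own (smaller) memory needs. */
SELECT p.OwnerUserId, COUNT_BIG(*) AS post_count
INTO #posts_per_user
FROM dbo.Posts AS p
GROUP BY p.OwnerUserId;

/* Step 2: the final sort only has to process the reduced set. */
SELECT u.DisplayName, ppu.post_count
FROM dbo.Users AS u
JOIN #posts_per_user AS ppu
  ON ppu.OwnerUserId = u.Id
ORDER BY ppu.post_count DESC;
```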

So those are the words of wisdom I have for you. Thanks for watching. Another thing that you can do to limit or to change the default results is to use the start and end date parameters. That’s right down here.

So let’s say that you have some other monitoring that showed a CPU spike or, you know, getting a bunch of errors and other stuff on like a weekend and you want to just look at a weekend, then you can use the start and end date parameters to set caps on which portion of data you want.

Like I said in the last video, by default you get the last week of data, the last seven days. But if you want to change that to focus in on a particular day, a particular event, you can absolutely do that too, and just hope to whatever you have faith in that query store captured what you want for that.
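The date-bounded call might look like this; a sketch assuming the `@start_date` and `@end_date` parameter names from the help output, with made-up dates:

```sql
/* Zooming in on a single weekend instead of the
   default last seven days. */
EXEC dbo.sp_QuickieStore
    @database_name = N'StackOverflow2013',
    @start_date = '20240601',
    @end_date = '20240603';
```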

So if you wanted to see what I was up to last weekend, it was not a whole hell of a lot. This is all just background system queries. Apparently I wasn’t home much.

I wasn’t doing much last weekend. I think I was out and about quite a bit working on my tan, you know, my bar tan. And so I was not hammering SQL Server with all sorts of, you know, bad queries.

Anyway, that’s a few different ways that you can change the default output: the ordering, what we’re focusing on, and the default span of time that we’re analyzing when we’re looking at query store. That’s about it for this one.

Next video, we’re going to talk about filtering to specific procedures, hashes, plan IDs, query IDs, stuff like that. And so I hope you’ll join me there. Remember: like, subscribe, love me, please love me.

I mean, or just kind of, I mean, just like me enough to watch the video, I guess. That’s all I really need. View, views, I guess, right?

Big views, big view, big view energy. Anyway, thanks for watching. I’m going to get working on the scripts for the next one, and I’ll see you over there. Goodbye.

Hopefully not forever.


Filtering To Specific Queries Using sp_QuickieStore



Video Summary

In this video, I delve into the advanced customization options available in my free open-source tool, sp_QuickieStore, which enhances query store results for SQL Server tuning and troubleshooting. I walk you through various parameters that allow you to filter down to specific queries and execution plans based on criteria such as execution type (successful, failed, timed out), procedure schema, plan IDs, and query text. By demonstrating these features, I show how sp_QuickieStore can significantly streamline the process of identifying and analyzing problematic queries, especially in scenarios where traditional methods fall short.

Full Transcript

Oh boy. Hot on the heels of that last video. I think it’s maybe two minutes later. We’re going to look at how you can further customize some of the results that you get back from query store using my free open source tool. I mean, it’s free to everyone but me. I charge myself a very, very high consulting rate to use it. How you can use that to filter down to specific queries, plans, stuff like that. Now, again, we’re going to start off with the help parameter, because the help parameter will give you information about what all the parameters can do.
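Getting at that self-documentation is a one-parameter call; a sketch assuming the `@help` parameter name:

```sql
/* Returns a greeting, then a table of every parameter with its
   data type, description, valid inputs, and default value. */
EXEC dbo.sp_QuickieStore
    @help = 1;
```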

If I add parameters, change parameters, anything like that, this will get updated so you can stay on top of exactly what’s going on and what you might care about. But getting down a little bit further in here, there’s a whole section of parameters that can help you filter your results down to a specific set of things that you might care about. So if we look at the full list here, I’m going to walk you through these, and then I’ll show you some examples of them.

We have execution type, which allows you to filter on queries that were either successful or failed for some reason (there are different reasons why queries might fail). You can find ones that ran, or you can find ones that timed out, errored out, stuff like that. If you want to look for a specific stored procedure in query store, you can use the procedure schema and procedure name parameters.

Procedure schema will default to dbo. So if you have procedures in different schemas, you will need to provide that schema in order to successfully find them. If you want to include a list of plan IDs or query IDs, or just a single one, I’ll split any comma delimited value in here out to a list and pass that into a table for searching.

Likewise, you can also choose to ignore some plan and query IDs. By default, sp_QuickieStore will filter things out like creating or altering indexes, altering tables, and stats updates, whether they’re manual or whether they happen as part of a query, so that you don’t get some of that noise in there. Like, you don’t need to know that creating or rebuilding an index took an hour, because you can’t tune that.

If it’s useful for you to find, let me know. I can, you know, change the default on that. But otherwise, it’s like, what are you going to do with that with that information?

Like what? Also, there’s a section in here. So what would happen to me quite a bit in my consulting efforts is I would find queries from other sources where query plans were not necessarily part of the data collection. If you ever look at the blocked process report or the XML deadlock report, what you’ll see is that SQL handles for the queries involved will be logged in there along with the query text, but not the query plan, which is understandable, because it would be a concerted effort to include a query plan and all that.

And staging XML inside of XML sounds a bit nightmarish, even to me, a person who spends a lot of time in XML. So, yeah, I get that. I totally get that.

But a lot of the times, when you’re trying to figure out a blocking or deadlocking scenario, the query plan becomes really, really important, because the shape of the query plan (lookups, missing indexes, things like that) can really play into the reason why there was significant blocking or deadlocking. There is also a query text search parameter. So if you’re looking for, like, some Entity Framework query, or maybe something that was generated by dynamic SQL, something like that, you can search for the specific text.

On its own, if you don’t have wildcards on either end of your search string, sp_QuickieStore will add them in for you, so that we search the text for everything that you might find. But if you have weird spacing inside of the query (like if it’s select, carriage return, big tabs, other weird white space issues), you might need to add your own wildcards inside of the string to search for stuff. So let’s just demo some things real quick here.
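The text search just described might look like this; the search string below is hypothetical, and the `@query_text_search` parameter name is as it appears in the help output:

```sql
/* Leading/trailing wildcards get added for you; interior
   whitespace quirks may need your own % wildcards. */
EXEC dbo.sp_QuickieStore
    @database_name = N'StackOverflow2013',
    @query_text_search = N'VoteTypeId';
```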

Let’s run this. And we’ll find that there’s a stored procedure in here called VoteSniffing. Let’s… I’ll type it in a demo.

Screw it. Rules be damned. So let’s look for it. So since that is in the DBO schema, we don’t need to worry about supplying the procedure schema. If this were in a different schema, we would need to tell it.
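The procedure search being demoed might look like this; a sketch using the procedure name from the demo and the `@procedure_schema` / `@procedure_name` parameter names from the help output:

```sql
/* Searching for one stored procedure's queries.
   @procedure_schema defaults to dbo, so it's optional here. */
EXEC dbo.sp_QuickieStore
    @database_name = N'StackOverflow2013',
    @procedure_schema = N'dbo',
    @procedure_name = N'VoteSniffing';
```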

But since this is sniffing, I think I spelled that right. Since this is just in the DBO schema, we can run this and we can search for a specific procedure name. Now, one thing that I talked about in the first video was how the all plan IDs parameter can be really useful.

And that is because, if we use this, let’s just say that this was not a stored procedure. Let’s say that this was a regular old query (dynamic SQL, Entity Framework, something like that) where we had no idea how else to identify it. But we wanted to look for specific plans.

What we could do, stick that in there and we could get back all the plan IDs that we cared about from this list. Right. Pretty neat.

At least I think so. If you wanted to ignore any of this stuff, do any of that, we totally could. You can do the same thing with query IDs. Right. So if we come back up here, we grab query ID 312. We can say include query IDs.
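The plan and query ID filters being demoed can be sketched like this; the ID values are made up, and comma-delimited lists get split out into a table for searching, as described above:

```sql
/* Focusing on specific plans... */
EXEC dbo.sp_QuickieStore
    @database_name = N'StackOverflow2013',
    @include_plan_ids = '1, 5, 10';

/* ...or on a single query ID from the grid. */
EXEC dbo.sp_QuickieStore
    @database_name = N'StackOverflow2013',
    @include_query_ids = '312';
```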

And we can look for 312 and we can just focus in on anything that query ID 312 might have given us. Other things that can be useful. You know, if you want, like, let’s say that you had a bunch of query timeouts over the weekend.

Right. Let’s say at night, or some other time when you weren’t looking at things. Excuse me.

We could look at the execution type. And we could see that valid inputs are regular, aborted, and exception. I don’t know if I have any aborted or exception queries in here.
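That filter might look like this; a sketch assuming the `@execution_type_desc` parameter name and the valid inputs listed in the help output:

```sql
/* Looking for queries that didn't finish cleanly.
   Valid inputs: regular, aborted, exception. */
EXEC dbo.sp_QuickieStore
    @database_name = N'StackOverflow2013',
    @execution_type_desc = 'aborted';
```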

But let’s take a look. And let’s see. Do we have anything aborted?

I have no idea. It’s possible. It’s entirely possible. No, we do. Look at that. I have two aborted queries in here. I don’t think I have any queries that would have failed.

But, you know, that’s because my queries never fail. I only quit on them. It’s never the query’s fault. It’s just me running out of patience. So anyway, this is different ways you can narrow down.

You can narrow your search down to what you care about from the results that you see here. I’m going to soapbox a little bit and say this is one of the big reasons why I wrote this stored procedure: because right now, the query store GUI doesn’t give you a way to search through any of these things or filter down to any of these things.

You can track certain query IDs, but you have to find that query ID and right-click it and track it and all this other stuff. You can’t, like, manipulate what query store brings back in a meaningful way to search through things for any of this stuff. And this is all important stuff when you’re doing query tuning work, when you’re doing consulting work, where you’re walking into a server that you may have never seen before, where you’re walking into a server where you might have to be able to put together pieces of a situation really quickly.

You know, again, coming back to the blocked process report or the XML deadlock report: you have to find those queries, you have to find those plans, you have to get in there and do stuff really quickly. So this is all sort of invaluable stuff for me in my day-to-day consulting work, and that’s why I put it in here. You know, if the query store GUI were written, or designed rather, by someone who cares about humanity, we would be able to search and filter to these things a bit more easily.

But, alas, we do not have those options currently. I would love to film a video where I say sp_QuickieStore is no longer necessary because the query store GUI is completely up to snuff and offers you a commensurate experience. But for me, I still need all this stuff when I’m working on things in my day-to-day life.

So I made it easy for me. I made it easy for you. You know, again, please love me.

So anyway, that’s about the end of this one. You know, the usual rigmarole. Like, subscribe, join my email list, hire me to do work for you. No, whatever.

Or don’t. Maybe just go, maybe buy me a coffee someday so I can pour whiskey in it and have a good morning. Anyway, thanks for watching. Hope you learned something.

Hope you enjoyed yourselves. And I will see you in the next video where we will talk about something else entirely with sp_QuickieStore. All right. Cool.

Feel good about this one. I only coughed once. It’s almost a new record for me. All right. Goodbye. Goodbye.


An Introduction To sp_QuickieStore



Video Summary

In this video, I dive into sp_QuickieStore, a handy stored procedure that provides insights into query store without much setup. I walk through the default output, explaining how it surfaces poorly performing queries and offers detailed execution metrics over the past week. I highlight key columns like query ID, plan ID, and various performance metrics such as CPU usage, reads, writes, and physical reads. While the procedure is straightforward to use out of the box, I also touch on its limitations, particularly in terms of memory grants, which aren’t fully visible within query store. Throughout the video, I share my frustrations with the query store GUI, finding it cumbersome and sometimes counterintuitive, and I promise more detailed explorations of sp_QuickieStore’s capabilities in future videos.

Full Transcript

Erik Darling here with Darling Data, currently the only person with Darling Data. It’s a little lonely sometimes. Maybe I should talk to ChatGPT more, have some conversations, figure out what’s going on in the world around me, what’s in the news, headlines and all that, good stuff. So we finished talking with, well, we’ve talked about sp_HumanEvents about as much as I can talk about sp_HumanEvents without getting into my deep and profound frustrations with extended events. So we’re going to switch over to sp_QuickieStore so I can tell you about my deep frustrations with query store.

I don’t know. It’s not even query store. It’s the GUI. I think the GUI was designed by someone who hates people. That’s about it. Anyway, let’s talk about what you get from sp_QuickieStore right out of the box. So you don’t have to do much to get data back. We’ll talk about some things you can do to get data back a little bit differently in other videos.

But just off the bat, if you run sp_QuickieStore and you give it a database name that has query store enabled, you will get some very helpful results back. If you are already using the database that has query store enabled (like if I just change the context to StackOverflow2013 and run quickie store), it will check the database that we’re currently using and see if query store is enabled there. If not, you have to give it a database name. So let’s go on back to the master database and let’s hit F5 here and let’s talk a little bit about what you get back from the default results in sp_QuickieStore.
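The out-of-the-box call being described is just this; the database name is the one used in the demo:

```sql
/* Default output: top 10 queries from the last seven days,
   sorted by average CPU, for the database you name. */
EXEC dbo.sp_QuickieStore
    @database_name = N'StackOverflow2013';
```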

So I’m actually going to go a little bit backwards here. I’m going to start with the bottom results, because I just want to show you where you can get a little bit of information about what you got back. Mainly it’s this column right here. By default, sp_QuickieStore goes back seven days to look for poorly performing queries. I used to have the default set to 24 hours, but I found that the previous 24 hours would often not give me everything that I need.

There are some notes about support about how to get help using the procedure, how to debug and troubleshoot performance issues. And of course, version dates and all that good stuff, blah, blah, blah, blah, blah, happy times. Now let’s go talk about the top set of results, which by default will give you the top 10 queries that we found in query store over the past week ordered by average CPU.

Now, when I order by average CPU, I’m going to walk over here a little bit. Now, since I’m on SQL Server 2022, I have a lot of additional columns that you might not see in older versions of SQL Server. But when I go and look at what happens by average CPU, that’s this column right here, we find queries that used a lot of CPU, of course, on average.

These queries might not have very high execution counts. You might see a single execution. You might see, you know, just a low number of executions.

That’s sort of to be expected. There are different ways to filter the results that we’re going to talk about in another video. You can totally filter, like setting a lower bound for the number of executions for the queries you care about.

But generally what I find is the higher you set that number, the lower this average CPU gets, and the harder it is to find queries where tuning can make a meaningful difference. Now, the count executions thing can be a little bit misleading. If you have queries that aren’t parameterized, or queries that use a recompile hint, I believe, they’ll still show up in Query Store.
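The execution-count lower bound mentioned above is a parameter; the name here is as I recall it from the sp_QuickieStore documentation, so treat it as an assumption and check the @help output:

```sql
/* Only return queries executed at least 10 times in the window */
EXEC dbo.sp_QuickieStore
    @database_name   = N'StackOverflow2013',
    @execution_count = 10;
```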

But I think you’ll get a different plan entry for them each time they run. So you just might not have a completely accurate gauge of how many executions a query has by looking at this. Starting a little closer to home, you get the query ID.

So if you want to use the Query Store GUI to track a query, or if you want to use it to force a plan or something, you have the query ID column and the plan ID column here. If one query ID has multiple plans attached to it, you’ll get all of the plan IDs in this column. This comes in handy, again, something we’ll cover in another video, because you can filter down to just the query IDs or just the plan IDs that you care about using different parameters in here.
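The ID filtering I’m describing looks roughly like this; the parameter names follow the sp_QuickieStore documentation as I recall it, so verify with @help before relying on them:

```sql
/* Narrow results to specific query IDs (comma-separated list) */
EXEC dbo.sp_QuickieStore
    @database_name     = N'StackOverflow2013',
    @include_query_ids = '13977, 13978';

/* Or narrow to specific plan IDs */
EXEC dbo.sp_QuickieStore
    @database_name    = N'StackOverflow2013',
    @include_plan_ids = '740';
```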

This column will tell you if the query completed executing or if it failed for some reason. There are different outcomes for this here. If your query came from a stored procedure or if it came from an ad hoc source, like an application, any framework, something like that, then you will see either the object name that the query came from or ad hoc.

You also get back the query text as a handy clickable XML column. Alright, isn’t that lovely? Isn’t she pretty and pink? You’ll get the query plan back, alright, so you can get all that good information.

What’s the point of Query Store if we’re not getting query plans? You get the compatibility level that the query executed in, so you can figure out if you’re using an older or newer compatibility level. This is kind of good for tracking regressions if you change compatibility levels.

These columns are new to SQL Server 2022. If you have hints in Query Store, if Query Store is giving your query feedback, or if you have plan variants, meaning your query is trying to be enhanced by the parameter sensitive plan optimization feature in SQL Server 2022, then this will tell you if there are any variants of the plan that you need to be concerned about.

If you’re forcing the plan, you’ll get that here. You’ll get the top waits for the query execution here. This is 2017-plus only, and only if you have it turned on in Query Store.

Query Store by default will track wait stats in 2017 and up, but some folks turn it off if things get too noisy in there. Some other good columns in here to look at: first execution time, last execution time, so you know the window of time that this count executions comes from.
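Turning wait stats collection back on, if someone disabled it for noise, is a database-level Query Store setting on SQL Server 2017 and up; a sketch:

```sql
/* Enable per-query wait stats collection in Query Store (2017+) */
ALTER DATABASE StackOverflow2013
SET QUERY_STORE (WAIT_STATS_CAPTURE_MODE = ON);
```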

If your query is a frequently executing query, then we’ll track how many executions per second your query is responsible for. Again, stuff we’ll cover in other videos. I don’t like to make these too long.

Mainly because the nice camera I have attached up there, the one that makes me look so crystal clear and flawless, sometimes overheats and turns off on its own. So I have about 30 minutes before that happens.

So now we have some metrics about the query execution. We have average duration, total duration, CPU. We go into reads, we go into writes, we go into physical reads, we go into memory.

Now, one aggravation that I have with query store, excuse me, is that the plan cache tells you how much memory the query asked for and how much it used. Query store only tells you how much it used. It does not tell you what the total memory grant for the query was.

This kind of sucks because one thing that I love trying to find is queries that asked for a lot of memory and didn’t use much of that memory. So we can start figuring out why these queries think they need as much memory as they do. But we don’t get that in query store.
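The plan cache comparison I’m describing, grants requested versus grants used, is available from sys.dm_exec_query_stats on SQL Server 2016 and up; a rough sketch of the kind of query I mean (the thresholds are made up for illustration):

```sql
/* Queries that asked for big memory grants but used little of them */
SELECT TOP (10)
    qs.max_grant_kb,
    qs.max_used_grant_kb,
    st.text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE qs.max_grant_kb > 10240                       /* asked for > 10 MB */
  AND qs.max_used_grant_kb < qs.max_grant_kb / 10   /* used under 10%    */
ORDER BY qs.max_grant_kb DESC;
```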

There is a parameter for sp_QuickieStore that we’ll talk about in a later video, called expert mode, which will go to the plan cache and look for the SQL handles of queries to get some plan cache information that is not available in Query Store. But again, we’ll talk about that down the line. Some row count stuff, physical reads, tempdb usage, and context settings.

Context settings can be pretty useful because you might find that your applications do not use expected context settings. You may find that that inhibits the use of computed columns, filtered indexes, indexed views, things like that. So always good to have this here. Since I do most of my demo testing either with SSMS or with SQLQueryStress, maintained by the lovely and talented ErikEJ,

I get a pretty standard set of context settings for my queries. So that is the default, out-of-the-box output you get with sp_QuickieStore. In other videos, we’ll talk about different things: other ways to use it, different ways to manipulate the output and the results, how to filter things, stuff like that.

So we’ll get to all that stuff. And you’ll love it. You’re going to love every minute of it because you love SQL Server.

You love query store. You love free stored procedures that people spend hundreds of hours writing in hopes that you’ll use them and love them. Right?

Oops. Everything except pay to use them. Anyway, that’s sp_QuickieStore in a nutshell. We’re going to do some more videos and talk about other things that you can do with it. But this is good enough for now.

Thank you for watching. Like and subscribe if that’s your fetish. I mean, it’s my fetish when people like and subscribe to me. So you’d be doing us both a solid if you do the like and subscribe thing. And yeah, I’ll see you in the next video.

Thank you for watching. Have the best day.

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. Blog readers get 25% off the Everything Bundle — over 100 hours of performance tuning content. Need hands-on help? I offer consulting engagements from targeted investigations to ongoing retainers. Want a quick sanity check before committing to a full engagement? Schedule a call — no commitment required.

Capturing Query Wait Stats With sp_HumanEvents



Video Summary

In this video, I delve into tracking wait stats by leveraging the `sp_HumanEvents` stored procedure in a different way than usual. Instead of setting up a permanent extended event session, I demonstrate how to sample wait stats for a specific duration on the server. This method is particularly useful for quick diagnostics and performance analysis without the overhead of maintaining an ongoing session. I also take this opportunity to pay tribute to ErikEJ, highlighting his contributions to the SQL Server community, including his work with SQLQueryStress and Entity Framework improvements. The video covers various aspects of `sp_HumanEvents`, such as its output format, how it creates views for easier querying, and practical insights into interpreting wait statistics at the database and query level.

Full Transcript

Erik Darling here with Darling Data. Limited. I don’t have the gall to call myself unlimited because I have some very strict limitations on things. California wine, most beer, pork chops, what else? I don’t know, some other stuff. Anyway, now that we’ve been on a dinner date together, let’s talk about how we can use sp_HumanEvents to track wait stats. We’re going to use the stored procedure in a little bit different of a way than we have in the past few videos, where we set up a more permanent extended event session and looked at the data that comes in there from different things happening. This time we’re going to use it to sample the wait stats for a duration of time on the server. And there’s a funny bit of coincidence in this video: a while back, the lovely, wonderful, talented ErikEJ on Twitter, who took over maintaining SQLQueryStress from Adam Machanic when Adam open-sourced it, had posted a blog post about inserts. Well, it wasn’t his post. Somebody reposted it.

Someone else wrote it and did all this stuff. But the post was about inserts with GUIDs and integers, and there was a lot of talk about index fragmentation. And I’ve been working up some demos to show that it’s not the index fragmentation that is the problem, and this is sort of a precursor to those demos. This is just a test proc that I have to do a bunch of singleton inserts with different data types as the clustered primary key of the table. So let’s all give a round of applause to ErikEJ, for two reasons. One, he spells his name correctly: it’s Erik with a K. And two, he does a lot of great community work. I mean, aside from SQLQueryStress, he does a lot of stuff with Entity Framework and whatnot, trying to make that better for, well, I guess for all sorts of database folks. But you know, I mostly work with the SQL Server folks who end up using Entity Framework. So round of applause there. We appreciate you, ErikEJ. You are truly an MVP in my heart, not just a Microsoft one. Anyway, let’s look at this, and let’s actually kick this off running. This is just going to do a bunch of inserts, and it’s going to take around a minute. Now, while that runs, I’m going to tell you about a couple things with sp_HumanEvents. One is that when you run it for a sample like this, there are a bunch of analysis queries that it spits out at the end. If you use sp_HumanEvents to make more permanent extended event sessions and do stuff with that, it will also create a bunch of views. This is all in the lovely documentation on GitHub. You can take a look at that if you’re interested in more information. The views mimic the output of the session data that you see when the stored procedure finishes running for a duration of time.
I do that because I don’t want you to have to figure out how to query things on your own to get, you know, commensurate results in the different ways that you might use the stored procedure.

You know, I created the thing, I wrote the thing, and I want you to be able to use the thing however you feel comfortable, and to give you as many ways as possible to make it easy to get in and access things. So when you sample wait stats and you have the extended event run, it runs for however long you put in the seconds sample parameter, and then it stops and drops the extended event session at the end. Good, good, good.
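The sampling run described here boils down to a single call; the parameter names follow the sp_HumanEvents documentation as I recall it, so verify against your installed version:

```sql
/* Sample wait stats server-wide for 60 seconds, then
   drop the extended event session automatically */
EXEC dbo.sp_HumanEvents
    @event_type     = N'waits',
    @seconds_sample = 60;
```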

Now, let’s look at what the results give us back. We have total waits for everything across all of the time that the session ran, right? So this is total waits.

Well, look at that, there’s a column called total waits. So we see how many instances of the wait occurred, the duration of time that accumulated for the waits, the signal duration, and then the average milliseconds per wait. So we can get a feeling for a few different things here.

One is how many times the wait happened, right? Is it a lot or a little? How much total time that wait was responsible for in the window of time that we measured? And how long, on average, that wait lasted when it occurred, right?

Like, if a wait happened only two or three times but it lasted a long time, say it was a lock wait or something like that, we would want to know how long queries were waiting on locks on average during that time, to figure out if we have a locking problem. If the average milliseconds per wait for the locks was fairly low, then we know that it’s probably not a locking issue.

I mean, it’s something we could dig into, but it’s probably not going to be the first thing that I look at. The second result set we have down here is total waits by database. Most everyone has multiple databases on their server, so you can figure out which databases had the most waits happen.

Sometimes there is a null in the database name. That’s not my fault. That’s not me not being able to resolve a database name or something. That’s in the extended event data.

That’s me pulling directly from the XML. So if database name in there is null, then there’s really nothing I can do about that for you. So whatever.

Sorry about that. Maybe you could file a bug report with Microsoft. Maybe you’ll have better luck filing bug reports with Microsoft than I do. But you’ll get just about the same information here.

Total waits, duration, signal duration, and average milliseconds per wait. That doesn’t change a whole lot from the total waits here because I don’t have a bunch of very active databases. So everything down here is going to look pretty much like it does up here.

But then the part that I like the most, the part that I think is the coolest, is this bottom section, which is total waits by query and database.

What I do, when I can do it, is find query plans and statement text for the queries that ran and accrued the waits that happened during the window of time that you measure. So again, the total waits and stuff in here isn’t going to look terribly different from what it looks like up here, just because there’s not a lot of activity on my server, just me pounding away with these inserts.

So this is all going to be the same query text and plan. It is a little bit repetitive. I try to get the SQL handles in the extended event and then go look for those SQL handles in the plan cache, so that I can figure out the text and the query plan for them.

So I get that additional information from the plan cache. If you’re measuring something on site like this, like for a duration of time, there’s a pretty good chance that the plans will still be in the cache by the time this thing finishes running. But I can make no guarantees that it absolutely will be.

So if we look at the statement text here, this is sp_HumanEvents doing its whole thing. We don’t really need to look at that. There’s quite a bit of code in there.

We have the statement text of what ran. So this is the insert that was happening. This is all the singleton insert stuff that was going on. This is the very unimpressive query plan for all the inserts that were happening.

We don’t need to save that. But then, you know, we can see in here, let’s just go for these last bottom three rows, because those are the ones that we have the waits for. We can see that these queries hit some WRITELOG waits.

The average duration was pretty quick there. We were able to push those writes through pretty quickly. We hit some page latch waits, PAGELATCH_EX waits.

Those were also fairly quick here. We did accumulate a bit of time on them, but we also did a lot of them. For this run of the insert test, we did 4,000 iterations with 100 threads, for a total of 400,000 iterations.

So 400,000 executions of those queries produced fairly minimal waits. So our insert test, I think, was pretty efficient, right? We were able to bang a lot of inserts into that table in about a minute.

400,000 inserts a minute, pretty good, right? I wish I did that kind of business. Maybe someday, maybe someday. Although I’m not sure that there are 400,000 SQL Server customers per minute that I could get contracts signed for.

I’d be very busy. I’d probably at that point be the CEO of a large corporation with many minions doing all sorts of my biddings. Ah, man.

Got to get some more minions. Anyway, this is one way to use sp_HumanEvents to track wait stats. Again, you can also create a persistent extended event session to grab them, and sp_HumanEvents will create views for you when you do that, to give you results that look like this as much as possible.

You know, the plan cache is a volatile place, and I can’t guarantee that the plans and everything will always be in there for you to have attached to the waits that happened. But there’s not really much I can do to fix that.

I did think about involving Query Store here. But, you know, in this day and age, I still don’t run into a lot of folks who have Query Store turned on. And there would also be some additional complications in the code to, you know, make sure the database has Query Store enabled and then go look in Query Store for the SQL handle.

It still might not even be in there depending on Query Store settings. So, you know, it’s a lot of complication for what is potentially very little reward. But maybe sometime down the line, if Microsoft ever decides to have Query Store replace the plan cache and it is on by default for everyone in the world, which it is for new databases in SQL Server 2022.

But, you know, hard time running into a lot of people with that in production just yet. Maybe then it will be worth the code review. But, gosh, I hope I’m doing something else by then.

I hope I’m retired by the time that happens. So, anyway, that’s about all I had to say here. So, thanks for watching.

Like and subscribe if that’s the kind of thing you’re into. If not, then, I don’t know, just keep hitting F5 until a new video shows up, I guess. That’s a good use of your time.

All right. DBAs could switch from hitting F5 in SSMS on sp_WhoIsActive to hitting F5 on my YouTube video playlist to see if a new video showed up. But maybe you could write some sort of program so that when you hit F5 in one place, it hits F5 everywhere.

It just refreshes everything. Wouldn’t that be nice? Like, yeah.

Anyway. There I go again. Anyway. Yeah. Thanks for watching. Like and subscribe. And I will be recording more videos about how I use other SQL Server community tools after this. I’m going to go through a few more of mine.

sp_QuickieStore and sp_PressureDetector. And then I’ll do sp_WhoIsActive and probably the Blitz scripts as well, because there are a few of those that just do not get enough love, affection, and attention. Anyway.

That’s enough for now. Goodbye. Have a nice day. It’s not. It’s still not Friday. I’m told. Which is very depressing. All right. Well. That’s enough for me.

That’s enough of me. Thank you.


Capturing Query Recompilations With sp_HumanEvents



Video Summary

In this video, I delve into the intricacies of query recompilations in SQL Server, using sp_HumanEvents and extended events to track them effectively. I explore why recompilations occur and how they can impact performance, especially when a query that was previously humming along suddenly starts using a poor execution plan. By understanding these recompilations, you can better diagnose issues and optimize your queries, ensuring smoother operations even in high-volume workloads. I also discuss common causes of recompilation such as schema changes, statistics updates, option recompile hints, and temporary table modifications, providing practical advice on how to address each scenario.

Full Transcript

Erik Darling here with Darling Data. And I think this will probably be my last video of the day, because what I have discovered is that the natural light that comes in through my window has a profound effect on the effectiveness of the green screen behind me. Contrary to popular belief, I do not live in SQL Server Management Studio land. I live in my office, and I just have SQL Server Management Studio as a backdrop for everything. I know there’s a green screen behind me, and the later in the day it gets, the more distortion there is when I move around a little bit. And I don’t want you to have a bad watching experience on account of it being later in the day. So we’re going to make this the last one, and we’re going to talk about using sp_HumanEvents to track queries that are recompiling on your server. Are query recompilations the biggest deal in the world? Maybe not exactly. But what I’ll tell you here is that if your query is humming along, reusing an execution plan, everything is doing great, and then something happens, you swear to me nothing has changed, but something has happened.

And all of a sudden, your query starts using a poor execution plan. Well, guess what? You might want to know why it recompiled. Because what you’re going to do is send me an email, or ask a question on Stack Exchange or Stack Overflow. You’re going to say, hey, nothing changed, but all of a sudden, this query got slow. And everyone’s going to say, well, something changed, because you stopped using a query plan. And the cause could be a lot of things, right? A plan could get evicted from the plan cache because of memory pressure or something like that. Someone could clear out the plan cache. There are a lot of reasons why you might find things happen. But those things would fall under query compilations.

Recompilations, though: probably the most common reasons you’ll see would be from this list. Schema changing; statistics changing, you know, automatic statistics updates, or manual statistics updates during the course of your high volume, four-batch-requests-a-second workload; and temp tables changing. That last one doesn’t mean that the definition of your temp tables changed. What it means is that you have hit a threshold of modifications to the cached temp table that causes SQL Server to recompile, which is an interesting one.

And then, of course, the ever-present option recompile requested, which, because I’m a bit of a lazy bones, is what I have asked my queries to do: request a recompile via the OPTION (RECOMPILE) hint. And that’s what I’m going to show you over in the extended event data. There are a lot of other reasons why you might see a recompile happen, depending on local factors, the way that you use SQL Server, the way that the vendor has written queries to use SQL Server.

Query Store stuff, obviously; there are a few Query Store things in here. “Not a recompile” is a funny one. But, anyway, there are a lot of things in here that you might see, but, you know, these are the most common ones, I think, that I see.

Probably statistics changed, temp table changed, option recompile. There are not a lot of people who are adding and dropping indexes or columns or other things like that in the middle of the day. So, maybe this one is not as common as I think.
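For reference, pointing sp_HumanEvents at recompiles follows the same pattern as the other event types; the parameter names here are as I recall them from the documentation, so verify with the procedure’s @help output:

```sql
/* Sample recompile events server-wide for 30 seconds */
EXEC dbo.sp_HumanEvents
    @event_type     = N'recompiles',
    @seconds_sample = 30;
```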

But, anyway, this query setup will look pretty close to the query setup that I used for the compilations event that we looked at in the last video, except this one is properly written dynamic SQL. It is parameterized.

Here, we have used sp_executesql to pass a parameter value in on each execution. But, in this case, we have asked SQL Server quite nicely to recompile this query every single time it shows up. And what we will see in the extended event that sp_HumanEvents has so graciously set up for us is this data.
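The shape of the test query I’m describing, properly parameterized but asking for a recompile on every execution, looks something like this. The table, column, and parameter names are stand-ins for illustration:

```sql
DECLARE @sql nvarchar(max) = N'
SELECT COUNT_BIG(*) AS records
FROM dbo.Users AS u
WHERE u.Reputation = @reputation
OPTION (RECOMPILE);';

/* Parameterized, but recompiled on every execution anyway */
EXEC sys.sp_executesql
    @sql,
    N'@reputation int',
    @reputation = 2;
```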

We see the name of the event. Very helpful. Thank you.

Extended events. We see when it happened. Already dating this video. We see the statement. And now, since there’s an OPTION (RECOMPILE) on here, it’s going to be fairly obvious what happened. But in other cases, maybe where there’s not an OPTION (RECOMPILE), either at the statement level or at the stored procedure level, it might not be so obvious.

And, then we will have the recompile cause over here. Option recompile requested. So, we’ve gotten a wealth of useful data out of this.

How we go about addressing the recompilation stuff kind of depends on the situation. There might be some queries where the option recompile is a good idea. Might be some queries where it’s not and you can get rid of it.

If the recompile cause is something that is not the fault of the query, something like schema changed or statistics changed, you might look at what processes are changing tables in the middle of a workload. I imagine there might be some blocking involved.

Sorry, tables or indexes, columns, something like that. That might cause some significant blocking depending on other local factors. For the statistics update thing, we might look at choosing to manually update statistics at some other interval if it’s causing a problem.

If the temp table is changing, we might look at the KEEP PLAN or KEEPFIXED PLAN query hints. And if it’s option recompile, we would just have to figure out if the recompile hint is necessary for the query in order for it to run well. A lot of the time, you’ll see the OPTION (RECOMPILE) hint on reporting queries where you don’t care.

But if the OPTION (RECOMPILE) hint is attached to a query just to fix parameter sniffing, like you decided you’d like to be the one person who doesn’t misuse the local variable trick to fix parameter sniffing. Again, babies.

Then that might be another way around it. There are lots of things in here. If you run into one of the weirder ones, like FOR BROWSE, or set options changing, or cursor options changing, well, you’ve got a really strange set of problems that you have to deal with. Boy, oh boy.

Don’t envy that. Anyway, before this green screen artifacting gets any worse, I’m going to end this video. Say thank you for watching. I hope you enjoyed yourself.

And I hope if you’re able to, if you have the constitution and the will and the means that you’re able to start drinking like I’m about to do. So if not, I don’t know, have a cup of tea. Eat a cookie, whatever you’re into.

Make sure it’s healthy and you have the consent and support of those around you. Anyway, have a good night.



Capturing Query Compilations With sp_HumanEvents



Video Summary

In this video, I delve into the issue of frequent query compilations using my go-to stored procedure, `sp_HumanEvents`, to help troubleshoot various SQL Server problems. We explore how to identify queries that are compiled frequently and discuss why this can be a significant issue, even if it might not seem like the biggest problem at first glance. I walk through setting up an extended event session with specific parameters to track query compilations in real-time, demonstrating how to interpret the results to pinpoint problematic queries. This video is part of a series where we cover different tools and techniques for diagnosing SQL Server issues, including recompiles and wait stats, ensuring you have a comprehensive toolkit for optimizing your database performance.

Full Transcript

Erik Darling here with Darling Data, and I have tried to start recording this video no fewer than five times before being hit with a sneezing fit. So if I get through this one, it’ll be nothing short of a miracle. This video is a continuation of the series of videos where I’m talking about how you can use my amazing, the most rockinest, shockinest, hip-hoppinest stored procedure on the planet, sp_HumanEvents, to troubleshoot various SQL Server problems. We’ve talked about using it to troubleshoot blocking, and we’ve talked about using it to troubleshoot query performance issues. This is going to be a short one, to look at finding queries that are compiled frequently. Now, query compilations may not be the biggest problem on your system, but on a lot of the systems that I look at, more than a few times they’ve ended up being a pretty big issue.

You look at a query performance counter like batch requests a second, and then another query performance counter like compilations a second, and you might find that the number of compilations a second pretty closely follows batch requests a second.
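Those two counters are also exposed through a DMV, so you can check the ratio without opening Performance Monitor; a sketch:

```sql
/* Both counters are cumulative since startup, so sample twice
   and diff the values to turn them into per-second rates */
SELECT
    pc.counter_name,
    pc.cntr_value
FROM sys.dm_os_performance_counters AS pc
WHERE pc.counter_name IN
      (N'Batch Requests/sec', N'SQL Compilations/sec');
```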

That means that SQL Server is just constantly sitting there coming up with brand new query plans for queries that could be parameterized. Now, I know a lot of you out here are going to hear parameterized queries and have a mild freakout because you’re afraid of parameter sniffing, and God, I wish you’d stop being such babies about it.

But here we are, you being big babies, afraid of a sniff parameter, like there’s no way to fix parameter sniffing in the world, and not solving systemic workload problems.

like frequently compiling queries. Not compilating, that’s my mistake. Frequently compiling queries, when you could parameterize them, because you don’t want SQL Server just sitting there coming up with query plans constantly every time a batch compiles.

It’s not a good feeling. So we’ve talked about a few other ways that SP underscore, human events, can help you troubleshoot other issues.

We looked at blocking, we looked at query performance. We're going to look at compilations now. We will most likely look at recompiles next, and we will look at wait stats for the final installment in this series, where I'm talking about different ways that I utilize SQL Server community tools when I'm helping clients with their problems.

So I’ve already kind of done some of the legwork here, so that you don’t have to sit there and watch me hit F5 and wait for stuff to happen. So I’ve set up this extended event, and I’ve used this extended event with the keep alive parameter just to have a persistent session going.
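For reference, a hedged sketch of what that setup might look like; the parameter names here are from my memory of sp_HumanEvents, so check the procedure's documentation for your version:

```sql
/*
Start a persistent compile-tracking session.
@keep_alive leaves the extended event session running
instead of tearing it down after a sampling window.
*/
EXEC dbo.sp_HumanEvents
    @event_type = N'compiles',
    @keep_alive = 1;
```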

And over in this window, I have set up a query to execute. And when this query executes, every single time, it will receive a new literal value. And this new literal value, because we are not getting simple parameterization for this query, every time this thing executes, it will compile a new query plan, which for this query is not that big a deal, because it’s not very big, it’s not very complex.

But at the same time, it’s an easy demo. And you’re going to have to suspend disbelief a little bit and live with an easy demo that shows you how this thing works and what you might want to do to fix it.

All right. So thanks. Thanks for helping me with that. So this just runs in a simple loop. Doing this little bit of…

I mean, this isn’t dynamic SQL exactly. This is like the bad kind of dynamic SQL that you don’t want to be doing, and this will definitely cause frequent query compilation.
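Something like this sketch, with made-up table and column names, shows the shape of that bad dynamic SQL: a fresh literal glued into the string on every pass.

```sql
DECLARE
    @i integer = 1,
    @sql nvarchar(MAX) = N'';

WHILE @i <= 10
BEGIN
    /* literal concatenated straight in: nothing to parameterize */
    SET @sql = N'SELECT u.* FROM dbo.Users AS u WHERE u.Id = '
             + CONVERT(nvarchar(11), @i) + N';';

    EXEC (@sql); /* each execution compiles a brand new plan */

    SET @i += 1;
END;
```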

Because every time this query runs, SQL Server will see a different literal value and say, Ooh, a brand new query. Have a brand new query plan.

Even though probably every query plan is exactly the same, because we’re just going to seek to a specific value in the clustered index. But nevertheless, SQL Server is just coming up with new plans. All the darn time for this thing.

So what does our extended event show us? Well, great question, because I'm about to show you. Some newer versions of SQL Server have an event called query parameterization data, which will give you some additional feedback about whether the query has literal values, whether the query could be parameterized via forced parameterization, and whether it's a recompile event.

Now, I know I’m skipping ahead a tiny little bit here. Again, you’ll have to forgive me. And we will see that this query…

Let’s zoom in here. Let’s have a lot of fun. This query has literal values, is parameterizable, and is not recompiled. So this is not a recompilation event.

This is a compilation event. This is SQL Server thinking that it has discovered a brand new query and giving us a brand new query plan to go along with it, which is, I’m going to be honest with you, quite strange, because we really don’t need a brand new query plan for this, do we?

We’re just seeking to a single value in a clustered primary key. Why on earth would we need a new query plan for that? That seems weird there, SQL Server.

But here we are trying to troubleshoot this issue. As for the stock and standard extended events: newer versions of SQL Server, I think 2016 or so and up, have the query parameterization data event available.

The SQL statement post-compile event is in all of them, and that will just tell you: is recompile, false; recompile cause, not a recompile.
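If you'd rather build the session by hand than use sp_HumanEvents, a minimal sketch capturing both events might look like this. The event names are real; the session name and target choice are just illustrative, and query parameterization data needs one of those newer versions.

```sql
CREATE EVENT SESSION compile_watch
ON SERVER
ADD EVENT sqlserver.query_parameterization_data,
ADD EVENT sqlserver.sql_statement_post_compile
ADD TARGET package0.ring_buffer;

ALTER EVENT SESSION compile_watch
ON SERVER
STATE = START;
```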

So very helpful stuff there. I’m glad I chose to show those two columns in the output. That’s smart of me. But this is one way that we can track down queries that are compiling rather than using an existing plan.

Well, we can track down queries that are compiling and sort of figure out if they’re ones that could be using an existing plan. It would be helpful if I put this statement for these things in here, wouldn’t it?

Oh, hey, look. We found some other things that are showing up in here. That’s fun. There we go.

Those are the ones I wanted to show you. This is stuff that happened afterwards because I was a ding-dong and I didn’t stop the session. So these ones in here, please, once again, suspend disbelief.

Ignore these ones a little bit. There we go. We can just cut those off. I feel better about that now. So every time this query came in, it gave us a brand new query plan.

It compiled as a brand new query. All 10 of these things, SQL Server said, oh, this number. What could I possibly do with this number? Brand new query plan for all of you.

And that's not great. Now, the query parameterization data event also has this SQL text field, but for the query that I wrote to generate this, it is probably a bit less useful than it would be for you, where you'd see the batch or stored procedure that the compilation event is happening in.

You would probably see a lot of dynamic SQL being generated, the bad kind of dynamic SQL that’s not parameterized doing this stuff. You might catch queries from that third-party vendor, which you might be that third-party vendor, so welcome to the party, pal, that are not parameterized in the application or wherever your queries may originate.

I try not to think too hard about that because my brain would turn to a puddle of mush and drip out my ears.

That just wouldn't be fun for anyone. So this is a good way to track that stuff down. The forced parameterization setting can help in some cases. One case where it wouldn't help is where a query is partially parameterized and there are still some literal values left in it.

You're kind of hosed there; forced parameterization won't fix those. But if you're lucky enough to have a bunch of queries that have no parameters attached to them, and forced parameterization is not already turned on for your database, then this could be a good way to fix it.
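Turning it on is a one-liner, though test it carefully, since it changes plan reuse behavior for the whole database:

```sql
/* replace YourDatabase with the actual database name */
ALTER DATABASE [YourDatabase] SET PARAMETERIZATION FORCED;

/* and to put it back to the default */
ALTER DATABASE [YourDatabase] SET PARAMETERIZATION SIMPLE;
```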

Otherwise, it would be on you as the, well, first- or second-party vendor. I'm still not clear on what a second-party vendor would be. First party is you made it.

Third party is someone else made it. Second party, I don’t know. Someone made it for you maybe? I don’t know how to interpret that. And God help us if we ever figure out that there’s a fourth party involved.

So, man, there's a lot going on here. Anyway, if you look at your SQL Server and you see that compilations per second are a significant portion of batch requests per second, then you may want to think about running this to see what kinds of queries are causing those compiles. Then come up with a reasonable solution: either the database-level forced parameterization setting, or yelling at developers, first, second, third, maybe even fourth or fifth party, to properly parameterize their queries. Or, I don't know.
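For the developer-side option, properly parameterizing usually means something like this sp_executesql sketch (object names made up for illustration):

```sql
DECLARE
    @sql nvarchar(MAX) =
        N'SELECT u.* FROM dbo.Users AS u WHERE u.Id = @Id;',
    @Id integer = 42;

/* one plan, reused for every @Id value */
EXEC sys.sp_executesql
    @sql,
    N'@Id integer',
    @Id;
```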

Just shut SQL Server down. See what happens. Maybe no one will complain. Maybe everyone will say, cool, the server’s down. I’m going to go outside and enjoy myself.

I’m going to touch some grass, as my friend Arthur likes to say. Anyway, thanks for watching. Again, next video, we’ll look at recompiles or recompilations, whatever you’re into.

Video after that, we'll look at wait stats, and, I don't know, from there we'll go on to look at a different tool that I think is good to use for troubleshooting SQL Server stuff.

Thanks for watching. I appreciate you.

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. Blog readers get 25% off the Everything Bundle — over 100 hours of performance tuning content. Need hands-on help? I offer consulting engagements from targeted investigations to ongoing retainers. Want a quick sanity check before committing to a full engagement? Schedule a call — no commitment required.