Learn T-SQL With Erik: Grouping Sets vs Union All



Video Summary

In this video, I delve into the lesser-known world of `GROUPING SETS` in SQL Server and explore its alternatives, including `CUBE` and `ROLLUP`. While these features are powerful for complex aggregations, they often face performance challenges due to how the SQL Server optimizer handles them. I demonstrate a real-world example using Stack Overflow data, showing that even with a columnstore index, `GROUPING SETS` queries struggle to utilize batch mode efficiently, leading to slower execution times compared to equivalent queries without these features. By comparing the query plans and performance metrics, you’ll see firsthand why sometimes sticking to simpler methods can yield better results in practical scenarios.

Full Transcript

All right, we’re gonna talk about, boy, something I don’t see very often, to be honest with you. Maybe I don’t see it very often because a lot of people don’t really understand how this stuff works and so they get afraid of it. But we’re gonna talk about grouping sets and alternatives here. And the alternatives are important because there’s also the distinct possibility that someone wrote a grouping sets query once upon a time and, gosh, it just didn’t perform terribly well, which is, uh, eh, that’s something that can happen. Boy, howdy. We’re gonna look at that. Um, all of this content is preview material for my course Learn T-SQL with Erik. Uh, it is on the pre-sale price right now of 250 US dollars. That’ll last you for the rest of the year. Uh, it will go up to 500 US dollars once the advanced material is published after the summer. Uh, the, the beginner material is on the very cusp of being completely published. So, uh, you’ll have many hours, uh, of, of content to get through, uh, anyway. Uh, this is all companion content, meaning not the exact content, but it is companion content to, uh, the material that Kendra Little and I will be presenting at, uh, PASS Data Community Summit in Seattle. So if you attend our pre-cons there, uh, you will, you will get access to this material, uh, with your price of admission. Um, I think that’s about it. So we’ll just get at a lot of this thing. And, uh, well, I mean, there’s a whole slide here. Now you can see all my secrets, right? You can see all this stuff that I do when you’re not watching. So, uh, we’re gonna get into the old Stack Overflow database here.

And, uh, I’m gonna show you what a grouping sets query looks like. There are of course other grouping set, uh, let’s just call them clauses that you can use, uh, like cube and rollup. Uh, cube will, uh, create distinct grouping sets out of all possible column combinations and rollup will, uh, create, um, well, rollup is a little bit more, uh, uh, complicated in what it does at least, at least it’s more complicated to explain where you define your rollup clause. And then SQL Server will take whatever column you put first in the rollup clause and match that to other columns. So it doesn’t do all unique sets. It just does like a subset of unique sets based on whatever you put in first. Uh, I cover it in way more detail in the actual material.
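To make the distinction concrete, here’s a sketch against the Stack Overflow `Votes` table (the column names match that database, but these are illustrative queries, not the exact course scripts):

```sql
/* ROLLUP respects column order: it produces the sets
   (VoteTypeId, VoteYear), (VoteTypeId), and () */
SELECT
    v.VoteTypeId,
    YEAR(v.CreationDate) AS VoteYear,
    COUNT_BIG(*) AS Records
FROM dbo.Votes AS v
GROUP BY ROLLUP(v.VoteTypeId, YEAR(v.CreationDate));

/* CUBE produces every combination:
   (VoteTypeId, VoteYear), (VoteTypeId), (VoteYear), and () */
SELECT
    v.VoteTypeId,
    YEAR(v.CreationDate) AS VoteYear,
    COUNT_BIG(*) AS Records
FROM dbo.Votes AS v
GROUP BY CUBE(v.VoteTypeId, YEAR(v.CreationDate));
```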

So if you’re really fascinated by what cube and rollup do, uh, you can, you can buy the material and watch it or just watch it if you’ve already bought it. But, uh, with grouping sets, you can define whatever sets of columns you want to group things by. Uh, I have chosen to do, uh, a few levels of grouping here, one by, uh, all, by all three columns that I’m selecting.

And then one grouping set for, uh, each of the columns individually. I could of course add stuff in here. Like I could add another line that says vote type ID and post ID and another one that says vote type ID and vote year and post ID and vote year. But just to keep, just to keep things relatively simple, I’m going to, um, I’m going to just do the three together and then the three individually.

There’s also this empty parentheses at the very end, which is going to be the global aggregate for this. So I’m going to start this thing running and you’ll, you’ll notice that I have a columnstore index up here. I have not created that yet. Uh, we’re going to save that for later.
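Based on that description, the demo query is shaped roughly like this (a sketch; `COUNT_BIG(*)` is an assumption about the aggregate being used):

```sql
SELECT
    v.VoteTypeId,
    v.PostId,
    YEAR(v.CreationDate) AS VoteYear,
    COUNT_BIG(*) AS Records
FROM dbo.Votes AS v
GROUP BY GROUPING SETS
(
    (v.VoteTypeId, v.PostId, YEAR(v.CreationDate)), /* all three together */
    (v.VoteTypeId),                                 /* each column alone  */
    (v.PostId),
    (YEAR(v.CreationDate)),
    ()                                              /* global aggregate   */
);
```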

What I want to show you here is just that, um, SQL Server, uh, SQL Server’s optimizer fights real good and hard against, uh, using any sort of batch mode stuff, uh, with these grouping sets queries. Uh, even, like we’ll show you in a second, with the columnstore index. But right, right about 30, 30 or so seconds for this thing. And you know, grouping sets queries, they tend to need to process a lot of data.

Like you’re talking about, like, you know, doing, like, big aggregations of stuff. It would seem a fairly natural fit for the optimizer to want to use, like, batch mode for these things, but we just don’t get any. You can kind of figure that out from the query plan by looking at the choice of operators, not always, but in some of these things.

So like, you know, first we have, uh, parallel exchanges, parallel exchanges, like there’s one here, repartition streams, repartition streams, repartition streams, distribute streams. Uh, that, well, that’s a filter. That’s not quite it.

Uh, gather streams over here. None of the parallel exchanges support batch mode. There are operators in the plan that support batch mode, but they all, I, I, I, at least the last time I ran this, they all ran in row mode. So like there’s a sort at the end.

Uh, there’s a filter that runs in row mode; filters support batch mode. Uh, stream aggregates here. One, two, three, uh, well, one, one up there, one down there. Those don’t support batch mode.

The hashes do support batch mode, of course, but they all run in row mode. So this whole thing is kind of cooked. This whole, this whole thing takes 30. Well, actually we should go look.

Cause like that looks pretty funny. 43 seconds. And then I got 20 seconds, 43 seconds, 32 seconds, SQL Server. What happened? Let’s, let’s go, let’s go look at the actual tale of the tape over here in, uh, the query time stats. Hopefully my big head won’t be in the way.

So, uh, that, that one parallel operator lied to us. So we used 166 seconds of CPU time to get 32 seconds of wall clock time. So what we appreciate, we appreciate SQL Server’s willingness to use a parallel execution plan here.

We do not appreciate the lack of batch mode here for them. Look at that number of rows that we are selecting and aggregating. Gosh darn it, SQL Server.

Why won’t you use batch mode? So what I’m going to do is I’m going to kick off, uh, creating this columnstore index. And just so I don’t get yelled at later, I’m going to make sure that that semicolon’s in there.

We don’t want a lack of termination here, do we? I’m going to get that started and I’m going to show you what an equivalent query without the grouping sets stuff looks like. Uh, so we’re going to have one query at the beginning where we select and group by all three columns.

That’s the vote type ID, post ID and vote year. Uh, I’m using a little trick that I showed off in one of my earlier videos where I’m doing this little cross apply values trick to have, uh, one source for this DATEPART(YEAR, CreationDate) thing.

Uh, so that I can reference that one, uh, well, I can reference that a few times throughout the query without having to keep rewriting DATEPART(YEAR, CreationDate). Now, of course, like, you have to do more typing, right?

So we have one query that does all three columns and groups by all three. We have another query that groups by just vote type ID. We have another query that groups by just post ID.

We have another query that just groups by vote year. And then we have one query that, that essentially just like doesn’t aggregate on all of them. Now, if I run all, what I want to do is run all these together. Now, keep, remember that I just created a columnstore index here.
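Taken together, the UNION ALL rewrite described above looks roughly like this sketch (again illustrative, not the exact course script):

```sql
SELECT v.VoteTypeId, v.PostId, x.VoteYear, COUNT_BIG(*) AS Records
FROM dbo.Votes AS v
CROSS APPLY (VALUES (DATEPART(YEAR, v.CreationDate))) AS x (VoteYear)
GROUP BY v.VoteTypeId, v.PostId, x.VoteYear

UNION ALL

SELECT v.VoteTypeId, NULL, NULL, COUNT_BIG(*)
FROM dbo.Votes AS v
GROUP BY v.VoteTypeId

UNION ALL

SELECT NULL, v.PostId, NULL, COUNT_BIG(*)
FROM dbo.Votes AS v
GROUP BY v.PostId

UNION ALL

SELECT NULL, NULL, x.VoteYear, COUNT_BIG(*)
FROM dbo.Votes AS v
CROSS APPLY (VALUES (DATEPART(YEAR, v.CreationDate))) AS x (VoteYear)
GROUP BY x.VoteYear

UNION ALL

/* the global aggregate, standing in for the empty grouping set */
SELECT NULL, NULL, NULL, COUNT_BIG(*)
FROM dbo.Votes AS v;
```

The CROSS APPLY (VALUES …) construct gives the `DATEPART` expression a single name, `x.VoteYear`, so it can be referenced in the select list and GROUP BY without repeating the expression.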

We wouldn’t need the columnstore index for this thing to run in batch mode. Uh, SQL Server could choose, can choose batch mode very naturally for this. Um, I’ll show you that at the end, but the important thing is that when you look at this plan, there’s two things.

There’s a couple few things about this that are much, much different from the plan that we just saw using the grouping sets method. One, there are no parallel exchange and exchanges in this until the very end, right? There’s one necessary one at the end where we do gather streams.

Uh, but there’s also no stream aggregates. Like we don’t have all the repartition stream stuff in here. And of course, if we look at, we look at the execution modes for all of these operators, these are all going to say batch mode, batch mode, batch mode. So this whole thing runs a lot faster.

If you look at the query time stats for this one, just cause it’s not, maybe not terribly obvious from the, from the query plan as a whole. Uh, we spend, uh, 1.13 seconds of CPU time for 1.8 seconds of elapsed time, right? So that’s a huge improvement, uh, using batch mode over, uh, like row mode execution for these queries.

Let’s go back up to the, the, uh, grouping sets function query. And let’s run this with the columnstore index in place. If we let this go for a little bit, right?

It does, it does go for a little bit still. Uh, it’s already slower than the, the union all method of doing this. Um, this thing will take about 12 or 13 seconds in total. Um, and the, the query plan is different, right?

Like with, with the, um, with the columnstore index in place, you don’t see all, like, you don’t see a lot of the repartition stream stuff, but there is still some real junk in the query plan. Like this whole section here, where, uh, like very early in the plan, we gather streams. That means we end a parallel zone right here and then we restart a parallel zone right here.

But in, inside of that serial zone, right? Where we, it’s like no more parallel and then back to parallel in here, we, we do a stream aggregate. Because this is just something that grouping sets has to do for the global aggregate.

Like I’ve, I tried all sorts of tricks to get rid of this. I tried an OPTION (HASH GROUP) hint. Uh, I tried a QUERYRULEOFF hint to prevent the stream aggregate, but still it’s stuck in there. I don’t want to start getting into all that stuff in this video, but man, this thing was just very, very persistent.

And if we look at the query time stats for the grouping sets one, like we go into properties and we look at query time stats here. Like we were at 1.7 something seconds for the one that, uh, the union all, uh, format and like 13 seconds of CPU time here, we’re up to 30 seconds of CPU time and 10 seconds of elapsed time. So this isn’t like, you know, like when I say like, I don’t see this stuff very often.

I kind of get why. Like, it’s not just that the syntax is kind of, like, weird and difficult to remember, or even, like, occur to you if you need to do this sort of thing. Uh, it’s that, like, even if you, you, you’re, like, a steadfast rememberer of syntax, which I am generally not.

I need to refer back to, you know, stuff that I wrote before to remember like half the things that I want to do. Uh, like, even if you are really good at remembering this stuff, like you might try this and just be like, holy cow, this is slow. So let’s, um, uh, let’s, uh, get rid of the, the, uh, the columnstore index that I created.

And I just want to show you that kind of naturally, uh, the union all version of this does a lot better on its own. Just sort of naturally using batch mode. SQL Server is like, I’m not afraid of batch mode here, right?

Like, like, it’s not going to be as good as when we had the columnstore index to read from. So, like, if you’re doing this kind of, like, big analytical aggregation stuff, there’s a, you know, very, very good chance that you’re just going to want to use a columnstore as, like, the source of your data anyway. But even just, like, getting some batch mode stuff, like, this, this runs pretty well, or runs better, even without the columnstore index. Like, the grouping sets version was, like, 30 seconds or something.

This is, this is, like, twice as fast without even having columnstore as, like, the source to read from. The main thing that slows it down is we have to read from the clustered index over and over again. Right? Like, we do a lot of stuff in there, but even that’s happening with batch mode on rowstore.

We just don’t have that really nicely compressed columnstore index to really speed things up as we’re reading data. So we still kind of have, like, this IO-bound portion of the query that, um, like, it doesn’t have the good columnstore source to read from, but, like, it still happens in batch mode. So it’s still an improvement. And for all of these branches, notice that we don’t have, like, we don’t have that break where we have, like, the serial zone with the stream aggregate.

We don’t have all the repartition streams in here. This all pretty cleanly just runs in batch mode and does everything in batch mode and is a lot faster just naturally. Like I, of course, like I, like I said, I would absolutely prefer to have the columnstore index as a data source for this type of thing. But even without that, even just getting batch mode on rowstore, uh, this ends up a lot faster than the, uh, the grouping sets alternative.

So, like, when you’re, when you’re writing these types of queries, you know, of course, like, what you’re reading from is very important. Um, you know, rowstore indexes are just not, not quite the same jam as columnstore indexes for these big analytical, uh, aggregate types of queries. But, uh, like, you know, depending, like, often it’s just how you write the query. And I’m going to, I’m going to say something unfortunate, generally.

In general, the more you type, the better off you are, right? Like, like, taking, you know, shortcuts like this, you might, like, learn this in some, like, shifty Microsoft DP exam and, you know, think that you’re, you know, like, you’re the hottest cake on the block, uh, knowing, knowing how to use grouping sets. But, you know, get, get out there in the real world where you have to do anything practical on, like, a meaningful set of data, not, like, WideWorldImporters or whatever.

You’re, you’re, you’re going to want to abandon that ship pretty quickly. So anyway, uh, thank you for watching. I hope you learned something.

I hope you enjoyed yourselves. Uh, and I will see you in the next video, uh, which, um, I think this is, this is going to be a Friday video. And then, um, the next one will be an office hours episode. And boy, just, we just have so many exciting things.

So many exciting things coming up, don’t we? All right. Anyway, thank you for watching. Thank you.

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. Blog readers get 25% off the Everything Bundle — over 100 hours of performance tuning content. Need hands-on help? I offer consulting engagements from targeted investigations to ongoing retainers. Want a quick sanity check before committing to a full engagement? Schedule a call — no commitment required.

Erik Tests a New Microphone



Video Summary

In this video, I’m Erik Darling from Darling Data, and I wanted to share a bit of behind-the-scenes content as I test out my new microphone. The old one met its untimely end after countless hours of discussing databases, but fear not—my audio quality remains top-notch! This video serves multiple purposes: it allows me to ensure the sound is clear across various devices so you can enjoy my content without any hiccups, and it’s a direct way for you to support my endeavors. If you’d like to contribute towards better equipment, consider signing up for a channel membership starting at just $4 a month. Your support not only helps me improve but also gives you access to exclusive perks and opportunities to ask questions during office hours episodes.

Additionally, I’m taking this opportunity to promote some of the services and courses I offer. Whether you’re looking to enhance your SQL Server performance with my consulting services or want to dive into T-SQL training, there are plenty of ways to get involved. The pre-sale price for my 24-hour T-SQL course is currently at $250, but don’t miss out on the limited-time offer! And if you’re attending PASS Data Community Summit in Seattle, you’ll receive free access to this companion content with your admission. Mark your calendars for my upcoming live appearances in New York City and Dallas, as well as Utrecht, where I’ll be sharing more insights and expertise.

Full Transcript

Erik Darling here with Darling Data. And if the first thing you’re noticing and admiring in this video is my new clip-on mic, my eyes are up here, buddy, then you are a wise and a dedicated follower, because I was talking the other day about how my old microphone, I was wearing a big headset at the time, my old microphone snapped in half from listening to me talk about databases. Hopefully you will not snap in half from listening to me talk about databases. But we can consider the entire point of this video is Erik tests a new microphone. And the reason I’m doing this this way is because it’s very easy for me to upload this to YouTube and then listen to it across a variety of devices so I can ensure that I sound okay on a variety of devices. So let’s, let’s, let’s shill ourselves here. If you would like me to buy a better microphone, if you have complaints about audio quality, well, you can do something about that. You can sign up for a channel membership and for as few as $4 a month, you can, you can support my endeavors to purchase nice microphones. There’s a, there’s a link down in the video description that allows you to do that. If you don’t care about my microphone, well, I don’t know what to tell you. You can like, you can comment, you could subscribe. You can also ask me questions for free privately that I will answer publicly during my office hours episodes.

Isn’t that nice? Isn’t that nice? These nice things I do for you. If you would really like me to get a new microphone, you can hire me as a consultant. It’s a great way for me to buy new microphones. And I am available to perform all sorts of miracles upon your SQL servers, health checks, performance analysis, hands-on tuning, dealing with performance emergencies and training your developers. So whenceforth you have no more performance emergencies. I do all of these things. And as always, my rates are reasonable. Anyway, uh, if you would like to buy some performance tuning training from me, I’ve got 24 whole entire hours of it. Uh, if you, uh, there’s a link down here that assembles all this stuff for you. But of course, uh, you, you, you can get the everything bundle there, uh, for a hundred, about 150 bucks with a 75% off code. And that will last you for life. No subscription required. Um, if you want to get in on my new T-SQL course. While it is at the pre-sale price of 250 US dollars, uh, you can do that now, uh, videos are dropping and being recorded. I’m going to go do some after I do this. Uh, and, uh, the, the price, once the advanced material is fully published after the summer, will go up to $500.

So please do save yourself 250 bucks unless you’re really itching to donate to the mic fund. Uh, uh, this is all companion content to the pre-cons that Kendra Little and I are teaching at PASS Data Community Summit in Seattle, November 17th to 21st. So if you are attending PASS and have hopefully chosen wisely and you’re attending our pre-cons, well, guess what?

Uh, you, you will get free access to that, this companion content with, with your, with your admission there. Uh, if you would like to see me live and in person, Red Gate, Red Gate’s little road show. I’m being taken on tour sort of around the world, limited world tour, I guess.

Uh, some small clubs and venues, uh, New York City, August 18th and 19th, Dallas, September 15th and 16th. That’s the one in Texas. And, uh, Utrecht, the one near Amsterdam in the Netherlands. Uh, it’s a hamlet and it’s beautiful.

Uh, October 1st and 2nd. But let’s, uh, let’s do a, a, a short video here for me to test my microphone with. All right. So, uh, a lot of the times when I’m teaching about dynamic SQL, I like to say things like, if you want to properly parameterize your dynamic SQL, you have to use sp_executesql and you have to feed it some parameters and you have to feed it some, some values to, to, to substitute those parameters with.

And that, that’s a great way to avoid SQL injection because everyone should be trying to avoid SQL injection. It’s unpleasant, gets, gets you fired. It might get your company to go out of business.

There are all sorts of terrible things that can happen when you are SQL injected. So, uh, I, I do, I do recommend avoiding that. Eschew SQL injection at all costs.
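The properly parameterized pattern described above looks like this (the table and values here are illustrative):

```sql
DECLARE
    @sql nvarchar(MAX) = N'
SELECT p.Id, p.Title
FROM dbo.Posts AS p
WHERE p.PostTypeId = @PostTypeId;',
    @PostTypeId integer = 8;

/* parameters are declared and bound, so values are never
   concatenated into the executable string */
EXEC sys.sp_executesql
    @sql,
    N'@PostTypeId integer',
    @PostTypeId;
```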

But there is kind of one funny circumstance where you can not use sp_executesql and you can still somewhat parameterize your dynamic SQL. So we, what I’m going to do is, well, what I’ve done already is, uh, this, this only works with linked servers, at least that I’ve ever seen. Um, there might be some really insane workaround where it works otherwise, but hey, what do I know?

Uh, I know T-SQL with Erik. It’s apparently what I know. But I’ve already added a loopback linked server to my very own server. Uh, I am, uh, for the, for the most part, SQL Server monogamous.

So, uh, I, I only work with one version and edition of SQL Server at a time. Though, I guess, I guess, I guess these days I am philandering a bit with SQL Server 2025, but, uh, we don’t, we don’t need to talk about that publicly. We can save myself a little shame and embarrassment here. But this linked server will allow me to do something kind of funny.

This linked server will allow me to, uh, declare some SQL and I’ve, I’ve got, uh, I’ve got my, my string that I want to execute, uh, being set and assigned right here between these two things. And I’ve got this, uh, ID local variable set to eight. Now, uh, in my where clause in my dynamic SQL, I’ve got a question mark, right?

And it’s sort of like that crappy stored procedure, sp_MSforeachdb, where the question mark is the database name. And you have to say, like, USE bracket question mark thing to, to get into different databases. It’s kind of like that.

But what I can do is I can use, uh, I can use the, the less safe version of executing dynamic SQL. And I can say EXEC, uh, @sql, and I can pass in the ID as a second thing. And that second thing is going to act as a parameter replacement for that question mark.

But I have to use EXEC ... AT. I don’t know why ZoomIt has forsaken me like that, but I have to, I have to execute this at the linked server. And when I do that and, and I, you know, go through great pains to use the right database and everything. But when I, when I run all this, uh, this server talks to itself and it returns the thing that I wanted, which is post type ID eight from the post types table.
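Putting the pieces together, the linked-server trick looks roughly like this sketch (the loopback linked server name `[self]` and the database name are assumptions, not from the video):

```sql
/* ? is the positional parameter marker in pass-through queries */
DECLARE
    @sql nvarchar(MAX) = N'
SELECT pt.Id, pt.Type
FROM StackOverflow.dbo.PostTypes AS pt
WHERE pt.Id = ?;',
    @id integer = 8;

/* the extra argument after the string replaces the question mark;
   EXEC ... AT sends the batch to the (loopback) linked server */
EXEC (@sql, @id) AT [self];
```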

So anyway, thank you for joining me for Erik Tests a New Microphone. I hope you enjoyed yourselves. I hope you learned something.

I hope you like my new microphone because, uh, I don’t know what to get if this, if you don’t like it, perhaps, perhaps you can recommend one to me. Um, it just, it has to plug into one of these things and it has to have a very special connector like this. So if you’ve got recommendations along those lines, um, well, I’m, I’m two ears and one mouth, but that, that, that expression doesn’t really, doesn’t really resonate much.

Does it anyway? Thank you for watching.


Spurious Left Join Logic




Stubborn NOT EXISTS Ordering



Video Summary

In this video, I delve into the fascinating world of stubborn `NOT EXISTS` ordering in SQL Server queries. You might be wondering why I’m wearing a headset that’s making me look like an air traffic controller—well, it’s because my fancy wireless microphone setup broke, and I’m waiting for replacements to arrive. This quirky situation has led to some humorous moments, but it hasn’t stopped us from diving deep into SQL Server optimization techniques. I use dynamic SQL and XQuery to generate random `NOT EXISTS` predicates and analyze their execution order in the query plan. By running this process multiple times, we can observe how the optimizer handles these clauses and whether reordering them might improve performance. So, grab your headphones (if you have any) and join me as we explore this intriguing aspect of SQL Server’s query optimization!

Full Transcript

Erik Darling here with Darling Data, and you might be wondering why I’m wearing a headset. Well, it’s a funny story. It turns out when you spend like 800 bucks on a fancy wireless microphone setup, this thing is fine, but the microphone that plugged in here, they don’t send you a backup of one of those. And it turns out that the little wire thing that you usually saw on my shirt about right here is very fragile. And if you try to adjust it, it’ll just snap. And so I’m waiting for my replacement ones to show up. Unfortunately, my scheduled YouTube videos are going to run out before they arrive tomorrow. So we’re going to do a few of these with the old headset, and I’m going to look like a goofy air traffic controller. While I talk to you about SQL Server. So I hope you I hope you can survive these trying times, my friends, where Erik Darling is wearing a headset, because I’m having a tough time with it. Honestly, I feel like I look stupid in this thing. Anyway, let’s talk in this video about stubborn not exists ordering. And by stubborn not exists ordering, well, of course, what I mean is the order that like SQL Server’s optimizer is famous.

For being able to take a query and do all sorts of stuff with it, play all sorts of tricks, like do fun things to, like, reorder joins. But one thing that it doesn’t do, at least that I can’t ever get it to do, is reorder the order of NOT EXISTS predicates, or NOT EXISTS subqueries, whatever you want to call them, in a query. And I think it’s, it’s kind of amusing the way that it doesn’t do this. If there’s a message here, it’s that if you’re going to write a query where there are multiple NOT EXISTS checks, you may want to spend a little time figuring out if the order of them improves query performance at all.

I’m gonna, I’m gonna, I’m gonna give you a little spoiler here for our query: it doesn’t make much of a difference. It’s gonna run in two to three seconds anyway, because I did, I did many things right along the way. But for you out there in real, real user-space land, it might, it might actually make a difference. So before we do that, haha, we need to shill ourselves, shill all over ourselves. If you appreciate this SQL Server content, even when I’m wearing a headset, which I know is hard.

You can sign up to become a member of the channel. And you can, for as few as $4 a month, support a starving SQL Server consultant. If you, if you are also starving for some reason, maybe AI took your job already, I don’t know. You can do all sorts of fun stuff to help this channel thrive and survive. And it won’t put food in my stomach, but it’ll put a smile on my face. You can like, you can comment, you can subscribe. And if you want to ask questions privately that I will answer publicly during my office hours episodes, you can do that at this link. All of the things that I talked about are very conveniently linked for you down in the video description. You don’t even have to think much about it. You can just click randomly on things until something works. Just, just like with SQL Server. It’s a good time. If you need consulting from a guy in a headset.

I am available as a SQL Server consultant, not just, not just a pretty face on YouTube. I could be a pretty face on a Zoom call for you to do health checks, performance analysis, hands-on tuning of whatever you need tuned. Fixing your SQL Server performance emergencies and, of course, training your developers so that you don’t have those emergencies anymore.

It’ll be a nice good time for you. I promise everything will go your way. You will you will you will be endowed with the the luck of the luckiest civilization out there. If you would like to get some training from me, I have 24 hours of performance tuning training.

You can get it all for about 150 US dollars. No tariffs on that. I promise we’re staying tariff-free here in the Darling Data world. You go to that link, you put in that discount code, and you’ll get the everything bundle for $150.

It’s nice. I also have a new T-SQL course that is currently publishing as we speak. I have finished the read query portion and the modification query portion. Next up will be isolation levels and then programmability.

So it is half done. At least the beginner portion of it is. The advanced portion will come out after the summer. The pre-sale price until all the advanced material is out is $250. It will go up to $500 once the course content is fully, fully released.

Just so you know, if you are attending PASS Data Community Summit in Seattle this November, and you are attending the pre-cons that Kendra Little and I are doing on T-SQL, this is companion material to that content.

So if you attend the pre-cons, you will get access to this content as part of your admission to those. Speaking of speaking. Ho-wee.

Boy, are my arms tired. PASS is going on tour. It’s the Red Gate Roadshow and they have cordially invited me to go to all of these things. New York City, August 18th and 19th.

Dallas, September 15th and 16th. And Utrecht, which is a hamlet near Amsterdam, October 1st and 2nd. And of course, this is all leading up to PASS Data Community Summit taking place in lovely Seattle, Washington, November 17th to 21st, where Microsoft has indefinitely canceled their Build conferences and has relocated them to sunny Las Vegas.

So, I don’t know, maybe, maybe PASS will move to Vegas too. Who knows? Anyway, with that out of the way, what I want to show you is, like, like, A, I’m going to prove my point.

But B, I’m going to show you how I prove points like this to myself. It’s often quite a process. It’s quite an endeavor, quite a chore to do these things.

But they help me. They help me understand things and they help me see things how they really are. So, what I did in order to sort of prove this out was I used my best friend, Dynamic SQL, and my other best friend, XQuery.

And what I did was I built up Dynamic SQL in a way that I could grab the query plan for the query that ran, have that query generate not exist clauses in random orders, and then give, like, show me the query that ran in the order that the clustered index things happened in on which tables, so that I could match up the order of the not exist subquery clause predicates, and the order that things happened in the query plan.

It was all very tedious. But when I got it right, it was very exciting. So, I’m going to walk you through the Dynamic SQL portion, and then I’m going to run this a few times, and then I’m going to show you the results.

All right? Cool. I hope you enjoy it. So, we have some variables up here, some local variables. These are not the formal parameters for the dynamic SQL; they're just locals that we're going to use to hold various things. This one will hold the dynamic SQL we're going to execute.

This is going to hold the parameters for the dynamic SQL. This is going to hold an output parameter for the dynamic SQL. Why ZoomIt just dissed me like that live on YouTube, I don't know.

These are the parameters that we are going to pass into the Dynamic SQL. This is going to, again, this is going to be the output thing for this, and this is going to be the output thing for this down here.

Maybe I could have organized those slightly better. And we’re going to use this replacement thing here. This replacement thing is going to come in handy in a moment, which we’ll get to it. Just remember, there is a local variable called replacement.

This is going to be our general Dynamic SQL setup. We are going to set this every time, and we have this sort of, we have this thing over here that says replacement.

This is going to act as our token. This is the thing in this batch that we are going to replace with our varying NOT EXISTS predicates. We also have this lovely piece of SQL in here, and this lovely piece of SQL is going to get the execution plan for the session that's running it, which works.
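The exact plan-grabbing snippet isn't quoted in the transcript, but one common way to have a batch fetch its own in-flight execution plan, a sketch rather than necessarily the code from the video, looks something like this:

```sql
/* Hedged sketch: grab the plan for the request currently running
   on this session. The DMV and function are real; the surrounding
   shape is an assumption about the demo code. */
SELECT
    @query_plan =
        deqp.query_plan
FROM sys.dm_exec_requests AS der
CROSS APPLY sys.dm_exec_query_plan(der.plan_handle) AS deqp
WHERE der.session_id = @@SPID;
```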

I assure you, this is correct. As crazy as it looks and sounds, this is correct. So what we do is we set that, remember that replacement variable I told you that was very, very important.

We use this replacement variable and we set this, and we're going to use this brand spanking new, fresh off the factory lot STRING_AGG function that came out in SQL Server 2017. And we're going to use this instead of more XML.

There’s this XML down below. So don’t worry, all of your fetishes will be satisfied. And we are going to aggregate this column with an empty space. And we are going to say within group order by V.O.

You’re probably asking yourselves, what are V.C and V.O? Those are fantastic thoughts to have. And they are wonderful questions to have answered.

So this values clause generates two columns. One of them is this NCHAR(10) concatenated with a NOT EXISTS. And you'll notice that each one of these NOT EXISTS clauses has a different table in it.

We have badges, comments, posts, and votes. And then we have this other column that it generates with NEWID(). NEWID() is how we get the random ordering.

So this values clause, which is aliased as v, has the c column, which is where we're holding the NCHAR(10) and the NOT EXISTS. And then the o column is where we're holding the NEWID().

So when we run this all together, we're going to concatenate all of these NOT EXISTS clauses ordered by the NEWID() values that get generated in here. And we are going to get randomly ordered NOT EXISTS subquery predicates.
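Based on that description, the randomizer looks roughly like this sketch. The predicate text, join columns, and variable name are assumed from the Stack Overflow demo, not copied from the video:

```sql
/* Sketch: build randomly ordered NOT EXISTS predicates.
   NCHAR(10) puts each predicate on its own line;
   NEWID() in the o column supplies the random sort order. */
DECLARE @replacement nvarchar(max);

SELECT
    @replacement =
        STRING_AGG(v.c, N' ')
            WITHIN GROUP (ORDER BY v.o)
FROM
(
    VALUES
        (NCHAR(10) + N'AND NOT EXISTS (SELECT 1/0 FROM dbo.Badges AS b WHERE b.UserId = u.Id)', NEWID()),
        (NCHAR(10) + N'AND NOT EXISTS (SELECT 1/0 FROM dbo.Comments AS c WHERE c.UserId = u.Id)', NEWID()),
        (NCHAR(10) + N'AND NOT EXISTS (SELECT 1/0 FROM dbo.Posts AS p WHERE p.OwnerUserId = u.Id)', NEWID()),
        (NCHAR(10) + N'AND NOT EXISTS (SELECT 1/0 FROM dbo.Votes AS v2 WHERE v2.UserId = u.Id)', NEWID())
) AS v (c, o);
```

Every execution shuffles the four predicates, because each NEWID() call produces a fresh random GUID to sort by.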

Okay, cool. After that, we are going to replace, in the dynamic SQL portion, the replacement token with that string that we just generated and assigned to the replacement variable.
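Pieced together from the description, the token swap and execution might look like this sketch. The token text and variable names are assumptions, not the exact code on screen:

```sql
/* Sketch: swap the token for the generated predicates, then run
   the dynamic SQL, capturing the plan via an output parameter.
   '{replacement}' is a hypothetical token name. */
SET @sql = REPLACE(@sql, N'{replacement}', @replacement);

EXEC sys.sp_executesql
    @sql,
    N'@query_plan xml OUTPUT',
    @query_plan = @query_plan OUTPUT;
```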

We're going to execute our query and we are going to output the c and the query plan. All right. The c up here is just, of course, the result of the COUNT_BIG.

We don’t actually do anything with this. We just throw it away. But the query plan, we do something very exciting with. You’re ready for some XML.

You look like you're ready for some XML. So I'm going to give you some XML here, some XQuery for you. We're going to print the query that runs. All right.

And then we’re going to delete some XML. Have you ever seen this before? Have you ever seen a delete from XML? We’re going to use this thing. And we’re going to.

So back story on why this is necessary. When the query up here runs, there are two query plans that get generated. There's a query plan for this query, which has the NOT EXISTS stuff in it. And then there's a query plan for getting the query plan.

Couldn’t make this stuff up. Right? Could not make these things up. So what we need to do is we need to preserve the first query plan and delete the last query plan. And apparently this works to do that.
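A sketch of that delete, using the xml type's modify() method with XML DML. The exact node to delete depends on the plan's shape, so treat the path here as an assumption:

```sql
/* Sketch: keep the first plan in the XML and delete the last one,
   which is the plan for the plan-grabbing query itself. */
SET @query_plan.modify
(N'
    declare namespace p = "http://schemas.microsoft.com/sqlserver/2004/07/showplan";
    delete (//p:StmtSimple)[last()]
');
```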

Isn’t that insane? The second thing we need to do with XML is shred our query plan. Remember that little, that, that output thing called query plan that we, that we assigned the query plan to? Well, we’ve got something interesting to do here.

We have to select from a variable the XML nodes for all of the clustered index scans. What we're going to pull out of that is the node ID, which is going to tell us the order that things happened in the query.

We have the table name, which is the thing that got clustered index scanned. And then I'm also going to select the query plan here so that we can validate our results. Okay.
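The shredding piece might look like this sketch. The showplan namespace is real; the exact attribute paths are assumptions that depend on the plan's shape:

```sql
/* @query_plan is the xml output parameter captured earlier;
   declared here only so the sketch parses on its own. */
DECLARE @query_plan xml =
    N'<ShowPlanXML xmlns="http://schemas.microsoft.com/sqlserver/2004/07/showplan" />';

WITH XMLNAMESPACES
(
    DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan'
)
SELECT
    node_id = x.c.value('@NodeId', 'integer'),
    table_name = x.c.value('(*/Object/@Table)[1]', 'sysname')
FROM @query_plan.nodes('//RelOp[@PhysicalOp = "Clustered Index Scan"]') AS x (c)
ORDER BY
    node_id;
```

Ordering by the NodeId is what lets the results line up with the order that tables were touched in the plan.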

All good. This is great. This is wonderful. I am very happy with all of this. We're not going to do anything with this part; this was me just making sure that I was right about things. I ran this quite a bit to see if I could find any real big outliers that would say, hey, look, this is a big performance thing, but there really weren't any with these.

Anyway, what we need to do now is come back up to the top, and we're going to run SET NOCOUNT ON once so that we get connected. And look how nice this new SQL Server Management Studio 21 connect dialog is. We have this futuristic thing.

We can even tell it what database we want to connect into. No one look at my password. Of course, this is highly, highly confidential information. Don't look at the password. But we can tell it exactly which database to go to, so I don't accidentally connect to master and hit an error the first time.

Fantastic. Thanks, Aaron. Anyway, we are all connected there. And now let's run this once and let's just see what happens. My VM may have restarted last night.

So this first run might take a couple of seconds because we might have to read some stuff from disk into memory. Remember, kids: reading stuff from disk into memory is terribly slow. Keep your data in memory and your queries will always be faster.

So, now that we are finished, now that we're down here: we don't want to hang around up there. We want to hang out down here, because this is the query that ran and gave us some stuff.

So we have the node ID, and I'm going to prove all this out to you. The node ID is the order that tables were accessed in.

We have the table name, and here's the query plan. Over in the messages tab, we have the query that ran. So all of these queries start with the users table, right? So in all of these, users ends up being first.

Then we have our NOT EXISTS subquery predicates. And we see the order that these happened in here. So we have badges, comments, votes, posts.

All right. B, C, V, P: badges, comments, votes, posts. If we come back over to our results, we have users first, because that was the from clause. And then we have badges, comments, votes, posts.

Okay. Well, do we know if these, how do we know these node IDs aren’t lying to us? Well, let’s look at the query plan. We have users.

Ta-da. We have badges. Ta-da. We have comments. Ta-da. We have votes. Ta-da. And we have posts.

Again, ta-da. If we hover over any of these, we’ll see the node ID down here in this little tooltip. So this is node ID 14. This is node ID number 12.

This is node ID number 10. And this is node ID number eight. Ah, we got it.

And this is node ID number six. All right. So I think now we've kind of proven that the setup is a valid test. Well, here's our six, eight, 10, 12, 14, right?

Just like we saw up there. Let’s run this a few times and see stuff in some different orders. So now we got a completely different order on this one. Now we have posts, badges, votes, and comments.

And now these are no longer even numbers. Now these are odd numbers. We have seven, nine, 11, and 13. So remember: users, posts, badges, votes, comments. If you look over here, here's... oh, go away, thing.

Here we have users, posts, badges, votes, comments. So the order matches again. If we keep running this, and I’ve done this hundreds and hundreds of times, like the number of loops I’ve written around this to do this over and over again is absurd.

And every time we validate this output. Here's users, votes, posts, badges, comments. So V, P, B, C over here. What do we have in the messages tab?

V, P, B, C. Every single time. So coming back to the main point: when you are writing queries with NOT EXISTS, SQL Server's optimizer will do, let's just, I won't say nothing.

Cause who knows, maybe like it almost does something, but then changes its mind and is just like, nah, I don’t think so. Hmm.

It's not for me. SQL Server's optimizer does not really make any attempt to reorder NOT EXISTS predicates. So when you are writing queries that have NOT EXISTS in them, some tables might be cheaper to access and do a NOT EXISTS against than others, and some might narrow rows down much sooner.

So always be very careful and cautious about the order that you write your NOT EXISTS predicates in, because you might be able to get your queries to run much faster just by reordering them. SQL Server won't do it for you.
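The shape of the test query, assuming the usual Stack Overflow schema from the demo (join columns here are assumptions):

```sql
/* The optimizer keeps these NOT EXISTS predicates in the order
   written, so leading with the table that eliminates the most
   rows can pay off. */
SELECT
    c = COUNT_BIG(*)
FROM dbo.Users AS u
WHERE NOT EXISTS (SELECT 1/0 FROM dbo.Badges   AS b  WHERE b.UserId = u.Id)
AND   NOT EXISTS (SELECT 1/0 FROM dbo.Comments AS co WHERE co.UserId = u.Id)
AND   NOT EXISTS (SELECT 1/0 FROM dbo.Votes    AS v  WHERE v.UserId = u.Id)
AND   NOT EXISTS (SELECT 1/0 FROM dbo.Posts    AS p  WHERE p.OwnerUserId = u.Id);
```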

Anyway, that’s all I have for this one. Thank you for watching. I hope you enjoyed yourselves. I hope you learned something. And I will see you in the next video where I will still be wearing this atrocious headset until my little clippy thing shows up.

It’s not an actual clip, like a tie clip or like a clippy, like the paper clip guy. It’s just a little like, like clippy microphone thing, which I don’t know. I might start wearing for formal occasions as well with my Adidas tuxedo, because that’s how I roll.

All right. Anyway, I guess that’s good for this one. Again, thank you for watching. I hope you enjoyed yourselves. I hope you learned something and I will see you over in the next video. All right.

It’s magic. Ta-da! Ta-da!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. Blog readers get 25% off the Everything Bundle — over 100 hours of performance tuning content. Need hands-on help? I offer consulting engagements from targeted investigations to ongoing retainers. Want a quick sanity check before committing to a full engagement? Schedule a call — no commitment required.

My Upcoming Speaking Schedule

Busy Summer


The nice folks at Red Gate have decided to put me to work.

That means I’m going on tour, and maybe getting some socks and a Hawaiian shirt.

No word on a “Lego Erik” yet.

PASS On Tour Events:

PASS Data Community Summit:

Of course, Kendra Little and I are back in action to teach back-to-back T-SQL precons.

See you out there!


SQL Server Performance Office Hours Episode 20




To ask your questions, head over here.

Hello Mr. Erik, I’ve attended your tuning course inside a theater in Seattle years ago! If I may ask a question about the future of SQL. I heard in the past that Joe Sack left MS to MongoDB and now he has returned. That genius guy still working with improvements of sql whatever “onprem” or azure ?
If you don’t mind, could you tell us more about the stories behind your tattoos? What do they represent, and how many do you have?
Without parameterized queries, how would you suggest to decide which queries to tune?
Hi Erik! Are your educator skills just natural talent or do you have any good sources for improving that?
Give me the case against partitioned views.

Video Summary

In this video, I dive into the world of Office Hours with Darling Data, where we tackle a variety of SQL Server questions from viewers. We start off by addressing the future of SQL and Joe Sack’s recent career moves, which sparked some interesting discussions. Then, I share insights on deciding which queries to tune without parameterized queries, introducing a new feature in SP Quickie Store that helps analyze query hash totals. The conversation continues with personal questions about my tattoos and teaching skills, offering unique perspectives on how these aspects of life have influenced my work. Finally, we wrap up by discussing partition views, sharing both the benefits and challenges they present. It’s always great to engage directly with our community and hear from you all!

Full Transcript

Erik Darling here with Darling Data. And we are once again greeted with the background. So we are once again doing Office Hours! Kaboom! If you would like to ask your own questions for Office Hours, this is the link to do it. It's down in the video description. Likewise, if you would like to support my channel and give me money to keep talking, you can do that. If there were a way for you to pay me to stop talking, perhaps there would be more generosity from the greater public. I am open to that. I can be bought. I'm not beyond that. My morals and my ethics do not extend that far. So if you would like to pay me to keep talking, you can do that. If you would like to pay me to stop talking, shoot me an email. We can work something out. If you like this channel but not in a money way, you can like, you can comment, you can subscribe. And if you would like to hire a consultant to do SQL Server stuff because you're having trouble with SQL Server stuff, guess what? This total package here does SQL Server stuff. Health checks, performance analysis, hands-on tuning, dealing with SQL Server performance emergencies, and of course, training your developers so that you have fewer SQL Server performance emergencies. Right down to zero SQL Server performance emergencies. I can do all those things. Not we, I. There's only me here. You only get this face. There's no substitute face that shows up that doesn't know who you are or what you do.

If you would like to get my performance tuning training material, you can get all 24 hours of it for about 150 USD for life. Again, link in the video description. My new T-SQL course, which I finally fixed this slide for: videos will start dropping in June. You, of course, get the pre-sale price until the advanced material shows up after the summer. This is companion material to the pre-cons that Kendra Little and I are doing in Seattle this November. So if you are attending those, you get access to this material with the price of the pre-cons. If you go to PASS and you don't come to the pre-cons, I don't care. Right? Like, you have to show up for me and Kendra for me to care. Again, to speak more about the live and in-person stuff: PASS on tour. Boy, this is going to be fun. New York, Dallas, and Amsterdam. August, September, October. I will be at all three of them. And of course, I will be at PASS Data Community Summit in Seattle this November, doing the aforementioned pre-cons. So we will have a grand time with that, won't we? But with that out of the way, let's do these office hours questions. Let's zoom, zoom, zoom. What do we have here?

Hello, Mr. Erik. Hello, you. I'm not sure how to address you. I've attended your tuning course inside a theater in Seattle years ago. A theater, you say. If I may ask a question about the future of SQL: as much as I am not a psychic, I'll do my best. I heard in the past that Joe Sack left MS for MongoDB and now he has returned. That genius guy is still working on improvements to SQL Server, whether on-prem or Azure? So he did come back.

Joe Sack did go to MongoDB for about a year or so. And then he was back at Microsoft. And he was back at Microsoft for about a year and a half. And, you know, it would be inappropriate of me to comment on Joe’s situation at Microsoft. But Joe was sort of unhappy with what his role had morphed into. And so Joe went to work at another database company called Elasticsearch.

So Joe is now some sort of head honcho, not sure what a head honcho is, over at Elasticsearch. Doing a great job there, kicking butt. They are very, very lucky to have him. You know, I miss him dearly working on SQL Server, but it was not meant to be.

All right. Ooh, a personal question. Look at us go here. If you don’t mind, could you tell us more about the stories behind your tattoos? What do they represent and how many do you have?

Well, the stock answer that I have when someone asks me how many tattoos I have is all of them. Because I’m pretty well covered. But the thing where I depart on having stories behind my tattoos is that I got nothing.

All right. Like really, most of them mean absolutely nothing to me. There’s no story. There’s no meaning. There’s no like heartfelt life event that led to me getting them. Like I got a couple like wife and kid name tattoos, but those are just sort of like if I didn’t get them, like they’d be mad.

Right. That's about it. It's just, you know, I figured out at a very young age that I was the type of gentleman who wanted attention from the type of lady who had a lot of tattoos. And I realized that the best way to get that attention was to get tattoos. And I was lucky enough to make friends with some tattoo artists, like lifelong friends of mine who were just starting out doing tattoos, who have gone on to be really good at tattoos and own tattoo shops.

But I have all their like starter work. So I have a lot of really old tattoos right now that they were just like, hey, I want to do this tattoo to practice. Can I do it on you? And I was like, dope. I’ll buy burritos.

So most of these tattoos have no real meaning, no real story, maybe just a thing that I kind of liked at the time where the tattoo artist was like, oh, I want to do this Japanese thing today. I’m like, I don’t have any Japanese stuff. Let’s do it. So like, you know, I just got covered with a lot of stuff that means nothing to me very quickly.

And guess what? It worked. May not be the most thoughtful thing in the world, but it was highly effective and painful, painful and effective. So anyway, well, we’re on the subject of pain.

Without parameterized queries, how would you suggest to decide which queries to tune? So this is actually a neat question because I recently added, well, I don’t know, I’m actually, because of how busy I’ve been recently, I have no concept of like actual time. Months have gone by where I’m like, where are we?

But I sort of recently added a new parameter to sp_QuickieStore to help you decide if queries are worth tuning. And that parameter is called @include_query_hash_totals. There are underscores in there because I like putting underscores in things.

To make them readable. I don't like the uppercase, lowercase thing. It makes me feel cramped and crowded and it gets me all claustrophobic feeling. But I added this parameter because what I would find is I would run sp_QuickieStore.

And like there would be this whole like list of queries that looked similar, but would have like one or like just a couple, maybe like five, six executions. And people would be like, this doesn’t run that much. I don’t want to spend time on it.

So I put this @include_query_hash_totals parameter in. And what that does is it looks at the query hash. So if you have unparameterized queries that are effectively running the same thing over and over and over again. Right.

It's the same query. You just have different dates, or a different name, or a different number of IN clause things. All the stuff that gets you the same query hash, but different query plans.

The text of the query ends up effectively the same. So it counts all of that up and it gives you totals for CPU and duration and executions and all the other metrics that are in sp_QuickieStore's output, at the query hash level.

So like you might find a very sneaky query that looks like it’s only executed once, but the query hash tells you otherwise. If you look at the query hash, it’s like, wow, this thing actually executed like 7,000 times. We just only see this one, like we just see this one example of it in the output.

So when queries aren't parameterized, that's the option that I use in my tooling when I want to figure out if something is worth going after. It's still worth using sp_QuickieStore, and it's still worth figuring out if the query is meaningful to the workload. There are other ways you can do that with sp_QuickieStore, too. There's a parameter in there called @workdays.

And I like the @workdays parameter because it'll just look at stuff that's run Monday through Friday, by default nine to five. But there are two other parameters you can use to change the span of hours. What's nice about that is that you automatically screen out all the overnight processes that you might not care about.

If you want to focus on the overnight processes, go ahead and set the start and end times, or use @workdays but go in reverse with the timestamps. So I would probably use some combination of those things to do that.
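Put together, the calls might look like this hedged sketch. The parameter names are from memory of sp_QuickieStore's options; check the procedure's help output for the current signatures, and the database name is a placeholder:

```sql
/* Roll up unparameterized duplicates by query hash. */
EXEC dbo.sp_QuickieStore
    @database_name = N'StackOverflow',
    @include_query_hash_totals = 1;

/* Limit analysis to business hours, Monday through Friday. */
EXEC dbo.sp_QuickieStore
    @database_name = N'StackOverflow',
    @workdays = 1,
    @work_start = '9am',
    @work_end = '5pm';
```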

Okay. Hey, look, another sort of personal question here. Look at that. Are your, hi, Eric.

Hi, you. Nice to meet you. Are your educator skills just natural talent? Oh boy. I don't know if I'd call it that. Or do you have any good sources for improving that? So I have no good sources for improving that. Any ability I have as an educator comes from being dumb.

Right. I am not a naturally smart person. Things don't come quickly to me. I'm like one of those old CD burners that goes at 1x. It burns slow, but it burns deep.

Right. So I don’t learn things very quickly. And it like, so like when I need to learn something, like I need to like really break it down in my head a lot further than people who get things a lot more intuitively. Like people who are much smarter or more clever, whatever it is.

They just look at something and go, oh yeah, I get it. Me? I'm like, no, no, no. Why does this thing go from here to there? I don't get things that quickly.

So I'm good at teaching people because I'm dumb. Right. Like it takes a lot for me to learn something. And by the time I've learned something, I feel like I've learned it very well and in very small pieces, so that when I have to tell someone else about it, all those small pieces are just burned into my CD brain. Right.

My CD brain. So if there's anything natural about it, it's because I'm spinning slow up here. All right.

Give me the case against partitioned views. I don't really have one. I don't really have one. The only real wart that I've found with partitioned views is trying to get things right so that they're updatable. That's a damn nightmare.

That is not a good time. I don't suggest that. It doesn't fit a lot of situations. So if you want to use partitioned views, I say go for it. Just make sure that you get your constraints right, and make sure that whatever process refreshes the view to get new data in there is running at the right intervals and stuff.
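A minimal partitioned view sketch, with hypothetical table and constraint names. The CHECK constraints on the partitioning column are what let SQL Server skip tables that can't contain the requested rows:

```sql
/* Note: to make the view updatable, the partitioning column would
   also need to be part of each table's primary key, which is one
   of the headaches mentioned above. */
CREATE TABLE dbo.Votes_2023
(
    Id bigint NOT NULL PRIMARY KEY,
    CreationDate datetime NOT NULL,
    CONSTRAINT ck_votes_2023
        CHECK (CreationDate >= '20230101' AND CreationDate < '20240101')
);

CREATE TABLE dbo.Votes_2024
(
    Id bigint NOT NULL PRIMARY KEY,
    CreationDate datetime NOT NULL,
    CONSTRAINT ck_votes_2024
        CHECK (CreationDate >= '20240101' AND CreationDate < '20250101')
);
GO

CREATE VIEW dbo.AllVotes
AS
SELECT Id, CreationDate FROM dbo.Votes_2023
UNION ALL
SELECT Id, CreationDate FROM dbo.Votes_2024;
```

With those constraints in place, a query filtered to one year's worth of dates can eliminate the other table from the plan entirely.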

And you're pretty good. You know, I usually like partitioned views quite a bit better than capital-P Partitioning, just because I can index the tables differently. If I add new columns or remove columns, it's easy enough to fix that in the view definition.

When you have proper constraints on the tables to tell SQL Server what data lives where, you can get pretty clean execution plans from it. So I really don't have much against partitioned views aside from trying to get them to be writable, which, again, kept me up for a while trying to get a good demo where they were writable. And I bailed on it.

It was just too much. Too many things went wrong and happened that I did not like nor love. All right. Anyway, I believe that is five questions.

One, two, three, four. Yeah, that's five. I can count to five. I credit most of my ability to count to five to barbell training, because doing sets of five really does get you good at counting to five. All right.

So anyway, thank you for watching. I hope you enjoyed yourselves. I hope you learned something and I will see you in the next video, video, video. So thanks for submitting questions. Submit some more.

All right. That’s not a very good sales pitch, is it? All right. Goodbye.

If this is the kind of SQL Server stuff you love learning about, you’ll love my training. Blog readers get 25% off the Everything Bundle — over 100 hours of performance tuning content. Need hands-on help? I offer consulting engagements from targeted investigations to ongoing retainers. Want a quick sanity check before committing to a full engagement? Schedule a call — no commitment required.

Learn T-SQL With Erik: Common Table Expression Mediocrity



Video Summary

In this video, I delve into the world of Common Table Expressions (CTEs) in SQL Server, often highlighting their perceived benefits and the reality of their implementation. I start by addressing a common misconception—that CTEs make queries more readable. Through examples, I demonstrate how what might seem like a well-structured query can actually be a disaster, leading to errors when executed. The video then transitions into an in-depth look at SQL Server’s handling of CTEs, emphasizing the lack of materialization and the potential for performance issues due to repeated execution. I provide practical advice on when and how to use CTEs effectively, suggesting that in many cases, dumping their results into temporary tables can significantly improve query performance. Throughout the video, I share my frustration with SQL Server’s limitations and the workaround nature of using CTEs, ultimately aiming to help viewers navigate these challenges more confidently.

Full Transcript

Erik Darling here with Darling Data. In today's video we're going to get back to the T-SQL learning material. This is of course the beginner stuff, so if you fancy yourself far beyond the beginner realm of learning in this area, you can feel free to go twiddle your thumbs elsewhere. But we're going to talk about CTEs today. And this is one of my favorite things to talk about, because I like watching the balloons deflate when we start talking about just how mediocre SQL Server's implementation of CTEs is. You know, instead of getting some basic useful database functionality, we get giant monolithic failures like Fabric foisted upon us, and I don't know, I guess let the layoffs continue. All right, good job all around. So one of the first things that every LLM-generated idiot on the internet likes to say about CTEs is how they make queries more readable. Oh, they're so readable. Look how readable the query is with the CTE. My goodness. Let's finger point, rocket ship, green check, fire emoji our way to fame and fortune. One wonders if these people have an assistant to remind them to have an assistant to remind them to breathe. But who knows. So here's a query that uses a CTE. We can identify quite easily that it is a CTE because it starts with WITH. It does that. It doesn't need a semicolon, does it? But this query is completely illegible. There is nothing readable, understandable, or even tolerable about this query. It is a disaster. If someone sent this query to me, I would throw them out a window. If you don't have respect for me, that's one thing, but at least have some self-respect when you write these things. Even better: if we attempt to run this query, we will get an error.

And, you know, the error is very clear. But where we would go about remedying this error is not terribly clear based on this syntax, is it? So please format your queries and they will be readable. If one does not format one's queries, they will not be readable no matter how many CTEs one dispenses with. So with that out of the way, let's talk about SQL Server's utter mediocrity with CTEs. Now, the big thing is that even though the word table is in the name, the result of your query is only tabular. It is not materialized to a table. There is nothing stable about the result of your query. There are all sorts of things that can come up, especially when one starts pondering isolation levels and the timing of operations in a query plan, that could set one's head on fire if one spent too long on them.

So it’s quite a dizzying array of issues that you could run into. But that’s getting a little bit in front of things. That’s getting a little bit in front of things. The main thing is that the query is not materialized. Even if you put a top in there or something, which does provide some like logical fencing of stuff. If you watch the unnesting video that I recorded a few days ago, you’ll see that I use top in there to prevent some unnesting. In a similar way, you could do that with a CTE.

However, that result is still not materialized. And what I mean by that is that every time you reference the CTE, generally in the outer scope of the query (and by outer scope, I mean the part of the query after the CTE has been defined), you have to run the query inside of the CTE. An easy way to see what I mean is by just getting an estimated plan for this one, where there's one reference to the CTE, and there's only one time in the query plan when we touch the Users table and generate the query plan operators to create our row number.

We're calling the ROW_NUMBER function here. And there's only one filter to remove any rows where the row number is not between 1 and 100. We have to contrast that a little bit with a slightly different query, where our from clause now joins the CTE, c1, to itself. They, of course, have different aliases, because you can't alias the same thing the same way twice. You just get an error that says, hey, you already did that.

No need to do that again. So if we now get the estimated plan for this query, we will see that we have two copies of the CTE being referenced, and each time it gets referenced, the query inside it gets executed. We touch the Users table twice. We do all the stuff that we need to generate the row number twice in this query plan, right?

We have a lot of things that repeat in here, and we have two filter expressions, one for each time that we filter on the row numbers down here in the where clause outside of the CTE. And this is a very general pattern that you will see over and over again if you are the type of person who uses CTE and then re-references that CTE multiple times in the resulting query. So be very careful with this. If the thing in your CTE is small and compact and easy enough to run, you might never have a problem with it.
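A minimal sketch of the double-execution pattern, assuming a Stack Overflow-style Users table — the names and the self-join are illustrative, not the exact demo query from the video:

```sql
/* Each reference to c1 in the outer query re-runs the query inside it,
   so the Users table shows up twice in the estimated plan. */
WITH c1 AS
(
    SELECT
        u.Id,
        u.DisplayName,
        n = ROW_NUMBER() OVER (ORDER BY u.Reputation DESC)
    FROM dbo.Users AS u
)
SELECT
    a.DisplayName,
    b.DisplayName
FROM c1 AS a
JOIN c1 AS b
    ON a.n = b.n + 1  /* self-join: two scans of Users, two row number calculations */
WHERE a.n BETWEEN 1 AND 100;
```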

But if you find that queries which exhibit this pattern slow down considerably, strongly consider dumping the result of your CTE into a pound sign or hash sign temp table, and then using that temp table in your outer query instead. Now, re-referencing is not the only way that the lack of materialization can cause an issue.
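A sketch of that temp table rewrite, using the same illustrative names — the query body runs once to populate the temp table, and the outer query reads the materialized result instead:

```sql
/* Materialize once... */
SELECT
    u.Id,
    u.DisplayName,
    n = ROW_NUMBER() OVER (ORDER BY u.Reputation DESC)
INTO #ranked
FROM dbo.Users AS u;

/* ...then reference the temp table as many times as you like;
   Users is only scanned once, up above. */
SELECT
    a.DisplayName,
    b.DisplayName
FROM #ranked AS a
JOIN #ranked AS b
    ON a.n = b.n + 1
WHERE a.n BETWEEN 1 AND 100;

DROP TABLE #ranked;
```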

Quite often, even if you stack CTE together — a whole chain of them, with a different query in each one — and in the outer query you only reference each CTE once, but you join the results together in some way, either with a traditional join or an IN subquery or NOT IN or EXISTS or NOT EXISTS or anything like that.

Cardinality estimation gets very, very difficult when you start combining all those things together. The reason for that is that cardinality estimation can be difficult enough on its own. If you think about a query plan, and you read it from left to right, you of course sort of get the logical flow of the query — how things got to where they needed to get to.

But if you start at the outer edge of the query plan — by that I mean the stuff that’s kind of behind my big head, these things over here — cardinality estimation can be tough even there, if you have a rather complex set of predicates against the table, or just confusing, weird OR stuff going on over and over again. But at the outer edge, cardinality estimation might still be okay.

But as you start moving across the plan, when you start attempting to join complex expressions together — as you get further to the left in your query plans — that’s where cardinality estimation generally tends to fall apart. And that’s where materializing results into temp tables can be very valuable. Because even if SQL Server messes up cardinality estimation completely in the query that populates the temp table, once that result is materialized, SQL Server has every opportunity to do better cardinality estimation with that physically materialized result.

Remember, SQL Server’s optimizer is cost-based, right? Along the way, it figures out all these different plan shapes and candidate plans and, you know, substitutes different operators to do different things and reorders joins and all sorts of crazy mathy stuff. And as SQL Server tries out these candidate plans, you’re going to see a lot of weird Franken-plan mix-and-match stuff where it’s like, oh, this is cheap.

Oh, now put in this cheap part. Okay, now put in this other cheap part. So cardinality estimation can get really, really wonky in big, complicated plans, because all of a sudden they’re sort of stitched together from these cost-based choices. And things can really start misaligning. So it’s not just the lack of materialization with the CTE that can be painful. Even if you don’t re-reference a CTE, stringing together a whole bunch of complicated ones can also just make life weird. So I tend to avoid that as much as possible. Here’s a rather uncomplicated example — not a bad cardinality estimation example, just something to show you that the lack of materialization doesn’t always come back to hurt you with the query being rerun.

This is just a simple stacked CTE where we have c1 here, and we run the query in here. And then down below it, we have c2, which references c1, and then we select from c2 in the outer part. But what’s nice is that c2 doesn’t mess this up. Stacking CTE doesn’t result in the Users table being hit twice, or the query inside the initial CTE getting executed twice. We only have that once in the stacked CTE list here. And actually, something that I think is kind of nice about this one: you’ll notice that in here, we generate our row number, right? And down here, we filter on that row number between 1 and 1000. But then in the outermost query, the outermost scope, we filter to where the row number is between 200 and 500, right? Which is a narrower range of values than 1 to 1000.

SQL Server only chooses to filter once — we don’t have an intermediate filter and then a secondary filter. SQL Server just does one filter to where it’s between 200 and 500. So the optimizer does some work and just kind of throws the intermediate portion out. It says, we don’t actually need you to do anything, because there’s no benefit to this. If we did something where this was between 1 and 50, and this was between, like, 10 and 30, then it would filter twice — or, rather, it would just filter out here.
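A sketch of the stacked-CTE shape being described, with illustrative names — the optimizer is free to collapse the two row number predicates into a single filter:

```sql
WITH c1 AS
(
    SELECT
        u.Id,
        u.DisplayName,
        n = ROW_NUMBER() OVER (ORDER BY u.Reputation DESC)
    FROM dbo.Users AS u
),
c2 AS
(
    /* intermediate filter: 1 to 1000 */
    SELECT c1.* FROM c1 WHERE c1.n BETWEEN 1 AND 1000
)
SELECT c2.DisplayName
FROM c2
WHERE c2.n BETWEEN 200 AND 500; /* narrower range: the only filter you'll see in the plan */
```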

So where CTE generally become useful is when you do things that are disallowed by T-SQL on their own in a single query. Probably the most useful common example would be deleting a top number of rows in an ordered way, right? So if we wanted to delete from this table based on this where clause, and we wanted to order it by something, notice that we have a little red squiggle here, right?

IntelliSense is already telling us, hmm, I don’t know about this one. I don’t think that’s going to fly. And if we try to get an estimated plan for this, it would just say incorrect syntax near the keyword ORDER.

Right? It doesn’t tell you, hey, you can’t do that. You’re just going to sit there and stare at this query and be like, how, where — there’s nothing wrong. If I run this part, I get a query plan.

But if I try to order by here, I don’t get a query plan. Why? There’s nothing wrong here. You’ll start putting this in Notepad++ and looking for strange empty-space characters and losing your mind. It’s just a T-SQL limitation.

But you can do that with a CTE where if you put a select top 1000 query in the CTE with your order by, you can delete from that and get a query plan just fine. Right? So this works, but just doing it in one query doesn’t.
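A sketch of that pattern — the table, column, and predicate are illustrative stand-ins, not the exact demo:

```sql
/* This is a syntax error -- T-SQL disallows ORDER BY on a direct DELETE TOP:

   DELETE TOP (1000)
   FROM dbo.Comments
   WHERE CreationDate < '20100101'
   ORDER BY CreationDate;

   The CTE workaround compiles and runs fine: */
WITH d AS
(
    SELECT TOP (1000)
        c.*
    FROM dbo.Comments AS c
    WHERE c.CreationDate < '20100101'
    ORDER BY c.CreationDate
)
DELETE FROM d;
```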

So a lot of the utility and use of CTE is not performance. It’s not readability. And it’s certainly not, like, some sign that you know what you’re doing with T-SQL if you use them.

It’s how you use them. Right? Really, where they come in handy is where you do things that you can’t just do in one simple query. Coming back to the row number stuff: you have to put that row number in some sort of derived table expression — whether it’s a CTE, whether it’s a derived table, anything like that — in order to filter on the row number.

Other databases have a QUALIFY clause — sort of a secondary where clause that allows you to filter on stuff that happens in the select list. Remember, we talked about logical query processing. Select happens almost last when queries are logically processed.

So stuff that you talk about in the select list isn’t visible to the where clause. If we had the qualify clause, it would be visible there. But we don’t.

Instead, we have Fabric. Fall-down Fabric, which, you know, is just a complete waste of our lives. So most of the use of CTE just comes down to T-SQL needing a workaround. Right?

It’s never like, hey, there’s a straightforward way to do this. It’s always like, there’s this weird hack I read about. Right? It’s almost never just, oh yeah, do this one simple thing. It’s always, no, no, no — you have to do, like, four other things to get it to work, but it’ll work.

So it’s really just a T-SQL limitation where we have to generate the row number in here before we can filter on it anywhere else. We can order by it up there, but we couldn’t filter on it in there. Because order by happens after select, but where happens way before select.
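For contrast, a sketch of the two shapes — the QUALIFY syntax in the comment is how some other database engines do it and is not valid T-SQL, and the table and column names are illustrative:

```sql
/* Not valid T-SQL -- QUALIFY as it exists in some other engines:

   SELECT u.Id, ROW_NUMBER() OVER (ORDER BY u.Reputation DESC) AS n
   FROM dbo.Users AS u
   QUALIFY n <= 100;

   The T-SQL workaround: generate the row number in a CTE,
   then filter on it in the outer scope. */
WITH r AS
(
    SELECT
        u.Id,
        u.DisplayName,
        n = ROW_NUMBER() OVER (ORDER BY u.Reputation DESC)
    FROM dbo.Users AS u
)
SELECT
    r.Id,
    r.DisplayName
FROM r
WHERE r.n <= 100;
```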

So, like, would it be nice if we had the qualify clause? Yes. Would it save us a lot of weird time and, like, typing and all the other stuff? Yes.

But, hey, it’s more important that everyone has a non-functional data lake or something. Right? Okay. Anyway, thanks for watching. I hope you enjoyed yourselves. I hope you learned something.

And I will see you in another video where we will probably talk about something else T-SQL because that seems like a reasonable thing to do. Of course, the pre-sale on this course, you can buy this course at the pre-sale price, 250 bucks, down in the video description there. This is all companion material to the T-SQL seminars that Kendra Little and I will be teaching at Pass Data Community Summit in Seattle this November.

If you are attending those, you will get access to this companion material as part of your admission to the pre-cons. Otherwise, you will have to buy it from me. And if you wait too long, it won’t be the pre-sale price anymore.

It will be 500 bucks and you will say, can I still get the pre-sale price? And I will say, no. Why didn’t you buy it in the months that you had to buy it for the pre-sale price?

Ding dong. Anyway, I am going to go do something else now. CTE have once again found a way to depress me.

Anyway, thank you for watching.

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. Blog readers get 25% off the Everything Bundle — over 100 hours of performance tuning content. Need hands-on help? I offer consulting engagements from targeted investigations to ongoing retainers. Want a quick sanity check before committing to a full engagement? Schedule a call — no commitment required.

SQL Server Performance Office Hours Episode 19




To ask your questions, head over here.

Tell me about the perils of using SNAPSHOT for writes. Is learning how to deal with conflict detection really worse than the pains of pessimism?
Why SqlServer doesn’t have parallel rollback?
How can you be for SORT_IN_TEMPDB and for Accelerated Database Recovery? If tempdb is good for performance, then isn’t ADR bad for performance?
Your experience says to prefer the legacy cardinality estimator. What is fundamentally wrong with the new estimator?
You previously discussed why you use SORT_IN_TEMPDB. Would you default to that in Azure SQL Database given the lack of control over tempdb (including sizing) in that product?

Video Summary

In this video, I dive into answering five questions submitted by my YouTube community during an Office Hours episode. We cover a range of topics, from the practical aspects of using Snapshot isolation in transactions to the more technical question about why SQL Server doesn’t support parallel rollbacks. Additionally, we explore the differences between the legacy and new cardinality estimators, discussing their performance and personal preferences based on testing. The session also delves into the use of Sort operations and TempDB in Azure SQL Database, weighing the pros and cons despite potential limitations. Whether you’re a seasoned SQL Server professional or just starting out, there’s plenty to learn from these real-world questions and my insights.

Full Transcript

Erik Darling here with Darling Data and, well, we are background folk again, aren’t we? We’ve got a dumbbell, barbell behind us. It’s not a dumbbell. Dumbbells don’t bend like that. Barbells bend like that. And we’ve got a Darling Data logo, so that must mean we are doing another Office Hours episode — where there is a helicopter of some sort overhead. Hopefully it’s not a drone strike. Microsoft’s finally after me. Anyway, we’re going to do an Office Hours episode where I answer five entire questions submitted by you, my adorable users. If you want to submit your own question, you can go to this link, which is down in the video description. If you would like to support my channel, if you’re like, wow, this man deserves to get paid for all the work he does, well, you can do that also down in the video description. If you don’t feel like I deserve to get paid, maybe I am well deserving of a like or a comment or a subscription to the channel. I don’t know. I think we’re still up to around 60 paid members and a little over 7,000 unpaid subscribers.

So I think a few of you might like the channel. All right. If you need help with SQL Server — health checks, performance analysis, hands-on tuning, dealing with performance emergencies, and whipping your developers into shape so you have fewer emergencies — well, I happen to be pretty good with a whip. Just saying. Doesn’t have to end there. If you would like to buy my performance tuning content, you can get all 24 hours of it for 75% off. That is about 150 USD, and that is for life. You can do that with the link down in the video description. And of course, I have a new T-SQL course with me, Eric. Ignore that. I need to fix that at some point. I’ll remember to do that someday. Funny story with Podia. But anyway, it’s on pre-sale price now. 250 bucks. It’s going to go up to 500 bucks when the advanced material drops after the summer. If you’re attending the PASS pre-cons Kendra and I are teaching, you get access to all of the content here with the price of admission. So that’s a nice deal for you.

If you want to catch me live and in person, I will be at all three of the Pass On Tour events. That’s New York City, Dallas, and Amsterdam taking place August, September, and October of this year. And then, of course, at Past Data Community Summit taking place in Seattle in November of this year. So you could see me four times this year if you were really ambitious.

I don’t know. That might be too much for both of us. I know how social you people are. Let’s not push it. Anyway, let’s go answer some of these questions. Has anything ever been less useful than the Dropbox badge that shows up here? Like, there’s nothing useful about that. Nothing useful about that has ever happened.

All right. Anyway, let’s start here. That’s right at the very top. Let’s see. One, two, three, four, five. All right. Five questions. Tell me about the perils of using Snapshot for writes. Is learning how to deal with conflict detection really worse than the pains of pessimism?

Well, you know, it does depend a little bit on your, I guess, skill and comfort as a developer. The main peril of using Snapshot for writes is that you get errors if two queries try to update the same thing in a Snapshot transaction. If you are cool enough with dealing with those errors, then it’s not a big deal.

You know, of course, the pains of non-snapshot writes under most isolation levels can be things like lost updates, right? One query could do something and another query could immediately overwrite it, and that’s not a good time. So, really, it’s just kind of like picking your poison.

Like, what’s a bigger problem? If you’re cool with dealing with the errors that come along with write conflicts, then cool, go with it. I’m not going to try to talk you out of it.
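If you do go the snapshot route, here is a sketch of what handling the write conflict error might look like — hedged: this assumes snapshot isolation is already enabled on the database (`ALLOW_SNAPSHOT_ISOLATION ON`), and the table and row are illustrative:

```sql
/* Snapshot update conflicts surface as error 3960. */
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

BEGIN TRY
    BEGIN TRANSACTION;

    UPDATE dbo.Users
    SET Reputation += 1
    WHERE Id = 22656; /* illustrative row */

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;

    IF ERROR_NUMBER() = 3960
        PRINT 'Update conflict under snapshot isolation; retry the transaction.';
    ELSE
        THROW;
END CATCH;
```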

Me, personally — it really depends. For me, it’s more of an application expectation issue. What would make the most sense to the end user? What is the most sensible end result of two queries trying to update the same thing?

Is it one query failing or is it one query overwriting what the other query just did? Like, really just comes down to that for me. Let’s see here.

Why SQL Server doesn’t have parallel rollback? I don’t know. That’s Microsoft. Do I look like Microsoft? I can’t tell you these things. They didn’t, they didn’t implement it.

It’s doable. I don’t know why. Maybe it’s hard or something. I don’t know. Why don’t you go work for Microsoft and put it, write it into the product if it means that much to you. All right.

How can you be for SORT_IN_TEMPDB and for Accelerated Database Recovery? If tempdb is good for performance, then isn’t ADR bad? Are you drunk?

These are… What? Huh? This doesn’t even make sense. I can’t answer this.

This is… It’s mind-blowing. Anyway. Ah. I’m just going to forget that. I’m going to start drinking after that one. Maybe I am too sober to answer that question.

I should get drunk and try to reread that one. Your experience says to prefer the legacy cardinality estimator. What is fundamentally wrong with the new estimator? Well, fundamentally, it doesn’t estimate things as well most of the time.

You know, like when I’m writing demos for, you know, my classes and, you know, for my videos and all that other stuff, you know, I always give both estimators a chance to see which one does a job that, you know, I am happier with. And just probably like 75, 80% of the time, it is the legacy cardinality estimator that does the better job. The default cardinality estimator, or as Microsoft calls it, I don’t call it that.

I just call it the new one because that’s all it is. It’s new. Most of the time, the new one, meh, just doesn’t do it for me.

It’s either like a guess that’s close enough to legacy or it’s a guess that’s way wronger, way more wronger-ish than legacy. So, you know, I don’t have any specific things to, like, show you these differences. It’s just, you know, just a general testing that I’ve found.

It’s just not quite as good. You previously discussed why you use SORT_IN_TEMPDB. Would you default to that in Azure SQL Database, given the lack of control over tempdb, including sizing, in that product?

Yeah, I think I still would, or rather I still do. I don’t really see a need not to. Honestly, I can’t think of a good reason why either of the things you mentioned would prevent me from doing that.

You know, the tempdb limitations that I really care about have nothing to do with that. I think both Managed Instance and Azure SQL DB still don’t allow for the in-memory tempdb metadata, and sorting in tempdb would have no effect on that.

If you test it and you find tempdb performance is worse for creating indexes, or whatever you’re doing with indexes, when you sort in tempdb, then certainly stop. But for me, from just a general I’m-going-to-create-this-index perspective, I would still prefer to sort in tempdb, regardless of the locality of my database, unless testing proved otherwise.
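For reference, the option under discussion is just an index build setting — a minimal sketch, with an illustrative index name:

```sql
/* Do the intermediate sort for the index build in tempdb
   instead of the user database. */
CREATE INDEX ix_users_reputation
    ON dbo.Users (Reputation)
    WITH (SORT_IN_TEMPDB = ON);
```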

There may even be times on-prem when a sort in TempDB would be like, hey, why is this slow? I don’t know.

TempDB is created on, like, an old pile of boar’s head Swiss cheese. It’s on some rye bread and salami in there. It’s like, I don’t know.

TempDB sucks, don’t go there. If it’s okay, go there. You know? It’s like, again, it comes back to, like, the public restroom metaphor for TempDB. If you open the door and you don’t like what you see, close the door.

All right? Walk away. Go pee behind a tree or something. Anyway, thank you for watching. Thank you for sending in questions, by the way.

I hope you enjoyed yourselves. I hope you learned something. For the person who asked the question about accelerated database recovery, I hope you have sobered up by now. Perhaps you could restate that question in a way that a sober person could understand.

Not that I’m sober. I’ll, like, permanently. Just when I do these, I tend to be. So perhaps you’re just on a different wavelength there. Anyway, thanks for watching.

I will see you in another video. Doing another thing, I suppose. Makes sense then. All right. Cool. Goodbye.


A Little About TOP WITH TIES In SQL Server



Video Summary

In this video, I dive into the concept of `TOP (1) WITH TIES` in SQL Server and address some common confusion around why it might return more rows than expected. Erik Darling from Darling Data explains how the lack of an appropriate tiebreaker can lead to returning all matching rows instead of just one. I walk through a practical example using a hypothetical post table, demonstrating that without an order-by clause or a proper tie-breaking column, `TOP (1) WITH TIES` will return all rows that match the criteria, not just one. The video also covers how to use window functions like `DENSE_RANK()` to better understand and control which rows are returned when using this query construct.

Full Transcript

Erik Darling here with Darling Data, and in today’s video I’m going to attempt to answer a question, because I posted a video where I described using top one with ties a little bit some time ago, and some people still didn’t get it. Did not understand why top one with ties would sometimes return a lot of rows — a lot of ties. So we’re gonna talk a bit about that. I don’t know. Honestly, it’s Saturday here and I’m not feeling terribly creative and I just need something easy to do right now. So screw it. If you like this channel and you would like to support my endeavors to bring you usually very thoughtful, energetic SQL Server content with the occasional screw it, you can sign up for a membership. There’s a link down below. It says join or become a member or something. I forget what. I don’t watch these things. You think I’m crazy, listening to my own voice, seeing my own face? Woof! Why would I put myself through that? If you would like to support this channel in some other way, perhaps $4 a month is just too rich for your blood, you can like, you can comment, you can subscribe. And of course you can ask me questions for free privately that I will answer publicly during my office hours episodes, where I answer five user-submitted questions at a time. You can ask whatever you want. I don’t care. If you need consulting help with SQL Server, I am still the number one SQL Server consultant outside of New Zealand. Beer Gut Magazine says so, so it must be so. Whether you need health checks, performance analysis, hands-on tuning, dealing with SQL Server performance emergencies, or teaching your developers to not be such dimwits so you have fewer performance emergencies, all of these things become possible through yours truly at a very reasonable rate. So, get at me with that. Anyway, if you would like to get some performance tuning training content from me, you can get all 24 hours of my currently available stuff at that URL.

Well, with that discount code, it comes down to about 150 USD, and that lasts you for the rest of your life. There is no subscription necessary. If you would like to get in on the presale price for my upcoming T-SQL course, you can get it now for 250 bucks. That will not last forever. And if you wait and it goes up to 500 bucks and you’re like, hey, can I get a discount? The answer is no. You missed out. You will have had months to do this. I urge you to do it now rather than later, when it will cost you twice as much. This is, of course, companion material to the pre-cons that Kendra Little and I will be teaching in Seattle this November about T-SQL. So, if you’re going to attend those pre-cons, you get all this stuff for free. Well, not for free. You get it with the price of admission, which, if your company is paying for it, basically makes it for free.

But this is work-related stuff, so I would hope that your company would pay for or at least reimburse you for buying this. All right. It would be kind of crazy to not. Anyway, speaking of leaving the house, I will be on tour with Redgate all summer long. I feel just like Lars Ulrich. I think that’s how you say his name.

New York City, August 18th to 20th. Dallas, September 15th to 17th. Then Amsterdam, October 1st to 3rd. And that all leading up to the main event at the Past Data Community Summit, Seattle, November 17th to 21st. Come hang out. Watch me be a SQL Server monkey. Live and in person.

With that out of the way, though, let’s talk about these top one with ties. And don’t worry, we’re not going to discuss anything lascivious in this video having to do with ties. We’re not going to say any dirty stuff like four-in-hand knot.

Because we don’t tie ties like we’re going to our 8th grade dance. We’re adults. We use half Windsors, because we are grown people with necks. We’re also not in a Talking Heads video.

So in the post table, for post type ID 3, there are, I think, 167 rows. So if we run this query, we will get back all 167 rows of post type ID 3. All right. 167 right there.

And then the way that top one with ties works, and actually I should show you this first. So the first thing I’m going to do is run this query without the order by. And honestly, this is perhaps something that should happen with any query where top is involved.

But we actually get an error when we do this. The TOP N WITH TIES clause is not allowed without a corresponding ORDER BY clause. Perhaps this should happen for any top that we use.

I don’t know. But the point is that if we order by post type ID, and we’re filtering to post type ID 3, we’re only going to have post type ID 3 in the ordering column.

So there is no tie breaker when we do this. Is there no semicolon there or there or there? I don’t know. I feel very foolish now. But when we do this, we still get back all 167 rows.

If we make that a little bit wider and we scroll on down, without some sort of tie breaker in place, there is no tie to break. This is all post type ID 3 going out through all the results.

So there is nothing to break our tie. We could break the tie very early if we added in a unique column to the order by. So the ID column is the clustered primary key in this table, which means every row is unique.

And so we don’t get past the first row when we do this. This just says one row. Cool.

We broke the tie early. There were no ties after that. There were no duplicates in ID. So there are no ties after the ID broke the ties very, very early. We could also add in sort of a late tie breaker, right?

So if we run this and we say select the top one with ties, and we order by owner user ID, we don’t get back 167 rows anymore. We get back, eventually, after waiting some indeterminate amount of time, 164 rows.

So if we come down here, we will see that we only got 164 instead of 167. There are three missing rows here. Now the owner user ID for all of these is negative one, which if you’re keeping track at home, I think that’s the ID for the community bot for Stack Overflow.

So this is all negative one. The only thing that we can infer from this is that something different than negative one eventually happened, and then we broke the tie there, and then we got nothing further back, right?

So we were able to not return three rows that happened after the tie was broken. If you want to see what breaks the tie, you might want to consider using the DENSE_RANK windowing function. What dense rank does is rank everything together where the ordering value is the same, and then as soon as that value changes, we get a new rank, right?

So rather than row number, which will give us a contiguous number going up, and rank, which will give us a weird, broken set of numbers after any ties when a new ranking starts, dense rank gives you contiguous numbers, right?

So if we run this, and we say I want the dense rank of all this stuff partitioned by post type ID, ordered by owner user ID, and then we order by owner user ID on the way out, this first column is our dense rank.

So like I said, unlike row number, which would give us contiguous numbers counting one through whatever until we got to a new partition, this just gives us all ones, right? And this kind of makes sense for how the tie is broken, because eventually, after all these ones, we get down to a new owner user ID, and then we get 2, 3, 4.

Now if we used rank, this would give us non-contiguous results after the tie. I think it starts at, like, three or something. Actually, let’s just find out real quick. All right, we’ll just take the dense out of that, and I’ll show you.

All right, that makes the most sense to do, right? So if we run this now, oh no, it goes 165. So we had 164 rows of one, and then we have 165, 166, and 167.

Boy, was I silly. So much for Learn T-SQL with Erik. So we have these numbers in here.

So if we put the dense rank back, which is what we wanted to do here anyway, then we will get back all the ones, right? And then 2, 3, 4.

And of course the 2, 3, 4 — if we scroll across a little bit to owner user ID, we’ll see that’s where we started getting new values back. So this is what top one with ties does. As long as your ordering elements continue to supply the same value over and over again, you will continue to get rows back until you reach the end of the result set.

Or, presuming in this case that you never reach a tiebreaker — like when we only ordered by post type ID — you’ll just keep getting rows back until you reach the end of the result set. Only when you add in a column that eventually breaks the tie do you stop returning ties, because the tie has officially been broken and you have found all of the ties available.
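A condensed sketch of the two queries from the walkthrough — the Posts table and column values follow the video’s Stack Overflow example, and the column list is illustrative:

```sql
/* Every row tied on the lowest OwnerUserId comes back --
   with no tiebreaker in the ORDER BY, that can be most of the result. */
SELECT TOP (1) WITH TIES
    p.*
FROM dbo.Posts AS p
WHERE p.PostTypeId = 3
ORDER BY p.OwnerUserId;

/* DENSE_RANK shows where the tie would finally break:
   all 1s until OwnerUserId changes, then 2, 3, 4... */
SELECT
    p.Id,
    p.OwnerUserId,
    d = DENSE_RANK() OVER (ORDER BY p.OwnerUserId)
FROM dbo.Posts AS p
WHERE p.PostTypeId = 3
ORDER BY p.OwnerUserId;
```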

So anyway, I hope you enjoyed yourselves. I hope you learned something. I mean, I just spaced on the rank function. That’s my bad.

Oh, man. I’m still going to publish this too. That’s where I’m at. So this is how top one with ties sort of works. And this is how you can get lots and lots of results back if you do not have an adequate tiebreaker at some point in your query.

What that adequate tiebreaker is is between you and your database. I can’t tell you what it should be. I can’t tell you what you should use there.

All I can say is use your best judgment. All right. Cool. Thank you for watching. Goodbye.

Bye. Thank you.


A Little About Subquery Unnesting In SQL Server


