SQL Server Books I Recommend




Head on over to the books page!

Video Summary

In this video, I delve into my extensive collection of SQL Server books, sharing insights and recommendations based on years of experience in the field. From foundational works like Ken Henderson’s “The Guru’s Guide to SQL Server Architecture and Internals” and “The Guru’s Guide to Transact-SQL,” which provide a deep dive into core concepts and are surprisingly relevant even today, to more recent titles such as Itzik Ben-Gan’s “T-SQL Fundamentals” and Louis Davidson’s “Pro SQL Server Relational Database Design and Implementation,” I cover a wide range of topics that will benefit both beginners and seasoned professionals. Each book offers unique perspectives and practical knowledge, contributing incrementally to one’s understanding of SQL Server. Whether you’re looking for fundamental architecture insights or advanced troubleshooting techniques, there’s something in this collection for everyone.

Full Transcript

Erik Darling here with Darling Data. In today’s video, because I have gotten no fewer than like 10,000 questions on my office hours thing about SQL Server books or database books, we’re going to talk through my stack of books. And I have even put together a website, a page on my website that lists all of the books that I recommend. They are, of course, Amazon affiliate links, because I spent 10 minutes putting together this list. And I figure over the span of time, the 53 cents that I’ll make from you people clicking these links and maybe buying things will be adequate compensation for my effort there. So let’s talk about books. The first one, and this is a granddaddy book, is The Guru’s Guide to SQL Server Architecture and Internals by Ken Henderson, who’s dead. So RIP Ken. This one has a CD-ROM in it that is unopened, which I’m pretty psyched about. I did have to get a lot of these books second-hand. And I think one of my favorite things about buying second-hand books is the weird stuff that you find in them. I bought one sort of recently, unrelated to SQL Server, that had a laminated, kid-made bookmark that said, Happy Father’s Day 2008. And my wife found it and she got kind of freaked out, but it was all explained. Anyway, this is a very good book. Old, but well worth it, because you learn a lot from this stuff. It fills in a lot of fundamental knowledge that a lot of people are missing.

We’re going to stick with Ken Henderson for a couple more here, because while he’s gone, he’s not forgotten, and he certainly did not go without leaving his mark on the world. We have The Guru’s Guide to Transact-SQL, which covers many great and interesting T-SQL concepts and conventions. Granted, this is, again, old, but very useful. The one thing that is not in here, aside from a CD-ROM which is missing, is anything about API cursors, which I recently had some fun with and will probably do a video about, because, you know, that’s what I do. I have fun with things and I make videos about them for you. The third and final Ken Henderson book, which also has an unopened CD-ROM in it, is the Guru’s, well, sorry, there’s a sticker over there.

The Guru’s Guide to SQL Server Stored Procedures, XML, and HTML. Granted, I would probably not recommend doing much HTML with SQL Server, but XML is still around to this day. This of course predates JSON, so we can’t really go into detail on that. But everything else in there is absolutely wonderful. Moving on now from the Ken Henderson wing of my library into the, well, actually, no, there’s one more from Ken Henderson before we move on to other ones. SQL Server 2005 Practical Troubleshooting.

Well, sorry, this is edited by Ken Henderson. Who is the author on this? This might have like 17 authors. I don’t know. Let’s see. Let’s open this up. Edited by Ken Henderson. Let’s see what we got here. We got a table of contents. This is going to go on for a while. About the authors. All right. Okay, cool. Well, there are a number of notable people in here. Some of them I haven’t heard of, but we have August Hill.

We have Cesar Galindo-Legaria, who is still on the SQL Server optimizer team. Very smart fella. Ken Henderson, of course. Sameer Tejani. Santeri, oh boy, Voutilainen. Slava Oks. Hey, my pal Slava Oks. Wei Xiao. Bart Duncan, great SQL Server blogger from back in the day. And of course, Bob Ward, from back when he was in, it’s going to be hard for me to show you this, but actually, yeah, that’s not working out well at all.

But this is Bob Ward when he was still in Microsoft Customer Support Services. And Cindy Gross is the final one noted here. So this is a collection of authors, edited by Ken Henderson. Sort of a slightly weird thing there. Anyway, now we’re going to move on to the Kalen Delaney et al. wing of my library. We’re going to start off with Inside SQL Server 2005: Query Tuning and Optimization. Now I know what you’re going to say: query tuning in 2005 is totally different than it is today. It’s not. A lot of the same stuff still applies. We just have some new tools and some new techniques, but there is a lot of fantastic information in here for folks who need to learn fundamentals and who need to maybe see just how similar and just how consistent the concepts in database query tuning and optimization are.

This one is maybe not so important to query tuning, but it is a cool book. This is Inside Microsoft SQL Server 2005: The Storage Engine. Now I know that the storage engine has had many changes and many things added to it and stuff like that, but there is still a lot of very good foundational knowledge in this book. Next up, we have Microsoft SQL Server 2008 Internals. Internals knowledge, very good stuff to have. Even in 2008, there’s good stuff to learn in here. One thing that you’re going to find across all of these books is that you’re going to pick up something new in all of them, right? I don’t mean new in like, oh, obviously we’re up to SQL Server 2022, and there have been whispers of SQL Server 2025 already. But one thing that you’re going to get across all of these is incremental knowledge. There’s going to be some stuff in some books that you might not see in the other books.

And there’s going to be a knowledge accumulation for you as you go across your learning journey. And then this is probably the last of the great SQL Server internals books, SQL Server 2012 Internals. So the 2008 book had contributors Paul S. Randal, Kimberly L. Tripp, Conor Cunningham, Adam Machanic. All right, a lot of good stuff there.

And this one here, we still have Conor Cunningham. We got Jonathan Kehayias. We got Paul Randal. We got Bob Beauchemin. We got all sorts of smart people contributing to these books. Now, we’re going to move on to the Itzik Ben-Gan wing of my library.

And we are going to see T-SQL Fundamentals. This is the fourth edition. Itzik, before he went into semi-retired hermit phase, did update this. This is the latest edition of that. Continuing on with the Itzik wing of my library, we have T-SQL Querying.

Thick book, good book. I’ve had this one since it came out in, what was it, 2015, I think, somewhere around there. This one is Itzik, Dejan Sarka, smart fella, Adam Machanic, Kevin Farlee. Kevin Farlee, who recently retired from Microsoft. Good for him.

Next up, a slightly more recent book. We have Pro SQL Server Relational Database Design and Implementation by Louis Davidson. Louis was kind enough to have me on the Redgate Simple Talk podcast recently.

If you haven’t listened to that podcast generally, or at least my episode, I can highly recommend you go do that. And the final book that I have here is by a fella named Dmitri Korotkevitch, SQL Server Advanced Troubleshooting and Performance Tuning. I do like this book quite a bit. There was actually even something in here that he discussed.

I forget exactly what it was at this point. There was something with a DMV query that I thought was cool. I realized that I didn’t have that in sp_PressureDetector, so I added it in. I think I even have his name in the pull request for it, but I forget a little bit.

But anyway, this is a very good book. And I think one of the things that I liked best about this book is, you know, a lot of the times when I’m reading something, I have the stuff that I would think and say when I’m talking about that topic.

And he would be talking about a topic and make a point about something, and I’d be like, yeah, but there’s this other thing that you have to take into account with it. And then the next sentence would be like, but of course you must consider.

And I was like, yes, this is a very good book. So if you are a bookish person, those are the 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 books about SQL Server that I generally recommend. There are a couple notable books that are not on this list because they are not about SQL Server specifically that are also good.

They are Database Reliability Engineering and Designing Data-Intensive Applications. But you can get the full list, with Amazon links to purchase these books, on my website. That’s going to be erikdarling.com slash books.

I will have the link to my site in the video description here. And you can go buy them and you can go learn from them. And maybe, since you’re most likely buying used books, maybe you can find some cool artifacts from the sands of time in there.

But anyway, thank you for watching. I hope you enjoyed yourselves. I hope you learned something.

And I hope that you will go buy some books if that is your preferred vehicle for learning about things. So anyway, thank you for watching.

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. Blog readers get 25% off the Everything Bundle — over 100 hours of performance tuning content. Need hands-on help? I offer consulting engagements from targeted investigations to ongoing retainers. Want a quick sanity check before committing to a full engagement? Schedule a call — no commitment required.

The Weird World Of SQL Server API Cursors #tsql2sday




Thanks for watching!

Video Summary

In this video, I delve into the fascinating yet somewhat obscure topic of API cursors in SQL Server, exploring their unique capabilities and usage scenarios that go beyond the typical row-by-row processing. While most people might cringe at the mention of cursors due to common misconceptions or past frustrations, I aim to demystify these powerful tools by walking you through a practical example where we use an API cursor to batch update multiple rows efficiently. This approach not only showcases the flexibility and power of SQL Server but also highlights the importance of understanding its less commonly known features for advanced database management tasks.

Full Transcript

Erik Darling here with Darling Data. In today’s video, we’re going to get into the weird, wild world of API cursors. Now, this is something that most people who use SQL Server have never heard of and probably have never used. If you’re the type of person who just has the standard Pavlovian response to seeing a cursor declared or used anywhere and complains and gripes about it, this is not the video for you. I don’t like you much. So there’s that. We’re going to talk today about how API cursors can be used in ways to traverse more than just a row at a time. This is not a row by agonizing row thing. We can actually process, select, and modify multiple rows at a time using API cursors, but we’re going to have to do a little bit of digging into the vaults to understand what’s going on here. So with that out of the way, if you like this content and you would like to support my efforts to bring you high quality SQL Server content like this, you can sign up for a channel membership, link right there

down in the video description. If you are too poor because you spent all your money griping about cursors in SQL Server to some LLM trying to get it to write a blog post for you, well, there are other ways to support the channel. You can like, you can comment, you can subscribe, and you can ask me questions privately that I will answer publicly during my office hours episodes. If you need help with SQL Server, performance help with SQL Server, I can do all this stuff. And of course, I am, as mentioned, rated by Beer Gut Magazine to be the best SQL Server consultant in the world outside of New Zealand. Today’s video will, of course, help establish why outside of New Zealand is particularly important to that distinction. If you would like to get some very high quality, very low cost SQL Server training content, you can get all 24 hours of mine for about 150 US dollars. And you get that for life. There is no return fee on that. The link is up there. The discount code is there, of course, all fully assembled for you down in the video description.

Of course, if you would like to hang out in person and, I don’t know, high five, take selfies, sign autographs, ask me how to set MAXDOP, you can of course come to SQL Saturday New York City 2025, taking place on May the 10th in lovely Times Square, Manhattan, at the Microsoft offices. I believe it’s 11 Times Square. If you go to my website, there’s a link up in the corner. All that stuff. Go there. If you don’t know what my website is, well, geez, that’s a scary thought. Anyway, let’s talk about API cursors. So before I show you my thing, what I want to show you is where I learned about API cursors from and sort of why I got interested in them. Because I think they’re just bizarrely interesting things. So Paul White from New Zealand has, well, not a couple of blog posts, but a couple of Q&As on the Database Administrators Stack Exchange site. Now, I want you to pay careful attention. I’m not logged in up here, right? There is a login prompt. So I don’t want you to think that I haven’t upvoted these questions, because when I’m logged in, you absolutely will see

that these things have been upvoted to the nth degree. I will put the links to these questions in the video description, hopefully, if I remember. We’ll see. But anyway, if we scroll down here to the answer, we will get to what Paul said. And of course, this is exactly the behavior of an appropriately configured API cursor. Now, if you look at this code, it is some of the most outlandish stuff I have ever seen written. Right? We have some things declared and set using these horizontal lines, some sort of bitwise something or other, to make numbers out of multiple numbers that make sense to API cursors.

And then there are some stored procedures like sp_cursoropen, sp_cursoroption, and sp_cursorfetch. And I believe, though it’s not in this one, there’s also an sp_cursorclose. So this is where I first saw anything about API cursors. Now, if you work with SQL Server and you work with like a vendor product and you see lots of queries running like fetch API something, something, something, they are using API cursors, but they are probably not using correctly configured API cursors. They are probably using just whatever stock crap they came up with. As we’ll see in a moment, the Microsoft documentation on API cursors is not good.
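If you’ve never seen these procedures before, the basic lifecycle looks something like this. This is a sketch based on my reading of the documentation, not code from either of Paul’s answers; the flag values are commented, and the query, table name, and batch size are made up for illustration:

```sql
DECLARE
    @cursor int,            /* cursor handle returned by sp_cursoropen */
    @scrollopt int = 2,     /* DYNAMIC */
    @ccopt int = 8,         /* OPTIMISTIC concurrency */
    @rowcount int;

/* Open an API cursor over an arbitrary query */
EXEC sys.sp_cursoropen
    @cursor OUTPUT,
    N'SELECT id FROM dbo.some_table ORDER BY id;',
    @scrollopt OUTPUT,
    @ccopt OUTPUT,
    @rowcount OUTPUT;

/* Fetch the NEXT (fetchtype 2) 1,000 rows in a single call */
EXEC sys.sp_cursorfetch @cursor, 2, 0, 1000;

/* Always close the handle when you're done */
EXEC sys.sp_cursorclose @cursor;
```

The interesting part, compared to a T-SQL `DECLARE CURSOR`, is that last parameter to sp_cursorfetch: one call can bring back a whole batch of rows, not just one.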

This is the other answer I saw where Paul brought up API cursors. I don’t think there are any others on Stack Exchange. There are probably some elsewhere in the world. But “a favourite.” Look at that. Look at that delicate U in there. A favourite solution of mine is to use an API cursor. He didn’t mention anything about it being correctly configured in this case, but we can assume, because it’s Paul, that it is correctly configured.

But it uses sort of the same set of stored procedures there. And there is this absolutely wild thing: WHILE CURSOR_STATUS('global', 'cursor name') = 1, keep going and finding things. So if you want some background and you want to see some API cursor code, the links to these will be in the video description.

Now, the Microsoft documentation on API cursors is incredibly sparse. Like, you’ll get some valid stuff in here, but there’s really no mention of correctly configuring them. You get a lot of information about cursor stuff, which, if you understand cursors generally, makes more sense to you.

But if you don’t understand cursors generally, and you are the type of person who just, again, has the dog whistle response to, oh, cursors, oh, there’s very little hope for you anyway. There’s sp_cursor. So we just looked at sp_cursoropen.

This is sp_cursor, which has sort of the same set of stuff in there. And then we have sp_cursorfetch, which does a whole bunch of other stuff. There are other sp_cursor procedures in the mix, of course.

There are all sorts of weird things you can do with cursors that you probably didn’t know you could do. So there’s a wide world out there. Now, what I wanted to show you is the thing that I wanted to do.

Now, this is in no way supposed to besmirch the wonderful Michael J. Swart post about batching modifications. This is just an alternate approach to batching modifications. Not to say that you need to do this when you batch modifications, but you might be able to have some fun with it at some point in your life.
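For contrast, a generic version of that conventional batching pattern looks something like this (table and column names are invented for illustration, not taken from the Swart post):

```sql
/* A typical batched update loop: repeat until a pass touches no rows. */
DECLARE @rows int = 1;

WHILE @rows > 0
BEGIN
    UPDATE TOP (1000) s
    SET    s.price = s.price * 1.10,
           s.needs_update = 0
    FROM   dbo.sample_data AS s
    WHERE  s.needs_update = 1;

    SET @rows = @@ROWCOUNT; /* zero when nothing is left to update */
END;
```

The API cursor version later in this post ends up doing the same 1,000-rows-at-a-time work, just by a much stranger road.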

So I’m going to walk through the code and then I’m going to run the code. And I’m going to point out exactly where I got a little assistance with the code because things were kind of annoying. So I am creating a table of sample data.

That much should be very obvious. I’m going to put 10,000 rows into the sample data. And in those 10,000 rows, I’m going to mark 5,000 of them as needing an update. Right.

So, CASE WHEN this number modulo 2 equals 0, THEN 1. So out of the 10,000 rows, 5,000 of them will have needs update set to 1, based on this row number. All right.
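Reconstructed, the setup looks roughly like this. The column names and values here are my paraphrase of the demo, not necessarily the exact script:

```sql
CREATE TABLE dbo.sample_data
(
    id int NOT NULL PRIMARY KEY,
    price decimal(10, 2) NOT NULL,
    quantity int NOT NULL,
    needs_update bit NOT NULL,
    last_updated datetime2 NOT NULL
);

/* 10,000 rows; every even row number gets needs_update = 1 */
INSERT dbo.sample_data
    (id, price, quantity, needs_update, last_updated)
SELECT
    n.n,
    price = 10.00,
    quantity = 5,
    needs_update = CASE WHEN n.n % 2 = 0 THEN 1 ELSE 0 END,
    last_updated = '20240630'
FROM
(
    SELECT TOP (10000)
        n = ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
    FROM sys.messages AS m
) AS n;
```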

After that finishes, I’m going to show you the 5,000 rows that need an update. And then we are going to get into the cursor stuff. Now, the goal of this cursor is to update 1,000 rows at a time.

Right. And we’re going to handle that with some of these fancy parameters in here. Now, this is, again, not for the faint of heart.

This is a very difficult to follow set of things. There are a lot of things to declare and keep track of. But here is our query that runs when the cursor runs.

We are also going to declare this dummy table. And the purpose of this dummy table is to eat results. Usually, on every execution of these cursor procedures, SQL Server returns a result set from the cursor.

I didn’t want to see that, because those result sets just clog up the screen. It looks silly. It gives me the face vibrations I don’t like.

And so Paul suggested doing the INSERT TOP (0) into the dummy table when executing the procedure. All right. So if that looks just stunningly out of this world insane to you, that’s what that’s doing.

That’s the purpose of that. On each trip through the cursor, I’m going to select a row count so you can see that a thousand rows come out of a thing. I’m going to put this on GitHub.

I usually don’t, but this is so weird that, you know, screw it. I’m going to put it out there. And so this is what does the update in here. And this is so bizarre.

This is so weird. All right. I have to tell it which table to update, which I guess technically I can put an empty string in here because there’s only one table, but whatever. And then here’s what I’m doing.

I’m setting the price to price times 1.10. I’m setting last updated to SYSDATETIME(). And I am setting needs update to zero. Okay.

So then I’m going to show you, via the ROWCOUNT_BIG() function, how many rows I do at a time. We’re going to fetch the next batch and we’re going to do all this stuff. And then at the very end, I am going to verify that the updates ran.
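The result-set-eating trick from earlier looks something like this as a sketch. The flag values are from my reading of the documentation, the query is hypothetical, and the dummy table’s columns have to line up with whatever the cursor’s SELECT list returns:

```sql
DECLARE @cursor int, @scrollopt int = 2, @ccopt int = 8, @rowcount int;

/* Open the cursor over the rows that still need work */
EXEC sys.sp_cursoropen
    @cursor OUTPUT,
    N'SELECT id FROM dbo.sample_data WHERE needs_update = 1;',
    @scrollopt OUTPUT, @ccopt OUTPUT, @rowcount OUTPUT;

/* Columns must match the shape of the cursor's SELECT list */
CREATE TABLE #dummy (id int NULL);

/*
  TOP (0) means no rows are actually inserted -- the INSERT ... EXEC
  just swallows the result set that sp_cursorfetch would otherwise
  send back to the client on every call.
*/
INSERT TOP (0)
    #dummy (id)
EXEC sys.sp_cursorfetch
    @cursor,
    2,    /* NEXT */
    0,
    1000; /* fetch 1,000 rows per call */

EXEC sys.sp_cursorclose @cursor;
```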

So if you are all ready to see this happen, let’s run this. And let’s admire these results for a moment. So these are the 5,000 rows that need an update.

All right. You’ll see that this, like just looking at the top, like I guess there’s eight visible rows here. We have two, four, six, eight, 10, 12, 14, 16.

We’re pretty much counting by twos. Right. These are all the ones that needed an update. Here’s the original price. The original quantity. I didn’t change quantity, of course, just price.

And then down here, you’ll see that these numbers did go up or rather these numbers did change. Right. So this is the last query that shows the data that I just messed with. Product two, four, six, eight, 10, 12, 16.

The prices are all 1.10 higher. And the last updated has been incremented to today. And the needs update is now set to zero. So these went from 2024, 630 to 2024, 317.

So this did work. And if you look at the five calls to ROWCOUNT_BIG in here, there’s 1,000, 2,000, 3,000, 4,000, 5,000. So I updated these 5,000 rows, 1,000 rows at a time.

And I did that using an API cursor. This type of syntax is not available. Rather, this type of behavior is not available using a stock and standard cursor.

You do have to get into the weird world of API cursors and do stuff like this. So when do you use these?

Probably when you have gotten to the point where you cannot be satisfied by normal queries. I get a thick callus for stuff in SQL Server these days. And this was a nice way to sand that callus down and feel things again.

So, you know, of course, thank you to Paul for the assistance with some of the coding here. And, of course, for publishing the original answers that opened up my world and mind to API cursors. And, yeah, I hope that this will encourage you to keep learning about things in SQL Server because there are all sorts of interesting things you can do once you peel the world back a little bit.

Anyway, thank you for watching. I hope you learned something. I mean, I’m pretty sure you learned something. I hope you enjoyed yourselves.

And, again, the links to the code and to the original DBA stack exchange questions so you can read more about API cursor stuff will be in the video description. But, again, this is a weird one. I admit it.

I fully and totally admit this was a weird video. But it was something that I was rather proud to show off because most of the stuff that I do is pretty much like, you’re going to see this every day and it’s going to be a problem.

This is nice and weird. Anyway, I’m out of here. I’m going to go un-weird myself a little. Goodbye.


SQL Server Performance Office Hours Episode 7



How do I size SQL Server for new project? Where to start, what to take into account? THX!
When are you going to start adding your referenced links to the details section so i can copy and paste? <3
Should one still care about Index Fragmentation in the days of Azure VM Premium disks, or local SSD’s
When (if ever) will we see a Tyson vs Paul type grudge match between Erik and Brent
your face looks smaller lately. r u sick?

To ask your questions, head over here.

Video Summary

In this video, I address several questions from viewers. First, I tackle the topic of index fragmentation in modern storage solutions like Azure VM premium disks and local SSDs, explaining that while logical fragmentation is less of a concern, physical fragmentation can still impact performance but isn’t typically measured by standard scripts. Additionally, I discuss my recent weight loss journey, attributing it to a combination of reduced gym attendance due to the pandemic and aging, which has made maintaining high strength levels more challenging. The video also includes some light-hearted questions about potential future collaborations with other industry experts and a bit of self-deprecating humor regarding my appearance.

Full Transcript

Erik Darling here with Darling Data, and boy howdy, we’re gonna do it. We’re gonna drop in on office hours here, where I answer five of your questions at a time that you submit to me through my form. If you want to ask a question here, you can do that at this link. This link is available in the video description down below. While you’re looking at the video description down below, if you think, boy, this Erik Darling sure does produce a lot of content that I rather enjoy. Maybe I don’t always learn something. Maybe I do. Maybe I don’t always enjoy myself. Maybe I do. But I would like to support those efforts. Well, while you’re floating around down there, you can become a subscribing member of the channel. And for as little as $4 a month, you can, I don’t know. What is $4 a month at the end of the day? It’s really what it does in the aggregate, right? About 60 other people have decided to be so kind as to support my efforts here. So you would join that choir of angels.

And I would be eternally grateful to you in the aggregate. If you ran out of $4 a month, and maybe you died, you can, from the grave, you know, this is America. Dead people vote. So if dead people want to like, comment, and subscribe, they can, of course, do that as an alternate means of supporting the channel, from this mortal coil or from beyond. If, dead or alive, you would like help with your SQL Server, I am very good at all of these things. Some, and by some, I mean the nice folks at Beer Gut Magazine, might say that I am the best in three out of four hemispheres of the world at it.

If it weren’t for that damn island nation in the Pacific, I would reign supreme over the entire globe. And if you need a health check, some performance analysis, some hands-on tuning, if you are having a performance emergency, or if you want to get your developers training so that your SQL Server stops being consistently on fire, well, you can call me up and I’ll do that. And as always, my rates are reasonable.

Uh, if you would like some reasonably priced SQL Server performance tuning training, I have that as well. About 24 hours of it. Uh, available at that URL with that, with that coupon code, which is also fully assembled for you down in the video description.

So, there’s a lot you can do with the video description that will clarify many things in your life for you. SQL Saturday, New York City, coming your way on May the 10th.

That is this May the 10th of 2025, taking place at the Microsoft offices in Times Square. I highly suggest you find that event and buy a ticket, because space is limited and those tickets are going, what, faster than Sbarro pizza slices. So, with that out of the way, let’s do this office hours thing.

Got quite a lineup of questions here. First and foremost, we have this question. How do I size SQL Server for new project?

Where to start? What to take into account? THX! Well, there’s a lot of stuff to collect on this, right? New project.

Wow. A lot of stakeholders involved. Um, uh, you know, some, something that might help is if, uh, this is, uh, a new project. That, uh, is using an existing third-party piece of software.

Third parties will often publish some sort of minimum specs for what they expect out of a SQL Server. This would include hardware, version, edition, all that good stuff. And if this is for a third-party piece of software, you might even ask them what the typical, what the average installation is

for a SQL Server, at least as far as database size goes. They might have some ideas there that would help you out. Other things that you might want to take into account include talking to various stakeholders about the importance of this project.

Um, you know, it might, it might be something where, uh, you know, high availability and disaster recovery are a must off the bat. It might also be a thing where they’re like, let’s just, let’s just build an MVP and see how it goes. Uh, a lot of the times, um, you know, when these projects start off, uh, they are on, uh, you know, the, the database has no data in it.

So, if you’ve got that going for you, it doesn’t, almost doesn’t matter what, what size SQL Server you start with, uh, especially given the flexibility of hardware configurations, both with virtual machines and the cloud these days. Um, uh, I had another thing to say there. Um, if this is, uh, you know, a new project that is perhaps based off an existing project, you might take a look at the, the current set of things there.

Uh, like, you know, whatever, whatever hardware is in the current SQL Server and all that. Um, look, there’s, there’s not, there’s not a whole lot here for me to go on. So I could just keep listing off different things that you, you might think about and ask as you’re doing this.

But, really, this is the sort of thing that DBAs do have to get paid for, because sizing a SQL Server is not just a one-time, set-it-and-forget-it thing. You need to keep an eye on the performance of this SQL Server if this is something that you care enough about to ask the question. I would say that you want to keep an eye on those wait stats and make sure that whatever hardware you initially assigned is keeping up with the workload, the number of users, all the queries that are in there.

Um, you know, if it’s a, if it’s an in-house project and people keep adding features and stuff, you’re going to have to keep looking at indexes and other aspects of the database to make sure that it stays in touch with development reality. So, uh, I don’t know, like, I think maybe, maybe it depends a little bit on, uh, how much people are willing to spend on it at first. Uh, you know, if it’s, if it’s going into, it’s going into the cloud, uh, whether it’s on a VM or on a platform as a service offering from any cloud vendor, you might want to ask what people are willing to spend on it because that’ll do a pretty good job of dictating, uh, exactly how much hardware you can get out of it.

If it’s going to be an on-prem virtual machine or something, then budget has a little bit less of an effect. But you might think about Standard Edition, right? And I’m not saying that you would ever use Standard Edition.

I’m not calling you that much of a cheapskate. But what I am saying is that for a lot of people, when they build a Standard Edition box, like a VM somewhere, they will choose a set of hardware that maximizes the capability of SQL Server Standard Edition: somewhere between eight and 24 cores, depending on the workload that hits it.

And 192 gigs of memory. SQL Server Standard Edition is capped at 24 CPUs these days, and at least as far as I’m aware, SQL Server 2025 has not changed any of the capacity limits for Standard Edition, so you still have the 128 gig cap on the buffer pool. But a very common practice with Standard Edition is to give the SQL Server about 192 or so gigs of memory.

Set max server memory somewhere in the 180s. That way you have 128 gigs of memory for the buffer pool, and SQL Server is allowed to use the memory between the end of the buffer pool and max server memory for all sorts of other things.
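To put a number on that, here’s a minimal sketch of the memory setup described above. The exact value is an assumption (a starting point for a roughly 192 GB box); adjust it for your OS overhead and anything else running on the machine.

```sql
/*
    Sketch of the "maxed-out Standard Edition" memory setup described above,
    assuming a box with ~192 GB of RAM. The 180 GB figure is illustrative:
    128 GB buffer pool cap, plus headroom for other memory consumers,
    while still leaving memory for the operating system.
*/
EXECUTE sys.sp_configure
    @configname = 'show advanced options',
    @configvalue = 1;
RECONFIGURE;

EXECUTE sys.sp_configure
    @configname = 'max server memory (MB)',
    @configvalue = 184320; /* ~180 GB */
RECONFIGURE;
```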

So it might be smart to just start all your builds with a maxed-out Standard Edition build, even if you’re using Enterprise Edition. That’s usually an okay way to go. Of course, CPU count has a much bigger impact with Enterprise Edition than with Standard Edition.

It being around $2,000 a core versus $7,000 a core, of course. So again, it really does come back to budgetary constraints and what people are willing to spend on this hardware, doesn’t it?

So make sure you get those numbers right. Second question here is: when are you going to start adding... oh, that was the wrong button. When is ZoomIt going to start listening to me?

When are you going to start adding your reference links to the detail section so I can copy and paste? Well, I always endeavor to include all of the reference links in my video descriptions. If I ever miss them, please feel free to point it out.

I will aim to be as eventually correct as MongoDB and get those in there for you. But I am an imperfect soul, and all I can do is beg for your forgiveness and try to correct any issues in that area. And let’s see here.

Should one still care about index fragmentation in the days of Azure VM premium disks or local SSD’s? What? Apostrophe abuse right off the bat.

No. Look, this is something that I’ve talked about a bit. The type of index fragmentation that most people cared about at some point in time was logical fragmentation: data pages being out of order on disk.

And that is not something that I tend to care much about when it comes to SSDs, or flash, or memory (RAM, that is, not RAM disks; those are not for SQL Server). But there is always the specter of physical fragmentation, which is empty space on data pages.

That can affect scan density. That can affect read-ahead read size. And it might be something that you want to look at.

The problem is that not many readily available index maintenance scripts measure physical fragmentation. They all measure logical fragmentation. And there’s not really a good correlation between a logically fragmented index and a physically fragmented index, in either direction.

If you want to go and start measuring physical fragmentation, and you want to start rebuilding or reorganizing indexes in order to cram your data pages more densely with data, then you are welcome to figure out at what threshold that makes sense for you and pursue that endeavor.
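If you do want to measure it, a minimal sketch would look at page density from sys.dm_db_index_physical_stats. The table name here is a placeholder; note that the cheap LIMITED mode leaves the page density column NULL, so you have to pay for SAMPLED or DETAILED.

```sql
/*
    Sketch of measuring physical fragmentation (page density) for a
    hypothetical dbo.Posts table. avg_fragmentation_in_percent is the
    logical fragmentation most scripts look at; avg_page_space_used_in_percent
    is page density, and it is only populated in SAMPLED or DETAILED mode.
*/
SELECT
    index_id,
    index_level,
    avg_fragmentation_in_percent,   /* logical: pages out of order */
    avg_page_space_used_in_percent  /* physical: how full pages are */
FROM sys.dm_db_index_physical_stats
(
    DB_ID(),
    OBJECT_ID(N'dbo.Posts'),
    NULL,      /* all indexes */
    NULL,      /* all partitions */
    'SAMPLED'  /* LIMITED mode returns NULL for page density */
);
```

Whatever threshold you act on from those numbers is, as noted above, something you have to work out for your own workload.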

But if you are just asking me how to configure the usual maintenance scripts to do that, or something like that, it’s not in there. That is something that you have to figure out a bit on your own. And it’s not something that I want to get into the business of doing, because there are all sorts of situations out there where even physical fragmentation would have no profound effect on a workload.

If the majority of the queries are performing index seeks, well, it’s index scans that are affected by the page density that physical fragmentation lessens, and seeks don’t really take that sort of performance hit. Next up is a weird question.

When, if ever, will we see... no, ZoomIt, listen to me. When, if ever, will we see a Tyson versus Paul type grudge match between Erik and Brent? It’s a very strange question.

I wasn’t aware of a grudge between myself and Brent. If there is one that I’m unaware of, you can feel free to enlighten me about it. At least the last time I spoke to him, fairly recently, there was still no grudge.

The potential for a grudge match does imply the existence of a grudge, but at least I am unaware of one. And finally: your face looks smaller lately.

Are you sick? Well, thanks for noticing. I do appreciate it. A fella does work hard to keep in shape. No, I’m not sick.

I am as well as I’ve ever been. I did lose some weight, though. If you care about the full story, you can keep listening. If you don’t care for any sort of explanation, you can stop watching the video now.

Around about 2016, I got very much into strength training, like a powerlifting type thing. Before that, I just kind of horsed around in the gym like many misled, time-wasting young gentlemen. After a while, I realized this wasn’t really getting me where I wanted to go.

So I discovered the Starting Strength program by Mark Rippetoe, and I started doing the novice linear progression. After a few years, I got my lifts up pretty good.

But getting stronger does require putting on body weight. Generally, if you want to gain muscle, you’re bound to gain some fat along with it. I bulked up to about 220, 225, somewhere in there.

And I had lifts to match. I was deadlifting around 600. I was squatting over 500.

I was benching over 300, and I had a 250-something pound overhead press. These are all singles; this wasn’t me just banging out reps with that stuff. But I did get my lifts up pretty high.

And I felt kind of okay about it, because that was the goal I was pursuing at the time, and the additional body mass was sort of required to pursue it. But then COVID came around, and in New York, gyms closed down for quite a while.

And when they reopened, you still had to wear a mask in the gym, which I did try to do a couple, few times, but for the type of lifting that I was doing, it was very uncomfortable.

So I did drop my weight back down a little bit. I lost about 20, 25 pounds, because I just wasn’t working out, and carrying all that around when you’re not actively lifting heavy weights is kind of pointless. But then I could just never get it back. I got older; I’m in my mid, middish forties now.

And I could just never get the type of training going where I was getting my lifts back up to that. Right?

After that long of a layoff in the gym, you’re starting basically from scratch. So, yeah, I just kind of realized, as an aging gentleman struggling a bit with training consistency and stuff like that, that I just wasn’t going to get my lifts back up to where they were.

So I decided, lifts be damned, and pursued just losing weight. I was still lifting weights, but my lifts, for what they were, dropped way down.

Now I’m just sort of slowly trying to add some strength back into the mix and get them at least back to some respectable numbers. But that’s going to be a slow process, because I am not in the same bulking frame of mind that I was the first time around.

So, you know, the idea is to get a bit stronger, but also not bulk back up to some horrible weight. I’m at the point of my life where doctors are starting to prescribe interventions for certain things, like blood pressure, cholesterol, and all that other stuff they give you because you’re going to die if you don’t take it, or something.

So I’m not sick, but I was feeling not so great there for a bit, because I had all of the outer symptoms of a powerlifting career without any of the strength that went along with it, which is not a terribly good combination.

So that’s my story there. If you listened, thank you. If you didn’t, I totally understand. Anyway, that does bring us to the end of this office hours. Again, you can submit your questions via the link down in the video description that says Office Hours, and I will happily answer them.

Thank you for watching. I hope you enjoyed yourselves. I hope you learned something. And I hope that if you are a young person wasting your time in the gym, you will stop doing that.

Get some barbells in your life. It’s a much better way to live. Anyway, thank you for watching. Goodbye.

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. Blog readers get 25% off the Everything Bundle — over 100 hours of performance tuning content. Need hands-on help? I offer consulting engagements from targeted investigations to ongoing retainers. Want a quick sanity check before committing to a full engagement? Schedule a call — no commitment required.

Why Does My Trigger Have Multiple Plans In SQL Server?



Video Summary

In this video, I dive into an intriguing aspect of SQL Server: why triggers might have multiple execution plans. I share my insights on how and why these plans can differ based on the number of rows being processed, explaining that SQL Server caches plans for single-row and multi-row scenarios separately, which can lead to confusion if not understood properly. Along with practical demonstrations using the SSMS 21 preview in dark mode (your feedback on this setup is welcome), I walk you through how to identify these different execution plans within the plan cache. This knowledge is useful for anyone dealing with complex trigger logic and performance tuning in SQL Server environments.

Full Transcript

Erik Darling here with Darling Data, and we have a very exciting video for you today here from Darling Data Enterprises. This is all about: why do I have multiple plans for triggers? There are, you know, probably some other external reasons why you might see multiple plans for the same trigger. For example, if you have the same trigger across multiple databases, and you look at the plan cache without taking the database context into account, you might see multiple things in there. But this is a much more interesting internal reason for why you might have multiple plans for your trigger. Before we get into all that: man, I love you all so much for the support that you give this channel. And if you want to be included in the people who I love for giving support to this channel, you can do a couple of things. You can sign up for a membership, and if you do that using the link down in the video description, for as few as $4 a month, or we’ll call that one espresso buck, you can support the content that I create on this channel. If you have spent all your money on caffeinated beverages, or other assorted methamphetamines or uppers, or poppers, maybe, whatever you’re into, you can like, you can comment, you can subscribe. And if you want to ask a question privately that I will answer publicly during my office hours videos, that link right there, by my very fancy extended pinky, is also down in the video description.

Slide, please. If you need help with your SQL Server beyond the scope of what a simple question, a YouTube video, a blog post, or anything else can help with, and you’re in the market for a young, handsome consultant with reasonable rates, I am the best in the world at all of these things.

That’s a short list of all the things in the world I am the best at, but it covers most of the ground with SQL Server. There are several other things, not SQL Server related, that I am the best in the world at, such as picking the bottle of wine from the wine list that the restaurant has run out of. I am tops at that. Cannot be beat. I am undefeated at that.

Indefeatable. Invictus, or something. If you would like to get some training on SQL Server performance tuning, and you don’t feel like spending hundreds or thousands of dollars a year for a subscription or whatever, you can get all of mine for about 150 bucks, and you can keep it for the rest of your life. There’s about 24 hours of it, and the fully assembled method for retrieving this wonderful deal is also down in the video description, yonder over there.

All right. SQL Saturday New York City 2025 is on May the 10th, at the Microsoft offices in Times Square. I will be there in various capacities, doing things. I’ll probably even be wearing the same outfit, so I will be highly recognizable to you, the general public. But with that out of the way, slide please. Let’s talk about why triggers might have multiple plans. Now, I need some helper objects, some tables, and just to make life easy, they’ll just have a couple of rows each: trigger test and trigger audit. We’re going to pretend the audit table captures stuff about what got put into the test table. Then we’re going to have a trigger, and it’s going to be an AFTER INSERT trigger, like so. We are going to insert whatever exists in the inserted virtual table into the audit table, right? A very simple thing. Nothing, I hope, too out of the ordinary. We should make sure that we do this correctly, though; we should do CREATE OR ALTER. And I am recording another video here using the SSMS 21 preview with dark mode. If there’s any feedback on my use of dark mode or my use of SSMS 21, please let me know, because I want to make sure that I’m making the best possible videos. I know some folks out in the world dislike dark mode; other people love it. So just tell me how you’re feeling about it. That would be wonderful. To round out this demo, we’re going to clear out the procedure cache. And I’m going to pause now to tell you that SQL Server caches plans for triggers in two different ways internally, as part of the plan caching mechanism.

There is a plan for your trigger that covers a single row, and then there will be a plan for your trigger when there are multiple rows in the inserted or deleted virtual tables. That’s what we’re going to look at here, and that’s what I’m going to show you with my fancy query down below. Right now, of course, there should be nothing in the plan cache, since I just cleared it, and the query will tell us as much. And in between recordings, I managed to remember to increase the size of my grid text, so now we don’t have to go blind together staring at that.

But what we’re going to do now is insert a single row into our table. And now let’s interrogate the plan cache. We will see that we have a plan cached for a single row in there, right? Which is exactly what we did. Now, if we insert multiple rows into our trigger test table, we are going to have a second execution plan added, a multi-row one. So here we go: we have the one use count for our trigger object type. And the set options: if you do some fancy bitwise thing (I forget what it’s called, bitwise something, maybe) with the number 24 and the set_options attribute in dm_exec_plan_attributes, you can decode between one-row and multi-row trigger plans. So if we look at the first plan that we cached, the insert plan, we will see that the number of rows in it is all estimated at one. Oh, we should probably close that out. See, this is the one thing that I dislike about the dark mode: not everything is dark mode yet. So when you do things like open up query plans, you can, like, go blind. It’s like in Big Trouble in Little China, when Lo Pan does the eye-light thing at Kurt Russell. Jack Burton, whatever his name is in that. And I don’t know if I clicked on the right one there. Let’s go back and try that, and make sure I did. All right. So if we now click on the execution plan for the multi-row trigger and look at it, we will see that these numbers have changed, or rather, that they are just different in this plan. These numbers have changed between the single-row plan and the multi-row plan; obviously, now we have three rows instead of one. So if you are looking at your plan cache, and you are puzzling over why you have multiple plans for some of your triggers, the answer could be as simple as this: you are storing a plan for the execution of the trigger for a single row, and you are also storing a plan for the execution of the trigger when it processes multiple rows.
Perhaps not the most interesting, perhaps not the most titillating, psychologically traumatizing SQL Server content that I’ve ever produced, but it is a useful bit of SQL Server knowledge and trivia nonetheless.
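The sort of plan cache query described above can be sketched roughly like this. The exact decoding of the set_options flags is as described in the video and should be treated as an assumption to verify on your own system; this sketch just surfaces the raw flag value alongside each cached trigger plan.

```sql
/*
    Sketch of inspecting cached trigger plans. The transcript describes
    ANDing the set_options plan attribute with 24 to distinguish one-row
    from multi-row trigger plans; the exact flag meanings are an assumption
    to verify against your own plan cache.
*/
SELECT
    trigger_name =
        OBJECT_NAME(st.objectid, st.dbid),
    cp.usecounts,
    cp.objtype,
    row_mode_flags =
        CONVERT(integer, pa.value) & 24, /* differs between one-row and multi-row plans */
    qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) AS pa
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE cp.objtype = N'Trigger'
AND   pa.attribute = N'set_options';
```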

So yeah, I’m probably just gonna can this one here. We’re gonna talk about some other stuff coming up in other videos. We’ll probably continue on with the stored procedure series, because I owe you a few videos remaining on that. I believe we have four or five left to cover, so we’ll get those done. And then we’ll just see what happens next. The world is our SQL oyster, or something like that. Anyway, thank you for watching. I hope you enjoyed yourselves, I hope you learned something, and I will see you over in the next video. Goodbye.

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. Blog readers get 25% off the Everything Bundle — over 100 hours of performance tuning content. Need hands-on help? I offer consulting engagements from targeted investigations to ongoing retainers. Want a quick sanity check before committing to a full engagement? Schedule a call — no commitment required.

All About SQL Server Stored Procedures: Dynamic SQL For Performance



Video Summary

In this video, I delve into using dynamic SQL to address performance issues in stored procedures, particularly focusing on the pitfalls of parameter sniffing and if-branch statements. I demonstrate how dynamic SQL can help mitigate bad estimates and unwanted query plan compilations by ensuring that each branch is treated uniquely. By incorporating a “replace me” token within the dynamic SQL queries, we inject specific conditions based on input parameters, forcing SQL Server to generate distinct execution plans for different scenarios. This approach not only tackles cardinality estimation issues but also helps in managing parameter sensitivity across executions. I walk through creating and executing stored procedures that dynamically adjust their query text based on input parameters, showcasing how this technique can significantly improve performance by avoiding the pitfalls of traditional if-branch logic.

Full Transcript

Erik Darling here with Darling Data. And in this episode, where we will continue to sermonize about stored procedures, we’re going to talk, of course, about using dynamic SQL to fix performance problems. There are three main problems that we’re going to talk about, and then a fourth sort of bonus problem. So we’ve got that to look forward to over the next episode, and the rest of our lives. But before we go into all that, let’s talk a little bit about some fun stuff, some interesting things in our lives. If you like my content, or if you find the things that you watch and learn here worth money, you can click the link down in the video description, right about there, and you can become a subscribing member of the channel for as few as $4 a month. If you are unable to scrounge $4 a month from the couch cushions, or mom’s purse, or whatever, you can do other stuff to support my efforts here: you can like, you can comment, you can subscribe. And if you would like to ask a question privately that I will answer publicly during my Office Hours episodes, that link is also down below for you.

If you would like me to show up semi-live, probably via Zoom call (though if you want me to show up at your offices, that’s fine with me too; I’m not going to complain too much, money’s money), I can do all of these things here, and I do them better than anyone else in the world outside of New Zealand: health checks, performance analysis, hands-on tuning, responding to performance emergencies, and training your developers so I don’t have to respond to performance emergencies. All worthy goals, and as always, my rates are reasonable. If you would like to get some training for yourself, or maybe a friend, family member, or colleague, you can get all 24 or so hours of mine for about $150 USD using that discount code at that URL up there.

That is also completely assembled for you down in the video description. SQL Saturday New York City 2025 is coming your way May the 10th. I will be there slinging sandwiches and cookies and bags of chips right in your face.

We have a performance tuning pre-con on May the 9th, with Andreas Walter teaching us about performance doodads and gizmos and whatnot, and I will be proctoring that. So at the very worst, I can throw a sandwich in your face two days in a row.

With that out of the way, though, let’s talk about dynamic SQL stuff. And I have already done the wrong thing. I promised a while back that when SSMS 21 had support in SQL Prompt, I would do one of these using SQL Prompt and the dark mode thing.

Well, SQL Prompt 10.16 added support for the SSMS 21 preview, per their documentation. There were a few things that I had to do to get SQL Prompt showing up in here.

But that’s okay. So this is SSMS 21 with dark mode. The things are all dark.

Well, some of the things are all dark. Some of the things are not dark yet, namely query plans. Query plans are very much not dark. But, you know, there’s only so much you can do. I’m a little blurry here.

I think I want to crisp myself up a little bit. There we go. Now I’m feeling crispy. Maybe? No, I think I went a little too uncrisp. No, that’s less good.

There we go. All right. Now I’m feeling zombified. All right. So, the stuff we’re going to talk about: dynamic SQL. It’s very good for things like IF branches, plan compilation, parameter sensitivity, and complex runtime logic. Things that you put in your join or where clause, where it’s like: where parameter equals something.

Then do this other thing. And if the parameter or variable is this other thing, do this other thing.

And add this other thing on. Explore this branch. All that stuff sucks for the optimizer, and we’ll talk about that. We are also going to talk about one bonus topic around filtered indexes and dynamic SQL, and we will use a somewhat similar pattern to get around some limitations there.

But my goal here is to show you an example of, well, I guess, all (not both) of these things, and to build on some of the dynamic SQL stuff that we talked about in the previous video, about using dynamic SQL safely and correctly.

For a lot of the things that we are going to talk about with dynamic SQL, I’m going to be upfront and honest with you: a statement-level recompile hint would solve a lot of these problems.

Not a stored procedure level recompile hint, but a statement-level recompile hint would solve the majority of this stuff, and you would not have to write or worry about dynamic SQL.

Whether that’s appropriate for whatever situation you are in is up to you. If you want to use a recompile hint, I don’t care. It doesn’t bother me.

I’m not here to, like, warn you of some atrocity, that your CPU is catching fire or anything like that. I use recompile hints all the time. They’re fantastic. They solve a lot of stuff. You just can’t always get away with them.
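To illustrate the statement-level versus procedure-level distinction, here’s a quick sketch with made-up object and parameter names:

```sql
/*
    Statement-level vs. procedure-level recompile, with made-up names.
    OPTION (RECOMPILE) recompiles only the one statement it's attached to;
    WITH RECOMPILE recompiles the entire procedure on every execution,
    which is usually more than you want.
*/
CREATE OR ALTER PROCEDURE dbo.StatementLevel
    @PostTypeId integer
AS
BEGIN
    SELECT COUNT_BIG(*)
    FROM dbo.Posts AS p
    WHERE p.PostTypeId = @PostTypeId
    OPTION (RECOMPILE); /* statement-level: just this query */
END;
GO

CREATE OR ALTER PROCEDURE dbo.ProcedureLevel
    @PostTypeId integer
WITH RECOMPILE /* procedure-level: everything, every time */
AS
BEGIN
    SELECT COUNT_BIG(*)
    FROM dbo.Posts AS p
    WHERE p.PostTypeId = @PostTypeId;
END;
```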

There are also situations that we are going to talk about where nested stored procedures, the wrapper stored procedures like we talked about in another video, would be either sufficient or preferred.

Usually, things around security and permissions would drive you to that over dynamic SQL. I’m not saying that dynamic SQL can’t be done correctly for that stuff.

I’m just not the person to get training from about security and permissions, because I don’t give two toots of a horn about either one. But once you get into using dynamic SQL, there are all sorts of fun and creative ways to use it to avoid lots of problems. So let’s dive right into it.

So I’ve created a couple of indexes: one on the Posts table, and one on the Votes table. We are going to be using those in our first stored procedure example. Now, we’ve talked about this in the past, but this is the stored procedure series, so we’re going to talk about it again.

We’ve got a query here that will run if post type ID is not null, and we’ve got a query here that will run if vote type ID is not null. But as we have talked about in previous iterations of discussing this sort of IF logic in stored procedures, SQL Server does not do... oh, I should have a GO in there. Just to be safe.

SQL Server does not do a particularly good job of managing plans like this. If I go and grab the estimated plans for these two things, we’re going to see some stuff that looks rather different. Take a look at this one, where the top branch is parallel and the bottom branch is serial, single threaded. And now we look at the bottom one, the second execution, where the null and not null parameters have been reversed, and the top branch is single threaded and the bottom branch is parallel. What you’re going to notice about both of these is that the non-parallel branch only has a one-row estimate, and that’s going to be true for up here as well. Now, since I have just gotten estimated plans for these, we have no cached plan, which means whatever order I run these in, and compile and cache a plan for, will be the one that gets the better estimate for the non-null parameter value.

So if we execute this where post type ID equals four, we get a perfectly fine execution plan; that is the parallel plan that we discussed above. SQL Server is asking for an index on the Posts table, but as of right now, the way that we’re hitting the Posts table is not of any significance. This, of course, goes right down el tubo when we run this for vote type ID 10, because now post type ID is null, and this takes a full eight seconds to give us a query plan.

And we can see where that one-row estimate is no longer our friend, because we got a whole bunch of rows back, and we spent a whole bunch of time doing all this stuff.

If we look in the properties here, at the vote type ID parameter, you’ll see that it was compiled with a null but executed with a 10. So we are already off to a very bad start. Now, of course, if I flip this around and run it for vote type ID, let’s just say, 12 first, we get a pretty quick execution, and what we end up with is a serial plan that now has correct estimates for that vote type ID. But when we go and run this for post type ID equals one, the post type ID query is going to be the one with the one-row estimate. And if you’re familiar with either my videos or the Stack Overflow 2013 database, you will know that post type ID one has six million rows in the Posts table, not one row. So this takes 16 seconds to run, and you can see where estimating one row here was no longer such a great idea. It’s the exact same scenario as above, where post type ID would have been compiled with a null value, and we would not have a good time.

Because a lot of the stuff. That I end up tuning. Looks a lot like this. So. I’m just going to have to deal with that. What we can do. To prevent. The execution.

Of unwanted. Or rather the optimization. And compilation of query plans. For unwanted. Or unexplored if branch statements. Is to make the whole thing dynamic.

What this will fix. Is the bad estimates. That come with. The queries that are in the if branches. This will not fix parameter sensitivity.

Within parameter uses across executions. One thing that it is very important. And important to note. Is that in. Like it doesn’t matter. With the if branch.

And it doesn’t matter. With using dynamic SQL. In this way. Like you still are. You still have the potential. For parameter sniffing. So.

Now. Another thing to keep in mind. Is that we can no longer. Just recompile the store procedure. In order to show plan differences. Now we have to. We do have to clear out the procedure cache. Because.

Or like. We could clearly look up. Like SQL handles. Or plan handles. Or something. To do this a little bit more surgically. But. Just a quick means to an end. For these demos. Is to just run dbcc free proc hash. To clear stuff out.

But just to show you what I mean about the parameter sensitivity thing: if we hit Ctrl+L to get an estimated plan here, notice that we no longer have any of this stuff. We no longer have the full query plan for either of these things coming out here, right? We just have EXECUTE PROC, which means that when we run this, the query responsible for vote type ID won’t do anything. It’ll just get passed over; it’s left alone. It’s only when we hit something that gets executed in here that a query plan arises for it. But this is what I mean by the parameter sensitivity thing. Even using dynamic SQL, if we run this for, what was it, post type ID 4 first, and then post type ID 1 second, post type ID 1 reusing the query plan for post type ID 4 does not work out so well. This is not a good time. This takes six seconds to run, and we end up spilling a whole bunch of stuff, here and here. We solved one of the performance problems, but we still have an additional performance problem. Well, I mean, it does matter that we fixed the first performance problem with dynamic SQL, but we still have the parameter sensitivity issue to deal with.

We can fix that by looking at the frequencies at which these post and vote type IDs occur in the tables. I still have to fix the font size on this, but for now, we can just use some advanced zooming. Unadvanced zooming, apparently. We can look at the counts for these things, and we can figure out that maybe we can do our own version of parameter sensitive plan optimization, and bucket these things in ways that make sense. Right. So what we’re going to do is create this procedure, called if branch compilation dynamic plus.

And this is going to take. An extra step. Along the way. What I’ve done. Is I’ve bucketed. What. Well actually. Start up here a little bit. In both of the.

Dynamic SQL queries. We now have this little token. That says replace me. Right. And down. Before we execute this. We’re going to take one more step. With the dynamic SQL.

SQL. And we are going to say. Replace. And. We’re going to look in the. SQL. Placeholder that we have here. For the text. At replace me at. And if post type ID equals one.

We’re going to inject. One equals select one. If post type ID equals two. We’re going to inject. Two equals select two. If post type ID is in four or five. Then we’ll do three equals select three. If post type ID is in three.

Six seven eight. Then we’ll do four equals select four. And if someone passes in. A completely different post type ID. We’ll do five equals select five. What putting this branch in. Or what putting.

Doing that replace me thing does. Is it prevents. It like basically. Like makes the query hash out. To a different value. And it makes SQL Server. Come up with a unique query plan.

For any one of those. Select one equals. Whatever. Two equals. Three equals. Four or five equals. So we’ll get a different query plan. For each one of those. I’ve also done something similar.

With the vote type ID branch. We have the same replace me thing here. And we have a very similar. Replace call. With different vote type ID. Things.
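A minimal sketch of the token-replacement bucketing described above, assuming a Stack Overflow style Posts table; the bucket boundaries come from the narration, and the names are approximations rather than the demo's verbatim code:

```sql
DECLARE
    @PostTypeId integer = 4,
    @sql nvarchar(MAX) = N'
SELECT COUNT_BIG(*)
FROM dbo.Posts AS p
WHERE p.PostTypeId = @PostTypeId
AND   @replace_me@;';

/* Each bucket rewrites the token to a different throwaway predicate,
   so the query text (and therefore the query hash) differs per bucket,
   and SQL Server compiles a separate plan for each one. */
SELECT @sql =
    CASE
        WHEN @PostTypeId = 1
            THEN REPLACE(@sql, N'@replace_me@', N'1 = (SELECT 1)')
        WHEN @PostTypeId = 2
            THEN REPLACE(@sql, N'@replace_me@', N'2 = (SELECT 2)')
        WHEN @PostTypeId IN (4, 5)
            THEN REPLACE(@sql, N'@replace_me@', N'3 = (SELECT 3)')
        WHEN @PostTypeId IN (3, 6, 7, 8)
            THEN REPLACE(@sql, N'@replace_me@', N'4 = (SELECT 4)')
        ELSE REPLACE(@sql, N'@replace_me@', N'5 = (SELECT 5)')
    END;

EXECUTE sys.sp_executesql
    @sql,
    N'@PostTypeId integer',
    @PostTypeId;
```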

Now remember, for vote type ID 2 (do I still have those counts up? No, I got rid of those), that one was an island unto itself, with 37 million rows, so we want that value isolated all on its own. But if we run this now for post type ID 4, like we did last time, we still get a nice, quick execution plan, and you'll see 3 = (SELECT 3) injected into the query there.

If we run this for post type ID 1, well, granted, there's stuff we could probably do to tune these further. I'm not saying these couldn't be better, but this does solve the majority of the issues that we first saw with just the normal IF branches: the plan compilation and cardinality estimation thing, and then later the parameter sensitivity thing with sharing plans across different parameter values, for both post type ID and vote type ID. So now, in this one, we have 1 = (SELECT 1), and these two executions definitely got different execution plans.

The same thing works for vote type ID. If we run this for vote type ID 12, we get this silly little execution plan, with 3 = (SELECT 3) injected into the executed query. And if we run this again for 10, we get a completely different execution plan, with 1 = (SELECT 1) injected into the query text. So that solves the problem for us.

That handles both the IF branch compilation and cardinality estimation problems, and then later the parameter sensitivity issues. Next, we're going to talk about replacing complex query logic with dynamic SQL.

This is a very simple demo, just to make sure I can get the point across in a reasonable time frame. We're going to have this procedure here, called complicated runtime logic, with two parameters: one called check posts and one called check comments. What that ends up as is an EXISTS check if check posts equals true, and then another EXISTS check if check comments equals true. There are all sorts of ways you could arrange this that won't make a lick of difference. You could use CASE expressions, you could use an AND outside of the EXISTS, you could write this in any number of ways. But as long as you write it in a way where SQL Server has to evaluate this thing no matter what, you're going to get weird execution plans from it.

So let's just make sure we have this created correctly, and now let's do a worst case scenario, where we execute this first with both things being false. We get this query plan. Maybe not the best query plan in the world, but this is what happens. Now, if we execute this with both of them being true, it's going to take a little while to run, because SQL Server did its cardinality estimation with those branches not having anything going for them, and now it's actually executing the query and dealing with the repercussions of such terrible cardinality estimates. If we look at the execution plan for this, this is what it looks like: we get 17 million rows off an estimate of one, and a lot more off estimates of one and ten elsewhere.

What happens when you write it like this is that SQL Server uses what's called a startup expression predicate, and those get sniffed just the same way that any other parameter can. So if check posts equals one, or true, then that branch will do something, but the cardinality estimate was done for check posts being zero, or false. So we got just a real crappy plan that took almost 20 full seconds to run here.
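To make the shape of the problem concrete, here's a sketch of both versions: the single-statement procedure with startup expression predicates, and the dynamic rewrite the demo moves to next. Table and column names follow the Stack Overflow demo database; the procedure bodies are my approximation, not the demo's verbatim code.

```sql
/* The problem: both checks live in one statement, so SQL Server
   compiles startup expression predicates whose estimates get
   sniffed from whichever parameter values compiled the plan. */
CREATE OR ALTER PROCEDURE dbo.complicated_runtime_logic
    @check_posts bit,
    @check_comments bit
AS
BEGIN
    SET NOCOUNT ON;

    SELECT records = COUNT_BIG(*)
    FROM dbo.Users AS u
    WHERE (@check_posts = 0
           OR EXISTS (SELECT 1/0 FROM dbo.Posts AS p
                      WHERE p.OwnerUserId = u.Id))
    AND   (@check_comments = 0
           OR EXISTS (SELECT 1/0 FROM dbo.Comments AS c
                      WHERE c.UserId = u.Id));
END;
GO

/* The fix: only append the branches that are needed, so each
   parameter combination gets its own query text and plan. */
CREATE OR ALTER PROCEDURE dbo.complicated_runtime_logic_dynamic
    @check_posts bit,
    @check_comments bit
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @sql nvarchar(MAX) = N'
SELECT records = COUNT_BIG(*)
FROM dbo.Users AS u
WHERE 1 = 1'; /* convenience anchor for appended predicates */

    IF @check_posts = 1
        SELECT @sql += N'
AND EXISTS (SELECT 1/0 FROM dbo.Posts AS p WHERE p.OwnerUserId = u.Id)';

    IF @check_comments = 1
        SELECT @sql += N'
AND EXISTS (SELECT 1/0 FROM dbo.Comments AS c WHERE c.UserId = u.Id)';

    EXECUTE sys.sp_executesql @sql;
END;
```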

This is another one of those things we could spend all day looking at. Well, not all day; we could probably spend another couple of minutes looking at true/false and false/true combinations for this. But it's good enough as is. What I want to do here is just use a dynamic version of this to show you how it would work in real life.

With dynamic SQL, we would just simply do this. Just for convenience, a WHERE 1 = 1 here is fine, and then, if check posts equals true, we tack the posts EXISTS clause on, and if check comments equals true, we tack that one on in here too. That should all be fairly straightforward. If we run this with check posts being false, and whatever for the other one, we just get a count from the Users table. I probably messed something up logically in the first one here, because we got zero back from that, but it's not a big deal; it's good enough to get the point across.

Now, if we run this with check posts equal to true and check comments equal to true, we get a much different execution plan, where, since SQL Server actually had to append these checks in, it used them. So we use the indexes that we created, and we scan them, which is fine, because we're doing hash joins and we don't really have much of a WHERE clause on there. But this is another way to solve a complex query logic problem. So the last thing that I want to show you is how you can use a sort of similar approach

to deal with filtered indexes, with dynamic SQL. Normally, when you create a filtered index, which we've done here, where Reputation is greater than 100,000, and you run a parameterized query, SQL Server can't use that index, even if the parameterized query matches the filter expression. That's because SQL Server needs to cache an execution plan that is safe for any parameter that gets passed in. That's what you see here.

SQL Server will warn you about this in the query plan, too. If you look here, we'll see this unmatched index thing. It will tell you that we had an unmatched index because of parameterization; that is the index we created on the Users table. We have this unmatched index warning down here as well. All of this stuff will tell you that there was a filtered index available to use,

but SQL Server was unable to use it. Of course, using an approach just like before, we can do this: if the reputation parameter is greater than or equal to 100,000, then replace the greater-than-or-equal-to-reputation predicate with greater than or equal to 100,000. Now, I know this doesn't actually help a lot on its own, because if someone passed in a reputation of, say, 100,001, this exact replacement wouldn't make any sense. The idea here is just to show you an example of one way to get around this, where things would work out. And if we run this,

now, all of a sudden, our execution plan will show a scan of our nonclustered index, and we no longer have the unmatched index warning. Rather than having a parameter in the predicate, we just have the literal in there. Now, there are different ways to accomplish this. Instead of using REPLACE with literal values, you could concatenate whatever the reputation parameter is directly into the string. You could also insert the reputation parameter into a table variable or a temp table, and do the replace based on whatever value that is. There are other ways you could do this that would work just fine for any value that got passed in here.
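A minimal sketch of the replace-with-literal approach: the Users filtered index and the 100,000 threshold come from the demo, while the index definition, query, and names are my approximation:

```sql
/* A filtered index like the demo's */
CREATE INDEX u_high_rep
ON dbo.Users (Reputation)
INCLUDE (DisplayName)
WHERE Reputation >= 100000;

DECLARE
    @reputation integer = 100000,
    @sql nvarchar(MAX) = N'
SELECT u.DisplayName
FROM dbo.Users AS u
WHERE u.Reputation >= @reputation;';

/* When the parameter value qualifies, swap it for a literal so
   the cached plan is provably covered by the filtered index. */
IF @reputation >= 100000
BEGIN
    SELECT @sql = REPLACE(@sql, N'@reputation', N'100000');
END;

/* The parameter stays declared; it's simply unused in the
   branch where the literal was substituted. */
EXECUTE sys.sp_executesql
    @sql,
    N'@reputation integer',
    @reputation;
```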

And also, just to complete the circle, an OPTION (RECOMPILE) hint would allow you to bypass this as well, because you would get the parameter embedding optimization with a recompile hint that you wouldn't get otherwise. So another approach might be to say: if reputation is greater than or equal to 100,000, then add OPTION (RECOMPILE) onto the end of the string. There are various ways to take care of it; this is just a simple one to help you understand the problem and different ways to approach it. So these are typical ways that I use

dynamic SQL to fix performance issues in SQL Server stored procedures. Again, statement-level OPTION (RECOMPILE) hints do fix a lot of this stuff for free. Well, not exactly for free, but without having to write a whole bunch of dynamic SQL and worry about things. The OPTION (RECOMPILE) hint does have compilation overhead, so if you have queries that take a long time to compile, it might not be the best idea. But I think that's a fairly rare problem.

Anyway, thank you for watching. I hope you enjoyed yourselves, I hope you learned something, and I will see you in the next video, where we will talk more about stored procedure stuff. Maybe I'll even surprise myself; it's hard to tell what'll happen there. Anyway, cool. Thank you for watching. Goodbye.

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. Blog readers get 25% off the Everything Bundle — over 100 hours of performance tuning content. Need hands-on help? I offer consulting engagements from targeted investigations to ongoing retainers. Want a quick sanity check before committing to a full engagement? Schedule a call — no commitment required.

Things I Wish Inline Table Valued Functions Helped With In SQL Server



Video Summary

In this video, I delve into the disappointment surrounding inline table valued functions in SQL Server, particularly focusing on their limitations and how they fall short of expectations. I explore why these functions don’t adequately address issues like local variables and kitchen sink predicates, which often lead to suboptimal query plans despite their potential benefits. Through practical examples, I demonstrate how even with indexes in place, inline table valued functions can still result in inaccurate cardinality estimates when dealing with parameters or local variables, leading to inefficient execution plans.

Full Transcript

Erik Darling here with Darling Data. In this video, we're going to talk about where I feel disappointed by inline table valued functions. Now, inline table valued functions are, in general, my preferred kind of user defined function in SQL Server, because scalar UDFs, despite the scalar UDF inlining feature (which obviously can't fix all of them), and multi-statement table valued functions, which return the results of a table variable, both often have many, many performance issues. My disappointment with inline table valued functions mostly comes from the things they don't address that they seem like they'd be a good vehicle for: things like local variables, things like kitchen sink predicates. You would just hope there'd be some better way of dealing with that stuff than the current tools and methods we have, but inline table valued functions don't give us one. So I'm going to talk about that in this video. Then, if the slide will kindly move forward, thank you.

If you would like to support me, and Bats, in coming up with this sort of content for you, then you can do that. There is a little button that Bats is pointing to, or rather a link, where you can become a paid member of the channel. For as low as $4 a month, you can help keep my eyebrows in good shape. If you do not have $4 a month, perhaps because your own grooming routines take up the majority of your disposable income, you can always put your fingers to work instead: you can comment, and you can subscribe.

That also gives me all sorts of warm, fuzzy feelings. It does not do much for eyebrow shaping, but still feels pretty good. If you want to ask questions privately that I will answer publicly during my Office Hours episodes, you can do so.

That link is also available for you in the video description, and it’s a good time for everyone. If you need help with SQL Server, boy, do you. Let me tell you how much you need help with SQL Server.

More than I can fit on the screen. I am available for consulting. Believe it or not, I do all of these things at a very reasonable rate, and according to many of our nation’s finest publications, I am the best SQL Server consultant in 75% of the Earth’s hemispheres when it comes to performance tuning.

If you want some awesome SQL Server performance tuning training, well, golly and gosh, don’t I have it. I have about 24 or so hours of it. You can get it all for about 150 USD with that discount code right there.

And of course, coming to you live and in person, SQL Saturday, New York City, 2025, May the 10th, with a performance tuning pre-con on May the 9th with Andreas Walter teaching us about performance tuning stuff. I will be there handing out lunches, making sure that everyone’s happy, and I don’t know.

Maybe this will be my chance to return to bouncing. Maybe I’ll work security and just sit there and stare glumly at people and every once in a while just walk into bathrooms and make sure there’s only one set of feet in the stall.

It's a hard job. Anyway, let's talk about inline disappointments here. Now, I've got an index that I've already created on the Posts table. It's a great index, maybe the best index I've ever created, and it's basically all we need for this example. I've also got a first inline table valued function here, which brings us to my first disappointment with inline table valued functions: they don't really help with local variable problems, right?

So if we run this query here with a literal value, SQL Server, just as you would expect, is able to take that literal value, apply it as an index seek, and do accurate cardinality estimation.

If we hover over this, you'll see that it is actually passed in as a literal value. This might be some foreshadowing for future demos, but let's not get too far ahead of ourselves. But if we run this for other values like three, or, that's a two, Erik, fingers, fingers, we will get a plan for that with accurate cardinality.

So with literal values, even though this is a parameter up here, when we pass a literal value in, SQL Server is like, dope, I got it, I can figure this out. But as soon as we start doing things where we declare a local variable and set it equal to something, then, even though SQL Server is still able to seek into the index, we start getting these wacky cardinality estimates. And it doesn't matter what we change this to. Again, fingers, three. We will get our 160-something rows back, but SQL Server will still guess this same number of rows, which is probably not the greatest thing in the world, right?

It's like, why can't you just pretend everything's a literal? Why do you have to do this to me? So that's no fun. Another, well, what was I doing? These two, right? Yeah, we still get the wacky cardinality estimates for both. I should probably change these to the same number so that it makes a little more sense, right? So we do this, and we start getting the wacky cardinality estimates even from the inline table valued function. If I change both of these to three, we'll get the same thing. Now, where it gets a little disappointing is with parameters, because you would want SQL Server, maybe, to be able to use inline table valued functions and not give you parameter sniffing problems.

But unfortunately, they don't help us get around this either. If we run this first for three and look at the cardinality estimates, we see 167 there. And if we bump this back up to four, we get the slightly higher number of rows back, but we will still be reusing the cardinality estimate from the previous execution. This is just one of those things where you would hope that something inline, something that returns a SELECT, would do a little bit more for you. But we just don't get any of that good stuff out of it. Maybe there should be a fourth class of function that handles this sort of thing, I don't know. There are just so few good ways of handling it. You just hope that something else will reach out and save your day.
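The demo's setup looks roughly like this; the function and column names are mine, but the behavior (accurate estimate for a literal, generic guess for a local variable) is what the video shows:

```sql
CREATE OR ALTER FUNCTION dbo.posts_by_type
(
    @PostTypeId integer
)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
SELECT
    p.Id,
    p.Score
FROM dbo.Posts AS p
WHERE p.PostTypeId = @PostTypeId;
GO

/* Literal value: inlined as a literal, estimated from the histogram. */
SELECT f.Id, f.Score
FROM dbo.posts_by_type(2) AS f;

/* Local variable: the inlined query sees an anonymous value, so the
   estimate falls back to a generic guess, even inside the iTVF. */
DECLARE @pti integer = 2;
SELECT f.Id, f.Score
FROM dbo.posts_by_type(@pti) AS f;
```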

We've also got this function down here called no optionals, right? And this is going to give us our kitchen sink style setup. Just like with the other stuff, if we run this query, SQL Server takes that literal value and everything is fine here. Everything's all good. But as soon as we go to a declared variable, we end up with a not so hot thing going on: a very typical kitchen sink predicate. Instead of a seek, we scan that index. You can see very clearly that is an index scan right there, and that is not a good time. That is not what we wanted.

And of course, if we were to parameterize the query, and let's say we ran this first for three, not only would we get the scan, we'd also get the previous cardinality estimate. Oh, because you know what? I didn't clear out the plan cache. So let's go back in time and do this for four, which is, I guess, the cached plan for this one currently, since I did not free the plan cache for it. We get the correct cardinality estimate from this, but we still scan, right? So SQL Server can still do cardinality estimation for that.

But then, if we switch this back to three, we will retain the cardinality estimate and we will retain the scan. The estimate will not change for this, which is kind of disappointing as well. The only way to get that would be to either use a recompile hint, which is a pretty common way of getting around the kitchen sinky stuff. With that, we go back to using the seek, because SQL Server can infer this as a literal value; we get that embedded in the query plan rather than relying on parameters. The other way to get around it is to write somewhat unsafe dynamic SQL, where we concatenate this into the string. SQL Server can be a little more forgiving here, because we are still using a placeholder, a variable or parameter, that is typed as an integer, so you can't put something like DROP TABLE in there.
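A sketch of the concatenation approach being described; because the value being concatenated is typed as an integer, arbitrary text can't sneak in, though properly parameterized SQL is still the safer default:

```sql
DECLARE
    @PostTypeId integer = 3,
    @sql nvarchar(MAX);

/* Build the literal into the query text. CONVERT from an integer
   means the concatenated fragment can only ever be digits. */
SELECT @sql = N'
SELECT p.Id, p.Score
FROM dbo.Posts AS p
WHERE p.PostTypeId = '
+ CONVERT(nvarchar(11), @PostTypeId)
+ N';';

EXECUTE sys.sp_executesql @sql;
```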

But, you know, you're still not going to be great, and I'm still not crazy about unsafe types of dynamic SQL. The other way of getting around this is to embed a literal: concatenate the value into the string, and then run the query like this. So is this the end of the world? No, it's just kind of a bummer, because, like I said, you just want something other than writing a bunch of tedious wrapper stored procedures or a bunch of tedious dynamic SQL. That's a good enough set of examples of these types of problems and queries. You always hope that things like inline table valued functions, which have so much good use and application, and which can help with a wide variety of problems caused by other types of functions,

that they would just give you some respite from these other things, but they don't. This is something I do have to explain to clients a bit, which is why I'm talking about it here. But it's just kind of a sad face for me.

Anyway, thank you for watching. I hope you enjoyed yourselves, and I hope you learned something, even though it's hard to enjoy yourself when you're being disappointed. It's a little difficult to maintain enjoyment when disappointment is up here; enjoyment tends to fall off down here. But, yeah, it just doesn't work is, I guess, my point in all this. Anyway, I'm going to go hopefully figure out something less disappointing to talk about.

So, anyway, goodbye.


T-SQL Tuesday 185: Video Star Edition #tsql2sday




This month’s T-SQL Tuesday invitation:

  • You can talk about whatever you want, but it has to be a video
  • Non-video entries will not be televised
  • You don’t have to be on camera
  • You can host the video anywhere you want
  • You must link back to this post so I get a pingback to find your post
  • You must include the T-SQL Tuesday Logo

T-SQL-Tuesday-Logo

Free ways to record your content:

Happy recording!

Video Summary

In this video, I’m Erik Darling from Darling Data, and I’m excited to announce that April’s T-SQL Tuesday is happening on my channel. With Steve Jones handing over the keys, I invite you to share your thoughts on any topic related to SQL Server or database management in a video format. Video content has become increasingly important as search engines struggle with AI-generated content, making it harder for creators like us to be found. Recording videos allows for a more engaging and interactive experience that can help build a personal brand and reach a wider audience. Whether you’re new to recording or have years of experience, I provide tips on how to get started using free tools like Windows 11’s snipping tool, PowerPoint screen recording, and Streamlabs OBS. Remember, while videos are the focus, you still need to publish a blog post linking back to your video for me to include in my roundup. So, grab your camera or microphone and let’s create some amazing content together!

Full Transcript

Erik Darling here with Darling Data. This video is, well, it is one of my videos, but it's an invitation for other videos. You see, I'm hosting T-SQL Tuesday this month; Steve Jones put the keys in my hand. What I want you to do is talk about whatever you want. There's no set topic, but whatever you talk about, you have to record a video for it. Not just writing something: you are recording. There are some good reasons for that. Writing blog posts is great; I wrote blog posts for years, and I might even still write an occasional one. But I'm just in love with the video thing lately. Blog posts are great because you have your carefully organized thoughts and words, you have your pictures, you have any scripts you want to hand off for reference, and code people can copy and paste. It's fine. But the thing is, if you're trying
to build a brand, or build content that people find, and find you through, and know you for, that's getting harder and harder. Search engines now are completely bypassing content: all of their LLM agents are stealing your content and using it to answer questions without linking to or referencing your stuff, or hiding the source a million lines down from wherever they fetched it, so you don't even show up. Unless someone is specifically looking on your site for something, search engines are just making it impossible for people to find you. They're just serving the answer from their LLM, which sucks. If you want to be known for the stuff that you do, you need to produce content in a different way, one that LLMs can't just steal from you. Sure, they could probably still index video with words and transcripts and use that at some point, but right now, if you want to build personalized, good content that people are able to find and recognize you for, video is really the only way to keep doing that. You can still put all the stuff you would put into a blog post into a written blog post, along with the video. But recording, at least I find, reaches a different and often wider audience. When I was writing posts, they'd go out into the world, and oftentimes the only comments you'd get would be someone telling you there was a typo or a broken link, or that a picture was wrong, or that something else was off about the post, and you'd just have to go fix it. But once I started recording videos, well, YouTube tracks views and likes and channel subscribers, and you just get comments on stuff. It's way more of an interactive experience, at least
for me anyway. It also lets you show off your sparkling, amazing personality, your winning smile, your confidence, all that good stuff that just may not come across in typed-out written words, which can get kind of dull and repetitive. Plus, you can get creative in different ways with video. I don't do a lot of editing of my stuff, aside from having my green screen set up, but you might be out there in the world with a real knack for doing cool video stuff: transitions, explosions, lasers, robots, transitions from one scene to another. I don't get into that, because I'm this guy, but if you're the type of person who gets into that stuff, the video world is wide open to you. If you've never recorded anything before, if you're unfamiliar with the world of recording, here are a few ways you can do it for free. Windows 11 has a Snipping Tool built in, which seems to support audio input recording now. With PowerPoint, you can do an Insert, Screen Recording; like right now, I can break out of PowerPoint a little bit, go to the Insert menu up here, scroll over, do a screen recording, and just plop whatever in there. So if you're at a company where you have PowerPoint, you can do this; it's not completely out of your reach. ZoomIt, which is a free tool in the Sysinternals pack, also has screen recording built in now, at least as far as I can tell; v9 has it, and I don't know if v8 had it, but at least for the latest version, screen recording is built in. If you want to get a little fancier, you can use something like Streamlabs OBS, which is what I use, because I like to set up my camera and whatever else so I show up where I
want to show up and don't block the words on the slide; all sorts of good, thoughtful things in there. You don't need to include video of yourself for this. I'm not saying you have to be on camera, but we might need to hear that voice of yours so you can explain what's going on on the screen. So there are definitely free ways to do this that shouldn't impact you too much, and I would assume that, since we are five years into a lot of people working remotely, you should have some kind of microphone for all those Zoom meetings you may or may not go to, or Teams meetings that you hopefully don't have to go to, because, well, we don't have to talk about that here. Anyway, just a couple of rules and regulations. Your post, even though it's going to be video based, has to have this logo in it. You still have to publish a blog post that has your video in it so I can go watch it, and the only way I'm going to know to watch it is if you link back to my blog post, the one with this video in it, so I get a handy pingback in the comments. If you don't ping back my post, I'm not going to know to go find your post, so the pingback is necessary: linking back to this post is absolutely a requirement here, otherwise I'm not going to know your post exists. You might post it on social media, but if you're not tagging me in it, or you're not tagging this post in it, I don't know where to go find you. So you have to link back to this post so I get a pingback and can do my roundup. You do have to publish this on or around Tuesday, April 8th. I'm not going to be too much of a stickler about that, because I'm probably not going to get to the roundup until Thursday or Friday, or maybe even Monday, depending on how things trickle in and how much time I have to watch stuff and come up with my little commentary on everyone's entries. So just, you know, near Tuesday, April 8th would be useful.
So anyway, that's this month's T-SQL Tuesday. Happy recording, and I can't wait to see what everyone comes up with. All right, cool. Now go, go, go, go do it.


SQL Server Performance Office Hours Episode 6



We have an ERP system whose code we do not have access to. The system causes database locks. We are currently using 2019. Can you give advice on how to avoid these locks? At the least, we want to be able to read the data while the locking is happening. Thank you!
Hi Erik! I have a problem with indexing and was wondering if you could point me in the right direction to get started. There’s a relatively old database that’s been around since 2011-ish that I’ve inherited, and there are two transaction tables that are heavily over indexed (25+ indexes). It’s gotten to the point where the indexes on the tables are 200 GB (across both tables) vs 50 GB of data. There’s a lot of very specific covering indexes that are rather large. I’d like to reduce the number of indexes, but there’s so much data flying around on production it’s very hard to simulate on dev. Creating a new index can take 20 minutes. Where do I even start? Kind regards, Nick
Do you know of any issues using WAITFOR DELAY '00:00:01' in a tight loop, and perhaps having a handful of them at the same time on the same server? Never mind what happens in the loop. I got that covered.
How do I tell if I already asked my stupid question?
Columnstore maintenance on 2022, what thresholds do you use and what maintenance do you run? Niko’s blogs are ancient now.

To ask your questions, head over here.

Video Summary

In this video, I dive into some common SQL Server challenges and provide practical advice on how to address them. We tackle issues like avoiding locks in an ERP system by exploring options such as read-committed snapshot isolation or snapshot isolation at the database level. For those dealing with overly indexed tables, I offer a step-by-step approach using SP_BlitzIndex to identify unused indexes and merge overlapping ones, ensuring that any necessary index changes are carefully managed. Additionally, we discuss potential pitfalls of using `WAITFOR DELAY` in tight loops and how to mitigate CPU usage issues. Lastly, I share insights on columnstore index maintenance for SQL Server 2022, emphasizing the importance of row group size and compression efficiency over traditional fragmentation checks. Whether you’re a seasoned DBA or just starting out, this session is packed with valuable tips and tricks to help optimize your database performance.

Full Transcript

Erik Darling here with Darling Data. And it’s time for Office Hours. My favorite. Alright, if you like my channel and me and this stuff and you want to sign up to support the channel with money, you can do that for as few as $4 a month. If you don’t, I get it. You can like, you can comment, you can subscribe, and you can be nice enough to ask others to do that. If you have any questions here on Office Hours, like, what do you spend four pre-tax dollars on in New York City? That would be a good question to ask. If you would like some real help with SQL Server, so you’re not just asking anonymous questions that get answered publicly, you can pay me to consult on your SQL Server. I will do that. I will humbly, happily do that at a reasonable rate. We can do all of these things and more. That’s my job. YouTube is not. I do not pay rent with YouTube. It has not reached that level of income stream yet. At this rate, I think I would need roughly 994,000 more subscribers in order to make that a reality at the subscriber to membership signup ratio.

So, perhaps someday. If you would like to get your hands on my training content, I have all of it available at that link. And if you use that coupon code, you will get it for 75% off, meaning just about $150 US for the whole kit and caboodle. Lucky you. That lasts the rest of your life. You only need to buy it once. It’s wonderful. If you would like to see me live and in person, dressed up like a lunch lady, smoking cigarettes, flipping flapjacks, I will be at SQL Saturday, New York City, 2025 on May the 10th.

Of course, there is a performance tuning pre-con with Andreas Walter on May the 9th. Full day deal that costs money to show up to. But I’ll be there, too, organizing lunch meats and sloppy joes and American chop suey and whatever other delicacies from your youth you remember fondly from the lunchroom. But with that out of the way, let’s answer some office hours questions.

And boy, do we have some doozies in here today. All right. Let’s see. One, two, three, four, five. Okay. We have the prerequisite number. We have reached the cost threshold for office hours.

So, let’s begin. Whoa. Zoom it. It’s getting a little sloppy on me here. We have an ERP system, the code of which we do not have access to.

Very typical. The system causes locks of DB. Also quite typical. We are currently using 2019. Can you give advice on how to avoid these locks?

The least we want is to be able to read the data at the moments of locking. Thank you. Well, gosh. There are a few things you can do to make your life a little bit easier in this regard. If the locks are happening because there are long-running modification queries, you could look at adding in indexes that help those long-running modification queries run faster.

That would be less locking overall. But more likely, what you are going to want to do is… Well, you do have two options.

How far you want to pursue these options does depend on vendor supportability and other stuff. If you want all the queries to not get blocked by writes, you could turn on read committed snapshot isolation at the database level, and any read query that comes in and needs data would read versions of rows without having to worry about getting blocked.

It’s not dirty reads, of course. Optimistic isolation levels in SQL Server explicitly disallow dirty reads. You are just reading the version of the row from before that modification query did anything with it and completed.

I have lots of videos about that. If you have any questions about it, I would highly recommend the Everything You Know About Isolation Levels is Wrong playlist, which will walk through all of that.

If vendor supportability for that sort of thing is lacking, in other words, if they say, if you turn that setting on, we can’t support you anymore, what you could do is use a setting called just snapshot isolation, not RCSI read-committed snapshot isolation, just SI snapshot isolation.

The difference is that RCSI applies to every read query that enters the database that doesn’t have any more granular locking hints on it, whereas snapshot isolation only applies to queries that ask for it.

So if you have queries that you have added to the workload, let’s say, like you have some custom stored procedures that do stuff, or you have custom code that reports on stuff, you could just say, for your code only, SET TRANSACTION ISOLATION LEVEL SNAPSHOT, and yours would be the only queries using those versioned rows.

Everyone else, every other query that goes in and hits the database, would be subject to the normal rules of either read committed, the default pessimistic locking isolation level, where no row versioning is involved, or whatever locking hints the query supplies, up to and including NOLOCK.
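A rough sketch of both options in T-SQL. The database and table names here are placeholders, and the ALTER DATABASE commands are something to test and run in a maintenance window, not casually against production:

```sql
/* Option 1: RCSI. Changes the behavior of every read committed
   query in the database; no application code changes required. */
ALTER DATABASE YourERPDatabase
    SET READ_COMMITTED_SNAPSHOT ON
    WITH ROLLBACK IMMEDIATE; /* kicks out in-flight transactions */

/* Option 2: snapshot isolation. Only queries that opt in
   use row versions; everyone else behaves as before. */
ALTER DATABASE YourERPDatabase
    SET ALLOW_SNAPSHOT_ISOLATION ON;

/* Then, in your custom reporting code only: */
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

BEGIN TRANSACTION;
    SELECT
        COUNT_BIG(*) /* placeholder for your reporting query */
    FROM dbo.SomeERPTable;
COMMIT TRANSACTION;
```

Snapshot isolation only takes effect inside transactions that request it, which is what makes it the lighter-touch choice when vendor support is a concern.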

So that would be how I would go there. Now we have a long one. Oh, and we have, oh boy, the crop must not have worked there. Let’s anonymize that a little bit.

We don’t need that kind of PII spilling out in Office Hours here. Hi, Erik. Hi, whoever you are. I have a problem with indexing.

I was wondering if you could point me in the right direction to get started. There’s a relatively old database that’s been around since 2011-ish that I’ve inherited, and there are two transaction tables that are heavily over-indexed: 25-plus indexes.

It’s gotten to the point where the indexes on the tables are 200 gigs across both tables, versus 50 gigs of data. There’s a lot of very specific covering indexes that are rather large.

I’d like to reduce the number of indexes, but there’s so much data flying around on production, it’s very hard to simulate on dev. Creating a new index can take 20 minutes. Where do I even start?

Well, not on dev, because dev is not where you’re going to get any useful information for analysis. I like the sp_BlitzIndex stored procedure for a lot of reasons.

You can point it at these two tables, and you can see what the indexes are on there, and you can see what their definitions are, and you can see what their usage metrics are.

Now, where I would start is by looking for any unused indexes. If there are any of those, I would start by disabling them, not dropping them, just disabling them, because you want to make sure that any index you get rid of is still maintained in the database metadata in case you need it back in a hurry. Unless you’re very good with the scripting process, where you would create a set of change scripts and then a set of rollback scripts so that you could recreate or rebuild indexes if you find out that you actually did need one, I would just start by disabling them.
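As a sketch, assuming the free sp_BlitzIndex procedure from the First Responder Kit is installed, and with made-up database, table, and index names:

```sql
/* Inspect index definitions and usage metrics for one table */
EXEC dbo.sp_BlitzIndex
    @DatabaseName = N'YourDatabase',
    @SchemaName   = N'dbo',
    @TableName    = N'TransactionTable1';

/* Disable (don't drop) an unused index: the definition stays
   in metadata, so bringing it back is just a rebuild. */
ALTER INDEX ix_unused_covering ON dbo.TransactionTable1 DISABLE;

/* Rollback script, if it turns out you needed it after all: */
ALTER INDEX ix_unused_covering ON dbo.TransactionTable1 REBUILD;
```

A disabled nonclustered index takes up no space and is ignored by the optimizer, which is exactly the safety net described above.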

The second thing that I would do would be to look for overlapping indexes. That would be indexes where the key columns are an exact match, like two indexes both keyed on columns A, B, C. I would start with those, then look at the includes and see if the includes need to be merged together, because the order of include columns in your index definition doesn’t matter; the order of key columns does. Of course, pay attention to whether those indexes have anything special about them, like uniqueness, or a WHERE clause perhaps, and factor in what exactly you would have to do to come up with one index to replace multiple duplicative indexes.

The next thing you would look at for the duplicative indexes are ones that are sort of superset/subset indexes. Let’s say you have one index on columns A, B, C with some includes, and another index that only has key columns A, B, with no includes, or maybe the same includes, or maybe slightly different includes. Merge the includes in, keep the wider index, and get rid of the narrower index. When you’re making this first set of changes, it’s often a lot easier to keep the wider index that’s more useful to more queries than to keep a narrow index and hope that SQL Server still maybe chooses it out of the kindness of its heart.
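A hypothetical example of the merge, with invented table, column, and index names:

```sql
/* Two overlapping indexes: same key columns, different includes */
CREATE INDEX ix_a ON dbo.TransactionTable1 (ColumnA, ColumnB)
    INCLUDE (ColumnC);
CREATE INDEX ix_b ON dbo.TransactionTable1 (ColumnA, ColumnB)
    INCLUDE (ColumnD);

/* One replacement with the includes merged; include column
   order doesn't matter, key column order does */
CREATE INDEX ix_merged ON dbo.TransactionTable1 (ColumnA, ColumnB)
    INCLUDE (ColumnC, ColumnD);

/* Once ix_merged exists, the originals are redundant */
DROP INDEX ix_a ON dbo.TransactionTable1;
DROP INDEX ix_b ON dbo.TransactionTable1;
```

The same pattern applies to the superset/subset case: widen the surviving index’s includes, then retire the narrower one.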

So that’s where I would start. As far as creating a new index taking 20 minutes goes, I’m not sure where to begin helping you with that one. I would do the index cleanup before I started trying to add new stuff in.

Granted, the index cleanup can involve merging indexes, but it’s between you and your bosses to find a maintenance window for that.

That is not something I can help you negotiate. All right. Next question. Do you know of any issues with using WAITFOR DELAY '00:00:01' in a tight loop, and perhaps having a handful of them at the same time on the same server?

Hmm. Never mind what happens in the loop, I got that covered. Well, you know, what happens between loops stays between loops.

But my one time messing up with the WAITFOR DELAY thing was during the development of the now-deprecated First Responder Kit stored procedure sp_AllNightLog, where one of the facets of that stored procedure was to check for databases that needed to be backed up on one end, or databases that needed to be restored on the other end.

And I can’t remember if my initial version didn’t have a WAITFOR DELAY in there at all, or if I had a very short WAITFOR DELAY of one millisecond in there.

But basically the end result was one CPU spinning at like a hundred percent over and over and over again, while that WHILE loop just kept checking for stuff and kept looking for stuff to do.

And that apparently wasn’t great. So if you would like to avoid a handful of CPUs constantly spinning at a hundred percent, or spiking, well, I don’t know if they’re going to spin at a hundred percent for what you’re asking them to do.

They spun at a hundred percent for what I was asking them to do, just to look for work to do, so that would maybe not be great. I would perhaps look at a CPU graph on the server and see if the handful of WAITFOR DELAY while loops matches the number of CPUs that are constantly at some high level of utilization, and maybe start thinking about giving that WAITFOR DELAY a little bit more breathing room.
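A minimal shape for that kind of polling loop, with a placeholder work check (the WorkQueue table is invented) and a full one-second delay so a core isn’t pegged spinning between checks:

```sql
/* Polling loop that sleeps between checks instead of spinning */
DECLARE @work_found bit = 0;

WHILE @work_found = 0
BEGIN
    /* placeholder check for work to do */
    IF EXISTS (SELECT 1/0 FROM dbo.WorkQueue AS wq WHERE wq.processed = 0)
        SET @work_found = 1;
    ELSE
        WAITFOR DELAY '00:00:01'; /* tune to how fresh work needs to be */
END;
```

The WAITFOR puts the session into a SLEEP_TASK-style wait rather than burning CPU, which is the whole point of giving it breathing room.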

Again, you’ve got the loops covered, so I can’t give you any advice on how often you should be checking for a change based on what that loop is intended to do.

But that is what I’ve run into with it. All right. Question number four. How do I tell if I already asked my stupid question? Well, I would have already given you a stupid answer.

That’s an easy one. We got that out of the way pretty quickly. Columnstore maintenance on 2022. What thresholds do you use and what maintenance do you run? Niko’s blogs are ancient now.

I haven’t really changed much in this. Niko’s stuff is still, as far as I can tell, the best out there. The other scripts out there do not take much actual columnstore-specific stuff into account. And the Tiger Team stuff, well, I don’t think anyone actually still works on that either.

You would think maybe Niko, who went to Microsoft, would offer them some help on their index maintenance stuff in the columnstore realm. Maybe he told them stuff to do and someone else did it, but I don’t think those still get worked on, really, if at all.

I still find the columnstore-specific stuff that Niko wrote into his scripts just as applicable today. Columnstore maintenance is a lot different from rowstore maintenance.

This is not to answer your question directly. This is just for the other folks out there: regular index maintenance, which typically looks for logical fragmentation, is a big old waste of time.

If you wanted to make a case for going out and looking for physical fragmentation of rowstore indexes, it would perhaps be a little bit more germane to your arguments to look for indexes that are twice as large as they need to be because there’s a lot of empty space on data pages. But columnstore indexes have a sort of different set of issues.

And fragmentation isn’t it for columnstore indexes either. With columnstore indexes, you have to care about row group size, and whether row groups are compressed or not. You have to care about the ghost record and tombstone type things.

And you have to care about how big the delta store is. The delta store is uncompressed row groups, and if those get big enough, they can impact just how efficient your columnstore indexes are.
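Those are exactly the things the row group DMV exposes. A sketch, with a placeholder table name:

```sql
/* Row group health for a columnstore index: look for open (delta
   store) row groups, small row groups, and high deleted counts. */
SELECT
    table_name = OBJECT_NAME(rg.object_id),
    rg.row_group_id,
    rg.state_desc,       /* OPEN = delta store, COMPRESSED = good   */
    rg.total_rows,       /* ideally close to 1,048,576 per group    */
    rg.deleted_rows,     /* logically deleted, still taking space   */
    rg.trim_reason_desc  /* why a row group was closed undersized   */
FROM sys.dm_db_column_store_row_group_physical_stats AS rg
WHERE rg.object_id = OBJECT_ID(N'dbo.YourColumnstoreTable')
ORDER BY rg.row_group_id;
```

High deleted_rows counts and lots of small or OPEN row groups are the signals that a reorganize or rebuild might actually be worth doing, rather than any logical fragmentation number.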

So those are the things you need to keep an eye on there. And I don’t think anything has changed about columnstore indexes that would make the thresholds Niko was talking about in his scripts any less pertinent on SQL Server 2022

versus when he was writing them, around like 2016. I forget when he went to Microsoft and stopped existing as a blogger.

I did see a post recently of my least favorite kind of blog post, which is the “I’m going to start blogging again” blog post, which, you know, is the first post in three years.

And then the last post for another three years. At least for most people. Who knows, maybe Niko will break the spell, but I don’t really see a reason to do things any differently with columnstore, because columnstore still has the same sets of issues for the performance of columnstore indexes that you would want to look at then and now.

So, you know: deleted rows, uncompressed rows, row group sizes, like really small row groups next to some really big row groups. You would want to try to get some uniformity in there if you can.

I think that sometimes helps things get a little bit better, but there’s not really a lot more to say on that, unfortunately. I can’t really think of anything else on that.

Maybe I’ll think of something else later, but for now, that’s it. I might come back to this one if I think of something, but right now I’ve got nothing. Anyway, that is the end of these five questions for Office Hours.

If I’m looking at the queue now, I’m up to four questions after this. So as soon as someone asks one more question, I’ll be able to do another one of these.

It should be very exciting for you. Right? Incredibly exciting. Anyway, thank you for watching. I hope you enjoyed yourselves. I hope you learned something, and I will see you in, well, the next video, I hope, about something else.

Maybe we’ll figure it out when we get there. Won’t we? Anyway, thank you for watching.


Strange Query Plans With Inequality Predicates In SQL Server



Video Summary

In this video, I delve into a fascinating aspect of SQL Server query plans—inequality predicates and their impact on performance. Specifically, we explore how parameterized queries can lead to strange query plan patterns when dealing with nullable columns. By examining real-world examples and discussing the nuances of index usage, I demonstrate practical solutions such as adding redundant predicates to improve query performance without altering indexes. If you’re facing similar issues or just want to enhance your understanding of SQL Server query optimization, this video is packed with valuable insights that can help streamline your database operations.

Full Transcript

Erik Darling here with Darling Data, and in this video we are going to examine, like, I guess it’s all on the next slide, isn’t it? Strange query plans with inequality predicates. And by inequality predicates I mean not equal to, whether it’s the exclamation-point-equals (!=) or the two opposing angle brackets (<>). Because I’ve run into this, you know, I’ve reached my cost threshold for talking about it publicly, working with clients on a few things. And there are a few things about this that I think are interesting and that I hope can help you with your query tuning life, because you can’t always change the indexes, right? So we’re going to talk about this. There are some things that go along with it, like nullable columns and parameters and stuff, but there’s only so much you can put in the title before it just becomes, like, an article unto itself. But before we do that, let’s talk about you and me and my friend Moolah.

So, if you would like to support my endeavors to bring you the very highest quality, most interesting, incisive, probably the most important SQL Server information available anywhere in the world, you can visit the link in the video description below and sign up for a membership. And for as few as $4 a month, you can keep my beard nicely aligned because, you know, the razors are expensive these days. I don’t know if you’ve bought razors lately. Man, more expensive than eggs. It’s insane. If you have spent all your money on eggs and razors or if you’re over there shaving eggs or whatever it is you do with your free time and you have run out of money, you can do other things to help this channel move along in the world.

You can like, you can comment, you can subscribe. We are up over, let me actually get a current tally of things here. I’m going to look at my YouTube app here. We have about 6,200 subscribers and about 60 paying members. So, we are reaching nearly like a 1% status there. Pretty good. Pretty good. Pretty good.

If you would like to ask a question privately that I will answer publicly, there is also a link down in the video description to do so. There is a little Google form. You type in your question and then I answer it during Office Hours. I do them five at a time, which, I don’t know, just seemed like a nice number at the time.

Maybe I will change that if people are, for some reason, five-a-phobic or something. If you need help with SQL Server in a way that asking office hours questions or poking around the internet isn’t doing you much good for, I am a consultant with reasonable rates and I do all of this work with SQL Server quite effectively and quite efficiently.

And, you know, you’d have a hard time finding a better deal on performance tuning SQL Server. At least, if you’re trying to avoid just some numbskull who’s going to look at missing index requests and tell you to add them. Because that ain’t my game.

If you would like some equally reasonably priced training content, you can get all 24, 25 hours of mine at the beginner, intermediate, and expert levels. Not just advanced, but expert level. For about $150 USD, and that is good for the rest of your life.

It just keeps going as long as you keep going. So, stay healthy out there. SQL Saturday, New York City, 2025, May the 10th, Times Square, Microsoft offices. It’s going to be a hoot.

It’s going to be a real hoot. With that out of the way, let’s talk about the subject of today’s video. Now, in order to sort of show you where this gets interesting, I have created two indexes on the Posts table.

One on a column called parent ID and one on the column called owner user ID. And you’ll notice a couple little red squiggles here, which means I was kind enough to create these indexes ahead of time. And if I run this query with a couple literal values, we get a very sane and rational query plan using both of those indexes.

We have an index seek into P1 and we have an index seek into P0. And when we seek into these indexes, we very efficiently evaluate. Oh, come on, tooltip, stick with me here.

This predicate, the equality predicate on owner user ID. And we seek into this index and we find where the parent ID is greater than 0 and less than 0. Or some combination there.

Both greater than and less than 0. So, not equal to 0. And that all looks pretty good. Now, there is a missing index request here. And if you are able to create composite indexes or change indexes on your server and do all that stuff, great.

We’ll talk about that in a moment. But what gets interesting is when you take a query like that and you parameterize it. So, now I have the exact same query set up.

And this would be the same with a stored procedure. This is no different than using a stored procedure here. But when I run the query like this, where both of these parameters have the same values as before, 0 and 22656. And they are the same definition in here.

And they are used the same in here. The query plan takes on a rather strange shape. Look what happens. Now, we have all this additional stuff in our query plan.

We have some constant scans. We have some concatenation. We have some top-end sorting. We have some merge intervaling.

And then we have a nested loops join to the P0 index on the table. And, of course, the P0 index is where we are looking to do our seek on parent ID, that inequality predicate. What’s particularly, let’s say, a bit icky about this one is that this whole thing is in a serial zone.

You’ll notice that SQL Server steps out of the serial zone immediately after doing that and distributes the streams out to the rest of the parallel zone in the query plan. But this is where we spend the majority of the time. This thing runs for about two seconds total.

And we spend 1.7 seconds in this section right here, between the 1.4 seconds there and the 1.8 seconds there, give or take. Pretty close. Now, this is because SQL Server has to do some additional protections in case you ever pass in what might be a null here.

It doesn’t have to do that when you have a literal value. Part of why this has to happen, if we hover over these (and I’m going to show you what happens when you flip these in a second): both the owner user ID column, you can see that it is nullable there, and the parent ID column, you can see that it is nullable there.

And if we were to switch these around, and I’m not saying that this is the correct query, but it will best show you what I mean. Oops, I hit the insert key there. We don’t want that.

If I switch these around so owner user ID is an inequality predicate and parent ID is an equality predicate, the exact opposite will happen. All right. This will run for roundabout the same amount of time.

Well, actually, a little bit longer there, 3.8 seconds. Hoo-wee. But now the index seeks have switched places, right? Now the index seek up here for the equality predicate is on parent ID, and the index seek down here for owner user ID is where things get all weird, right?

So this is the strange part of the query plan now. But let’s focus on the original form of the query, right? So this is limited to the inequality predicate with a parameterized query with a nullable column.

Now, what you can do if you want to fix this without changing any indexes is add a sort of redundant predicate here and say AND p.ParentId IS NOT NULL. That’s supposed to be a dot; the dot didn’t come through at first. And if we add this in alongside our inequality predicate, all of a sudden SQL Server has a whole lot less to worry about.

And we get just about the same query performance that we were getting before, right? So this plan looks just about the same. We have an index seek.

We have an index seek. And this all takes just about 550 milliseconds, which ain’t bad at all. Now, I’m going to comment this out for a second. And I’m going to create the composite index that SQL Server was requesting on the Posts table.

So that’s leading on owner user ID with parent ID as a secondary key column. That’ll create in a second there. And what I’m going to do is just show you that even, like, this does help the performance generally.

But you still get the weird query plan when you don’t have the not null check on the parent ID column for the inequality predicate. So if we run this, this query will run very reasonably fast. But we still have all this weird stuff in there, right?

We still have the constant scan, the concatenation, the top N sort, the merge interval, and the index seek down here, which we can, of course, get rid of if we keep the semi-redundant predicate on parent ID not being null. And we can get a much nicer, neater execution plan when we tell SQL Server to discard any nulls that might exist in that column.
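Putting the parameterized repro and both fixes together, again assuming the Posts table and guessed index names from the demo:

```sql
DECLARE @ParentId int = 0, @OwnerUserId int = 22656;

/* Fix one: the redundant IS NOT NULL predicate removes the
   constant scan / concatenation / top N sort / merge interval
   pattern that null protection otherwise forces into the plan */
SELECT COUNT_BIG(*)
FROM dbo.Posts AS p
WHERE p.OwnerUserId = @OwnerUserId
AND   p.ParentId <> @ParentId
AND   p.ParentId IS NOT NULL;

/* Fix two: the composite index SQL Server was requesting,
   leading on the equality column */
CREATE INDEX p2 ON dbo.Posts (OwnerUserId, ParentId);
```

The two fixes are independent: the redundant predicate cleans up the plan shape even after the composite index exists, which is the point made above.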

Now, the kind of funny thing here is that in the owner user ID column and the parent ID column, actually, neither one of these actually has any nulls in it. But SQL Server does still have to protect itself because it has to create a query plan.

Because what if some nulls show up? Sure, there are no modifications right now. But what if, like, three seconds later, I insert a null value in there, and all of a sudden, SQL Server has to figure out some way to cope with that?

So if you have inequality predicates in your query plans, even if they are rather quick query plans, and you start seeing weird stuff with the query plan pattern that I showed you before, where you have the constant scan, concatenation, top N sort, merge interval thing going on there,

All it takes is the redundant predicate to weed all this stuff out. Sometimes that is just a useful thing to do to cut down on query plan weirdness, because you never know who’s going to be looking at these query plans and getting very confused by things.

They might see all that stuff happen and say, wow, I have no idea what all that is. I don’t know. I have no clue.

So it’s just a nice formal thing to do to get rid of it with a redundant predicate and say, let’s reject those nulls out of hand, and let’s just have a nice simple index seek.

So if you are having performance problems with this type of query, that’s one way to fix it. Of course, the composite index is another way to fix it. So, you know, you might want to mine that a little bit.

And of course, if you need help with this sort of thing in real life, and you just can’t figure any of this stuff out on your own, well, my rates are reasonable.

Thank you for watching. I hope you enjoyed yourselves. I hope you learned something. And I hope that we will meet again soon in the next video. All right.

Thank you for watching.


Things I Wish Inline Table Valued Functions Helped With In SQL Server


