Introducing sp_LogHunter: Quickly Search For Important Error Log Messages From SQL Server

Sick Of It All


SQL Server has incredibly verbose error logs, and sifting through them for the highlight reel can be a pain. They’re often full of messages that aren’t actionable:

  • Failed logins
  • Successful logins
  • Successful backups

Making matters worse, when you open large log files, loading all those messages can be painfully slow, and can even time out.

Then you have the nice folks who create a new log file every day and keep months of log files around.

But probably the worst thing is that you can’t really search for multiple things without losing all of the other messages.

If you’re like me, and you need to know when interesting or critical events happened, you’re going to love sp_LogHunter.

Right now, it has these parameters:

@days_back int = -7 /*How many days back you want to look in the error logs*/
@custom_message nvarchar(4000) = NULL /*If there's something you specifically want to search for*/
@custom_message_only bit = 0 /*If you only want to search for this specific thing*/
@language_id int = 1033 /*If you want to use a language other than English*/
@first_log_only bit = 0 /*If you only want to search the first log file*/

Along with the usual version, help, and debug parameters I add to my procedures.

A sample execution looks something like this:

EXEC dbo.sp_LogHunter
    @days_back = -30,
    @custom_message = N'he''s dead, jim';
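If you only want results matching your search string, and nothing from the procedure’s built-in list of interesting messages, you can flip on the @custom_message_only parameter from the list above. A sketch, with a made-up search string:

```sql
/* Only return log entries matching the search string,
   skipping the built-in list of important messages.
   The search string here is made up for illustration. */
EXEC dbo.sp_LogHunter
    @days_back = -90,
    @custom_message = N'I/O requests taking longer',
    @custom_message_only = 1;
```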

I’ll have a video walkthrough tomorrow to talk about the code and show some results in action.

If you run into any issues or would like to see any enhancements to the procedure, head over to GitHub to open an issue.

You can get sp_LogHunter and raise any issues here.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. Blog readers get 25% off the Everything Bundle — over 100 hours of performance tuning content. Need hands-on help? I offer consulting engagements from targeted investigations to ongoing retainers. Want a quick sanity check before committing to a full engagement? Schedule a call — no commitment required.

A Little About Locking And Isolation Levels In SQL Server



Video Summary

In this video, I delve into the intricate world of locks, blocking, and isolation levels in SQL Server, sharing insights gained from years of consulting work that often involves troubleshooting stored procedures for blocking and deadlocking issues. I explain how not all locking leads to blocking and discuss various isolation levels—read committed, snapshot, repeatable read, serializable—and their implications on query execution. Additionally, I cover different types of locks SQL Server can take, such as row identifier (RID), key or row level, page, extent, heap or b-tree, object, application, metadata, allocation unit, and database-level locks, providing practical examples to help you better understand these concepts in action.

Full Transcript

Erik Darling here with Darling Data. And I just slammed a shot of espresso, so I am quite jazzed to do this video. What I wanted to talk about in this video is a little bit about locks, blocking, and isolation levels in SQL Server, because I get a lot of questions about it. I suppose that’s the peril of working on stored procedures that help folks sort out blocking and deadlocking issues in SQL Server: sp_BlitzLock and sp_HumanEventsBlockViewer. sp_BlitzLock will look at deadlocks, sp_HumanEventsBlockViewer will look at blocking chain stuff. Both very useful stored procedures to me in the consulting work that I do. But the thing that we’re going to talk about off the bat is that not all locking leads to blocking. Many modification queries under various circumstances can push through without impacting any other user queries at all. With optimistic isolation levels, you can, of course, avoid a lot of the most basic reader-on-writer and writer-on-reader locking and blocking issues that SQL Server offers you by design under the pessimistic read committed isolation level that it uses by default, which was a terrible mistake.

And you can also, under the snapshot isolation level, to some extent, have modification queries not block each other. But I don’t run into a lot of people using that very happily or very successfully. All right, so let’s start at the top. There are different types of locks that can be taken. If you have a table that does not have a clustered index, and you have row identifiers in it, SQL Server can take row identifier locks. It can also take key or row level locks on clustered and nonclustered indexes.

I use key and row interchangeably because a lot of people in the database universe do that. I do want to make sure that everyone understands me when I’m talking about these things; until someone comes down from Mount Database, tablets in hand, to tell me which term to use for those types of locks, I’m going to use them both, interchangeably, from here on out. SQL Server can lock pages, it can lock extents, it can lock heaps or b-trees, it can lock objects, and it can take application locks. If you use sp_getapplock, you can sort of take a logical lock via a portion of code that disallows other portions of other code to execute until that lock is released.
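The logical-lock pattern described above looks something like this; the resource name and the procedure being serialized are made up for illustration:

```sql
/* A sketch of taking a logical application lock so only one
   session runs a code path at a time. 'nightly_etl' and the
   procedure being called are hypothetical. */
DECLARE @result integer;

EXECUTE @result = sp_getapplock
    @Resource = N'nightly_etl',
    @LockMode = N'Exclusive',
    @LockOwner = N'Session',
    @LockTimeout = 5000; /* wait up to five seconds for the lock */

IF @result >= 0 /* zero or higher means we got the lock */
BEGIN
    /* only one session at a time gets in here */
    EXECUTE dbo.the_code_you_want_serialized;

    EXECUTE sp_releaseapplock
        @Resource = N'nightly_etl',
        @LockOwner = N'Session';
END;
```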

sp_getapplock is great. I have an old video about that. I might update that in the future, I don’t know. SQL Server can also take metadata, allocation unit, and database level locks. All right, so going back up the list a little bit, I think the extent locking thing was a bit more common when TempDB had a lot of mixed extents in it.

SQL Server would lock extents, which is why people used to have to use trace flags to avoid a lot of TempDB contention stuff. You can further avoid metadata contention in TempDB by using the in-memory TempDB system tables that popped out with SQL Server 2019.

Got some improvements in 2022. As for the metadata locks that you might see otherwise: reading from TempDB system tables while there’s a lot of TempDB activity can cause blocking in there. You know, some of the views that describe tables, indexes, identity columns, things like that, those can get blocked if there are, like, big active inserts to a table or something.

Those are sort of typical metadata locks that I’ve seen around. Also, if you’re updating stats, and, like, queries and stored procedures are trying to compile, I’ve seen metadata contention around that. Allocation unit locks I don’t think I’ve seen in the wild.

I would imagine those would have something to do with files, file groups, partition switching, something along those lines. But, again, I don’t, that’s all speculative. I don’t, like I said, I’ve never actually seen allocation units get locked or cause blocking in my travels through SQL Server.

And, of course, database level locks: stuff like altering the database, ALTER DATABASE commands. You might also see metadata locks if you alter objects within the database. And then you have lock escalation.

And so for those types of locks, a lot of those locks, especially, like, the key, page, and object level locks: at the query level, you can affect those with various hints. You can instruct the storage engine how you think it should go.

But if you don’t do that, then the storage engine really decides both whether to start with the RID or key/row locks or page locks, and then escalate up to object level locks. You can instruct the query to start with one of those, but escalation is still possible. You can also, of course, use, like, TABLOCK or TABLOCKX to lock down an object entirely.
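The object-level hint I reach for most is TABLOCK on temp table inserts; a minimal sketch, with a made-up table and query, assuming compatibility level 130 and up:

```sql
/* A sketch of using TABLOCK on a temp table insert to make a
   parallel insert possible (compatibility level 130 and up).
   The table and the query are made up for illustration. */
CREATE TABLE #top_posts
(
    id integer NOT NULL,
    score integer NOT NULL
);

INSERT INTO #top_posts WITH (TABLOCK)
    (id, score)
SELECT
    p.Id,
    p.Score
FROM dbo.Posts AS p
WHERE p.Score > 100;
```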

I do that a lot with inserts into temp tables in order to influence a parallel execution plan. You do need compatibility level 130 and up to get that going, or you can use SELECT INTO, again at compatibility level 130 and up. Then you have lock modes.

So: shared, update, intent exclusive, exclusive, schema stability, schema modification, bulk update, key range. Key range is pretty specific to the repeatable read and serializable isolation levels. You can, to some extent, affect the lock modes that queries use with query hints.

I think probably the most common one you would see is instructing select queries to use UPDLOCK hints, maybe in combination with some other hints, to lock specific rows via SELECT for, like, queuing processes, other stuff like that. Then you have isolation levels.

Now, isolation levels are not decided by the storage engine at all. Isolation levels are decided by queries and database settings and stuff like that, so to some extent, connection string settings. But if left to its own devices, the storage engine, together with the query optimizer via cardinality estimation, decides locks and lock modes.

Now, depending on the intent of the query, the lock mode might be sort of unavoidable. You know, if you’re updating, deleting, inserting data, you’re going to need exclusivity at some point in those processes. And if you’re altering something about the database, you’re going to need exclusivity via, like, alter table, alter column, alter database, stuff like that.

So that stuff gets decided kind of for you, kind of depending on what the query is and what hints you provide. Also, indexes have a lot of impact on the types of locks that the storage engine will be influenced to use. Isolation levels are completely decided by you, the query writer, or you, the database administrator.

Now, you can choose read uncommitted, or NOLOCK. You can choose, well, again, the database default is read committed. You can enable read committed snapshot isolation to help your reader and writer queries not step all over each other.

You can enable snapshot isolation to do the same thing, but that does require you to request that isolation level, either at the query or at the stored procedure level, for that to happen. You can also choose repeatable read or serializable. Certain things in SQL Server will escalate to the serializable isolation level to complete.
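The two optimistic options just mentioned are database-level switches, plus one per-query request for snapshot. A sketch; the database name is a placeholder:

```sql
/* The database-level switches for the optimistic isolation
   levels. YourDatabase is a placeholder name. */

/* Readers use row versions under the default read committed level */
ALTER DATABASE YourDatabase
    SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

/* Allows queries to request the snapshot isolation level... */
ALTER DATABASE YourDatabase
    SET ALLOW_SNAPSHOT_ISOLATION ON;

/* ...which each query or stored procedure then has to ask for */
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
```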

Probably the most common one of those would be foreign keys with cascading actions. So if your foreign keys cascade updates, deletes, or anything like that, then those will be promoted to serializable to respect referential integrity and all that good stuff. You can also, if you so choose, use what’s called the READPAST hint.

READPAST is often sort of confused with the NOLOCK or read uncommitted hint, but it is different. Rather than reading in-flight changes to rows, if rows are locked in a table or index, it just says: we’re just not going to read those. We’re going to go on to the next thing.
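A minimal sketch of how READPAST tends to get used, combined with UPDLOCK, in the queue-processing pattern mentioned earlier; the dbo.work_queue table is hypothetical:

```sql
/* READPAST skips over rows other sessions have locked, and
   UPDLOCK/ROWLOCK claims the row we grab so no one else
   processes it. The dbo.work_queue table is hypothetical. */
WITH next_item AS
(
    SELECT TOP (1)
        wq.queue_id,
        wq.processed
    FROM dbo.work_queue AS wq WITH (ROWLOCK, UPDLOCK, READPAST)
    WHERE wq.processed = 0
    ORDER BY wq.queue_id
)
UPDATE next_item
    SET processed = 1
OUTPUT Inserted.queue_id;
```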

That’s also very helpful with queuing processes where you don’t want the query trying to find the next thing to work on in the queue to get stuck waiting for something else to release a lock on a row so that it can proceed. The hints in the indexing for those types of processes are really important. So the important thing to remember here is that isolation levels are decided by you, by database level settings, and by certain conventions that you use in the database.

Again, cascading foreign keys, probably the most common thing that you’ll see get escalated to the serializable isolation level. But the isolation level is largely dictated by you. Lock mode is largely dictated by the intent of the query or command that you’re sending over to SQL Server.

You can impact this a bit via various locking hints, but for the most part, again, like I said, some of these lock modes are unavoidable depending on the intent of the query, especially insert, update, delete, or the most unfortunate knuckleball in the world, the MERGE syntax. And then locks are typically decided by the storage engine and cardinality estimation from the query optimization process. But you can also, to some extent, send directives and hints to the storage engine about what type of locks you want to take.

It is, I guess, a fairly common practice to request row locks, but that can be ignored and those can be escalated up to object level locks. Page level locks are typically not ignored, but those can also escalate up to object level locks. And then, you know, everyone has a weird day.

But of course, like I said before, indexing also will have a bit to do with the types of locks that SQL Server takes, especially for modification queries. It’s very important to index appropriately for the where clause and join conditions of your modification queries. The easier it is for SQL Server to find the data that it needs to modify, typically the better choices it will make as far as which locks to use when those queries execute.

So, a little bit of fun there. Anyway, that’s about enough about that. A lot of abouts in there.

People think I’m Canadian when they hear me say about, because I don’t know why. Maybe I’ve just got a Canadian tongue. Never can tell.

I don’t have any other Canadian mannerisms, though. I can tell by my lack of politeness and… I don’t know.

I don’t want to besmirch any Canadians. My Canadian friend Michael really helped me out with sp_HumanEventsBlockViewer, which I just recorded a video about. So, I’m going to leave the Canadians alone on this one.

Unless they start any more wildfires on me. Turn New York into Mars again. Anyway. Thank you for watching. I hope you enjoyed the video.

If you did, thumbs up. It’s greatly appreciated. I like watching those numbers go up. If you like this type of content, hit the little bell button to subscribe to the channel. I’ll produce a lot of stuff.

Though I am going to be on vacation for a little bit in August. So, video production will necessarily grind to a screeching halt then, most likely. But, you know.

I’ll be back at it someday. And, of course, I hope you learned something. I hope you enjoyed yourselves. And I will see you in another video sometime. Thank you.


Why You Should Attend My Precon At PASS Data Summit This Year

Radio Famous



You’re not gonna wanna miss it! Registration is here.


Why You Should Attend PASS Data Summit This Year

Promo Video



I couldn’t have said it better myself. Registration is here.


A Little About Windowing Functions And Joins In SQL Server

Do Best


Video Summary

In this video, I delve into some common issues in query writing and indexing that can significantly impact performance. Starting off with a few personal updates, I recently got a new haircut and am experimenting with different socks—though you won’t see them in the video, I’m trying to figure out which level of subscription is best for revealing my sock choices! Moving on from the fun stuff, I address a query scenario where using `ROW_NUMBER()` in conjunction with joins can lead to performance bottlenecks. By examining how SQL Server handles this situation without indexes and then creating an appropriate index, we uncover some interesting insights into parallel execution plans and the importance of proper indexing for windowing functions. This video is inspired by a real-world issue I faced recently where a query was running extremely slow, taking up to five minutes before optimization. Through better syntax and strategic indexing, we see significant improvements in both performance and execution plan efficiency.

Full Transcript

Erik Darling here with Darling Data. And I just want to give you a couple personal updates on me. One is that I got a haircut. Feeling good about the haircut. Two is that I’m wearing a new pair of socks. You can’t see that. I haven’t quite decided which subscriber level I want to start revealing my socks at, but we’ll figure that out eventually. From a technical perspective, well, a few things. Got a rather brusque comment the other day. Someone saying that they could hear me breathing, and they didn’t like that I cleared my throat, and that I was full of myself. So, I’ve taken some steps to remedy all three of those things. One, as a fellow misophonia sufferer, I understand the… as well.

So. it, as sortwa- not because I believe in juice cleanses but just because I am trying to be less full of myself and so I am replacing myself with various cold-pressed juices that’s going fairly well I am very very much not as full of myself after a few days of that but in this video what I’d like to talk about is a common I think well it’s a mix of it’s a mix of problems all right it’s sort of a query writing problem and then when you once you write the query correctly you might even uncover an indexing problem and the problem we’re going to talk about is related to when you need to do a join that involves a row number or a windowing function that you’re filtering on you can see in my query here I have row number generating some some numbers over rows and then I’m filtering on that row number outside of the the join so if we have no indexes on the table and I’m pretty sure that I already dropped indexes but we can validate that there if we get rid of all the indexes on the table now this is the index I’m gonna create and I’ll talk to you about why of course because I care about you and I care about you learning for free what we have here when we do when we run this query I’m gonna try not to scroll down any further and give away too much about the the rewrite but we’ll stick with this for now when we run this query it’s gonna take about two and a half seconds at least it did before it might be might be a little bit longer oh I was so close I was so very close 2.553 I was almost on the nose there and I mean that’s not horrible but I’ve written I’ve engineered this query a bit to not be so horrible so that I can record a short video and not sit here for a half hour waiting for a query to run this video was inspired by the fact that I actually just fixed a query like this today it was running for four and a half five minutes so it can get kind of gnarly out there anyway so what this query does is it goes parallel which is understandable because this query has quite 
a bit of work to do when multiple CPU threads and cores working on this thing all at all at once it’s probably a good idea and SQL Server asks for an index to support this query it says an index would be very useful here I want an index on post type ID and score and it wants us to include owner user ID which is I guess like a an okay junior DBA index it doesn’t take into account several things I mean well it is focused on on the where clause that we have inside of the join where we have an equality predicate on post type ID and an inequality predicate range predicate on score it doesn’t really take into account the fact that we need to partition by owner user ID order by score here order by score here that we need to join on owner user ID here so well it would help the where clause wouldn’t be very useful for any other part of the query this is another another big reason why over the years I’ve grown to really just trust missing indexes don’t think I don’t think they’re often worth prescribing to people outside of rather narrow circumstances where there’s sort of just blood everywhere and you need to you need to like there’s just no nonclustered indexes on any tables and you need a starting point and you can you can tune queries more individually from from there on out but uh so that’s not it’s not a very good missing index request but it will at least get you somewhere right wouldn’t wouldn’t get you all you know across the finish line but it would get you at least it would at least get your sneakers on so you could you could make you could make the run a better way of writing this query is usually to use uh the apply syntax in this case we have a left join up top so we’re going to use outer apply here to make sure that our query is semantically correct which is a I hear it’s important people want accurate results at least sometimes mountain no lock hence I see makes me wonder why people are like oh well I don’t think the results are right like well were they ever 
good question but I’m not going to run this as is because if we run this as is it’s going to take a long time and the reason it’s going to take a long time is well if you’ve been paying attention to my youtube channel or my blog for any uh any period of time you may have seen me post or talk about eager index spools and we’re going to hit one of those here the reason why we’re going to hit one of those here is because when we use the outer apply syntax sql server is going to choose a nested loops join to uh execute this query uh where is above we used a hash join and nested loops join because we take a row and we go do some work well if we don’t have a good index we’re going to end up scanning that post table a whole lot so sql server I want you to pay close attention up here because there is no green text along here like there was in the query above saying hey we could use an index sql server just creates one for you and if we look at what happens we talk about this a little bit now I know I have another video on eager index pools somewhere somewhere along my channel but you know I’m just going to talk through it a little bit because I don’t want to make you go jumping around from video to video searching for things I don’t know it doesn’t seem like fun to me so uh there there are a few fundamental issues that I have with eager index pools one is that in the context of a parallel plan they are absolute liars uh they are built on a single thread you can read from them uh multi-threaded but uh reading all of the data out of the post table will happen on a single thread that’s like a 17 million row table building a 17 million row index on the fly is not fun especially because uh spools in general which are built over in temp db don’t really have any of the optimizations that creating temp table selecting data into inserting data into creating indexes in temp db have uh they are uh built a row at a time which is also another rather unfortunate scenario uh for the 
eager index pool um if you find that a weight on your server called exec sync e-x-e-c-s-y-n-c all one word is pretty high you should start going through query plans looking for large eager index pools small ones probably won’t make that big of a difference if the table is like you know 10 000 100 000 500 000 rows you probably won’t have too much of an issue building uh an eager index pool in a plan and i mean if you have to build it multiple times that’s a different story but um usually it’s when tables tend to get on the larger side i think 17 million rows qualifies at least in this case for being on the larger side uh the bigger the table is the more painful building the spool gets um and of course sql server doesn’t offer you a missing index request to say hey i’d like to stop building this index every time the query runs i’m just going to build it every time it runs and i’m going to make fun of you behind your back so this is how uh using better syntax can uh sort of uh unveil indexing issues uh in in the uh in the in the database and uh if we hover over the eager index pool we can of course see the definition of the eager index pool that’s equal the the definition of the index rather the sql server creates uh the output list is irrelevant in this case because the score column is represented here already so we don’t need to care too much about that uh what we do need to do is take into uh mentally take into account that we are seeking on three columns owner user id post type id and score all right so for most eager index pools you want to take the seek predicates and create an index with a key on the seek predicate columns and then if there are any columns that are in the output list that are not already represented in the key of the index and we’d want to add those to the included columns cool we got that we’re good there so again i’m not going to run that because it would take a while to run but i want to i want you to keep in mind that when we run this query 
i’m just going to take again about two and a half seconds 2.6 this time shocking uh when we run this query we get a fully parallel execution plan we scan the clustered index and we filter to where the row number equals one here then we come out here and we sort our data here we also have another sort inside the inner part of the nested loops join and because we don’t have an index that helps us with the uh with the row number function again that’s owner user id partition by and score descending ordered by uh we have to we have to sort that data for all of the results that come out of the post table here right that that that sort is going to ask for 490 megs of memory to to run and do that so let’s create our index and let’s see how the plans for these change now the index i’m creating leads with owner user id and it’s going to lead with owner user id because i’m tailoring this index to the apply syntax if i were trying to tailor this syntax this index to the left join syntax i would probably put post type id score first and then owner user id because we can’t do the filtering for this query until we hit the join with the apply syntax and we when we get the nested loops join we can get uh the apply nested loops version of a nested loops join and push the owner user id column into the inner side of the nested loops where it acts as a predicate which i’ll of course show you when we get there eventually so let’s run these two queries now and we’re going to see how the execution plans for these change now i know we only looked at the estimated plan for the apply syntax before but again i don’t i didn’t want to sit there for you know a couple minutes waiting for that to run first thing i want you to notice is that sql server uh asks for a different index on the well actually that’s for the same index on the post table uh because it still wants the filtering in here first because it has to join on this later but again you know owner user id is a join column i don’t 
understand quite understand why we wouldn’t want that to be um we wouldn’t want that to be uh in the key of the index so this plan is single threaded now uh and this is what happens a lot when you’re indexing for queries especially that use windowing functions in the real world is something that i see happen all the time you create a better index and even though this query is has a cost of sorry an estimated cost of 108.731 query bucks a sql server does not choose a parallel execution plan for it perhaps sql server is catching on to the fact that parallel merge joins were a mistake and because it chooses a merge join here we don’t need we would we would need to uh we we sorry we run the risk of uh getting all sorts of uh intra-parallel query uh uh thread dependencies and deadlocks because of potential ordering but um one thing that is kind of interesting about this query plan is most of so uh most of the time um you know you like the merge join of course expects sorted input and since we have owner user id sorted we can do the merge join without a uh without without a sort operator in the plan and double that with the fact that you know uh the row number function is kind of putting owner user keeping owner user id in order we don’t need to sort the data any any further to support the merge join it’s a little interesting that uh sql server did not join a did not choose a hash join here or it did not and that it did not choose a nested loops join here considering we have a pretty prime index to support a nested loops join and we would have uh at least a pretty good scenario for a merge join we could if this plan went parallel we used a bitmap we could probably do a little bit more work there or a little bit less work there especially because we end up you know it’s a merge join we have to scan this whole index anyway all right we have to do an index scan of the nonclustered index i’m going to frame that up because i’m kind of standing in the way of it all right and 
this is why sql server wants the missing index request on these two columns again here because it has to do this filtering first and then way over at the merge join is when we can finally evaluate if all right a little bit of screen lag there’s where we can finally evaluate if owner user id equals the id column in the users table so the the second query the outer apply query is in a slightly different shape uh rather than taking 4.2 seconds this takes about a little under 500 milliseconds you zoom in on that hello screen lag 483 milliseconds this does get a parallel plan now i could have nerfed this to get a max stop one plan or a single threaded plan but i chose not to because i want you to see what i encounter a lot of the times in query tuning and that is that with a like you know better indexing in place oftentimes the the query pattern above that i showed you with the left join to the the select from the post table then joining to the user’s table outside and also filtering to the row number function outside uh is slower right a lot of the times because sql server will choose a serial execution plan for that now part of the beauty of using the apply is that we don’t do one big scan of the post table we take a pretty small set of rows from the users table right we take about 13 000 rows from the users table we go into our nested loops join and for each iteration of the nested loops join we seek into uh the index that we created on the post table which you know that’s not bad for the amount of rows that we have to go and do go and like find over there and go do stuff with uh we generate our window function and of course over here we filter out our window function at the end which is good enough i think bring the query from you know uh the the original form with no indexes to support it to uh the the form where we do have an index to support the query and then uh writing the correct syntax we made it we made enough of a dent in this query i think to to say hey i 
I think we’re in good enough shape here; there’s not really a whole lot else you could do. One more reason why, over time, I’ve grown to distrust the misinformation that is missing index requests: for this query, SQL Server is saying we need an index on the Users table on Reputation and DisplayName. I guess we could shave 76 milliseconds off this thing, but I don’t think that’s really what I would call a victory. I would never go to someone who hired me and say, “Hey, we’re down 76 milliseconds, here’s the invoice.” Doesn’t seem like a good plan to me.

To recap what we talked about here: when you’re writing queries that use a windowing function to find the top row per group — where you’re filtering to the row number and you need it to be fast — the better prescription is generally to use the APPLY syntax to encourage the optimizer to use a nested loops join, along with a decent supporting index. In this case, the join columns, the filtering columns, and of course the windowing function columns are all very important things to consider. If I were working on this for a long time, I might also experiment with writing the query as a TOP (1), or maybe a MAX, or even a big aggregate that gets batch mode; it all depends a little on the data, the indexes, and whether I’m even able to create indexes in the first place.

One pretty big deal that I find: if you don’t have an index that perfectly supports the windowing function, you end up — like we saw originally — with a sort on the inner side of the nested loops join to put the data in order for the windowing function. One big sort like that is typically a whole lot more painful than doing one small sort per OwnerUserId, because then you only need to sort the rows found for each OwnerUserId, rather than sorting everything from the table that survived your predicates.

Thanks for watching!
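The rewrite described in the recap might look something like this sketch, using the Stack Overflow demo schema (dbo.Users, dbo.Posts) discussed in the video; the specific columns here are stand-ins, not the exact demo query:

```sql
/* top one post per user via a windowing function:
   filtering on the row number often gets one big sort */
SELECT
    u.DisplayName,
    p.Score
FROM dbo.Users AS u
JOIN
(
    SELECT
        p.OwnerUserId,
        p.Score,
        n = ROW_NUMBER() OVER
            (
                PARTITION BY p.OwnerUserId
                ORDER BY p.Score DESC
            )
    FROM dbo.Posts AS p
) AS p
  ON p.OwnerUserId = u.Id
WHERE p.n = 1;

/* the APPLY rewrite: encourages a nested loops join,
   with one small TOP (1) per user on the inner side */
SELECT
    u.DisplayName,
    p.Score
FROM dbo.Users AS u
CROSS APPLY
(
    SELECT TOP (1)
        p.Score
    FROM dbo.Posts AS p
    WHERE p.OwnerUserId = u.Id
    ORDER BY p.Score DESC
) AS p;
```

With a supporting index on Posts (OwnerUserId, Score DESC), the APPLY version can do an ordered seek per user instead of sorting at all.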


T-SQL Tuesday: 164, The Feelings Roundup #tsqltuesday

[image: T-SQL Tuesday logo]

Headline News


For this most recent T-SQL Tuesday, I challenged bloggers (using the term “challenge” loosely here) to think of the last time code they saw made them feel a feeling.

I wasn’t terribly specific about what kind of feelings were in play, and so I kind of expected some negative ones to creep in. But most of the results were overwhelmingly positive.

This challenge made me realize that code, like people, comes in all shapes and sizes. And that code, like music, has quite a wide audience. Some folks get down with the Bieber, and others need a full symphony to locate their jollies.

I’m not judging here, just making a casual observation. Just don’t wear a t-shirt of the language to the conference, and we’re still cool.

Anyway, on to the roundup!

Comment Section


I’m curating these from the comment section of my post, in order. Here at Darling Data, we strive for fairness in all things.

Extended Viewing


If you wrote a post that didn’t ping back to me, or you didn’t leave a comment with the link, please let me know so I can add it here.

Thanks for reading!


Introducing sp_HealthParser: Digging Deep Into The System Health Extended Event Session

Boredom and XML


Get the code here!

I recently found myself in the midst of a client issue that led me to need a bunch of corroborating data from the system health extended event session.

There’s hardly any good documentation on it, and even fewer well-written queries or resources for parsing data out of it.

So now I’m making it easy for you, because I care about you more than Microsoft does.

If you need further proof of that, just look at the Query Store or Extended Events GUI.

Now look at sp_QuickieStore and sp_HumanEvents.

Who loves you, baby? I do.

Activated Development


Since this is currently in beta, it’s missing a lot of the bells and whistles that my other stored procedures have.

Right now, it just pulls all of the useful performance data out that I can get at:

  • Queries with significant waits
  • Top waits by count
  • Top waits by duration
  • Potential IO issues
  • CPU usage details
  • Memory usage details
  • Critical system health issues
  • CPU intensive queries
  • An incredibly nerfed blocked process report
  • Query plans for blocked queries

I know that there’s gobs of data around errors and security and all that jazz, but that stuff is often irrelevant to what I’m trying to coax out of a SQL Server.
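Everything in that list starts from the same place: the system_health session’s target data. A minimal sketch of grabbing the ring buffer XML to parse — this is the general Extended Events pattern, not necessarily the procedure’s exact query:

```sql
/* pull the system_health session's ring buffer XML for parsing */
SELECT
    target_data =
        CONVERT(xml, t.target_data)
FROM sys.dm_xe_sessions AS s
JOIN sys.dm_xe_session_targets AS t
  ON t.event_session_address = s.address
WHERE s.name = N'system_health'
AND   t.target_name = N'ring_buffer';
```

From there, it’s XQuery all the way down to carve the individual events out of that XML.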

In the future, I’ll be doing what I can to make sure I’m pulling all of the performance-related event data that I can, and trying to add some analysis and additional filtering to each section.

If you have any feedback, please open issues on GitHub.

Get the code here!


sp_HumanEventsBlockViewer: Update Roundup!

Busy Bee


I’ve been spending a lot of time lately working on my free scripts. And, I know, I’ve been recording videos about them, but some folks out there like to read things.

In this post, I’m going to talk about a couple cool changes to sp_HumanEventsBlockViewer, a procedure I wrote to analyze the blocked process report via Extended Events, and wish I had given a snazzier name to.

You see, when I wrote it, I pictured it as a utility script for sp_HumanEvents, which will set up the blocked process report and an extended event. But it turns out I use it a lot more on its own.

Go figure.

Well, you live and you learn. Sometimes.

Plans!


It used to be that there was a column in the output with SQL handles pulled from the blocked process report, and you could take those SQL handles and run Your Favorite Plan Cache Script to (maybe) find the plans for those queries.

That’s clunky, as a wise man once said. Now, there’s an additional result set with all of the available cached plans related to the blocked/blocking queries.

It will look something like this, and there are many related query execution metrics also returned. It just doesn’t make a good screenshot to capture them all.
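For reference, finding a cached plan from a sql_handle looks roughly like this — a generic sketch against the plan cache DMVs, not the procedure’s own query, and the handle value is a placeholder:

```sql
/* placeholder handle; in practice this comes from the blocked process report */
DECLARE @sql_handle varbinary(64);

SELECT
    query_text = dest.text,
    query_plan = deqp.query_plan,
    deqs.execution_count,
    deqs.total_worker_time,
    deqs.total_elapsed_time
FROM sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
CROSS APPLY sys.dm_exec_query_plan(deqs.plan_handle) AS deqp
WHERE deqs.sql_handle = @sql_handle;
```

The catch, as always with the plan cache: if the plan aged out or got evicted, there’s nothing to find, which is why the results are “available cached plans” and not “all plans.”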

[screenshot: metrical system]

Priorities!


The findings results section used to only be sorted by the check ID. Through the magic of window functions and aggregates, I’m now also sorting the results by which database/objects/whatever had the highest amount of blocking.

It should look something like this:

[screenshot: get to it]

The code that handles this is pretty cool, and it’s not something I’ve seen many people do. It’s an aggregate inside of a windowing function that does something like this:

SELECT TOP (10)
    p.OwnerUserId,
    n = ROW_NUMBER() OVER
        (
            ORDER BY 
                COUNT_BIG(*) DESC
        )
FROM dbo.Posts AS p
GROUP BY
    p.OwnerUserId;

In a normal query, you could just sort by COUNT_BIG(*) DESC to order results, but when you’re putting results into a table to return later, the sorting won’t be preserved.

Assigning a row number to the aggregates means I can sort by check number, and then the row number within each check, to put the worst offenders up at the top.
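Put together, the pattern looks something like this sketch, reusing the grouped query above with a hypothetical findings table — the column names are illustrative, not the procedure’s actual schema:

```sql
CREATE TABLE #findings
(
    check_id integer,
    OwnerUserId integer,
    sort_order bigint
);

/* the window function bakes the COUNT_BIG(*) ordering into a column,
   so the ranking survives the trip through the table */
INSERT #findings
    (check_id, OwnerUserId, sort_order)
SELECT
    check_id = 1,
    p.OwnerUserId,
    sort_order = ROW_NUMBER() OVER
        (
            ORDER BY
                COUNT_BIG(*) DESC
        )
FROM dbo.Posts AS p
GROUP BY
    p.OwnerUserId;

/* later: check first, worst offenders within each check */
SELECT f.*
FROM #findings AS f
ORDER BY
    f.check_id,
    f.sort_order;

DROP TABLE #findings;
```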

Thanks for reading!


sp_PressureDetector: Update Roundup

Busy Bee


I’ve been spending a lot of time lately working on my free scripts. And, I know, I’ve been recording videos about them, but some folks out there like to read things.

In this post, I’ll be talking about some additions and changes to sp_PressureDetector, my script to quickly detect server bottlenecks like CPU, memory, disk, locking, and more. Well, maybe not more. I think that’s all of them.

Disk Metrics


I added high-level disk metrics, similar to what’s available in other popular scripts, to mine. Why? Sometimes it’s worth looking at, to prove you should add more memory to a server so you’re less reliant on disk.

Especially in the cloud, where everything is an absolute hellscape of garbage performance that’s really expensive.

By default, I’ll show you results where either read or write latency is over 100ms, but you can change that with the following parameter:

EXEC dbo.sp_PressureDetector
    @minimum_disk_latency_ms = 5;
[screenshot: diskenstein]

Results may vary. Mine look like this.
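The underlying math comes from sys.dm_io_virtual_file_stats, where stall totals divided by I/O counts give you average latency per file. A rough sketch of the latency filter — not the procedure’s exact query:

```sql
DECLARE @minimum_disk_latency_ms integer = 100;

SELECT
    database_name = DB_NAME(vfs.database_id),
    physical_name = mf.physical_name,
    avg_read_latency_ms =
        vfs.io_stall_read_ms / NULLIF(vfs.num_of_reads, 0),
    avg_write_latency_ms =
        vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0)
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON  mf.database_id = vfs.database_id
  AND mf.file_id = vfs.file_id
WHERE vfs.io_stall_read_ms / NULLIF(vfs.num_of_reads, 0) >= @minimum_disk_latency_ms
OR    vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) >= @minimum_disk_latency_ms;
```

Keep in mind these numbers are cumulative since startup, so a long-ago I/O storm can haunt the averages.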

CPU Time


This only works for SQL Server Enterprise Edition right now, because it uses a DMV related to Resource Governor.

In the wait stats output, you’ll see how many hours of CPU time queries have consumed since server startup. I know, someone could clear out the Resource Governor stuff, but I’m willing to embrace that as an incredible rarity.

[screenshot: yay!]

I’m also aware of the fact that I could get similar information from sys.dm_os_schedulers, but that’s only available in SQL Server 2016+, and I sometimes have to support older versions.

On the fence a bit about doing some checks, but right now it’s like…

  • Are we on Enterprise Edition? Use the Resource Governor thing
  • Are we on Standard Edition? Is it 2016 or better? Use the other thing
  • If not, then what?
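A hedged sketch of how that branching might look — the column choices here (total_cpu_usage_ms in both DMVs) are my assumption of what gets read, not a quote from the procedure:

```sql
/* which CPU time source can we use on this server? */
DECLARE
    @edition sysname =
        CONVERT(sysname, SERVERPROPERTY('Edition')),
    @version integer =
        CONVERT(integer, SERVERPROPERTY('ProductMajorVersion'));

IF @edition LIKE N'%Enterprise%'
BEGIN
    /* Resource Governor tracks CPU per workload group since startup */
    SELECT
        cpu_hours =
            SUM(wg.total_cpu_usage_ms) / 1000. / 60. / 60.
    FROM sys.dm_resource_governor_workload_groups AS wg;
END;
ELSE IF @version >= 13 /* SQL Server 2016 */
BEGIN
    /* schedulers also expose total CPU used, starting with 2016 */
    SELECT
        cpu_hours =
            SUM(s.total_cpu_usage_ms) / 1000. / 60. / 60.
    FROM sys.dm_os_schedulers AS s
    WHERE s.status = N'VISIBLE ONLINE';
END;
```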

I wrote a similar bit of code into sp_BlitzFirst, and the fallback is to sum all the CPU time from queries in the plan cache, but that’s awfully iffy. In most plan caches I see, all the plans are less than 24 hours old.

If I figure something else out, I’ll work on it, but for now I’m sticking with this.

New Columns


Down in the CPU details section, there are some new columns that detail things like this:

[screenshot: torso]

These are useful, especially during THREADPOOL demos, ha ha ha.

Thanks for reading!


sp_QuickieStore: Update Roundup

Busy Bee


I’ve been spending a lot of time lately working on my free scripts. And, I know, I’ve been recording videos about them, but some folks out there like to read things.

Why haven’t I been writing lately? I haven’t felt like it. I’ve been enjoying getting my video recording setup worked out, even though one anonymous user hates that I clear my throat sometimes.

I can assure you, anonymous user, it would be far more unpleasant to listen to me talk with a bunch of allergy in my face. ’Tis the season, and all that.

Anyway, the next few posts are going to detail what I’ve been working on. This one is about sp_QuickieStore, which is my stored procedure to get and search through Query Store data.

All Of ’Em


The first thing on the list that I want to talk about is the ability to cycle through all databases that have Query Store enabled.

If you have a lot of databases with it turned on, it can be a real hassle to go through them all looking for doodads to diddle.

Now you can just do this:

EXEC sp_QuickieStore 
    @get_all_databases = 1;

And through the magic of cursors, it’ll get all your worst queries in one go.
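Under the hood, the loop is conceptually something like this — a minimal sketch, not the procedure’s actual code, assuming sys.databases’ is_query_store_on flag is the gatekeeper:

```sql
/* run sp_QuickieStore once per database with Query Store enabled */
DECLARE @database_name sysname;

DECLARE database_cursor CURSOR
    LOCAL FAST_FORWARD
FOR
SELECT d.name
FROM sys.databases AS d
WHERE d.is_query_store_on = 1;

OPEN database_cursor;
FETCH NEXT FROM database_cursor INTO @database_name;

WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC dbo.sp_QuickieStore
        @database_name = @database_name;

    FETCH NEXT FROM database_cursor INTO @database_name;
END;

CLOSE database_cursor;
DEALLOCATE database_cursor;
```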

AdProc


The next thing is the ability to filter to a specific type of query, either ad hoc, or owned by a module.

Why? Well, sometimes I work on vendor systems where the user queries are submitted via ORM-type things, and more complicated background/overnight tasks are owned by stored procedures.

I also work on some systems where folks write stored procedures to touch vendor tables, and they want to focus on those because they can’t touch the vendor code.

For that, we can do this:

/*ad hoc*/
EXEC sp_QuickieStore 
    @query_type = 'a';

/*module*/
EXEC sp_QuickieStore 
    @query_type = 'literally any other letter in the alphabet';

I know this looks silly, but there’s no great way to differentiate what kind of module owns the code for non-ad hoc queries. View? Function? Procedure? Whatever.

If you care about only ad hoc queries, put an ‘a’ in there. If you care about code owned by modules, put anything else in there. That’s all it’s checking for, anyway.
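For illustration, the distinction likely comes down to the object_id column in sys.query_store_query, where ad hoc queries show up as 0. A sketch of that filter — not the procedure’s exact code:

```sql
DECLARE @query_type varchar(10) = 'a';

SELECT
    qsq.query_id,
    qsq.object_id,
    module_name = OBJECT_NAME(qsq.object_id)
FROM sys.query_store_query AS qsq
WHERE
(
       (@query_type = 'a' AND qsq.object_id = 0) /* ad hoc only */
    OR (@query_type <> 'a' AND qsq.object_id <> 0) /* module-owned only */
);
```

Which is also why Query Store can’t tell you whether a module is a procedure, a function, or something else without a trip to the object catalog views.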

Time Light Zone


This was a tough one to do, and it’s something that not even the Query Store GUI does correctly when searching through data.

/*time zone*/
EXEC sp_QuickieStore 
    @timezone = 'Eastern Standard Time';

/*time zone*/
EXEC sp_QuickieStore 
    @timezone = 'Eastern Standard Time',
    @start_date = '20230707 09:00 -04:00';

The first command will show you first and last execution times in whatever valid time zone you choose, which can be used to override the default behavior of displaying them in your server’s local time.

That’s cool and all, but now when you search through Query Store data based on start or end dates, I’ll convert your search to UTC time, which Query Store stores data in.

In the background, I find the difference in minutes between your local time and UTC, and manipulate your start and end dates to match.
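Conceptually, the conversion works like this — a sketch, with the offset math assumed rather than lifted from the procedure:

```sql
DECLARE @timezone sysname = N'Eastern Standard Time';

/* the requested time zone's current offset from UTC, in minutes */
DECLARE @utc_offset_minutes integer =
    DATEPART
    (
        TZOFFSET,
        SYSDATETIMEOFFSET() AT TIME ZONE @timezone
    );

/* shift a local search date into UTC, which is how Query Store stores dates */
DECLARE @start_date_local datetime2(7) = '20230707 09:00';
DECLARE @start_date_utc datetime2(7) =
    DATEADD(MINUTE, -@utc_offset_minutes, @start_date_local);

SELECT
    start_date_local = @start_date_local,
    start_date_utc = @start_date_utc;
```

A nice side effect of AT TIME ZONE is that it handles daylight saving rules for you, so “Eastern Standard Time” does the right thing in July.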

Thanks for reading!
