Erik and Kendra Rate SQL Performance Tuning Techniques

Wanna catch us at PASS Data Summit this November? Get your tickets here!

Thanks for watching!


Updated First Responder Kit: A Remembrance Of Scripts Past

In The Beginning


Or at least at some point, back when I gave all my precious blog blood somewhere else, I used to quite enjoy writing the release notes for the First Responder Kit. It was fun, and there were a lot of contributors to credit.

This most recent release had a note in it that got me taking a stroll down memory lane.

Deprecating sp_BlitzInMemoryOLTP, sp_BlitzQueryStore, and sp_AllNightLog

sp_BlitzQueryStore was originally written by Erik Darling when he worked here. He’s moved on to start his own excellent company, plus his own sp_QuickieStore. You should be using that instead.

sp_BlitzInMemoryOLTP was always kinda distributed as a courtesy – the real home for it is in KTaranov’s Github repository, and you can still find it there. It hasn’t been updated in over 6 years, and I’ve never seen anyone using it, so I’m removing it to streamline support issues.

sp_AllNightLog was a ton of fun when we built it several years ago, but it’s consistently had a problem. Companies start using it, then decide they want to build something even more ambitious, typically a C# service with robust error handling and scheduling. sp_AllNightLog isn’t the kind of thing I want to encourage beginners to use – it’s complex.

I didn’t have much to do with sp_BlitzInMemoryOLTP. I’m still not entirely sure what it does. All I know is that In-Memory was the hottest frog in the pan for exactly 14.9 minutes.

But I have some quite fond memories of building sp_BlitzQueryStore and sp_AllNightLog.

I didn’t write every bit of code in either one, but I definitely started work on them, and then pitched in quite a bit on both. Other contributors deserve whatever credit they’re publicly willing to take.

If you’ll permit a fella with a lot more grey on his face than there was when these two procedures were first F5 birthed into the world to reminisce a bit, I would like to eulogize them here.

sp_BlitzQueryStore


I had been working on sp_BlitzCache just about full time for a couple years, since Jeremiah (who wrote it originally) had decided to embark on a professorial career. When Query Store got announced, I knew I wanted to write something for it.

After all, this seemed like a no-brainer for folks on SQL Server 2016 to adopt. I just had no concept of what I wanted to do, until one day…

I had just gotten off the phone with the worst credit card company in the world, because someone had purchased ONE-HUNDRED $99 PlayStation gift cards from a Russian IP address with a .ru email, and they told me that I would need to fill out 100 dispute PDFs, one for each charge.

I forget where I was walking home from, but I was in the company Slack, and I had a message from BrentO asking how I wanted to approach it, and I felt like I had to make something good up on the spot. It turned out to be: I want to find all of the worst metric spikes, and grab the queries that ran during them. So it would look for the highest CPU, reads, writes, memory, tempdb, etc. and find the queries responsible for them. And since we had all this groovy historical data, I wanted to show which queries suffered from parameter sensitivity by looking for wild swings in those metrics.
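If you’ve never poked at those DMVs, the gist looked something like this. This is a rough sketch of the concept, not the procedure’s actual code:

SELECT
    qsq.query_id,
    executions = SUM(qsrs.count_executions),
    min_cpu_ms = MIN(qsrs.min_cpu_time) / 1000.,
    max_cpu_ms = MAX(qsrs.max_cpu_time) / 1000.
FROM sys.query_store_runtime_stats AS qsrs
JOIN sys.query_store_plan AS qsp
    ON qsp.plan_id = qsrs.plan_id
JOIN sys.query_store_query AS qsq
    ON qsq.query_id = qsp.query_id
GROUP BY
    qsq.query_id
HAVING MIN(qsrs.min_cpu_time) > 0
/* a "wild swing": the worst run cost ~100x the best run */
AND    MAX(qsrs.max_cpu_time) >= MIN(qsrs.min_cpu_time) * 100;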

In theory, this was a great idea. In practice, those queries were god-awful slow. It wasn’t all my fault, of course; I can’t take full credit. I see a lot of scripts (including queries from the SSMS GUI) that hit Query Store and are equally god-awful slow.

Perhaps ironically, some of the absolute slowest points in any Query Store query are the ones that hit the “in memory” tables.

[sql server query plan: that is seven minutes, thank you for asking]

At any rate, SQL Server 2016 adoption was fairly slow, and Query Store adoption was even slower. It was even hard to recommend turning it on at first, because of all the bugs and issues that were cropping up and getting fixed in CUs (and, back then, Service Packs). SQL Server 2017 didn’t help things at all, and I was out on my own in the world by the time SQL Server 2019 got released.

So poor ol’ sp_BlitzQueryStore languished a bit. Of course, as I added checks and gizmos to sp_BlitzCache, I’d also add them to sp_BlitzQueryStore, but… it just wasn’t the same everyday utility belt tool for me.

I sort of lost faith in the whole thing sometime in 2018, when I tweaked a query in sp_BlitzQueryStore to try to speed things up, and it made my local SQL instance stack dump, so I had to manually restart it. If that had happened with a client, hoo boy!

But here’s to you, sp_BlitzQueryStore! It’s how I first started learning the Query Store DMVs, how they related, and what data was in them.

You’re like that long-term relationship that ends before you meet the person you end up marrying.

sp_AllNightLog


This made me feel cool, because:

  1. I’d always loved Log Shipping (still hate AGs)
  2. It was my first “programming” stored procedure

Let me clarify point 2 a bit, because I’m not one of those “SQL isn’t coding” people. Most things that I write only work with data. This worked with REAL LIVE FILES. Writing them. Copying them. Restoring them. Across two different servers. Where neither one knew the other existed.

Wild. WILD!

I’m not sure if I’m allowed to say the name of the company that wanted it, but they were based in Chicago, so I ended up flying out there a couple times to work on it along with Brent.

That kind of stuff had never happened to me before, and has only happened a couple times since.

I learned some hard lessons from this one:

  1. If you fat-finger an invalid date value in an output parameter, you’ll end up with a REALLY HARD BUG TO FIND
  2. If you don’t add a WAITFOR to looping code that’s constantly looking for new databases, new backups to take, and new backups to restore, you can really drive one CPU crazy
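That second lesson is cheap to fix. Here’s a minimal sketch of the polling pattern, not sp_AllNightLog’s actual code:

WHILE 1 = 1
BEGIN
    /* check for new databases, backups to take, and backups to restore here */

    /* without this, the loop spins and pegs one scheduler */
    WAITFOR DELAY '00:00:01';
END;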

It was also when I learned that you can’t add triggers to “system” tables, like restorehistory, in the dbo schema, in msdb. If we could have done that, a few things would have been way easier.

Of course, my fondest memory of this one was when it finally worked. I remember hitting a bazillion errors and issues and debugging stuff for ages. And then one magical day, the Agent jobs started up, and it was all green.

I was far from alone in working on it; I don’t want it to sound like THIS ONE’S ALL ME. There was a group of 3-4 people who put in work writing and testing things.

That was kind of the nicest thing about it — real collaboration with real people in real life — not just pushing changes around the internet and asking people to test them.

The Departed


While it is a bit sad to see them go, I totally understand why they had to. It’s difficult to be responsible for large code repos that you don’t use regularly, and have become unfamiliar with.

I’m a bit surprised that sp_BlitzBackups didn’t also end up in the deprecation pile. It hasn’t had an issue opened since 2019, or any commits aside from version bumps. But maybe it’s just that well-written! If that ever does get deprecated, I have exactly one funny memory of the writing process, and it probably won’t get a blog post. I can spare you the drama of “it runs really slow when there are hundreds of databases” and “how slow?” and “like a minute” and “why do you need to run it every 15 seconds anyway?”.

Perhaps the most surprising thing about the years since 2016 is that not a single third party monitoring tool has embraced Query Store data in their performance diagnostics. Hopefully someday someone tells them about it, I guess?

But hey, that’s enough about that! Go get the most recent release of the First Responder Kit, because it has a bunch of new and improved stuff in it. Enjoy your shinies while they last.

Thanks for reading!


The Difference Between Read Committed And Read Committed Snapshot Isolation In SQL Server

Thanks for watching!


Is Using OPTIMIZE FOR More Reliable Than Forced Plans In SQL Server?

Parameter Sniffing?


I often see clients using forced plans or plan guides (yes, even still, to this day) to deal with various SQL Server performance problems caused by plans changing.

There’s usually an execution plan or two floating around that seems to be a good general idea for a given query, and a couple weird high-end and low-end outliers for very specific populations of values.

This is especially common in third party vendor environments where code and/or index changes may not be allowed without the okay from the high priest of tech support who only answers questions when their celestial craft passes near Earth every 27 years.

Of course, forced plans and plan guides can both fail. You may also run into a “morally equivalent plan” in Query Store that looks quite morally ambiguous.

Recently while working with a client, we came across just such a scenario. And of course, of the many reasons why a forced plan might fail, this one was just a… general failure.
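As an aside: if you want to see whether (and why) a Query Store forced plan is failing, the metadata is sitting right there. A quick sketch:

SELECT
    qsp.query_id,
    qsp.plan_id,
    qsp.force_failure_count,
    qsp.last_force_failure_reason_desc
FROM sys.query_store_plan AS qsp
WHERE qsp.is_forced_plan = 1;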

The fix we came up with was to track down the compile values for that nice middle ground plan, and use OPTIMIZE FOR to push that plan shape reliably into reality.

Territory


Here’s a close enough approximation to what we did, with a good-enough demo. Trying to get a more realistic one was hard without a much more complicated schema than the Stack Overflow database has.

An index!

CREATE INDEX 
    p
ON dbo.Posts
    (OwnerUserId)
WITH
    (SORT_IN_TEMPDB = ON, DATA_COMPRESSION = PAGE);

And a procedure!

CREATE OR ALTER PROCEDURE 
    dbo.OptimizeForStuff
( 
    @ParentId integer = NULL, 
    @PostTypeId integer = NULL,
    @OwnerUserId integer = NULL
)
AS
BEGIN
    SET NOCOUNT, XACT_ABORT ON;
    
    SELECT TOP (1000) 
        p.*
    FROM dbo.Posts AS p
    WHERE (p.ParentId = @ParentId OR @ParentId IS NULL)
    AND   (p.PostTypeId = @PostTypeId OR @PostTypeId IS NULL)
    AND   (p.OwnerUserId = @OwnerUserId OR @OwnerUserId IS NULL)
    ORDER BY 
        p.Score DESC, 
        p.Id DESC;
END;

All good so far, even if it does have an air of laziness.

Darwin


The problem was that when the procedure was executed like this:

EXEC dbo.OptimizeForStuff 
    @OwnerUserId = 22656, 
    @ParentId = NULL, 
    @PostTypeId = 2;

It got a good-enough fast plan:

[sql server query plan: i like you.]

But when the procedure was executed almost any other way:

EXEC dbo.OptimizeForStuff 
    @OwnerUserId = 8, 
    @ParentId = 0, 
    @PostTypeId = 1;

EXEC dbo.OptimizeForStuff 
    @OwnerUserId = 1349, 
    @ParentId = 184618, 
    @PostTypeId = 2;

It got this sort of lousy plan.

[sql server query plan: star dust]

Even Worse


When stranger executions came along, things got way worse!

EXEC dbo.OptimizeForStuff 
    @OwnerUserId = NULL, 
    @ParentId = 0, 
    @PostTypeId = 1;

EXEC dbo.OptimizeForStuff 
    @OwnerUserId = NULL, 
    @ParentId = 184618, 
    @PostTypeId = 2;

[sql server query plan: condemned]

We need to avoid all of this.

Step Up


Here’s what we did (again, roundabout) to make sure we got the generally good plan across the board, without failures!

CREATE OR ALTER PROCEDURE 
    dbo.OptimizeForStuff
( 
    @ParentId integer = NULL, 
    @PostTypeId integer = NULL,
    @OwnerUserId integer = NULL
)
AS
BEGIN
    SET NOCOUNT, XACT_ABORT ON;
    
    SELECT TOP (1000) 
        p.*
    FROM dbo.Posts AS p
    WHERE (p.ParentId = @ParentId OR @ParentId IS NULL)
    AND   (p.PostTypeId = @PostTypeId OR @PostTypeId IS NULL)
    AND   (p.OwnerUserId = @OwnerUserId OR @OwnerUserId IS NULL)
    ORDER BY 
        p.Score DESC, 
        p.Id DESC
    OPTION
    (
        OPTIMIZE FOR 
        (
            @OwnerUserId = 22656,            
            @ParentId = 0, 
            @PostTypeId = 2
        )
    );
END;

Which gets us the original fast plan that I showed you, plus faster plans for all the other executions.

For example:

EXEC dbo.OptimizeForStuff 
    @OwnerUserId = 8, 
    @ParentId = 0, 
    @PostTypeId = 1;

EXEC dbo.OptimizeForStuff 
    @OwnerUserId = 1349, 
    @ParentId = 184618, 
    @PostTypeId = 2;

These go from 1.5 seconds to ~300ms:

[sql server query plan: dorsal]

And the two outlier queries improve quite a bit as well (though neither one is exactly great, admittedly).

EXEC dbo.OptimizeForStuff 
    @OwnerUserId = NULL, 
    @ParentId = 0, 
    @PostTypeId = 1;

EXEC dbo.OptimizeForStuff 
    @OwnerUserId = NULL, 
    @ParentId = 184618, 
    @PostTypeId = 2;

[sql server query plan: subjected]

In all cases, the plan is generally better and faster, and sharing one plan across executions (though imperfect for the outliers) tamped down the extreme performance issues that were there before with attempts at forced plans.

Posit Hell


While I’m no great fan of OPTIMIZE FOR UNKNOWN, using a specific value can act like a less faulty version of plan forcing.

You shouldn’t pull this out every time, because it is a bit of duct tape to keep a sinking ship above water, but in oddball cases, it can be a quick and rather painless fix.

At some point, better solutions should be explored and implemented, but emergencies don’t generally allow for the greatest care to be taken.
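For this particular catch-all pattern, the usual better solution is a recompile hint (or dynamic SQL), so each combination of supplied parameters gets its own plan. A sketch, with the obvious compile-CPU trade-off for frequently executed queries:

SELECT TOP (1000) 
    p.*
FROM dbo.Posts AS p
WHERE (p.ParentId = @ParentId OR @ParentId IS NULL)
AND   (p.PostTypeId = @PostTypeId OR @PostTypeId IS NULL)
AND   (p.OwnerUserId = @OwnerUserId OR @OwnerUserId IS NULL)
ORDER BY 
    p.Score DESC, 
    p.Id DESC
OPTION (RECOMPILE); /* each execution gets a plan for its exact parameter values */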

Thanks for reading!


Compiles! Recompiles! Stored Procedures! Temp Tables! Extended Events! In SQL Server

Thanks for watching!


A Mild Annoyance With MERGE Statements And Triggers

I’m No Expert


I will leave the finer points of the problems with MERGE statements to the Michaels and Aarons of the SQL Server world.

This is just a… But why? post about them, because I’m honestly a bit puzzled by this missing implementation detail.

To get us to the point, I’m going to use a code snippet (with embellishments) from Aaron’s post here.

We’ll be starting with this table and trigger from the linked post, with a couple small tweaks to satisfy my OCD:

CREATE TABLE 
    dbo.MyTable
(
    id integer
);

INSERT 
    dbo.MyTable 
VALUES
    (1),
    (4);

CREATE OR ALTER TRIGGER 
    dbo.MyTable_All
ON dbo.MyTable
FOR INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    
    IF ROWCOUNT_BIG() = 0 RETURN;
    IF TRIGGER_NESTLEVEL() > 1 RETURN;
  
    PRINT 'Executing trigger. Rows affected: ' + RTRIM(@@ROWCOUNT);
    
    IF EXISTS (SELECT 1 FROM inserted) AND NOT EXISTS (SELECT 1 FROM deleted)
    BEGIN
      PRINT '  I am an insert...';
    END;
    IF EXISTS (SELECT 1 FROM inserted) AND EXISTS (SELECT 1 FROM deleted)
    BEGIN
      PRINT '  I am an update...';
    END;
    IF NOT EXISTS (SELECT 1 FROM inserted) AND EXISTS (SELECT 1 FROM deleted)
    BEGIN
      PRINT '  I am a delete...';
    END;
END;

After all, one of the best ways to make sure you get code right is to copy and paste it from the internet.

Quiet On The Set


As much as we all love to dunk on MERGE, like cursors, heaps, and UDFs of various types, it did give you some neat options with the OUTPUT clause, like the $action column, and the ability to get columns from other tables involved in the query. You can’t do that with a normal insert, update, or delete when using the OUTPUT clause, though I think it would be cool if we could.

Working a bit with the code linked above, here’s an expansion on it showing the additional OUTPUT capability, but this is also where my annoyance begins.

BEGIN TRANSACTION
    DECLARE
        @t table
    (
        action varchar(6),
        i_id integer,
        d_id integer,
        s_word varchar(5)
    );

    SELECT
        mt.*
    FROM dbo.MyTable AS mt;

    MERGE 
        dbo.MyTable WITH (HOLDLOCK) AS Target
    USING 
        (
            VALUES
                (1, 'one'),
                (2, 'two'),
                (3, 'three')
        ) AS Source (id, word)
    ON Target.id = Source.id
    WHEN MATCHED 
    THEN UPDATE 
           SET Target.id = Source.id
    WHEN NOT MATCHED 
    THEN INSERT
           (id) 
         VALUES
           (Source.id)
    WHEN NOT MATCHED BY SOURCE 
    THEN DELETE
    OUTPUT
        $action,
        Inserted.id,
        Deleted.id,
        Source.word
    INTO @t 
    ( 
        action, 
        i_id, 
        d_id,
        s_word
    );
    
    SELECT
        t.*
    FROM @t AS t;

    SELECT
        mt.*
    FROM dbo.MyTable AS mt;
ROLLBACK TRANSACTION;

You likely can’t guess what I’m sore about just looking at this, because this isn’t what annoys me.

This is all fine, and rather a nice showing of capabilities for an oft-maligned (by myself included) bit of syntax.

The problem is really in the trigger.

What’s Wrong With Triggers?


If you work with trigger code enough, you’ll get used to seeing:

  • Catch-all triggers that do a lot of gymnastics to differentiate insert from update from delete
  • A few separate triggers to catch each modification type separately, which still do some checking to verify

The problem is that in any trigger, the $action column is not directly exposed, so there’s no clean way to determine which action a MERGE statement performed.

Sure, you can add a column to a table to track it, or use some other hacky workaround, but I consider it a quite poor design choice to not have the $action column as a part of the Inserted and Deleted virtual tables.

Having it there would also benefit generic triggers that capture modifications, by making the exact type of modification clear to query writers.
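If you’ve never seen that particular brand of gymnastics, the per-row version usually looks something like this. A sketch only, assuming id is a usable join key:

SELECT
    action =
        CASE
            WHEN d.id IS NULL THEN 'INSERT' /* row only exists in inserted */
            WHEN i.id IS NULL THEN 'DELETE' /* row only exists in deleted */
            ELSE 'UPDATE'                   /* row exists in both */
        END,
    id = COALESCE(i.id, d.id)
FROM inserted AS i
FULL JOIN deleted AS d
    ON i.id = d.id;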

Thanks for reading!


Join @bobwardms At SQL Saturday Boston For The Azure Workshop for SQL Professionals (@NESQLServer)

All’s Well That Friends Well


The nice folks at the New England SQL Server User Group (w|t) had me in town a week ago to present my workshop, The Foundations Of SQL Server Performance.

We had about 40 folks show up — which ain’t bad for a random Friday in May — including one attendee from Nigeria.

Not just like, originated in Nigeria. Like, flew from Nigeria for the workshop. That’s probably a new record for me, aside from PASS Precons where folks are already headed in from all corners.

Speaking of PASS Precons — me and Kendra are double teaming your Monday and Tuesday — have I mentioned that lately?

As a favor to some dear friends, here’s a promo code to get $125 off Bob’s Friday Precon, The Azure Workshop for SQL Professionals.

The precon takes place on October 4th, and the SQL Saturday is October 5th. You can register separately for the SQL Saturday only here.

If you want to catch Bob’s Friday workshop, use the QR code below, or register here and use the code PPP50 to get your discount.

[QR code: Bob Ward’s precon registration]

Thanks for reading!


Two Ways To Tune A Slow Query In SQL Server

Like All Assumptions


You and me might feel like the lower back end of a thing if we’re tuning a query that has other problems. Perhaps it’s running on one of those serverless servers with half a hyper-threaded core and 8KB of RAM, as an example.

When I’m working with clients, I often get put into odd situations that limit what I’m allowed to do to fix query performance. Sometimes code comes from an ORM or vendor binaries that can’t be changed, sometimes adding an index to a sizable table on Standard Edition in the middle of the day is just an impossibility, and of course other times things are just a spectacle du derrière and I’m allowed to do whatever I want. You can probably guess which one I like best.

This post is about the two other ones, where you’re stuck between derrière and ânesse. 

For the duration of reading this, make the wild leap of faith that it takes to embrace the mindset that not everyone who works with SQL Server knows how to write good queries or design good indexes.

I know, I know. Leap with me, friends.

The Query And Execution Plan


Here’s what we’re starting with:

SELECT TOP (10)
    DisplayName =
       (
           SELECT
               u.DisplayName
           FROM dbo.Users AS u
           WHERE u.Id = p.OwnerUserId
       ),
    p.AcceptedAnswerId,
    p.CreationDate,
    p.LastActivityDate,
    p.ParentId,
    p.PostTypeId,
    p.Score,
    p.CommentCount,
    VoteCount =
        (
            SELECT
                COUNT_BIG(*)
            FROM dbo.Votes AS v
            WHERE v.PostId = p.Id
        )
FROM dbo.Posts AS p
ORDER BY
    p.Score DESC;

And resulting plan:

[sql server query plan: bas]

We can surmise a few things from this plan:

  • If there are good indexes, SQL Server isn’t using them
  • That hash spill is some extra kind of bad news
  • Spools remain a reliable indicator that something is terribly wrong

Okay, so I’m kidding a bit on the last point. Sorta.

The Query Plan Details


You might look at all this work that SQL Server is doing and wonder why: With no good, usable indexes, and such big tables, why in the overly-ambitious heck are we doing all these nested loop joins?

And the answer, my friend, is blowing in the row goal.

The TOP has introduced one here, and it has been applied across all of the operators along the top of the plan.

Normally, a row goal is when the optimizer places a bet on it being very easy to locate a small number of rows and produces an execution plan based on those reduced costs.

In this case, it would be 10 rows in the Posts table that will match the Users table and the Votes table, but since these are joins of the left outer variety they can’t eliminate results from the Posts table.

The row goals do make for some terrible costing and plan choices here, though.

[sql server query plan: blue = row goal applied, orange = no row goal applied]

This all comes from cardinality estimation and costing and all the other good stuff that the optimizer does when you throw a query at it.

The Query Rewrite


One way to show the power of TOPs is to increase and then decrease the row goal. For example, this (on my machine, at this very moment in time, given many local factors) will change the query plan entirely:

SELECT TOP (10)
    p.*
FROM
(
    SELECT TOP (26)
        DisplayName =
           (
               SELECT
                   u.DisplayName
               FROM dbo.Users AS u
               WHERE u.Id = p.OwnerUserId
           ),
        p.AcceptedAnswerId,
        p.CreationDate,
        p.LastActivityDate,
        p.ParentId,
        p.PostTypeId,
        p.Score,
        p.CommentCount,
        VoteCount =
            (
                SELECT
                    COUNT_BIG(*)
                FROM dbo.Votes AS v
                WHERE v.PostId = p.Id
            )
    FROM dbo.Posts AS p
    ORDER BY
        p.Score DESC
) AS p
ORDER BY
    p.Score DESC;

You may need to fiddle with the TOP a bit to see the change on your machine. The resulting plan looks a bit funny. You won’t normally see two TOPs nuzzling up like this.

[sql server query plan: scientific]

But the end result is an improvement of a full minute and several seconds.

Because the inner TOP has a bigger row goal, the optimizer changes its mind about how much effort it will have to expend to fully satisfy it before clenching things down to satisfy the smaller TOP.

If you’re only allowed quick query rewrites, this can be a good way to get a more appropriate plan for the amount of work required to actually locate rows at runtime, when the optimizer is dreadfully wrong about things.
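Another rewrite-only lever, if you’re allowed to attach hints: turn row goals off entirely and see whether they were the problem. A sketch using the documented USE HINT syntax:

SELECT TOP (10)
    DisplayName =
       (
           SELECT
               u.DisplayName
           FROM dbo.Users AS u
           WHERE u.Id = p.OwnerUserId
       ),
    p.AcceptedAnswerId,
    p.CreationDate,
    p.LastActivityDate,
    p.ParentId,
    p.PostTypeId,
    p.Score,
    p.CommentCount,
    VoteCount =
        (
            SELECT
                COUNT_BIG(*)
            FROM dbo.Votes AS v
            WHERE v.PostId = p.Id
        )
FROM dbo.Posts AS p
ORDER BY
    p.Score DESC
OPTION (USE HINT('DISABLE_OPTIMIZER_ROWGOAL')); /* the TOP no longer discounts costs */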

The Index Rewrite


In this case, just indexing the Votes table is enough to buy us all the performance we need, but in my personal row goal for completeness, I’m going to add in two indexes:

CREATE INDEX 
    v 
ON dbo.Votes 
    (PostId) 
WITH 
    (SORT_IN_TEMPDB = ON, DATA_COMPRESSION = PAGE);

CREATE INDEX 
    p 
ON dbo.Posts 
    (Score DESC, OwnerUserId) 
WITH 
    (SORT_IN_TEMPDB = ON, DATA_COMPRESSION = PAGE);

Going back to the original query, we no longer need to play games with the optimizer by pitting TOPs against each other.

[sql server query plan: wisdom teeth]

This is obviously much faster, if you’re in the enlightened and enviable position to create them.

Perhaps you are, but maybe not in the exact moment that you need to fix a performance problem.

In those cases, you may need to use rewrites to get temporary performance improvements until you’re able to.

Thanks for reading!


Actual Execution Plans Finally Show Lock Waits!

Shorty


This is a short post. I know you’ve been somewhat spoiled by longer posts and videos lately!

Just kidding, y’all don’t pay attention (with the exception of Kevin Feasel), so maybe this will be consumable enough for even the most squirrel brained amongst us.

A long time ago, I complained that wait stats logged by actual execution plans don’t show lock waits. That seemed like a pretty big deal, because if you’re running a query and wondering why sometimes it’s fast and sometimes it’s slow, that could be a pretty huge hint.

But now, if you run a query that experienced lock waits, you can see that in the details. Just highlight the root operator, hit F4 or right click and go to Properties, look under the wait stats node, and you’ll see this:

[sql server query plan: cherry bomb]
  • When did this get added? I have no idea.
  • How far was it back ported? I have no idea.

I could look on VMs with older versions of SQL Server, but it’s dinner time. Or as they call it in Saskatchewan, “supper”.
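If you want to see it for yourself, a quick blocking setup will do. A sketch, assuming a Stack Overflow-style dbo.Users table:

/* session one: take an exclusive lock and hold it */
BEGIN TRANSACTION;

UPDATE dbo.Users
    SET Reputation += 1
WHERE Id = 22656;

/* session two: enable actual execution plans, run this, and let it wait a bit */
SELECT
    u.DisplayName
FROM dbo.Users AS u
WHERE u.Id = 22656;

/* back in session one, once you've seen enough */
ROLLBACK TRANSACTION;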

Thanks for reading!


T-SQL Tuesday: Your Favorite Job Interview Question

Hello Yes Nice To Meet You


[T-SQL Tuesday logo]

This month we have an invitation from Kevin Feasel.

Since Kevin is often kind enough to link to my posts via his Curated SQL feed/aggregator, I figured I’d add to the SEO pyramid scheme this month.

As part of my consulting services, I often help clients decide if they could keep a full time database person (there are too many potential titles to list here) busy for 40 hours a week for the foreseeable future.

If the Magic 8-Ball comes back with a Yes, I’ll also help them write or review their job posting, and screen candidates. If a resume makes it past various detectors and doesn’t end up in the randomly generated unlucky pile, I’ll interview them.

Me. Personally.

Not in-person, though.

There’s not enough armed security in the world for me to get that close to HR.

The Production DBA Question


If someone is going to be in charge of production DBA tasks, I’ll of course ask questions about experience with whatever model is in use, or is to be implemented by the company.

So like, if the company has Availability Groups, or wants Availability Groups, we’ll talk about those.

For flavor, I’ll also ask them why anyone would be so insane as to not just use a Failover Cluster with Log Shipping.

But the real question and answer that tells me if someone knows their business is this: When you set up backups, how often do you take log backups?

If anyone says “every 15 minutes” without any further clarification or qualification, they immediately go into the “No” pile.

See, 15 minute log backups are a meme in the same Pantheon as 5% and 30% for index fragmentation. Neither answer shows any deeper understanding of what exactly they’re doing.

Log backup frequency is a business requirement based on RPO goals (Recovery Point Objective). If your RPO goal is less than 15 minutes of data loss, 15 minute log backups don’t meet that goal.
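If you want to check how your current cadence stacks up against whatever RPO you’ve been handed, msdb has the receipts. A sketch that measures the worst-case gap between log backups:

WITH log_backups AS
(
    SELECT
        bs.database_name,
        minutes_since_previous =
            DATEDIFF
            (
                MINUTE,
                LAG(bs.backup_finish_date) OVER
                (
                    PARTITION BY bs.database_name
                    ORDER BY bs.backup_finish_date
                ),
                bs.backup_finish_date
            )
    FROM msdb.dbo.backupset AS bs
    WHERE bs.type = 'L' /* log backups only */
)
SELECT
    lb.database_name,
    worst_gap_minutes = MAX(lb.minutes_since_previous)
FROM log_backups AS lb
GROUP BY
    lb.database_name
ORDER BY
    worst_gap_minutes DESC;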

And anyone who says that Availability Groups help meet RPO goals gets their resume burned in front of them.

The Developer DBA Question


For anyone responsible for performance tuning, I need to make sure that they’re staying current with their learning and techniques.

I’ll show a lot of pictures of queries, ask about various anti-patterns and tuning techniques, but my absolute favorite is to show them pictures of query plans.

Actual execution plans, as it were.

 

[sql server query plan: fine mess]

Why is this important? Because at least half of the people I show this to will totally ignore the operator times and start talking about costs.

If you’re looking at a plan where this much information is available, and all you can dredge up to troubleshoot things are estimated costs, I’ll probably mail you a copy of this.

And I’m not kidding here — it shows a complete lack of attention and growth — what’s the point of calling yourself a performance tuning expert if your expertise peaked in 2008?

There Are Good People Out There


They may not spend every waking moment writing, recording, speaking, or chasing MVP status.

And that’s fine — I don’t care if you partake in that stuff or not — there are plenty of very smart people out there who don’t file quarterly paperwork or schedule posts months in advance.

What I care about is that you aren’t just an automaton seeking a new place to go through the motions in.

Thanks for reading!
