Reconsidering SQL Server’s Missing Index Requests

Milk Carton


Part of reviewing any server necessarily includes reviewing indexes. When you’re working through the things that matter, like unused indexes, duplicate indexes, heaps, etc., it’s pretty clear-cut what you should do to fix them.

Missing indexes are a different animal though. You have three general metrics to consider with them:

  • Uses: the number of times a query could have used the index
  • Impact: how much the optimizer thinks it can reduce the cost of the query by
  • Query cost: how much the optimizer estimates the query will cost to run

Of those metrics, impact and query cost are entirely theoretical. I’ve written quite a bit about query costing and how it can be misleading. If you really wanna get into it, you can watch the whole series here.

In short: you might have very expensive queries that finish very quickly, and you might have very low cost queries that finish very slowly.

Especially in cases of parameter sniffing, a query plan with a very low cost might get compiled and generate a missing index request. What happens if every other execution of that query re-uses the cheaply-costed plan and runs for a very long time?

You might have a missing index request that looks insignificant.

Likewise, impact is how much the optimizer thinks it can reduce the cost of the current plan by. Often, you’ll create a new index and get a totally different plan. That plan may be more or less expensive than the previous plan. It’s all a duck hunt.

The most reliable of those three metrics is uses. I’m not saying it’s perfect, but there’s a bit less Urkeling there.

When you’re looking at missing index requests, don’t discount those with lots of uses for low cost queries. Often, they’re more important than they look.
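
If you want to eyeball all three metrics side by side, here’s a minimal sketch against the missing index DMVs. The uses math is the usual seeks-plus-scans addition; none of the column math here is gospel.

SELECT
    d.statement AS table_name,
    d.equality_columns,
    d.inequality_columns,
    d.included_columns,
    uses = gs.user_seeks + gs.user_scans,
    impact = gs.avg_user_impact,
    query_cost = gs.avg_total_user_cost
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g
    ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats AS gs
    ON gs.group_handle = g.index_group_handle
ORDER BY uses DESC; --sort by the least theoretical metric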

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

What’s Really Different About In-Memory Table Variables In SQL Server?

Kendra, Kendra, Kendra


My dear friend Kendra asked… Okay, look, I might have dreamed this. But I maybe dreamed that she asked what people’s Cost Threshold For Blogging™ is. Meaning, how many times do you have to get asked a question before you write about it.

I have now heard people talking and asking about in-memory table variables half a dozen times, so I guess here we are.

Talking about table variables.

In memory.

Yes, Have Some


First, yes, they do help relieve tempdb contention if you have code that executes with both high concurrency and high frequency. And by high, I mean REALLY HIGH.

Like, Snoop Dogg high.

Because you can’t get rid of in-memory stuff, I’m creating a separate database to test in.

Here’s how I’m doing it!

CREATE DATABASE trash;

ALTER DATABASE trash 
ADD FILEGROUP trashy 
    CONTAINS MEMORY_OPTIMIZED_DATA ;
     
ALTER DATABASE trash 
ADD FILE 
(
    NAME=trashcan, 
    FILENAME='D:\SQL2019\maggots'
) 
TO FILEGROUP trashy;

USE trash;

CREATE TYPE PostThing 
AS TABLE
(
    OwnerUserId int,
    Score int,
    INDEX o HASH(OwnerUserId)
    WITH(BUCKET_COUNT = 100)
) WITH
(
    MEMORY_OPTIMIZED = ON
);
GO

Here’s how I’m testing things:

CREATE OR ALTER PROCEDURE dbo.TableVariableTest(@Id INT)
AS
BEGIN

    SET NOCOUNT, XACT_ABORT ON;
    
    DECLARE @t AS PostThing;
    DECLARE @i INT;

    INSERT @t 
        ( OwnerUserId, Score )
    SELECT 
        p.OwnerUserId,
        p.Score
    FROM Crap.dbo.Posts AS p --a copy of the Stack Overflow database, under an assumed name
    WHERE p.OwnerUserId = @Id;

    SELECT 
        @i = SUM(t.Score)
    FROM @t AS t
    WHERE t.OwnerUserId = 22656
    GROUP BY t.OwnerUserId;

    SELECT 
        @i = SUM(t.Score)
    FROM @t AS t
    GROUP BY t.OwnerUserId;

END;
GO

Hot Suet


So like, the first thing I did was use SQLQueryStress to run this on a bunch of threads, and I didn’t see any tempdb contention.

So that’s cool. But now you have a bunch of stuff taking up space in memory. Precious memory. Do you have enough memory for all this?

Marinate on that.

Well, okay. Surely they must improve on all of the issues with table variables in some other way:

  • Modifications can’t go parallel
  • Bad estimates
  • No column level stats

But, nope. No they don’t. It’s the same crap.

Minus the tempdb contention.

Plus taking up space in memory.

But 2019


SQL Server 2019 does offer the same table-level cardinality estimate for in-memory table variables as it does for regular table variables.

If we flip database compatibility levels to 150, deferred compilation kicks in. Great. Are you on SQL Server 2019? Are you using compatibility level 150?

Don’t get too excited.
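
If you want to flip back and forth yourself, it’s one ALTER per switch, using the trash database from earlier:

ALTER DATABASE trash SET COMPATIBILITY_LEVEL = 140; --the old fixed table variable guess
GO

ALTER DATABASE trash SET COMPATIBILITY_LEVEL = 150; --deferred compilation kicks in
GO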

Let’s give this a test run in compat level 140:

DECLARE @i INT = 22656;
EXEC dbo.TableVariableTest @Id = @i;
[Query plan: everything counts in large amounts]

Switching over to compat level 150:

[Query plan: yeaaahhhhh]

Candy Girl


So what do memory optimized table variables solve?

Not the problems that table variables cause in general.

They do help you avoid tempdb contention, but you trade that off for them taking up space in memory.

Precious memory.

Do you have enough memory?

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

Improving The Performance Of Modifications In SQL Server

Pssst!


If you landed here from Brent’s weekly links, use this link to get my training for 90% off.

The access is for life, but this coupon code isn’t! Get it while it lasts.

Discount applies at checkout, and you have to purchase everything for it to apply.

Obviously


There are going to be situations where it’s smarter to change other aspects of code like this.

But I know how it is out there! Sometimes it’s hard to get in and change a bunch of logic and tinker with things.

In some cases, you can improve performance by wrapping chunks of code in transactions.

Fear Of Commitment


In this example, there’s an automatic commit every time the update completes. That means every trip through the loop forces a flush of log records to disk for a single-row change.

This can result in very chatty behavior, which even good storage can have a tough time with. There are likely other aspects of transaction logging impacted by this, but I only have so much time before this call starts.

SET NOCOUNT ON;

DECLARE 
    @cur_user int = 0,
    @max_user int = 0;

SELECT 
    @cur_user = MIN(u.Id), 
    @max_user = MAX(u.Id)
FROM dbo.Users AS u
WHERE u.Age IS NULL;

WHILE @cur_user <= @max_user
BEGIN

    UPDATE u
        SET u.Age = DATEDIFF(YEAR, u.CreationDate, u.LastAccessDate)
    FROM dbo.Users AS u
    WHERE u.Id = @cur_user
    AND   u.Age IS NULL;

    SET @cur_user = (SELECT MIN(u.Id) FROM dbo.Users AS u WHERE u.Id > @cur_user);

END;

This code runs for nearly 5 minutes before completing. Looking at a ~60 second sample turns up some gnarly gnumbers.

[Metrics screenshot: barkley]

Batched Commit


Without changing the logic of the update, we can get things in better shape by using transactions and periodically committing them.

SET NOCOUNT ON;

DECLARE 
    @rows bigint = 0,
    @cur_user int = 0,
    @max_user int = 0;

SELECT 
    @cur_user = MIN(u.Id), 
    @max_user = MAX(u.Id)
FROM dbo.Users AS u
WHERE u.Age IS NULL;

BEGIN TRANSACTION;

WHILE @cur_user <= @max_user
BEGIN

    UPDATE u
        SET u.Age = DATEDIFF(YEAR, u.CreationDate, u.LastAccessDate)
    FROM dbo.Users AS u
    WHERE u.Id = @cur_user
    AND   u.Age IS NULL;

    IF @rows = (@rows + @@ROWCOUNT) --the update touched zero rows; we're done
    BEGIN
        COMMIT TRANSACTION;
        RETURN;
    END;
    ELSE
    BEGIN
        SET @rows = (@rows + @@ROWCOUNT); --tally up modified rows
        SET @cur_user = (SELECT MIN(u.Id) FROM dbo.Users AS u WHERE u.Id > @cur_user AND u.Age IS NULL);
    END;

    IF @rows >= 50000 --commit every ~50k modifications, then open a fresh transaction
    BEGIN
        RAISERROR('Restarting', 0, 1) WITH NOWAIT;
        SET @rows = 0;
        COMMIT TRANSACTION;
        BEGIN TRANSACTION;
    END;

END;

IF @@TRANCOUNT > 0
COMMIT

The first thing we’ll notice is that the code finishes in about 1 minute rather than 5 minutes.

How nice! I love when things move along. The metrics look a bit better, too.

[Metrics screenshot: scalp issues]

We have almost no waits on WRITELOG, and we write far less to the transaction log (down to 13MB from 35MB).

We also got to do some snazzy stuff with @@ROWCOUNT. Good job, us.
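
If you want to grab the same measurements yourself, here’s a hedged sketch: sample these before and after each version of the loop, and diff the numbers.

--WRITELOG waits, instance-wide
SELECT
    ws.waiting_tasks_count,
    ws.wait_time_ms
FROM sys.dm_os_wait_stats AS ws
WHERE ws.wait_type = N'WRITELOG';

--writes to this database's log file(s)
SELECT
    df.name,
    vfs.num_of_writes,
    mb_written = vfs.num_of_bytes_written / 1024. / 1024.
FROM sys.dm_io_virtual_file_stats(DB_ID(), NULL) AS vfs
JOIN sys.database_files AS df
    ON df.file_id = vfs.file_id
WHERE df.type_desc = N'LOG';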

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

When Index Sort Direction Matters For Query Performance In SQL Server

Ever Helpful


I got a mailbag question recently about some advice that floats freely around the internet regarding indexing for windowing functions.

But even after following all the best advice that Google could find, their query was still behaving poorly.

Why, why why?

Ten Toes Going Up


Let’s say we have a query that looks something like this:

SELECT
    u.DisplayName,
    u.Reputation,
    p.Score, 
    p.PostTypeId
FROM dbo.Users AS u
JOIN
(
    SELECT
        p.Id,
        p.OwnerUserId,
        p.Score,
        p.PostTypeId,
        ROW_NUMBER() OVER
        (
            PARTITION BY
                p.OwnerUserId,
                p.PostTypeId
            ORDER BY
                p.Score DESC
        ) AS n
    FROM dbo.Posts AS p
) AS p
    ON  p.OwnerUserId = u.Id
    AND p.n = 1
WHERE u.Reputation >= 500000
ORDER BY u.Reputation DESC,
         p.Score DESC;

Without an index, this’ll drag on forever. Or about a minute.

But with a magical index that we heard about, we can fix everything!

Ten Toes Going Down


And so we create this mythical, magical index.

CREATE INDEX bubble_hard_in_the_double_r
ON dbo.Posts
(
    OwnerUserId ASC, 
    PostTypeId ASC, 
    Score ASC
);

But there’s still something odd in our query plan. Our Sort operator is… Well, it’s still there.

[Query plan: grinch]

Oddly, we need to sort all three columns involved in our Windowing Function, even though the first two of them are in proper index order.

OwnerUserId and PostTypeId are both in ascending order. The only one that we didn’t stick to the script on is Score, which is asked for in descending order.

Dram Team


This is a somewhat foolish situation, all around. One column being out of order causing a three column sort is… eh.

We really need this index, instead:

CREATE INDEX bubble_hard_in_the_double_r
ON dbo.Posts
(
    OwnerUserId ASC, 
    PostTypeId ASC, 
    Score DESC
) WITH (DROP_EXISTING = ON); --replace the Score ASC version from above
[Query plan: mama mia]

Granted, I don’t know that I like this plan at all without parallelism and batch mode, but we’ve been there before.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

Signs You Need Dynamic SQL To Fix Query Performance Problems In SQL Server

Nothing Works


There are things that queries just weren’t meant to do all at once. Multi-purpose queries are often just a confused jumble with crappy query plans.

If you have a Swiss Army Knife, pull it out. Open up all the doodads. Now try to do one thing with it.

If you didn’t end up with a corkscrew in your eye, I’m impressed.

En Masse


The easiest way to think of this is conditionals. If what happens within a stored procedure or query depends on something that is decided based on user input or some other state of data, you’ve introduced an element of uncertainty to the query optimization process.

Of course, this also depends on if performance is of some importance to you.

Since you’re here, I’m assuming it is. It’s not like I spend a lot of time talking about backups and crap.

There are a lot of forms this can take, but none of them lead to you winning an award for Best Query Writer.

IFTTT


Let’s say a stored procedure will execute a different query based on some prior logic, or an input parameter.

Here’s a simple example:

--wrapped in a hypothetical procedure so @i has somewhere to come from
CREATE OR ALTER PROCEDURE dbo.BranchingLogic (@i int)
AS
BEGIN
    SET NOCOUNT ON;

    IF @i = 1
    BEGIN
        SELECT
            u.*
        FROM dbo.Users AS u
        WHERE u.Reputation = @i;
    END;

    IF @i = 2
    BEGIN
        SELECT
            p.*
        FROM dbo.Posts AS p
        WHERE p.PostTypeId = @i;
    END;
END;
GO

If the stored procedure runs for @i = 1 first, the second query will get optimized for that value too.

Using parameterized dynamic SQL can get you the type of optimization separation you want, to avoid cross-optimization contamination.

I made half of that sentence up.
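
Here’s a minimal sketch of that separation, using sp_executesql so each branch compiles on its own, with the values it actually runs with. The procedure name is made up:

CREATE OR ALTER PROCEDURE dbo.BranchingLogicDynamic (@i int)
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @sql nvarchar(MAX) = N'';

    IF @i = 1
    BEGIN
        SET @sql = N'
        SELECT
            u.*
        FROM dbo.Users AS u
        WHERE u.Reputation = @i;';
    END;

    IF @i = 2
    BEGIN
        SET @sql = N'
        SELECT
            p.*
        FROM dbo.Posts AS p
        WHERE p.PostTypeId = @i;';
    END;

    --each statement text gets its own cached plan, parameterized on @i
    EXEC sys.sp_executesql
        @sql,
        N'@i int',
        @i;
END;
GO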

For more information, read this article.

Act Locally


Local variables are another great use of dynamic SQL, because one query’s local variable is another query’s parameter.

DECLARE @i int = 2;
SELECT
    v.*
FROM dbo.Votes AS v
WHERE v.VoteTypeId = @i;

Doing this will get you weird estimates, and you won’t be happy.

You’ll never be happy.
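
A minimal sketch of the fix: pass the local variable in through sp_executesql, where it becomes a real parameter the optimizer can sniff against the histogram.

DECLARE @i int = 2;

DECLARE @sql nvarchar(MAX) = N'
SELECT
    v.*
FROM dbo.Votes AS v
WHERE v.VoteTypeId = @i;';

EXEC sys.sp_executesql
    @sql,
    N'@i int',
    @i;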

For more information, read this article.

This Or That


You can replace or reorder the where clause with lots of different attempts at humor, but none of them will be funny.

SELECT
    c.*
FROM dbo.Comments AS c
WHERE (c.Score >= @i OR @i IS NULL);

The optimizer does not consider this SARGable, and it will take things out on you in the long run.

Maybe you’re into that, though. I won’t shame you.

We can still be friends.
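
One hedged way out: build only the predicates you need, so each variation of the search gets its own plan.

DECLARE
    @i int = 100,
    @sql nvarchar(MAX) = N'
SELECT
    c.*
FROM dbo.Comments AS c
WHERE 1 = 1'; --anchor predicate, so any appended filter can start with AND

IF @i IS NOT NULL
BEGIN
    SET @sql += N'
AND   c.Score >= @i';
END;

EXEC sys.sp_executesql
    @sql,
    N'@i int',
    @i;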

For more information, watch this video.

Snortables


Dynamic SQL is so good at helping you with parameter sniffing issues that I have an entire session about it.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

Mind Your OUTPUT Targets In SQL Server, Some Of Them Hurt Query Performance

Browser History


I’ve blogged about OUTPUT a couple times, and those posts are Still Accurate™

But it’s worth noting that, for the second post, OUTPUT forced the query to run serially because there was no target; it was just returning data back to SSMS.

Depending on the query behind the putting of the out, parallelism could be quite important.

That’s why in the first post, the put out into a real table didn’t cause performance to suffer.

Of course, if you OUTPUT into a table variable, you still have to deal with table variables being crappy about modifications.

Samesies


If you compare the performance of queries that output into a @table variable vs one that outputs into a #temp table, you’ll see a difference:

[Query plan: bang bang bang]

Even though the parallel zone is limited here, there’s a big difference in overall query time: scanning the Votes table single-threaded vs. in parallel.
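
If you want to recreate the comparison, here’s a minimal repro sketch, assuming the Stack Overflow Votes table. The modification itself is arbitrary; only the OUTPUT target differs between the two.

DECLARE @t table (Id int NOT NULL);

UPDATE v
    SET v.BountyAmount = v.BountyAmount
OUTPUT Inserted.Id INTO @t (Id)
FROM dbo.Votes AS v
WHERE v.VoteTypeId = 2;

CREATE TABLE #t (Id int NOT NULL);

UPDATE v
    SET v.BountyAmount = v.BountyAmount
OUTPUT Inserted.Id INTO #t (Id)
FROM dbo.Votes AS v
WHERE v.VoteTypeId = 2;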

When you’re designing processes to be as efficient as possible, paying attention to details like this can make a big difference.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

DROPCLEANBUFFERS: A Better Test Of Storage Than Query Performance For SQL Server

Relics


When I’m tuning queries, people will often ask me what metrics I look at to judge efficiency. Usually, it comes down to just getting things done faster.

Sometimes it’s okay to use more CPU via a parallel plan to get your query faster. Sometimes it’s okay to do more reads to get a query faster.

Sure, it’s cool when it works out that you can reduce resources overall, but every query is special. It all sort of depends on where the bottleneck is.

One thing I’ve been asked several times is how important it is to clear out the plan cache and drop clean buffers between runs.

While this post is only about the dropping of cleanly buffers, let’s quickly touch on clearing the plan cache.

Dusted


Clearing out the plan cache (or recompiling, or whatever) is rarely an effective query tuning mechanism, unless you’re working on a parameter sniffing issue, or trying to prove that something else about a query is causing a problem. Maybe it’s local variables, maybe it’s a bad estimate from a table variable.

You get the point.

But starting with a new plan every time is overblown — if you change things like indexes or the way the query is written, you’re gonna get a new plan anyway.

If you’re worried about long compile times, you might also want to do this to prove it’s not necessarily the query that’s slow.
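
If you do need fresh plans while testing, there are narrower hammers than clearing the whole instance’s cache. A few hedged options; the procedure name is made up:

--current database only
ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;

--one module (hypothetical name), marked to recompile on its next run
EXEC sys.sp_recompile @objname = N'dbo.SomeProcedure';

--or per-query, without touching the cache at all
SELECT COUNT_BIG(*) AS records FROM dbo.Posts AS p OPTION (RECOMPILE);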

Busted


Let’s look at a big picture. The script that generates this picture is as follows:

--Calgon
DBCC DROPCLEANBUFFERS;

--o boy
SELECT
    COUNT_BIG(*) AS records
FROM dbo.Posts AS p;

--table manners
SELECT
    COUNT_BIG(*) AS records
FROM dbo.Posts AS p;

--so considerate
CREATE INDEX p ON dbo.Posts(Id);

--y tho?
DBCC DROPCLEANBUFFERS;

--woah woah woah
SELECT
    COUNT_BIG(*) AS records
FROM dbo.Posts AS p;

--hey handsome
SELECT
    COUNT_BIG(*) AS records
FROM dbo.Posts AS p;

We’re gonna drop them bufferinos, run the same count query twice, add a real narrow index, then count twice again.

Great. Great. Great.

[Query plan: different lifetime]

Annalieses


For the first two executions, we performance tuned the query by about 30 seconds, just by… reading data from memory.

Hm. Okay. Unless you’re trying to prove that you don’t have enough memory, or that storage sucks, you’re not really convincing me of much.

Yes, RAM is faster than disk. Now what?

For the second two executions, query performance got way better. But reading the smaller index from disk hardly changed overall execution time.

If it’s not a strong enough argument that getting a query from 14 seconds down to half a second with a better index means you need an index, you might be working for difficult people.

Of course, Size Matters™

[Size comparison screenshot: brick to back]

The second query finishes much faster because we have a much smaller amount of data to read, period. If we had a where clause that allowed our index to seek to a specific chunk of data, we could have done even less work. This shouldn’t surprise anyone, though. Reading 450MB is faster than reading 120,561MB.

This is not a math test.
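
If you want to see how big each index really is before leaning on numbers like that, a quick sketch:

SELECT
    index_name = i.name,
    mb = SUM(ps.used_page_count) * 8. / 1024.
FROM sys.dm_db_partition_stats AS ps
JOIN sys.indexes AS i
    ON  i.object_id = ps.object_id
    AND i.index_id = ps.index_id
WHERE ps.object_id = OBJECT_ID(N'dbo.Posts')
GROUP BY i.name;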

Coasting


Starting queries out with an empty buffer pool doesn’t really offer any insight into whether you’ve tuned the query. It only exaggerates a difference that usually isn’t a reality.

It is a useful tool if you want to prove that:

  • You need more memory
  • You need better storage
  • You need a different index

But I sure wouldn’t use it to prove that I made a query better.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

Residual Predicates In SQL Server Query Plans

We Will Talk About Things And Have Fun Now


Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

USE StackOverflow;

EXEC dbo.DropIndexes; --helper procedure that clears out nonclustered indexes

/*
CREATE INDEX east 
    ON dbo.Posts
        (PostTypeId, Score, OwnerUserId) 
WITH ( MAXDOP = 8, 
       SORT_IN_TEMPDB = ON, 
       DATA_COMPRESSION = ROW );
*/

DROP TABLE IF EXISTS #t;
GO 

SELECT   
    u.Id,
    u.Reputation,
    u.DisplayName,
    p.Id AS PostId,
    p.Title
INTO #t
FROM dbo.Users AS u
JOIN dbo.Posts AS p
    ON p.OwnerUserId = u.Id
WHERE u.Reputation >= 1000
AND   p.PostTypeId = 1
AND   p.Score >= 1000 
ORDER BY u.Reputation DESC;



/*
CREATE INDEX east 
    ON dbo.Posts(PostTypeId, Score, OwnerUserId);
*/
SELECT 
    t.Id, 
    t.Reputation, 
    ( 
        SELECT 
            MAX(p.Score) 
        FROM dbo.Posts AS p 
        WHERE p.OwnerUserId = t.Id 
        AND   p.PostTypeId IN (1, 2) 
    ) AS TopPostScore,
    t.PostId, 
    t.Title
FROM #t AS t
ORDER BY t.Reputation DESC;


/*
Usually I love replacing select 
list subqueries with APPLY

Just show the saved plan here
*/
SELECT 
    t.Id, 
    t.Reputation, 
    pq.Score,
    t.PostId, 
    t.Title
FROM #t AS t
OUTER APPLY --We have to use outer apply to not restrict results!
(
    SELECT 
        MAX(p.Score) AS Score
    FROM dbo.Posts AS p 
    WHERE p.OwnerUserId = t.Id 
    AND   p.PostTypeId IN (1, 2)
) AS pq
ORDER BY t.Reputation DESC;


/*
TOP (1) also spools
*/
SELECT 
    t.Id, 
    t.Reputation, 
    ( 
        SELECT TOP (1) 
            p.Score
        FROM dbo.Posts AS p
        WHERE p.PostTypeId IN (1, 2)
        AND   p.OwnerUserId = t.Id
        ORDER BY p.Score DESC 
    ) AS TopPostScore,
    t.PostId, 
    t.Title
FROM #t AS t
ORDER BY t.Reputation DESC;

SELECT 
    t.Id, 
    t.Reputation, 
    pq.Score,
    t.PostId, 
    t.Title
FROM #t AS t
OUTER APPLY
(
    SELECT TOP (1) 
        p.Score
    FROM dbo.Posts AS p
    WHERE p.PostTypeId IN (1, 2)
    AND   p.OwnerUserId = t.Id
    ORDER BY p.Score DESC
) AS pq
ORDER BY t.Reputation DESC;


/*
CREATE INDEX east 
    ON dbo.Posts(PostTypeId, Score, OwnerUserId);
*/
SELECT 
    t.Id, 
    t.Reputation, 
    pq.Score,
    t.PostId, 
    t.Title
FROM #t AS t
OUTER APPLY --This one is fast
(
    SELECT TOP (1) 
        p.Score
    FROM dbo.Posts AS p
    WHERE p.PostTypeId = 1
    AND   p.OwnerUserId = t.Id
    ORDER BY p.Score DESC
) AS pq
ORDER BY t.Reputation DESC;

SELECT 
    t.Id, 
    t.Reputation, 
    pa.Score,
    t.PostId, 
    t.Title
FROM #t AS t
OUTER APPLY --This two is slow...
(
    SELECT TOP (1) 
        p.Score
    FROM dbo.Posts AS p
    WHERE p.PostTypeId = 2
    AND   p.OwnerUserId = t.Id
    ORDER BY p.Score DESC
) AS pa
ORDER BY t.Reputation DESC;


/*
Use the Score!
*/
SELECT 
    t.Id, 
    t.Reputation, 
    ISNULL(pa.Score, pq.Score) AS TopPostScore,
    t.PostId, 
    t.Title
FROM #t AS t
OUTER APPLY --This one is fast
(
    SELECT TOP (1) 
        p.Score --Let's get the top score here
    FROM dbo.Posts AS p
    WHERE p.PostTypeId = 1
    AND   p.OwnerUserId = t.Id
    ORDER BY p.Score DESC
) AS pq
OUTER APPLY --This two is slow...
(
    SELECT TOP (1) 
        p.Score
    FROM dbo.Posts AS p
    WHERE p.PostTypeId = 2
    AND   p.OwnerUserId = t.Id
    AND   pq.Score < p.Score --Then use it as a filter down here
    ORDER BY p.Score DESC
) AS pa
ORDER BY t.Reputation DESC;


SELECT 
    t.Id, 
    t.Reputation, 
    ISNULL(pq.Score, 0) AS Score,
    t.PostId, 
    t.Title
INTO #t2
FROM #t AS t
OUTER APPLY --This one is fast
(
    SELECT TOP (1) 
        p.Score --Let's get the top score here
    FROM dbo.Posts AS p
    WHERE p.PostTypeId = 1
    AND   p.OwnerUserId = t.Id
    ORDER BY p.Score DESC
) AS pq;


SELECT 
    t.Id, 
    t.Reputation, 
    ISNULL(pa.Score, t.Score) AS TopPostScore, 
    t.PostId, 
    t.Title
FROM #t2 AS t
OUTER APPLY 
(
    SELECT TOP (1) 
        p.Score
    FROM dbo.Posts AS p
    WHERE p.PostTypeId = 2
    AND   p.OwnerUserId = t.Id
    AND   t.Score < p.Score --Then use it as a filter down here
    ORDER BY p.Score DESC
) AS pa
ORDER BY t.Reputation DESC;



/*
What happened?
 * Index key column order
   * (PostTypeId, Score, OwnerUserId)

Other things we could try:
 * Shuffling index key order, or creating a new index
   * (PostTypeId, OwnerUserId, Score)
 
 * Rewriting the query to use ROW_NUMBER() instead
  * Have to be really careful here, probably use Batch Mode

*/

/*
CREATE TABLE dbo.t
(
id int NOT NULL,
INDEX c CLUSTERED COLUMNSTORE
);
*/

SELECT 
    t.Id, 
    t.Reputation, 
    pa.Score,
    t.PostId, 
    t.Title
FROM #t AS t
LEFT JOIN dbo.t AS tt ON 1 = 0 --join to the empty columnstore table so the optimizer can use Batch Mode
OUTER APPLY
(
    SELECT 
        rn.*
    FROM 
    (
        SELECT
            p.*,
            ROW_NUMBER()
                OVER
                (
                    PARTITION BY 
                        p.OwnerUserId
                    ORDER BY
                        p.Score DESC
                ) AS n
        FROM dbo.Posts AS p
        WHERE p.PostTypeId IN (1, 2)
    ) AS rn
    WHERE rn.OwnerUserId = t.Id
    AND   rn.n = 1
) AS pa
ORDER BY t.Reputation DESC;


DROP TABLE #t, #t2;

 

Your Reputation = 1

Untechnical


The title might look like it’s technical, since usually when I’m talking about Reputation = 1, I’m talking about querying the Stack Overflow database.

This post isn’t about that Reputation, it’s about your Reputation, and hopefully how you choose to protect it while you exist, and after you not exist.

This also isn’t a horn-tooting post. If you think I am horn-tooting, refer back to this sentence. There is, likewise, no high-horsing here. There are no horses, high or otherwise.

I try to teach people things here. This is just another thing.

Hazy Shade Of Writer


Late last year I agreed to write a book. It came to my attention after the fact that the publisher had recently released a work by someone who, several years ago, was found to have exhibited a pattern of untoward behavior with female members of the SQL Server community. I brought this up, and they were dismissive of my concerns on the matter.

I’m stopping short of adding any further details, because I don’t want anyone who has experienced trauma from this to experience more. Quite the opposite.

After that interaction, I withdrew from the project. There are plenty of ways for me to help people learn about SQL Server that don’t make profits for companies that turn a blind eye to that sort of behavior.

Continuing would have meant I turn a blind eye to it, and that’s not how I want to be known by anyone, especially people who are far more likely to be victimized than I am. I’m your friend, too.

I’m not pretending that I have the strongest morals and ethics, or that I’m a paragon of virtue, but continuing with the project just didn’t sit right with me.

After all, from where I sit in my office, I can hear my two daughters playing.

Fighting.

Playing again.

Not sure.

Something in between.

They’ll figure it out.

Choice Cuts


It’s easy to build a bad reputation, quickly or slowly. Anyone can do it. It’s hard to build and keep a good reputation.

When choosing who to be involved with, it’s important to consider who they’re involved with. I don’t mean that you have to blacklist people who you disagree with (that can be a dangerous personal precedent to set), but do think carefully about the kind of behavior you’re not okay with.

If you see that behavior, you’ve probably found someone you don’t want to associate with. If you see someone condoning that behavior, you’ve probably found two.

Redemption needs to exist, otherwise there would be no point in trying to learn and change. I’ve learned some tough lessons in life and changed because of them. Many people I know have, too. It’s generally a good thing, lest we all remain babbling and unwashed.

Not that that’s a bad way to spend Saturday night, but I digress.

People also need to be able to make mistakes and come back from them. There are good ways to do that: taking ownership, apologizing, etc. I’d never give sincerity the cold shoulder.

Lunch Alone


This doesn’t just go for when it’s easy to make the right choice. One reason there is no horn nor horse for me here is that it was a very easy decision to make.

But that does make it an easy lesson to teach, hopefully. Even if no one ever said a word about the entanglement, I would have felt wrong about it. If you don’t have a good reputation with yourself, you probably won’t have one with other people.

When you hit those moments in your life, and in your career, follow your instincts. I realize that it’s hard when a project promises some esteem, or monetary gain when monetary gains are scarce, but in the long run it’s worth it.

The least regret is no regret.

Or regerts, as a wise tattoo artist once inscribed.

Thanks for reading!

Keep Away From Runnable Queues

Re-Run


There are a lot of posts about THREADPOOL on the internet, but what happens before your server threads go into the red?

You might see other signs of CPU contention — lots of waits on stuff like the CX waits, or SOS_SCHEDULER_YIELD.

But there are other signs, too, and they can show up in the lifecycle of a query.

Zombie Skeleton


The main states a query can be in are:

  • Running: Chugging happily along
  • Runnable: Waiting for a CPU
  • Suspended: Waiting on something else (locks, pages, whatever)
  • Sleeping: Should probably disconnect maybe, I guess

If you’ve got a lot of queries that are runnable, they’re ready to run, but not getting CPU time.

There are a ton of reasons why you might see this rack up, like:

  • Too few CPUs
  • Totally untuned queries
  • VM issues like noisy neighbors, CPU limits, etc.
  • RBAR-style code

Good News!


Checks for this are coming to sp_BlitzFirst and sp_PressureDetector.

If more than 25% of your queries are runnable, we’ll warn you about it. What you do with that information is up to your skilled hands.
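
Until those checks ship, here’s a minimal sketch of the same idea: what share of active requests are sitting in the runnable queue right now?

SELECT
    total_requests = COUNT_BIG(*),
    runnable_requests =
        SUM(CASE WHEN der.status = N'runnable' THEN 1 ELSE 0 END),
    runnable_percent =
        100. * SUM(CASE WHEN der.status = N'runnable' THEN 1 ELSE 0 END)
             / NULLIF(COUNT_BIG(*), 0)
FROM sys.dm_exec_requests AS der
WHERE der.session_id > 50 --skip most system sessions
AND   der.session_id <> @@SPID;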

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.