DROPCLEANBUFFERS: A Better Test Of Storage Than Query Performance For SQL Server

Relics


When I’m tuning queries, people often ask me what metrics I look at to judge efficiency. Usually, it just comes down to getting things done faster.

Sometimes it’s okay to use more CPU via a parallel plan to get your query faster. Sometimes it’s okay to do more reads to get a query faster.

Sure, it’s cool when it works out that you can reduce resources overall, but every query is special. It all sort of depends on where the bottleneck is.

One thing I’ve been asked several times is how important it is to clear out the plan cache and drop clean buffers between runs.

While this post is only about dropping clean buffers, let’s quickly touch on clearing the plan cache.

Dusted


Clearing out the plan cache (or recompiling, or whatever) is rarely an effective query tuning mechanism, unless you’re working on a parameter sniffing issue, or trying to prove that something else about a query is causing a problem. Maybe it’s local variables, maybe it’s a bad estimate from a table variable.

You get the point.

But starting with a new plan every time is overblown — if you change things like indexes or the way the query is written, you’re gonna get a new plan anyway.

If you’re worried about long compile times, you might also want to do this to prove it’s not necessarily the query that’s slow.
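If you do want a fresh plan for just one query, you don’t have to nuke the whole cache, either. Here’s a rough sketch of evicting a single plan (the LIKE filter is just a made-up example):

--Find the plan handle for the query you're testing
SELECT
    decp.plan_handle,
    dest.text
FROM sys.dm_exec_cached_plans AS decp
CROSS APPLY sys.dm_exec_sql_text(decp.plan_handle) AS dest
WHERE dest.text LIKE N'%some unique query text%';

--Feed the handle in to clear just that one plan
--DBCC FREEPROCCACHE (0x0600...); /*your plan handle here*/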

Busted


Let’s look at a big picture. The script that generates this picture is as follows:

--Calgon
DBCC DROPCLEANBUFFERS;

--o boy
SELECT
    COUNT_BIG(*) AS records
FROM dbo.Posts AS p;

--table manners
SELECT
    COUNT_BIG(*) AS records
FROM dbo.Posts AS p;

--so considerate
CREATE INDEX p ON dbo.Posts(Id);

--y tho?
DBCC DROPCLEANBUFFERS;

--woah woah woah
SELECT
    COUNT_BIG(*) AS records
FROM dbo.Posts AS p;

--hey handsome
SELECT
    COUNT_BIG(*) AS records
FROM dbo.Posts AS p;

We’re gonna drop them bufferinos, run the same count query twice, add a real narrow index, then count twice again.

Great. Great. Great.

[Screenshot: SQL Server query plan, "different lifetime"]

Annalieses


For the first two executions, we “performance tuned” the query by about 30 seconds, just by… reading data from memory.

Hm. Okay. Unless you’re trying to prove that you don’t have enough memory, or that storage sucks, you’re not really convincing me of much.

Yes, RAM is faster than disk. Now what?

For the second two executions, query performance got way better. But reading the smaller index from disk hardly changed overall execution time.

If getting a query from 14 seconds down to half a second with a better index isn’t a strong enough argument that you need the index, you might be working for difficult people.

Of course, Size Matters™

[Screenshot: "brick to back"]

The second query finishes much faster because we have a much smaller amount of data to read, period. If we had a where clause that allowed our index to seek to a specific chunk of data, we could have done even less work. This shouldn’t surprise anyone, though. Reading 450MB is faster than reading 120,561MB.

This is not a math test.
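If you’re curious how much of a table is actually sitting in the buffer pool between runs, here’s a rough sketch using sys.dm_os_buffer_descriptors. It only counts in-row and row-overflow pages, and it can take a while on servers with lots of memory:

SELECT
    OBJECT_NAME(p.object_id) AS table_name,
    COUNT_BIG(*) * 8 / 1024 AS cached_mb
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.allocation_units AS au
    ON au.allocation_unit_id = bd.allocation_unit_id
JOIN sys.partitions AS p
    ON p.hobt_id = au.container_id
    AND au.type IN (1, 3) --in-row and row-overflow pages
WHERE bd.database_id = DB_ID()
GROUP BY p.object_id
ORDER BY cached_mb DESC;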

Coasting


Starting queries out with an empty buffer pool doesn’t really offer any insight into whether you’ve tuned the query. It only exaggerates a difference that usually isn’t reality.

It is a useful tool if you want to prove that:

  • You need more memory
  • You need better storage
  • You need a different index

But I sure wouldn’t use it to prove that I made a query better.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

Residual Predicates In SQL Server Query Plans

We Will Talk About Things And Have Fun Now



USE StackOverflow;

EXEC dbo.DropIndexes; 

/*
CREATE INDEX east 
    ON dbo.Posts
        (PostTypeId, Score, OwnerUserId) 
WITH ( MAXDOP = 8, 
       SORT_IN_TEMPDB = ON, 
       DATA_COMPRESSION = ROW );
*/

DROP TABLE IF EXISTS #t;
GO 

SELECT   
    u.Id,
    u.Reputation,
    u.DisplayName,
    p.Id AS PostId,
    p.Title
INTO #t
FROM dbo.Users AS u
JOIN dbo.Posts AS p
    ON p.OwnerUserId = u.Id
WHERE u.Reputation >= 1000
AND   p.PostTypeId = 1
AND   p.Score >= 1000 
ORDER BY u.Reputation DESC;



/*
CREATE INDEX east 
    ON dbo.Posts(PostTypeId, Score, OwnerUserId);
*/
SELECT 
    t.Id, 
    t.Reputation, 
    ( 
        SELECT 
            MAX(p.Score) 
        FROM dbo.Posts AS p 
        WHERE p.OwnerUserId = t.Id 
        AND   p.PostTypeId IN (1, 2) 
    ) AS TopPostScore,
    t.PostId, 
    t.Title
FROM #t AS t
ORDER BY t.Reputation DESC;


/*
Usually I love replacing select 
list subqueries with APPLY

Just show the saved plan here
*/
SELECT 
    t.Id, 
    t.Reputation, 
    pq.Score,
    t.PostId, 
    t.Title
FROM #t AS t
OUTER APPLY --We have to use outer apply to not restrict results!
(
    SELECT 
        MAX(p.Score) AS Score
    FROM dbo.Posts AS p 
    WHERE p.OwnerUserId = t.Id 
    AND   p.PostTypeId IN (1, 2)
) AS pq
ORDER BY t.Reputation DESC;


/*
TOP (1) also spools
*/
SELECT 
    t.Id, 
    t.Reputation, 
    ( 
        SELECT TOP (1) 
            p.Score
        FROM dbo.Posts AS p
        WHERE p.PostTypeId IN (1, 2)
        AND   p.OwnerUserId = t.Id
        ORDER BY p.Score DESC 
    ) AS TopPostScore,
    t.PostId, 
    t.Title
FROM #t AS t
ORDER BY t.Reputation DESC;

SELECT 
    t.Id, 
    t.Reputation, 
    pq.Score,
    t.PostId, 
    t.Title
FROM #t AS t
OUTER APPLY
(
    SELECT TOP (1) 
        p.Score
    FROM dbo.Posts AS p
    WHERE p.PostTypeId IN (1, 2)
    AND   p.OwnerUserId = t.Id
    ORDER BY p.Score DESC
) AS pq
ORDER BY t.Reputation DESC;


/*
CREATE INDEX east 
    ON dbo.Posts(PostTypeId, Score, OwnerUserId);
*/
SELECT 
    t.Id, 
    t.Reputation, 
    pq.Score,
    t.PostId, 
    t.Title
FROM #t AS t
OUTER APPLY --This one is fast
(
    SELECT TOP (1) 
        p.Score
    FROM dbo.Posts AS p
    WHERE p.PostTypeId = 1
    AND   p.OwnerUserId = t.Id
    ORDER BY p.Score DESC
) AS pq
ORDER BY t.Reputation DESC;

SELECT 
    t.Id, 
    t.Reputation, 
    pa.Score,
    t.PostId, 
    t.Title
FROM #t AS t
OUTER APPLY --This two is slow...
(
    SELECT TOP (1) 
        p.Score
    FROM dbo.Posts AS p
    WHERE p.PostTypeId = 2
    AND   p.OwnerUserId = t.Id
    ORDER BY p.Score DESC
) AS pa
ORDER BY t.Reputation DESC;


/*
Use the Score!
*/
SELECT 
    t.Id, 
    t.Reputation, 
    ISNULL(pa.Score, pq.Score) AS TopPostScore,
    t.PostId, 
    t.Title
FROM #t AS t
OUTER APPLY --This one is fast
(
    SELECT TOP (1) 
        p.Score --Let's get the top score here
    FROM dbo.Posts AS p
    WHERE p.PostTypeId = 1
    AND   p.OwnerUserId = t.Id
    ORDER BY p.Score DESC
) AS pq
OUTER APPLY --This two is slow...
(
    SELECT TOP (1) 
        p.Score
    FROM dbo.Posts AS p
    WHERE p.PostTypeId = 2
    AND   p.OwnerUserId = t.Id
    AND   pq.Score < p.Score --Then use it as a filter down here
    ORDER BY p.Score DESC
) AS pa
ORDER BY t.Reputation DESC;


SELECT 
    t.Id, 
    t.Reputation, 
    ISNULL(pq.Score, 0) AS Score,
    t.PostId, 
    t.Title
INTO #t2
FROM #t AS t
OUTER APPLY --This one is fast
(
    SELECT TOP (1) 
        p.Score --Let's get the top score here
    FROM dbo.Posts AS p
    WHERE p.PostTypeId = 1
    AND   p.OwnerUserId = t.Id
    ORDER BY p.Score DESC
) AS pq;


SELECT 
    t.Id, 
    t.Reputation, 
    ISNULL(pa.Score, t.Score) AS TopPostScore, 
    t.PostId, 
    t.Title
FROM #t2 AS t
OUTER APPLY 
(
    SELECT TOP (1) 
        p.Score
    FROM dbo.Posts AS p
    WHERE p.PostTypeId = 2
    AND   p.OwnerUserId = t.Id
    AND   t.Score < p.Score --Then use it as a filter down here
    ORDER BY p.Score DESC
) AS pa
ORDER BY t.Reputation DESC;



/*
What happened?
 * Index key column order
   * (PostTypeId, Score, OwnerUserId)

Other things we could try:
 * Shuffling index key order, or creating a new index
   * (PostTypeId, OwnerUserId, Score)
 
 * Rewriting the query to use ROW_NUMBER() instead
  * Have to be really careful here, probably use Batch Mode

*/

/*
CREATE TABLE dbo.t
(
id int NOT NULL,
INDEX c CLUSTERED COLUMNSTORE
);
*/

SELECT 
    t.Id, 
    t.Reputation, 
    pa.Score,
    t.PostId, 
    t.Title
FROM #t AS t
LEFT JOIN dbo.t AS tt ON 1 = 0 --left join to the empty columnstore table to unlock Batch Mode
OUTER APPLY
(
    SELECT 
        rn.*
    FROM 
    (
        SELECT
            p.*,
            ROW_NUMBER()
                OVER
                (
                    PARTITION BY 
                        p.OwnerUserId
                    ORDER BY
                        p.Score DESC
                ) AS n
        FROM dbo.Posts AS p
        WHERE p.PostTypeId IN (1, 2)
    ) AS rn
    WHERE rn.OwnerUserId = t.Id
    AND   rn.n = 1
) AS pa
ORDER BY t.Reputation DESC;


DROP TABLE #t, #t2;

 

Keep Away From Runnable Queues

Re-Run


There are a lot of posts about THREADPOOL on the internet, but what happens before your server threads go into the red?

You might see other signs of CPU contention — lots of waits on stuff like the CX waits, or SOS_SCHEDULER_YIELD.

But there are other signs, too, and they can show up in the lifecycle of a query.

Zombie Skeleton


The main states a query can be in are:

  • Running: Chugging happily along
  • Runnable: Waiting for a CPU
  • Suspended: Waiting on something else (locks, pages, whatever)
  • Sleeping: Should probably disconnect maybe, I guess

If you’ve got a lot of queries that are runnable, they’re ready to run, but not getting CPU time.

There are a ton of reasons why you might see this rack up, like:

  • Too few CPUs
  • Totally untuned queries
  • VM issues like noisy neighbors, CPU limits, etc.
  • RBAR-style code

Good News!


Checks for this are coming to sp_BlitzFirst and sp_PressureDetector.

If more than 25% of your queries are runnable, we’ll warn you about it. What you do with that information is up to your skilled hands.
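If you want to eyeball this yourself before those checks ship, here’s a minimal sketch against sys.dm_exec_requests. The session_id filter is a rough assumption to skip most system sessions:

--What percent of current requests are runnable?
SELECT
    SUM(CASE WHEN der.status = N'runnable' THEN 1 ELSE 0 END) AS runnable_requests,
    COUNT_BIG(*) AS total_requests,
    CONVERT(decimal(5, 2),
        100. * SUM(CASE WHEN der.status = N'runnable' THEN 1 ELSE 0 END)
             / COUNT_BIG(*)) AS runnable_percent
FROM sys.dm_exec_requests AS der
WHERE der.session_id > 50;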

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

Signs Your Schema Is Hurting SQL Server Performance

To The Point


If you need to search for:

  • Leading wildcards
  • Substrings
  • Charindexes
  • Patindexes
  • Replaces
  • Lefts
  • Rights
  • Concats
  • Any combination of TRIMs
  • Columns with prefixed values
  • Columns with suffixed values
  • Concatenated columns
  • String split columns

Something is broken in the way that you store data.

You’re overloading things, and you’re going to hit big performance problems when your database grows past puberty.
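To pick one example from that list, here’s the kind of thing I mean, with a made-up table and column. The leading wildcard means no index on the column can seek:

--Non-SARGable: every row gets checked
SELECT
    c.Id
FROM dbo.Customers AS c
WHERE c.AccountNumber LIKE N'%-1234';

--One common fix: persist a reversed copy of the column,
--index it, and turn the suffix search into a prefix search
--WHERE c.AccountNumberReversed LIKE REVERSE(N'-1234') + N'%';

The fix isn’t clever querying; it’s storing the data in a way that matches how you search it.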

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

Stressing tempdb and Observing Contention In SQL Server

As Recently As Now


Even on SQL Server 2019, with in-memory tempdb metadata enabled, and an appropriate number of evenly sized data files, you can experience certain types of contention in tempdb.

It’s better. It’s definitely and totally better, but it’s still there. With that in mind, I wrote a stored procedure that you can stick in your favorite stress tool, to see how tempdb handles different numbers of concurrent sessions. You can download it here, on GitHub.

If you need a tool to run a bunch of concurrent sessions against SQL Server, my favorite two free ones are:

If you need tools to help you identify tempdb contention, the ones I use are sp_WhoIsActive and sp_HumanEvents:

Strikey


While I’m running one of those stress tools, this is how I use the stored procedures above to look for contention.

For sp_WhoIsActive, it’s really simple:

EXEC sp_WhoIsActive @get_task_info = 2;

The results will start to look like this when contention heats up. Again, things are a lot better now, but it can still happen.

[Screenshot: sp_WhoIsActive output, "hello… you"]

For sp_HumanEvents, it’s still pretty simple:

EXEC dbo.sp_HumanEvents 
    @event_type = 'waits', 
    @seconds_sample = 10, 
    @wait_type = N'PAGELATCH_UP, PAGELATCH_EX, PAGELATCH_SH';

Since I want to specifically look for waits that indicate tempdb is mixing it up, I’ll put the most common PAGELATCH waits in.

[Screenshot: SQL Server query results, "cheesey plate"]

The output here is admittedly a bit truncated, because of limitations with the ring buffer extended event target. But, you know, I think we can see enough.

If You See Contention


Check the basics first.

  • You might need more data files
  • Data files might be unevenly sized (there’s a quick check below)
  • If you’re on < SQL Server 2016, you might need trace flags 1117 and 1118
  • You might have a bunch of other stuff hemming up tempdb, too
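Checking file counts and sizes is quick, if you want a starting point. Sizes in sys.database_files are reported in 8KB pages, hence the math:

SELECT
    df.name,
    df.size * 8 / 1024 AS size_mb,
    df.type_desc
FROM tempdb.sys.database_files AS df
WHERE df.type_desc = N'ROWS'
ORDER BY df.name;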

Check out this video for some other things that can cause problems too.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

SQL Server Filtered Indexes Need Serious Help

Minorly


Filtered indexes are really interesting things. Just slap a where clause on your index definition, and you can do all sorts of helpful stuff:

  • Isolate hot data
  • Make soft delete queries faster
  • Get a histogram specific to the span of data you care about

Among other things, of course. There are some annoying things about them though.

  • They only work with specific ANSI options (listed just below)
  • If you don’t include the filter definition columns in the index, it might not get used
  • They only work when queries use literals, not parameters or variables
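On the ANSI options point, these are the SET options that have to be in this exact state when you create the index, and when you run queries that modify the table:

SET ANSI_NULLS ON;
SET ANSI_PADDING ON;
SET ANSI_WARNINGS ON;
SET ARITHABORT ON;
SET CONCAT_NULL_YIELDS_NULL ON;
SET QUOTED_IDENTIFIER ON;
SET NUMERIC_ROUNDABORT OFF;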

Majorly


Part of the optimizer’s process consists of expression matching, where things like computed columns, filtered indexes, and indexed views are considered for use in your query.

I mean, if you have any of them. If you don’t, it probably just stares inwardly for a few nanoseconds, wondering why you don’t care about it.

Something that this part of the process is terrible at is any sort of “advanced” expression matching. It has to be exact, or you get whacked.

Here’s an example:

DROP TABLE IF EXISTS dbo.is_deleted;

CREATE TABLE dbo.is_deleted
(
    id int PRIMARY KEY,
    dt datetime NOT NULL,
    thing1 varchar(50) NOT NULL,
    thing2 varchar(50) NOT NULL,
    is_deleted bit NOT NULL
);

INSERT dbo.is_deleted WITH(TABLOCK)
(
    id,
    dt,
    thing1,
    thing2,
    is_deleted
)
SELECT 
    x.n,
    DATEADD(MINUTE, x.n, GETDATE()),
    SUBSTRING(x.text, 0, 50),
    SUBSTRING(x.text, 0, 50),
    x.n % 2
FROM (
SELECT 
    ROW_NUMBER() OVER 
    (
        ORDER BY 1/0 --dummy order; never actually evaluated
    ) AS n,
    m.*
FROM sys.messages AS m
) AS x;

CREATE INDEX isd 
ON dbo.is_deleted
    (dt)
INCLUDE
    (is_deleted)
WHERE 
    (is_deleted = 0);

Overly


If you run that setup script, you’ll get yourself a table that’s ripe for a filtered index on the is_deleted column.

But it doesn’t work with every type of query pattern. Some people are super fancy and want to find NOTs!

SELECT 
    COUNT_BIG(*) AS records
FROM dbo.is_deleted AS id
WHERE id.dt >= GETDATE() + 200
AND   (NOT 1 = id.is_deleted)
AND   1 = (SELECT 1);

I have the 1 = (SELECT 1) in there for reasons. But we still get no satisfaction.

[Screenshot: SQL Server query plan, "hurtin"]

If we try to force the matter, we’ll get an error!

SELECT 
    COUNT_BIG(*) AS records
FROM dbo.is_deleted AS id WITH(INDEX = isd)
WHERE id.dt >= GETDATE() + 200
AND   (NOT 1 = id.is_deleted)
AND   1 = (SELECT 1);

The optimizer says non.

Msg 8622, Level 16, State 1, Line 84
Query processor could not produce a query plan because of the hints defined in this query. 
Resubmit the query without specifying any hints and without using SET FORCEPLAN.

It has no problem with this one, though.

SELECT
    COUNT_BIG(*) AS records
FROM dbo.is_deleted AS id
WHERE id.dt >= GETDATE() + 200
AND   (0 = id.is_deleted)
AND   1 = (SELECT 1);

Underly


It would be nice if there were some more work put into filtered indexes to make them generally more useful.

In much the same way that a general set of contradictions can be detected, simple things like this could be too.

Computed columns have a similar issue, where if the definition is col1 + col2, a query looking at col2 + col1 won’t pick it up.
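Here’s a quick sketch of that, with a made-up table:

--Hypothetical table with a computed column
CREATE TABLE dbo.ledger
(
    id int PRIMARY KEY,
    col1 int NOT NULL,
    col2 int NOT NULL,
    total AS (col1 + col2)
);

--This predicate can get matched to the computed column...
SELECT l.id FROM dbo.ledger AS l WHERE l.col1 + l.col2 > 10;

--...but this logically identical one won't
SELECT l.id FROM dbo.ledger AS l WHERE l.col2 + l.col1 > 10;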

It’s a darn shame that such potentially powerful tools don’t get much love.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

ISNULL, COALESCE, And Performance In SQL Server Queries

ANSI Blandard


Sometimes there are very good reasons to use either coalesce or isnull, owing to them having different capabilities, behaviors, and support across databases.

But isnull has some particular capabilities that are interesting, despite its limitations: only two arguments, specific to SQL Server, and uh… well, we can’t always get three reasons, as a wise man once said.

There is one thing that makes isnull interesting in certain scenarios. Let’s look at a couple.

Green Easy


First, we’re gonna need an index.

CREATE INDEX party 
ON dbo.Votes
    (CreationDate, VoteTypeId) 
INCLUDE
    (UserId);

Yep, we’ve got an index. Survival of the fittest.

Here are some queries to go along with it. Wouldn’t want our index getting lonely, I suppose.

All that disk entropy is probably scary enough.

SELECT TOP (10) 
    u.DisplayName
FROM dbo.Users AS u 
WHERE NOT EXISTS
( 
    SELECT 
        1/0 
    FROM dbo.Votes AS v 
    WHERE v.UserId = u.Id 
    AND v.VoteTypeId IN (1, 2, 3)
    AND ISNULL(v.CreationDate, '19000101') > '20131201' 
)
ORDER BY u.CreationDate DESC;

SELECT TOP (10) 
    u.DisplayName
FROM dbo.Users AS u 
WHERE NOT EXISTS
( 
    SELECT 
        1/0 
    FROM dbo.Votes AS v 
    WHERE v.UserId = u.Id 
    AND v.VoteTypeId IN (1, 2, 3)
    AND COALESCE(v.CreationDate, '19000101') > '20131201' 
)
ORDER BY u.CreationDate DESC;

Pager Back


The first query uses isnull, and the second query uses coalesce. Just in case that wasn’t obvious.

I know, I know — I’ve spent a long time over here telling you not to use isnull in your where clause, lest ye suffer the greatest shame to exist, short of re-gifting to the original gift giver.

Usually, when you wrap a column in a function like that, bad things happen. Seeks turn into Scans, wine turns into water, spaces turn into tabs, the face you remember from last call turns into a November Jack O’Lantern.

But in this case, the column wrapped in our where clause, which is the leading column of the index, is not nullable.
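If you’re ever unsure which columns are nullable, a quick look at sys.columns settles it:

SELECT
    c.name,
    c.is_nullable
FROM sys.columns AS c
WHERE c.object_id = OBJECT_ID(N'dbo.Votes')
ORDER BY c.column_id;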

SQL Server’s optimizer, having its act together, can figure this out and produce an Index Seek plan.

[Screenshot: SQL Server query plan, "you’ll live forever"]

The null check is discarded, and we end up with a Seek to the CreationDate values we care about, and a Residual Predicate on VoteTypeId.

[Screenshot: SQL Server query plan tool tip, "goodness"]

Big Famous


The second query, the one that uses coalesce, has a few things different about it. Let’s cut to the plan.

[Screenshot: SQL Server query plan, "butcher"]

Rather than 157ms, this query runs for a minute and five seconds. All of the time is spent in the Top > Index Scan. We no longer get an Index Seek, either.

[Screenshot: SQL Server query plan tool tip, "burdened"]

Notice that the predicate on CreationDate is a full-on case expression, checking for null-ness. This could be an okay scenario if we had something to Seek to, but without proper indexing and properly written queries, it’s el disastero.
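If you’ve never seen it spelled out, this is roughly the difference, since coalesce is defined as shorthand for a case expression, while isnull on a NOT NULL column can be simplified away entirely:

--What the coalesce predicate effectively becomes:
SELECT COUNT_BIG(*) AS records
FROM dbo.Votes AS v
WHERE CASE WHEN v.CreationDate IS NOT NULL 
           THEN v.CreationDate 
           ELSE '19000101' 
      END > '20131201';

--What the isnull predicate simplifies down to:
SELECT COUNT_BIG(*) AS records
FROM dbo.Votes AS v
WHERE v.CreationDate > '20131201';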

The query plan changes because the optimizer decides that a row goal would make things better. This is why we have a Nested Loops Join, and the Top > Index Scan. It doesn’t work out very well.

This isn’t the only time you might see this, but it’s probably the worst.

SUBTLERY


You can also see this with a pattern I often advocate against, using a Left Join to find rows that don’t exist:

SELECT TOP (10) 
    u.DisplayName
FROM dbo.Users AS u
LEFT JOIN dbo.Votes AS v
    ON  v.UserId = u.Id
    AND v.VoteTypeId IN (1, 2, 3)
    AND ISNULL(v.CreationDate, '19000101') > '20131201'  
WHERE v.Id IS NULL
ORDER BY u.CreationDate DESC;

SELECT TOP (10) 
    u.DisplayName
FROM dbo.Users AS u
LEFT JOIN dbo.Votes AS v
    ON  v.UserId = u.Id
    AND v.VoteTypeId IN (1, 2, 3)
    AND COALESCE(v.CreationDate, '19000101') > '20131201'  
WHERE v.Id IS NULL
ORDER BY u.CreationDate DESC;

It’s not as bad here, but it’s still noticeable.

The plan with isnull looks about like so:

[Screenshot: SQL Server query plan, "you’re fast"]

At 163ms, there’s not a lot to complain about here.

The coalesce version does far worse, at just about 1.5 seconds.

[Screenshot: SQL Server query plan, "gasp!"]

We Learned Some Things


In SQL Server, using functions in where clauses is generally on the naughty list. In a narrow case, using the built-in isnull function results in better performance than coalesce on columns that are not nullable.

This pattern should generally be avoided, of course. On columns that are nullable, things can really go sideways in either case. It matters most when the function makes an otherwise possible Index Seek impossible, and we can only use an Index Scan to find rows.

An additional consideration is when we can Seek to a very selective set of rows first. Say we can get things down to (for the purposes of explanation only) around 1000 rows with a predicate like Score > 10000.

For the remaining 1000 rows, it’s not likely that an additional Predicate like the ones we saw today would have added any drama to the execution time of a relatively simple query.

They may, however, lead to poor cardinality estimates in more complicated queries.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

Recompile And Nested Procedures In SQL Server

Rock Sale


While I was answering a question, I had to revisit what happens when using different flavors of recompile hints with stored procedures that call inner stored procedures. I like when this happens, because there are so many little details I forget.

Anyway, the TL;DR is that if you have nested stored procedures, recompiling only recompiles the outer one. The inner procs — really, I should say modules, because this covers other objects that compile query plans — but hey. Now you know what I should have said.

If you want to play around with the tests, you’ll need to grab sp_BlitzCache. I’m too lazy to write plan cache queries from scratch.

Testament


The procs:

CREATE OR ALTER PROCEDURE dbo.inner_sp
AS
BEGIN

    SELECT
        COUNT_BIG(*) AS records
    FROM sys.master_files AS mf;
END;
GO 

CREATE OR ALTER PROCEDURE dbo.outer_sp
--WITH RECOMPILE /*toggle this to see different behavior*/
AS
BEGIN

    SELECT 
        COUNT_BIG(*) AS records
    FROM sys.databases AS d;
    
    EXEC dbo.inner_sp;

END;
GO

The tests:

--It's helpful to run this before each test to clear out clutter
DBCC FREEPROCCACHE;

--Look at this with and without 
--WITH RECOMPILE in the proc definition
EXEC dbo.outer_sp;

--Take out the proc-level recompile and run this
EXEC dbo.outer_sp WITH RECOMPILE;

--Take out the proc-level recompile and run this
EXEC sp_recompile 'dbo.outer_sp';
EXEC dbo.outer_sp;

--You should run these between each test to verify behavior
--If you just run them here at the end, you'll be disappointed
EXEC sp_BlitzCache 
    @DatabaseName = 'Crap', 
    @QueryFilter = 'procedure', 
    @SkipAnalysis = 1, 
    @HideSummary = 1;

EXEC sp_BlitzCache 
    @DatabaseName = 'Crap', 
    @QueryFilter = 'statement', 
    @SkipAnalysis = 1, 
    @HideSummary = 1;

Whatchalookinat?


After each of these where a recompile is applied, you should see the inner proc/statement in the BlitzCache results, but not the outer proc.

It’s important to understand behavior like this, because recompile hints are most often used to help investigate parameter sniffing issues. If the sniffing is taking place in nested stored procedure calls, you may find yourself with a bunch of extra work to do, or needing to re-focus your use of recompile hints.

Of course, this is why I much prefer OPTION (RECOMPILE) hints on problem statements. You get much more reliable behavior.
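For the inner proc from above, that looks about like this:

CREATE OR ALTER PROCEDURE dbo.inner_sp
AS
BEGIN
    SELECT
        COUNT_BIG(*) AS records
    FROM sys.master_files AS mf
    OPTION (RECOMPILE); --statement-level, recompiles no matter who calls it
END;
GO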

And, as Paul writes:

For instances running at least SQL Server 2008 build 2746 (Service Pack 1 with Cumulative Update 5), using OPTION (RECOMPILE) has another significant advantage over WITH RECOMPILE: Only OPTION (RECOMPILE) enables the Parameter Embedding Optimization.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

If You’re Using Under 20% CPU, You’re Wasting Money On SQL Server Licensing

Sensational


CPUs aren’t normally expensive, but ask them to run a query and the cost skyrockets. It doesn’t matter where you are in the world or in the cloud, prepare to get gouged.

You’d think SQL Server was an Hermes purse.

Life is a bit worse in the cloud, where CPU count and RAM amount are inextricably tied together. Sure, Azure offers constrained vCPU instances that help with that, but still.

Money is expensive, and it never goes on sale.

Slacker


If your CPU load stays consistently under 20%, and you don’t need a bunch of cushion for regularly scheduled tasks like ETL or ELT or LET me write a sentence before changing the acronym, then what’s the point of all those CPUs?

I’m not saying they have to run at 100% and you should be using them to air fry your Hot Pockets, but what’s wrong with running at 40-60%? That still leaves you a good bit of free ticks and cycles in case a parameter gets poorly sniffed or some other developer disaster befalls your server.

When a workload is pretty well-tuned, you should focus on right-sizing hardware rather than staring at your now-oversized hardware.

Bragging Rights


It’s often quite an achievement to say that you tuned this and that and got CPU down from 80% to 20%, but now what?

Can you give back some cores to the VM host? Consolidate workloads? Move to a smaller instance size?

Fashion models are great, but they’re not great role models for servers. CPUs should not be sitting around being expensive and bored.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

Implicit Transactions: Why Unrelated Queries Block Each Other In SQL Server

A Bit Sensational


I don’t really mean that unrelated queries block each other, but it sure does look like they do.

Implicit Transactions are really horrible surprises, but are unfortunately common to see in applications that use JDBC drivers to connect to SQL Server, and especially with applications that are capable of using other database platforms like Oracle as a back-end.

The good news is that in the latter case, support for using Read Committed Snapshot Isolation (RCSI) is there to alleviate a lot of your problems.

Problem, Child


Let’s start with what the problem looks like.

[Screenshot: sp_WhoIsActive blocking output, "awkward"]

A couple unrelated select queries are blocking each other. One grabbing rows from the Users table, one grabbing rows from the Posts table.

This shouldn’t be!

Even if you run sp_WhoIsActive in a way to try to capture more of the command text than is normally shown, the problem won’t be obvious.

sp_WhoIsActive 
    @get_locks = 1;
GO 

sp_WhoIsActive 
    @get_locks = 1,
    @get_full_inner_text = 1,
    @get_outer_command = 1;
GO

What Are Locks?


If we look at the details of the locks column from the output above, we’ll see that the session running the select against Users is holding locks on Posts:

<Database name="StackOverflow2013">
  <Locks>
    <Lock request_mode="S" request_status="GRANT" request_count="1" />
  </Locks>
  <Objects>
    <Object name="Posts" schema_name="dbo">
      <Locks>
        <Lock resource_type="KEY" index_name="PK_Posts_Id" request_mode="X" request_status="GRANT" request_count="1" />
        <Lock resource_type="OBJECT" request_mode="IX" request_status="GRANT" request_count="1" />
        <Lock resource_type="PAGE" page_type="*" index_name="PK_Posts_Id" request_mode="IX" request_status="GRANT" request_count="1" />
      </Locks>
    </Object>
  </Objects>
</Database>

We may also note something else quite curious about the output. The select from Users is sleeping, with an open transaction.

[Screenshot: sp_WhoIsActive blocking output, "melatonin"]

How It Happens


The easiest way to show you is with plain SQL commands, but often this is a side effect of your application connection string.

In one window, step through this:

--Run this and stop
SET IMPLICIT_TRANSACTIONS ON;

--Run this and stop
UPDATE p
    SET p.ClosedDate = SYSDATETIME()
FROM dbo.Posts AS p
WHERE p.Id = 11227809;

--Run this and stop
SELECT TOP (10000)
    u.*
FROM dbo.Users AS u
WHERE u.Reputation = 2
ORDER BY u.Reputation DESC;

--Don't run these last two until you look at sp_WhoIsActive
IF @@TRANCOUNT > 0 ROLLBACK;

SET IMPLICIT_TRANSACTIONS OFF;

In another window, run this:

--Run this and stop
SET IMPLICIT_TRANSACTIONS ON;

--Run this and stop
SELECT TOP (100)
    p.*
FROM dbo.Posts AS p
WHERE p.ParentId = 0
ORDER BY p.Score DESC;

--Don't run these last two until you look at sp_WhoIsActive
IF @@TRANCOUNT > 0 ROLLBACK;

SET IMPLICIT_TRANSACTIONS OFF;

How To Fix It


Optimistically:

If you’re using implicit transactions, and queries execute together, you won’t always see the full batch text. At best, the application will be written so that queries using implicit transactions will close out immediately. At worst, there will be some bug, or some weird connection pooling going on so that sessions never actually commit and release their locks.

Fortunately, using an optimistic isolation level alleviates the issue, since readers and writers don’t block each other. RCSI is the easiest for this situation usually, because Snapshot Isolation (SI) requires queries to request it specifically.

Of course, if you’re already issuing locking hints at the query level that enforce stricter isolation levels, like READCOMMITTEDLOCK, HOLDLOCK/SERIALIZABLE, or REPEATABLEREAD, RCSI won’t help. It will be overruled, unfortunately.
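If you do want to try RCSI, it’s a single change at the database level, though you’ll want to pick a moment when it’s okay to kick out in-flight transactions. A sketch, using the demo database from above:

--ROLLBACK IMMEDIATE rolls back open transactions; schedule accordingly
ALTER DATABASE StackOverflow2013 
SET READ_COMMITTED_SNAPSHOT ON 
WITH ROLLBACK IMMEDIATE;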

Programmatically:

You could very well be using this in your connection string by accident. If you have control over this sort of thing, change the gosh darn code to stop using it. You probably don’t need to be doing this, anyway. If for some reason you do require it, you probably need to dig a bit deeper.

Going a little deeper, there could also be issues with indexes, or with the queries that modify data, contributing to excess locking. Again, RCSI is a quick fix, and changing the connection string is a good idea if you can do it, but don’t ignore these long-term.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.