CHECKDB Bug With SQL Server 2017 And ColumnStore Indexes


Thanks for watching!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

SQL Server Index Key Column Order And Locking

Fall Guy


Let’s say we have a super important query. It’s not really important.

None of this is important.

SELECT   u.DisplayName, u.Reputation, u.CreationDate
FROM     dbo.Users AS u
WHERE    u.CreationDate >= DATEADD(DAY, DATEDIFF(DAY, 0, GETDATE()), 0)
AND      u.Reputation < 6
ORDER BY u.CreationDate DESC;

Maybe it’ll find users who created accounts in the last day who haven’t gotten any upvotes.

Shocking find, I know.

An okay index to help us find data and avoid sorting data would look like this:

CREATE INDEX ix_apathy 
    ON dbo.Users(CreationDate DESC, Reputation);

So now we know whose fault it is that we have this index, and we know who to blame when this happens.

Blocko


Along comes this update:

UPDATE u
SET u.LastAccessDate = GETDATE()
FROM dbo.Users AS u
WHERE u.Reputation = 147;

[Query plan screenshot: “Scanno”]
[sp_WhoIsActive locks screenshot: “Objectified”]

What’s going on here is that the optimizer chooses our narrower index to find data to update.

It’s helpful because we read far fewer pages than we would if we just scanned the clustered index, but with the Reputation column second in the key, we can’t seek to the rows we want.

The optimizer isn’t asking for a missing index here, either (okay, I don’t blame it for a query that runs in 145ms, but stick with me).

Switcheroo


If we change our index to have Reputation first, something nice happens.

To this query.

CREATE INDEX ix_whatever 
    ON dbo.Users(Reputation, CreationDate DESC);

[Query plan screenshot: “Sought”]
[sp_WhoIsActive locks screenshot: “Only Keys Now”]

With index order switched, we take more fine-grained locks, and we take them for a shorter period of time.
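
If you want to see the difference without sp_WhoIsActive, here’s a minimal sketch against sys.dm_tran_locks. Run it from another session while the update is executing; the session id is a placeholder:

SELECT dtl.resource_type, /* KEY with the new index; PAGE and OBJECT before */
       dtl.request_mode,
       dtl.request_status,
       COUNT_BIG(*) AS lock_count
FROM sys.dm_tran_locks AS dtl
WHERE dtl.request_session_id = 55 /* placeholder: the updating session's spid */
GROUP BY dtl.resource_type,
         dtl.request_mode,
         dtl.request_status;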

All That For This


If you have a locking problem, here’s what you should do:

  • Look at your modification queries that have WHERE clauses, and make sure they have the right indexes
  • Look at your modification queries that modify lots of rows, and try batching them (there’s a sketch of the pattern after this list)
  • If your modification queries are horror shows, see if you can separate them into parts
  • If your critical read and write queries are at odds with each other, look into an optimistic isolation level
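
For the batching bullet, here’s a minimal sketch of the pattern using the update from earlier. The LastAccessDate filter is my own assumption, there so each batch can skip rows it already touched:

DECLARE @rows INT = 1;

WHILE @rows > 0
BEGIN
    UPDATE TOP (5000) u
        SET u.LastAccessDate = GETDATE()
    FROM dbo.Users AS u
    WHERE u.Reputation = 147
    AND   u.LastAccessDate < CONVERT(DATE, GETDATE()); /* assumption: skip rows a prior batch already updated today */

    SET @rows = @@ROWCOUNT;
END;

Smaller batches take fewer locks for less time, which helps you dodge lock escalation and long blocking chains.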

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

Deadlock Graph Frustrations In SQL Server

Money In The Bank


Capturing deadlock graphs has come a long way since… 2008? You used to need clunky trace flags (1204 and 1222) that, for some reason, would dump information line by line into the error log.

Nowadays, you can get a lot of great information from the system_health Extended Events session, or set up your own session to capture them (to wit, deadlock graphs are probably the most value I’ve ever gotten out of Extended Events).
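
If you want to fish deadlock graphs out of system_health yourself, here’s a minimal sketch against its ring buffer target (which only keeps recent events, so don’t expect ancient history):

SELECT xed.xevent.query('.') AS deadlock_graph
FROM
(
    SELECT CAST(t.target_data AS XML) AS target_data
    FROM sys.dm_xe_sessions AS s
    JOIN sys.dm_xe_session_targets AS t
        ON s.address = t.event_session_address
    WHERE s.name = N'system_health'
    AND   t.target_name = N'ring_buffer'
) AS x
CROSS APPLY x.target_data.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS xed(xevent);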

But no matter what you use to collect or display them, deadlock graphs can return some confusing information.

SELECTS!

Sure, you can make selects more deadlock-prone with locking hints: serializable, repeatable read, XLOCK, and all that. But that’s not really the point. The point is that it’s usually not just a select happening in a transaction that’s taking locks.

[Stored procedure screenshot: “Heh.”]
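
The procedure in that screenshot isn’t reproduced here as text, so here’s a hypothetical sketch of the general shape: an update and a select sharing a transaction. The table and all the names are made up.

CREATE OR ALTER PROCEDURE dbo.deadlock_bait (@Id INT)
AS
BEGIN
    SET NOCOUNT ON;

    BEGIN TRANSACTION;

        /* The update takes locks and holds them until the commit. */
        UPDATE d
            SET d.Amount += 1
        FROM dbo.DeadlockDemo AS d
        WHERE d.Id = @Id;

        /* The select runs inside the same transaction, so when two
           sessions tangle, it's the select that shows up in the graph. */
        SELECT SUM(d.Amount) AS total_amount
        FROM dbo.DeadlockDemo AS d;

    COMMIT TRANSACTION;
END;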

With a stored procedure like this, the deadlock graph can be weird looking.

Even though it’s the updates holding locks, the select queries show up as owners.

[Deadlock graph XML screenshot: “You’re not you.”]

And that brings me to the next oddity!

Cut Off Text!

This is also an “issue” with the plan cache. I get it — logging the gazillion lines of text you monsters cram into a single statement would be difficult — but most of the time I’d rather have everything from the FROM on than just a bunch of selected columns.

Sometimes, though, even short text gets cut off.

[Deadlock graph XML screenshot: “?_?”]

Where id = what?

Why is the o in “cornholio” gone?

I DON’T GET IT. WHERE DID IT GO.
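
One workaround sketch, if the graph at least hands you a fragment: go fishing in the plan cache for the rest of the statement. The search string here is just the fragment from the screenshot above.

SELECT st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE N'%cornholi%'; /* the cut-off fragment the graph gave us */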

Unresolved Procedures

You know, with this information, you can, like… Nevermind.

[Deadlock graph XML screenshot: “Dammit, janet”]

As if figuring out deadlocks isn’t hard enough, we now have the additional pleasure of digging this out ourselves.

Why? I don’t know.
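
For what it’s worth, each process node in the graph carries an objectid and a dbid, so you can resolve the name yourself. The numbers here are placeholders:

SELECT OBJECT_NAME(1157579162, 5) AS procedure_name; /* objectid and dbid from the deadlock graph */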

Monkey In The Tank


None of this stuff is fun to deal with, even for people experienced with troubleshooting deadlocks.

If you’re just starting out, it can make figuring the situation out seem impossible.

Don’t worry though, I’m sure there are robots coming.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

What Parameter Sniffing Looks Like In SQL Server’s Plan Cache

A Plan Has No Name


There are so many ways this question gets phrased.

  • Why is my query sometimes slow?
  • Why does restarting SQL Server make my query fast again?
  • Why does clearing the plan cache fix performance?

A while back, I put together a pretty good rundown of this on the DBA Stack Exchange site.

In the plan cache, it’s really hard to tell if a query is suffering from parameter sniffing in isolation.

By that I mean, if someone sends you a cached plan that’s slow, how can you tell if it’s because of parameter sniffing?

Cached Plans Lie


Here’s a simple example with a simple index:

CREATE INDEX ix_grindr 
    ON dbo.Users (CreationDate);

This query will return a couple days of data:

DECLARE @CreationDate DATETIME = '2013-12-30'
DECLARE @sql NVARCHAR(MAX) = 
N'
SELECT *
FROM dbo.Users AS u
WHERE u.CreationDate >= @iCreationDate
ORDER BY u.CreationDate DESC;
'

EXEC sys.sp_executesql @sql, 
                       N'@iCreationDate DATETIME', 
                       @iCreationDate = @CreationDate;
GO

This query will return a year and a couple days worth of data:

DECLARE @CreationDate DATETIME = '2012-12-30'
DECLARE @sql NVARCHAR(MAX) = 
N'
SELECT *
FROM dbo.Users AS u
WHERE u.CreationDate >= @iCreationDate
ORDER BY u.CreationDate DESC;
'

EXEC sys.sp_executesql @sql, 
                       N'@iCreationDate DATETIME', 
                       @iCreationDate = @CreationDate;
GO 

Time After Time


If we run those one right after the other, the query that returns a year of data will use the query plan of the query that returns a day of data.

This is what performance tuners call “bad”.

Let’s look at the cached plan!

SELECT ds.execution_count,
       ds.min_worker_time / 1000. AS min_worker_time_ms, /* worker_time is reported in microseconds */
       ds.max_worker_time / 1000. AS max_worker_time_ms,
       ds.min_elapsed_time / 1000. AS min_elapsed_time_ms,
       ds.max_elapsed_time / 1000. AS max_elapsed_time_ms,
       ds.min_logical_reads,
       ds.max_logical_reads,
       ds.min_rows,
       ds.max_rows,
       SUBSTRING(st.text, (ds.statement_start_offset / 2) + 1,
                 ((CASE ds.statement_end_offset
                       WHEN -1
                       THEN DATALENGTH(st.text)
                       ELSE ds.statement_end_offset
                   END - ds.statement_start_offset) / 2) + 1) AS text,
       dp.query_plan
FROM sys.dm_exec_query_stats AS ds
CROSS APPLY sys.dm_exec_sql_text(ds.plan_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(ds.plan_handle) AS dp
WHERE st.dbid = DB_ID()
AND ds.query_plan_hash = 0x2D530BDF87345191
OPTION(RECOMPILE);

This query will go look at some details about the queries I ran.

Since my code is parameterized, the cached plan for both executions looks like this:

[Query plan screenshot: “So cute though”]

How could this plan cause any trouble?

Metric System


The plan isn’t telling us everything, though.

Look what those other columns have to say:

[Query execution details screenshot: “Sometimes~”]

We have two executions, which we knew! We ran it twice.

But the min and max for many important metrics are very different.

CPU, wall clock time, reads, rows… They all tell me that this query has wild swings in performance.

On Your Own


One simple way to find queries with parameter sniffing issues is to look for ones whose CPU usage swings wildly between executions.

You could easily do this with the other metrics; I just like CPU as a measure of these things. The filter below looks for queries whose max worker time is at least 100x their min.

SELECT TOP (100) ds.execution_count,
       ds.min_worker_time / 1000. AS min_worker_time_ms, /* worker_time is reported in microseconds */
       ds.max_worker_time / 1000. AS max_worker_time_ms,
       ds.min_elapsed_time / 1000. AS min_elapsed_time_ms,
       ds.max_elapsed_time / 1000. AS max_elapsed_time_ms,
       ds.min_logical_reads,
       ds.max_logical_reads,
       ds.min_rows,
       ds.max_rows,
       SUBSTRING(st.text, (ds.statement_start_offset / 2) + 1,
                 ((CASE ds.statement_end_offset
                       WHEN -1
                       THEN DATALENGTH(st.text)
                       ELSE ds.statement_end_offset
                   END - ds.statement_start_offset) / 2) + 1) AS text,
       dp.query_plan
FROM sys.dm_exec_query_stats AS ds
CROSS APPLY sys.dm_exec_sql_text(ds.plan_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(ds.plan_handle) AS dp
WHERE st.dbid = DB_ID()
AND ds.execution_count > 1
AND ds.min_worker_time * 100. < ds.max_worker_time /* max CPU is at least 100x min CPU */
ORDER BY max_worker_time_ms DESC
OPTION(RECOMPILE);

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

Why Is SYSDATETIME() Slower Than SYSUTCDATETIME() In SQL Server?

Consider the following code that calls SYSDATETIME() 10 million times in a loop:

GO

CREATE OR ALTER PROCEDURE #p_local AS
BEGIN
	SET NOCOUNT ON;

	DECLARE @dummy DATETIME2(7), @loops INT = 0;

	WHILE @loops <= 10000000
	BEGIN
		SET @dummy = SYSDATETIME();

		SET @loops = @loops + 1;
	END;
END;

GO

EXEC #p_local;

On my machine the code takes about 11.6 seconds to execute. Replacing SYSDATETIME() with SYSUTCDATETIME() makes the code take only 4.3 seconds to execute. Why is SYSUTCDATETIME() so much faster than SYSDATETIME()?

It’s always a CPU problem


Both while loops drive a CPU core to 100% while executing. Within SQL Server, elapsed time nearly equals CPU time. For this code, I wouldn’t expect waits, latches, or spinlocks to reveal any clues. Query plans, execution stats, and other DMVs are unlikely to help either. One thing that could help is information about the internals of SYSDATETIME() and SYSUTCDATETIME(). There’s a little bit in Books Online:

SQL Server obtains the date and time values by using the GetSystemTimeAsFileTime() Windows API. The accuracy depends on the computer hardware and version of Windows on which the instance of SQL Server is running. The precision of this API is fixed at 100 nanoseconds. The accuracy can be determined by using the GetSystemTimeAdjustment() Windows API.

All of the useful information is probably locked behind the vault at Microsoft. Fortunately, both SQL Server and the guest OS are happy to tell us what SQL Server is using its CPU to do, as long as we have the right public symbols and ask nicely. I know of four primary ways of doing this:

  1. A True Professional could use WinDbg (the cool kids pronounce it “wind bag”) or another debugger to step through the code of both functions. I don’t have the skills or patience to do that.
  2. DBCC STACKDUMP or sqldumper.exe can be used to create a filtered memory dump on demand. The problem with this approach is that it only provides a single snapshot and our functions execute very quickly.
  3. Extended events can provide callstacks for many events. I’m not aware of an extended event that would fire at the right times for this code.
  4. ETW tracing can provide a summary of call stacks during a sampled time period. Common choices are Windows Performance Recorder or PerfView.

Option 4 is exactly what we need. We can run a single query at a time that performs uniform work and we want to see where CPU time is spent during the entire query’s execution. My tool of choice is PerfView.

Revealing SQL Server’s secrets


I used PerfView’s default settings and collected ETW profile data separately during both while loops. After resolving symbols and changing folding to 0, there’s a clear difference between SYSDATETIME() (on the left) and SYSUTCDATETIME() (on the right):

kernelbase!GetTimeZoneInformation shows up with a high percentage of exclusive time for SYSDATETIME(). It does not show up at all for SYSUTCDATETIME(). In case it helps, here’s documentation for a few of the column names:

Exc – The amount of cost (msec of CPU time) that can be attributed to the particular method itself (not any of its callees)
Exc % – The exclusive cost expressed as a percentage of the total cost of all samples.
Inc – The cost associated with this node as well as all its children (callees) recursively. The inclusive cost of the ROOT contains all costs.
Inc % – The inclusive cost expressed as a percentage of the total cost of all samples (will be 100% for the ROOT node)

sqltses!CXVariant::GetSysDatetime sounds an awful lot like SYSDATETIME() to me. It also shows up in the right place in the stack to be a match. Drilling into the methods called by it:

Note that the call to kernelbase!GetTimeZoneInformation takes up nearly all CPU time. There’s also a call to kernelbase!GetSystemTimeAsFileTime that barely shows up as well.

Sharp-eyed readers might be wondering if SYSDATETIMEOFFSET() results in calls to sqltses!CDatetimeOffset::SetSystemDatetimeOffset. The answer is yes, but this is left as an exercise for the reader.

Again, sqltses!CXVariant::GetSysUTCDatetime sounds an awful lot like SYSUTCDATETIME() to me. Drilling into the methods called by it:

Note the low percentage of inclusive time for the call. SYSUTCDATETIME() is so cheap to call that most of the CPU time is spent executing the loop code instead of the function. kernelbase!GetTimeZoneInformation does not show up at all. There is still a call to kernelbase!GetSystemTimeAsFileTime that barely shows up.

This is probably an oversimplification, but I think it’s fair to say that each call to SYSDATETIME() needs to get time zone information. This results in a call to kernelbase!GetTimeZoneInformation that is relatively expensive. There is no need to call that method for SYSUTCDATETIME().

Does this matter? It depends™


Database folks like answering questions with “it depends”. Regrettably, they don’t always provide sufficient or accurate details about what it depends on. I strive to not fall into that trap. For many queries the performance difference between the two functions won’t matter. Consider the following two batches:

DROP TABLE IF EXISTS #BENCHMARK_TIME_FUNCTIONS;

CREATE TABLE #BENCHMARK_TIME_FUNCTIONS (SAVED_DATETIME DATETIME2(7) NULL);

INSERT INTO #BENCHMARK_TIME_FUNCTIONS WITH (TABLOCK)
SELECT TOP (10000000) SYSDATETIME()
FROM master..spt_values t1
CROSS JOIN master..spt_values t2
CROSS JOIN master..spt_values t3
OPTION (MAXDOP 1);

GO

DROP TABLE IF EXISTS #BENCHMARK_TIME_FUNCTIONS;

CREATE TABLE #BENCHMARK_TIME_FUNCTIONS (SAVED_DATETIME DATETIME2(7) NULL);

INSERT INTO #BENCHMARK_TIME_FUNCTIONS WITH (TABLOCK)
SELECT TOP (10000000) SYSUTCDATETIME()
FROM master..spt_values t1
CROSS JOIN master..spt_values t2
CROSS JOIN master..spt_values t3
OPTION (MAXDOP 1);

SQL Server doesn’t need to execute the datetime functions once per row. If it did, we’d expect a run time difference of about 7 seconds between the two queries. Instead, they execute in nearly exactly the same amount of time. I think that the details about guarantees of this behavior were found on a Connect (RIP) comment.
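
One way to see that for yourself, while the temp table from the second batch is still around, is to count the distinct values that were actually captured. Per-row execution would produce millions of distinct values instead of a handful:

SELECT COUNT_BIG(DISTINCT SAVED_DATETIME) AS distinct_values
FROM #BENCHMARK_TIME_FUNCTIONS;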

The performance difference can make a difference when the function is used to profile code that executes row by row. Suppose that we have related SQL statements and we want to measure the execution time of just one of the statements by calculating the time difference before and after execution. I’m not saying that this is the best way of doing this analysis, but I suspect it to be a common technique. Below is an example stored procedure with one copy using SYSDATETIME() and the other using SYSUTCDATETIME():

GO

CREATE OR ALTER PROCEDURE #USE_SUPERIOR_AMERICAN_TIME AS
BEGIN
	DECLARE @start_time DATETIME2(7),
	@end_time DATETIME2(7),
	@total_microseconds BIGINT = 0,
	@loops INT = 0;

	SET NOCOUNT ON;

	DROP TABLE IF EXISTS dbo.OFFICIAL_STATEMENTS;
	CREATE TABLE dbo.OFFICIAL_STATEMENTS (OFFICIAL_STATEMENT VARCHAR(30));
	
	WHILE @loops <= 50000
	BEGIN
		INSERT INTO dbo.OFFICIAL_STATEMENTS VALUES ('NO COLLUSION');
		IF @@ROWCOUNT > 0	
		BEGIN
			SET @start_time = SYSDATETIME();

			INSERT INTO dbo.OFFICIAL_STATEMENTS VALUES ('NO OBSTRUCTION');

			SET @end_time = SYSDATETIME();

			SET @total_microseconds = @total_microseconds + DATEDIFF(MICROSECOND, @start_time, @end_time);
		END;

		SET @loops = @loops + 1;
	END;

	SELECT @total_microseconds;
END;

GO

EXEC #USE_SUPERIOR_AMERICAN_TIME;

GO

CREATE OR ALTER PROCEDURE #USE_INFERIOR_EUROPEAN_TIME AS
BEGIN
	DECLARE @start_time DATETIME2(7),
	@end_time DATETIME2(7),
	@total_microseconds BIGINT = 0,
	@loops INT = 0;

	SET NOCOUNT ON;

	DROP TABLE IF EXISTS dbo.OFFICIAL_STATEMENTS;
	CREATE TABLE dbo.OFFICIAL_STATEMENTS (OFFICIAL_STATEMENT VARCHAR(30));
	
	WHILE @loops <= 50000
	BEGIN
		INSERT INTO dbo.OFFICIAL_STATEMENTS VALUES ('NO COLLUSION');
		IF @@ROWCOUNT > 0	
		BEGIN
			SET @start_time = SYSUTCDATETIME();

			INSERT INTO dbo.OFFICIAL_STATEMENTS VALUES ('NO OBSTRUCTION');

			SET @end_time = SYSUTCDATETIME();

			SET @total_microseconds = @total_microseconds + DATEDIFF(MICROSECOND, @start_time, @end_time);
		END;

		SET @loops = @loops + 1;
	END;

	SELECT @total_microseconds;
END;

GO

EXEC #USE_INFERIOR_EUROPEAN_TIME;

The second insert query takes a total of 4381219 microseconds with SYSDATETIME() and 4164658 microseconds with SYSUTCDATETIME(). That’s a 5% difference just from changing the datetime function. Of course, since this code measures differences between times, SYSUTCDATETIME() is the correct choice anyway: it stays accurate even if there’s a daylight saving time switch during code execution. My own approach is to avoid running one-off stored procedures around 2:00 AM.

Final thoughts


There’s a large performance difference between SYSDATETIME() and SYSUTCDATETIME(). It isn’t necessary to guess which one is slower and why. ETW tracing tools such as PerfView can reveal exactly where the performance difference comes from. Thanks for reading!

Are Self Joins Ever Better Than Key Lookups In SQL Server?

Sorta Topical


Like most tricks, this has a specific use case, but can be quite effective when you spot it.

I’m going to assume you have a vague understanding of parameter sniffing with stored procedures going into this. If you don’t, the post may not make a lot of sense.

Or, heck, maybe it’ll give you a vague understanding of parameter sniffing in stored procedures.

One For The Money


Say I have a stored procedure named dbo.WORLDSTAR that accepts a parameter called @Reputation.

The body of the procedure looks like this:

    SELECT TOP (1000) 
	        u.*
    FROM dbo.Users AS u
    WHERE u.Reputation = @Reputation
    ORDER BY u.CreationDate DESC;

In the users table, there are a lot of people with a Reputation of 1.

There are not so many with a Reputation of 2.

+------------+---------+
| Reputation | records |
+------------+---------+
|          1 | 1090043 |
|          2 |    1854 |
+------------+---------+
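
Those counts came from a simple aggregation, something like this:

SELECT u.Reputation, COUNT_BIG(*) AS records
FROM dbo.Users AS u
WHERE u.Reputation IN (1, 2)
GROUP BY u.Reputation
ORDER BY u.Reputation;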

Two For The Slow


Data distributions like this matter. They change how SQL Server approaches coming up with an execution plan for a query.

Which indexes to use, what kind of joins to use, how to aggregate data, if the plan should be serial or parallel…

The list goes on and on.

In this case, we have a narrow-ish nonclustered index:

    CREATE INDEX whatever 
        ON dbo.Users (Reputation, Age, CreationDate);

When I run my stored procedure and look for Reputation = 2, the plan is very fast.

EXEC dbo.WORLDSTAR @Reputation = 2;

[Query plan screenshot: “Getting to know you.”]

This is a great plan for a small number of rows.

When I run it for a large number of rows, it’s not nearly as fast.

EXEC dbo.WORLDSTAR @Reputation = 1;

[Query plan screenshot: “Stuck on you.”]

We go from a fraction of a second to over three seconds.

This is bad parameter sniffing.

If we run it for Reputation = 1 first, we don’t have the same problem.

That’s good(ish) parameter sniffing.

Better For Everyone


Many things that prevent parameter sniffing will only give you a so-so plan. It may be better than the alternative, but it’s certainly not a “fix”.

It’s possible to get a better plan for everyone in this situation by re-writing the Key Lookup as a self join:

    SELECT TOP (1000) 
	        u2.*
    FROM dbo.Users AS u
    JOIN dbo.Users AS u2
        ON u.Id = u2.Id
    WHERE u.Reputation = @Reputation
    ORDER BY u.CreationDate DESC;

The reason why is slightly complicated, but I’ll do my best to explain it simply.

Here’s what the bad parameter sniffing plan looks like for each query.

Note that the Key Lookup plan still runs for ~3 seconds, while the self-join plan runs for around half a second.

[Query plan screenshot: “DAWG CHILL”]

While it’s possible for Key Lookups to have Sorts introduced to optimize I/O… That doesn’t happen here.

The main difference between the two plans (aside from run time), is the position of the Sort.

In the Key Lookup plan (top), the Key Lookup between the nonclustered and clustered indexes runs to completion.

In other words, for everyone with a Reputation of 1, we go to the clustered index to retrieve the columns that aren’t part of the nonclustered index.

In the self-join plan (bottom), all rows go into the Sort, but only the 1000 come out.

Different World


The difference is more obvious when viewed with Plan Explorer.

[Query plan screenshot: “Get’Em High”]

In the Key Lookup plan, rows aren’t narrowed down until the end, so the seek occurs ~1 million times.

In the self-join plan, they’re eliminated directly after the Index Seek, so the join only runs for 1000 rows and produces 1000 seeks.

This doesn’t mean that Top N Sorts are bad, it just means that they may not produce optimal plans around Key Lookups.

When This Doesn’t Work


Without a TOP, the self-join pattern isn’t as dramatically faster, but it is about half a second better (4.3s vs. 3.8s) for the bad parameter sniffing scenario, and far less for the others.

Of course, an index change to put CreationDate as the second key column fixes the issue by removing the need to sort data at all.

    CREATE INDEX whatever --Current Index
        ON dbo.Users (Reputation, Age, CreationDate);
    GO 
    
    CREATE INDEX apathy --Better Index For This Query
	    ON dbo.Users (Reputation, CreationDate, Age);
    GO

But, you know, not everyone is able to make index changes easily, and changing the key column order can cause problems for other queries.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

How Bad Cardinality Estimates Lead To Bad Query Plan Choices: Tuning Indexes For Better Performance

Batch Cussidy


In the last post, we looked at how SQL Server 2019’s Batch Mode for Row Store could have helped our query.

In short, it didn’t. Not because it’s bad, just because the original guess was still bad.

Without a hint, we still got a poorly performing Merge Join plan. With a hint, we got a less-badly-skewed parallel plan.

Ideally, I’d like a good plan without a hint.

In this post, I’ll focus on more traditional things we could do to improve our query.

I’ll approach this like I would if you gave me any ol’ query to tune.

Here’s what we’re starting with:

    SELECT   p.*
    FROM     dbo.Posts AS p
    JOIN     dbo.Votes AS v
        ON p.Id = v.PostId
    WHERE    p.PostTypeId = 2
    AND      p.CreationDate >= '20131225'
    ORDER BY p.Id;

First Pass


I don’t know about you, but I typically like to index my join columns.

Maybe not always, but when the optimizer is choosing to SORT 52 MILLION ROWS each and every time, I consider that a cry for help.

Indexes sort data.

Let’s try that first.

	CREATE INDEX ix_fluffy 
	    ON dbo.Votes(PostId);

This is… Okay.

[Query plan screenshot: “Willingly!”]

I’m not saying this plan is great. It’s certainly faster than the Merge Join plan, and the optimizer chose it without us having to hint anything.

It takes 3.6 seconds total. I think we can do better.

Second Pass


I wonder if a temp table might help us.

    SELECT   *
	INTO     #p
    FROM     dbo.Posts AS p
    WHERE    p.PostTypeId = 2
    AND      p.CreationDate >= '20131225';
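
The final select only appears as a screenshot below, but it would look something like this sketch:

    SELECT   p.*
    FROM     #p AS p
    JOIN     dbo.Votes AS v
        ON p.Id = v.PostId
    ORDER BY p.Id;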

This is… Okay. Again.

The Insert takes 1.8 seconds:

[Query plan screenshot: “Scrunchy face.”]

The final select takes 670ms:

[Query plan screenshot: “Add’em up.”]

Usually this is the point where I’ll stop and report in:

“I’ve spent X amount of time working on this, and I’ve gotten the query from 27 seconds down to about 2.5 seconds. I can do a little more and maybe shave more time off, but I’ll probably need to add another index. It’s up to you though, and how important this query is to end users.”

We could totally stop here, but sometimes people wanna go one step further. That’s cool with me.

Third Pass


The insert query is asking for an index, but it’s a dumb dumb booty head index.

[Query plan screenshot: “Thanks, Robots.”]

Yep. Include every column in the table. Sounds legit.

Let’s hedge our bets a little.

    CREATE INDEX ix_froggy
	    ON dbo.Posts(PostTypeId, CreationDate);

I bet we’ll use a narrow index on just the key columns here, and do a key lookup for the rest.

[Query plan screenshot: “Righto”]

This time I was right, and our Insert is down to 200ms.

This doesn’t change the speed of our final select — it’s still around 630-670ms when I run it.

Buuuuuuut, this does get us down to ~900ms total.

Final Destination


Would end users notice 900ms over 2.5 seconds? Maybe if they’re just running it in SSMS.

In my experience, by the time data ends up in the application, gets rendered, and then displayed to end users, your query tuning work can feel a bit sabotaged.

They’ll notice 27 seconds vs 2.5 seconds, but not usually 2.5 seconds vs 1 second.

It might make you feel better as a query tuner, but I’m not sure another index is totally worth that gain (unless it’s really helping other queries, too).

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

How Bad Cardinality Estimates Lead To Bad Query Plan Choices: Does SQL Server 2019 Batch Mode On Rowstore Help?

Bad Fit


In the last post, I tried to play some tricks on the optimizer to have Batch Mode make a query faster.

It didn’t quite work out the way I wanted.

That isn’t to knock Batch Mode. It’s not supposed to fix everything, and I played a silly trick to get some operators to use it.

But it did make me curious if 2019’s Batch Mode for Row Store would improve anything.

After all, no dirty tricks are involved, which means more operators can potentially use Batch Mode, and maybe we’ll avoid that whole skewed parallelism problem.

I mean, what are the odds that I’ll get the exact same plans and everything?

Spoiler


[Query plan screenshot: “50%”]

The Merge Join plan is still wack as hell.

The forced hash join plan got better, though.

Still Skewy?


The skewed parallelism isn’t as contagious going across without the separate Filter/Bitmap operator.

[Query plan screenshot: “Uggo”]

With both the Hash Join and the Scan of Votes occurring in Batch Mode, the Bitmap is able to be pushed to the storage engine.

Note that there’s no explicit Bitmap filter anywhere in the plan, but one still shows up in the predicate of the Scan of the Votes table.

[Query plan screenshot: “Something new”]

With 3 out of 4 threads doing… vaguely even work, we end up in a better place, especially compared to only one thread doing any work.

At least now I can show you the Batch Mode Sort lying about being parallel.

[Query plan screenshot: “Crud”]

Three threads with rows at the Hash Join go into the Sort on a single thread.

You Know…


As much as I love these new bells and whistles, I’m gonna go back to good ol’ fashioned query tuning next.

The problem here is still that a bad estimate causes a Merge Join to be chosen in the first place.

When we force the Hash Join, query runtime is reduced at the expense of a quite large memory grant.

This is likely preferable for most people (especially with Batch Mode Memory Grant Feedback), but it doesn’t address the root cause.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

How Bad Cardinality Estimates Lead To Bad Query Plan Choices: Does Batch Mode Help?

Notes and Notability


In the last post, I showed you a query where a bad plan was chosen because of bad guesses.

In this post, I’m going to see if Batch Mode will help anything.

To do that, we’re going to use an empty temp table with a clustered columnstore index on it.

    CREATE TABLE #t (id INT NOT NULL, 
	                 INDEX c CLUSTERED COLUMNSTORE);

First Up, No Hints


    SELECT   p.*
    FROM     dbo.Posts AS p
    JOIN     dbo.Votes AS v
        ON p.Id = v.PostId
	LEFT JOIN #t ON 1 = 0
    WHERE    p.PostTypeId = 2
    AND      p.CreationDate >= '20131225'
    ORDER BY p.Id;

[Query plan screenshot: “Deeply Unsatisfying”]

Though this plan is ~6 seconds faster than the Merge Join equivalent in the last post, that’s not the kind of improvement I’m shooting for.

Remember that the Hash Join plan in Row Mode ran in 2.6 seconds.

The only operator to run in Batch Mode here is the Sort. To be fair, it’s really the only one eligible with the trick we used.

Forcing this plan to run in parallel, we go back to a 27 second runtime with no operators in Batch Mode.

Next Up, Hash Hint


Disappointingly, this gets worse. The Row Mode only plan was 2.6 seconds, and this is 6.8 seconds.

[Query plan screenshot: “Hush now”]

The answer to why the Batch Mode plan is 3x slower lies in our Row Mode plan. Let’s look at them head to head.

[Query plan screenshot: “Pick me!”]

See that Repartition Streams operator? It literally saves the entire query.

The Batch Mode plan doesn’t get one. Because of that, Bad Things Happen™

Skew Job


Both plans start out with an unfortunate scan of the Posts table.

[Query plan screenshot: “Sucktown”]

But in the Row Mode plan, Repartition Streams does exactly what it sounds like it does, and balances things out. The number of rows on each thread is even because of this. Crazy to think about, but threads dividing work up evenly is, like, a good thing.

In the Batch Mode plan, that… doesn’t happen.

[Query plan screenshot: “Apology dog”]

We have the dreaded serial parallel query. Aside from the Scan of the Votes table, only one thread is ever active across the rest of the query.

This isn’t a limitation of Batch Mode generally, though I suspect it has something to do with why we don’t see Repartition Streams show up.

One limitation of Batch Mode is with Sorts: they’re single threaded (unless they’re a child operator of a Window Aggregate), a point this particular demo obfuscates.

Well, Darn


I was expecting some fireworks there.

Maybe 2019 will help?

In the next post, we’ll look at that.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

How Bad Cardinality Estimates Lead To Bad Query Plan Choices

Let’s Run A Cruddy Query


We’ve got no supporting indexes right now. That’s fine.

The optimizer is used to not having helpful indexes. It can figure things out.

    SELECT   p.*
    FROM     dbo.Posts AS p
    JOIN     dbo.Votes AS v
        ON p.Id = v.PostId
    WHERE    p.PostTypeId = 2
    AND      p.CreationDate >= '20131225'
    ORDER BY p.Id;

[Query plan screenshot: “Snap City.”]

So uh. We got a merge join here. For some reason. And a query that runs for 27 seconds.

The optimizer was all “no, don’t worry, we’re good to sort 52 million rows. We got this.”

[Query plan screenshot: “YOU’VE GOT THIS.”]

[You don’t got this — ED]

Choices, Choices


Since we have an order by on the Id column of the Posts table, and that column is the Primary Key and Clustered index, it’s already in order.

The optimizer chose to order the PostId column from the Votes table, and preserve the index order of the Id column.

Merge Joins expect ordered input on both sides, don’tcha know?

It could have chosen a Hash Join, but then the order of the Id column from the Posts table wouldn’t have been preserved on the other side.

Merge Joins are order preserving, Hash Joins aren’t. If we use a Hash Join, we’re looking at ordering the results of the join after it’s done.

But why?

[Query plan screenshot: “Good Guess, Bad Guess”]

Going into the Merge Join, we have a Good Guess™

Coming out of the Merge Join, we have a Bad Guess™

Thinking back to the Sort operator, it only has to order the PostId column from the Votes table.

That matters.

Hash It Up


To compare, we need to see what happens with a Hash Join.
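
The screenshots don’t show exactly how I forced it; a join hint is the usual sledgehammer, something like this:

    SELECT   p.*
    FROM     dbo.Posts AS p
    JOIN     dbo.Votes AS v
        ON p.Id = v.PostId
    WHERE    p.PostTypeId = 2
    AND      p.CreationDate >= '20131225'
    ORDER BY p.Id
    OPTION (HASH JOIN);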

[Query plan screenshot: “smh”]

Okay, ignore the fact that this one runs for 2.6 seconds, and the other one ran for 27 seconds.

Just, like, put that aside.

Here’s why:

[Query plan screenshot: “Neither Nor”]

This Sort operator is different. We need to sort all of the columns in the Posts table by the Id column.

Remember that the Id column is now out of order after the Hash Join.

Needing to sort all those columns, including a bunch of string columns, along with an NVARCHAR(MAX) column — Body — inflates the ever-weeping-Jesus out of the memory grant.

[Query plan screenshot: “I see.”]

The Hash Join plan is not only judged to be more than twice as expensive, but it also asks for a memory grant that’s ~3x the size of the Merge Join plan.

Finish Strong


Let’s tally up where we’re at.

Both queries have identical estimated rows.

The optimizer chooses the Merge Join plan because it’s cheaper.

  • The Merge Join plan runs for 27 seconds, asks for 3.3GB of RAM, and spills to disk.
  • The Hash Join plan runs for 3 seconds, asks for 9.7GB of RAM and doesn’t spill, but it only uses 188MB of the memory grant.

Neither of those memory grants is doing us any favors: one spills, and the other asks for far more than it uses.

In a world where memory grants adjust between executions, I’ll take the Hash Join plan any day of the week.

But this is SQL Server 2017, and we don’t get that without Batch Mode, and we don’t get Batch Mode without playing some tricks.

There are lots of solutions if you’re allowed to tune queries or indexes, but not so much otherwise.

In the next couple posts, I’ll look at different ways to approach this.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.