You probably have a bunch of nonclustered indexes that need to be read so that…
All those indexes can be modified, and written to the transaction log as well
If your database is synchronously replicated anywhere, you have to wait on that ack
There are no Batch Mode modifications
Usually when modifications need to hit a significant number of rows, I want to think about batching things. That might not always be possible, but it’s certainly a kinder way to do things.
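If batching is on the menu, a loop like this rough sketch is usually where I start. The table, column, and cutoff here are made up purely for illustration; size the TOP to whatever your transaction log and blocking tolerance can stomach.

DECLARE @rows integer = 1;

WHILE @rows > 0
BEGIN
/* Delete a small chunk at a time so each transaction stays short */
DELETE TOP (5000)
FROM dbo.some_table
WHERE some_date < DATEADD(YEAR, -1, SYSDATETIME());

SET @rows = @@ROWCOUNT;
END;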
If you can’t batch inserts and deletes, you may want to think about using partitioning to quickly switch data in and out.
And of course, don’t use MERGE.
Thanks for reading!
Going Further
If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.
There was a three-part series of posts where I talked about a weird performance issue you can hit with parameterized TOP. While doing some query tuning for a client recently, I ran across a funny scenario where they were using TOP PERCENT to control the number of rows coming back from queries.
With a parameter.
So uh. Let’s talk about that.
Setup Time
Let’s start with a great index. Possibly the greatest index ever created.
CREATE INDEX whatever
ON dbo.Votes
(VoteTypeId, CreationDate DESC)
WITH
(
MAXDOP = 8,
SORT_IN_TEMPDB = ON
);
GO
Now let me show you this stored procedure. Hold on tight!
CREATE OR ALTER PROCEDURE dbo.top_percent_sniffer
(
@top bigint,
@vtid int
)
AS
SET NOCOUNT, XACT_ABORT ON;
BEGIN
SELECT TOP (@top) PERCENT
v.*
FROM dbo.Votes AS v
WHERE v.VoteTypeId = @vtid
ORDER BY v.CreationDate DESC;
END;
Cool. Great.
Spool Hardy
When we execute the query, the plan is stupid.
EXEC dbo.top_percent_sniffer
@top = 1,
@vtid = 6;
GO
We don’t use our excellent index, and the optimizer uses an eager table spool to hold rows and pass the count to the TOP operator until we hit the correct percentage.
This is the least ideal situation we could possibly imagine.
Boot and Rally
A while back I posted some strange looking code on Twitter, and this is what it ended up being used for (among other things).
CREATE OR ALTER PROCEDURE dbo.top_percent_sniffer
(
@top bigint,
@vtid int
)
AS
SET NOCOUNT, XACT_ABORT ON;
BEGIN;
WITH pct AS
(
SELECT
records =
CONVERT(bigint,
CEILING(((@top * COUNT_BIG(*)) / 100.)))
FROM dbo.Votes AS v
WHERE v.VoteTypeId = @vtid
)
SELECT
v.*
FROM pct
CROSS APPLY
(
SELECT TOP (pct.records)
v.*
FROM dbo.Votes AS v
WHERE v.VoteTypeId = @vtid
ORDER BY v.CreationDate DESC
) AS v;
END;
GO
Soul Bowl
This definitely has drawbacks, since the expression in the TOP always gives a 100 row estimate. For large numbers of rows, this plan could be a bad choice and we might need to do some additional tuning to get rid of that lookup.
There might also be occasions when using a column store index to generate the count would be beneficial, and the nice thing here is that since we’re accessing the table in two different ways, we could use two different indexes.
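If that ever came up, the counting side could be fed by something like this sketch (not part of the original demo, and the index name is made up):

CREATE NONCLUSTERED COLUMNSTORE INDEX whatever_cs
ON dbo.Votes (VoteTypeId);

The CTE that works out the row count could scan the column store index, while the CROSS APPLY that returns the actual rows keeps using the rowstore index on (VoteTypeId, CreationDate DESC).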
But for reliably small numbers of rows, this is a pretty good solution.
Thanks for reading!
Going Further
If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.
When I’m blogging about performance tuning, most of it is from the perspective of Enterprise Edition. That’s where you need to be if you’re serious about getting SQL Server to go as fast as possible. Between the unrealistic memory limits and other feature restrictions, Standard Edition just doesn’t hold up.
Sure, you can probably get by with it for a while, but once performance becomes a primary concern it’s time to fork over an additional 5k a core for the big boat.
They don’t call it Standard Edition because it’s The Standard, like the hotel. Standard is a funny word like that. It can denote either high or low standing through clever placement of “the”. Let’s try an experiment:
Erik’s blogging is standard for technical writing
Erik’s blogging is the standard for technical writing
Now you see where you stand with standard edition. Not with “the”, that’s for sure. “The” has left the building.
Nerd Juice
A lot of the restrictions for column store in Standard Edition are documented, but here’s a quick rundown:
DOP limit of two for queries
No parallelism for creating or rebuilding indexes
No aggregate pushdown
No string predicate pushdown
No SIMD support
Here’s a comparison for creating a nonclustered column store index in Standard and Enterprise/Developer Editions:
The top plan is from Standard Edition, and runs for a minute in a full serial plan. There is a non-parallel plan reason in the operator properties: MaxDOPSetToOne.
I do not have DOP set to one anywhere, that’s just the restriction kicking in. You can try it out for yourself if you have Standard Edition sitting around somewhere. I’m doing all my testing on SQL Server 2019 CU9. This is not ancient technology at the time of writing.
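If you’re not sure what a given server is running, a quick check like this will tell you (just a convenience snippet, not part of the demo):

SELECT
edition = SERVERPROPERTY('Edition'),
product_version = SERVERPROPERTY('ProductVersion'),
product_level = SERVERPROPERTY('ProductLevel');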
The bottom plan is from Enterprise/Developer Edition, where the plan is able to run partially in parallel, and takes 28 seconds (about half the time of the serial plan).
Query Matters
One of my favorite query tuning tricks is getting batch mode to happen on queries that process a lot of rows. It doesn’t always help, but it’s almost always worth trying.
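One of the better-known ways to coax batch mode out of the optimizer on versions without Batch Mode On Rowstore is an empty filtered nonclustered column store index. Here’s a sketch of the idea, not something from this post’s demos; the index name is made up, and you’d want to drop it once you’re done experimenting.

CREATE NONCLUSTERED COLUMNSTORE INDEX dummy_batch_mode
ON dbo.Posts (Id)
WHERE Id = -1 AND Id = -2;
/* The contradictory filter keeps the index empty, but its presence
   makes queries against the table eligible for batch mode */

Whether it helps is a query-by-query thing, and on Standard Edition you’re still stuck with the DOP 2 limit for the batch mode parts.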
The problem is that on Standard Edition, if you’re processing a lot of rows, being limited to a DOP of 2 can be a real hobbler. In many practical cases, a batch mode query at DOP 2 will end up around the same as a row mode query at DOP 8. It’s pretty unfortunate.
In some cases, it can end up being much worse.
SELECT
MIN(p.Id) AS TinyId,
COUNT_BIG(*) AS records
FROM dbo.Posts AS p WITH(INDEX = ncp)
JOIN dbo.Votes AS v
ON p.Id = v.PostId
WHERE p.OwnerUserId = 22656;
SELECT
MIN(p.Id) AS TinyId,
COUNT_BIG(*) AS records
FROM dbo.Posts AS p WITH(INDEX = 1)
JOIN dbo.Votes AS v
ON p.Id = v.PostId
WHERE p.OwnerUserId = 22656;
Here’s the query plan for the first one, which uses the nonclustered column store index on Posts. There is no hint or setting keeping DOP at 2; this really is just a feature restriction.
Higher Ground
The second query, which is only bound by the server-level MAXDOP setting of 8, turns out much faster. The batch mode query takes 3.8 seconds, and the row mode query takes 1.4 seconds.
In Enterprise Edition, there are other considerations for getting batch mode going, like memory grant feedback or adaptive joins, but those aren’t available in Standard Edition.
In a word, that sucks.
Dumb Limit
The restrictions on creating and rebuilding column store indexes at DOP 1 (both clustered and nonclustered), and on queries at DOP 2, all seem even more odd when we consider that there is no restriction on inserting data into a table with a column store index on it.
As an example:
SELECT
p.*
INTO dbo.PostsTestLoad
FROM dbo.Posts AS p
WHERE 1 = 0;
CREATE CLUSTERED COLUMNSTORE INDEX pc ON dbo.PostsTestLoad;
SET IDENTITY_INSERT dbo.PostsTestLoad ON;
INSERT dbo.PostsTestLoad WITH(TABLOCK)
(
Id, AcceptedAnswerId, AnswerCount, Body, ClosedDate,
CommentCount, CommunityOwnedDate, CreationDate,
FavoriteCount, LastActivityDate, LastEditDate,
LastEditorDisplayName, LastEditorUserId, OwnerUserId,
ParentId, PostTypeId, Score, Tags, Title, ViewCount
)
SELECT TOP (1024 * 1024)
p.Id, p.AcceptedAnswerId, p.AnswerCount, p.Body, p.ClosedDate,
p.CommentCount, p.CommunityOwnedDate, p.CreationDate,
p.FavoriteCount, p.LastActivityDate, p.LastEditDate,
p.LastEditorDisplayName, p.LastEditorUserId, p.OwnerUserId,
p.ParentId, p.PostTypeId, p.Score, p.Tags, p.Title, p.ViewCount
FROM dbo.Posts AS p;
SET IDENTITY_INSERT dbo.PostsTestLoad OFF;
Unsupportive Parents
These limits are asinine, plain and simple, and I hope at some point they’re reconsidered. While I don’t expect everything from Standard Edition, because it is Basic Cable Edition, I do think that some of the restrictions go way too far.
Perhaps an edition somewhere between Standard and Enterprise would make sense. When you line the two up, the gaps in both features and pricing are incredibly stark.
There are often mixed needs as well, where some people need Standard Edition with fewer HA restrictions, and some people need it with fewer performance restrictions.
Thanks for reading!
Going Further
If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.
In the year 950 B.C., Craig Freedman wrote a post about subqueries in CASE expressions. It’s amazing how relevant so much of this stuff stays.
In today’s post, we’re going to look at a slightly different example than the one given, and how you can avoid performance problems with them by using APPLY.
Like most query tuning tricks, this isn’t something you always need to employ, and it’s not a best practice. It’s just something you can use when a scalar subquery doesn’t perform as you’d like it to.
How Much Wood
Our starting query looks like this. The point of it is to determine the percentage of answered questions per month.
SELECT
x.YearPeriod,
MonthPeriod =
RIGHT('00' + RTRIM(x.MonthPeriod), 2),
PercentAnswered =
CONVERT(DECIMAL(18, 2),
(SUM(x.AnsweredQuestion * 1.) /
(COUNT_BIG(*) * 1.)) * 100.)
FROM
(
SELECT
YearPeriod = YEAR(p.CreationDate),
MonthPeriod = MONTH(p.CreationDate),
CASE
WHEN EXISTS
(
SELECT
1/0
FROM dbo.Votes AS v
WHERE v.PostId = p.AcceptedAnswerId
AND v.VoteTypeId = 1
)
THEN 1
ELSE 0
END AS AnsweredQuestion
FROM dbo.Posts AS p
WHERE p.PostTypeId = 1
) AS x
GROUP BY
x.YearPeriod,
x.MonthPeriod
ORDER BY
x.YearPeriod ASC,
x.MonthPeriod ASC;
Smack in the middle of it, we have a case expression that goes looking for rows in the Votes table where a question has an answer that’s been voted as the answer.
Amazing.
To start with, we’re going to give it this index.
CREATE INDEX p
ON dbo.Posts(PostTypeId, AcceptedAnswerId)
INCLUDE(CreationDate)
WITH(MAXDOP = 8, SORT_IN_TEMPDB = ON);
Planpains
In all, this query will run for about 18 seconds. The majority of it is spent in a bad neighborhood.
Why does this suck? Boy oh boy. Where do we start?
Sorting the Votes table to support a Merge Join?
Choosing Parallel Merge Joins ever?
Choosing a Many To Many Merge Join ever?
All of the above?
Bout It
If we change the way the query is structured to use OUTER APPLY instead, we can get much better performance in this case.
SELECT
x.YearPeriod,
MonthPeriod =
RIGHT('00' + RTRIM(x.MonthPeriod), 2),
PercentAnswered =
CONVERT(DECIMAL(18, 2),
(SUM(x.AnsweredQuestion * 1.) /
(COUNT_BIG(*) * 1.)) * 100.)
FROM
(
SELECT
YearPeriod = YEAR(p.CreationDate),
MonthPeriod = MONTH(p.CreationDate),
oa.AnsweredQuestion
FROM dbo.Posts AS p
OUTER APPLY
(
SELECT
AnsweredQuestion =
CASE
WHEN v.Id IS NOT NULL
THEN 1
ELSE 0
END
FROM dbo.Votes AS v
WHERE v.PostId = p.AcceptedAnswerId
AND v.VoteTypeId = 1
) oa
WHERE p.PostTypeId = 1
) AS x
GROUP BY
x.YearPeriod,
x.MonthPeriod
ORDER BY
x.YearPeriod ASC,
x.MonthPeriod ASC;
This changes the type of join chosen, and runs for about 3 seconds total.
We avoid all of the problems that the parallel many-to-many Merge Join brought us.
Thanks, Hash Join.
It’s also worth noting that the OUTER APPLY plan asks for an index that would help us a bit, though like most missing index requests it’s a bit half-baked.
USE [StackOverflow2013]
GO
CREATE NONCLUSTERED INDEX [<Name of Missing Index, sysname,>]
ON [dbo].[Votes] ([VoteTypeId])
INCLUDE ([PostId])
GO
Index Plus
Any human could look at this query and realize that having the PostId in the key of the index would be helpful, since we’d have it in secondary order to the VoteTypeId column.
CREATE INDEX v
ON dbo.Votes(VoteTypeId, PostId)
WITH(MAXDOP = 8, SORT_IN_TEMPDB = ON);
If we add that index, we can make the subquery fairly competitive, at about 4.5 seconds total.
But the issue here is now rather than poorly choosing a Sort > Merge Join, we go into a Nested Loops join for ~6 million rows. That’s probably not a good idea.
This index doesn’t leave as profound a mark on the APPLY version of the query. It does improve overall runtime by about half a second, but I don’t think I’d create an index just to get half a second back.
But hey, who knows? Maybe it’d help some other queries, too.
Indexes are cool like that.
Back On The Map
If you’ve got subqueries in your select list that lead to poor plan choices, you do have options. Making sure you have the right indexes in place can go a long way.
You may be able to get competitive performance gains by rewriting them as OUTER APPLY. You really do need to use OUTER here though, because it won’t restrict rows and matches the logic of the subquery. CROSS APPLY would act like an inner join and remove any rows that don’t have a match. That would break the results.
Thanks for reading!
Going Further
If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.
Using the scenario from yesterday’s post as an example of why you might want to think about rewriting queries with Table Spools in them to use temp tables instead, look at how the optimizer chooses a plan with an Eager Table Spool.
The “Eager” part means the entire set of rows is loaded into a temporary object at once.
That’s a lot of rows, innit? Stick some commas in there, and you might just find yourself staring down the barrel of a nine digit number.
Worse, we spend a long time loading data into the spool, and doing so in a serial zone. There’s no good way to know exactly how long the load takes because of odd operator times.
If you recall yesterday’s post, the plan never goes back to parallel after that, either. It runs for nearly 30 minutes in total.
Yes Your Tempdb
If you’re gonna be using that hunka chunka tempdb anyway, you might as well use it efficiently. Unless batch mode is an option for you, either as Batch Mode On Rowstore, or tricking the optimizer, this might be your best bet.
Keep in mind that Standard Edition users have an additional limitation where Batch Mode queries are limited to a DOP of 2, and don’t have access to Batch Mode On Rowstore as of this writing. The DOP limitation especially might make the trick unproductive compared to alternatives that allow for MOREDOP.
For example, if we dump that initial join into a temp table, it only takes about a minute to get loaded at a DOP of 8. That is faster than loading data into the spool (I mean, probably. Just look at that thing.).
SELECT
v.PostId,
v.UserId,
v.BountyAmount,
v.VoteTypeId,
v.CreationDate
INTO #better_spool
FROM dbo.Votes AS v
JOIN dbo.Posts AS p
ON p.Id = v.PostId;

The final query to do the distinct aggregations takes about 34 seconds.

SELECT
PostId = COUNT_BIG(DISTINCT s.PostId),
UserId = COUNT_BIG(DISTINCT s.UserId),
BountyAmount = COUNT_BIG(DISTINCT s.BountyAmount),
VoteTypeId = COUNT_BIG(DISTINCT s.VoteTypeId),
CreationDate = COUNT_BIG(DISTINCT s.CreationDate)
FROM #better_spool AS s;

Another benefit is that each branch that does a distinct aggregation is largely in a parallel zone until the global aggregate.

In total, both queries finish in about 1:45. That’s a big improvement over nearly 30 minutes of relying on the Eager Table Spool and processing all of the distinct aggregates in a serial zone. The temp table here doesn’t have that particular shortcoming.

In the past, I’ve talked a lot about Eager Index Spools. They have a lot of problems too, many of which are worse. Of course, we need indexes to fix those, not temp tables.

Thanks for reading!

Going Further

If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.
Well over 500 years ago, Paul White wrote an article about distinct aggregates. Considering how often I see it while working with clients, and that Microsoft created column store indexes and batch mode rather than allow for hash join hints on CLR UDFs, the topic feels largely ignored.
But speaking of all that stuff, let’s look at how Batch Mode fixes multiple distinct aggregates.
Jumbo Size
A first consideration is around parallelism. Since you don’t pay attention or click links, here’s a quote you won’t read from Paul’s article above:
Another limitation is that this spool does not support parallel scan for reading, so the optimizer is very unlikely to restart parallelism after the spool (or any of its replay streams).
In queries that operate on large data sets, the parallelism implications of the spool plan can be the most important cause of poor performance.
What does that mean for us? Let’s go look. For this demo, I’m using SQL Server 2019 with the compatibility level set to 140.
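If you want to play along at home, knocking the compatibility level down is a one-liner (assuming the StackOverflow2013 database used elsewhere in these demos):

ALTER DATABASE StackOverflow2013
SET COMPATIBILITY_LEVEL = 140;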
SELECT
COUNT_BIG(DISTINCT v.PostId) AS PostId,
COUNT_BIG(DISTINCT v.UserId) AS UserId,
COUNT_BIG(DISTINCT v.BountyAmount) AS BountyAmount,
COUNT_BIG(DISTINCT v.VoteTypeId) AS VoteTypeId,
COUNT_BIG(DISTINCT v.CreationDate) AS CreationDate
FROM dbo.Votes AS v;
In the plan for this query, we scan the clustered index of the Votes table five times, or once per distinct aggregate.
In case you’re wondering, this results in one intent shared object lock on the Votes table.
This query runs for 38.5 seconds, as the crow flies.
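If you want to check the locking claim yourself, one option (not something from the original post) is to peek at sys.dm_tran_locks from another session in the same database while the query is running:

SELECT
dtl.resource_type,
dtl.request_mode,
dtl.request_status,
locked_object = OBJECT_NAME(dtl.resource_associated_entity_id)
FROM sys.dm_tran_locks AS dtl
WHERE dtl.resource_database_id = DB_ID()
AND dtl.resource_type = 'OBJECT';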
A Join Appears
Let’s join Votes to Posts for no apparent reason.
SELECT
COUNT_BIG(DISTINCT v.PostId) AS PostId,
COUNT_BIG(DISTINCT v.UserId) AS UserId,
COUNT_BIG(DISTINCT v.BountyAmount) AS BountyAmount,
COUNT_BIG(DISTINCT v.VoteTypeId) AS VoteTypeId,
COUNT_BIG(DISTINCT v.CreationDate) AS CreationDate
FROM dbo.Votes AS v
JOIN dbo.Posts AS p
ON p.Id = v.PostId;
The query plan now has two very distinct (ho ho ho) parts.
This is part 1. Part 1 is a spoiler. Ignoring that Repartition Streams is bizarre and Spools are indefensible blights, as we meander across the execution plan we find ourselves at a stream aggregate whose child operators have executed for 8 minutes, and then a nested loops join whose child operators have run for 20 minutes and 39 seconds. Let’s go look at that part of the plan.
Each branch here represents reading from the same spool. We can tell this because the Spool operators do not have any child operators. They are starting points for the flow of data. One thing to note here is that there are four spools instead of five, and that’s because one of the five aggregates was processed in the first part of the query plan we looked at.
The highlighted branch is the one that accounts for the majority of the execution time, at 19 minutes, 8 seconds. This branch is responsible for aggregating the PostId column. Apparently a lack of distinct values is hard to process.
But why is this so much slower? The answer is parallelism, or a lack thereof. So, serialism. Remember the 500 year old quote from above?
Another limitation is that this spool does not support parallel scan for reading, so the optimizer is very unlikely to restart parallelism after the spool (or any of its replay streams).
In queries that operate on large data sets, the parallelism implications of the spool plan can be the most important cause of poor performance.
Processing that many rows on a single thread is painful across all of the operators.
Flounder Edition
With SQL Server 2019, we get Batch Mode On Rowstore when the compatibility level gets bumped up to 150.
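For anyone following along, bumping the same StackOverflow2013 database back up looks like this:

ALTER DATABASE StackOverflow2013
SET COMPATIBILITY_LEVEL = 150;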
The result is just swell.
The second query with the join still runs for nearly a minute, but 42 seconds of the process is scanning that big ol’ Posts table.
Grumpy face.
Thanks for reading!
Going Further
If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.