This is always fun to talk to people about, because of all the misconceptions around the concept.
You need to find rows in one table that don’t have a match in another table. Maybe it’s a reconciliation process, maybe it’s part of ETL or something.
Doesn’t matter. Pay attention!
Choices
The way most people will write this query on the first try is like this:
SELECT COUNT_BIG(u.Id) AS records
FROM dbo.Users AS u
LEFT JOIN dbo.Posts AS p
ON u.Id = p.OwnerUserId
WHERE p.Id IS NULL;
The query plan has one of my (generally) least favorite things in it: A filter.
What’s the filter doing?
Looking for NULL values after the join. Yuck.
Better Choices
Expressed more SQL-y, we could use NOT EXISTS.
SELECT COUNT_BIG(u.Id) AS records
FROM dbo.Users AS u
WHERE NOT EXISTS ( SELECT 1/0
                   FROM dbo.Posts AS p
                   WHERE p.OwnerUserId = u.Id );
I mean, look, the Id column in the Posts table is the PK/CX. That means it can’t be NULL, unless it’s a non-matched row in a left join.
If that column is NULL, then every other column will be NULL too. You don’t ever need to select any data from the Posts table. That’s also why the 1/0 up there never throws a divide by zero error: the select list of an EXISTS subquery is never actually evaluated.
The query plan looks like this now:
This performs better (under most circumstances), and gets some additional optimizations: A Bitmap, and a pre-aggregation of the OwnerUserId column in the Posts table.
Other Than Speed?
The NOT EXISTS query will ask for around 200MB less memory to run.
Why is this? Why is there such a difference between logically equivalent queries?
The Left Join version forces both tables to be fully joined together, which produces matches and non-matches.
After the join, we eliminate non-matches in the Filter. This is why I’m generally suspicious of Filter operators: they often mean we have some expression or complication in our logic that prevents the optimizer from eliminating rows earlier. That’s to be expected when we do something like generate and filter on a row number — the row number doesn’t exist until the query runs, so it has to be filtered later than existing data.
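For example, here’s a quick sketch of that row number pattern against the same tables. The n = 1 predicate can only be applied by a Filter, after the Sequence Project operator computes the row number:
WITH ranked AS
(
    SELECT p.Id,
           ROW_NUMBER() OVER ( PARTITION BY p.OwnerUserId
                               ORDER BY p.Score DESC ) AS n
    FROM dbo.Posts AS p
)
SELECT r.Id
FROM ranked AS r
WHERE r.n = 1; /* can’t be pushed into the index access */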
Thanks for reading!
Going Further
If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.
I’m not writing about how, if you use NOT IN and the column you’re comparing against contains NULL values, you’ll get an empty result.
That’s easy enough to show:
DECLARE @not_null TABLE ( id INT NOT NULL );
INSERT @not_null ( id )
VALUES ( 1 );
DECLARE @null TABLE ( id INT NULL );
INSERT @null ( id )
VALUES ( 1 );
INSERT @null ( id )
VALUES ( NULL );
SELECT *
FROM @not_null AS nn
WHERE nn.id NOT IN ( SELECT n.id FROM @null AS n );
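That comes back empty. NOT IN unrolls into a chain of <> comparisons, and nn.id <> NULL evaluates to UNKNOWN rather than true, so no row can ever qualify.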
What’s sort of more interesting to me is what happens in execution plans when you’re comparing NULL-able columns that don’t contain any NULLs.
Exampled!
Here’s the first query I want to show you. In the Posts table, the OwnerUserId column is NULLable, but doesn’t have any NULLs in it. The Id column in the Users table is not NULLable — it’s the PK/CX on the table.
SELECT COUNT_BIG(*) AS records
FROM dbo.Users AS u
WHERE u.Id NOT IN ( SELECT p.OwnerUserId
                    FROM dbo.Posts AS p
                    WHERE p.Score < 0 );
Note that for all these queries, I’ve created these indexes:
CREATE INDEX beavis ON dbo.Users(AccountId);
CREATE INDEX butthead ON dbo.Posts(OwnerUserId, Score);
CREATE INDEX stewart ON dbo.Posts(Score, OwnerUserId);
The important part of the query looks like this:
Zooming in to a sorta weird part of the query:
The optimizer spins up a Row Count Spool to make sure it’s right about the lack of NULLs. You can see the seek predicate doing so, and finding 0 rows, but taking ~200ms to do it. This time was much worse without those great indexes.
Explicitly Not NULL
If we change our query slightly, we can get a less exotic query plan:
SELECT COUNT_BIG(*) AS records
FROM dbo.Users AS u
WHERE u.Id NOT IN ( SELECT p.OwnerUserId
                    FROM dbo.Posts AS p
                    WHERE p.Score < 0
                    AND p.OwnerUserId IS NOT NULL );
Which is a little bit faster overall, at ~350ms vs 550ms.
This is the equivalent of writing your query the way you should have written it in the first place, using NOT EXISTS.
SELECT COUNT_BIG(*) AS records
FROM dbo.Users AS u
WHERE NOT EXISTS ( SELECT 1 / 0
                   FROM dbo.Posts AS p
                   WHERE p.OwnerUserId = u.Id
                   AND p.Score < 0 );
Recap
The optimizer is a weird place. It’s the ultimate defensive driver. Dealing with NULLs can be quite odd and difficult.
I generally save IN and NOT IN for when I have short lists of literal values. When the values live in another table, EXISTS/NOT EXISTS is usually the better tool, and when there are a lot of values, dumping them into a #temp table can be more efficient, if only to make better indexing available.
While IN doesn’t have the same side effects as NOT IN with NULLs, I think using EXISTS instead is just a good habit.
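A rough sketch of that #temp table habit against the same tables (the Score filter is just an example):
SELECT p.OwnerUserId
INTO #owners
FROM dbo.Posts AS p
WHERE p.Score < 0
AND p.OwnerUserId IS NOT NULL;

CREATE CLUSTERED INDEX cx ON #owners (OwnerUserId);

SELECT COUNT_BIG(*) AS records
FROM dbo.Users AS u
WHERE NOT EXISTS ( SELECT 1/0
                   FROM #owners AS o
                   WHERE o.OwnerUserId = u.Id );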
Thanks for reading!
The next time you’re writing a query, I want you to try something in your head: Break the columns down into two groups.
Informational: Columns you’re only selecting
Relational: Columns that show up below the “from” clause
There may be some crossover, and that’s okay. If you’re selecting a short enough list of columns, and doing a group by, there might be total crossover.
But I’m willing to bet most of your queries don’t look that neat and tidy.
Your queries probably have a distinctly more robust list of columns, probably with a DISTINCT to fend off the dozen or so left joins added for “just in case” columns.
And that’s… Okay, let’s reserve judgement. Instead, let’s figure out a better way to get them all.
Narrowin’
That long list of columns in your query has weight to it. Not physical weight, exactly. But to the optimizer, it represents a bunch of weight in CPU and I/O.
The more columns you want to drag around, the harder you make your queries work. Especially because the architect who migrated your app from Access to SQL Server 2000 made every column as wide as possible.
That weight changes a lot of things:
How much memory your query asks for
Which indexes your query uses
The kind of operators the optimizer chooses
And I know, you need to show people that data. I’m not opposed to that.
I’m only opposed to when you’re doing it.
Filterin’
The reason I want you to separate these concepts mentally is so that, when you’re writing a query that has to show people a lot of columns, you don’t try to show them all immediately. Instead, take just the columns you need to relate tables and put them in a temp table. It’s fine to have your joins and where clause stuff here.
The idea is to get a narrow list of columns for a filtered set of rows. We’ll worry about the informational stuff after we get the relational stuff.
Something like this:
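(A sketch of the pattern, with made-up filters; the shape is what matters.)
/* Relational pass: narrow columns, filtered rows */
SELECT u.Id AS UserId, p.Id AS PostId
INTO #sparse
FROM dbo.Users AS u
JOIN dbo.Posts AS p
    ON p.OwnerUserId = u.Id
WHERE u.Reputation > 10000
AND p.Score > 100;

/* Informational pass: fetch the wide columns for just those rows */
SELECT u.DisplayName, u.AboutMe, p.Title, p.Body
FROM #sparse AS s
JOIN dbo.Users AS u
    ON u.Id = s.UserId
JOIN dbo.Posts AS p
    ON p.Id = s.PostId;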
Splainin’
If you avoid getting long lists of columns up front, you can:
Avoid overly wide indexes, and just index for the important stuff in the join/where clause
Be less sensitive to parameter sniffing, because there will be less variability in plan choices
Need fewer resources to push queries with, because less data is moving about
Thanks for reading!
When you’re trying to gauge some high level performance metrics on a new server, you probably have your favorite scripts.
They could be various Blitzes, they could be some stuff you’ve had sitting in your script folder, maybe they’re from the Tiger Toolbox. Though I would be cool with new ones, as long as they’re not memes.
Whatever.
The point is that you, dear reader, are smart and engaged enough to know about and use these things.
A lot of people aren’t.
I’m not talking about another thing to go find and install. I mean these should come with the product.
Perf Schema
It would be really cool if SQL Server had a system schema called perf. In there you could have views to all sorts of neat things.
It would exist in every database, and it would have views in it to fully assemble the mess of DMVs that accompany:
Query Store
Plan Cache
Index Usage
Missing Indexes
File Stats
Wait Stats
Locking
Deadlocks
Assembling all those views is painful for beginners (heck Query Store is painful for everyone). Worse, they may find scripts on the internet that are wrong or outdated (meaning they may not have new columns, or they may give outdated advice on things).
What would make these particularly helpful is that they could aggregate metrics at the database level. Server-wide counters are cool until your server is really wide, and it’s impossible to tell where stuff like wait stats are coming from. This wouldn’t be too difficult to implement, since Azure SQLDB already has to have a bunch of self-contained stuff, due to the lack of cross-database queries.
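To be clear about the wish, day one triage might look something like this. Every object and column name here is hypothetical, because none of it ships today:
/* Hypothetical perf schema views -- nothing below exists in SQL Server */
SELECT TOP (10) *
FROM perf.wait_stats /* scoped to this database, not the whole server */
ORDER BY wait_time_ms DESC;

SELECT TOP (10) *
FROM perf.missing_indexes
ORDER BY improvement_measure DESC;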
Best of all, Microsoft can keep them up to date based on which version and edition of SQL Server you’re on, and if certain changes get back ported.
Thanks for reading!
Having some key lookups in your query plans is generally unavoidable.
You’ll wanna select more columns than you wanna put in a nonclustered index, or ones with large data types that you don’t wanna bloat them with.
Enter the key lookup.
They’re one of those things — I’d say even the most common thing — that makes parameterized code sensitive to the bad kind of parameter sniffing, so they get a lot of attention.
The thing is, most of the attention that they get is just for columns you’re selecting, and most of the advice you get is to “create covering indexes”.
That’s not always possible, and that’s why I did this session a while back on a different way to rewrite queries to sometimes make them more efficient. Especially since key lookups may cause blocking issues.
Milk and Cookies
At some point, everyone will come across a key lookup in a query plan, and they’ll wonder if tuning it will fix performance.
There are three things to pay attention to when you look at a key lookup:
Number of executions: This is usually more helpful in an actual plan
If there are any Predicates involved: That means there are parts of your where clause not in your nonclustered index
If there’s an Output List involved: That means you’re selecting columns not in your nonclustered index
For number of executions, generally higher numbers are worse. This can be misleading if you’re looking at a cached plan because… You’re going to see the cached number, not the runtime number. They can be way different.
Notice I’m not worried about the Seek Predicates here — that just tells us how the clustered index got joined to the nonclustered index. In other words, it’s the clustered index key column(s).
Figure It Out
Here’s our situation: we’re working on a new stored procedure.
CREATE PROCEDURE dbo.predicate_felon (@Score INT, @CreationDate DATETIME)
AS
BEGIN
    SELECT *
    FROM dbo.Comments AS c
    WHERE c.Score = @Score
    AND c.CreationDate >= @CreationDate
    ORDER BY c.CreationDate DESC;
END;
Right now, aside from the clustered index, we only have this nonclustered index. It’s great for some other query, or something.
CREATE INDEX ix_whatever
ON dbo.Comments (Score, UserId, PostId)
GO
When we run the stored procedure like this, it’s fast.
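Something like this, with a rare score (6 comes up again in a minute):
EXEC dbo.predicate_felon @Score = 6, --Not El Zero
                         @CreationDate = '2013-12-31';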
SQL Server wants an index — a fully covering index — but if we create it, we end up with a 7.8GB index that has every column in the Comments table in it. That includes the Text column, which is an NVARCHAR(700). Sure, it fixes the key lookup, but golly and gosh, that’s a crappy index to have hanging around.
Bad Problems On The Rise
The issue turns up when we run the procedure like this:
EXEC dbo.predicate_felon @Score = 0, --El Zero
@CreationDate = '2013-12-31';
This happens because there are a lot more 0 scores than 6 scores.
Smarty Pants
Eagle-eyed readers will notice that the second query only returns ~18k rows, but it takes ~18 seconds to do it.
The problem is how much time we spend locating those rows. Sure, we can Seek into the nonclustered index to find all the 0s, but there are 20.5 million of them.
Looking at the actual plan, we can spot a few things.
We only get down to those 18k rows via the key lookup’s predicate, and the lookup has to execute 20.5 million times to evaluate it.
If we just index the key columns, the key lookup to get the other columns (PostId, Text, UserId) will only execute ~18k times. That’s not a big deal at all.
CREATE NONCLUSTERED INDEX keys_only
ON dbo.Comments ( Score, CreationDate );
This index is only ~500MB, which is a heck of a lot better than nearly 8GB covering the entire thing.
With that in place, both the score 6 and score 0 plans are fast.
Why This Is Effective, and When It Might Not Be
This works here because the date filter is restrictive.
When we can eliminate more rows via the index seek, the key lookup is less of a big deal.
If the date predicate were much less restrictive, say going back to 2011, boy oh boy, things get ugly for the 0 query again.
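Say, something like this (the exact date is just for illustration):
EXEC dbo.predicate_felon @Score = 0,
                         @CreationDate = '2011-01-01';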
Of course, returning that many rows will suck no matter what, so this is where other techniques, like paging (or charging users by the row), come into play.
What? Why are you looking at me like that?
Thanks for reading!
When you find unused indexes, whether using Some Script From The Internet™, sp_BlitzIndex, or Database Telepathy, the first thing most people think of is “wasted space”.
Sure, okay, yeah. That’s valid. They’re in backups, restores, they get hit by CHECKDB. You probably rebuild them if there’s a whisper of fragmentation.
But it’s not the end of the story.
Not by a long shot.
Today we’re going to look at how redundant indexes can clog the buffer pool up.
Holla Back
If you want to see the definitions for the views I’m using, head to this post and scroll down.
Heck, stick around and watch the video too.
LIKE AND SUBSCRIBE.
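If you’d rather not click through, the memory view boils down to sys.dm_os_buffer_descriptors plumbing. A rough sketch, not the exact definition from that post:
SELECT o.name AS object_name,
       i.name AS index_name,
       COUNT_BIG(*) * 8 / 1024 AS buffer_pool_mb /* 8KB pages to MB */
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.allocation_units AS au
    ON au.allocation_unit_id = bd.allocation_unit_id
JOIN sys.partitions AS p
    ON p.hobt_id = au.container_id /* in-row and row-overflow data */
JOIN sys.objects AS o
    ON o.object_id = p.object_id
JOIN sys.indexes AS i
    ON i.object_id = p.object_id
   AND i.index_id = p.index_id
WHERE bd.database_id = DB_ID()
GROUP BY o.name, i.name
ORDER BY buffer_pool_mb DESC;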
Now, sp_BlitzIndex has two warnings to catch these “bad” indexes:
Unused Indexes With High Writes
NC Indexes With High Write:Read Ratio
Unused indexes are just what they sound like: they’re not helping queries read data at all. Of course, if you’ve rebooted recently, or rebuilt indexes on buggy versions of SQL Server, you might get this warning on indexes that will get used. I can’t fix that, but I can tell you it’s your job to keep an eye on usage over time.
Indexes with a high write to read ratio are also pretty self-explanatory. They’re sometimes used, but they’re written to a whole lot more. Again, you should keep an eye on this over time, and try to understand both how important they might be to your workload, or how much they might be hurting your workload.
I’m not going to set up a fake workload to generate those warnings, but I am going to create some overlapping indexes that might be good candidates for you to de-clutter.
Index Entrance
The Votes table is pretty narrow, but it’s also pretty big — 53 million rows or so as of Stack 2013.
Here are my indexes:
CREATE INDEX who ON dbo.Votes(PostId, UserId) INCLUDE(BountyAmount);
CREATE INDEX what ON dbo.Votes(UserId, PostId) INCLUDE(BountyAmount);
CREATE INDEX [where] ON dbo.Votes(CreationDate, UserId) INCLUDE(BountyAmount);
CREATE INDEX [when] ON dbo.Votes(BountyAmount, UserId) INCLUDE(CreationDate);
CREATE INDEX why ON dbo.Votes(PostId, CreationDate) INCLUDE(BountyAmount);
CREATE INDEX how ON dbo.Votes(VoteTypeId, BountyAmount) INCLUDE(UserId);
First, I’m gonna make sure there’s nothing in memory:
CHECKPOINT;
GO 2
DBCC DROPCLEANBUFFERS;
GO
Don’t run that in production. It’s stupid if you run that in production.
Now when I go to look at what’s in memory, nothing will be there:
SELECT *
FROM dbo.WhatsUpMemory AS wum
WHERE wum.object_name = 'Votes'
I’m probably not going to show you the results of an empty query set. It’s not too illustrative.
I am going to show you the index sizes on disk:
SELECT *
FROM dbo.WhatsUpIndexes AS wui
WHERE wui.table_name = 'Votes';
And I am going to show you this update:
UPDATE v
SET v.BountyAmount = 2147483647
FROM dbo.Votes AS v
WHERE v.BountyAmount IS NULL
AND v.CreationDate >= '20131231'
AND v.VoteTypeId > 2;
After The Update
This is when things get more interesting for the memory query.
We’re updating the column BountyAmount, which is present in all of the indexes I created. This is almost certainly an anti-pattern, but it’s good to illustrate the problem.
Pieces of every index end up in memory. That’s because all data needs to end up in memory before SQL Server will work with it.
It doesn’t need the entirety of any of these indexes in memory — we’re lucky enough to have indexes to help us find the 10k or so rows we’re updating. I’m also lucky enough to have 64GB of memory dedicated to this instance, which can easily hold the full database.
But still, if you’re not lucky enough to be able to fit your whole database in memory, wasting space in the buffer pool for unused (AND OH GODD PROBABLY FRAGMENTED) indexes just to write to them is a pretty bad idea.
After all, it’s not just the buffer pool that needs memory.
You also need memory for memory grants (shocking huh?), and other caches and activities (like the plan cache, and compressed backups).
Cleaning up those low-utilization indexes can help you make better use of the memory that you have.
Thanks for reading!
In yesterday’s post, we looked at MSTVFs. Today, we’re going to look at inlined functions.
And I know, that sounds weird. But it’s because we’re going to circle back to scalar valued functions, and how they’re inlined in SQL Server 2019 as well.
There’s stuff in here you’ll have to look out for when you move to SQL Server 2019.
Side By Side
The first thing we need is the function itself, which is just an inlined version of the others.
CREATE OR ALTER FUNCTION dbo.CommentsAreHorribleInline(@Id INT)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
SELECT (SELECT SUM(Score) FROM dbo.Posts AS p WHERE p.OwnerUserId <= @Id)
-
(SELECT COUNT_BIG(*) FROM dbo.Comments AS c WHERE c.UserId <= @Id) AS Tally
GO
Where these differ from both other kinds of functions is that you can’t find them ANYWHERE on their own.
They don’t appear in any DMVs, or in the plan cache as isolated code. Since they’re inlined, it’s just a part of whatever query references it.
Let’s start simple, though:
SELECT * FROM dbo.CommentsAreHorribleInline(22656);
SELECT * FROM dbo.CommentsAreHorribleInline(138);
Like in the other posts, the chief difference between these two plans is the index access choice.
The 22656 plan scans the clustered index, and the 138 plan does a nonclustered index seek with a key lookup.
Check the other posts in the series for pictures of that if you’re interested.
As in the other examples, the clustered index scan is considerably slower than the nonclustered index seek plan.
But that’s not where things are interesting. Where they’re interesting is when we call the function in a “larger” query.
SELECT TOP (5) u.DisplayName,
(SELECT * FROM dbo.CommentsAreHorribleInline(u.Id))
FROM dbo.Users AS u
GO
Spool And Spool Alike
This query runs for a full minute.
We see why early on — an eager index spool.
This appears to happen because the optimizer (drunk as hell, probably) doesn’t like the idea of scanning the clustered index, or recycling the seek + lookup 5 times.
The spooled index reflects what a two-column clustered index keyed on OwnerUserId would look like.
Think about it like if you selected OwnerUserId and Score into a #temp table and put a clustered index on the table with the key on OwnerUserId.
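In code, that analogy looks about like this (a sketch, not what the engine literally runs):
SELECT p.OwnerUserId, p.Score
INTO #pretend_spool
FROM dbo.Posts AS p;

CREATE CLUSTERED INDEX cx ON #pretend_spool (OwnerUserId);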
Remember that part of the reason for this plan change is that we’re no longer just subtracting one aggregate from another, like when we call the function on its own; there’s the additional “join” to the Users table. Correlated subqueries are just that: joins.
Futurist
I’d love to have more to say about how this changes in SQL Server 2019, but an identical pattern occurs, similar to what I blogged about earlier in the summer.
I caught some private flack about how the TOP (1) pattern in that post could easily be replaced with an aggregate like MIN.
While that’s totally true, there’s no similar replacement for this pattern. We could expand the index to cover the Score column to get rid of the spool, which goes back to another post I wrote about 2019 optimizer features, where you’ll need wider (covering) indexes for them to be used.
Thanks for reading!
In yesterday’s post, we looked at simple scalar function plan caching.
Today, we’ll look at MSTVFs. If you’re not sure what that means, look at the title of the post real quick.
Yeah, up there.
On we go.
Dirty Version
The function will do the same thing as before, just rewritten to be an MSTVF.
CREATE OR ALTER FUNCTION dbo.CommentsAreHorribleMulti(@Id INT)
RETURNS @Tally TABLE(Tally BIGINT)
WITH SCHEMABINDING
AS
BEGIN
INSERT @Tally ( Tally )
SELECT (SELECT SUM(Score) FROM dbo.Posts AS p WHERE p.OwnerUserId <= @Id)
-
(SELECT COUNT_BIG(*) FROM dbo.Comments AS c WHERE c.UserId <= @Id)
RETURN
END
GO
Now, where these differ immediately from SVFs (scalar valued functions), is that they don’t show up in the plan cache by name.
Note that these are both “statements”.
Also, unlike SVFs, they don’t show up in dm_exec_function_stats. This is documented behavior, but whatever.
And even though they’re called a “Proc” in dm_exec_cached_plans, they only show up in dm_exec_query_stats, not dm_exec_procedure_stats (which is why BlitzCache calls them a Statement).
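If you need to hunt one down, searching the statement text works. A quick sketch:
SELECT qs.execution_count,
       qs.total_worker_time,
       st.text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.text LIKE N'%CommentsAreHorribleMulti%';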
Different Sniffin’
Unlike SVFs, which don’t have a restriction on the function body using parallelism, all table variable modifications are forced to run serially (unless you’re sneaky).
That means both insert queries will be serialized, with the main difference being index access.
Like before, if we cache either plan, it will get reused. And just like before, the clustered index scan plan is significantly slower.
SELECT TOP (5) u.DisplayName,
(SELECT * FROM dbo.CommentsAreHorribleMulti(u.Id))
FROM dbo.Users AS u
Plan Noir
Just like scalar functions, these can have different plans cached and reused, and may fall victim to parameter sniffing.
Again, this depends a lot on how the function is called and used. It’s just something to be aware of when tuning queries that call functions.
Execution times may vary greatly depending on… well…
Parameters.
Thanks for reading!
But to make matters worse, we’re going to combine them with a problem no one is pretending to solve: bad parameter sniffing.
After all, that’s what can go wrong when you cache a plan, right?
Our Dummy
This is our function. It’s set up to do something that should happen on Stack sites: users should be penalized for leaving comments.
Okay, so this isn’t exactly how it would work. But stick with me, because it makes a better demo.
CREATE OR ALTER FUNCTION dbo.CommentsAreHorribleScalar(@Id INT)
RETURNS BIGINT
WITH SCHEMABINDING, RETURNS NULL ON NULL INPUT
AS
BEGIN
DECLARE @Tally BIGINT
SELECT @Tally =
(SELECT SUM(Score) FROM dbo.Posts AS p WHERE p.OwnerUserId = @Id)
-
(SELECT COUNT_BIG(*) FROM dbo.Comments AS c WHERE c.UserId = @Id)
RETURN @Tally
END
GO
Really, we’re just getting a sum of all the scores in the Posts table for a user, then subtracting the count of comments they’ve left.
Because comments are horrible.
Anyway.
Individually
If we run these two queries, the plan will get reused.
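Meaning this pair, using the same ids that show up in the sniffing section below:
SELECT dbo.CommentsAreHorribleScalar(22656) AS tally;

SELECT dbo.CommentsAreHorribleScalar(138) AS tally;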
We can free the cache, run them in the other order, and the same will happen in reverse.
Of course, each one gets a different plan.
Tangentially, these functions will end up in sys.dm_exec_function_stats, which was introduced in SQL Server 2016, and they can also be identified by name in the plan cache.
SELECT *
FROM sys.dm_exec_function_stats AS defs;
EXEC sp_BlitzCache @StoredProcName = 'CommentsAreHorribleScalar';
Sniffing
If we cache a plan for 22656 or 138, and then run a query like this:
SELECT TOP (100) u.DisplayName,
dbo.CommentsAreHorribleScalar(u.Id)
FROM dbo.Users AS u;
The query will reuse whatever the cached plan is.
Again, using BlitzCache to track the plan down:
In this case, the query is simple enough that the plan difference doesn’t change performance very much.
Let’s change our function a little bit to see a better example.
Appalled
Here it is:
CREATE OR ALTER FUNCTION dbo.CommentsAreHorribleScalar(@Id INT)
RETURNS BIGINT
WITH SCHEMABINDING, RETURNS NULL ON NULL INPUT
AS
BEGIN
DECLARE @Tally BIGINT
SELECT @Tally =
(SELECT SUM(Score) FROM dbo.Posts AS p WHERE p.OwnerUserId <= @Id)
-
(SELECT COUNT_BIG(*) FROM dbo.Comments AS c WHERE c.UserId <= @Id)
RETURN @Tally
END
GO
The plans are way different now:
22656 scans the clustered index, and 138 seeks into the nonclustered index with a key lookup.
For the record, the bad plan is the clustered index scan, and the fast plan is the nonclustered index seek.
Running a smaller query (because the top hundred made me impatient), the difference is obvious.
SELECT TOP (5) u.DisplayName,
dbo.CommentsAreHorribleScalar(u.Id)
FROM dbo.Users AS u;
The version of the query that gets the bad function plan has a 10 second compute scalar, and the version of the query that gets the good function plan finishes in 900ms.
Functions Can Get Different Plans
Depending on how you call them, and how much data they might have to touch, this can be a big deal or no deal.
Because I know I’ll get some semi-related questions:
The body of a scalar function has no restrictions on parallelism; only the calling statements are forced to run serially
Functions run “once per row”, with the number of rows being dependent on where in the plan the compute scalar appears
Thanks for reading!
In dramatic fashion, I’m revisiting something from this post about stats updates.
It’s a quick post, because uh… Well. Pick a reason.
Get In Gear
Follow along as I repeat all the steps in the linked post to:
Load > 2 billion rows into a table
Create a stats object on every column
Load enough new data to trigger a stats refresh
Query the table to trigger the stats refresh
Except this time, I’m adding a mAxDoP 1 hint to it:
SELECT COUNT(*)
FROM dbo.Vetos
WHERE UserId = 138
AND PostId = 138
AND BountyAmount = 138
AND VoteTypeId = 138
AND CreationDate = 138
OPTION(MAXDOP 1);
Here’s Where Things Get Interesting
Our MaXdOp 1 query registers nearly the same amount of time on stats updates and parallelism.
But our plan is indeed serial. Because we told it to be.
By setting maxDOP to 1.
Not Alone
So, if you’re out there in the world wondering why this crazy kinda thing goes down, here’s one explanation.
Are there others? Probably.
But you’ll have to find out by setting MAXdop to 1 on your own.
Thanks for reading!