Compiles! Recompiles! Stored Procedures! Temp Tables! Extended Events! In SQL Server
Thanks for watching!
Going Further
If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that, and need to solve database performance problems quickly. You can also get a quick, low cost health check with no phone time required.
I will leave the finer points of the problems with MERGE statements to the Michaels and Aarons of the SQL Server world.
This is just a… But why? post about them, because I’m honestly a bit puzzled by this missing implementation detail.
To get us to the point, I’m going to use a code snippet (with embellishments) from Aaron’s post here.
We’ll be starting with this table and trigger from the linked post, with a couple small tweaks to satisfy my OCD:
CREATE TABLE
dbo.MyTable
(
id integer
);
INSERT
dbo.MyTable
VALUES
(1),
(4);
CREATE OR ALTER TRIGGER
dbo.MyTable_All
ON dbo.MyTable
FOR INSERT, UPDATE, DELETE
AS
BEGIN
SET NOCOUNT ON;
IF ROWCOUNT_BIG() = 0 RETURN;
IF TRIGGER_NESTLEVEL() > 1 RETURN;
PRINT 'Executing trigger. Rows affected: ' + RTRIM(@@ROWCOUNT);
IF EXISTS (SELECT 1 FROM inserted) AND NOT EXISTS (SELECT 1 FROM deleted)
BEGIN
PRINT ' I am an insert...';
END;
IF EXISTS (SELECT 1 FROM inserted) AND EXISTS (SELECT 1 FROM deleted)
BEGIN
PRINT ' I am an update...';
END;
IF NOT EXISTS (SELECT 1 FROM inserted) AND EXISTS (SELECT 1 FROM deleted)
BEGIN
PRINT ' I am a delete...';
END;
END;
After all, one of the best ways to make sure you get code right is to copy and paste it from the internet.
Quiet On The Set
As much as we all love to dunk on MERGE, the way we do cursors, heaps, and UDFs of various types, it did give you some neat options with the OUTPUT clause, like the $action column, and the ability to get columns from other tables involved in the query. You can’t do that with a normal insert, update, or delete when using the OUTPUT clause, though I think it would be cool if we could.
Working a bit with the code linked above, here’s an expansion on it showing the additional OUTPUT capability, but this is also where my annoyance begins.
BEGIN TRANSACTION
DECLARE
@t table
(
action varchar(6),
i_id integer,
d_id integer,
s_word varchar(5)
);
SELECT
mt.*
FROM dbo.MyTable AS mt;
MERGE
dbo.MyTable WITH (HOLDLOCK) AS Target
USING
(
VALUES
(1, 'one'),
(2, 'two'),
(3, 'three')
) AS Source (id, word)
ON Target.id = Source.id
WHEN MATCHED
THEN UPDATE
SET Target.id = Source.id
WHEN NOT MATCHED
THEN INSERT
(id)
VALUES
(Source.id)
WHEN NOT MATCHED BY SOURCE
THEN DELETE
OUTPUT
$action,
Inserted.id,
Deleted.id,
Source.word
INTO @t
(
action,
i_id,
d_id,
s_word
);
SELECT
t.*
FROM @t AS t;
SELECT
mt.*
FROM dbo.MyTable AS mt;
ROLLBACK TRANSACTION;
You likely can’t guess what I’m sore about just looking at this, because this isn’t what annoys me.
This is all fine, and rather a nice showing of capabilities for an oft-maligned (by myself included) bit of syntax.
The problem is really in the trigger.
What’s Wrong With Triggers?
If you work with trigger code enough, you’ll get used to seeing:
Catch all triggers that do a lot of gymnastics to differentiate insert from update from delete
A few separate triggers to catch each modification type separately, and still do some checking to verify
The problem is that inside a trigger, the $action column isn’t exposed at all, so there’s no direct way to tell which action of a MERGE statement you’re dealing with.
Sure, you can add a column to a table to track it, or use some other hacky workaround, but I consider it quite a poor design choice not to surface the $action column as part of the Inserted and Deleted virtual tables.
Having it there would also benefit generic trigger code that captures modifications in some manner, by making the exact type of modification clear to query writers.
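For what it’s worth, here’s a minimal sketch of the kind of gymnastics I mean: deriving a pseudo-$action per row inside a trigger by joining the inserted and deleted virtual tables. It assumes id is the key and never gets updated, which is exactly the sort of assumption that makes this a workaround rather than a feature.
/* inside a trigger body: classify each row the hard way */
SELECT
    pseudo_action =
        CASE
            WHEN d.id IS NULL THEN 'INSERT'
            WHEN i.id IS NULL THEN 'DELETE'
            ELSE 'UPDATE'
        END,
    id = COALESCE(i.id, d.id)
FROM inserted AS i
FULL JOIN deleted AS d
    ON i.id = d.id;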
Thanks for reading!
The nice folks at the New England SQL Server User Group (w|t) had me in town a week ago to present my workshop, The Foundations Of SQL Server Performance.
We had about 40 folks show up — which ain’t bad for a random Friday in May — including one attendee from Nigeria.
Not just like, originated in Nigeria. Like, flew from Nigeria for the workshop. That’s probably a new record for me, aside from PASS Precons where folks are already headed in from all corners.
You and me might feel like the lower back end of a thing if we’re tuning a query that has other problems. Perhaps it’s running on one of those serverless servers with half a hyper-threaded core and 8kb of RAM, as an example.
When I’m working with clients, I often get put into odd situations that limit what I’m allowed to do to fix query performance. Sometimes code comes from an ORM or vendor binaries that can’t be changed, sometimes adding an index on a sizable table on standard edition in the middle of the day is just an impossibility, and of course other times things are just such a spectacle du derrière that I’m allowed to do whatever I want. You can probably guess which one I like best.
This post is about the two other ones, where you’re stuck between derrière and ânesse.
For the duration of reading this, make the wild leap of faith that it takes to embrace the mindset that not everyone who works with SQL Server knows how to write good queries or design good indexes.
I know, I know. Leap with me, friends.
The Query And Execution Plan
Here’s what we’re starting with:
SELECT TOP (10)
DisplayName =
(
SELECT
u.DisplayName
FROM dbo.Users AS u
WHERE u.Id = p.OwnerUserId
),
p.AcceptedAnswerId,
p.CreationDate,
p.LastActivityDate,
p.ParentId,
p.PostTypeId,
p.Score,
p.CommentCount,
VoteCount =
(
SELECT
COUNT_BIG(*)
FROM dbo.Votes AS v
WHERE v.PostId = p.Id
)
FROM dbo.Posts AS p
ORDER BY
p.Score DESC;
And resulting plan:
We can surmise a few things from this plan:
If there are good indexes, SQL Server isn’t using them
That hash spill is some extra kind of bad news
Spools remain a reliable indicator that something is terribly wrong
Okay, so I’m kidding a bit on the last point. Sorta.
The Query Plan Details
You might look at all this work that SQL Server is doing and wonder why: With no good, usable indexes, and such big tables, why in the overly-ambitious heck are we doing all these nested loop joins?
And the answer, my friend, is blowing in the row goal.
The TOP has introduced one here, and it has been applied across all of the operators along the top of the plan.
Normally, a row goal means the optimizer places a bet on it being very easy to locate a small number of rows, and produces an execution plan based on those reduced costs.
In this case, it would be 10 rows in the Posts table that will match the Users table and the Votes table, but since these are joins of the left outer variety they can’t eliminate results from the Posts table.
The row goals do make for some terrible costing and plan choices here, though.
This all comes from cardinality estimation and costing and all the other good stuff that the optimizer does when you throw a query at it.
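If you want to confirm that the row goal is what’s bending the plan out of shape, there’s a documented hint that disables row goal optimization for the whole query. This is just the original query with an OPTION clause bolted on, useful as a diagnostic rather than something you’d ship (it needs a version that supports USE HINT, 2016 SP1 or better):
SELECT TOP (10)
    DisplayName =
    (
        SELECT
            u.DisplayName
        FROM dbo.Users AS u
        WHERE u.Id = p.OwnerUserId
    ),
    p.AcceptedAnswerId,
    p.CreationDate,
    p.LastActivityDate,
    p.ParentId,
    p.PostTypeId,
    p.Score,
    p.CommentCount,
    VoteCount =
    (
        SELECT
            COUNT_BIG(*)
        FROM dbo.Votes AS v
        WHERE v.PostId = p.Id
    )
FROM dbo.Posts AS p
ORDER BY
    p.Score DESC
OPTION (USE HINT ('DISABLE_OPTIMIZER_ROWGOAL')); /* diagnostic only */
If the plan shape changes dramatically with the hint in place, you know the row goal is the thing to work around.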
The Query Rewrite
One way to show the power of TOPs is to increase and then decrease the row goal. For example, this (on my machine, at this very moment in time, given many local factors) will change the query plan entirely:
SELECT TOP (10)
p.*
FROM
(
SELECT TOP (26)
DisplayName =
(
SELECT
u.DisplayName
FROM dbo.Users AS u
WHERE u.Id = p.OwnerUserId
),
p.AcceptedAnswerId,
p.CreationDate,
p.LastActivityDate,
p.ParentId,
p.PostTypeId,
p.Score,
p.CommentCount,
VoteCount =
(
SELECT
COUNT_BIG(*)
FROM dbo.Votes AS v
WHERE v.PostId = p.Id
)
FROM dbo.Posts AS p
ORDER BY
p.Score DESC
) AS p
ORDER BY
p.Score DESC;
You may need to fiddle with the inner TOP a bit to see the change on your machine. The resulting plan looks a bit funny. You won’t normally see two TOPs nuzzling up like this.
But the end result is an improvement by a full minute and several seconds.
Because the inner TOP has a bigger row goal, the optimizer changes its mind about how much effort it will have to expend to fully satisfy it before clenching things down to satisfy the smaller TOP.
If you’re only allowed quick query rewrites, this can be a good way to get a more appropriate plan for the amount of work required to actually locate rows at runtime, when the optimizer is dreadfully wrong about things.
The Index Rewrite
In this case, just indexing the Votes table is enough to buy us all the performance we need, but in service of my personal row goal for completeness, I’m going to add two indexes:
CREATE INDEX
v
ON dbo.Votes
(PostId)
WITH
(SORT_IN_TEMPDB = ON, DATA_COMPRESSION = PAGE);
CREATE INDEX
p
ON dbo.Posts
(Score DESC, OwnerUserId)
WITH
(SORT_IN_TEMPDB = ON, DATA_COMPRESSION = PAGE);
Going back to the original query, we no longer need to play games with the optimizer by pitting TOPs against each other.
This is obviously much faster, if you’re in the enlightened and enviable position of being able to create indexes.
Perhaps you are, but maybe not in the exact moment that you need to fix a performance problem.
In those cases, you may need to use rewrites to get temporary performance improvements until you’re able to.
Thanks for reading!
This is a short post. I know you’ve been somewhat spoiled by longer posts and videos lately!
Just kidding, y’all don’t pay attention (with the exception of Kevin Feasel), so maybe this will be consumable enough for even the most squirrel-brained amongst us.
A long time ago, I complained that wait stats logged by actual execution plans don’t show lock waits. That seemed like a pretty big deal, because if you’re running a query and wondering why sometimes it’s fast and sometimes it’s slow, that could be a pretty huge hint.
But now, if you run a query that experienced lock waits, you can see that in the details. Just highlight the root operator, hit F4 or right click and go to Properties, and look under the wait stats node, you’ll see this:
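If you’d rather query for it than click for it, here’s a rough sketch that shreds the WaitStats node out of an actual execution plan you’ve captured; the element and attribute names come from the showplan XML schema, and the placeholder XML is just there so the snippet runs as-is:
DECLARE
    @plan xml = N'<ShowPlanXML xmlns="http://schemas.microsoft.com/sqlserver/2004/07/showplan" />'; /* paste an actual plan's XML here */

WITH XMLNAMESPACES
    (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT
    wait_type = w.x.value('@WaitType', 'nvarchar(128)'),
    wait_time_ms = w.x.value('@WaitTimeMs', 'bigint'),
    wait_count = w.x.value('@WaitCount', 'bigint')
FROM @plan.nodes('//WaitStats/Wait') AS w(x);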
When did this get added? I have no idea.
How far was it back ported? I have no idea.
I could look on VMs with older versions of SQL Server, but it’s dinner time. Or as they call it in Saskatchewan, “supper”.
Thanks for reading!
Since Kevin is often kind enough to link to my posts via his Curated SQL feed/aggregator, I figured I’d add to the SEO pyramid scheme this month.
As part of my consulting services, I often help clients decide if they could keep a full-time database person (there are too many potential titles to list here) busy for 40 hours a week for the foreseeable future.
If the Magic 8-Ball comes back with a Yes, I’ll also help them write or review their job posting, and screen candidates. If a resume makes it past various detectors and doesn’t end up in the randomly generated unlucky pile, I’ll interview them.
Me. Personally.
Not in-person, though.
There’s not enough armed security in the world for me to get that close to HR.
The Production DBA Question
If someone is going to be in charge of production DBA tasks, I’ll of course ask questions about experience with whatever model is in use, or is to be implemented by the company.
So like, if the company has Availability Groups, or wants Availability Groups, we’ll talk about those.
For flavor, I’ll also ask them why anyone would be so insane as to not just use a Failover Cluster with Log Shipping.
But the real question and answer that tells me if someone knows their business is this: When you set up backups, how often do you take log backups?
If anyone says “every 15 minutes” without any further clarification or qualification, they immediately go into the “No” pile.
See, 15-minute log backups are a meme in the same pantheon as 5% and 30% for index fragmentation. Neither answer shows any deeper understanding of what exactly they’re doing.
Log backup frequency is a business requirement based on RPO (Recovery Point Objective) goals. If your RPO goal is less than 15 minutes of data loss, 15-minute log backups don’t meet that goal.
And anyone who says that Availability Groups help meet RPO goals gets their resume burned in front of them.
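To make the point concrete, here’s a rough sketch of the kind of check I’d rather hear a candidate describe: comparing the age of the most recent log backup to an RPO number the business actually agreed to. The @rpo_minutes value is a made-up placeholder, and the query assumes you can read msdb:
DECLARE
    @rpo_minutes integer = 5; /* placeholder: whatever the business signed off on */

SELECT
    d.name,
    last_log_backup =
        MAX(b.backup_finish_date),
    minutes_since_last_log_backup =
        DATEDIFF(MINUTE, MAX(b.backup_finish_date), SYSDATETIME()),
    within_rpo =
        CASE
            WHEN DATEDIFF(MINUTE, MAX(b.backup_finish_date), SYSDATETIME()) <= @rpo_minutes
            THEN 'yes'
            ELSE 'no'
        END
FROM sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
    ON b.database_name = d.name
    AND b.type = 'L' /* log backups only */
WHERE d.database_id > 4
AND   d.recovery_model_desc <> 'SIMPLE'
GROUP BY
    d.name;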
The Developer DBA Question
For anyone responsible for performance tuning, I need to make sure that they’re staying current with their learning and techniques.
I’ll show a lot of pictures of queries and query plans, ask about various anti-patterns and tuning techniques, but my absolute favorite is to show them pictures of query plans.
Actual execution plans, as it were.
Why is this important? Because at least half of the people I show this to will totally ignore the operator times and start talking about costs.
If you’re looking at a plan where this much information is available, and all you can dredge up to troubleshoot things are estimated costs, I’ll probably mail you a copy of this.
And I’m not kidding here — it shows a complete lack of attention and growth — what’s the point of calling yourself a performance tuning expert if your expertise peaked in 2008?
There Are Good People Out There
They may not spend every waking moment writing, recording, speaking, or chasing MVP status.
And that’s fine — I don’t care if you partake in that stuff or not — there are plenty of very smart people out there who don’t file quarterly paperwork or schedule posts months in advance.
What I care about is that you aren’t just an automaton seeking a new place to go through the motions in.
Thanks for reading!
Why Performance Tuners Need To Use The Right Type Of Join In SQL Server
Thanks for watching!
User Experience Under Different Isolation Levels In SQL Server
Thanks for watching!
I found a few stored procedures with single plan cache entries over 500 MB of memory each according to sys.dm_exec_cached_plans while optimizing the plan cache in production. The stored procedures were relatively simple and both execution plans were less than 1 MB when saved as sqlplan files. I was confused, offended, and confused.
Did you know that the memory used for cursors can be cached along with the compiled plan? I’ll include a direct quote of the ancient wisdom, first published on MSDN on Jan 09, 2007:
Inactive cursors are cached in the compiled plan. Just like MXCs there are two lists maintained here: the lookup list and the enum list. The lookup list stores the inactive cursors. When a cursor is de-allocated, the memory used to store the cursor is cached. On subsequent re-execution of the batch with this cursor, the cached cursor memory is reused and re-initialized as an active cursor. The enum list has the list of active and inactive cursors and is used to enumerate all cursors for a compiled plan.
Cursors are re-generatable given a compiled plan. Relative to compiled plans they are also relatively inexpensive to re-generate.
I checked sys.dm_exec_cached_plan_dependent_objects and found that nearly all of the cached memory was used for the cursors instead of the execution plans. This makes sense because there was no reason why a simple procedure should use 500 MB for a cached plan, but why were the cursors using so much memory?
Back to 2022
I was able to create a simple reproduction of the issue on SQL Server 2022 CU13 which is the most recent version currently available at the time of writing. Create a stored procedure that uses the OPENJSON() function along with an NVARCHAR(MAX) local variable as part of a cursor declaration. Here is one example:
CREATE OR ALTER PROCEDURE TestMemoryLeak_1
AS
BEGIN
SET NOCOUNT ON;
DECLARE @json NVARCHAR(MAX);
SET @json = N'[
{
"Order": {
"Number": "SO43659",
"Date": "2024-05-31T00:00:00"
},
"AccountNumber": "AW29825",
"Item": {
"Price": 2024.9940,
"Quantity": 1
}
},
{
"Order": {
"Number": "SO43661",
"Date": "2024-06-01T00:00:00"
},
"AccountNumber": "AW73565",
"Item": {
"Price": 2024.9940,
"Quantity": 3
}
}
]';
DECLARE json_cursor CURSOR FAST_FORWARD FOR
SELECT Number, OrderDate, Customer, Quantity
FROM OPENJSON(@json) WITH (
Number VARCHAR(200) '$.Order.Number',
OrderDate DATETIME '$.Order.Date',
Customer VARCHAR(200) '$.AccountNumber',
Quantity INT '$.Item.Quantity'
);
DECLARE
@Number VARCHAR(200),
@OrderDate DATETIME,
@Customer VARCHAR(200),
@Quantity INT;
OPEN json_cursor;
FETCH NEXT FROM json_cursor INTO @Number, @OrderDate, @Customer, @Quantity;
WHILE @@FETCH_STATUS = 0
BEGIN
-- nobody wants to work anymore
FETCH NEXT FROM json_cursor INTO @Number, @OrderDate, @Customer, @Quantity;
END;
CLOSE json_cursor;
DEALLOCATE json_cursor;
RETURN;
END;
GO
The amount of memory cached for the cursor execution can be seen with the following code:
DECLARE @plan_handle varbinary(64);

SELECT
    @plan_handle = plan_handle
FROM sys.dm_exec_procedure_stats
WHERE object_id = OBJECT_ID('[TestMemoryLeak_1]');

SELECT
    *
FROM sys.dm_os_memory_objects
WHERE type = 'MEMOBJ_CURSOREXEC'
AND memory_object_address IN
    (
        SELECT
            memory_object_address
        FROM sys.dm_exec_cached_plan_dependent_objects(@plan_handle)
    );
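To drive the repeated executions described below, I’m assuming something as simple as the SSMS GO <count> shortcut, which is a batch-separator feature of the client tools rather than T-SQL, and the count is up to you:
/* run once, measure, then run the rest and measure again */
EXEC dbo.TestMemoryLeak_1;
GO 10000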
I executed the test stored procedure 1 time, measured the memory usage, then executed it 9999 more times, and finally measured the memory usage again. I expected the memory usage to not change because the cursor always processes the same data. SQL Server betrayed my expectations:
A cursor that processes 2 rows of data has managed to use 163 MB of cached memory. Wonderful. One workaround is to remove OPENJSON() from the cursor definition by inserting the data into a table variable and making the cursor read from the table variable:
DECLARE @json_table TABLE (
Number VARCHAR(200) NULL,
OrderDate DATETIME NULL,
Customer VARCHAR(200) NULL,
Quantity INT NULL
);
INSERT INTO @json_table (Number, OrderDate, Customer, Quantity)
SELECT Number, OrderDate, Customer, Quantity
FROM OPENJSON(@json) WITH (
Number VARCHAR(200) '$.Order.Number',
OrderDate DATETIME '$.Order.Date',
Customer VARCHAR(200) '$.AccountNumber',
Quantity INT '$.Item.Quantity'
);
DECLARE json_cursor CURSOR FAST_FORWARD FOR
SELECT Number, OrderDate, Customer, Quantity
FROM @json_table;
SQL Server no longer betrays my expectations. The amount of cached memory for the cursor does not change between 1 and 10000 executions:
Here is a comparison using the sys.dm_exec_cached_plans DMV:
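If you want to produce that comparison yourself, a query along these lines works; TestMemoryLeak_2 is a hypothetical name for the table variable version of the procedure, since only its cursor portion is shown above:
SELECT
    procedure_name = OBJECT_NAME(st.objectid, st.dbid),
    cp.objtype,
    cp.usecounts,
    cp.size_in_bytes
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.dbid = DB_ID()
AND   st.objectid IN
      (OBJECT_ID('TestMemoryLeak_1'), OBJECT_ID('TestMemoryLeak_2'));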
Determining which row is associated with each stored procedure is an exercise left up to the reader.
A second workaround is to define the cursor using the LOCAL argument. A third workaround is to define the local variable as NVARCHAR(4000). Thank you to the comments for both of these workarounds. Note that I originally thought the issue was related to OPENJSON() but it appears that was an error on my part. The presence of an LOB variable looks to be the determining factor.
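For reference, the LOCAL workaround is just a one-keyword change to the cursor declaration from the original procedure; a sketch of what that looks like:
DECLARE json_cursor CURSOR LOCAL FAST_FORWARD FOR
SELECT Number, OrderDate, Customer, Quantity
FROM OPENJSON(@json) WITH (
    Number VARCHAR(200) '$.Order.Number',
    OrderDate DATETIME '$.Order.Date',
    Customer VARCHAR(200) '$.AccountNumber',
    Quantity INT '$.Item.Quantity'
);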
Final Thoughts
Using local LOB variables as part of a cursor declaration can lead to ever-increasing amounts of cached memory associated with its cached plan. It’s possible that SQL Server will prune this plan once it experiences plan cache pressure, but it may also choose to prune other plans instead. I recommend coding defensively by always using the LOCAL argument for your cursor declarations. Thanks for reading!