CPU & RAM Don’t Lie: Query Metrics I Care About For Tuning



There are metrics I care about and metrics I don’t care about when I’m looking for queries to tune.

Metrics I don’t care about:

  • Logical Reads
  • Costs

If a query does “a lot” of reads or has a high “cost”, I generally don’t care as long as they run quickly. Doing consistent physical reads is a slightly different story, but would probably fall more under server tuning or fixing memory grants.

Metrics I do care about:

  • CPU (taking parallelism into consideration)
  • Duration (compared to CPU)
  • Memory Grants (particularly when they’re not being fully utilized)
  • Writes (especially if it’s just a select)
  • Executions (mostly to track down scalar UDFs)

CPU and Duration


These two metrics get lumped together because they need to be compared in order to figure out what’s going on. First, you need to figure out the minimum runtime of a query that you’d want to tune.

In general, as query execution time gets faster, getting it to be much faster gets more difficult.

  • Bringing a query from 1 second to 100 milliseconds might be a small matter
  • Bringing that same query from 100 milliseconds to 1 millisecond might take more time than it’s worth

I say that because unless someone is querying SQL Server directly, smaller durations tend to be less detectable to end users. By the time they hit a button, send the request, receive the data, and have the application render it, they’re probably not aware of a 99 millisecond difference.

Of course, not everything is end-user centric. Other internal operations, especially any loop processing, might benefit greatly from reductions on the smaller side of things.

  • If duration and CPU are acceptable, leave it alone
  • If either is unacceptable, tune the darn thing
  • If CPU is much higher than duration, you have a parallel plan, and tuning is optional
  • If duration is much higher than CPU, you have blocking or another contention issue, and the query you’re looking at probably isn’t the problem
  • If duration and CPU are roughly equivalent, you either have a functional serial plan or a really crappy parallel plan

I give these the highest priority because reducing these is what makes queries faster, and reduces the surface area (execution time) of a query where something crappy might happen, like blocking, or deadlocks, or other resource contention.
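If you want a rough way to eyeball that comparison, here’s a sketch against the plan cache. This isn’t my exact process, just the general idea: dm_exec_query_stats reports times in microseconds, so this converts to milliseconds, and the TOP number is arbitrary.

/*
CPU much higher than duration: probably a parallel plan.
Duration much higher than CPU: probably blocking or other waits.
*/
SELECT TOP (20)
    avg_cpu_ms = 
        (deqs.total_worker_time / deqs.execution_count) / 1000.,
    avg_duration_ms = 
        (deqs.total_elapsed_time / deqs.execution_count) / 1000.,
    deqs.execution_count,
    query_text = 
        SUBSTRING(dest.text, (deqs.statement_start_offset / 2) + 1, 200)
FROM sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
ORDER BY 
    deqs.total_worker_time / deqs.execution_count DESC;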

Memory Grants


Using these as a tuning metric can have a lot of positive effects, depending on what kind of shape the system is in.

Consider a few scenarios:

  • PAGEIOLATCH_XX waits are high because large memory grants steal significant buffer pool space
  • RESOURCE_SEMAPHORE waits are high because queries suck up available memory space and prevent other queries from using it
  • Queries are getting too low of a memory grant and spilling significantly, which can slow them down and cause tempdb contention under high concurrency
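If you want to catch that second scenario in the act, here’s a minimal sketch against sys.dm_exec_query_memory_grants; rows with a NULL grant_time are still waiting in line for memory.

SELECT
    deqmg.session_id,
    deqmg.grant_time,
    requested_mb = deqmg.requested_memory_kb / 1024.,
    granted_mb = deqmg.granted_memory_kb / 1024.,
    used_mb = deqmg.used_memory_kb / 1024.,
    deqmg.wait_time_ms
FROM sys.dm_exec_query_memory_grants AS deqmg
ORDER BY deqmg.requested_memory_kb DESC;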

Fixing memory grant issues can take many forms:

  • Getting better cardinality estimates for better overall grant estimates
  • Indexing to influence operator choices away from memory consumers
  • Using more appropriate string lengths to reduce memory grants
  • Fixing parallel skew issues that leave some threads with inadequate memory
  • Rewriting the query to not ask for ordered data
  • Rewriting the query to ask for ordered data in smaller chunks
  • Rewriting the query to convert strings to better fitting byte lengths

That’s just some stuff I end up doing off the top of my head. There are probably more, but blog posts are only useful up to a certain length.

Like all other strings.

Writes and Selects


Modification queries are going to do writes. This seems intuitive and not at all shocking. If you have queries that are doing particularly large modifications, you could certainly look into tuning those, but it would be a standard exercise in query or index tuning.

Except that your index tuning adventure would more likely lead you to dropping unused and overlapping indexes, to reduce the number of objects you need to write to, than to adding an index.

But who knows. Boring anyway. I hear indexes tune themselves in the cloud.

When select queries do a large number of writes, then we’re talking about a much more interesting scenario.

  • Spills
  • Spools
  • Stats updates

Of course, stats updates are likely a pretty small write, but the read portion can certainly halt plan compilation for a good bit on big tables.

Spills and Spools are going to be the real target here. If it’s a spill, you may find yourself tracking back to the memory grant section up above.

Spools, though! What interesting little creatures. I wrote a longer post about them here:

Understand Your Plan: Operators That Write Data (Spools, Spools, Spools)

It has a bit of a link roundup of other posts on my site and others that talk about them, too.

But since we’re living in this now, let’s try to be present. Here’s the short story on spools that we might try to fix:

  • The Spools we typically care about are Table or Index
  • They can be eager or lazy
  • They’ll show up on the inner side of Nested Loops
  • SQL Server uses them as a temporary cache for data
  • They are a good indicator that something is amok with your query or indexes

For eager index spools, the fix is usually pretty simple: create a better index for SQL Server to use.

For lazy table spools, you have more options:

  • Give SQL Server unique data to work with
  • Get the optimizer to not choose nested loops
  • Use the NO_PERFORMANCE_SPOOL hint to test the query without spools
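Here’s a minimal sketch of that last option; the table and predicate are made up, the point is just where the hint goes.

SELECT
    u.Id,
    u.DisplayName
FROM dbo.Users AS u
WHERE u.Reputation = 11
OPTION (NO_PERFORMANCE_SPOOL);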

Of course, there are times where you’re better off with a spool than without. So don’t walk away feeling disheartened if that’s the case.

Executions


These are on the opposite end of the spectrum from most of the queries I go after. If a query runs enough, and fast enough, to truly rack up a high number of executions, there’s probably not a ton of tuning you could do.

Sure, sometimes there’s an index you could add or a better predicate you could write, but I’d consider it more beneficial to get the query to not run so much.

That might result in:

  • Rewriting functions as inline table valued functions
  • Handing the queries off to app developers for caching

To learn how I rewrite functions, check out this video.

I know, you can’t rewrite every single function like this, but it’s a wonderful thing to do when you can.
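If video isn’t your thing, here’s a boiled-down sketch of the kind of rewrite I mean. The function and tables are made up for illustration.

/*Scalar UDF version: runs row by row and inhibits parallelism*/
CREATE OR ALTER FUNCTION
    dbo.total_score
(
    @UserId int
)
RETURNS int
AS
BEGIN
    RETURN
    (
        SELECT SUM(p.Score)
        FROM dbo.Posts AS p
        WHERE p.OwnerUserId = @UserId
    );
END;
GO

/*Inline table valued function version: gets expanded into the calling query*/
CREATE OR ALTER FUNCTION
    dbo.total_score_itvf
(
    @UserId int
)
RETURNS table
AS
RETURN
    SELECT 
        total_score = SUM(p.Score)
    FROM dbo.Posts AS p
    WHERE p.OwnerUserId = @UserId;
GO

/*Call it with CROSS APPLY instead of in the select list*/
SELECT
    u.DisplayName,
    ts.total_score
FROM dbo.Users AS u
CROSS APPLY dbo.total_score_itvf(u.Id) AS ts
WHERE u.Reputation = 11;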

Anything Other Than


Again, metrics I don’t ever look at are logical reads or costs.

  • Doing reads doesn’t necessarily mean that queries are slow, or that there’s anything you can fix
  • Costs are a meme metric that should be removed from query plans in favor of operator times

Well, okay, maybe not completely removed, but they shouldn’t be front and center anymore.

There are many other more reliable metrics to consider that are also far more interesting.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

How I Learned To Hate SQL Server’s Plan Cache

Used To Love Her


I used to get so excited about looking in the plan cache, and writing all sorts of wacky XML queries to parse plans and dig for goodies.

Then I started to ask some tough questions about it, like:

  • How many plans are in here?
  • What’s the oldest one?
  • How many duplicate plans are there?
  • Where’s the parameter sniffing details?
  • Why is optimize for adhoc workloads the worst setting that everyone said is a best practice for eons?

As I wrote queries to look at a lot of these things, stuff got… weird. And disappointing.
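For the curious, the sort of thing I was running looked something like this, just counting plans and checking ages:

/*How many plans, and how much space do they take up?*/
SELECT
    total_plans = COUNT_BIG(*),
    cache_mb = SUM(decp.size_in_bytes) / 1024. / 1024.
FROM sys.dm_exec_cached_plans AS decp;

/*How old is the oldest plan we still have stats for?*/
SELECT
    oldest_plan = MIN(deqs.creation_time),
    newest_plan = MAX(deqs.creation_time)
FROM sys.dm_exec_query_stats AS deqs;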

What’s In There?


The plan cache has limits for how many plans it’ll keep, and how big of a cache it’ll keep. Even if there’s a lot of plans, there’s no guarantee that they’re older than a few hours.

You may even find that simple parameterization makes things confusing, and that things get cleared out at inopportune times.

One situation I’ve run into well more than once is the plan cache getting cleared out due to query memory pressure, and then any chance of figuring out which queries were responsible disappears along with it.

Memory is important, someone once said.

On top of that, a lot of SQL Server consumers have way too much faith in bunk metrics, like query and operator costs. Sure, there are plenty of corroborating views to get resource usage metrics, but if all you’ve got is a query plan, all you’ve got is a bunch of nonsense costs to tweedle yourself with.

Sniff Sniff Pass


Another big miss with the plan cache is that it is almost no help whatsoever with parameter sniffing.

Sure, you can sort of figure out based on wide variances in various metrics if a plan sometimes does way more work or runs way longer than other times, but… You still just see the one plan, and its compile values. You don’t see what the plan was, or could be.

Or should be, but that’s a story for another day.

This is where Query Store is awesome, of course! You can see regressions and all that jazz.

But in the plan cache, boy howdy, you get a whole lotta nothing. No Rosie at all.

And this is why I hate the plan cache.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

Should Query Store Also Capture Blocking And Deadlocks?

Big Ideas


The more I use third party monitoring tools, the more annoyed I get. So much is missing from the details, configurability, and user experience.

I often find myself insisting on also having Query Store enabled. As much as I’d love other improvements, I think it’s also important to have a centralized experience for SQL Server users to track down tricky issues.

There are so many views and metrics out there, it would be nice to have a one stop shop to see important things.

Among those important things are blocking and deadlocks.

Deadlockness


Deadlocks are perhaps the more obvious choice, since they’re already logged to the system health extended event session.

Rather than leave folks with a bazillion scripts and stored procedures to track them down, Query Store should add a view to pull data from there.
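To give you an idea of what people have to do today, here’s a sketch of reading deadlock reports out of the system_health file target. It works, but it’s slow, and it’s not exactly something you’d hand to a developer.

SELECT
    deadlock_xml = 
        CONVERT(xml, fx.event_data).query('/event/data/value/deadlock')
FROM sys.fn_xe_file_target_read_file
     (N'system_health*.xel', NULL, NULL, NULL) AS fx
WHERE fx.object_name = N'xml_deadlock_report';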

If Microsoft is embarrassed by how slow it is to grab all that session data, and they should be, perhaps that’s a reasonable first step to having Query Store live up to its potential.

Most folks out there have no idea where to look for that stuff, and a lot of scripts that purport to get you detail are either wildly outdated, or are a small detail away from returning no results and leaving them frustrated as hell.

I know because I talk to them.

Blockhead


Blocking, by default, is not logged anywhere at all in SQL Server.

If you wanna get that, you have to be ready for it, and turn on the Blocked Process Report:

/*Allow access to advanced options*/
sp_configure 
    'show advanced options', 
    1;  
GO  
RECONFIGURE;  
GO  
/*Write a blocked process report any time a query waits on a lock for 10+ seconds*/
sp_configure 
    'blocked process threshold', 
    10;  
GO  
RECONFIGURE;  
GO

Of course, from there you have to… do more to get the data.

Michael J Swart has a bunch of neat posts on that. For my part, I wrote sp_HumanEvents to help you spin up an Extended Event session to capture that.
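If you’d rather roll your own, here’s a bare-bones sketch of the kind of session sp_HumanEvents builds for you; the session name and file target settings are just placeholders.

CREATE EVENT SESSION
    blocked_processes
ON SERVER
ADD EVENT sqlserver.blocked_process_report
ADD TARGET package0.event_file
(
    SET filename = N'blocked_processes'
)
WITH (STARTUP_STATE = ON);
GO

ALTER EVENT SESSION
    blocked_processes
ON SERVER
STATE = START;
GO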

Awful lot of prep work to catch blocking in a database with a pessimistic isolation level on by default, eh?

Left Out


If you want to take this to the next level, it could also grab CPU from the ring buffer, file stats, and a whole lot more. Basically everything other than PLE.

Never look at PLE.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

Local Variables vs Forced Parameterization In SQL Server Queries

Questions, Arise!


I think it was sometime in the last century that I mentioned I often recommend folks turn on Forced Parameterization in order to deal with poorly formed application queries that send literal rather than parameterized values to SQL Server.

And then, just like magick, I recommended it to someone who also has a lot of problems with Local Variables in their stored procedures.

They were curious about if Forced Parameterization would fix that, and the answer is no.

But here’s proofs. We love the proofs.

Especially when they’re over 40.

A Poorly Written Stored Procedure


Here’s this thing. Don’t do this thing. Even the index is pretty dumb, because it’s on a single column.

CREATE INDEX
    i
ON dbo.Users
    (Reputation)
WITH
    (SORT_IN_TEMPDB = ON, DATA_COMPRESSION = PAGE);
GO 

CREATE PROCEDURE
    dbo.humpback
(
    @Reputation int
)
AS
BEGIN
SET NOCOUNT, XACT_ABORT ON;

    /*i mean don't really do this c'mon*/
    DECLARE 
        @ReputationCopy int = ISNULL(@Reputation, 0);
    
    SELECT
        u.DisplayName,
        u.Reputation,
        u.CreationDate,
        u.LastAccessDate
    FROM dbo.Users AS u
    WHERE u.Reputation = @ReputationCopy;

END;

If we look at the statement parameterization type, available with Query Store enabled, it returns 0. That’s true whether we set the database to simple or forced parameterization:

ALTER DATABASE StackOverflow2013 SET PARAMETERIZATION FORCED;
GO 
EXEC dbo.humpback 
    @Reputation = 11;
GO 

ALTER DATABASE StackOverflow2013 SET PARAMETERIZATION SIMPLE;
GO 

EXEC dbo.humpback 
    @Reputation = 11;
GO 
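In case you want to see where that number comes from, here’s a sketch; sys.query_store_query exposes it directly (0 = none, 2 = simple, 3 = forced).

SELECT
    qsq.query_id,
    qsq.query_parameterization_type,
    qsq.query_parameterization_type_desc,
    qsqt.query_sql_text
FROM sys.query_store_query AS qsq
JOIN sys.query_store_query_text AS qsqt
    ON qsqt.query_text_id = qsq.query_text_id
WHERE qsqt.query_sql_text LIKE N'%@ReputationCopy%';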

For now, you’ll have to do a little more work to fix local variable problems.
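One hedged example of that extra work: if you can’t get rid of the local variable, a recompile hint at least lets the optimizer see its runtime value. Whether that trade-off is worth it depends on how often the query runs.

CREATE OR ALTER PROCEDURE
    dbo.humpback
(
    @Reputation int
)
AS
BEGIN
SET NOCOUNT, XACT_ABORT ON;

    DECLARE 
        @ReputationCopy int = ISNULL(@Reputation, 0);

    SELECT
        u.DisplayName,
        u.Reputation,
        u.CreationDate,
        u.LastAccessDate
    FROM dbo.Users AS u
    WHERE u.Reputation = @ReputationCopy
    OPTION (RECOMPILE); /*The optimizer sees the variable's actual value at each execution*/

END;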

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

What Does And Doesn’t Belong In A SQL Server Data Warehouse

No Way, No How


This is a list of things I see in data warehouses that make me physically ill:

  • Unique constraints of any kind: Primary Keys, Indexes, etc. Make things unique during your staging process. Don’t make your indexes do that work.
  • Foreign Keys: Referential integrity should be guaranteed from your data source. If it can’t be, there’s no sense in making it happen in your data warehouse. Foreign Keys in SQL Server suck anyway, and slow the hell out of large data loads.
  • Clustered row store indexes: At this point in time, when you need a clustered index, it ought to be a clustered column store index.
  • Tables with “lots” of nonclustered row store indexes: They’ll only slow down your load times a whole bunch. Replace them with nonclustered column store indexes.
  • Indexed views: This isn’t 2012 anymore. Column store, column store, column store.
  • Standard Edition: The CPU limit of 24 cores is probably fine, but the buffer pool cap of 128GB and strict limitations on column store/batch mode are horrendous.

I know what you’re thinking looking at this list: I can drop and re-create things like unique constraints, foreign keys, and nonclustered indexes. You sure can, but you’re wasting a ton of time.

Data warehouses have a completely different set of needs from transaction systems. The sooner you stop treating data warehouses like AdventureWorks, the better.
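To make the column store point concrete, here’s a minimal sketch of the shape I’d rather see for a fact table; the table and columns are made up.

CREATE TABLE
    dbo.FactSales
(
    SaleDate date NOT NULL,
    StoreId int NOT NULL,
    ProductId int NOT NULL,
    Quantity int NOT NULL,
    Amount money NOT NULL
);
GO

CREATE CLUSTERED COLUMNSTORE INDEX
    ccsi_FactSales
ON dbo.FactSales;
GO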

That’s all.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

The Empty Catch Block: Handling Errors In SQL Server… Weirdly.

Don’t Know Why


Normally people handle errors to… handle errors. But I came across someone doing something sort of interesting recently.

Before we talk about that, let’s talk about the more normal way of capturing errors from T-SQL:

CREATE OR ALTER PROCEDURE
    dbo.error_muffler
(
    @i int
)
AS 
BEGIN
SET NOCOUNT, XACT_ABORT ON;

    BEGIN TRY
    
        SELECT 
            x = 1/@i;
    
    END TRY
    
    BEGIN CATCH
        /*Do some logging or something?*/
        THROW;
    END CATCH;

END;

So if we execute our procedure like this, it’ll throw a divide by zero error:

EXEC dbo.error_muffler 
    @i = 0;

Msg 8134, Level 16, State 1, Procedure dbo.error_muffler, Line 12 [Batch Start Line 33]

Divide by zero error encountered.

Well, good. That’s reasonable.
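If you actually wanted to fill in that logging comment, a sketch might look like this; the dbo.error_log table is made up for illustration.

CREATE OR ALTER PROCEDURE
    dbo.error_logger
(
    @i int
)
AS 
BEGIN
SET NOCOUNT, XACT_ABORT ON;

    BEGIN TRY
    
        SELECT 
            x = 1/@i;
    
    END TRY
    
    BEGIN CATCH
        /*Log the details, then re-throw so callers still see the error*/
        INSERT dbo.error_log
        (
            error_number,
            error_severity,
            error_procedure,
            error_message,
            error_time
        )
        SELECT
            ERROR_NUMBER(),
            ERROR_SEVERITY(),
            ERROR_PROCEDURE(),
            ERROR_MESSAGE(),
            SYSDATETIME();

        THROW;
    END CATCH;

END;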

Empty Iterator


What I recently saw someone doing was using an empty catch block to suppress errors:

CREATE OR ALTER PROCEDURE
    dbo.error_muffler
(
    @i int
)
AS 
BEGIN
SET NOCOUNT, XACT_ABORT ON;

    BEGIN TRY
    
        SELECT 
            x = 1/@i;
    
    END TRY
    
    BEGIN CATCH
        /*Nothing here now*/
    END CATCH;

END;
GO

So if you execute the above procedure, all it returns is an empty result with no error message.

Kinda weird.

Like not having fingernails or toenails.

Trigger Happy


Of course (of course!) this doesn’t work for triggers by default, because XACT_ABORT is on by default inside triggers.

CREATE TABLE 
    dbo.catch_errors
(
    id int NOT NULL
);
GO

CREATE OR ALTER TRIGGER
    dbo.bury_errors
ON
    dbo.catch_errors
AFTER INSERT
AS
BEGIN
    BEGIN TRY
        UPDATE c
            SET c.id = NULL
        FROM dbo.catch_errors AS c;
    END TRY
    BEGIN CATCH

    END CATCH;
END;
GO

If we try to insert a row here, we’ll get a really weird error message, unswallowed.

INSERT 
    dbo.catch_errors
(
    id
)
VALUES
(
    1
);

Womp:

Msg 3616, Level 16, State 1, Line 29

An error was raised during trigger execution. The batch has been aborted and the user transaction, if any, has been rolled back.

If we were to SET XACT_ABORT OFF; in the trigger definition, it would work as expected.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

Common SQL Server Consulting Advice: Enabling Trace Flags

Lessen Earned


This is advice that I have to give less frequently these days, but is absolutely critical when I do.

With SQL Server 2016, some suggestions that I’d normally make to folks became the default behavior:

  • 1117 and 1118 for tempdb performance and contention
  • 2371 for stats update threshold improvements on large tables

By far the most common trace flag that I still have to get turned on is 8048. Please read the post at the link before telling me that it’s not necessary.

Most trace flags work best as a startup option, because then you don’t have to remember to turn them on again.
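For reference, the runtime equivalent looks like this; the startup way is adding the flag (for example -T8048) to the startup parameters in Configuration Manager.

/*Turn a trace flag on globally without a restart; -1 makes it global*/
DBCC TRACEON (8048, -1);

/*List the trace flags that are currently enabled globally*/
DBCC TRACESTATUS (-1);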

There are a lot of trace flags that I usually have people turn off, too. Most common are the 12XX trace flags that stick deadlock information in the error log.

There are far better ways to get at that information these days, like using the system health extended event session.

Lessen Earned


There are query-level trace flags that make sense sometimes, too, as part of query tuning and experimentation.

Some of these have been replaced by written hints, too:

  • 8649 can be replaced by OPTION(USE HINT('ENABLE_PARALLEL_PLAN_PREFERENCE'));
  • 8690 can be replaced by OPTION(NO_PERFORMANCE_SPOOL);

Others are quite interesting for getting more details about the optimization process. They almost all require trace flag 3604 to be enabled as well, to output messages to the console.

  • 2315: Memory allocations taken during compilation
  • 2363: (2014+) Statistics Info
  • 2372: Shows memory utilization during the different optimization stages
  • 2373: Shows memory utilization while applying optimization rules and deriving properties
  • 7352: Show final query tree (post-optimization rewrites)
  • 8605: Initial query tree
  • 8606: Additional LogOp trees
  • 8607: Optimizer output tree
  • 8608: Input tree copied into memo
  • 8609: Operation counts
  • 8612: Extra LogOp info
  • 8615: Final memo
  • 8619: Applied transformation rules
  • 8620: Add memo arguments to trace flag 8619
  • 8621: Rule with resulting tree
  • 8670: Disables Search2 phase of optimization
  • 8671: Disables logic that prunes memo and prevents optimizer from stopping due to “Good Enough Plan found”
  • 8675: Optimization phases and timing
  • 8757: Disable trivial plan generation
  • 9204: Interesting statistics loaded (< 2014)
  • 9292: Interesting statistics (< 2014)
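A sketch of how these get used at the query level, with 3604 routing the output to the Messages tab; the query itself is made up.

SELECT
    u.Id
FROM dbo.Users AS u
WHERE u.Reputation = 11
OPTION (QUERYTRACEON 3604, QUERYTRACEON 8605);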

If this all seems daunting, it’s because it is. And in most cases, it should be. But like… Why not make trace flags safeguards?

Diagnostic


Microsoft creates trace flags to change default product behavior, often to solve a problem.

If you read through a cumulative update’s patch notes, you might find some documentation (no seriously, stop laughing) that says you need to apply the CU and enable a trace flag to see a problem get resolved.

It would be nice if SQL Server were a bit more proactive and capable of self-healing. If the issue at hand is detected, why not enable the trace flag automatically? There’s no need for it to act like a sinking ship.

I get that it’s not feasible all the time, and that some of them truly are only effective at startup (but that seems like something that could be done, too).

Let’s look at semi-recent Trace Flag 8101 as an example!

When you run many online transactions on a database in Microsoft SQL Server 2019, you notice severe spinlock contention. The severe contention is generally observed on new generation and high-end systems. The following conditions apply to severe spinlock contention:

  • Requires modern hardware, such as Intel Skylake processors
  • Requires a server that has many CPUs
  • Requires a high number of concurrent users
  • Symptoms may include unexpected high CPU usage

Okay, some of this stuff can be (or is) interrogated at startup as part of Hekaton checks. Maybe some is subjective, like what constitutes a high number of concurrent users, or high CPU.

But there’s more!

Note In SQL Server 2019 Cumulative Update 16, we fixed spinlock contention on SPL_HOBT_HASH and SPL_COMPPLAN_SKELETON.

Okay, and…

Note Trace flag 8101 has to be turned on to enable the fix.

That seems far less subjective, and a good opportunity to self-heal a little bit. Flip the switch, SQL Server.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

Common SQL Server Consulting Advice: Enabling Query Store

Captain, Captain


Third party monitoring tools are in rough shape these days. That’s all I’m gonna say about the situation. I get enough letters from lawyers on account of my BTS fanfic sites.

With that out of the way, let’s talk about something you can do to get a handle on which queries are having problems: Enable Query Store.

You can do that using this command:

ALTER DATABASE 
    [YourDatabase]
SET QUERY_STORE 
(
    OPERATION_MODE = READ_WRITE,
    MAX_STORAGE_SIZE_MB = 1024,
    QUERY_CAPTURE_MODE = AUTO
);

The reason I use this command specifically is that it will override some bad defaults that have been corrected over various service packs and cumulative updates.

You know what I hate doing? Stopping to look at which service packs and cumulative updates fixed certain Query Store defaults.

The important things that this script does are:

  • Turn on Query Store
  • Give it a decent amount of space to store data in
  • Not capture every single tiny little query that runs

What a nice setup.
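Since the defaults have shifted around between versions, it’s worth sanity-checking what you actually ended up with; sys.database_query_store_options shows the settings in effect.

SELECT
    dqso.actual_state_desc,
    dqso.query_capture_mode_desc,
    dqso.max_storage_size_mb,
    dqso.current_storage_size_mb,
    dqso.stale_query_threshold_days
FROM sys.database_query_store_options AS dqso;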

Okay, Now What?


Once Query Store is enabled, assuming the goal is to track down and solve performance problems, the easiest way to start digging in is the GUI.

I usually go into Top Resource Consuming Queries, then look at what used the most average CPU over the last week.

(screenshot: the Top Resource Consuming Queries view in Query Store)

This view won’t tell you everything of course, but it’s a good starting place.

Okay, But I Need More


The GUI itself right now doesn’t allow for much beyond showing you the top whatever by whatever for whenever. If you want to search through Query Store data for specific plan or query IDs, procedure names, or query text, you’ll need to use my free script sp_QuickieStore.

To get you started, here are a bunch of example commands:

--Get help!
EXEC dbo.sp_QuickieStore
    @help = 1;


--Find top 10 sorted by memory 
EXEC dbo.sp_QuickieStore
    @database_name = 'StackOverflow2013',
    @sort_order = 'memory',
    @top = 10;              


--Search for specific query_ids
EXEC dbo.sp_QuickieStore
    @database_name = 'StackOverflow2013',
    @top = 10,
    @include_query_ids = '13977, 13978';    


--Search for specific plan_ids
EXEC dbo.sp_QuickieStore
    @database_name = 'StackOverflow2013',
    @sort_order = 'memory',
    @top = 10,
    @start_date = '20210320',
    @include_plan_ids = '1896, 1897';

    
--Ignore for specific query_ids
EXEC dbo.sp_QuickieStore
    @database_name = 'StackOverflow2013',
    @top = 10,
    @ignore_query_ids = '13977, 13978';    


--Ignore for specific plan_ids
EXEC dbo.sp_QuickieStore
    @database_name = 'StackOverflow2013',
    @sort_order = 'memory',
    @top = 10,
    @start_date = '20210320',
    @ignore_plan_ids = '1896, 1897'; 


--Search for queries within a date range
EXEC dbo.sp_QuickieStore
    @database_name = 'StackOverflow2013',
    @sort_order = 'memory',
    @top = 10,
    @start_date = '20210320',
    @end_date = '20210321';              


--Search for queries with a minimum execution count
EXEC dbo.sp_QuickieStore
    @database_name = 'StackOverflow2013',
    @top = 10,
    @execution_count = 10;


--Search for queries over a specific duration
EXEC dbo.sp_QuickieStore
    @database_name = 'StackOverflow2013',
    @top = 10,
    @duration_ms = 10000;


--Search for a specific stored procedure
EXEC dbo.sp_QuickieStore
    @database_name = 'StackOverflow2013',
    @procedure_name = 'top_percent_sniffer';   


--Search for specific query text
EXEC dbo.sp_QuickieStore
    @database_name = 'StackOverflow2013',
    @query_text_search = 'WITH Comment';


--Use expert mode to return additional columns
EXEC dbo.sp_QuickieStore
    @database_name = 'StackOverflow2013',
    @sort_order = 'memory',
    @top = 10,
    @expert_mode = 1;              


--Use format output to add commas to larger numbers
EXEC dbo.sp_QuickieStore
    @database_name = 'StackOverflow2013',
    @sort_order = 'memory',
    @top = 10,
    @format_output = 1;


--Use wait filter to search for queries responsible for high waits
EXEC dbo.sp_QuickieStore
    @database_name = 'StackOverflow2013',
    @wait_filter = 'memory',
    @sort_order = 'memory';


--Troubleshoot performance
EXEC dbo.sp_QuickieStore
    @database_name = 'StackOverflow2013',
    @troubleshoot_performance = 1;


--Debug dynamic SQL and temp table contents
EXEC dbo.sp_QuickieStore
    @database_name = 'StackOverflow2013',
    @debug = 1;

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

Common SQL Server Consulting Advice: Enabling Forced Parameterization

Not The Bad Kind


I need to start this post off by saying something that may not be obvious to you: Not all parameter sniffing is bad.

Sure, every time you hear someone say “parameter sniffing” they want to teach you about something bad, but there’s a lot more to it than that.

Parameter sniffing is normally great. You heard me. Most of the time, you don’t want SQL Server generating new query plans all the time.

And yet I see people go to extreme measures to avoid parameter sniffing from ever happening, like:

  • Local variables
  • Recompiling

What you care about is parameter sensitivity. That’s when SQL Server comes up with totally different execution plans for the same query depending on which parameter value it gets compiled with. In those cases, there’s usually a chance that later executions with different parameter values don’t perform very well using the original query plan.

The thing is, sometimes you need to introduce potentially bad parameter sensitivity in order to fix other problems on a server.

What’s Your Problem?


The problem we’re trying to solve here is application queries being sent in with literal values, instead of parametrized values.

The result is a plan cache that looks like this:

(screenshot: a plan cache stuffed with near-identical, single-use plans)
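If you want to put a number on how bad it is before flipping the switch, here’s a rough sketch for counting single-use plans; treat the results as directional rather than exact.

SELECT
    single_use_plans = 
        SUM(CASE WHEN decp.usecounts = 1 THEN 1 ELSE 0 END),
    total_plans = 
        COUNT_BIG(*),
    single_use_mb = 
        SUM(CASE WHEN decp.usecounts = 1 
                 THEN decp.size_in_bytes ELSE 0 END) / 1024. / 1024.
FROM sys.dm_exec_cached_plans AS decp
WHERE decp.objtype IN (N'Adhoc', N'Prepared');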

Of course, if you can fix the application, you should do that too. But fixing all the queries in an application can take a long time, if you even have access to make those changes, or a software vendor who will listen.

The great thing about this setting is, of course, that it applies to everything all at once, unless you’re doing weird things.

You can turn it on for a specific database by running this command:

ALTER DATABASE 
   [YourDatabase] 
SET PARAMETERIZATION FORCED;

Good or Bad?


The argument for doing this is to drastically reduce CPU from queries constantly compiling query plans, and to reduce issues around constantly caching and evicting plans, and creating an unstable plan cache.

Of course, after you turn it on, you now open your queries up to parameter sensitivity issues. The good news is that you can fix those, too.

99% of parameter sniffing problems I see come down to indexing issues.

  • Non-covering indexes that give the optimizer a choice between Seek + Lookup and Clustered Index Scan
  • Lots of single key column indexes that don’t make sense to use across different searches
  • Suboptimal indexes suggested by various tooling that got implemented without any critical oversight

And of course, if you’ve got Query Store enabled, you can pretty easily force a plan.

Speaking of which, I still have to talk a lot of folks into turning that on, too. Let’s talk about that tomorrow.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

Common SQL Server Consulting Advice: Enabling Resource Governor To Fix Memory Grants

With Or Without You


One thing I really hate about Standard Edition is the lack of a way to globally reduce the allowed memory grant default.

Out of the box, any query can come along and ask for ~25% of your server’s max server memory setting, and SQL Server is willing to loan out ~75% at once across a bunch of queries.

That impacts Standard Edition users way more than Enterprise Edition users, because of the Standard Edition buffer pool limit of 128GB.

A lot of folks misinterpret that limit — I’ve had several exchanges with big name hardware vendors where they insist 128GB is the overall RAM limit, so you’re likely getting bad advice from everywhere — and they end up with a server that only has 128GB of RAM in it.

Big mistake. Bump that up to 192GB and set Max Server Memory to ~180GB or so.

But I digress. Or whatever the word is for getting back to the real point.

I forget, if I ever knew.

Control Top


For all you high rollers out there on Enterprise Edition, you have an easy way to fight against SQL Server’s bad memory grant habits.

Before we do that, it’s important to make a few notes here:

  • SQL Server introduced batch mode memory grant feedback in 2016
  • SQL Server introduced batch mode on row store in 2019

Depending on your compatibility level, you may not be taking advantage of those things, but in either case the feedback might be kicking in too late. See, it’s not a runtime decision, it’s a decision that takes place after a query runs.

By then, it’s already sucked up 25% of your memory and probably stolen a whole bunch of space from your precious buffer pool. A properly filled buffer pool is important so your queries don’t get dry-docked going out to slowpoke disk all the live long day.

A lot of the time, folks I work with will have a ton of queries asking for bunk memory grants that are way bigger than they should be.

You can use this query to examine your plan cache for those things.

WITH 
    unused AS
(
    SELECT TOP (100)
        oldest_plan = 
            MIN(deqs.creation_time) OVER(),
        newest_plan = 
            MAX(deqs.creation_time) OVER(),
        deqs.statement_start_offset,
        deqs.statement_end_offset,
        deqs.plan_handle,
        deqs.execution_count,
        deqs.max_grant_kb,
        deqs.max_used_grant_kb,
        unused_grant = 
            deqs.max_grant_kb - deqs.max_used_grant_kb,
        deqs.min_spills,
        deqs.max_spills
    FROM sys.dm_exec_query_stats AS deqs
    WHERE (deqs.max_grant_kb - deqs.max_used_grant_kb) > 1024.
    AND   deqs.max_grant_kb > 5242880.
    ORDER BY 
        unused_grant DESC
)
SELECT      
    plan_cache_age_hours = 
        DATEDIFF
        (
            HOUR,
            u.oldest_plan,
            u.newest_plan
        ),
    query_text = 
        (
            SELECT [processing-instruction(query)] =
                SUBSTRING
                (
                    dest.text, 
                    ( u.statement_start_offset / 2 ) + 1,
                    (
                        ( 
                            CASE u.statement_end_offset 
                                 WHEN -1 
                                 THEN DATALENGTH(dest.text) 
                                 ELSE u.statement_end_offset 
                            END - u.statement_start_offset 
                        ) / 2 
                    ) + 1
                )
                FOR XML PATH(''), 
                    TYPE
        ),
    deqp.query_plan,
    u.execution_count,
    u.max_grant_kb,
    u.max_used_grant_kb,
    u.min_spills,
    u.max_spills,
    u.unused_grant
FROM unused AS u
OUTER APPLY sys.dm_exec_sql_text(u.plan_handle) AS dest
OUTER APPLY sys.dm_exec_query_plan(u.plan_handle) AS deqp
ORDER BY 
    u.unused_grant DESC
OPTION (RECOMPILE, MAXDOP 1);

If you have a bunch of those, and you want a quick fix until you can do more meaningful query and index tuning, you can use Resource Governor to reduce the 25% default to a lower number.

Scripted, For Your Pleasure


You can use this script to enable and reconfigure Resource Governor to use a lower memory grant percent.

/*The first time enables Resource Governor*/
ALTER RESOURCE GOVERNOR 
    RECONFIGURE;

/*This reduces the memory grant cap to 10%*/
ALTER WORKLOAD GROUP 
    [default] 
WITH
(
    REQUEST_MAX_MEMORY_GRANT_PERCENT = ?
);

/*This completes the change*/
ALTER RESOURCE GOVERNOR 
    RECONFIGURE;

You’ll have to fill in the question mark yourself, of course. Without looking at your system, I have no idea what it should be.
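Once you’ve picked a number and run the reconfigure, a quick sanity check looks like this:

SELECT
    wg.name,
    wg.request_max_memory_grant_percent
FROM sys.dm_resource_governor_workload_groups AS wg
WHERE wg.name = N'default';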

If you’d like help with that, hit the link below to schedule a sales call.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.