Followup: Make ADS An Optional Install Alongside SSMS

Botanical


A while back, I posted a feedback item to UserVoice in response to SSMS 18.7 forcing users to install ADS alongside SSMS, because installing software without user consent is just plain wrong.

I understand that other components are packaged in with SSMS, and I understand that ADS is a good tool for developers. I’m less sure that folks would be keen on installing a tool with an 1800+ issue “backlog”. I know, that seems harsh, but Microsoft’s response to this was really lame.

have you tried thinking about it?

Nice pamphlet. Get real, though: ADS is about as mature as a whoopie cushion, and the choice to add dependencies between the two tools was an entirely unforced error.

Tea Leaves


One can’t help but wonder when Microsoft is going to cease work on SSMS to only work on ADS, and what that roadmap might look like.

After all, there’s a very clear differentiation in when you should use each tool.

the face

If ADS is going to be the future of working with SQL Server, it has a really long way to go. If you agree, head over here and cast your vote.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

Join Me At Data Platform Summit 2020!

The Road From Nowhere


This year, I’m teaching an 8-hour online workshop at Data Platform Summit, and I’d love it if you joined me.

Here’s what I’ll be teaching:

Class Title: The Beginner’s Guide To Advanced Performance Tuning

Abstract: You’re new to SQL Server, and more and more of your job is to fix performance problems, but you don’t know where to start.

You’ve been looking at queries, and query plans, and puzzling over indexes for a year or two, but it’s still not making a lot of sense.

Beyond that, you’re not even sure how to measure if your changes are working or even the right thing to do.

In this full day performance tuning extravaganza, you’ll learn about all the most common anti-patterns in T-SQL querying and indexing, and how to spot them using execution plans. You’ll also leave knowing why they cause the problems that they do, and how you can solve them quickly and painlessly.

If you want to gain the knowledge and confidence to tune queries so they’ll never be slow again, this is the training you need.

Date: Dec 7 & 8.

Time: 12 PM to 4 PM EST (View in your timezone)

Tickets: Tickets here!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

Not All SQL Server Function Rewrites Are Straightforward

And Some, Not At All


Let’s say at some point, you just didn’t know any better, and you wrote a scalar function to make some common thing you needed to do all “modular” and “portable” and stuff.

Good on you, not repeating yourself. Apparently I repeat myself for a living.

Anyway, you know what stinks? When you hit divide by zero errors. It’d be cool if math fixed that for us.

Does anyone know how I can get in touch with math?

Uncle Function


Since you’re a top programmer, you know about this sort of stuff. So you write a bang-up function to solve the problem.

Maybe it looks something like this.

CREATE OR ALTER FUNCTION dbo.safety_dance(@n1 INT, @n2 INT)
RETURNS INT
WITH SCHEMABINDING,
     RETURNS NULL ON NULL INPUT
AS
BEGIN
RETURN
(
    SELECT @n1 / NULLIF(@n2, 0)
);
END
GO

You may even be able to call it in queries, something like this.

SELECT TOP (5)
    u.DisplayName,
    fudge = dbo.safety_dance(SUM(u.UpVotes), COUNT(*))
FROM dbo.Users AS u
GROUP BY u.DisplayName
ORDER BY fudge DESC;

The problem is that it makes this query take a long time.

SQL Server Query Plan
you compute that scalar, sql server

At 23 seconds, this is probably unacceptable. And this is on SQL Server 2019, too. The function inlining feature doesn’t quite help us here.

One of the inlining requirements is this, so we uh… Yeah.

The UDF does not contain aggregate functions being passed as parameters to a scalar UDF
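
By the way, if you want to check whether SQL Server thinks a scalar function is inlineable at all, sys.sql_modules has an is_inlineable column in SQL Server 2019. A quick sketch, with the caveat that it only describes the function definition; call-site restrictions like the one above can still block inlining for a particular query:

/*Check whether SQL Server considers the function inlineable (SQL Server 2019+)*/
SELECT
    o.name,
    sm.is_inlineable
FROM sys.sql_modules AS sm
JOIN sys.objects AS o
    ON o.object_id = sm.object_id
WHERE o.name = N'safety_dance';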

But we’re probably good query tuners, and we know we can write inline functions.

Ankle Fraction


This is a simple enough function. Let’s get to it.

CREATE OR ALTER FUNCTION dbo.safety_dance_inline(@n1 INT, @n2 INT)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
(
    SELECT @n1 / NULLIF(@n2, 0) AS safety
);

Will it be faster?

SELECT TOP (5)
    u.DisplayName,
    fudge = (SELECT * FROM dbo.safety_dance_inline(SUM(u.UpVotes), COUNT(*))) 
FROM dbo.Users AS u
GROUP BY u.DisplayName
ORDER BY fudge DESC;

Well, yes. Mostly because it throws an error.

Msg 4101, Level 15, State 1, Line 35
Aggregates on the right side of an APPLY cannot reference columns from the left side.

Well, that’s weird. Who even knows what that means? There’s no APPLY here.

What’s your problem, SQL Server?

Fixing It


To get around this restriction, we also need to rewrite the query. We can use either a CTE or a derived table.

--A CTE
WITH counts AS 
(
    SELECT 
        u.DisplayName, 
        SUM(Upvotes) AS Upvotes,
        COUNT(*) AS records
    FROM dbo.Users AS u
    GROUP BY u.DisplayName
) 
SELECT TOP(5) 
    c.DisplayName,
    fudge = (SELECT * FROM dbo.safety_dance_inline(c.Upvotes, c.records) AS s)
FROM counts AS c
ORDER BY fudge DESC;

--A derived table
SELECT TOP(5) 
    c.DisplayName,
    fudge = (SELECT * FROM dbo.safety_dance_inline(c.Upvotes, c.records) AS s)
FROM 
(
    SELECT 
        u.DisplayName, 
        SUM(Upvotes) AS Upvotes,
        COUNT(*) AS records
    FROM dbo.Users AS u
    GROUP BY u.DisplayName
) AS c
ORDER BY fudge DESC;


Is it faster? Heck yeah it is.

SQL Server Query Plan
you’re just so parallel, baby

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

How To Avoid SQL Injection In Dynamic SQL Queries In SQL Server

Injectables


Dynamic SQL is always a hot topic. I love using it, and I’ve got a lot of posts about it.

Recently, answering a question about it got me thinking about safety when accepting table names (among other things) as user input.

The code in the answer looks like this:

CREATE OR ALTER PROCEDURE dbo.SelectWhatever (@SchemaName sysname, @TableName sysname)
AS
BEGIN
SET NOCOUNT, XACT_ABORT ON;

DECLARE @SafeSchema sysname = NULL,
        @SafeTable sysname = NULL,
        @SQL NVARCHAR(MAX) = N'';

SELECT @SafeSchema = SCHEMA_NAME(t.schema_id),
       @SafeTable  = t.name
FROM sys.tables AS t
WHERE t.schema_id = SCHEMA_ID(ISNULL(@SchemaName, 'dbo'))
AND   t.name = ISNULL(@TableName, 'SomeKnownTable');

IF (@SafeSchema IS NULL)
BEGIN
    RAISERROR('Invalid schema: %s', 0, 1, @SchemaName) WITH NOWAIT;
    RETURN;
END;

IF (@SafeTable IS NULL)
BEGIN
    RAISERROR('Invalid table: %s', 0, 1, @TableName) WITH NOWAIT;
    RETURN;
END;


SET @SQL += N'
SELECT TOP (100) *
/*dbo.SelectWhatever*/
FROM ' + QUOTENAME(@SafeSchema) 
       + N'.'
       + QUOTENAME(@SafeTable)
       + N';';

RAISERROR('%s', 0, 1, @SQL) WITH NOWAIT;
EXEC sys.sp_executesql @SQL;

END;

Sure, there are other things I could have done, like using the OBJECT_ID() and SCHEMA_ID() functions to validate existence, but I sort of like the idea of hitting the system view: if you follow that pattern, you can expand on it when you need to accept and validate column names, too.
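
If you do prefer the function route, a minimal sketch of that validation, dropped into the same procedure, might look like this (same early-exit idea, just without the system view):

/*A minimal sketch of validating with SCHEMA_ID() and OBJECT_ID() instead of sys.tables*/
IF SCHEMA_ID(ISNULL(@SchemaName, N'dbo')) IS NULL
BEGIN
    RAISERROR('Invalid schema: %s', 0, 1, @SchemaName) WITH NOWAIT;
    RETURN;
END;

IF OBJECT_ID(QUOTENAME(ISNULL(@SchemaName, N'dbo')) + N'.' + QUOTENAME(ISNULL(@TableName, N'SomeKnownTable')), N'U') IS NULL
BEGIN
    RAISERROR('Invalid table: %s', 0, 1, @TableName) WITH NOWAIT;
    RETURN;
END;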

Expansive


Yeah, I’m using some new-version-centric stuff in here, because I uh… I can. Thanks.

If you need examples of how to split strings and create CSVs, get them from the zillion other examples on the internet.

CREATE OR ALTER PROCEDURE dbo.SelectWhatever (@SchemaName sysname, @TableName sysname, @ColumnNames NVARCHAR(MAX))
AS
BEGIN
SET NOCOUNT, XACT_ABORT ON;

DECLARE @SafeSchema sysname = NULL,
        @SafeTable sysname = NULL,
        @SafeColumns NVARCHAR(MAX) = NULL,
        @SQL NVARCHAR(MAX) = N'';

SELECT @SafeSchema = SCHEMA_NAME(t.schema_id),
       @SafeTable  = t.name
FROM sys.tables AS t
WHERE t.schema_id = SCHEMA_ID(ISNULL(@SchemaName, 'dbo'))
AND   t.name = ISNULL(@TableName, 'SomeKnownTable');

SELECT @SafeColumns = STRING_AGG(QUOTENAME(c.name), ',')
FROM sys.columns AS c
WHERE c.object_id = OBJECT_ID(@SafeSchema + N'.' + @SafeTable)
AND   c.name IN ( SELECT TRIM(ss.value) 
                  FROM STRING_SPLIT(@ColumnNames, ',') AS ss );


IF (@SafeSchema IS NULL)
BEGIN
    RAISERROR('Invalid schema: %s', 0, 1, @SchemaName) WITH NOWAIT;
    RETURN;
END;

IF (@SafeTable IS NULL)
BEGIN
    RAISERROR('Invalid table: %s', 0, 1, @TableName) WITH NOWAIT;
    RETURN;
END;

IF (@SafeColumns IS NULL)
BEGIN
    RAISERROR('Invalid column list: %s', 0, 1, @ColumnNames) WITH NOWAIT;
    RETURN;
END;


SET @SQL += N'
SELECT TOP (100) ' 
       + @SafeColumns
       + N'
/*dbo.SelectWhatever*/
FROM ' + QUOTENAME(@SafeSchema) 
       + N'.'
       + QUOTENAME(@SafeTable)
       + N';';

RAISERROR('%s', 0, 1, @SQL) WITH NOWAIT;
EXEC sys.sp_executesql @SQL;

END;

Normally I’d raise hell about someone using a function like STRING_SPLIT in a where clause, but for simple DMV queries you’re not likely to see a significant perf hit.

There’s a lot of stuff you’ll see in DMV queries that isn’t okay in normal queries.

Some Explanations


It would be nice if we had a dynamic SQL data type that did some of this stuff for us. Sort of like XML document validation with less obtuse error messages.

Until that time, which will never come, we have to do the hard work. One way to make your dynamic SQL a little bit safer is to keep user input as far away from the execution as you can.

In the above example, I declare a separate set of variables to hold values, and only use what a user might enter in non-dynamic SQL blocks, where they can’t do any harm.

If there’s anything goofy in them, the “@Safe” variables end up NULL, a warning message is raised, and the procedure bails out early.

Also, I’m using QUOTENAME on every individual object (schema, table, and column) to cut down on any potential risk from naughty values being stored as object names. If I had to do this for a database name, that’d be an easy add-on using sys.databases.
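
For completeness, here’s a sketch of what that add-on might look like. The @DatabaseName parameter is hypothetical, not part of the procedure above:

/*A hypothetical @DatabaseName parameter, validated against sys.databases the same way*/
DECLARE @SafeDatabase sysname = NULL;

SELECT @SafeDatabase = d.name
FROM sys.databases AS d
WHERE d.name = ISNULL(@DatabaseName, DB_NAME());

IF (@SafeDatabase IS NULL)
BEGIN
    RAISERROR('Invalid database: %s', 0, 1, @DatabaseName) WITH NOWAIT;
    RETURN;
END;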

If you’ve got to work with stringy input for dynamic SQL, this is one way to make the ordeal a bit safer. You can also extend the idea to key values that are easier to locate up front, like so:

CREATE PROCEDURE dbo.SaferStringSearch (@UserEquals NVARCHAR(40) = NULL, @UserLike NVARCHAR(40) = NULL)
AS
BEGIN
SET NOCOUNT, XACT_ABORT ON;

CREATE TABLE #ids(id INT NOT NULL PRIMARY KEY);

INSERT #ids WITH (TABLOCK) ( id )
SELECT u.Id
FROM dbo.Users AS u
WHERE u.DisplayName = @UserEquals
UNION
SELECT u.Id
FROM dbo.Users AS u
WHERE u.DisplayName LIKE @UserLike;

DECLARE @SQL NVARCHAR(MAX) = N'
SELECT p.*
FROM dbo.Posts AS p
WHERE EXISTS
(
    SELECT 1/0
    FROM #ids AS i
    WHERE i.id = p.OwnerUserId
);
';

EXEC sys.sp_executesql @SQL;

END;

I get that this isn’t the most necessary use of dynamic SQL in the world; it’s really just a simple way to illustrate the idea.

Stay Safe


If you’ve got to pass strings to dynamic SQL, these are some ways to make the process a bit safer.

In the normal course of things, you should parameterize as much as you can, of course.

For search arguments, that’s a no-brainer. But for objects, you can’t do that. Why? I don’t know.

I’m not a programmer, after all.
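
For the record, here’s a minimal sketch of parameterizing a search argument with sp_executesql, using the Users table from earlier (the search value is just an example):

/*Search arguments get passed as real parameters; only object names need the safe-variable treatment*/
DECLARE @SQL NVARCHAR(MAX) = N'
SELECT u.Id, u.DisplayName
FROM dbo.Users AS u
WHERE u.DisplayName = @DisplayName;';

EXEC sys.sp_executesql
    @SQL,
    N'@DisplayName NVARCHAR(40)',
    @DisplayName = N'Jon Skeet';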

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

Getting Specific Query Plans From Stored Procedures

I Know You


None of your stored procedures are a single statement. They’re long. Miles long. Ages long.

If you’re troubleshooting performance for one of those, you could end up really causing yourself some headaches if you turn on actual execution plans and fire away.

Not only is there some overhead to collecting all the information for those plans, but then SSMS has to get all artsy and draw them from the XML.

Good news, though. If you’ve got some idea about which statement(s) are causing you problems, you can use an often-overlooked SET command.

Blitzing


One place I use this technique a lot is with the Blitz procedures.

For example, if I run sp_BlitzLock without plans turned on, it’s done in about 7 seconds.

If I run it with plans turned on, it runs for a minute and 7 seconds.

Now, a while back I had added a bit of feedback in there to help me understand which statements might be running the longest. You can check out the code I used over in the repo, but it produces some output like this:

Why is it always the XML?

If I’m not patient enough to, let’s say, wait a minute for this to run every time, I can just do something like this:

SET STATISTICS XML ON;

    /*Problem queries here*/

SET STATISTICS XML OFF;

That’ll return just the query plans you’re interested in.
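
As a made-up example of what that looks like inside a longer procedure, only the statement you suspect gets wrapped, so only its plan comes back:

/*Hypothetical example: everything outside the SET pair runs without plan collection*/
SET STATISTICS XML ON;

    SELECT u.DisplayName, COUNT_BIG(*) AS records
    FROM dbo.Users AS u
    GROUP BY u.DisplayName;

SET STATISTICS XML OFF;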

Using a screenshot for a slightly different example that I happened to have handy:

click me, click me, yeah

You’ll get back the normal results, plus a clickable line that’ll open up the actual execution plan for a query right before your very eyes.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

A Misleading Error While Testing HammerDB On Hekaton

Safe Words


This is a short post about a sort of quirky error message I got while trying to run the TPC-C load test against SQL Server’s in-memory tables.

Error in Virtual User1: [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]The integer value XXXXXXXXXXXX is out of range.

chucked

A few of the tables would get created, but not all of them. It wasn’t obvious where that number was coming from, until I thought back to the setup.

In the build script, there’s a helpful-looking line for you to adjust the hash bucket count that your indexes get created with.

tinker tailor

I had set mine to a higher number, thinking it would help load speeds to have a bunch of buckets ready in there.

But the thing is, buried way down in the script, there’s already some math to pad the hash buckets for you.

the salty bucket
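
Multiply an already-inflated bucket count by that padding math, and you presumably sail right past what an integer can hold, which would explain the error. For reference, this is roughly where BUCKET_COUNT lands in memory-optimized DDL; a generic sketch, not the actual HammerDB-generated script:

/*A generic sketch of a Hekaton hash index bucket count; the real HammerDB DDL is more involved*/
CREATE TABLE dbo.warehouse_example
(
    w_id INT NOT NULL,
    w_name NVARCHAR(10) NOT NULL,
    CONSTRAINT pk_warehouse_example
        PRIMARY KEY NONCLUSTERED HASH (w_id)
        WITH (BUCKET_COUNT = 1048576)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);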

If you’re in the mood to be totally underwhelmed, you should also run this test against Hekaton.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

Which Parallel Deadlocks Produce Deadlock Graphs In SQL Server?

Sadness


There are two types of parallel deadlocks: the kind that ends in an error message, and the kind that gets resolved by exchanges spilling buffers.

It used to be that both kinds would produce deadlock graphs. Microsoft even added some additional troubleshooting information specifically for them.

But apparently that had some overhead, and parallel deadlocks for exchange spills got pulled from the equation.

I checked back to SQL Server 2012 and 2014 on their respective latest service packs, and they both still capture deadlock graphs for exchange spills.

There have been some CUs since SP3 for SQL Server 2014, but none of them mention this change being backported.

Why Is This A Big Deal?


If you were digging into query performance issues, or if you were capturing deadlocks somehow, you used to be able to find queries with these problems pretty easily.

In the article that describes a fix for many deadlock reports, Microsoft offers up an alternative Extended Event session to capture queries that produce error 1205 (a deadlock), but I wasn’t able to get that to capture deadlocks that were resolved by exchange spills.

I don’t think they actually produce that error, which is also why they don’t produce a deadlock graph.

Why they did that when there is, quite not-figuratively, an event dedicated to capturing exchange spills, is beyond me.

i mean really
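
If you want to see them for yourself, here’s a minimal Extended Events sketch using that very event (the session name and target are mine):

/*A minimal sketch: capture exchange spills with the event that already exists for them*/
CREATE EVENT SESSION exchange_spills
ON SERVER
ADD EVENT sqlserver.exchange_spill
(
    ACTION (sqlserver.session_id, sqlserver.sql_text)
)
ADD TARGET package0.event_file
(
    SET filename = N'exchange_spills'
);
GO

ALTER EVENT SESSION exchange_spills ON SERVER STATE = START;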

For me personally, it was a bit of a curveball for sp_BlitzLock. The XML produced for exchange spill deadlocks has different characteristics from the XML for deadlocks that produce errors.

There’s a lot of stuff that isn’t documented, too.

Change It Back?


I’m assuming there was some technical challenge to producing a single deadlock graph for exchange spills, which is why it got pulled instead of fixed.

Normally I’d think about opening a UserVoice item, but it doesn’t seem like it’d go anywhere.

There are enough good ideas on there already that haven’t seen any traction or attention.

Anyway, if you’re on a newer version of SQL Server, take note of the change if you’re troubleshooting this sort of thing.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

Options For Fixing Parallel Query Deadlocks In SQL Server

Recreating


If you have a copy of the StackOverflow2013 database, this query should produce a parallel deadlock.

SELECT c.Id, c.UserId, c.CreationDate, c.PostId, c.Score 
INTO #ohno 
FROM dbo.Comments AS c WITH (TABLOCKX);

If you want an easy way to track down deadlocks, I can’t recommend sp_BlitzLock enough.

It doesn’t render the deadlock graph for you, but it does get you the deadlock XML, which you can save as an XDL file.

For viewing them, Sentry One’s Plan Explorer tool is way better than SSMS. It doesn’t just explore plans — it also explores deadlocks.

Graphed Out


The way it’ll look is something like this:

SQL Server Deadlock
ow my face

You’ll see the exchange event, and you’ll also see the same query deadlocking itself.

This is an admittedly odd situation, but one I’ve had to troubleshoot a bunch of times.

You might see query error messages that look something like this:

Msg 1205, Level 13, State 18, Line 3
Transaction (Process ID 55) was deadlocked on lock | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction.

Options For Fixing It


If you start running into these, it can be for a number of reasons, but the root cause is a parallel query. That doesn’t mean you should change MAXDOP to 1, though you should check your parallelism settings to make sure they’re not at the defaults.
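
If you’re not sure where those stand, the defaults are a cost threshold for parallelism of 5 and a max degree of parallelism of 0; here’s a quick way to check:

/*A quick sketch to eyeball the server-level parallelism settings*/
SELECT c.name, c.value_in_use
FROM sys.configurations AS c
WHERE c.name IN
      (
          N'cost threshold for parallelism',
          N'max degree of parallelism'
      );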

You may want to try setting the query you’re having a problem with to DOP 1. Sure, performance might suffer, but at least it won’t error out.

If that’s not possible, you might need to look at other things in the query. For example, you might be missing a helpful index that would make the query fast without needing to go parallel.

Another issue you might spot in query plans is order-preserving operators. I wrote a whole bunch about that with an example here. You might see Sorts, Merges, and Stream Aggregates surrounding parallel exchange operators, and in those cases, you might need to hint HASH joins or aggregations.
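
In hint form, the DOP 1 and hash options look something like this. Sketches only, using the repro query from above and a made-up aggregate query:

/*DOP 1 for just the one problem query*/
SELECT c.Id, c.UserId, c.CreationDate, c.PostId, c.Score
INTO #ohno_dop1
FROM dbo.Comments AS c WITH (TABLOCKX)
OPTION (MAXDOP 1);

/*And the shape of the hash hints, on a hypothetical aggregate query*/
SELECT c.PostId, COUNT_BIG(*) AS records
FROM dbo.Comments AS c
GROUP BY c.PostId
OPTION (HASH GROUP);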

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

A Fun Way To Return Zero Rows To SQL Server Management Studio Using ROW_NUMBER

Demos And Demos!


Sending rows out to SSMS is annoying. It takes a long time when there are a lot of them, and sometimes you just wanna show a query plan that does a lot of work without all the blah blah.

And sure, there are other great ways that I talked about in this Q&A.

One other way is a method I used in yesterday’s post.

Let’s be timely and look at that!

Everyone Loves Windowing Functions


They do all sorts of funny stuff. Number things, rank things.

And they make great demos, whether you’re trying to show how to do something cool, or make them perform well.

The funny thing is that the optimizer doesn’t really understand that, unless you intervene, numbering and ranking functions start at one, never at zero.

You can have a whole query run, do a bunch of really hard work, take a long time, and not return any rows, just by filtering to where the function = 0.

Using yesterday’s query as an example, here’s what happens:

SELECT u.Id, 
       u.DisplayName, 
       u.Reputation, 
       u.CreationDate, 
       ca.*
FROM dbo.Users AS u
OUTER APPLY
    (
        SELECT *, 
        DENSE_RANK() 
              OVER( PARTITION BY vs.Id 
                    ORDER BY vs.Id DESC ) AS whatever
        FROM dbo.VotesSkewed AS vs
        WHERE vs.UserId = u.Id
        AND   vs.VoteTypeId BETWEEN 1 AND 5
    ) AS ca
WHERE ca.whatever = 0;

Since I’m generating the rank here on the primary key/clustered index for the table, it’s basically free to do. It’s fully supported by the index.

If you create a different index to support that, e.g. a POC index (Partitioning, Ordering, Covering), you can use that too.
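
If you did want to hand it a friendlier index, something in the POC shape might look like this; the column choices are illustrative, not from the original demo:

/*A hypothetical POC-style index for the apply above: correlation and range columns first, then the partitioning/ordering column*/
CREATE INDEX ix_VotesSkewed_poc
ON dbo.VotesSkewed (UserId, VoteTypeId, Id);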

SQL Server Query Plan
FULL SIZE

Bag Of Bones


This query will, of course, return no rows. But it will do all the prerequisite work to generate a result set to filter on. That’s where that filter operator kicks in, and nothing passes through it. It’s a stopping point right before things would actually have to start kicking out to SSMS.

SQL Server Query Plan
less than zero

But of course, the optimizer doesn’t know that until we get there. If it did, we might just end up with a constant scan and a query that finishes instantly.

For example, if you add a silly where clause like this:

SELECT u.Id, 
       u.DisplayName, 
       u.Reputation, 
       u.CreationDate, 
       ca.*
FROM dbo.Users AS u
OUTER APPLY
    (
        SELECT *, 
        DENSE_RANK() 
              OVER( PARTITION BY vs.Id 
                    ORDER BY vs.Id DESC ) AS whatever
        FROM dbo.VotesSkewed AS vs
        WHERE vs.UserId = u.Id
        AND   vs.VoteTypeId BETWEEN 1 AND 5
    ) AS ca
WHERE 1 = (SELECT 0)

You end up with this:

SQL Server Query Plan
O NO A SCAN

That’s That


If you find yourself with a demo that returns a lot of rows, and you don’t want to use a TOP or OFFSET/FETCH to only return some of them, this is a fun way to return nothing.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

A SQL Server Parameterization Puzzle With TOP: Part 2

Up From The Floor


Now that we’ve covered what happened with our query, how can we fix it?

Remember when I said that this only happens with literals? I sort of lied.

Sorry.

Pants On Fire


Probably the simplest thing to do would be to set MAXDOP to 1 for the query. Avoiding the parallel exchanges avoids the problem, but the query does run longer than the original with a literal TOP. That said, it may be good enough in some cases if it stabilizes performance.
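
For reference, that’s just the same procedure with a hint tacked on; I’ve given it a hypothetical name so it doesn’t stomp on the original:

CREATE OR ALTER PROCEDURE dbo.SniffedTop_DOP1 (@Top INT)
AS
BEGIN
SET NOCOUNT, XACT_ABORT ON;

    SELECT TOP (@Top)
            u.DisplayName,
            b.Name
    FROM dbo.Users u
    CROSS APPLY 
    (
        SELECT TOP (1) 
                b.Name
        FROM dbo.Badges AS b
        WHERE b.UserId = u.Id
        ORDER BY b.Date DESC
    ) AS b
    WHERE u.Reputation >= 10000
    ORDER BY u.Reputation DESC
    OPTION (MAXDOP 1);

END;
GO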

If you’re feeling as brave as Sir Robin, you can add an OPTIMIZE FOR hint to bring back the early-buffer-send behavior.

CREATE OR ALTER PROCEDURE dbo.SniffedTop (@Top INT)
AS
BEGIN
SET NOCOUNT, XACT_ABORT ON;

    SELECT TOP (@Top)
            u.DisplayName,
            b.Name
    FROM dbo.Users u
    CROSS APPLY 
    (
        SELECT TOP (1) 
                b.Name
        FROM dbo.Badges AS b
        WHERE b.UserId = u.Id
        ORDER BY b.Date DESC
    ) AS b
    WHERE u.Reputation >= 10000
    ORDER BY u.Reputation DESC
    OPTION(OPTIMIZE FOR(@Top = 1));

END;
GO

Are they always better? I have no idea, but if you’ve got long-running queries with a parameterized TOP, this might be something worth experimenting with.

Another rewrite that works is slightly more complicated. Though for maximum benefit, Batch Mode is necessary.

CREATE OR ALTER PROCEDURE dbo.SniffedTop (@Top INT)
AS
BEGIN
SET NOCOUNT, XACT_ABORT ON;

    SELECT TOP (@Top)
            u.DisplayName,
            b.Name
    FROM dbo.Users u
    CROSS APPLY 
    (
        SELECT *
        FROM 
        (
            SELECT *,
            	ROW_NUMBER() 
            	    OVER( PARTITION BY b.UserId
            		      ORDER BY b.Date DESC ) AS n
            FROM dbo.Badges AS b
        ) AS b
        WHERE b.UserId = u.Id
        AND b.n = 1
    ) AS b
    WHERE u.Reputation >= 10000
    ORDER BY u.Reputation DESC;

END;
GO

So that’s fun. We’re having fun. I like fun.

I’m gonna make a PowerPoint about fun.

Other Things, And Drawbacks


So like, you could add a recompile hint to allow the TOP parameter to be sniffed, sure. But then you’re sort of undoing the good you did parameterizing in the first place.

You could also write unparameterized dynamic SQL, but see above. Same problem, plus a blown-out plan cache if people ask for different values.

OPTIMIZE FOR UNKNOWN and OFFSET/FETCH also don’t work.

Of course, one thing that would help here is a more appropriate index leading with UserId. However, most good demos arise from less-than-ideal indexing, so you’re just going to have to deal with it.
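
For what it’s worth, that index might look something like this; hypothetical, since the demo deliberately leaves it out:

/*A hypothetical index leading with UserId to support the CROSS APPLY*/
CREATE INDEX ix_Badges_UserId_Date
ON dbo.Badges (UserId, Date DESC)
INCLUDE (Name);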

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.