Join me at DataTune in Nashville, March 8-9 2024

Spring Training


This March, I’ll be presenting my full day training session The Foundations Of SQL Server Performance Tuning.

All attendees will get free access for life to my SQL Server performance tuning training. That’s about 25 hours of great content.

Get your tickets here for this event, taking place Friday and Saturday, March 8th and 9th, 2024, at Belmont University – Massey Center, 1900 Belmont Blvd, Nashville, TN 37212.

Here’s what I’ll be presenting:

The Foundations Of SQL Server Performance Tuning

Session Abstract:

Whether you want to be the next great query tuning wizard, or you just need to learn how to start solving tough business problems at work, you need a solid understanding of not only what makes things fast, but also what makes them slow.

I work with consulting clients worldwide fixing complex SQL Server performance problems. I want to teach you how to do the same thing using the same troubleshooting tools and techniques I do.

I’m going to crack open my bag of tricks and show you exactly how I find which queries to tune, indexes to add, and changes to make. In this day long session, you’re going to learn about hardware, query rewrites that work, effective index design patterns, and more.

Before you get to the cutting edge, you need to have a good foundation. I’m going to teach you how to find and fix performance problems with confidence.

Event Details:

Get your tickets here for this event!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that, and need to solve database performance problems quickly. You can also get a quick, low cost health check with no phone time required.

The Art Of The SQL Server Stored Procedure: Wrap Up And Final Thoughts

Thanks For Reading!


Here’s the full list of posts from the series:

I will eventually revisit these to record some material on them, and perhaps expand on it.

But this is good enough for now.

Follow On


By far, the most commented-on post in the series is the one about formatting. It’s mostly people continuing to be wrong about it, but it was interesting to note.

The general idea of the series was to teach developers about the types of things I always seem to be fixing and adjusting, so that I can hopefully fix really interesting problems in the future.

Of course, that all depends on folks finding these and reading them. If that were the general sway of the world, I’d probably never have been in business in the first place.

So if you’re here and reading this, you’re special.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that, and need to solve database performance problems quickly. You can also get a quick, low cost health check with no phone time required.

The Art Of The SQL Server Stored Procedure: Cursors And Loops

I Don’t See Nothin’ Wrong


You will, for better or worse, run into occasions in your database career that necessitate the use of loops and cursors.

While I do spend a goodly amount of time reworking this sort of code to not use loops and cursors, there are plenty of reasonable uses for them.

In most cases, I would prefer to see developers write a loop manually, because there are many difficult-to-understand cursor options that change their behavior, and can really mess with query performance.

Those are just a couple examples from my site, without getting into all the many other variations in cursor types.

One very important distinction in cursor types is what set of data the queries in them will see.

As an example, static cursors make a copy of the results of the select, and then use that copy (like a temporary table) to work off of. Dynamic cursors read the underlying data on each iteration.

Especially in cases where your cursors may modify data, and you expect changes to the data to be reflected in future iterations of the cursor, static cursors are likely not a good choice.

Likewise, if you’re already putting data into a temporary table for the cursor to work off of, there is likely no benefit to a static cursor type because you’re just making another copy of the temporary table’s contents.

As I’ve said before, it’s up to you to test different options to see which works correctly and performs the best for your needs.

Also as I’ve said before, never trust the defaults. When you just write DECLARE CURSOR with no options specified, SQL Server may make a lot of bad decisions.
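
If you’d rather not memorize cursor options at all, the manual loop I mentioned above looks something like this minimal sketch, with a made-up #work table and stand-in commands. It’s the general shape, not code from any of my procedures.

DECLARE
    @id integer = 1,
    @max_id integer = 0,
    @command nvarchar(4000) = N'';

CREATE TABLE #work
(
    id integer IDENTITY PRIMARY KEY,
    command nvarchar(4000) NOT NULL
);

INSERT
    #work (command)
VALUES
    (N'SELECT x = 1;'), /*stand-in work items*/
    (N'SELECT x = 2;');

SELECT
    @max_id = MAX(w.id)
FROM #work AS w;

WHILE @id <= @max_id
BEGIN
    SELECT
        @command = w.command
    FROM #work AS w
    WHERE w.id = @id;

    EXEC sys.sp_executesql
        @command;

    SET @id += 1;
END;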

Appropriate Uses


In general, I’d expect many readers to understand when a loop or cursor is the only (or best) way to accomplish what you need.

I use them in several of my stored procedures because they’re the only sensible way to get things done.

Notably, in sp_LogHunter:

DECLARE
    c
CURSOR
    LOCAL /*Nothing outside of this needs to reference it*/
    SCROLL /*I want the FETCH FIRST command available to me*/
    DYNAMIC /*I don't want to make another copy of my temp table*/
    READ_ONLY /*Okay, perhaps this is over communicating*/
FOR
SELECT
    command
FROM #search;

The only real additional point I’d make here is that, much like with dynamic SQL, if you’re going to use these options, you’re doing yourself (and anyone else who may need to run or manage your code) a great service by building verbose debugging options and error handling into it.

Knowing things like:

  • Starting row counts
  • Iteration count/progress
  • Current parameter/variable values
  • Rows in play per iteration
  • Iteration timing

These are all quite useful landmarks in your running code. They help you find logical errors, and show you which parts you may be getting stuck on from a performance point of view.

Also from sp_LogHunter:

IF @debug = 1
BEGIN
    RAISERROR('log %i of %i', 0, 1, @l_log, @h_log) WITH NOWAIT; /*Which log file I'm working on, out of how many I need to process*/
    RAISERROR('search %i of %i', 0, 1, @l_count, @t_searches) WITH NOWAIT; /*Which search I'm working on, out of how many I need to run*/
    RAISERROR('@c: %s', 0, 1, @c) WITH NOWAIT; /*The current search command syntax*/
END;

These are all things that helped me while writing it for reasons listed above.

You, like me, may be surprised where your loops take you when they finally sally forth.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that, and need to solve database performance problems quickly. You can also get a quick, low cost health check with no phone time required.

The Art Of The SQL Server Stored Procedure: Pagination

Video Star


A common-enough practice to limit search results is to write paginated queries. You may give users the ability to specify how many rows per page they want to see, or you may have a set number of rows per page.

But the end goal is to allow users to very quickly get a smaller number of rows returned to them. Almost no one needs to get many thousands of rows back, unless they’re planning on exporting the data.
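
For anyone who hasn’t seen one, the basic shape looks something like this sketch against the Users table, with a couple of made-up paging parameters.

DECLARE
    @page_number integer = 1, /*made-up paging parameters*/
    @page_size integer = 50;

SELECT
    u.Id,
    u.DisplayName,
    u.Reputation
FROM dbo.Users AS u
ORDER BY
    u.Reputation DESC,
    u.Id /*tie-breaker, so page boundaries are deterministic*/
OFFSET (@page_number - 1) * @page_size ROWS
FETCH NEXT @page_size ROWS ONLY;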

In these four videos, which are part of my paid training, I’ll teach you how to write reliably fast paging queries in SQL Server.

If you like this sort of stuff, head to the very end of my blog post for a discount code on all of my training.
Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that, and need to solve database performance problems quickly. You can also get a quick, low cost health check with no phone time required.

The Art Of The SQL Server Stored Procedure: Dynamic Searches

Convenience


Like having a built-in type to make dynamic SQL more easily managed, it would also be nice to have some mechanism to manage dynamic searches.

Of course, what I mean by dynamic searches is when you have a variety of parameters that users can potentially search on, with none or few of them being required.

Right now, of course, your first best option is writing parameterized dynamic SQL that generates a where clause specific to the set of non-null parameters being sent to the procedure.

Your second best option is to just slap a statement-level recompile hint on things.

There are of course pros and cons to each.
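
Here’s roughly what both options look like, sketched against the Users table with two made-up optional parameters. Treat it as the shape of the thing, not production code.

DECLARE
    @DisplayName nvarchar(40) = N'Jon Skeet', /*made-up optional search parameters*/
    @Reputation integer = NULL,
    @sql nvarchar(MAX) = N'';

/*Option one: parameterized dynamic SQL that only adds predicates for non-null parameters*/
SELECT
    @sql = N'
SELECT
    u.Id,
    u.DisplayName,
    u.Reputation
FROM dbo.Users AS u
WHERE 1 = 1';

IF @DisplayName IS NOT NULL
BEGIN
    SET @sql += N'
AND   u.DisplayName = @DisplayName';
END;

IF @Reputation IS NOT NULL
BEGIN
    SET @sql += N'
AND   u.Reputation >= @Reputation';
END;

SET @sql += N';';

EXEC sys.sp_executesql
    @sql,
    N'@DisplayName nvarchar(40), @Reputation integer',
    @DisplayName = @DisplayName,
    @Reputation = @Reputation;

/*Option two: one static query with a statement-level recompile hint
  (in a stored procedure, these would be parameters rather than variables)*/
SELECT
    u.Id,
    u.DisplayName,
    u.Reputation
FROM dbo.Users AS u
WHERE (u.DisplayName = @DisplayName OR @DisplayName IS NULL)
AND   (u.Reputation >= @Reputation OR @Reputation IS NULL)
OPTION(RECOMPILE);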

Dynamic SQL vs Option Recompile


For dynamic SQL, the main cons are:

  • Relying on developers to write it safely
  • Detachment from stored procedure (harder to identify in query store/plan cache)
  • Potentially managing permissions
  • Handling increasingly complex scenarios (lists of columns, custom ordering, optional joins, etc.)

The main pros are:

  • Execution plan reusability
  • Transparent optimizer predicates
  • Consistent performance for indexed searches
  • Incredible flexibility

For OPTION(RECOMPILE), there’s really not a whole lot of up and down.

  • Pro: Really simple to slap on the end of a query
  • Con: Query constantly compiles a new plan (no plan reuse, but no cache bloat either)

As long as you have query store enabled, tracking the plans over time is available to you.
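
If you’re wondering where to start looking, here’s a rough sketch of the kind of Query Store query I’d begin with. Since dynamic SQL detaches from the calling procedure, searching the query text (the LIKE string here is just a placeholder) is often easier than searching by object name.

SELECT
    qsq.query_id,
    qsp.plan_id,
    qsq.count_compiles,
    qsq.last_execution_time,
    qst.query_sql_text,
    qsp.query_plan
FROM sys.query_store_query AS qsq
JOIN sys.query_store_plan AS qsp
  ON qsp.query_id = qsq.query_id
JOIN sys.query_store_query_text AS qst
  ON qst.query_text_id = qsq.query_text_id
WHERE qst.query_sql_text LIKE N'%your distinctive comment or predicate here%'; /*placeholder*/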

I don’t see queries that take a long time to compile all too frequently, but it’s worth keeping an eye on compilation time if you’re trying to balance these options locally.

Trust me on this one.

Managing Performance


The most difficult part, once you’ve made a choice, is figuring out indexing. Assuming you want searches to be fast, this is important.

Assuming you have just one table, and just 6 optional parameters, there are over 700 potential index key column combinations to consider.

Once you consider includes, and throwing columns into the mix for sorting, the mind begins to boggle.

In many cases, nonclustered column store indexes make this a lot easier. Row store indexes are built in a way that may make having many similar ones necessary.

Key column order makes a lot of difference when speeding up searches generally. An index on columns (a, b, c) will certainly make a variety of searches fast, so long as column a is part of the predicates.

Searches that do not filter on column a will have to scan the full index. Now you’re stuck trying to figure out what the most common or important searches are.
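
A tiny example of what I mean, with a made-up table and columns:

CREATE INDEX
    abc
ON dbo.SearchMe
    (a, b, c); /*made-up table and columns*/

/*These can seek, because the leading key column is a predicate*/
SELECT s.* FROM dbo.SearchMe AS s WHERE s.a = 1;
SELECT s.* FROM dbo.SearchMe AS s WHERE s.a = 1 AND s.c = 3;

/*This has to scan the whole index, because there's no predicate on column a*/
SELECT s.* FROM dbo.SearchMe AS s WHERE s.b = 2 AND s.c = 3;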

In this video, I talk about solving indexing problems with this type of query.

Thanks for reading (and watching)!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that, and need to solve database performance problems quickly. You can also get a quick, low cost health check with no phone time required.

Happy New Year, From Darling Data

Up All Night


I don’t wanna. Neither do you. Have a good one.

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that, and need to solve database performance problems quickly. You can also get a quick, low cost health check with no phone time required.

The Art Of The SQL Server Stored Procedure: Parameter Sniffing

Responsible Reading


One of the most fun parts of my job is telling people that parameter sniffing is a good thing. They’re often shocked, because the phrase has such ominous connotations to developers.

The parameters! They get sniffed! Performance! Something! Stuff!

Parameter sniffing does not mean the sky is falling. Like I’ve said before, if it were such a bad idea, it would have been abandoned.

What you have to be mindful of is how data gets distributed over time, and doing a few things to make sure the optimizer has as few reasonable plan choices as possible.

  • Index keys that cover all where clause predicates, at minimum
  • Index includes that cover all select list columns, in special circumstances
  • Queries written to best take advantage of those indexes (SARGability, etc.)
  • Using temporary objects to isolate critical parts of queries into manageable chunks

That will help cut down on how disastrous plan sharing is between queries that need to process very different amounts of data.

Under really crazy circumstances (and I know, I’ve linked to this before), I use dynamic SQL to fix parameter sensitivity issues.

SQL Server 2022 Not To The Rescue


SQL Server 2022 has a feature that’s supposed to help with parameter sensitivity issues. It’s okay sometimes, but the current limitations are:

  • Only equality predicate parameters are evaluated for sensitivity issues
  • Only one parameter is selected for managing sensitivity
  • Only three query plan choices are generated for different estimation buckets

I have seen it be helpful in a few cases, but there are times when it should obviously kick in and doesn’t.

There is currently no query hint to force the feature to kick in when it doesn’t (and it should).

Just so you know how I set up for this:

ALTER DATABASE StackOverflow2013 
SET COMPATIBILITY_LEVEL = 160;

Here are my indexes:

CREATE INDEX
    beavis
ON dbo.Posts
    (OwnerUserId, PostTypeId)
WHERE
    (PostTypeId = 1)
WITH
    (MAXDOP = 8, SORT_IN_TEMPDB = ON, DATA_COMPRESSION = PAGE);

CREATE INDEX
    butthead
ON dbo.Votes
    (VoteTypeId, UserId, PostId)
INCLUDE
    (BountyAmount, CreationDate)
WITH
    (MAXDOP = 8, SORT_IN_TEMPDB = ON, DATA_COMPRESSION = PAGE);

CREATE INDEX
    stewart
ON dbo.Badges
    (UserId)
WITH
    (MAXDOP = 8, SORT_IN_TEMPDB = ON, DATA_COMPRESSION = PAGE);
GO

Here are the checks that validate options, etc.

SELECT
    d.name,
    d.compatibility_level,
    d.is_query_store_on
FROM sys.databases AS d
WHERE d.database_id = DB_ID();

SELECT
    dsc.configuration_id,
    dsc.name,
    dsc.value,
    dsc.is_value_default
FROM sys.database_scoped_configurations AS dsc
WHERE dsc.name = N'PARAMETER_SENSITIVE_PLAN_OPTIMIZATION';
[Screenshot: way to be]

With That Out Of The Way


Here’s the stored procedure that causes us grief.

CREATE OR ALTER PROCEDURE
   dbo.VoteSniffing
(
    @VoteTypeId integer
)
AS
BEGIN
SET XACT_ABORT, NOCOUNT ON;

    SELECT
        UserId  = 
            ISNULL(v.UserId, 0),
        Votes2013  = 
            SUM
            (
                CASE
                    WHEN 
                    (
                            v.CreationDate >= '20130101'
                        AND v.CreationDate <  '20140101'
                    )
                    THEN 1
                    ELSE 0
                END
            ),
        TotalBounty  = 
            SUM
            (
                CASE
                    WHEN v.BountyAmount IS NULL
                    THEN 0
                    ELSE 1
                END
            ),
        PostCount  = 
            COUNT(DISTINCT v.PostId),
        VoteTypeId  = 
            @VoteTypeId
    FROM dbo.Votes AS v
    WHERE v.VoteTypeId = @VoteTypeId
    AND   NOT EXISTS
    (
        SELECT
            1/0
        FROM dbo.Posts AS p
        JOIN dbo.Badges AS b
          ON b.UserId = p.OwnerUserId
        WHERE  p.OwnerUserId = v.UserId
        AND    p.PostTypeId = 1
    )
    GROUP BY 
        v.UserId
    ORDER BY
        PostCount DESC;
END;
GO

The good news is that all formatting checks pass here. Thank you, thank you, thank you. You’re far too kind.

Execution Problems


There’s only one parameter. It’s an equality predicate. And the VoteTypeId column is highly skewed.

[Screenshot: oh dear me]

You would think that with data distributions that look this way, the Parameter Sensitive Plan Optimization feature would make some attempt to remedy the situation.

Unfortunately, there is not.

Little Big Plan


When executed first with 4 as the VoteTypeId, the plan is very fast. Of course it is. We’re finding 733 rows to work with.

EXEC dbo.VoteSniffing
    @VoteTypeId = 4;
[Execution plan screenshot: simple as]

I have no qualms with this plan, other than the erroneous No Join Predicate warning. There is a join predicate, and you can see it in the Seek operators.

Heck, you can even see it in the query text. It’s pretty stupid.

When the plan is reused for VoteTypeId 2, it is a disaster.

EXEC dbo.VoteSniffing
    @VoteTypeId = 2;
[Query plan screenshot: not so hot]

A Seek takes 9.5 seconds, a Nested Loops join occurs for 5 seconds, and a spill occurs for 22 seconds.

After the spill, another ~22 seconds is spent in various operators until it’s complete.

In other words, we still have our work cut out for us to tune this thing. If we go back to our list:

  • Index keys that cover all where clause predicates, at minimum ✅
  • Index includes that cover all select list columns, in special circumstances ✅
  • Queries written to best take advantage of those indexes (SARGability, etc.) ✅
  • Using temporary objects to isolate critical parts of queries into manageable chunks ❌

In this case, if we use a #temp table and isolate the portion of the query that evaluates a parameter (just getting data from the Votes table), we can solve 99% of the parameter sniffing problems across the majority of the VoteTypeIds.

However, VoteTypeId 2 is still a special flower. This is a good place to pause and ask ourselves if repeatedly filling a temporary object with 37 million rows is a good use of time.

This is when some techniques we’ve discussed before would come in handy:

  • Dynamic SQL to treat VoteTypeId 2 differently from the others
  • A separate stored procedure to intercept VoteTypeId 2 executions (see the sketch below)
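
Here’s a bare-bones sketch of the second option, with hypothetical procedure names. The two inner procedures would each hold the query tuned for their pattern.

CREATE OR ALTER PROCEDURE
    dbo.VoteSniffing_Intercept /*hypothetical wrapper*/
(
    @VoteTypeId integer
)
AS
BEGIN
SET XACT_ABORT, NOCOUNT ON;

    IF @VoteTypeId = 2
    BEGIN
        /*The special flower gets its own procedure, and its own plan*/
        EXEC dbo.VoteSniffing_Big
            @VoteTypeId = @VoteTypeId;
    END;
    ELSE
    BEGIN
        EXEC dbo.VoteSniffing_Small
            @VoteTypeId = @VoteTypeId;
    END;
END;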

Game Change


The reason this is a more sensible arrangement is that when the plan is compiled initially for VoteTypeId 2, it runs just fine.

It’s also true that when VoteTypeId 4 uses the plan for VoteTypeId 2, it goes from taking 0 seconds to taking a couple of seconds. Sharing is not caring.

This is the fast plan for VoteTypeId 2. It has a lot of operators with additional startup costs that make grabbing small amounts of data slow.

[Query plan screenshot: good for some]

Parameter sensitivity is a tough thing at times. Managing multiple plans for the same query can end up being quite complicated.

There are absolutely times when you’re going to have to throw in the towel and use a statement-level recompile hint to make things function appropriately.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that, and need to solve database performance problems quickly. You can also get a quick, low cost health check with no phone time required.

The Art Of The SQL Server Stored Procedure: Temporary Objects

And Then You Were Gone


I probably overuse temp tables. I’m the first one to admit it. But often when I choose to use one, it’s a choice between:

  • Spending a long time trying to understand business logic for a company I don’t work for
  • Tinkering with query syntax and using really weird tricks and hints to get different plan shapes
  • Waiting a long time to create or tweak indexes on huge tables (usually on “dev” hardware)

And you know what? If I had to work anywhere full time, and not balance performance tuning requests from a bunch of people, I just might spend more time on those things.

Don’t tell anyone, but sometimes if I’m really busy, I have no problem waiting a long time for an index change to go through, so I can look at something else for a while.

In general, temp tables offer a convenient way to focus on one problem part of the query, without the noise of a bazillion other joins and applys and subqueries and everything else going on.

They also offer a familiar paradigm to anyone looking at the work later, that doesn’t involve SELECT TOP (2147483647) UNION ALL SELECT TOP (1) type syntax, which can really raise some eyebrows on unfamiliar faces.

A Recent Example


I’m using Stack Overflow in my query, but a recent client engagement had a problem query that looked about the same.

WITH
    top5 AS
(
    SELECT
        p.*,
        n = 
            ROW_NUMBER() OVER
            (
                PARTITION BY
                    p.OwnerUserId
                ORDER BY
                    p.Score DESC
            )
    FROM dbo.Posts AS p
    WHERE p.OwnerUserId = 22656
    AND   p.PostTypeId = 1
)
SELECT
    u.DisplayName,
    t1.Title,
    t2.Title,
    t3.Title,
    t4.Title,
    t5.Title
FROM dbo.Users AS u
LEFT JOIN top5 AS t1
  ON  t1.OwnerUserId = u.Id
  AND t1.n = 1
LEFT JOIN top5 AS t2
  ON  t2.OwnerUserId = u.Id
  AND t2.n = 2
LEFT JOIN top5 AS t3
  ON  t3.OwnerUserId = u.Id
  AND t3.n = 3
LEFT JOIN top5 AS t4
  ON  t4.OwnerUserId = u.Id
  AND t4.n = 4
LEFT JOIN top5 AS t5
  ON  t5.OwnerUserId = u.Id
  AND t5.n = 5
WHERE t1.OwnerUserId IS NOT NULL;

The problems with this contain multitudes, but let’s let the query plan do the talking:

[Query plan screenshot: oh, that.]

In the words of Drake: “Take Care”

If you’re the type of developer who insists that common table expressions are some sort of magic potion, I do hope that this is your hangover.

Practical Magic


Temporary tables are the cure for common table expression hangovers, for the most part.

Let’s see how we do with one.

WITH
    top5 AS
(
    SELECT
        p.*,
        n = 
            ROW_NUMBER() OVER
            (
                PARTITION BY
                    p.OwnerUserId
                ORDER BY
                    p.Score DESC
            )
    FROM dbo.Posts AS p
    WHERE p.OwnerUserId = 22656
    AND   p.PostTypeId = 1
)
SELECT
    t.OwnerUserId,
    t.Title,
    n
INTO #top5
FROM top5 AS t
WHERE t.n <= 5;

SELECT
    u.DisplayName,
    t1.Title,
    t2.Title,
    t3.Title,
    t4.Title,
    t5.Title
FROM dbo.Users AS u
LEFT JOIN #top5 AS t1
  ON  t1.OwnerUserId = u.Id
  AND t1.n = 1
LEFT JOIN #top5 AS t2
  ON  t2.OwnerUserId = u.Id
  AND t2.n = 2
LEFT JOIN #top5 AS t3
  ON  t3.OwnerUserId = u.Id
  AND t3.n = 3
LEFT JOIN #top5 AS t4
  ON  t4.OwnerUserId = u.Id
  AND t4.n = 4
LEFT JOIN #top5 AS t5
  ON  t5.OwnerUserId = u.Id
  AND t5.n = 5
WHERE t1.OwnerUserId IS NOT NULL;

Here’s the new query plan:

[Query plan screenshot: hi mom]

Without changing indexes, we get our query from 26 seconds to 1.5 seconds.

Really puts the “common” in “common table expressions”, doesn’t it?

Argue


While the “should I use common table expressions or temp tables?” question is largely a settled one for me, one thing that always comes up in seemingly chaotic practice is temp tables vs. table variables.

Table variables certainly have their place in the world, though often not without compromises or alternatives.

Many developers will use table variables for the purpose of passing them directly to another stored procedure. That’s fine and all, but you can just use #temp tables and reference them directly in another stored procedure, too.
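
Here’s a quick sketch of that, with two throwaway procedures. The inner one references a #temp table it never creates, and still sees the rows the outer one put there.

CREATE OR ALTER PROCEDURE
    dbo.InnerProc /*throwaway example procedures*/
AS
BEGIN
SET XACT_ABORT, NOCOUNT ON;

    /*References a temp table created by whoever called it*/
    SELECT
        ids = COUNT_BIG(*)
    FROM #ids AS i;
END;
GO

CREATE OR ALTER PROCEDURE
    dbo.OuterProc
AS
BEGIN
SET XACT_ABORT, NOCOUNT ON;

    CREATE TABLE
        #ids (id integer NOT NULL);

    INSERT
        #ids (id)
    VALUES
        (1), (2), (3);

    /*The temp table is visible to the procedure we call*/
    EXEC dbo.InnerProc;
END;
GO

EXEC dbo.OuterProc;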

Other times, which one gets used seems to be a question of the moods and fancies of the developer currently working on something.

Some time ago I talked about when table variables are most useful in SQL Server practice.

And I still stand by the majority of what’s discussed in it. Hopefully you find it enlightening, even if you’ve already seen it.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that, and need to solve database performance problems quickly. You can also get a quick, low cost health check with no phone time required.

The Art Of The SQL Server Stored Procedure: Dynamic SQL

Time Well Spent


I’ve spent a lot of time on this blog telling you different reasons why you should, when you should, and how you should use dynamic SQL.

In this post, I’m going to go into some of the finer points of how I approach dynamic SQL to avoid issues — not performance issues — more procedural issues.

Things you can use to protect yourself from wonky object names and string truncation, to help with object identification, and more.

There are many things that I wish Microsoft would do with dynamic SQL to make working with it easier and safer. For one, there should be a specific data type for it with built-in safeguards against SQL injection. Sort of like a template that would make parameter substitution a lot easier, and allow queries in strings to be written without a ton of escape quotes so that they’d be easier to debug and troubleshoot.

It’s a dream that will never come true, but hey, we got ledger tables that will eat up plenty of disk space in Azure, so that’s just great.

Quote And Quote Alike


The first and most obvious rule I have with dynamic SQL is that if I need to accept object names, I’m using QUOTENAME.

That goes for any object. Server, database, schema, table, view, procedure, function, you get the idea.

Why? Because developers don’t always do nice things when they name things. The most common one that I see is putting spaces in names, but over the years I’ve run into unprintable characters, brackets, symbols like @, #, $, and &, and… unicode characters.

If you look in any of the stored procedures I write, you’ll see how heavily I use QUOTENAME to save myself headaches later.

Now, you could write your own brackets into strings, but they don’t offer the same level of protection against SQL injection.

One thing I will say is smart to do is to keep two copies of any name you’re quoting: one to use in the dynamic SQL, and one to use for other information you may need to retrieve about the thing you’ve quoted.

As an example, if you accept a database name as a parameter, and then you add quotes to it, it’ll be awfully hard to find any information about it in sys.databases, because database_name will not equal [database_name].

There you go. Your mind is blown.
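
Here’s the general shape of that, using a database name and a made-up schema-counting query:

DECLARE
    @database_name sysname = N'StackOverflow2013',
    @database_name_quoted nvarchar(258) = NULL, /*wide enough for what QUOTENAME can return*/
    @sql nvarchar(MAX) = N'';

/*One quoted copy for the dynamic SQL, one unquoted copy for metadata lookups*/
SELECT
    @database_name_quoted = QUOTENAME(@database_name);

IF NOT EXISTS
(
    SELECT
        1/0
    FROM sys.databases AS d
    WHERE d.name = @database_name /*the unquoted copy matches here*/
)
BEGIN
    RAISERROR('Database %s doesn''t seem to exist.', 11, 1, @database_name) WITH NOWAIT;
    RETURN;
END;

SET @sql = N'
SELECT
    schema_count = COUNT_BIG(*)
FROM ' + @database_name_quoted + N'.sys.schemas AS s;';

EXEC sys.sp_executesql
    @sql;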

Always Unicode


Whenever I see parameters or variables created to hold dynamic SQL with varchar (and not nvarchar) as the type, I get real nervous.

Why? Because I know some lazy bones is about to concatenate a bunch of user input into their strings and then:

EXEC (@sql);

And that is unsafe, my friends. Very unsafe. This is where unfunny people will tell an unfunny joke about Bobby Tables.

When you use sp_executesql to write and run the safe, parameterized dynamic SQL that you should, it expects unicode inputs for both the query that you execute and the parameter definitions.

DECLARE
    @s varchar(MAX) = 'SELECT d.* FROM sys.databases AS d WHERE d.database_id = @d;',
    @p varchar(MAX) = '@d integer',
    @d integer = 1;

EXEC sys.sp_executesql
    @s,
    @p,
    @d;

This will fail with the error:

Msg 214, Level 16, State 2, Procedure sys.sp_executesql, Line 1 [Batch Start Line 0]

Procedure expects parameter ‘@statement’ of type ‘ntext/nchar/nvarchar’.

And then with the error:

Msg 214, Level 16, State 3, Procedure sys.sp_executesql, Line 1 [Batch Start Line 0]

Procedure expects parameter ‘@params’ of type ‘ntext/nchar/nvarchar’.

Because we did not follow instructions.

This, however, will run wonderfully, and keep us safe from unfunny jokes about Bobby Tables.

DECLARE
    @s nvarchar(MAX) = N'SELECT d.* FROM sys.databases AS d WHERE d.database_id = @d;',
    @p nvarchar(MAX) = N'@d integer',
    @d integer = 1;

EXEC sys.sp_executesql
    @s,
    @p,
    @d;

And besides, it’s good form. You never know when something unicodey may sneak into your string, and you don’t want to end up with question marks.

Concatenation Street


One big problem, and one that can happen suddenly, out of nowhere, without any rhyme or reason, and only on one server, and when you test it on another server it won’t happen, is string truncation because of implicit conversion.

See, when you put strings together, even if you declare the base parameter or variable to hold the string as a max data type, shorter strings will somehow cause SQL Server to decide that the result will be… something else.

Even more annoying is that even if you put an outer CONVERT(nvarchar(max), everything) around the whole ordeal, one string concatenation inside will still mess up the whole works.

If you look at the dynamic SQL that I write, you’ll notice that certain shorter strings are surrounded with converts to a max type.

This is why you’ll see code that looks like this:

+
CONVERT
(
    nvarchar(MAX),
    CASE @new
         WHEN 1
         THEN
N'
qsp.plan_forcing_type_desc,
qsp.force_failure_count,
qsp.last_force_failure_reason_desc,
w.top_waits,'
         ELSE
N''
    END
)
+

This is also why I go heavy on having debugging to print strings out: because you can usually tell where the implicit conversion happened by which part of the string gets cut off.

Formation


Formatting dynamic SQL is especially important. Remember when I blogged about formatting T-SQL?

That goes doubly-double for dynamic SQL. I’ll often write and format the query first, and then make it dynamic so that SQL Prompt can work its 80% magic.

That dynamic SQL I showed you up there? I’d never put that out into the world for people to deal with. Rude.

It would look way more like this:

DECLARE
    @s nvarchar(MAX) = N'
SELECT 
/*I came from Erik Darling''s awesome stored procedure*/
    d.* 
FROM sys.databases AS d 
WHERE d.database_id = @d;
',
    @p nvarchar(MAX) = N'@d integer',
    @d integer = 1;

RAISERROR(@s, 0, 1) WITH NOWAIT;

EXEC sys.sp_executesql
    @s,
    @p,
    @d;
  1. It’s formatted nicely
  2. It’s aligned so that it pretty prints in RAISERROR
  3. There are leading and trailing new lines so it separates nicely from other returned messages
  4. There’s a comment to tell me where it came from

Now if only there were a real dynamic SQL type so I didn’t have to use two apostrophes in there.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that, and need to solve database performance problems quickly. You can also get a quick, low cost health check with no phone time required.

The Art Of The SQL Server Stored Procedure: Wrapper Stored Procedures

Happenstance


Wrapper stored procedures are useful for things like:

  • Transforming declared local variables into parameters (sketched after this list)
  • Preventing code from compiling when it isn’t used
  • Generating different query plans to deal with parameter sniffing
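
Here’s a stripped-down sketch of that first item, with hypothetical procedure names. The inner procedure turns what would otherwise be an unsniffable local variable into a parameter.

CREATE OR ALTER PROCEDURE
    dbo.TopScorers_Inner /*hypothetical procedures for the sketch*/
(
    @Score integer
)
AS
BEGIN
SET XACT_ABORT, NOCOUNT ON;

    SELECT
        p.OwnerUserId,
        total_score = SUM(p.Score)
    FROM dbo.Posts AS p
    WHERE p.Score >= @Score /*a parameter the optimizer can sniff*/
    GROUP BY
        p.OwnerUserId;
END;
GO

CREATE OR ALTER PROCEDURE
    dbo.TopScorers
AS
BEGIN
SET XACT_ABORT, NOCOUNT ON;

    DECLARE
        @Score integer;

    SELECT
        @Score = MAX(p.Score) / 2 /*some value we can't know until runtime*/
    FROM dbo.Posts AS p;

    /*Passing the variable to the inner procedure makes it a parameter there*/
    EXEC dbo.TopScorers_Inner
        @Score = @Score;
END;
GO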

The upside of using this over dynamic SQL is that you have a convenient object name attached to the code.

The downside is that if stored procedures share code logic, you now have more to maintain. Likewise, if you have many IF...ELSE branches, you’ll have many more stored procedures to dig around in (though they’ll be a lot easier to identify in the plan cache and query store).

There are other handy things about stored procedures that dynamic SQL makes iffy, like permissions, and making sure developers don’t write the kind of code that makes headlines.

Stored procedures also aren’t a good fit for all the “kitchen sink” queries that use a lot of optional parameters. Dynamic SQL is king here, because creating a stored procedure for every permutation of parameter combinations is a bigger no than well done steak.

To learn how to deal with those, watch these videos.

Thanks for reading (and watching)!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that, and need to solve database performance problems quickly. You can also get a quick, low cost health check with no phone time required.