SQL Server Community Tools: Using sp_QuickieStore To Find Your Worst Performing Queries

Mind Loss


Microsoft has invested some engineering time in the plumbing behind Query Store in SQL Server 2022. Really cool stuff, like the ability to add hints to a query and force it to use the plan with that hint in place.

That’s going to solve a crazy amount of problems for me, with queries that I can’t actually touch (and not because they’re priceless works of art).

But… the front end of Query Store still hasn’t changed. It’s clunky, it’s ugly, it’s not very configurable, and I find it downright unfriendly.

It can also be really slow and, golly and gosh, the number of times I’ve seen the queries that fill in the GUI show up in there is sort of depressing.

So I wrote sp_QuickieStore to fill in the gaps. No, it doesn’t populate a GUI (I don’t have those chops), but it does get you actionable results pretty quickly.

Explain Plan


By default, sp_QuickieStore will give you the top ten queries in query store by average CPU over the last 24 hours. I’m going to talk about other things you can do with it later this week.

For now, let’s just look at the first thing you see when you run it without any additional parameters. Most folks will stick sp_QuickieStore in the master database, but Query Store can only be turned on in user databases.

Of course, sp_QuickieStore has a parameter to tell it which database you want to analyze (@database_name). It’d be utterly insane for me to ask you, dear user, to install it in every user database.
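If it’s living in master, a call might look something like this (the database name here is just a placeholder):

EXEC dbo.sp_QuickieStore
    @database_name = N'YourDatabase';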

The nice thing is that if you run sp_QuickieStore from a user database context, it will assume that that’s the database you want to analyze Query Store in.

EXEC sp_QuickieStore;

Right up front, you get the stuff that helps you figure out if you want to dig any deeper.


There’s a lot more information if you keep scrolling to the right that’ll tell you about resource usage, but here’s what you get:

  • query_id: how Query Store identifies the query text
  • plan_id: how Query Store identifies the query plan
  • all_plan_ids: if your query has generated multiple plans, you’ll get a CSV list of them here
  • execution_type_desc: if your query ran successfully or not
  • object_name: if your query came from a stored procedure
  • query_sql_text: the query text, as clickable XML
  • compatibility_level: uh… compatibility level
  • query_plan: the query plan, as clickable XML
  • plan_forcing_type_desc: if Query Store is forcing a plan
  • top_waits: the high-level wait stats that your query has generated
  • first_execution_time: um… c’mon
  • last_execution_time: don’t make me say it
  • count_executions: oh gosh darn it to heck.

By The Numbers


There’s plenty for you to think about up there. Most folks know if they care about something by looking at some combination of object_name and query_sql_text. Sometimes count_executions will come into play.

Other times, you might have no idea what you’re looking at or why it’s showing up here. And baby. Baby, baby, baby. I am here for you.


These results are sorted by average CPU (that’s the default, remember), but there’s plenty of other memes here like logical reads for you to nod at sagely.

Something for everyone, really.

All this stuff is nice, but… Maybe you need something more. Maybe you’re searching for something in particular, maybe you want the results to look a little different, or uh… maybe you want to be an expert.

I would also love to be an expert. I would tell people expert things like “don’t throw eggs”.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that, and need to solve database performance problems quickly. You can also get a quick, low cost health check with no phone time required.

SQL Server Community Tools: Capturing Query Wait Stats With sp_HumanEvents

Paladin


I have sort of a love/hate relationship with wait stats scripts and analysis. Sometimes they’re great to correlate with larger performance problems or trends, and other times they’re totally useless.

When you’re looking at wait stats, some important things to figure out are:

  • How much of a wait happened compared to uptime
  • If the waits lasted a long time on average

And you can do that out of the box with SQL Server. What you can’t get are two very important things:

  • When the waits happened
  • Which queries caused the waits

This stuff is vitally important for figuring out if wait stats are benign overall to the workload.

For example, let’s say your server has been up for 100 hours, and you spent 50 hours waiting on PAGEIOLATCH_SH waits. Normally I’d be pretty worried about that, and I’d be looking at if the server has enough memory, if queries are asking for big memory grants, if important queries are missing any indexes, etc.

But if we knew that all 50 of those hours were outside of normal user activity, and maybe even happened in a separate database for warehousing or archiving off data, we might be able to ignore it and focus on other portions of the workload.
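If you want those out-of-the-box numbers to sanity-check against, here’s a minimal sketch that compares one wait type to uptime using sys.dm_os_wait_stats and sys.dm_os_sys_info (the PAGEIOLATCH_SH filter just matches the example above):

SELECT
    ws.wait_type,
    wait_hours = ws.wait_time_ms / 1000. / 60. / 60.,
    uptime_hours = DATEDIFF(SECOND, si.sqlserver_start_time, SYSDATETIME()) / 60. / 60.,
    avg_wait_ms = ws.wait_time_ms * 1. / NULLIF(ws.waiting_tasks_count, 0)
FROM sys.dm_os_wait_stats AS ws
CROSS JOIN sys.dm_os_sys_info AS si
WHERE ws.wait_type = N'PAGEIOLATCH_SH';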

Dorking


With sp_HumanEvents, you can get all of those things!

EXEC sp_HumanEvents 
    @event_type = 'waits',
    @seconds_sample = 60;

When this finishes running, you’ll get three results back:

  • Overall wait stats for the period of time
  • Wait stats broken down by database for the period of time
  • Wait stats broken down by database and query for the period of time

And because I don’t want to leave you hanging, you’ll also get details about the waits themselves, like

  • How much of a wait happened compared to sampled time
  • How long the waits lasted on average in the sampled time

If you need to figure out which queries are causing wait stats that you’re worried about, this is a great way to get started with that investigation.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that, and need to solve database performance problems quickly. You can also get a quick, low cost health check with no phone time required.

SQL Server Community Tools: Capturing Which Queries Are Recompiling And Why With sp_HumanEvents

Classic Espionage


Like query compilations, query recompilations can be annoying. The bigger difference is that even occasional recompiles can introduce a bad query plan.

If your monitoring tools or scripts are warning you about high compiles or recompiles, sp_HumanEvents can help you dig in further.

We talked about compilations yesterday (and, heck, maybe I should have added that point in there, but hey), so today we’ll talk about recompilations.

There are a lot of reasons why a query might recompile:

  • Schema changed
  • Statistics changed
  • Deferred compile
  • Set option change
  • Temp table changed
  • Remote rowset changed
  • For browse permissions changed
  • Query notification environment changed
  • PartitionView changed
  • Cursor options changed
  • Option (recompile) requested
  • Parameterized plan flushed
  • Test plan linearization
  • Plan affecting database version changed
  • Query Store plan forcing policy changed
  • Query Store plan forcing failed
  • Query Store missing the plan
  • Interleaved execution required recompilation
  • Not a recompile
  • Multi-plan statement required compilation of alternative query plan
  • Query Store hints changed
  • Query Store hints application failed
  • Query Store recompiling to capture cursor query
  • Recompiling to clean up the multiplan dispatcher plan

That list is from SQL Server 2022, so there are going to be some listed here that you might not see just yet.

But let’s face it, the reasons you’re gonna see most often are probably:

  • Schema changed
  • Statistics changed
  • Temp table changed
  • Option (recompile) requested

Mad Dog


To capture which queries are recompiling in a certain window, I’ll usually do something like this:

EXEC sp_HumanEvents 
    @event_type = 'recompiles',
    @seconds_sample = 30;

Sometimes recompiles can be good:

  • Schema changed: use a new index that suits the query better
  • Statistics changed: use newer statistics that more accurately reflect column data
  • Temp table changed: use a new histogram for a temp table more relevant to the query
  • Option (recompile) requested: burn it flat, salt the earth

But of course, there’s always an element of danger, danger when a query starts using a new plan. What if it sucks?

To cut down on recompiles, you can use this stuff:

  • Plan Guides
  • Query Store forced plans
  • KEEP PLAN / KEEPFIXED PLAN query hints
  • Stop using recompile hints?
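For the hint route, here’s a minimal sketch: the KEEPFIXED PLAN query hint tells SQL Server not to recompile this statement when statistics change (the table and predicate are stand-ins):

SELECT
    records = COUNT_BIG(*)
FROM dbo.Users AS u
WHERE u.Reputation = 2
OPTION (KEEPFIXED PLAN);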

One thing that can be a pretty big bummer about recompiles is that, if you’re relying solely on the plan cache to find problem queries, they can leave you with very little (or zero) evidence about what queries are getting up to.

Query Store and some monitoring tools will capture them, so you’re better off using those for more in-depth analysis.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that, and need to solve database performance problems quickly. You can also get a quick, low cost health check with no phone time required.

SQL Server Community Tools: Capturing Which Queries Are Compiling With sp_HumanEvents

Compilation Game


One thing that I have to recommend to clients on a fairly regular basis is to enable Forced Parameterization. Many vendor applications send over queries that aren’t parameterized, or without strongly typed parameters, and that can make things… awkward.

Every time SQL Server gets one of those queries, it’ll come up with a “new” execution plan, cache it, and blah blah blah. That’s usually not ideal for a lot of reasons.
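If you’re curious whether a database already has forced parameterization turned on, sys.databases will tell you (a quick sketch):

SELECT
    d.name,
    d.is_parameterization_forced
FROM sys.databases AS d
ORDER BY d.name;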

There are potentially less tedious ways to figure out which queries are causing problems, by looking in the plan cache or Query Store.

But, you know, sometimes the plan cache isn’t reliable, and sometimes Query Store isn’t turned on.

And so we have sp_HumanEvents!

Easy Street


One way to start getting a feel for which queries are compiling the most, along with some other details about compilation metrics and parameterization, is to do this:

EXEC sp_HumanEvents 
    @event_type = 'compilations',
    @seconds_sample = 30;

Newer versions of SQL Server have an event called query_parameterization_data, described like so:

"Fired on compile for every query relevant for determining if forced parameterization would be useful for this database."

If you start monitoring compilations with sp_HumanEvents you’ll get details from this event back as well, as long as it’s available in your version of SQL Server.
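If you’re not sure whether your build ships the event, a quick check against sys.dm_xe_objects will tell you (just a sketch):

SELECT
    xo.name,
    xo.description
FROM sys.dm_xe_objects AS xo
WHERE xo.object_type = N'event'
AND   xo.name = N'query_parameterization_data';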

You can find all sorts of tricky application problems with this event setup.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that, and need to solve database performance problems quickly. You can also get a quick, low cost health check with no phone time required.

SQL Server Community Tools: Capturing Query Performance Problems With sp_HumanEvents

Cumulative Noopdate


In yesterday’s post, I talked through how I capture blocking using sp_HumanEvents. Today I’m going to talk about a couple different ways I use it to capture query performance issues.

One thing I want to stress is that you shouldn’t use yesterday’s technique (a semi-permanent session) to gather query performance data, because sp_HumanEvents captures actual execution plans, and that can really bog a server down if it’s busy.

I tend to use it for short periods of time, or for very targeted data collection against a single stored procedure or session id running things.

I’ve occasionally toyed with the idea of adding a flag to not get query plans, or to use a different event to get them.

I just don’t think there’s enough value in that to be worthwhile since the actual execution plan has so many important details that other copies do not.

So anyway, let’s do a thing.

Whole Hog


You can totally use sp_HumanEvents to grab absolutely everything going on like this:

EXEC sp_HumanEvents 
    @event_type = 'query', 
    @query_duration_ms = 5000, 
    @seconds_sample = 20;

You may need to do this in some cases when you’re first getting to know a server and need to get a feeling for what’s going on. This will show you any query that takes 5 seconds or longer in the 20 second window the session is alive for.

If you’re on a really busy server, it can help to cut down on how much you’re pulling in:

EXEC sp_HumanEvents 
    @event_type = 'query', 
    @query_duration_ms = 5000, 
    @seconds_sample = 20,
    @sample_divisor = '5';

This will only pull in data from sessions if their spid is divisible by 5. The busier your server is, the weirder you might want to make this number, like 15/17/19 or something.

Belly


Much more common for me is to be on a development server, and want to watch my spid as I execute some code:

EXEC sp_HumanEvents
    @event_type = 'query',                   
    @query_duration_ms = 10000,               
    @session_id = N'58',                    
    @keep_alive = 1;

This is especially useful if you’re running a big long stored procedure that calls a bunch of other stored procedures, and you want to find all the statements that take a long time without grabbing every single query plan.

If you’ve ever turned on Actual Execution Plans and tried to do this, you might still be waiting for SSMS to become responsive again. It’s really painful.

By only grabbing query details for things that run a long time, you cut out all the little noisy queries that you can’t really tune.

I absolutely adore this, because it lets me focus in on just the parts that take a long time.

Shoulder


One pretty common scenario is for clients to give me a list of stored procedures to fix. If they don’t have a monitoring tool, it can be a challenge to understand things like:

  • How users call the stored procedure normally
  • If the problem is parameter sniffing
  • Which queries in the stored procedure cause the biggest problems

We can do that like so:

EXEC sp_HumanEvents 
    @event_type = 'query', 
    @query_duration_ms = 5000, 
    @keep_alive = 1,
    @object_schema = 'dbo',
    @object_name = 'TheWorstStoredProcedureEverWritten';

This will only collect sessions executing a single procedure. I’ll sometimes do this and work through the list.

Hoof


There are some slight differences in how I call the procedure in different circumstances.

  • When I use the @seconds_sample parameter, sp_HumanEvents will run for that amount of time and then spit out a result
  • When I use the @keep_alive parameter, all that happens is a session gets created, and you need to go watch the live data coming into it (in SSMS, right-click the session under Extended Events and choose Watch Live Data)

Just make sure you do that before you start running your query, or you might miss something important.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that, and need to solve database performance problems quickly. You can also get a quick, low cost health check with no phone time required.

SQL Server Community Tools: Capturing Blocking With sp_HumanEvents

Month Of Sundays


A while back, Brent blogged about using September to raise awareness about free SQL Server community tools.

I thought this was a great idea because I maintain a number of them: my own scripts, the First Responder Kit scripts, and sp_WhoIsActive.

Unlike most people who left links in the comments, I read the entire post and decided to use the whole darn month to write about scripts I maintain and how I use them in my work.

Lest I be accused of not being able to read a calendar, I know that these are dropping a little earlier than the 1st of September. I do apologize for September not starting on a Monday.

There are other great tools and utilities out there, like Andy Mallon’s DBA Utility Database, but I don’t use them enough personally to be able to write about them fluently.

My goal here is to help you use each script with more confidence and understanding. Or even any confidence and understanding, if none existed beforehand.

Oral Board


First up is (well, I think) my most ambitious script: sp_HumanEvents. If you’re wondering why I think it’s so ambitious, it’s because the goal is to make Extended Events usable by Humans.

At present, that’s around 4000 lines of T-SQL. Now, sp_HumanEvents can do a lot of stuff, including logging event data to tables for a bunch of different potential performance issues.

When I was first writing this thing, I wanted it to be able to capture data in a few different ways to fit different monitoring scenarios. In this post, I’m going to show you how I most often use it on systems that are currently experiencing blocking.

First, you need to have the Blocked Process Report enabled, which is under advanced options:

EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'blocked process threshold', 5;
RECONFIGURE;

If you want to flip the advanced options setting back, you can. I usually leave it set to 1.

The second command turns on the blocked process report, and tells SQL Server to log any instances of blocking going on for 5 or more seconds. You can adjust that to meet your concerns with blocking duration, but I wouldn’t set it too low because there will be overhead, like with any type of monitoring.

Blockeroos


The way I usually set up to look at blocking that’s currently happening on a system — which is what I most often have to do — is to set up a semi-permanent session and watch what data comes in.

When I want to parse that data, I use sp_HumanEventsBlockViewer to do that. At first, I just want to see what kind of stuff starts coming in.

To set that session up, here’s what I do:

EXEC sp_HumanEvents
    @event_type = 'blocking',                   
    @keep_alive = 1;

What this will do is set up an Extended Event session to capture blocking from the Blocked Process Report. That’s it.

From there, you can either use my script (linked above), or watch data coming in via the GUI. Usually I watch the GUI until there’s some stuff in there to gawk at:

EXEC dbo.sp_HumanEventsBlockViewer
    @session_name = 'keeper_HumanEvents_blocking';

Blockerinos


Once you have the blocking collected, troubleshooting it is a whole… Fun… thing.

  • Misguided use of transactions
  • Misguided use of isolation levels
  • Misguided use of locking hints

These are common reasons behind blocking, but not the only ones. Common solutions are things like:

  • Making sure foreign keys have supporting indexes, especially ones with cascading actions
  • Getting angry at Triggers and throwing your computer out the window
  • Making sure modification queries have adequate supporting indexes
  • Batching modifications on large chunks of data
  • Enabling an optimistic isolation level because Read Committed Is A Garbage Isolation Level
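For that last one, here’s a minimal sketch of flipping on Read Committed Snapshot Isolation; the database name is a placeholder, and you’ll want to understand the tempdb version store impact before doing this on a real server:

ALTER DATABASE [YourDatabase]
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;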

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that, and need to solve database performance problems quickly. You can also get a quick, low cost health check with no phone time required.

Correlating Data From sp_WhoIsActive to Query Store Or The Plan Cache

sp_QuickiePost


If you’re the type of person who logs sp_WhoIsActive to a table to capture executing queries, you may want to find some additional details about the statements that end up there.

Out of the box, it’s arduous, tedious, and cumbersome to click around on a bunch of columns and grab handles and hashes and blah blah.

Now, these two queries depend on you grabbing a couple of specific columns in your output: query_plan and additional_info. If you’re not getting those, you’re kinda screwed.

From query plans, you can get the query hash and query plan hash:

WITH XMLNAMESPACES(DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT
   session_id,
   query_plan,
   additional_info,
   query_hash = 
       q.n.value('@QueryHash', 'varchar(18)'),
   query_plan_hash = 
       q.n.value('@QueryPlanHash', 'varchar(18)')
FROM dbo.WhoIsActive AS w
CROSS APPLY w.query_plan.nodes('//StmtSimple') AS q(n);

From additional info, you can get the SQL Handle and Plan Handle:

SELECT
  session_id,
  query_plan,
  additional_info,
  sql_handle =
      w.additional_info.value('(//additional_info/sql_handle)[1]', 'varchar(131)'),
  plan_handle = 
      w.additional_info.value('(//additional_info/plan_handle)[1]', 'varchar(131)')
FROM dbo.WhoIsActive AS w;

Causation


For the plan cache, you can use your favorite script. Mine is, of course, sp_BlitzCache.

You can use the @OnlyQueryHashes or @OnlySqlHandles parameters to filter down to queries you’re interested in.
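As a hypothetical example, with a query hash copied out of the logged table, that might look like this (the hash value here is made up):

EXEC dbo.sp_BlitzCache
    @OnlyQueryHashes = '0x1AB614B461F4D769';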

For Query Store, you can use my script sp_QuickieStore to do the same thing.

It has parameters for @include_query_hashes, @include_plan_hashes or @include_sql_handles.
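Same idea here, sketched with a made-up hash and a placeholder database name:

EXEC dbo.sp_QuickieStore
    @database_name = N'YourDatabase',
    @include_query_hashes = '0x1AB614B461F4D769';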

You might want to add some other filtering or sorting to the queries up there to find what you’re interested in, but this should get you started.

I couldn’t find a quick or easy way to combine the two queries, since we’re dealing with two different columns of XML data, and the query plan XML needs a little special treatment to be queried.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that, and need to solve database performance problems quickly. You can also get a quick, low cost health check with no phone time required.

Some Interesting Questions And Answers Of Mine On Database Administrator’s Stack Exchange About SQL Server

Assurance, Assurance


Here’s a quick roundup on some of the more interesting Q&A I’ve taken part in on dba.stackexchange.com over the past few months.

Whew. That’s not even all of it. I should get out more.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that, and need to solve database performance problems quickly. You can also get a quick, low cost health check with no phone time required.

How Microsoft Could Make Problems In Execution Plans Easier To Understand

Round Up


Execution plans have come a long way over the years, gradually adding more and more details as computing power becomes less of a hurdle to collecting metrics.

The thing is, it’s not always obvious where to look or dig deeper into a query plan to figure out where problems are.

Right now, there are some warnings:

  • At the root operator for a few different things
  • For memory consuming operators when they spill

But there are some other things in query plans that should be loud and clear, because they’re not going to be obvious to folks just getting started out reading query plans.

Non-SARGable Predicates

These can cause a lot of issues:

  • Unnecessary scans
  • Poor cardinality estimates

Non-SARGability is primarily caused by:

  • function(column) = something
  • column + column = something
  • column + value = something
  • value + column = something
  • column = @something or @something IS NULL
  • column like '%something'
  • column = case when …
  • value = case when column…
  • Mismatching data types (implicit conversion)

The thing is, it’s hard to see where this stuff happens in a plan, unless the plan is very small, or you’re looking directly at the query text, which is often truncated when pulled from a query plan. It would be nice if we got a warning of some sort on operators where this happened.
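To make the first item on that list concrete, here’s a small sketch; the table and column are stand-ins, but the pattern is the point:

/* Non-SARGable: the function wrapped around the column hides it from index matching */
SELECT
    records = COUNT_BIG(*)
FROM dbo.Users AS u
WHERE YEAR(u.CreationDate) = 2013;

/* SARGable rewrite: the bare column compared to a range can use an index seek */
SELECT
    records = COUNT_BIG(*)
FROM dbo.Users AS u
WHERE u.CreationDate >= '20130101'
AND   u.CreationDate < '20140101';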

Predicates That Result In Scans

If you write a where clause, but don’t have an index with a key that matches that where clause, sometimes you’ll get a missing index request and sometimes you won’t. It’s a bit of a gamble of course.

For large tables, this can be painful, burn a lot of CPU, and result in a parallel plan where you could get by without one if you had a better index in place.


Of course, not every scan has a predicate: think joins without a where clause, or where only one table has a predicate against it. You don’t have much choice but to scan an index.

Eager Index Spools

Sometimes SQL Server wants an index so badly that it creates one on its own for you. When this happens on a large enough table, you can spend an awful lot of time waiting for it.

You know like when you put something in the microwave and you’re standing there staring at the timer and even though you set it for two minutes it seems to hang out at 1:30 forever? That’s what an Eager Index Spool is like. A Hungry Man Dinner that you microwave for an hour but still comes out with ice around the edges of your Salisbury Steak.


Okay, I stretched that one a bit. But here’s the thing: If SQL Server is gonna spend all that time creating a temporary index for you, it should tell you. Maybe a missing index request, maybe a warning on the spool itself. Just… anything that would help alert more casual execution plan observers to the fact that an index might not be the worst idea, here.

Why Indexes Weren’t Used

I know you. You create indexes all the time, then for some strange reason your queries don’t use them, or stop using them.

When SQL Server optimizes a query, part of the flow chart is a pit stop called index matching. At this point, SQL Server looks at available indexes and then chooses to use or not use them based on various pieces of feedback.

Sometimes it’s obvious why an index wasn’t used, like if it only covers a portion of the query, or if the key columns weren’t in the best order. Other times, it’s really unclear.

It would be nice if we had reasons for that available, even if it’s only in actual plans.

Louder Warnings For Deeper Problems

Right now, SQL Server buries some information that can be really important to why a query didn’t perform well:

  • When estimated and actual rows or executions are way off
  • When something forces a query to run serially
  • When operators execute more than once (including rebinds and rewinds)
  • When rows are badly skewed across parallel threads

The thing is, like a lot of these other items on this list, it takes real digging to figure out if any of them apply to you, and if they’re why your query slowed down. They just need some basic visual indicators to draw attention to them at the right times.

Different Per-Operator Details

When you look at each individual operator in an actual execution plan, you get sort of a confusing story:

  • Estimated cost
  • Wall clock time
  • Actual rows
  • Estimated rows
  • Percent of actual to estimated rows

I’d throw out some of that, and show:

  • CPU time
  • Wall clock time
  • Actual Rows
  • Actual Executions
  • Percent of actual to estimated

It would also be nice to have per-operator wait stats at this juncture, since we’d need to know why there’s a discrepancy between CPU and wall clock time, e.g. because of blocking or waiting on some other resource.

While we’re talking about all this, it might be helpful to consider the direction plans show their work. Right to left for data and left to right for logic are… fine. I guess. But up and down might make more sense. A lot of folks I know have a tough time understanding when things happen in horizontal execution plans, where vertical plans would be far more clear.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that, and need to solve database performance problems quickly. You can also get a quick, low cost health check with no phone time required.

Different Ways To Parameterize Queries In SQL Server

The Importance Of Being Parameterized


Parameterization has many benefits for queries.

But first, let’s cover two things that aren’t exactly parameters!

  1. Local Variables
  2. Unsafe Dynamic SQL

There’s much more detail to each of those, but local variables are any variable that you declare inside a code block, e.g.

DECLARE
    @a_local_variable some_data_type;

And unsafe dynamic SQL is when parameters or local variables are concatenated into a string like so:

@sql += N'AND H.user_name = ''' + @injectable + ''';';

Note the series of single quotes and + operators (though the same would happen if you used the CONCAT function), and that square brackets alone won’t save you.

Now let’s talk about actual parameterization.

The same concept applies to ORM queries, but I can’t write that kind of code so go to this post to learn more about that.

Stored Procedures


The most obvious way is to use a stored procedure.

CREATE OR ALTER PROCEDURE
    dbo.Obvious
(
    @ParameterOne int
)
AS
BEGIN
SET NOCOUNT, XACT_ABORT ON;

    SELECT
        records = COUNT_BIG(*)
    FROM dbo.Users AS u
    WHERE u.Id = @ParameterOne;

END;

There are millions of upsides to stored procedures, but they can get out of hand quickly.

Also, the longer they get, the harder it can become to troubleshoot individual portions for performance or logical issues.

Developers without a lot of SQL experience can make a ton of mistakes with them, but don’t worry: young, good-looking consultants are standing by to take your call.

Inline Table Valued Functions


There are other kinds of functions in SQL Server, but these are far and away the least full of performance surprises.

CREATE OR ALTER FUNCTION
    dbo.TheOnlyGoodKindOfFunction
(
    @ParameterOne int
)
RETURNS table
AS
RETURN

    SELECT
        records = COUNT_BIG(*)
    FROM dbo.Users AS u
    WHERE u.Id = @ParameterOne;
GO

Both scalar and multi-statement types of functions can cause lots of issues, and should generally be avoided when possible.

Inline table valued functions are only as bad as the query you put in them, but don’t worry: young, good-looking consultants are standing by to take your call.

Dynamic SQL


Dynamic SQL gets a bad rap from people who have:

  1. No idea what they’re talking about
  2. All the wrong kinds of experience with it

DECLARE
    @sql nvarchar(MAX) = N'',
    @ParameterOne int;

SELECT
    @sql += N'
    SELECT
        records = COUNT_BIG(*)
    FROM dbo.Users AS u
    WHERE u.Id = @ParameterOne;	
    ';

EXEC sys.sp_executesql
    @sql,
    N'@ParameterOne int',
    @ParameterOne;

This kind of dynamic SQL is just as safe and reusable as stored procedures, but far less flexible. It’s not that you can’t cram a bunch of statements and routines into it, it’s just not advisable to get overly complicated in here.

Note that even though we declared @ParameterOne as a local variable, we pass it to the dynamic SQL block as a parameter, which makes it behave correctly. This is also true if we were to pass it to another stored procedure.

Dynamic SQL is only as bad as the query you put in it, but don’t worry: young, good-looking consultants are standing by to take your call.

Forced Parameterization


Forced parameterization is a great setting. It’s unfortunate that everyone thinks they want to turn on optimize for adhoc workloads, which is a pretty useless setting.

You can turn it on like so:

ALTER DATABASE [YourDatabase] SET PARAMETERIZATION FORCED;

Forced parameterization will take queries with literal values and replace them with parameters to promote plan reuse. It does have some limitations, but it’s usually a quick fix to constant-compiling and plan cache flushing from unparameterized queries.

Deciding whether or not to turn on this feature can be tough if you’re not sure what problem you’re trying to solve, but don’t worry: young, good-looking consultants are standing by to take your call.

Other


SQL Server may attempt simple parameterization in some cases, but this is not a guaranteed or reliable way to get the majority of the queries in your workload parameterized.

In general, the brunt of the work falls on you to properly parameterize things. Parameters are lovely things, which can even be output and shared between code blocks. Right now, views don’t accept parameters as part of their definitions, so they won’t help you here.
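Here’s a minimal sketch of that output parameter idea, reusing the dbo.Obvious pattern from above; the procedure and parameter names are made up for illustration:

CREATE OR ALTER PROCEDURE
    dbo.ObviousOutput
(
    @ParameterOne int,
    @records bigint OUTPUT
)
AS
BEGIN
SET NOCOUNT, XACT_ABORT ON;

    SELECT
        @records = COUNT_BIG(*)
    FROM dbo.Users AS u
    WHERE u.Id = @ParameterOne;

END;
GO

DECLARE @records bigint;

EXEC dbo.ObviousOutput
    @ParameterOne = 1,
    @records = @records OUTPUT;

SELECT
    records = @records;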

Figuring out the best thing to use and when to use it can be tough, but don’t worry: young, good-looking consultants are standing by to take your call.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that, and need to solve database performance problems quickly. You can also get a quick, low cost health check with no phone time required.