I’ve Got A Full Training Day At SQLBits: The Professional Performance Tuning Blueprint

The Professional Performance Tuning Blueprint


Searching the internet for every problem isn’t cutting it. You need to be more proactive and efficient when it comes to finding and solving database performance fires.

I work with consulting customers around the world to put out SQL Server performance fires. In this day of learning, I will teach you how to find and fix your worst SQL Server problems using the same modern tools and techniques which I use every week.

You’ll learn tons of new and effective approaches to common performance problems, how to figure out what’s going on in your query plans, and how indexes really work to make your queries faster. Together, we’ll tackle query rewrites, batch mode, how to design indexes, and how to gather all the information you need to analyze performance.

This day of learning will teach you cutting-edge techniques that you can’t find in training from folks who don’t spend time in the real world tuning performance. Performance tuning mysteries can easily leave you stumbling through your work week, unsure if you’re focusing on the right things. You’ll walk out of this class confident in your ability to fix performance issues once and for all.

If you want to put out SQL Server performance fires, this is the precon you need to attend. Anyone can have a plan; it takes a professional to have a blueprint.

See you there!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

Black Christmas Training Sale

Ho Ho Ho


For the month of December I’m putting my training on sale at 75% off.

That means you get over 24 hours of SQL Server performance tuning training content for around $100 USD when you click on this link and add everything to your cart. The discount applies at checkout, so you have to click through for it to apply.

The way my training is structured is to take you from where you are to where you want to be. There’s beginner, intermediate, and advanced stuff.

You pick where you want to start — but I guarantee no matter where you are you’ll learn something new — I don’t want to waste anyone’s time. There’s enough of that in the world.

What I cover is all of the things that confused, confounded, and cut me up when I first started learning about SQL Server querying, indexing, execution plans, locking and deadlocks, and more.

Hi Hi Hi


If you’re planning on attending SQLBits in London this March, you can catch me doing a full day session live and in person.

It’ll be nice to see those smiling faces again!

Enjoy, and have yourself a happy holiday.

Thanks for reading!

Is Read Uncommitted (NOLOCK) An Optimistic Or Pessimistic Isolation Level?

Poll Worker


I ran a Twitter poll recently because, as I was writing another blog post, I wasn’t quite sure whether to group read uncommitted/NOLOCK with the pessimistic or the optimistic isolation levels.

On the one hand, locking and blocking do still come into play with them. Anyone who has seen blocking on schema stability locks can attest to that. On the other hand, they will read just about anything you want. That’s sort of optimistic in spirit, but maybe not in letter.

Most of you kind folks out there called it optimistic, though some others had more colorful language to describe it.

At the end of the day, it’s probably not the right isolation level to use, no matter what you choose to call it.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

How To Tell If You Need To Partition Your Tables In SQL Server

You Probably Don’t


When I work with clients, nearly every single one has this burning question about partitioning.

“We’ve got this huge table, should we partition it?”

“Do you need to insert or delete data in big chunks?”

“No, it’s all transactional.”

“Do you have last page contention problems?”

“No, but won’t it help performance?”

“No, not unless you’re using clustered columnstore.”

“…”

Burial


Buried deep in that conversation are the two reasons you might want to partition a table:

  • Data management features
  • Clustered columnstore indexes

Included in “data management” is the ability to put different partitions in different files/filegroups, which you can make read only, and which allow you to do piecemeal backups and restores.
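
Here’s a rough sketch of what that looks like. The filegroup names, boundary dates, and RANGE RIGHT choice are all hypothetical, and YourDatabase stands in for whatever you’re working with:

CREATE PARTITION FUNCTION pf_order_year (datetime2(7))
AS RANGE RIGHT FOR VALUES ('20230101', '20240101');

CREATE PARTITION SCHEME ps_order_year
AS PARTITION pf_order_year
TO (archive_2022_fg, year_2023_fg, year_2024_fg);

/*Older, untouched data can live on a read only filegroup*/
ALTER DATABASE YourDatabase
    MODIFY FILEGROUP archive_2022_fg READ_ONLY;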

Outside of those, you’re only introducing a whole mess of complexity in a bunch of different places, not to mention some pretty severe pain rebuilding, or shadow-copying and switching, your current “huge table” over to a partitioned one.

Complexity


If you partition a table and plan on using it for data management, all of your other indexes need to be aligned to the partitioning scheme as well.

That adds complexity to index tuning, because if you mess up and don’t do that, your partition swapping will error out. A lot of people add DDL triggers to reject index definitions without the partition alignment.
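
For example, an aligned nonclustered index just gets created on the same partition scheme, using the partitioning column. This sketch assumes the hypothetical ps_order_year scheme from above and a hypothetical dbo.Orders table partitioned on order_date:

/*Aligned: created on the partition scheme, not a filegroup*/
CREATE INDEX ix_orders_customer_id
    ON dbo.Orders (customer_id)
    ON ps_order_year (order_date);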

On top of that, query plans and query writing both get more complicated. Not only that, but the query optimizer has to work harder to figure out whether partitions can be eliminated or not.

Problems that might have been minor against non-partitioned tables and indexes (like some implicit conversions) can be a death spiral against partitioned tables.

There have been a lot of bugs and performance regressions over the years with partitioning, too. Some have been fixed, and others are only “fixed” in higher compatibility levels.

Lonely Road


I’ve seen a lot of people partition tables, expecting performance fireworks and afternoon delights, only to find no joy. In some cases, I’ve had to help people undo partitioning because critical queries slowed down.

It seems like a “free” thing you can do, especially if you’re clueless about how to find and fix real problems in your workload. Like most marquee features in SQL Server, partitioning has an air of mystery and misunderstanding around what it excels at and what it’s best used for.

In most cases, SQL Server users seem to skip basic index tuning and go right to WE NEED TO PARTITION THIS THING.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

How To Tell You Need An Optimistic Isolation Level In SQL Server

Into The Yonder


When you create a database in SQL Server (everything except Azure SQL DB), you get this garbage isolation level called Read Committed.

That isolation level, along with others like Repeatable Read and Serializable, is considered pessimistic. Though Repeatable Read and Serializable are less garbage, they come with a strictness of locking that most applications don’t need across the board. They may need it for certain niche activities, but you know…

Then there are isolation levels that are quite useful for most workloads, and they’re called optimistic isolation levels. In SQL Server, they’re Snapshot Isolation (SI), and Read Committed Snapshot Isolation (RCSI).

I think they are very much not-garbage, and so do other major database platforms that use specific implementations of MVCC (Multi Version Concurrency Control) by default. There may be some historical reason for SQL Server not doing it by default, which is also garbage.

Differences


There are some differences between the two optimistic isolation levels, which make them useful in different situations. Most people don’t need both turned on (though I see both turned on quite a bit), but there just might be someone out there who turns on and actually uses both.

To generalize a little bit:

  • SI is good when you only want certain queries to read versioned data
  • RCSI is good when you want every query to read versioned data

What’s versioned data? You can think of it like the “last known good” version of a row before a modification started.

When an update or a delete starts to change data, SQL Server will send those last known good versions up to tempdb for read queries to grab what they need rather than getting blocked. Inserts are a little different, because they are the only known good version of a row.

There are some other differences, too.

SI:

  • Can be turned on without exclusive access to the database
  • Queries all read data as it looked at the beginning of a transaction

RCSI:

  • Does need exclusive access to the database, but it’s not as bad as it sounds
  • Reads data as it looked when each query in a transaction starts

Getting exclusive access to the database can be done without the single-user/multi-user dance:

ALTER DATABASE YourDatabase
    SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;
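
Snapshot Isolation works a little differently: you turn it on at the database level, then opt in per session or transaction. A minimal sketch, reusing the hypothetical YourDatabase and a made-up dbo.YourTable:

ALTER DATABASE YourDatabase
    SET ALLOW_SNAPSHOT_ISOLATION ON;

/*Then, in any session that wants versioned reads*/
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

BEGIN TRANSACTION;

    SELECT
        COUNT_BIG(*)
    FROM dbo.YourTable;

COMMIT TRANSACTION;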

If You Know You Know


The reason you might want to turn these on is that your application’s performance suffers because of locking or deadlocking.

If you want a quick and dirty way to figure out if you’ve got those happening, you can run these queries.

/*Lock waits?*/
SELECT
    dows.wait_type,
    dows.waiting_tasks_count,
    dows.wait_time_ms,
    dows.max_wait_time_ms,
    dows.signal_wait_time_ms
FROM sys.dm_os_wait_stats AS dows
WHERE dows.wait_type LIKE 'LCK%'
AND   dows.waiting_tasks_count > 0
ORDER BY dows.wait_time_ms DESC;

/*Deadlocks?*/
SELECT 
    p.object_name,
    p.counter_name,
    p.cntr_value
FROM sys.dm_os_performance_counters p
WHERE TRIM(p.counter_name) = 'Number of Deadlocks/sec'
AND   TRIM(p.instance_name) = '_Total';

If you need deeper analysis of waits or deadlocks, I’d suggest you use sp_BlitzFirst or sp_BlitzLock.

What you want to look for, in general, is readers and writers interfering with each other. If your blocking or deadlocking problems are between any kind of exclusive locks, optimistic isolation levels won’t help you.

Wait stats from readers will generally have an “S” in them, like LCK_M_S. The same goes for deadlocks, where the lock mode will have an S in either the owner or the waiter.

Reader Writer Fighter


It’s important to keep in mind that it’s not just writers that block readers, and it’s not only writers that end up in deadlocks; readers get involved in both.

This is where the “Shared” lock and lock mode stuff comes into play. Again, if all your locks and deadlocks are between modification queries — locks and lock modes with X (exclusive) or U (update) — they’ll still block each other.

There are a lot more details in the linked posts above, but that’s the general pattern. Another pattern to look for is whether your developers keep adding more and more NOLOCK hints to “fix performance issues”.

A lot of the time, they’re just covering up other issues with indexing or the way queries are written, or the hints are totally misunderstood. I’ve said it before, but NOLOCK doesn’t mean your query doesn’t take any locks; it means your query doesn’t respect locks taken by other queries.

That often comes as a surprise to people when I tell them, so I say it whenever I write about it. But that’s where the bad reputation comes from — it can read all sorts of in-flight data that may not reflect reality.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

SQL Server’s Filtered Indexes Don’t Always Have To Be Selective

Somewhere Between


I’ve had quite a bit of luck with filtered indexes over the years. When they work well, they’re absolute aces.

But there’s a bit of trickiness to them, both for getting the optimizer to match the expression in your query to the filter on the index (they have to be an exact match), and for getting it to choose that index (parameterization is a nuisance for this).

You also need to define them correctly, which means any columns that you’re filtering on need to reside in the key or included columns as well. Not doing that risks the index not being used.

One thing I hear quite a bit — I’ve probably even said it myself at times — is that unless the index filters the number of rows down an appreciable amount, the filter might be more trouble than it’s worth.

But what if…

Camel Wides


I see queries that sometimes have a lot of predicates. Where clause, join clause, the works.

When you have queries that reliably filter on some literal values, like bits or a status or something equivalent, you may want to consider filtering your index on those even if they’re not selective (there’s a quick sketch after this list) in order to accomplish two things:

  1. Reduce the number of candidate key columns
  2. Reduce the effort of applying residual predicates
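
Here’s a minimal sketch of the idea, using a hypothetical dbo.Orders table where queries always ask for unprocessed rows via an is_processed bit, even though that filter only cuts the table roughly in half. Note the filter column still shows up in the includes, per the definition advice above:

CREATE INDEX ix_orders_unprocessed
    ON dbo.Orders (customer_id, order_date)
    INCLUDE (order_total, is_processed)
    WHERE is_processed = 0;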

Designing effective indexes is tough, and it’s something I see a lot of people struggle with. Particularly with complex queries that filter, join, group by, order by, maybe even throw a window function in there for good measure…

You see what I’m getting at. For most OLTP queries, I want my indexes to be where clause centric. The faster I can filter rows down to find the ones I care about, the more problems I avoid later on.

Filtering indexes is just another way of doing that.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

Query Store Improvements I’d Like To See For SQL Server: Get The Full Query Text From The GUI

Spirited Away


This week’s posts are a series of short, and, admittedly lazy ones about improvements I’d like to see in Query Store.

I’m away from home for the Holiday Week, and I don’t have a lot of time to write at the moment. Normally I’d put these all into one post. I do hope you, dear reader, will forgive me. In the spirit of the holidays, or something.

Problem Five


People write big queries, whether it’s from an ORM, in a stored procedure, or any other vehicle for sending your ANSI torture to SQL Server’s humble ports. Unfortunately, query plans don’t store the full text.

If you’re looking at query plans in Query Store’s GUI for a big query, you get AN ABSOLUTELY ENORMOUS pop up window, with a preview of the query text. But only a preview.

You can even get the full text if you query the underlying views directly. But most people aren’t taking that dive.
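
Something along these lines does the trick, assuming you’ve grabbed the query id from the report you’re staring at:

SELECT
    qsq.query_id,
    qsqt.query_sql_text
FROM sys.query_store_query AS qsq
JOIN sys.query_store_query_text AS qsqt
    ON qsqt.query_text_id = qsq.query_text_id
WHERE qsq.query_id = 138; /*the query id from the GUI; yours will differ*/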

Often, getting the full query text is necessary for, you know… running the query without error. If it’s cut off, you can’t do that, and right now there’s no way to fetch it from the main dashboard screens.

That’s a major shortcoming, and fixing it would make life easier for anyone who uses Query Store for query tuning.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

Query Store Improvements I’d Like To See For SQL Server: Choose Queries To Ignore

Spirited Away


This week’s posts are a series of short, and, admittedly lazy ones about improvements I’d like to see in Query Store.

I’m away from home for the Holiday Week, and I don’t have a lot of time to write at the moment. Normally I’d put these all into one post. I do hope you, dear reader, will forgive me. In the spirit of the holidays, or something.

Problem Four


A lot of dumb things end up in Query Store that you can’t do anything about, but which skew the graphs you have to look at and make things hard to decipher.

For example:

  • Query Store queries from the GUI
  • Index rebuilds
  • Statistics updates
  • Index creation
  • Some dumb thing that runs at 3am that no one cares about

But there they are, acting like the biggest, baddest thing to ever touch SQL Server.

Except you know they’re not. And they have no business showing up when you know better. For as many options to tweak things as are getting added to Query Store, being able to quarantine some of this stuff would be a nice addition.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

Query Store Improvements I’d Like To See For SQL Server: Forced Plan Association

Spirited Away


This week’s posts are a series of short, and, admittedly lazy ones about improvements I’d like to see in Query Store.

I’m away from home for the Holiday Week, and I don’t have a lot of time to write at the moment. Normally I’d put these all into one post. I do hope you, dear reader, will forgive me. In the spirit of the holidays, or something.

Problem Three


Not every application is designed well. This may be shocking news to some of you.

One area where I consistently see application developers go wrong is with parameterization. Either their ORM doesn’t use parameters at all, or it only partially parameterizes queries.

When this happens, you can end up with a lot of duplicate plans both in your plan cache and in Query Store.

For queries that have no parameterization, the Forced Parameterization database setting can be very useful, but it doesn’t help with queries that are only partially parameterized.
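
If you haven’t run across it, it’s a single database-level switch, shown here against the hypothetical YourDatabase:

ALTER DATABASE YourDatabase
    SET PARAMETERIZATION FORCED;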

It would be absolutely stellar if Query Store had a mechanism for treating queries with the same query or plan hashes as “morally equivalent”, even if they’re not parameterized, and for forcing them all to use the same query plan when you have a good one.

This would solve many application problems that I see out there, where developers refuse to lift a finger to solve basic problems that end users end up feeling.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

Query Store Improvements I’d Like To See For SQL Server: Memory Grant Columns

Spirited Away


This week’s posts are a series of short, and, admittedly lazy ones about improvements I’d like to see in Query Store.

I’m away from home for the Holiday Week, and I don’t have a lot of time to write at the moment. Normally I’d put these all into one post. I do hope you, dear reader, will forgive me. In the spirit of the holidays, or something.

Problem Two


When you run a query, you might get a memory grant, for example to hash or sort data. In the query plan, there are a whole bunch of columns that give you information about the memory grant:

  • SerialRequiredMemory         
  • SerialDesiredMemory          
  • RequiredMemory               
  • DesiredMemory                
  • RequestedMemory              
  • GrantWaitTime                
  • GrantedMemory                
  • MaxUsedMemory                
  • MaxQueryMemory               
  • LastRequestedMemory          
  • IsMemoryGrantFeedbackAdjusted

And if you read the query plan documentation, you’ll get some explanation of them, too.

Provide memory grant estimate as well as actual runtime memory grant information. Serial required/desired memory attributes are estimated during query compile time for serial execution. The rest of attributes provide estimates and counters for query execution time considering actual degree of parallelism.

  • SerialRequiredMemory: Required memory in KB if the query runs in serial mode. The query will not start without this memory.
  • SerialDesiredMemory: Memory estimated to fit intermediate results in KB if the query runs in serial mode.
  • RequiredMemory: Required memory in KB for the chosen degree of parallelism. If the query runs in serial mode, this is the same as SerialRequiredMemory.
  • DesiredMemory: Memory estimated to fit intermediate results in KB for the chosen degree of parallelism. If the query runs in serial mode, this is the same as SerialDesiredMemory.
  • RequestedMemory: Memory in KB which the query requests the memory manager to grant. This can be smaller than sum of RequiredMemory and DesiredMemory if it exceeds the maximum allowed for single query.
  • GrantWaitTime: Time in seconds if the query has to wait for successful memory grant.
  • MaxUsedMemory: Maximum memory in KB used by the query.
  • MaxQueryMemory: Maximum memory in KB allowed for single query.
  • LastRequestedMemory: Memory in KB which was requested by the query from the memory manager during the last execution.
  • IsMemoryGrantFeedbackAdjusted: Information regarding if the memory grant in this plan is adjusted based on memory grant feedback.

In Query Store, you get limited memory grant information in query_store_runtime_stats. But if you look in dm_exec_query_stats, there’s a whole lot more.
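
Here’s a rough sketch of the difference. The column lists are trimmed down, and exact columns vary a little by version:

/*Query Store: only the "max used" flavor, in 8KB pages*/
SELECT
    qsrs.plan_id,
    qsrs.avg_query_max_used_memory,
    qsrs.last_query_max_used_memory,
    qsrs.max_query_max_used_memory
FROM sys.query_store_runtime_stats AS qsrs;

/*Plan cache: granted, used, and ideal grant sizes, in KB*/
SELECT
    deqs.last_grant_kb,
    deqs.max_grant_kb,
    deqs.last_used_grant_kb,
    deqs.max_used_grant_kb,
    deqs.last_ideal_grant_kb
FROM sys.dm_exec_query_stats AS deqs;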


It’s a much more complete view of the memory grant, and it’s a damn shame that you have to rely on the plan cache for this, because it’s so unreliable. Clearing out, dropping plans, recompiles, etc. etc. etc.

It would be marvelous if Query Store retained information like the size of the grant, so you can figure out if the amount of memory that it used was anywhere near reality. The ideal columns are not very useful to me, but the total and used are wonderful to compare.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.