Been There
A lot of the code to do this is in the sp_WhoIsActive documentation.
But since I have to do things a little bit differently, I decided to open source the way I do things.
You can get the full script here: sp_WhoIsActive Logging
That’s the full script to the SQL Server Agent job I use.
The big tweak I make is to create a new table for every day of logging. I dislike logging everything to one big table and automatically managing retention, for two reasons:
- You might delete something important
- It sucks querying giant tables full of query logging details
I stopped short of automating date constraints on each table and a view to union them all together. I just don't need that often enough.
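If you want a feel for how the per-day tables get built, here's a minimal sketch based on the @return_schema and @schema parameters from the sp_WhoIsActive documentation. The WhoIsActive_YYYYMMDD naming is just a placeholder for illustration; the linked script may do things a bit differently.

DECLARE
    @destination_table varchar(4000) =
        'WhoIsActive_' + CONVERT(varchar(8), GETDATE(), 112),
    @schema varchar(max);

/* Ask sp_WhoIsActive for a CREATE TABLE statement that matches
   the parameters you plan to log with */
EXEC sp_WhoIsActive
    @get_transaction_info = 1,
    @get_outer_command = 1,
    @get_plans = 1,
    @get_task_info = 2,
    @get_additional_info = 1,
    @find_block_leaders = 1,
    @get_memory_info = 1,
    @return_schema = 1,
    @schema = @schema OUTPUT;

/* The generated script has a <table_name> token to swap out */
SET @schema = REPLACE(@schema, '<table_name>', @destination_table);

/* Only create today's table once */
IF OBJECT_ID(@destination_table) IS NULL
    EXEC (@schema);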
The full command I use for logging is this:
EXEC sp_WhoIsActive
    @get_transaction_info = 1,
    @get_outer_command = 1,
    @get_plans = 1,
    @get_task_info = 2,
    @get_additional_info = 1,
    @find_block_leaders = 1,
    @get_memory_info = 1;
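To actually land the results in that day's table, the same call gets a @destination_table parameter tacked on. Again, a sketch that assumes the naming from the block above:

DECLARE @destination_table varchar(4000) =
    'WhoIsActive_' + CONVERT(varchar(8), GETDATE(), 112);

EXEC sp_WhoIsActive
    @get_transaction_info = 1,
    @get_outer_command = 1,
    @get_plans = 1,
    @get_task_info = 2,
    @get_additional_info = 1,
    @find_block_leaders = 1,
    @get_memory_info = 1,
    @destination_table = @destination_table;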
Which means you do have to be on the latest version of sp_WhoIsActive. If you haven’t updated in a while, uh… I promise it’s a fast and painless process.
Just hit F5.
Thanks for reading!
Going Further
If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that, and need to solve database performance problems quickly. You can also get a quick, low cost health check with no phone time required.
Comments
I was reading through the script and saw the job is logging to the master database. What's the thought process behind that decision? I normally try to avoid putting anything in master other than diagnostic sprocs (sp_Blitz, Ola's backup scripts, sp_WhoIsActive, etc.).
Funny, that’s where Ola’s jobs log to as well. Probably for the same reason, too: everybody has one.
What about the deltas?
Wouldn't that be important from a workload perspective, to see what each session is doing when you have very expensive queries all competing for resources and all trying to process millions of records at a time? With that, I can see and show that the engine is working hard and that progress is being made.
Is there something better?
I've never found a lot of use for them when logging to a table, but I use them all the time when live troubleshooting.
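For anyone curious, deltas during live troubleshooting look something like this. Quick example, not part of the logging job:

/* Collect two samples 5 seconds apart so the *_delta columns
   show how much CPU, reads, and writes each session burned in that window */
EXEC sp_WhoIsActive
    @delta_interval = 5;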
Awesome idea, and I look forward to checking this out. I support SQL Server on Amazon Web Services (RDS), where you don't get to write to the master database anymore. So if you have to write to master, this is no bueno.
You can write to whatever database you want with the script. Just change it.
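For example, if I remember the documentation right, @destination_table takes a three-part name, so pointing the logging at a user database is a one-line change. The DBA database name here is just a placeholder:

DECLARE @destination_table varchar(4000) =
    'DBA.dbo.WhoIsActive_' + CONVERT(varchar(8), GETDATE(), 112);

/* Add the same logging parameters as above; only the landing spot changes */
EXEC sp_WhoIsActive
    @destination_table = @destination_table;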