Free SQL Server Performance Monitoring: How To Talk To Your Performance Data With The MCP Server
Summary
In this video, I delve into the built-in MCP server feature of my free SQL Server monitoring tool from Darling Data. This feature allows users to interact with their collected monitoring data in a conversational manner using language models like Claude or other compatible tools. The MCP server is designed for local use only and does not expose any data beyond what’s available within the performance monitor database, ensuring that no sensitive information is at risk. By enabling this feature, you can ask questions about your server’s performance directly to an LLM, receiving quick insights into issues such as deadlocks, query performance, and workload spikes. This approach simplifies the process of diagnosing problems without requiring extensive manual analysis or script writing.
Chapters
- *00:00:00* – Introduction
- *00:00:30* – Model Context Protocol Server
- *00:01:00* – Tight Binding to Tools
- *00:01:30* – Chatting with LLMs
- *00:02:00* – MCP Server Setup
- *00:02:30* – Using Different LLMs
- *00:03:05* – Asking Questions Directly
- *00:04:14* – Settings and Configuration
- *00:05:30* – Available Tools Overview
- *00:06:42* – Health Check Results
- *00:07:08* – Longest Running Query
- *00:08:43* – Performance Investigations
Full Transcript
Erik Darling here with Darling Data. In this video, I want to go over using the built-in MCP server, that is, a Model Context Protocol server, available in my free SQL Server monitoring tool that allows you to have a little bit of a chat with your collected monitoring data. It’s not allowed to go out and start running random ad hoc DMV queries. It’s not allowed to drop, delete, update, or merge anything. It’s very tightly bound to the tools that it has defined and available to it, and nothing else. I wanted to have this in there for people who are maybe not fully comfortable looking through monitoring tools, or digging into things and figuring out what was actually wrong and when, or people who have just gotten used to, over the last couple of years, having chats with LLMs about things and figuring things out with them. Because they’re not always right the first time. Sometimes it takes a little poking and prodding, and you have to nudge them a little bit, but you can usually get there. So this feature is opt-in. It only runs locally. It is not some big broadcast web server out there in the world; it’s just running on localhost on your machine. You have to turn it on. Again, I don’t want to alienate anyone who hates the robots, but if you find yourself using robots more and more in your day-to-day work, you can use this. Then you can start talking to your performance data with the tools that I’ve made available, which basically expose the tables and views in the performance monitor database if you’re on the full version, so only that database. And if you’re on the light version, it’s just talking to the local DuckDB database where all the data got collected.
So you have nice, pre-arranged time series data. The model knows the schema and everything, and it can just talk directly to that. It can look at everything in there and tell you some pretty neat things about it. There are a whole bunch of tools available for different things: discovery, waits, CPU, queries, memory, blocking. There are around 30 tools available to it across all the different data collectors, so you have a lot that you can look at and dig into. And what you have to do, and I’ll show you what this looks like, is enable the MCP server. Again, it’s not on by default; you turn it on. Depending on which version you’re on, the port might be different, but they’re unimportant ports. And then you have to go into the LLM of your choice. We are a strong Claude household. We’re a Diet Coke, Claude, Lagavulin household; we have strong allegiances. So this is what we’re doing. If you use something else, as long as the LLM that you use is MCP compatible, it’s fine. And then you just start asking questions about what’s been going on with your servers. You can ask it a lot of normal conversational questions. Like when someone says, “Oh, the server was slow yesterday.” And you have to go look at yesterday. Come on, give me a break. Yesterday? You’re telling me today? If you were to go do this on your own, it might take you five or more minutes, even if you’re lucky enough to have a free open source monitoring tool.
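For the curious, tool discovery like this is standardized by the Model Context Protocol: an MCP client asks the server what tools it offers with a JSON-RPC 2.0 `tools/list` request. Here’s a minimal sketch of what that request body looks like — the request shape comes from the MCP spec, while the endpoint and port are whatever the monitoring tool’s settings page shows, not values stated here:

```python
import json

# MCP uses JSON-RPC 2.0. A client discovers the read-only tools the
# server exposes (waits, CPU, queries, memory, blocking, etc.) by
# sending a tools/list request like this one.
tools_list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# Serialized, this is the body the client sends to the local MCP
# server. No SQL ever travels over this channel from the client side;
# the server only runs its own predefined tools.
body = json.dumps(tools_list_request)
print(body)
```

The response lists each tool’s name, description, and input schema, which is how the LLM knows what it’s allowed to ask for and nothing more.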
It might take you five or more minutes to go through all the charts and graphs, dig into stuff, take screenshots, make notes, pull things out, and do the normal outlining of things. But with Claude, you can ask it pretty quickly and get some pretty fast answers. It’s maybe not as fast as you would like, but it’s still reasonably quick. If you want to check this out, go to code.erikdarling.com.
It’s the performance monitor repo. And again, totally free, totally open source. If your company does want to get support or some contract guarantees with that, I have things available at training.erikdarling.com to do that. So let’s look real quick at the settings I’m talking about. I already have mine turned on. So this is the MCP server. If you go in here and you’re using Claude, you can copy the setup command right from here. It gives you the little one-line command to add the MCP tool. If you use a different LLM and you want to include the correct command to do it with whatever you use, you can submit it to the documentation. I just don’t use anything else. So that’s how you set that up. And I’ve pre-baked a little bit of a conversation with our dear friend Claude via the MCP server, because it’s not really a lot of fun sitting around watching an LLM spin its gears. You get bored. You’ll look at something else for a little while. I don’t know, gamble online or something. So I just started off: “Hey Claude, can you see my MCP server?” It was like, “Yeah, I see that. Look at that. It’s connected and working. You have seven monitored servers, but only SQL 2022 is available.” That’s fine. And then I asked Todd, Claude, not Todd. I don’t know who Todd is. Todd is not cool. “Can you list all the tools? Tell me what you have.” And then we have a list here of all of the available tools it has to look at, again, just your performance data, not anything else, just local to that. So don’t think that I’m sending queries off to other databases and collecting things.
And I’m going to, I don’t know, Ashley Madison you or something. But these are all the tools that we have out there. And this is just me mostly hitting tab, because there’s autocomplete stuff suggesting what I should do next. So I just said, “Yeah, run a health check on SQL 2022.” And it went, because I’d run a HammerDB TPC-H workload on SQL 2022, so you might see some fun spikes in there where the queries are running. It went through a whole bunch of the tools and said, “Well, right now things are normal. I ran through all sorts of things, and your server was idle most of the day, but you did have a big spike in there for 33 minutes.” Thirty-three minutes is exactly how long my TPC-H workload runs for. And it even says it looks like HammerDB. It knows. It’s like, “I got you, buddy.”
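When the LLM “runs a health check,” under the hood it issues `tools/call` requests against the MCP server, one per tool it wants. The method and params shape below come from the MCP spec; the tool name and argument are made up for illustration — the real names are whatever came back from `tools/list`:

```python
import json

# A tools/call request asks the MCP server to run one of its
# predefined tools. NOTE: "server_health_check" and the "server"
# argument are hypothetical placeholders, not the tool's real names.
health_check_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "server_health_check",       # hypothetical tool name
        "arguments": {"server": "SQL2022"},   # hypothetical argument
    },
}

print(json.dumps(health_check_request, indent=2))
```

Because the server only maps these calls onto its own fixed set of read-only queries, the LLM never gets to compose or run arbitrary SQL.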
It’ll also tell you about wait stats. It says, “Wow, yeah, 2,009 deadlocks today. That’s a lot. You might want to think about that.” It’ll give you a breakdown of what memory looks like, file IO, running jobs. So we’re all good there. And then it says there are some things you might want to think about: 2,009 deadlocks, a 33 minute spike, but you know, whatever. And then there was another prompt.
What was the longest running query in the HammerDB workload? And it comes back with some information: it was the TPC-H workload, and this was the longest running query. It gives you all the stuff we have collected about the query, all the stuff you would normally have to go look at and wait for a tooltip for. It’s all a nightmare otherwise. And you can say, “Okay, that’s cool. I’ll make a note of that. I’ll go work on query 21 later.” And then we can say, “Well, this is the next one. Dig into those 2,009 deadlocks.” And it did. I had a little boo-boo there with some JSON, but that’s okay. It tells you what happened: all 2,009 deadlocks are from the TPC-C workload. It tells you all the stuff that was going on there, and the two queries that were involved in the deadlock: new order and delivery. See, we like New Order too. It’s a good band, right? So new order takes some exclusive (X) locks on these, delivery takes some X locks on these, and they acquire page-level locks in the opposite order. And so we just get all this information.
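That “opposite order” detail is the whole story: it’s the classic deadlock pattern where each session holds one resource and waits on the other’s. Here’s a tiny Python sketch of the cycle — not T-SQL, purely illustrative, with two plain locks standing in for the page locks the two procedures take:

```python
import threading

lock_a = threading.Lock()  # stands in for the pages "new order" locks first
lock_b = threading.Lock()  # stands in for the pages "delivery" locks first

s1_holds_a = threading.Event()
result = {}

def session_1():
    with lock_a:                       # session 1 grabs resource A first...
        s1_holds_a.set()
        # ...then wants resource B, which session 2 already holds
        result["s1_got_b"] = lock_b.acquire(timeout=0.5)

lock_b.acquire()                       # session 2 grabs resource B first
t = threading.Thread(target=session_1)
t.start()
s1_holds_a.wait()
# ...then wants resource A, which session 1 holds: a lock cycle.
# SQL Server's deadlock monitor would pick a victim and kill it;
# here we just time out quickly to show neither side can proceed.
result["s2_got_a"] = lock_a.acquire(timeout=0.1)
t.join()
lock_b.release()
print(result["s1_got_b"], result["s2_got_a"])  # False False
```

The fix in either world is the same idea: make both code paths acquire the shared resources in the same order, and the cycle can never form.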
We can say, cool, I can take that information and do something actionable with it. So this is just a basic example of what you can do with the MCP servers built into my monitoring tools, so that you can do performance investigations without breaking a sweat, running a bunch of scripts, gathering a bunch of query plans and query texts, looking at 50 million different metrics, and trying to corroborate a whole bunch of stuff together. And again, if you’re uncomfortable doing that sort of thing, or even if you’re uncomfortable just looking at charts and graphs and figuring out when things were crappy, this is a very easy way to just talk to your data and get some answers about it. Anyway, thank you for watching. I hope you enjoyed yourselves, I hope you learned something, and I hope that you will try out my wonderful, free, open source SQL Server monitoring tool.
Again, that is at code.erikdarling.com, so I would encourage you to head over there and check it out. All right. Thank you for watching.
Going Further
If this is the kind of SQL Server stuff you love learning about, you’ll love my training. Blog readers get 25% off the Everything Bundle — over 100 hours of performance tuning content. Need hands-on help? I offer consulting engagements from targeted investigations to ongoing retainers. Want a quick sanity check before committing to a full engagement? Schedule a call — no commitment required.

