Software Vendor Mistakes With SQL Server: Overly Complicated Triggers

Necessary Evils

Triggers can be quite useful for performing work that cascading foreign keys are too simple to handle, but it’s very easy to overload them. I’ve run into cases where people had written what looked like an entire application’s worth of business logic into triggers.

Triggers that hit tables that fire other triggers that call stored procedures across servers in a while loop. You know. Developers 🐿

One very important thing to understand is that triggers always happen in a transaction, and will roll everything back unless you explicitly SET XACT_ABORT OFF; inside them. I’m not saying you should do that, at all; just that it’s an option.
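To make that concrete, here’s a sketch of a trigger whose error takes the whole statement down with it. The table, trigger, and column names are all made up:

```sql
/* Hypothetical example: dbo.Orders and its Status column are placeholders. */
CREATE OR ALTER TRIGGER dbo.Orders_BlockDelete
ON dbo.Orders
AFTER DELETE
AS
BEGIN
    SET NOCOUNT ON;

    /* Because the trigger runs inside the statement's transaction,
       this error rolls back the DELETE that fired it, too. */
    IF EXISTS (SELECT 1/0 FROM deleted AS d WHERE d.Status = N'Shipped')
        THROW 50000, N'Shipped orders cannot be deleted.', 1;
END;
```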

Bail Reform

There are a few things you should do early on in your triggers to let them bail out as soon as possible.

  • Check if ROWCOUNT_BIG() = 0
  • Check if there are rows in the inserted pseudo-table
  • Check if there are rows in the deleted pseudo-table

You’ll wanna do the check against ROWCOUNT_BIG() before any SET statements (like SET NOCOUNT ON;), because they’ll reset the counter to 0.

SELECT x = 1;
DECLARE @i int;
PRINT ROWCOUNT_BIG();
GO

SELECT x = 1;
SET NOCOUNT ON;
PRINT ROWCOUNT_BIG();
GO

The first will print 1, because DECLARE doesn’t reset the counter. The second will print 0, because SET NOCOUNT ON does. Though I suppose messing that up would be an interesting performance tuning bug for your triggers.
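Putting those bail-outs together, the top of a trigger might look like this sketch, where the table and trigger names are placeholders:

```sql
CREATE OR ALTER TRIGGER dbo.Orders_Audit
ON dbo.Orders
AFTER INSERT, UPDATE
AS
BEGIN
    /* Check this before any SET statement, which would reset the count to 0 */
    IF ROWCOUNT_BIG() = 0
        RETURN;

    SET NOCOUNT ON;

    /* Bail if there's nothing in the pseudo-tables to work with */
    IF NOT EXISTS (SELECT 1/0 FROM inserted)
        RETURN;

    /* ...actual trigger logic goes here... */
END;
```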

One bug I see in plenty of triggers, though…


Make sure your triggers are set up to handle multiple rows. Triggers fire once per statement, not once per row. So, sure, if your modification query is run in a cursor or loop and updates based on a single unique value, your trigger will fire for each of those single-row statements.

But if your modifications might hit multiple rows, then your trigger needs to be designed to handle them. And I don’t mean with a cursor or while loop. I mean by joining to the inserted or deleted pseudo-tables, depending on what your trigger needs to do.

Note that if your trigger is for an update or merge, you may need to check both inserted and deleted. Complicated things are complicated.
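As a sketch of the set-based approach, one join to the inserted pseudo-table handles one row or a million. The table and columns here are hypothetical:

```sql
/* Inside an AFTER UPDATE trigger on a hypothetical dbo.Orders table */
UPDATE o
   SET o.ModifiedDate = SYSDATETIME()
FROM dbo.Orders AS o
JOIN inserted AS i
    ON i.OrderID = o.OrderID;
```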

One more thing to ponder as we drift along through our trigger-writing extravaganza is that we need to be careful where we OUTPUT rows to. If you return them to a table variable or directly to the client, you’ll end up with a fully single-threaded execution plan.

You’ll wanna dump them to a #temp table or a real table to avoid that, if your triggers are being asked to handle a deluge of rows. For smaller numbers of rows, you’re unlikely to notice that being an issue.
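A sketch of what that looks like, again with made-up table and column names:

```sql
/* Capture what we deleted in a #temp table,
   rather than a table variable, to leave parallelism on the table */
CREATE TABLE #DeletedOrders (OrderID int NOT NULL);

DELETE o
OUTPUT deleted.OrderID
INTO #DeletedOrders (OrderID)
FROM dbo.Orders AS o
WHERE o.Status = N'Canceled';
```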

Know When To Say END;

The longer and more complicated your trigger becomes, the harder it will be to troubleshoot performance issues with it. Since triggers are “part” of whatever your modification queries do, you can end up with locks being taken and held for far longer than intended if there’s a lot of busy work done in them.

In much the same way Bloggers Of The World™ will warn you to index your foreign keys appropriately, you need to make sure that any actions performed in your triggers are appropriately indexed for, too. They’re not so different, in that regard.

Separating triggers into specific functions and duties can be helpful, but make sure that you set the correct order of execution, if you need them to happen in a specific order.
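If you do split things up, sp_settriggerorder lets you pin which trigger fires first and last for a given statement type. Only First and Last can be set; anything in between fires in undefined order. The trigger name here is made up:

```sql
EXEC sys.sp_settriggerorder
    @triggername = N'dbo.Orders_Audit',
    @order = N'First',
    @stmttype = N'UPDATE';
```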

Thanks for reading!

Going Further

If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

One thought on “Software Vendor Mistakes With SQL Server: Overly Complicated Triggers”

  1. I always start my triggers with

    IF ROWCOUNT_BIG() = 0 OR CONTEXT_INFO() = 0x1000 RETURN;

    The first part should be clear: if someone runs an UPDATE where nothing was changed, I can bypass all of the trigger logic.

    The check on CONTEXT_INFO() allows me to “disable” the trigger for maintenance tasks, just for my current session, without really disabling it (so I can’t forget to reenable it, and the users can still work with the app while I do my own triggerless stuff). The only thing I have to do in my maintenance script is to execute a
    SET CONTEXT_INFO 0x1000;
    where the 0x1000 is just a random (not a “magical”) number I decided to use. Of course, someone else could use other or different numbers to disable only parts of the trigger. To “reenable” the trigger in my session I just run
    SET CONTEXT_INFO 0x0;
    or I simply close the SSMS window (this ends my session and deletes the context_info value).

    BTW: be careful with INSTEAD OF triggers – in this case you may not want to simply RETURN (when CONTEXT_INFO() = 0x1000), but instead execute the insert/update/delete explicitly in the trigger.
