People still tell me things like “I only put 100 rows in table variables”, and think that’s the only consideration for their use.
There are definitely times when table variables can be better, but 100 rows is meaningless.
Even if you put one row in a table variable it can fudge up performance because SQL Server doesn’t know what’s in your table variable. That’s still true in SQL Server 2019, even if the optimizer knows how many rows are in your table variable.
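Here's a minimal sketch of what I mean. The query is contrived, and I'm using `sys.messages` just because it exists everywhere; the point is that the optimizer has no column statistics for the table variable, no matter how many rows it holds.

```sql
DECLARE @t table (id int NOT NULL);

INSERT @t (id)
SELECT TOP (100) m.message_id
FROM sys.messages AS m;

/* Before SQL Server 2019, the scan of @t is estimated at 1 row.
   With 2019's deferred compilation, the row count is right, but
   there are still no column statistics on id, so joins and filters
   against it can get bad estimates either way. */
SELECT *
FROM @t AS t
JOIN sys.messages AS m
    ON m.message_id = t.id
OPTION (RECOMPILE); /* even a recompile doesn't add column statistics */
```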
The problem that you can run into, even with just getting 100 rows into a table variable, is that it might take a lot of work to get those 100 rows.
I’ve blogged before about workarounds for this problem, but the issue remains that inserts, updates, and deletes against table variables aren’t naturally allowed to go parallel.
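To see the difference for yourself, compare the actual plans for these two inserts. The query is a throwaway example, and the parallel insert into the #temp table assumes SQL Server 2016+ at compatibility level 130 or higher with the `TABLOCK` hint; the insert into the table variable runs serially no matter what.

```sql
DROP TABLE IF EXISTS #t;
CREATE TABLE #t (id int NOT NULL);

/* eligible for a parallel insert */
INSERT #t WITH (TABLOCK) (id)
SELECT m.message_id
FROM sys.messages AS m;

DECLARE @tv table (id int NOT NULL);

/* the insert portion of this plan is forced to run serially */
INSERT @tv (id)
SELECT m.message_id
FROM sys.messages AS m;
```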
The reason why is a bit of a mystery to me, since table variables are all backed by temp tables anyway. If you run this code locally, you’ll see what I mean:
```sql
SET NOCOUNT ON;
SET STATISTICS IO ON;

DECLARE @t table (id int);

SELECT *
FROM @t AS t;

SET STATISTICS IO OFF;
```
Over in the messages tab you’ll see something like this:
Table '#B7A53B3E'. Scan count 1, logical reads 0, physical reads 0, page server reads 0, read-ahead reads 0, page server read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob page server reads 0, lob read-ahead reads 0, lob page server read-ahead reads 0.
Now, look, I’m not asking for update or delete portions of the query plan to go parallel, but it might be nice if other child operators could go parallel. That’s how things go with regular tables and #temp tables. It would be nice if inserts could go parallel, but hey…
Fixing this would solve a problem I see often, usually in vendor code, where the choice of temporary object came down to individual developer preference, or someone fell for the meme that table variables are "in memory" or something. Maybe the choice was immaterial at first, with low data volumes, and performance slowly degraded over time.
When I'm allowed to change things, it's easy enough to replace @table variables with #temp tables, or to use a workaround like the one in the post linked above to improve performance. When I'm not, clients are often left begging vendors to make changes, and vendors often aren't receptive.
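When that change is on the table, it's usually a near-mechanical swap. The shape below is a hypothetical sketch, not anyone's actual vendor code:

```sql
/* Before: no statistics, serial modifications */
DECLARE @ids table (id int NOT NULL PRIMARY KEY);

/* After: same definition, but the optimizer gets real statistics,
   and queries that populate and read it can use parallelism */
CREATE TABLE #ids (id int NOT NULL PRIMARY KEY);
```

The definition barely changes; what you get back is everything the optimizer can do with a real statistics-bearing object.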
Thanks for reading!
If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.
- Local Variables Get Bad Cardinality Estimates In Cursors In SQL Server, Too
- Things SQL Server vNext Should Address: Local Variable Estimates
- Why Expressions Are Better Than Local Variables For Performance In SQL Server Queries
- Things SQL Server vNext Should Address: Common Table Expression Materialization