Dynamic Data Unmasking In SQL Server

Dynamic data masking is a SQL Server 2016 feature that masks sensitive data at the column level from non-privileged users. Hiding SSNs is a common example in the documentation. However, the documentation also gives the following warning:

The purpose of dynamic data masking is to limit exposure of sensitive data, preventing users who should not have access to the data from viewing it. Dynamic data masking does not aim to prevent database users from connecting directly to the database and running exhaustive queries that expose pieces of the sensitive data.

How bad can it be? This post explores how quickly a table of SSNs can be unmasked by a non-privileged user.

Simple Demo


Let’s use a table structure very similar to the example in the documentation:

DROP TABLE IF EXISTS dbo.People;
CREATE TABLE dbo.People
(
    PersonID BIGINT PRIMARY KEY,
    FirstName VARCHAR(100) NOT NULL,
    LastName VARCHAR(100) NOT NULL,
    SSN VARCHAR(11) MASKED WITH (FUNCTION = 'default()') NULL
);
INSERT INTO dbo.People
VALUES
(1, 'Pablo', 'Blanco', '123-45-6789');

Here’s what the data looks like to a privileged user, such as one connecting as sa:

a12_sa_results

However, if I log in with my lowly erik SQL Server login, I can no longer see Pablo Blanco’s SSN:

a12_erik_results

Test Data


To make things more interesting let’s load a million rows into the table. SSNs will be randomized but I didn’t bother randomizing the first and last names.

DROP TABLE IF EXISTS dbo.People;
CREATE TABLE dbo.People
(
    PersonID BIGINT PRIMARY KEY,
    FirstName VARCHAR(100) NOT NULL,
    LastName VARCHAR(100) NOT NULL,
    SSN VARCHAR(11) MASKED WITH (FUNCTION = 'default()') NULL
);
INSERT INTO dbo.People WITH (TABLOCK)
SELECT TOP (1000000)
       ROW_NUMBER() OVER (ORDER BY (SELECT NULL)),
       REPLICATE('A', 10),
       REPLICATE('Z', 12),
       RIGHT('000' + CAST(ABS(CHECKSUM(NEWID())) AS VARCHAR(11)), 3) + '-'
       + RIGHT('00' + CAST(ABS(CHECKSUM(NEWID())) AS VARCHAR(11)), 2) + '-'
       + RIGHT('0000' + CAST(ABS(CHECKSUM(NEWID())) AS VARCHAR(11)), 4)
FROM master..spt_values AS t1
    CROSS JOIN master..spt_values AS t2;

How quickly can the malicious end user erik decode all of the data? Does he really require a set of exhaustive queries? To make things somewhat realistic, setting trace flags and creating objects is off limits. Only temp tables can be created, since all users can do that.

Decoding the SSN Format


The WHERE clause of queries can be used to infer information about the data. For example, the following query is protected by data masking because all of the action is in the SELECT clause:

SELECT PersonId,
       FirstName,
       LastName,
       CASE LEFT(SSN, 1)
           WHEN '0' THEN
               '0'
           WHEN '1' THEN
               '1'
           WHEN '2' THEN
               '2'
           WHEN '3' THEN
               '3'
           WHEN '4' THEN
               '4'
           WHEN '5' THEN
               '5'
           WHEN '6' THEN
               '6'
           WHEN '7' THEN
               '7'
           WHEN '8' THEN
               '8'
           WHEN '9' THEN
               '9'
           ELSE
               NULL
       END AS D1
FROM dbo.People;

However, the following query will only return the subset of rows with 1 as the first digit in their SSNs:

SELECT PersonId,
       FirstName,
       LastName
FROM dbo.People
WHERE LEFT(SSN, 1) = 1;

With 90 queries we could get all of the information that we need, but that’s too much work. First we need to verify the format of the SSN in the column. Perhaps it has dashes and perhaps it doesn’t. Let’s say that our malicious end user gets lucky and both of the following queries return a count of one million rows:

SELECT COUNT(*)
FROM dbo.People
WHERE LEN(SSN) = 11;

SELECT COUNT(*)
FROM dbo.People
WHERE LEN(REPLACE(SSN, '-', '')) = 9;

It’s a reasonable assumption that the SSN is in an XXX-XX-XXXX format, even though the data mask doesn’t tell us that directly.
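If we want more certainty about where the dashes sit, a quick extra check along the same lines works too:

SELECT COUNT(*)
FROM dbo.People
WHERE SUBSTRING(SSN, 4, 1) = '-'
      AND SUBSTRING(SSN, 7, 1) = '-';

If that also returns one million rows then the format is confirmed.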

Looping to Victory


Armed with our new knowledge, we can create a single SQL query that decodes all of the SSNs. The strategy is to define a single CTE with all ten digits and to use one CROSS APPLY for each digit in the SSN. Each CROSS APPLY only references the SSN column in the WHERE clause and returns the matching prefix of the SSN that we’ve found so far. Here’s a snippet of the code:

SELECT p.PersonID,
       d9.real_ssn
FROM dbo.People AS p
    CROSS APPLY
(
    SELECT TOP 1
           d0.DIGIT
    FROM DIGITS AS d0
    WHERE p.SSN LIKE d0.DIGIT + '%'
) AS d1(prefix)
    CROSS APPLY
(
    SELECT TOP 1
           d1.prefix + d0.DIGIT
    FROM DIGITS AS d0
    WHERE p.SSN LIKE d1.prefix + d0.DIGIT + '%'
) AS d2(prefix);

In the d1 derived table the first digit is found. That digit is passed to the d2 derived table and the first two digits are returned from d2. This continues all the way to d9 which has the full SSN. The full query is below:

DROP TABLE IF EXISTS #t;
WITH DIGITS (DIGIT)
AS (SELECT *
    FROM
    (
        VALUES
            ('0'),
            ('1'),
            ('2'),
            ('3'),
            ('4'),
            ('5'),
            ('6'),
            ('7'),
            ('8'),
            ('9')
    ) AS v (x) )
SELECT p.PersonID,
       p.FirstName,
       p.LastName,
       d9.real_ssn
INTO #t
FROM dbo.People AS p
    CROSS APPLY
(
    SELECT TOP 1
           d0.DIGIT
    FROM DIGITS AS d0
    WHERE p.SSN LIKE d0.DIGIT + '%'
) AS d1(prefix)
    CROSS APPLY
(
    SELECT TOP 1
           d1.prefix + d0.DIGIT
    FROM DIGITS AS d0
    WHERE p.SSN LIKE d1.prefix + d0.DIGIT + '%'
) AS d2(prefix)
    CROSS APPLY
(
    SELECT TOP 1
           d2.prefix + d0.DIGIT + '-'
    FROM DIGITS AS d0
    WHERE p.SSN LIKE d2.prefix + d0.DIGIT + '%'
) AS d3(prefix)
    CROSS APPLY
(
    SELECT TOP 1
           d3.prefix + d0.DIGIT
    FROM DIGITS AS d0
    WHERE p.SSN LIKE d3.prefix + d0.DIGIT + '%'
) AS d4(prefix)
    CROSS APPLY
(
    SELECT TOP 1
           d4.prefix + d0.DIGIT + '-'
    FROM DIGITS AS d0
    WHERE p.SSN LIKE d4.prefix + d0.DIGIT + '%'
) AS d5(prefix)
    CROSS APPLY
(
    SELECT TOP 1
           d5.prefix + d0.DIGIT
    FROM DIGITS AS d0
    WHERE p.SSN LIKE d5.prefix + d0.DIGIT + '%'
) AS d6(prefix)
    CROSS APPLY
(
    SELECT TOP 1
           d6.prefix + d0.DIGIT
    FROM DIGITS AS d0
    WHERE p.SSN LIKE d6.prefix + d0.DIGIT + '%'
) AS d7(prefix)
    CROSS APPLY
(
    SELECT TOP 1
           d7.prefix + d0.DIGIT
    FROM DIGITS AS d0
    WHERE p.SSN LIKE d7.prefix + d0.DIGIT + '%'
) AS d8(prefix)
    CROSS APPLY
(
    SELECT TOP 1
           d8.prefix + d0.DIGIT
    FROM DIGITS AS d0
    WHERE p.SSN LIKE d8.prefix + d0.DIGIT + '%'
) AS d9(real_ssn);

On my machine, this query takes an average of 5952 ms to finish. Here’s a sample of the results:

a12_sample_results

Not bad to unmask one million SSNs.

Looping Even Faster to Victory


The LIKE operator is a bit heavy for what we’re doing. Another way to approach the problem is to have each derived table just focus on a single digit and to concatenate them all together at the end. I found SUBSTRING to be the fastest way to do this. The full query is below:

DROP TABLE IF EXISTS #t;
WITH DIGITS (DIGIT)
AS (SELECT *
    FROM
    (
        VALUES
            ('0'),
            ('1'),
            ('2'),
            ('3'),
            ('4'),
            ('5'),
            ('6'),
            ('7'),
            ('8'),
            ('9')
    ) AS v (x) )
SELECT p.PersonID,
       p.FirstName,
       p.LastName,
       d1.DIGIT + d2.DIGIT + d3.DIGIT + '-' + d4.DIGIT + d5.DIGIT + '-' + d6.DIGIT + d7.DIGIT + d8.DIGIT + d9.DIGIT AS real_ssn
INTO #t
FROM dbo.People AS p
    CROSS APPLY
(
    SELECT TOP 1
           d0.DIGIT
    FROM DIGITS AS d0
    WHERE SUBSTRING(p.SSN, 1, 1) = d0.DIGIT
) AS d1(DIGIT)
    CROSS APPLY
(
    SELECT TOP 1
           d0.DIGIT
    FROM DIGITS AS d0
    WHERE SUBSTRING(p.SSN, 2, 1) = d0.DIGIT
) AS d2(DIGIT)
    CROSS APPLY
(
    SELECT TOP 1
           d0.DIGIT
    FROM DIGITS AS d0
    WHERE SUBSTRING(p.SSN, 3, 1) = d0.DIGIT
) AS d3(DIGIT)
    CROSS APPLY
(
    SELECT TOP 1
           d0.DIGIT
    FROM DIGITS AS d0
    WHERE SUBSTRING(p.SSN, 5, 1) = d0.DIGIT
) AS d4(DIGIT)
    CROSS APPLY
(
    SELECT TOP 1
           d0.DIGIT
    FROM DIGITS AS d0
    WHERE SUBSTRING(p.SSN, 6, 1) = d0.DIGIT
) AS d5(DIGIT)
    CROSS APPLY
(
    SELECT TOP 1
           d0.DIGIT
    FROM DIGITS AS d0
    WHERE SUBSTRING(p.SSN, 8, 1) = d0.DIGIT
) AS d6(DIGIT)
    CROSS APPLY
(
    SELECT TOP 1
           d0.DIGIT
    FROM DIGITS AS d0
    WHERE SUBSTRING(p.SSN, 9, 1) = d0.DIGIT
) AS d7(DIGIT)
    CROSS APPLY
(
    SELECT TOP 1
           d0.DIGIT
    FROM DIGITS AS d0
    WHERE SUBSTRING(p.SSN, 10, 1) = d0.DIGIT
) AS d8(DIGIT)
    CROSS APPLY
(
    SELECT TOP 1
           d0.DIGIT
    FROM DIGITS AS d0
    WHERE SUBSTRING(p.SSN, 11, 1) = d0.DIGIT
) AS d9(DIGIT);

 

This query runs in an average of 1833 ms on my machine. The query plan looks as you might expect. Each cross apply is implemented as a parallel nested loop join against a constant scan of 10 values. On average each constant scan operator produces roughly 5.5 million rows. This makes sense, since for each loop we’ll need to check an average of 5.5 values before finding a match, assuming perfectly distributed random digits. Here’s a representative part of the plan:

a12_query1

Letting SQL Server do the Work


With nine digits we end up reading almost 50 million values from the constant scan operators. That’s a lot of work. Can we write a simpler query and let SQL Server do the work for us? We know that SSNs are always numeric, so if we had a table full of all one billion possible SSNs then we could join to it and just keep the value from the table. Populating a temp table with a billion rows would take too long, but we can split the SSN into its natural three parts and join each part to a much smaller table. One way to do this is below:

SELECT TOP (100)
       RIGHT('0' + CAST(t.RN AS VARCHAR(10)), 2) AS NUM
INTO #t_100
FROM
(
    SELECT -1 + ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS RN
    FROM master..spt_values AS t1
        CROSS JOIN master..spt_values AS t2
) AS t;
SELECT TOP (1000)
       RIGHT('00' + CAST(t.RN AS VARCHAR(10)), 3) AS NUM
INTO #t_1000
FROM
(
    SELECT -1 + ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS RN
    FROM master..spt_values AS t1
        CROSS JOIN master..spt_values AS t2
) AS t;
SELECT TOP (10000)
       RIGHT('000' + CAST(t.RN AS VARCHAR(10)), 4) AS NUM
INTO #t_10000
FROM
(
    SELECT -1 + ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS RN
    FROM master..spt_values AS t1
        CROSS JOIN master..spt_values AS t2
) AS t;
DROP TABLE IF EXISTS #t;
SELECT p.PersonID,
       p.FirstName,
       p.LastName,
       CONCAT(t1000.NUM, '-', t100.NUM, '-', t10000.NUM) AS SSN
INTO #t
FROM dbo.People AS p
    LEFT OUTER JOIN #t_1000 AS t1000
        ON SUBSTRING(p.SSN, 1, 3) = t1000.NUM
    LEFT OUTER JOIN #t_100 AS t100
        ON SUBSTRING(p.SSN, 5, 2) = t100.NUM
    LEFT OUTER JOIN #t_10000 AS t10000
        ON SUBSTRING(p.SSN, 8, 4) = t10000.NUM;

The query now runs in an average of 822 ms. Note that I didn’t try very hard to optimize the inserts into the temp tables because they finish almost instantly. Taking a look at the plan, we see a lot of repartition streams operators because the join column is different for each hash join:

a12_repartition

Can we go faster?

Batch Mode to the Rescue


With parallel batch mode hash joins we don’t need to repartition the streams of the larger outer result set. I changed the query to only look at the table with 10000 rows to get more consistent and even parallel row distribution on the temp tables. I also added a clustered index on the temp table for the same reason. In addition to that, maybe we can expect joins to be faster with INT join columns as opposed to VARCHAR. With the canonical #BATCH_MODE_PLZ temp table to make the query eligible for batch mode, the query now looks like this:

SELECT TOP (10000)
       ISNULL(CAST(t.RN AS INT), 0) AS NUM
INTO #t_10000
FROM
(
    SELECT -1 + ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS RN
    FROM master..spt_values AS t1
        CROSS JOIN master..spt_values AS t2
) AS t;
CREATE CLUSTERED INDEX CI ON #t_10000 (NUM);

CREATE TABLE #BATCH_MODE_PLZ
(
    I INT,
    INDEX C CLUSTERED COLUMNSTORE
);

DROP TABLE IF EXISTS #t;
SELECT p.PersonID,
       p.FirstName,
       p.LastName,
       CONCAT(t1000.NUM, '-', t100.NUM, '-', t10000.NUM) AS SSN
INTO #t
FROM dbo.People AS p
    LEFT OUTER JOIN #t_10000 AS t1000
        ON CAST(SUBSTRING(p.SSN, 1, 3) AS INT) = t1000.NUM
    LEFT OUTER JOIN #t_10000 AS t100
        ON CAST(SUBSTRING(p.SSN, 5, 2) AS INT) = t100.NUM
    LEFT OUTER JOIN #t_10000 AS t10000
        ON CAST(SUBSTRING(p.SSN, 8, 4) AS INT) = t10000.NUM
    LEFT OUTER JOIN #BATCH_MODE_PLZ
        ON 1 = 0;

The query now runs in an average of 330 ms. The repartition streams operators are no longer present:

a12_no_repart

It wasn’t clear to me how to speed this query up further. The probe residuals in the hash joins are one target:

a12_probe

These appear because SQL Server cannot guarantee that hash collisions won’t occur. Paul White points out the following:

If the join is on a single column typed as TINYINT, SMALLINT or INTEGER and if both columns are constrained to be NOT NULL, the hash function is ‘perfect’ – meaning there is no chance of a hash collision, and the query processor does not have to check the values again to ensure they really match.

Unfortunately, the probe residual remains even with the right temp table definition and adding explicit casts and non-null guarantees to the SUBSTRING expression. Perhaps the type information is lost in the plan and cannot be taken advantage of.
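Here’s a sketch of the kind of rewrite I mean, with an INT NOT NULL temp table column and a null-guarded cast on the probe side. It’s an illustration of the attempt rather than the exact code, and the probe residual still shows up:

DROP TABLE IF EXISTS #t_int;
SELECT TOP (10000)
       ISNULL(CAST(t.RN AS INT), 0) AS NUM
INTO #t_int
FROM
(
    SELECT -1 + ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS RN
    FROM master..spt_values AS t1
        CROSS JOIN master..spt_values AS t2
) AS t;
CREATE CLUSTERED INDEX CI ON #t_int (NUM);

SELECT p.PersonID
FROM dbo.People AS p
    LEFT OUTER JOIN #t_int AS t10000
        ON ISNULL(CAST(SUBSTRING(p.SSN, 8, 4) AS INT), 0) = t10000.NUM;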

Final Thoughts


I don’t think that there’s really anything new here. This was mostly done for fun. Decoding a million SSNs in under half a second is a neat trick and a good reminder to be very careful about how much security dynamic data masking really gives you.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

The SQL Server Trillion Row Table

I loaded one trillion rows into a nonpartitioned table just to see what would happen. Spoiler: bad things happen.

Hardware and Table Plan


I did all of my testing on my home desktop, which has an i5-4670 CPU (quad core), 5 GB of RAM for SQL Server, and a 1 TB Samsung 850 EVO SSD. Obviously that’s not ideal hardware for testing like this, but it was enough to get the job done. In order to fit a trillion rows into a table I had to use a clustered columnstore index (CCI). Rowstore tables simply don’t offer good enough compression. For example, consider a page compressed heap with nothing but NULL values. One million rows take up 10888 KB of disk space:

DROP TABLE IF EXISTS dbo.TEST_HEAP;
CREATE TABLE dbo.TEST_HEAP (
ID BIGINT
) WITH (DATA_COMPRESSION = PAGE);

INSERT INTO dbo.TEST_HEAP WITH (TABLOCK)
SELECT TOP (1000000) NULL
FROM master..spt_values t1
CROSS JOIN master..spt_values t2
OPTION (MAXDOP 1);

EXEC sp_spaceused 'TEST_HEAP';

Therefore, such a table with a trillion rows would require around 10 TB of disk space. That won’t fit on a 1 TB SSD, but the better compression of CCIs gives us some options. Ultimately I decided on building completely full rowgroups of 1048576 rows with each rowgroup only storing a single value. Estimating space for the final table is difficult, but we can hopefully get an upper bound using the following code:

DROP TABLE IF EXISTS dbo.TRIAL_BALLOON;
CREATE TABLE dbo.TRIAL_BALLOON (
ID BIGINT,
INDEX CCI_TRIAL_BALLOON CLUSTERED COLUMNSTORE
) 

INSERT INTO dbo.TRIAL_BALLOON WITH (TABLOCK)
SELECT TOP (10 * 1048576) NULL
FROM master..spt_values t1
CROSS JOIN master..spt_values t2
CROSS JOIN master..spt_values t3
OPTION (MAXDOP 1);

EXEC sp_spaceused 'TRIAL_BALLOON';

The table has 776 KB of reserved space and 104 KB of space for data. Scaling that ten-rowgroup sample up to the roughly 953675 rowgroups needed for a trillion rows, in the worst case we might expect the final table to require around 75 GB of space on disk.

Serial Population Strategy


My strategy for populating the table was to run the same code in four different SQL Server sessions. Each session grabs the next ID from a sequence and does a MAXDOP 1 insert of 1048576 rows into the CCI. With only four concurrent sessions I didn’t expect to run into any locking issues such as the mysterious ROWGROUP_FLUSH wait event. The sequence definition is about as simple as it gets:

DROP SEQUENCE IF EXISTS dbo.CCI_Sequence;
CREATE SEQUENCE dbo.CCI_Sequence AS BIGINT
START WITH 1
INCREMENT BY 1
NO CACHE;

Here’s the code that I used to add rowgroups to the table:

ALTER SERVER CONFIGURATION
SET PROCESS AFFINITY CPU=0;

DECLARE @i BIGINT

SET NOCOUNT ON;

IF EXISTS (SELECT 1 FROM ##stop_table)
BEGIN
	SET @i = 9999999999999;
END
ELSE
BEGIN
	SELECT @i = NEXT VALUE FOR dbo.CCI_Sequence;
END; 

WHILE @i <= 953674
BEGIN
	WITH NUM (n) AS (
	SELECT n
	FROM
	(
	VALUES
	 (@i),(@i),(@i),(@i),(@i),(@i),(@i),(@i)
	,(@i),(@i),(@i),(@i),(@i),(@i),(@i),(@i)
	,(@i),(@i),(@i),(@i),(@i),(@i),(@i),(@i)
	,(@i),(@i),(@i),(@i),(@i),(@i),(@i),(@i)
	) v(n)
	)
	INSERT INTO dbo.BIG_DATA
	SELECT n1.n
	FROM NUM n1
	CROSS JOIN NUM n2
	CROSS JOIN NUM n3
	CROSS JOIN NUM n4
	OPTION (MAXDOP 1);

	IF EXISTS (SELECT 1 FROM ##stop_table)
	BEGIN
		SET @i = 9999999999999;
	END
	ELSE
	BEGIN
		SELECT @i = NEXT VALUE FOR dbo.CCI_Sequence;
	END;
END;

The affinity stuff was to get the work spread out evenly over my four schedulers. Each session was assigned to a different CPU from 0-3. It was also important to run the following in a new session after all four sessions started working:

ALTER SERVER CONFIGURATION
SET PROCESS AFFINITY CPU=AUTO;

It wasn’t clear to me why that was needed to get good throughput. Perhaps part of the work of building a compressed rowgroup is offloaded to a system process?

The references to the ##stop_table temp table are just a way to pause the work as needed without skipping numbers in the sequence. I think that the code to generate 1048576 rows is fairly optimized. I did try to optimize it since this code was going to be run over 950000 times, but I still suspect that there was a better way to do it that I missed.
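The stop table itself isn’t shown above, so here’s a minimal sketch of what it could look like; the exact column doesn’t matter since the loops only check whether any row exists:

CREATE TABLE ##stop_table (StopSignal INT NOT NULL);

-- To pause all sessions after their current insert finishes:
-- INSERT INTO ##stop_table VALUES (1);

-- To resume, empty the table and rerun the loop scripts:
-- TRUNCATE TABLE ##stop_table;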

The jobs finished after about 2 days of running on 4 CPUs. That’s a rate of around 86 million rows per core per minute which I was pretty happy with. After the sessions finished I needed to do one final, very nervous, insert into the table:

INSERT INTO dbo.BIG_DATA
SELECT TOP (331776) 953675
FROM master..spt_values t1
CROSS JOIN master..spt_values t2
OPTION (MAXDOP 1);

Running sp_spaceused has never felt so satisfying:

a11_trillion

Under a GB for one trillion rows. Not bad.

Parallel Population Strategy


Taking advantage of natural parallelism within SQL Server may also be a viable method to populate a trillion row table (especially on an actual server), but I didn’t test it fully. In order to get perfect rowgroups, you need round robin parallelism with a driving table that’s a multiple of the query’s MAXDOP. For example, here’s a suitable plan:

a11_CCI_parallel_insert

The Constant Scan contains exactly four rows because I’m running MAXDOP 4. The source CCI has exactly 1048576 rows. The parallel insert happens without a repartition streams so we end up with exactly 1048576 rows on each thread and in each compressed rowgroup. Below is one way to generate such a plan:

DROP TABLE IF EXISTS dbo.SOURCE_CCI;
CREATE TABLE dbo.SOURCE_CCI (
ID BIGINT,
INDEX CCI_SOURCE_CCI CLUSTERED COLUMNSTORE
);

INSERT INTO dbo.SOURCE_CCI WITH (TABLOCK)
SELECT TOP (1048576) 0
FROM master..spt_values t1
CROSS JOIN master..spt_values t2
OPTION (MAXDOP 1);

DROP TABLE IF EXISTS dbo.TRIAL_BALLOON;
CREATE TABLE dbo.TRIAL_BALLOON (
ID BIGINT,
INDEX CCI_TRIAL_BALLOON CLUSTERED COLUMNSTORE
);

INSERT INTO dbo.TRIAL_BALLOON WITH (TABLOCK)
SELECT  driver.n
FROM (
SELECT TOP (4) v.n
FROM (
	VALUES
		(1),(2),(3),(4)
	) v(n)
) driver
INNER JOIN dbo.SOURCE_CCI sc ON sc.ID < driver.n
OPTION (MAXDOP 4, NO_PERFORMANCE_SPOOL);

With nothing else running on my machine I estimate that this method would take about 2 days to complete. The problem is that if one core is busy with something else, such as watching terrible YouTube videos, then the entire insert could be slowed down.

Updating Stats


To do anything interesting on a table we want statistics. Gathering statistics for extremely compressed data can be challenging in SQL Server. That is because the target sampled rate is based on the total size of the table as opposed to the number of rows in the table. Consider an 8 billion row table built in the same way as the one trillion row table above. SQL Server generates the following query to gather sampled stats against the table:

SELECT StatMan([SC0])
FROM (
SELECT TOP 100 PERCENT [ID] AS [SC0]
FROM [dbo].[BIG_DATA] WITH (READUNCOMMITTED)
ORDER BY [SC0]
) AS _MS_UPDSTATS_TBL
OPTION (MAXDOP 1)

You may notice the lack of TABLESAMPLE as well as the MAXDOP 1 hint. To gather sampled stats SQL Server will get all eight billion rows from the table, sort them, and build the statistics object using the eight billion rows. On my machine, this took almost 3 hours to complete and tempdb grew to 85 GB.
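For reference, kicking off sampled stats gathering can be as simple as the command below; the statistics name is just a placeholder:

CREATE STATISTICS Stats_ID ON dbo.BIG_DATA (ID);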

There is a trick to get more reasonable sampled stats. All that’s needed is to increase the table size while keeping the same data. Soft deletion of compressed rowgroups is a good way to accomplish this. First find a data distribution that doesn’t compress well in CCIs. Here’s one example:

SELECT 9999999 + SUM(RN) OVER (ORDER BY RN)
FROM (
	SELECT TOP (1048576) ROW_NUMBER()
		OVER (ORDER BY (SELECT NULL)) RN
	FROM master..spt_values t1
	CROSS JOIN master..spt_values t2
) t
OPTION (MAXDOP 1);

Each rowgroup has a size of 8392 KB, so adding 100 rowgroups will add 839200 KB to the table. Deleting all of the newly added rows can take a little while and will log quite a bit to the transaction log, but the table size won’t change. Gathering sampled stats after the insert and delete took just a few seconds. The sample size was about 1% of the table. After a REORG the fully deleted rowgroups will be marked as TOMBSTONE and cleaned up by a background process.
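Here’s a hedged sketch of what the insert and delete could look like, reusing the distribution query above. It’s meant as an illustration rather than the exact code:

DECLARE @loop INT = 0;
WHILE @loop < 100
BEGIN
	INSERT INTO dbo.BIG_DATA WITH (TABLOCK)
	SELECT 9999999 + SUM(RN) OVER (ORDER BY RN)
	FROM (
		SELECT TOP (1048576) ROW_NUMBER()
			OVER (ORDER BY (SELECT NULL)) RN
		FROM master..spt_values t1
		CROSS JOIN master..spt_values t2
	) t
	OPTION (MAXDOP 1);

	SET @loop = @loop + 1;
END;

-- Soft delete the padding rows. The real data is all 9999999 or less,
-- so this only removes the rows added above.
DELETE FROM dbo.BIG_DATA
WHERE ID > 9999999;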

For the one trillion row table I decided to roll the dice and go for sampled stats without any tricks. I gave tempdb a maximum size of 280 GB in order to not completely fill my hard drive. The stats update took 3 hours and 44 minutes. Surprisingly, the stat update grew tempdb to its maximum size but it didn’t fail. Perhaps I got very lucky. Here is the hard earned stats object:

a11_stats_object

Expected Query Performance


I expected reasonably fast query performance for queries designed to take advantage of rowgroup elimination. After all, the table was built in a way such that every compressed rowgroup only has a single value. I can get a count of rowgroups along with some other metadata in 20 seconds using the DMVs:

SELECT COUNT(*), MIN(css.min_data_id), MAX(css.max_data_id)
FROM sys.objects o
INNER JOIN sys.columns c ON o.object_id = c.object_id
INNER JOIN sys.partitions p ON o.object_id = p.object_id
INNER JOIN sys.column_store_segments css
    ON p.hobt_id = css.hobt_id
    AND css.column_id = c.column_id
WHERE o.name = 'BIG_DATA'
AND c.name = 'ID';

Getting the relevant segment_ids for a particular filter finishes in under a second:

SELECT segment_id
FROM sys.objects o
INNER JOIN sys.columns c ON o.object_id = c.object_id
INNER JOIN sys.partitions p ON o.object_id = p.object_id
INNER JOIN sys.column_store_segments css
    ON p.hobt_id = css.hobt_id
    AND css.column_id = c.column_id
WHERE o.name = 'BIG_DATA'
AND c.name = 'ID'
AND 500000 BETWEEN css.min_data_id AND css.max_data_id;

Since I’m just querying DMVs, I would expect SQL Server to be able to do rowgroup elimination internally in a much more efficient way than the above queries do. Therefore, 30 seconds seemed like a reasonable upper bound for the following query:

SELECT COUNT(*)
FROM dbo.BIG_DATA
WHERE ID = 500000;

The query takes over 15 minutes to complete despite reading only a single segment:

Table ‘BIG_DATA’. Scan count 4, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 6, lob physical reads 1, lob read-ahead reads 0.
Table ‘BIG_DATA’. Segment reads 1, segment skipped 953674.

SQL Server Execution Times:
CPU time = 905328 ms, elapsed time = 917478 ms.

The Evil Wait Event


The query had a max wait time of 915628 ms for QUERY_TASK_ENQUEUE_MUTEX. This is suspiciously close to the elapsed time of 917478 ms. Unfortunately this is a very unpopular wait event in the industry. The wait event library has almost no information about it either.

I call this the evil wait event because while it’s happening queries cannot be canceled through SSMS and many unrelated queries won’t even run. Most of the time no useful work can be done on the instance. The few references to it that I could find were in Russian, which I can’t read, so I’m not sure what the wait event is about. After I restarted SQL Server the wait event no longer appeared as consistently, but query performance did not improve as far as I could tell.

Other Queries


I ran a few other test queries as well, although I was limited in what I could do by the evil wait event. The following query is the only one that I found that wasn’t affected:

SELECT TOP 1 ID
FROM dbo.BIG_DATA;

For reasons I don’t understand the query still took a long time:

Table ‘BIG_DATA’. Scan count 1, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 6, lob physical reads 1, lob read-ahead reads 0.
Table ‘BIG_DATA’. Segment reads 1, segment skipped 0.

SQL Server Execution Times:
CPU time = 791625 ms, elapsed time = 811202 ms.

Counting the rows in the table took almost half an hour:

SELECT COUNT_BIG(*)
FROM dbo.BIG_DATA;

Statistics output:

Table ‘BIG_DATA’. Scan count 4, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 5722050, lob physical reads 819345, lob read-ahead reads 0.

SQL Server Execution Times:
CPU time = 3734515 ms, elapsed time = 1635348 ms.

A query to sum every ID in the table took over 40 minutes:

SELECT SUM(ID)
FROM dbo.BIG_DATA;

Statistics output:

SQL Server Execution Times:
CPU time = 4810422 ms, elapsed time = 2433689 ms.

As expected, performance gets worse without aggregate pushdown. Here’s a simple query that isn’t supported for pushdown:

SELECT MAX(ID / 1)
FROM dbo.BIG_DATA;

This query took over an hour:

SQL Server Execution Times:
CPU time = 10218343 ms, elapsed time = 3755976 ms.

Queries against some of the CCI DMVs don’t do very well either. A simple count took almost ten minutes:

SELECT COUNT(*)
FROM sys.dm_db_column_store_row_group_physical_stats
where OBJECT_ID = OBJECT_ID('BIG_DATA');

All of the work appears to be in the COLUMNSTORE_ROW_GROUPS table-valued function but I didn’t dig any more into it.

If you’re interested in how a query performs let me know in the comments. I will try anything that’s somewhat reasonable.

Final Thoughts


Now I can add working with trillion row tables to my resume. The compression for the one trillion row table was very impressive but everything else was decidedly less impressive. The very long QUERY_TASK_ENQUEUE_MUTEX wait times for nearly all queries were especially disappointing. I plan to do more testing at a later date with a partitioned table to see if that helps at all.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

Rowgroup Elimination In SQL Server Column Store Indexes

Rowgroup elimination is a performance optimization based on compressed rowgroup metadata that can allow rowgroups to be skipped during query execution. It’s likely that all of the metadata used for the optimization is exposed in the sys.column_store_segments DMV. This blog post explores some of the less well known rules and limitations for rowgroup elimination.

Test Data


To keep things very simple we’ll build 100 rowgroups with exactly 1 million rows in each of them. ID and ID2 increase from 1 to 100000000 and ID_NULL is always NULL. Code to create and populate the table:

DROP TABLE IF EXISTS dbo.MILLIONAIRE_CCI;

CREATE TABLE dbo.MILLIONAIRE_CCI (
	ID BIGINT NULL,
	ID2 BIGINT NULL,
	ID_NULL BIGINT NULL,
	INDEX CCI_MILLIONAIRE_CCI CLUSTERED COLUMNSTORE
);

DECLARE @loop INT = 0;
BEGIN
	SET NOCOUNT ON;
	WHILE @loop < 100
	BEGIN
		INSERT INTO dbo.MILLIONAIRE_CCI WITH (TABLOCK)
		SELECT t.RN, t.RN, NULL
		FROM (
			SELECT TOP (1000000)
				(1000000 * @loop)
				+ ROW_NUMBER()
					OVER (ORDER BY (SELECT NULL)) RN
			FROM master..spt_values t1
			CROSS JOIN master..spt_values t2
			ORDER BY RN
		) t
		OPTION (MAXDOP 1);

		SET @loop = @loop + 1;
	END;
END;

We can expect very good rowgroup elimination on the ID and ID2 columns based on how we built them. That can be verified by calculating the REFF or by looking at sys.column_store_segments:

a10_not_null_DMV

Code to generate the above result set:

SELECT css.min_data_id, css.max_data_id, css.has_nulls
FROM sys.objects o
INNER JOIN sys.columns c ON o.object_id = c.object_id
INNER JOIN sys.partitions p ON o.object_id = p.object_id
INNER JOIN sys.column_store_segments css
    ON p.hobt_id = css.hobt_id
    AND css.column_id = c.column_id
INNER JOIN sys.dm_db_column_store_row_group_physical_stats s
    ON o.object_id = s.object_id
    AND css.segment_id = s.row_group_id
    AND s.partition_number = p.partition_number
WHERE o.name = 'MILLIONAIRE_CCI'
AND c.name = 'ID'
AND s.[state] = 3
ORDER BY css.min_data_id, css.segment_id;

Many of the test queries below select a single aggregate value. This isn’t done for any special reason other than to limit the size of the result set. The easiest way to see how many rowgroups were skipped is to use SET STATISTICS IO ON and that requires that the results be returned to the client.
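For example, checking one of the queries from later in the post looks like this:

SET STATISTICS IO ON;

SELECT MAX(ID)
FROM dbo.MILLIONAIRE_CCI
WHERE ID = 1;

The Messages tab then includes the segment read and segment skipped counts quoted throughout this post.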

Single Column Filtering


Consider the following query:

SELECT MAX(ID)
FROM dbo.MILLIONAIRE_CCI
WHERE ID IN (1000000, 2000001);

Based on how we built the segments for the ID column we might expect that only two segments will need to be read: segment 1 with ID values of 1-1000000 and segment 3 with ID values of 2000001-3000000. As usual, SQL Server does not care about our expectations:

Table ‘MILLIONAIRE_CCI’. Segment reads 3, segment skipped 97.

Why did the storage engine scan three segments instead of two? Running another test makes the problem more clear:

SELECT MAX(ID)
FROM dbo.MILLIONAIRE_CCI
WHERE ID IN (1, 100000000);

For this query we end up scanning the entire table:

Table ‘MILLIONAIRE_CCI’. Segment reads 100, segment skipped 0.

It seems as if the query optimizer reduces the predicate against the filtered column to be a range of IDs. That range of IDs is used for rowgroup elimination. In some cases it’s possible to write a WHERE clause that won’t return any rows but still isn’t eligible for rowgroup elimination. The storage engine is not able to skip any segments while executing the below query:

SELECT MAX(ID)
FROM dbo.MILLIONAIRE_CCI
WHERE ID < 0 OR ID > 100000000;

There isn’t an issue when the where clause is filtering on a contiguous range. For example, the following query skips 98 segments as expected:

SELECT MAX(ID)
FROM dbo.MILLIONAIRE_CCI
WHERE ID BETWEEN 1 AND 2000000;

There also isn’t an issue when filtering down to multiple values as long as those values are sufficiently close together, as shown with the first example. I also wasn’t able to find any limitations around the number of values in the IN clause. The query below reads 1 segment and skips 99 as we might hope:

SELECT MAX(l.ID)
FROM dbo.MILLIONAIRE_CCI l
WHERE l.ID IN (
1, 2, 3, 4, 5, 6, 7, 8, 9, 10
, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20
, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30
, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40
, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50
, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60
, 61, 62, 63, 64
);

If we add one more filter value then the query optimizer changes the plan to use a join:

a10_part1_to_join

The above query is eligible for rowgroup elimination but it follows slightly different rules as covered later in this post.

It is possible to disable the transformation to a join by using the undocumented query hint QueryRuleOff SelToLSJ. With 976 entries in the IN clause I still get rowgroup elimination as expected. With 977 entries nothing was pushed to the scan at all, and we get a truly horrible plan:

a10_terrible_plan

This doesn’t appear to be a columnstore limitation. The same behavior can be observed with a clustered rowstore index.
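For reference, here’s roughly how that undocumented hint gets applied; treat this as a testing-only sketch rather than something for production code:

SELECT MAX(l.ID)
FROM dbo.MILLIONAIRE_CCI l
WHERE l.ID IN (1, 2, 3, 4, 5 /* ...hundreds more values... */)
OPTION (QUERYRULEOFF SelToLSJ);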

Getting back on track, the internal calculation around which rowgroups to skip isn’t always as simple as calculating the minimum and maximum in the range and using those values to do elimination. It’s possible to end up with no rowgroup elimination even when the maximum and minimum ID in the WHERE clause are close to each other. Consider the following query:

SELECT MAX(ID)
FROM dbo.MILLIONAIRE_CCI
WHERE ID BETWEEN 1 AND 2
OR ID BETWEEN 2 AND 3;

The storage engine only has to read a single segment. We can see in the query plan that the optimizer was able to simplify the expression into something that happens to qualify for rowgroup elimination:

a10_part1_rewrite

Now consider the following query:

SELECT MAX(ID)
FROM dbo.MILLIONAIRE_CCI
WHERE ID BETWEEN 1 AND 2
OR ID BETWEEN 3 AND 4;

It would be helpful if the query optimizer changed the predicate to ID BETWEEN 1 AND 4 when doing calculations around which rowgroups can be skipped. This does not happen, and as a result all 100 rowgroups are scanned. Rowgroup elimination won’t be available when the WHERE clause is a sufficiently complicated mix of AND and OR conditions, even when filtering on just one column.

NULLs


Information about NULLs is stored internally and can be used for rowgroup elimination. SQL Server knows that none of the compressed segments for the ID column contain NULL, so the storage engine can skip all 100 segments for the following query:

SELECT MAX(ID)
FROM dbo.MILLIONAIRE_CCI
WHERE ID IS NULL;

Naturally, reversing the filter for this query will require the storage engine to scan the entire table.

SELECT MAX(ID)
FROM dbo.MILLIONAIRE_CCI
WHERE ID IS NOT NULL;

We might expect that query to skip all segments if we change the filter column to ID_NULL. All rows in the rowgroups for ID_NULL are NULL and SQL Server ought to be aware of that fact. However, the storage engine still scans the entire table even for the query below:

SELECT MAX(ID)
FROM dbo.MILLIONAIRE_CCI
WHERE ID_NULL IS NOT NULL;

The DMV for ID_NULL doesn’t look as we might expect:

a10_NULL_DMV

sys.column_store_dictionaries has a value of 0 for the entry_count column. It seems likely that the fact that the segments only contain NULL can be deduced from information already tracked by SQL Server. Rowgroup elimination for IS NOT NULL may not have been added because it was thought to be too unlikely of a use case.

Filters on Multiple Columns


To state it simply, rowgroup elimination can work quite well with AND predicates on different columns. It will not work with OR predicates on different columns unless the query optimizer can simplify the expression to something that’s eligible for rowgroup elimination.

The following queries are all able to skip 99 rowgroups:

SELECT MAX(ID)
FROM dbo.MILLIONAIRE_CCI
WHERE ID = 1 AND ID2 = 1;

SELECT MAX(ID)
FROM dbo.MILLIONAIRE_CCI
WHERE ID BETWEEN 1 AND 2
AND ID2 BETWEEN 3 AND 4;

SELECT MAX(ID)
FROM dbo.MILLIONAIRE_CCI
WHERE ID BETWEEN 1 AND 100000000
AND ID2 BETWEEN 1000001 AND 2000000;

This query skips all 100 rowgroups:

SELECT MAX(ID)
FROM dbo.MILLIONAIRE_CCI
WHERE ID = 1 AND ID2 = 1000001;

The storage engine doesn’t take the union of rowgroups that could be relevant. It instead takes the intersection, so adding AND predicates won’t increase the number of segments scanned, unless perhaps you do something very unreasonable. The following query scans one rowgroup as expected:

SELECT MAX(ID)
FROM dbo.MILLIONAIRE_CCI
WHERE ID BETWEEN 1 AND 100000000
AND ID2 BETWEEN 1000001 AND 2000000
AND ID > ID2;

The final part of the WHERE clause is implemented in a filter operator. The rest of the WHERE clause remains eligible for rowgroup elimination.

Now let’s try a simple query with an OR predicate:

SELECT MAX(ID)
FROM dbo.MILLIONAIRE_CCI
WHERE ID = 1 OR ID2 = 1;

We might hope that the storage engine is able to deduce that only the first segment is relevant. Instead, rowgroup elimination isn’t even attempted. The predicate is implemented as a filter:

a10_FILTER

The only situation with OR filters that I’ve found to work with rowgroup elimination is when the optimizer can eliminate one of them. For example, the following query scans 5 segments because the optimizer is able to eliminate the condition on the ID2 column:

SELECT MAX(ID)
FROM dbo.MILLIONAIRE_CCI
WHERE ID IN (1, 5000000) OR ID2 BETWEEN 1 AND 0;

Joins


The query optimizer is able to copy predicates when filtering and joining on the same column. The copied predicates are eligible for rowgroup elimination. Consider the query below:

SELECT MAX(l1.ID)
FROM dbo.MILLIONAIRE_CCI l1
INNER JOIN dbo.MILLIONAIRE_CCI l2 ON l1.ID = l2.ID
WHERE l1.ID BETWEEN 1 AND 1000000;

Only two segments are read because the filter on ID can be applied to both sides of the join. The same behavior can be observed when forcing a merge join. Loop join is a bit different. As covered in the post on CCI string aggregation, rowgroup elimination does not occur on the inner side of a loop. Consider the following query:

SELECT MAX(l1.ID)
FROM dbo.MILLIONAIRE_CCI l1
INNER JOIN dbo.MILLIONAIRE_CCI l2 ON l1.ID = l2.ID
WHERE l1.ID BETWEEN 1 AND 1000
OPTION (LOOP JOIN, NO_PERFORMANCE_SPOOL);

The inner side is scanned 1000 times and the outer side is scanned once. The filter on ID allows all segments to be skipped besides one. So we should read 1001 segments and skip 1001 * 100 - 1001 = 99099 segments. This is what happens:

Table ‘MILLIONAIRE_CCI’. Segment reads 1001, segment skipped 99099.

More segments will be read depending on how many rowgroups the filter crosses. Suppose that we include rows with an ID that’s between 999501 and 1000500:

SELECT MAX(l1.ID)
FROM dbo.MILLIONAIRE_CCI l1
INNER JOIN dbo.MILLIONAIRE_CCI l2 ON l1.ID = l2.ID
WHERE l1.ID BETWEEN 999501 AND 1000500
OPTION (LOOP JOIN, NO_PERFORMANCE_SPOOL);

Now each scan on both the inner and outer side will need to read two segments:

Table ‘MILLIONAIRE_CCI’. Segment reads 2002, segment skipped 98098.

It’s possible to get rowgroup elimination even when filtering and joining on different columns. Consider the following query that joins on ID but filters on ID2:

SELECT MAX(l1.ID)
FROM dbo.MILLIONAIRE_CCI l1
INNER JOIN dbo.MILLIONAIRE_CCI l2 ON l1.ID = l2.ID
WHERE l1.ID2 BETWEEN 1 AND 1000000;

We still get rowgroup elimination against both sides of the join:

Table ‘MILLIONAIRE_CCI’. Segment reads 2, segment skipped 198.

The key is the optimized bitmap:

a10_opt_bitmap

That allows rowgroup elimination to happen on both sides. Bitmap optimization can only occur with hash joins, so queries written in this way that do a merge or loop join won’t be able to take advantage of rowgroup elimination against both tables.
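As a quick check of that claim, forcing a merge join on the same query should lose the optimized bitmap, leaving only the side with the WHERE clause able to skip rowgroups:

SELECT MAX(l1.ID)
FROM dbo.MILLIONAIRE_CCI l1
INNER JOIN dbo.MILLIONAIRE_CCI l2 ON l1.ID = l2.ID
WHERE l1.ID2 BETWEEN 1 AND 1000000
OPTION (MERGE JOIN);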

Less Reasonable Queries


Below is a set of sometimes unreasonable queries to test some of the limits around rowgroup elimination. It was surprising how often the queries remained eligible for rowgroup elimination. For example, local variables seem to cause no issues, even without PEO (the parameter embedding optimization). The following query reads just one segment:

DECLARE @ID_FILTER BIGINT = 1;
SELECT MAX(ID)
FROM dbo.MILLIONAIRE_CCI
WHERE ID = @ID_FILTER;

Data type conversions on the filtered expression don’t make the query ineligible for rowgroup elimination:

SELECT MAX(ID)
FROM dbo.MILLIONAIRE_CCI
WHERE ID = '1';

Casting on the filtered column is going to prevent rowgroup elimination. As will “optimizer tricks” like adding zero to the column:

SELECT MAX(ID)
FROM dbo.MILLIONAIRE_CCI
WHERE ID + 0 = 1;

We read all rowgroups:

Table ‘MILLIONAIRE_CCI’. Segment reads 100, segment skipped 0.

The query below is eligible for rowgroup elimination:

SELECT MAX(ID)
FROM dbo.MILLIONAIRE_CCI
WHERE ID <= CEILING(RAND());

Using scalar UDFs in queries is a terrible idea, but let’s create one for testing purposes:

CREATE OR ALTER FUNCTION dbo.CHEAP_UDF() RETURNS BIGINT
AS
BEGIN
	RETURN 1;
END;

As you might expect, the following query runs without parallelism and cannot skip any segments:

SELECT MAX(l.ID)
FROM dbo.MILLIONAIRE_CCI l
WHERE l.ID = dbo.CHEAP_UDF();

However, if we add SCHEMABINDING to the function definition then we get rowgroup elimination:

Table ‘MILLIONAIRE_CCI’. Segment reads 1, segment skipped 99.

The query below gets rowgroup elimination with and without SCHEMABINDING:

SELECT MAX(l.ID)
FROM dbo.MILLIONAIRE_CCI l
WHERE l.ID = (SELECT MAX(dbo.CHEAP_UDF()));

Query Rewrites for Better Rowgroup Elimination


In some cases it’s possible to rewrite queries to get better rowgroup elimination. This requires knowing your data and awareness of the rules around rowgroup elimination. Going back to an earlier example, the following query isn’t eligible for rowgroup elimination (without very convenient constraints):

SELECT *
FROM dbo.MILLIONAIRE_CCI
WHERE ID = 1 OR ID2 = 1;

It can be written to use UNION or UNION ALL. Here’s the UNION query:

SELECT *
FROM dbo.MILLIONAIRE_CCI
WHERE ID = 1

UNION 

SELECT *
FROM dbo.MILLIONAIRE_CCI
WHERE ID2 = 1;

Now the storage engine skips 198 segments and only reads 2:

Table ‘MILLIONAIRE_CCI’. Segment reads 2, segment skipped 198.

In some cases it may be advantageous to avoid the sort. The query below has the same rowgroup elimination:

SELECT *
FROM dbo.MILLIONAIRE_CCI
WHERE ID = 1

UNION ALL

SELECT *
FROM dbo.MILLIONAIRE_CCI
WHERE ID2 = 1 AND ID <> 1;

Here’s the query plan:

a10_section_rewrite_UNION_ALL

Consider another query with a wide range of values in the IN clause, but filtered against a single column. The query below won’t be able to skip any rowgroups because we’re including the minimum and maximum value of ID in the query’s results:

SELECT *
FROM dbo.MILLIONAIRE_CCI
WHERE ID IN (
1, 2, 3, 4, 5, 6, 7, 8, 9, 10
, 100000000
);

It may be impractical to write out the query using UNION. Instead, we can essentially force a join by putting the filter values into a derived table. The optimizer is likely to move the IN list to a constant scan and do a hash join to the CCI. We can get rowgroup elimination through the bitmap filter on the hash join. Here’s one way to rewrite the query:

SELECT c.*
FROM
(
	VALUES (1), (2), (3), (4), (5)
	, (6), (7), (8), (9), (10)
	, (100000000)
) v(x)
INNER JOIN dbo.MILLIONAIRE_CCI c ON c.ID = v.x;

Here’s the plan:

a10_section_rewrite_hash

As expected, we only need to scan 2 rowgroups:

Table ‘MILLIONAIRE_CCI’. Segment reads 2, segment skipped 98.

SQL Server 2017 Changes


I ran all of the test queries against SQL Server 2017 RC2. I was not able to observe any differences. It may be that Microsoft did not choose to make improvements in this area, or any improvements were missed by my test cases.

Final Thoughts


Rowgroup elimination seems designed to reduce IO requirements for queries that filter on a contiguous range of values in a column, like filtering against a single month of data from a table, or when joining to the CCI through a hash join. It’s possible to write queries for which rowgroup elimination does not occur, even though SQL Server in theory has all of the information that it would need to perform rowgroup elimination. From a practical point of view, the biggest limitation is probably around OR logic.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

Aggregate Pushdown Limitations With SQL Server Column Store Indexes

Aggregate pushdown is an optimization for aggregate queries against columnstore tables that was introduced in SQL Server 2016. Some aggregate computations can be pushed to the scan node instead of calculated in an aggregate node. I found the documentation to be a little light on details so I tried to find as many restrictions around the functionality as I could through testing.

Test Data


Most of my testing was done against a simple CCI with a single compressed rowgroup:

DROP TABLE IF EXISTS dbo.AP_1_RG;
CREATE TABLE dbo.AP_1_RG (
	ID1 bigint NULL,
	ID2 bigint NULL,
	AGG_COLUMN BIGINT NOT NULL,
	INDEX CCI CLUSTERED COLUMNSTORE
);

INSERT INTO dbo.AP_1_RG WITH (TABLOCK)
SELECT
t.RN % 8000
, t.RN % 8000
, 0
FROM
(
	SELECT TOP (1048576)
	ROW_NUMBER() OVER
		(ORDER BY (SELECT NULL)) RN
	FROM master..spt_values t1
	CROSS JOIN master..spt_values t2
) t
OPTION (MAXDOP 1);

I found this table structure and data to be convenient for most of the test cases, but all tests can be reproduced with different data or a different table structure.

Restrictions without GROUP BY


Aggregate pushdown is supported both with and without a GROUP BY clause. I found it convenient to test those two cases separately. Below is a list of restrictions that I found or verified during testing.

Data Type of Aggregate Column

The documentation says:

Any datatype <= 64 bits is supported. For example, bigint is supported as its size is 8 bytes but decimal (38,6) is not because its size is 17 bytes. Also, no string types are supported.

However, this isn’t quite accurate. Float data types are not supported. numeric(10,0) is supported despite requiring 9 bytes for storage. Here’s a full table of results:

a9_data_type_table

* bit is not supported for aggregates in general
** not all data types were tested

I would summarize support as all date and time data types are supported except datetimeoffset. All exact numeric data types are supported if they are under 10 bytes. Approximate numerics, strings, and other data types are not supported.
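Here’s a small sketch of the kind of per-type check behind that table, comparing an eligible exact numeric column against an ineligible float column; the table name is just for this test:

DROP TABLE IF EXISTS dbo.AP_TYPE_TEST;
CREATE TABLE dbo.AP_TYPE_TEST (
	NUM_COL NUMERIC(10, 0) NULL,
	FLOAT_COL FLOAT NULL,
	INDEX CCI CLUSTERED COLUMNSTORE
);

INSERT INTO dbo.AP_TYPE_TEST WITH (TABLOCK)
SELECT TOP (1048576) 0, 0
FROM master..spt_values t1
CROSS JOIN master..spt_values t2
OPTION (MAXDOP 1);

-- Eligible for pushdown per the table above:
SELECT MAX(NUM_COL)
FROM dbo.AP_TYPE_TEST;

-- Not eligible (approximate numeric):
SELECT MAX(FLOAT_COL)
FROM dbo.AP_TYPE_TEST;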

Data Type Conversions

Aggregate pushdown does not appear to be supported if there is any data type conversion in any part of the aggregate expression. Both implicit and explicit data type conversions can cause issues, although it ultimately depends on the rules for data type precedence and on whether the query optimizer determines that a conversion is needed. For example, the following queries are eligible for pushdown:

SELECT MAX(ID1 + 1)
FROM dbo.AP_1_RG;

SELECT MAX(ID1 + CAST(1 AS BIGINT))
FROM dbo.AP_1_RG;

SELECT MAX(ID1 + CAST(1 AS INT))
FROM dbo.AP_1_RG;

SELECT SUM(ID1 + 1)
FROM dbo.AP_1_RG;

SELECT SUM(1 * ID1)
FROM dbo.AP_1_RG;

SELECT MAX(CAST(ID1 AS BIGINT))
FROM dbo.AP_1_RG;

However, the following queries are not:

SELECT MAX(CAST(ID1 AS INT))
FROM dbo.AP_1_RG;

SELECT SUM(1.5 * ID1)
FROM dbo.AP_1_RG;

SELECT SUM(ID1 + CAST(1 AS BIGINT))
FROM dbo.AP_1_RG;

Sometimes the compute scalar appears with the conversion even when we might not expect it, like for the last query:

a9_compute_scalar

Unsupported Operators

Division and modulus prevent aggregate pushdown even when they wouldn’t change the result or if there isn’t a data type conversion in the plan. There are likely other unsupported operators as well. The following queries are eligible for pushdown:

SELECT MAX(ID1 + 1)
FROM dbo.AP_1_RG;

SELECT MAX(ID1 - 1)
FROM dbo.AP_1_RG;

SELECT MAX(ID1 * 2)
FROM dbo.AP_1_RG;

The following queries are not eligible for pushdown:

SELECT MAX(ID1 / 1)
FROM dbo.AP_1_RG;

SELECT MAX(ID1 % 9999999999999)
FROM dbo.AP_1_RG;

Aggregate Cannot be Applied to Scan

The aggregate expression must be applied directly to the scan. If there’s a filter between the aggregate and the scan then it won’t work. A compute scalar node between the aggregate and the scan can be okay.

Filter expressions involving OR on different columns tend to be implemented as a separate filter operator. This means that the following query isn’t eligible for pushdown:

SELECT MIN(ID1)
FROM dbo.AP_1_RG
WHERE ID1 > 0 OR ID2 > 0;

It’s likely that this restriction will affect many queries with joins to other tables.
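Based on that rule, a version of the filter that the optimizer can push into the scan itself, such as the AND form below, would be expected to stay eligible; I’d treat this as a hypothesis to verify rather than a guarantee:

SELECT MIN(ID1)
FROM dbo.AP_1_RG
WHERE ID1 > 0 AND ID2 > 0;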

Local Variables Without PEO

Aggregate pushdown is not available if there is a local variable in the aggregate expression unless the optimizer is able to embed the literal parameter value in the query, such as with a RECOMPILE hint. For example, the first query is not eligible for pushdown but the second query is:

DECLARE @var BIGINT = 0;
-- no
SELECT MIN(ID1 + @var)
FROM dbo.AP_1_RG;

-- yes
SELECT MIN(ID1 + @var)
FROM dbo.AP_1_RG
OPTION (RECOMPILE);

Trivial Plans

Simple queries that use SUM, AVG, COUNT, or COUNT_BIG against very small tables may get a trivial plan. In SQL Server 2016 that trivial plan will not be eligible for batch mode so aggregate pushdown will not occur. Consider the following CCI with 10000 rows in a compressed rowgroup:

DROP TABLE IF EXISTS AGG_PUSHDOWN_FEW_ROWS;

CREATE TABLE dbo.AGG_PUSHDOWN_FEW_ROWS (
ID BIGINT NOT NULL,
INDEX CCI2 CLUSTERED COLUMNSTORE
);

INSERT INTO dbo.AGG_PUSHDOWN_FEW_ROWS WITH (TABLOCK)
SELECT t.RN
FROM
(
	SELECT TOP (10000) ROW_NUMBER()
		OVER (ORDER BY (SELECT NULL)) RN
	FROM master..spt_values t1
	CROSS JOIN master..spt_values t2
	CROSS JOIN master..spt_values t3
) t
OPTION (MAXDOP 1);

ALTER INDEX CCI2 ON AGG_PUSHDOWN_FEW_ROWS
REORGANIZE WITH (COMPRESS_ALL_ROW_GROUPS = ON);

With the default cost threshold for parallelism value of 5 I get a trivial plan:

a9_trivial_plan

If I decrease the CTFP value to 0 I get batch mode along with aggregate pushdown. As far as I can tell, queries with MAX or MIN do not have this issue.

Lack of Hash Match

For some queries the query optimizer may cost a stream aggregate as a cheaper alternative to a hash match aggregate. Aggregate pushdown is not available with a stream aggregate. If I truncate the AGG_PUSHDOWN_FEW_ROWS table and load 3002 rows into it I get a stream aggregate and no pushdown. With 3003 rows I get a hash match and pushdown. The tipping point depends on the aggregate function, MAXDOP, the data loaded into the table, and so on. The undocumented query hint QUERYRULEOFF GbAggToStrm can be used to encourage a hash match aggregate. If the aggregated column is a string this gets more complicated.

Non-trivial CASE statements

Trivial CASE statements inside the aggregate are still eligible for pushdown:

SELECT SUM(CASE WHEN 1 = 1 THEN ID1 ELSE ID2 END)
FROM dbo.AP_1_RG;

However, many other CASE statements are not. Here is one example:

SELECT SUM(CASE WHEN ID1 < ID2 THEN ID1 ELSE ID2 END)
FROM dbo.AP_1_RG;

Basic Restrictions

For completeness I’ll list a few more of the more obvious restrictions on pushdown. Many of these are documented by Microsoft. The CCI must contain at least one compressed rowgroup. Delta stores are not eligible for pushdown. There are only six aggregate functions supported: COUNT, COUNT_BIG, MIN, MAX, SUM, and AVG. COUNT (DISTINCT col_name) is not supported.

If a query does not get batch mode due to TF 9453 or for other reasons it will not get aggregate pushdown. Undocumented trace flag 9354 directly disables aggregate pushdown.
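As a hedged example, scoping that trace flag to a single query for comparison purposes might look like this:

SELECT MAX(ID1)
FROM dbo.AP_1_RG
OPTION (QUERYTRACEON 9354);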

Restrictions with GROUP BY


Queries with a GROUP BY have many, if not all, of the same restrictions on the aggregate expressions. As far as I can tell there are no restrictions on the data type of the GROUP BY columns. More than one GROUP BY column is supported as well. However, there are a few restrictions which only apply to queries with a GROUP BY clause.

Non-direct Column References

The expressions in the GROUP BY clause need to be direct column references. Adding scalars and other nonsense appears to make the query ineligible. The following queries are not eligible:

SELECT ID1 + 0, SUM(AGG_COLUMN)
FROM dbo.AP_1_RG
GROUP BY ID1 + 0;

SELECT ID1 + ID2, SUM(AGG_COLUMN)
FROM dbo.AP_1_RG
GROUP BY ID1 + ID2;

This one is eligible:

SELECT ID1, ID2, SUM(AGG_COLUMN)
FROM dbo.AP_1_RG
GROUP BY ID1, ID2;

Segment Not Compressed Enough?

A rowgroup appears to be ineligible for aggregate pushdown if the GROUP BY column has a segment size which is too large. This can be observed with the following test data:

DROP TABLE IF EXISTS AP_3_RG;
CREATE TABLE dbo.AP_3_RG (
ID1 bigint NULL,
AGG_COLUMN BIGINT NOT NULL,
INDEX CCI CLUSTERED COLUMNSTORE
);

INSERT INTO dbo.AP_3_RG WITH (TABLOCK)
SELECT
t.RN % 16000
, 0
FROM
(
	SELECT TOP (1 * 1048576) ROW_NUMBER()
		OVER (ORDER BY (SELECT NULL)) RN
	FROM master..spt_values t1
	CROSS JOIN master..spt_values t2
	CROSS JOIN master..spt_values t3
) t
OPTION (MAXDOP 1);

INSERT INTO dbo.AP_3_RG WITH (TABLOCK)
SELECT
t.RN % 17000
, 0
FROM
(
	SELECT TOP (1 * 1048576) ROW_NUMBER()
		OVER (ORDER BY (SELECT NULL)) RN
	FROM master..spt_values t1
	CROSS JOIN master..spt_values t2
	CROSS JOIN master..spt_values t3
) t
OPTION (MAXDOP 1);

INSERT INTO dbo.AP_3_RG WITH (TABLOCK)
SELECT
t.RN % 16000
, 0
FROM
(
	SELECT TOP (1 * 1048576) ROW_NUMBER()
		OVER (ORDER BY (SELECT NULL)) RN
	FROM master..spt_values t1
	CROSS JOIN master..spt_values t2
	CROSS JOIN master..spt_values t3
) t
OPTION (MAXDOP 1);

Rows from the second rowgroup are not locally aggregated for the following query:

SELECT ID1, SUM(AGG_COLUMN)
FROM AP_3_RG
GROUP BY ID1
OPTION (MAXDOP 1);

The following query has 2097152 locally aggregated rows which correspond to the first and third rowgroups. From the actual plan:

a9_local_agg

Poking around in the sys.column_store_segments and sys.column_store_dictionaries DMVs doesn’t reveal any interesting differences other than that the rowgroup with more distinct values has a much larger size (2097736 bytes versus 128576 bytes). We can go deeper with the undocumented DBCC CSINDEX:

DBCC TRACEON (3604);

DBCC CSINDEX (
7, -- DB_ID
72057613148028928, -- hobt_id
2, -- 1 + column_id
0, -- segment_id
1, -- 1 for segment
0 -- print option
);

Among other differences, for the 16000 distinct value segment we see:

Bitpack Data Header:

Bitpack Entry Size = 16
Bitpack Unit Count = 0
Bitpack MinId = 3
Bitpack DataSize = 0

But for the 17000 distinct value segment we see:

Bitpack Data Header:
Bitpack Entry Size = 16
Bitpack Unit Count = 262144
Bitpack MinId = 3
Bitpack DataSize = 2097152

Perhaps bitpack compressed data is not eligible for aggregate pushdown?

It’s important to note that segments are not compressed independently in SQL Server, despite the data being stored at a column level. For the AP_1_RG table we’re eligible for pushdown when aggregating by ID1 or ID2. However, if I truncate the table and change the data slightly:

TRUNCATE TABLE dbo.AP_1_RG;

INSERT INTO dbo.AP_1_RG WITH (TABLOCK)
SELECT
t.RN % 8000
, t.RN % 8001 -- was previously 8000
, 0
FROM
(
	SELECT TOP (1048576) ROW_NUMBER()
		OVER (ORDER BY (SELECT NULL)) RN
	FROM master..spt_values t1
	CROSS JOIN master..spt_values t2
	CROSS JOIN master..spt_values t3
) t
OPTION (MAXDOP 1);

Now both columns are no longer eligible for aggregate pushdown. A REBUILD operation on the table does not help. Tricking SQL Server into assigning more memory to the columnstore compression also does not help.

Pushdown Surprises


During testing I was surprised by a few queries that supported aggregate pushdown. The most surprising was that the following query can get aggregate pushdown:

SELECT MAX(ID1 + ID2)
FROM dbo.AP_1_RG;

I have no idea how it works, but it does. Among other surprises, TABLESAMPLE does not prevent aggregate pushdown from happening. In addition, GROUP BY CUBE, GROUPING SETS, and ROLLUP are supported as well.

Changes with SQL Server 2017


I ran the same series of tests on SQL Server 2017 RC1. The only difference I observed was that I could no longer get a trivial plan without batch mode aggregation. This change was announced here by Microsoft.

Final Thoughts


As you can see, there are many undocumented restrictions around aggregate pushdown. These limits may be changed or go away as Microsoft continues to update SQL Server. Pushdown with GROUP BY is supported for some queries against some tables, but eligibility appears to be based on how the data is compressed, which cannot be predicted ahead of time. In my opinion, this makes it rather difficult to count on in practice.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

Copying Large Tables Efficiently In SQL Server

Sometimes there’s a need to copy tables from one database to another. A large INSERT to copy all of the rows at once may not be desired or possible. For example, if the target database has a recovery model of full we may need to avoid filling the log or long rollbacks if the process needs to be canceled. If the database has a recovery model of simple and there’s a lot of other activity going on we may need to avoid filling the log due to a lengthy transaction. Minimal logging won’t help with that. There may be a desire to throttle each loop with a WAITFOR DELAY command, and so on.

Think of the tables as being copied by a background process. We want to copy them in chunks while efficiently using the server’s resources. It’s not a race to copy all of the data for a particular table as quickly as possible. Also, there may be many tables to move, and they could have different structures for their clustered indexes.

Test Data


For the test table I deliberately picked a table far too large to fit into the buffer cache. This wasn’t hard because I configured SQL Server to have the minimum required memory of 1 GB. All tests were conducted with a recovery model of simple. To make things interesting, but not too interesting, I gave the SOURCE_TABLE a two column clustered index:

DROP TABLE IF EXISTS dbo.SOURCE_TABLE;
CREATE TABLE dbo.SOURCE_TABLE (
	ID1 BIGINT NOT NULL,
	ID2 BIGINT NOT NULL,
	PAGE_TURNER VARCHAR(170) NOT NULL,
	PRIMARY KEY (ID1, ID2)
);

Aiming for a target size of 20 GB, I inserted 100 million rows:

CREATE TABLE #t (ID BIGINT NOT NULL, PRIMARY KEY (ID));

INSERT INTO #t WITH (TABLOCK)
SELECT TOP (10000) ROW_NUMBER()
	OVER (ORDER BY (SELECT NULL))
FROM master..spt_values t1
CROSS JOIN  master..spt_values t2;

INSERT INTO dbo.SOURCE_TABLE WITH (TABLOCK)
SELECT
  t1.ID
, t2.ID
, REPLICATE('Z', 170)
FROM #t t1
CROSS JOIN #t t2
ORDER BY t1.ID, t2.ID
OPTION (MAXDOP 1, NO_PERFORMANCE_SPOOL);

The table has exactly 20 GB worth of data pages!

a8_pages

Which of course works out to 2.5 million pages. As an aside, I didn’t want to use the temp table to do the data prep but couldn’t find a good way around it. The usual method of spt_values, TOP, and ROW_NUMBER() leads to a pretty large sort. I tried all kinds of tricks but couldn’t make it go away:

a8_unnecessary_sort

The TOP N sort for the outer result set is for the first column of the clustered index. The TOP N sort for the inner result set is for the second column of the clustered index. The final sort before the SELECT is for column 1 and column 2. Clearly, the sort isn’t needed because the data will be already sorted in that order coming out of the join. However, the query optimizer can be very stubborn in these situations.

The code samples in this post are designed to move any table that has a unique, clustered index. If you know something about the data in the table or the structure it may be possible to write more efficient code at the table level. Note that there is no attempted handling of concurrency. If the underlying data in the source table is changing then none of this code should be expected to work. The code can also handle heaps as long as you define a clustered index on them first.

To efficiently use server resources I decided that I wanted to loop in order of the clustered key. This should avoid unnecessary page splitting and lead to a cleaner result table. With the right recovery model, it should also take advantage of reduced logging for all of the inserts in SQL Server 2016. Finally, it would be nice to avoid sorting and excessive tempdb usage. Perhaps many of these jobs will run at once and we don’t want them to fail due to tempdb usage.
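
The target table isn't defined anywhere in this section, so if you're following along, I'm assuming it simply mirrors the source table's structure. Something like this works for all of the code samples below:

DROP TABLE IF EXISTS dbo.TARGET_TABLE;
CREATE TABLE dbo.TARGET_TABLE (
	ID1 BIGINT NOT NULL,
	ID2 BIGINT NOT NULL,
	PAGE_TURNER VARCHAR(170) NOT NULL,
	PRIMARY KEY (ID1, ID2)
);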

Straight Insert


The first thing that I tested was a single INSERT to be used as a benchmark for all of the different algorithms:

INSERT INTO dbo.TARGET_TABLE WITH (TABLOCK)
SELECT *
FROM dbo.SOURCE_TABLE WITH (TABLOCK);

This isn’t a fair comparison because we can get better minimal logging for every row, avoid query plan compilations, and we’re simply doing less work overall. However, I think it’s still useful for context and to see how close we can get to the ideal case with no overhead. Here is a table of performance numbers from sys.dm_exec_sessions for our first test:

a8_table_1
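
If you want to gather the same counters, something along these lines pulls them for the current session from sys.dm_exec_sessions (a sketch; the post doesn't list exactly which columns were captured):

SELECT cpu_time, total_elapsed_time, logical_reads, reads, writes
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID;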

Full Copy Temp Table


We need to loop over our source table and insert chunks of rows into the target table. One strategy to do this is to insert all of the clustered keys from the source table into a temp table with an IDENTITY column. This approach is easy to understand and the number of keys in the clustered index doesn’t make the SQL more complicated. Here’s one implementation:

DECLARE @total_rows_to_move BIGINT,
@batch_size INT = 1000000,
@batch_number INT = 1;

BEGIN

SET NOCOUNT ON;

DROP TABLE IF EXISTS #TARGET_TABLE_PKS;
CREATE TABLE #TARGET_TABLE_PKS (
	ID BIGINT NOT NULL IDENTITY (1, 1),
	ID1 BIGINT NOT NULL,
	ID2 BIGINT NOT NULL,
	PRIMARY KEY (ID)
);

INSERT INTO #TARGET_TABLE_PKS WITH (TABLOCK)
(ID1, ID2)
SELECT ID1, ID2
FROM dbo.SOURCE_TABLE WITH (TABLOCK)
ORDER BY ID1, ID2;

SET @total_rows_to_move = @@ROWCOUNT;

WHILE @batch_number <= CEILING(1.0 * @total_rows_to_move / @batch_size) -- 1.0 * avoids integer division dropping a final partial batch
BEGIN
	INSERT INTO dbo.TARGET_TABLE WITH (TABLOCK)
	SELECT s.*
	FROM dbo.SOURCE_TABLE s WITH (TABLOCK)
	INNER JOIN #TARGET_TABLE_PKS t
	ON s.ID1 = t.ID1 AND s.ID2 = t.ID2
	WHERE t.ID BETWEEN
		1 + @batch_size * (@batch_number - 1)
		AND @batch_size * @batch_number
	OPTION (RECOMPILE);

	SET @batch_number = @batch_number + 1;
END;

DROP TABLE #TARGET_TABLE_PKS;

END;

The RECOMPILE hint was included to avoid default estimates caused by the local variables. Even with that, our first attempt does not go so well. The code takes nearly 40 minutes to complete. All performance metrics are bad across the board:

a8_table_2

Looking at the query plans, SQL Server chose a merge join between the source table and the temp table:

a8_merge_join

The merge join won’t always scan every row from the hundred million row table. In fact, the first loop will scan the minimum required number of rows, one million. The next loop scans two million rows, the one after that three million, and so on. The scan can stop early but it always starts with the first row in the table. This strategy is very poorly suited for how we’re processing the table. Performance will be quadratic with the number of loops.

Fix The Join


For this code we know something that the query optimizer doesn’t. A loop join feels like a better choice than merge for this query due to how we’re processing data from the temp table. It’ll do a constant amount of work for each loop. One way to encourage a loop join is to lower the cardinality estimate for the temp table. In the code below I did this by adding a TOP operator and removing the RECOMPILE hint:

DECLARE @total_rows_to_move BIGINT,
@batch_size INT = 1000000,
@batch_number INT = 1;

BEGIN

SET NOCOUNT ON;

DROP TABLE IF EXISTS #TARGET_TABLE_PKS;
CREATE TABLE #TARGET_TABLE_PKS (
	ID BIGINT NOT NULL IDENTITY (1, 1),
	ID1 BIGINT NOT NULL,
	ID2 BIGINT NOT NULL,
	PRIMARY KEY (ID)
);

INSERT INTO #TARGET_TABLE_PKS WITH (TABLOCK)
(ID1, ID2)
SELECT ID1, ID2
FROM dbo.SOURCE_TABLE WITH (TABLOCK)
ORDER BY ID1, ID2;

SET @total_rows_to_move = @@ROWCOUNT;

WHILE @batch_number <= CEILING(1.0 * @total_rows_to_move / @batch_size) -- 1.0 * avoids integer division dropping a final partial batch
BEGIN
	INSERT INTO dbo.TARGET_TABLE WITH (TABLOCK)
	SELECT s.*
	FROM dbo.SOURCE_TABLE s WITH (TABLOCK)
	INNER JOIN (
		SELECT TOP (@batch_size) *
		FROM #TARGET_TABLE_PKS
		WHERE ID BETWEEN
			1 + @batch_size * (@batch_number - 1)
			AND @batch_size * @batch_number
		ORDER BY ID
	) t
	ON s.ID1 = t.ID1 AND s.ID2 = t.ID2;

	SET @batch_number = @batch_number + 1;
END;

DROP TABLE #TARGET_TABLE_PKS;

END;

Now we get a nested loop join because of the default estimate of 100 rows:

a8_loop_join

As a nice bonus, the unnecessary (from our point of view) sort operator goes away. The data is already in clustered key order due to how we built the temp table. The cardinality estimate for the insert is a bit low, but if the data is already sorted then why should it matter? We see better performance than before:

a8_table_3

With the old code we did 100 loops and read an average of 50.5 million rows from the source table. We also read all 100 million rows to build the temp table, so we read a total of 5.15 billion rows from the table. That adds up to a lot of physical reads. With this code we do 100 million index seeks but we only read a total of 200 million rows from the source table. That might be why the logical reads increased so much but physical reads are way down.

Can we improve the code further? Storing all of the clustered keys in a temp table feels like a bit much. It’s a lot of writes and the operation could fail if the table is too large. It would also be nice to not have to read all 100 million rows from the source table using joins.

Sampled Temp Table


There’s no need to store every row from the source table in the temp table. We can instead store a sample of rows and use that sample to build key ranges to perform clustered index range scans against. That should lead to a dramatic reduction in tempdb space usage and makes the joins to the source table unnecessary. One complication is that the SQL to do efficient range scans becomes more annoying to write as the number of clustered key columns increases. For a table with two clustered key columns we can do the following:

WHERE (
	s.ID1 > @id1_start OR
	(s.ID1 = @id1_start AND s.ID2 >= @id2_start)
)
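
To see why this gets annoying, here's the same pattern with a hypothetical third key column ID3 and a matching @id3_start variable (not part of the demo table, just for illustration):

WHERE (
	s.ID1 > @id1_start OR
	(s.ID1 = @id1_start AND s.ID2 > @id2_start) OR
	(s.ID1 = @id1_start AND s.ID2 = @id2_start AND s.ID3 >= @id3_start)
)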

I’m not doing this technique justice here, but the important part is that all of the filters show up as seek predicates rather than residual predicates:

a8_seek_predicates

Here’s the full code:

DECLARE @batch_size INT = 1000000,
@id1_start BIGINT,
@id2_start BIGINT;

BEGIN

SET NOCOUNT ON;

DROP TABLE IF EXISTS #TARGET_TABLE_SAMPLED_PKS;
CREATE TABLE #TARGET_TABLE_SAMPLED_PKS (
	ID1 BIGINT NOT NULL,
	ID2 BIGINT NOT NULL,
	PRIMARY KEY (ID1, ID2)
);

INSERT INTO #TARGET_TABLE_SAMPLED_PKS
SELECT
  ID1
, ID2
FROM
(
	SELECT
	  ID1
	, ID2
	, ROW_NUMBER() OVER (ORDER BY ID1, ID2) RN
	FROM dbo.SOURCE_TABLE WITH (TABLOCK)
) t
WHERE RN % @batch_size = 1;

DECLARE cur CURSOR
LOCAL FAST_FORWARD
FOR
SELECT ID1, ID2
FROM #TARGET_TABLE_SAMPLED_PKS
ORDER BY ID1, ID2;

OPEN cur  

FETCH NEXT FROM cur
INTO @id1_start, @id2_start;

WHILE @@FETCH_STATUS = 0
BEGIN
	INSERT INTO dbo.TARGET_TABLE WITH (TABLOCK)
	SELECT TOP (@batch_size) s.*
	FROM dbo.SOURCE_TABLE s WITH (TABLOCK)
	WHERE (
	s.ID1 > @id1_start
	OR (s.ID1 = @id1_start AND s.ID2 >= @id2_start)
	)
	ORDER BY s.ID1, s.ID2;

	FETCH NEXT FROM cur
	INTO @id1_start, @id2_start;
END;

CLOSE cur;
DEALLOCATE cur;

DROP TABLE #TARGET_TABLE_SAMPLED_PKS;

END;

Performance improves yet again:

a8_table_4

However, why is the number of logical reads so high? We’ve eliminated the nested loop joins so this is an unexpected result.

Cardinality Estimates for Inserts


It turns out that the cardinality estimate for inserts can matter after all. DMLRequestSort for the insert operator is set to false. That’s bad when we’re inserting a million rows at a time. I don’t know the details, but a bad cardinality estimate can cause the wrong internal APIs to be used for inserting the data and logging. We no longer need to reduce the cardinality estimate to get the plan that we want, so let’s try replacing the @batch_size variable with a hardcoded value of 1000000 for the TOP operator. After that change we see another big gain in performance:

a8_table_5

The table insert now has the DMLRequestSort property set to true.
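
To be clear, only the TOP inside the loop changes; reconstructed from the code above, the INSERT now looks like this:

	INSERT INTO dbo.TARGET_TABLE WITH (TABLOCK)
	SELECT TOP (1000000) s.*
	FROM dbo.SOURCE_TABLE s WITH (TABLOCK)
	WHERE (
	s.ID1 > @id1_start
	OR (s.ID1 = @id1_start AND s.ID2 >= @id2_start)
	)
	ORDER BY s.ID1, s.ID2;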

No Temp Table


One issue with the code above is that we do a clustered index scan of the entire table at the beginning. Some of that data remains in the buffer cache but very little of it will be useful for the looping part of the procedure. If we can find a way to loop as we need to we may be able to take better advantage of the buffer cache. Also, why use a temp table when you don’t have to?

One way to accomplish this is with the OFFSET keyword introduced in SQL Server 2012. SQL Server always reads and throws away the number of rows specified in the OFFSET clause. It cannot smartly skip ahead in the index. To avoid some of the performance problems with OFFSET we need to use it with an anchor row. Here is an algorithm that uses OFFSET instead of a temp table:

DECLARE @batch_size INT = 1000000,
@id1_start BIGINT,
@id2_start BIGINT,
@id1_next BIGINT,
@id2_next BIGINT;

BEGIN

SET NOCOUNT ON;

SELECT @id1_start = ID1, @id2_start = ID2
FROM SOURCE_TABLE s WITH (TABLOCK)
ORDER BY ID1, ID2
	OFFSET 0 ROWS
	FETCH FIRST 1 ROW ONLY;

WHILE @id1_start IS NOT NULL
BEGIN
	INSERT INTO dbo.TARGET_TABLE WITH (TABLOCK)
	SELECT TOP (1000000) s.*
	FROM dbo.SOURCE_TABLE s WITH (TABLOCK)
	WHERE (
	s.ID1 > @id1_start OR
	(s.ID1 = @id1_start AND s.ID2 >= @id2_start)
	)
	ORDER BY s.ID1, s.ID2;

	SET @id1_next = NULL;
	SET @id2_next = NULL;

	SELECT @id1_next = ID1, @id2_next = ID2
	FROM SOURCE_TABLE s WITH (TABLOCK)
	WHERE (
	s.ID1 > @id1_start OR
	(s.ID1 = @id1_start AND s.ID2 >= @id2_start)
	)
	ORDER BY ID1, ID2
		OFFSET (@batch_size) ROWS
		FETCH FIRST 1 ROW ONLY;

	-- a NULL @id1_next means there is no next anchor row, so the loop ends
	SET @id1_start = @id1_next;
	SET @id2_start = @id2_next;
END;

END;

We use OFFSET to get the row that starts the next key range to process. Performance is a bit better than before:

a8_table_6

However, the number of physical reads didn’t fall as much as it could have. The source table has 2.5 million data pages and the straight insert only needed 2.5 million physical reads to copy the data, which is hardly a coincidence.

Better Buffer Cache


There’s a subtle issue in the last algorithm. Think about the order of scans and inserts. For the Nth loop we scan one million rows from the source table using the anchor row that we found previously. Nearly all of those rows won’t be in the buffer cache so they will be physical reads. The insert happens at the same time and that certainly has an effect on what will remain in the buffer cache. Then we scan the same range again to find the next anchor row. It feels like it would be better to scan the upcoming range first, get the next anchor row, then perform the insert up to the anchor row. That way data that we want to remain in the buffer cache won’t be kicked out as much by the INSERT. The code to do this is a bit more complex:

DECLARE @batch_size INT = 1000000,
@id1_start BIGINT,
@id2_start BIGINT,
@id1_next BIGINT,
@id2_next BIGINT;

BEGIN

SET NOCOUNT ON;

SELECT @id1_start = ID1, @id2_start = ID2
FROM SOURCE_TABLE s WITH (TABLOCK)
ORDER BY ID1, ID2
	OFFSET 0 ROWS
	FETCH FIRST 1 ROW ONLY;

SELECT @id1_next = ID1, @id2_next = ID2
FROM SOURCE_TABLE s WITH (TABLOCK)
WHERE (
s.ID1 > @id1_start OR
(s.ID1 = @id1_start AND s.ID2 >= @id2_start)
)
ORDER BY ID1, ID2
	OFFSET (@batch_size) ROWS
	FETCH FIRST 1 ROW ONLY;

WHILE @id1_start IS NOT NULL
BEGIN
	INSERT INTO dbo.TARGET_TABLE WITH (TABLOCK)
	SELECT TOP (1000000) s.*
	FROM dbo.SOURCE_TABLE s WITH (TABLOCK)
	WHERE (
	s.ID1 > @id1_start OR
	(s.ID1 = @id1_start AND s.ID2 >= @id2_start)
	)
	ORDER BY s.ID1, s.ID2;

	SET @id1_start = @id1_next;
	SET @id2_start = @id2_next;
	SET @id1_next = NULL;
	SET @id2_next = NULL;

	SELECT @id1_next = ID1, @id2_next = ID2
	FROM SOURCE_TABLE s WITH (TABLOCK)
	WHERE (
	s.ID1 > @id1_start OR
	(s.ID1 = @id1_start AND s.ID2 >= @id2_start)
	)
	ORDER BY ID1, ID2
		OFFSET (@batch_size) ROWS
		FETCH FIRST 1 ROW ONLY;
END;

END;

However, performance improved yet again. It’s almost like I planned this blog post out in advance!

a8_table_7

The number of physical reads is now very close to the number required by the straight insert. Note that we could have also reduced the physical read count of the previous algorithm by lowering the batch size.

The Single Source Scan Method


What if we don’t want to read the source table twice? After all, the straight insert only needs to read the source table once. We can’t let SQL Server outsmart us that much. The key to this next algorithm is that the target table starts out empty. Since we’re inserting in clustered key order, getting the next anchor for a loop is as simple as selecting the last clustered key from the target table and changing the predicate a little bit:

DECLARE @batch_size INT = 1000000,
@RC int,
@id1_start BIGINT,
@id2_start BIGINT;

BEGIN

SET NOCOUNT ON;

INSERT INTO dbo.TARGET_TABLE WITH (TABLOCK)
SELECT TOP (1000000) s.*
FROM dbo.SOURCE_TABLE s WITH (TABLOCK)
ORDER BY s.ID1, s.ID2;

set @RC = @@ROWCOUNT;

SELECT TOP (1) @id1_start = ID1, @id2_start = ID2
FROM TARGET_TABLE s WITH (TABLOCK)
ORDER BY ID1 DESC, ID2 DESC;

WHILE @RC = @batch_size
BEGIN
	INSERT INTO dbo.TARGET_TABLE WITH (TABLOCK)
	SELECT TOP (1000000) s.*
	FROM dbo.SOURCE_TABLE s WITH (TABLOCK)
	WHERE (
	s.ID1 > @id1_start OR
	(s.ID1 = @id1_start AND s.ID2 > @id2_start)
	)
	ORDER BY s.ID1, s.ID2;

	set @RC = @@ROWCOUNT;

	SELECT TOP (1) @id1_start = ID1, @id2_start = ID2
	FROM TARGET_TABLE s WITH (TABLOCK)
	ORDER BY ID1 DESC, ID2 DESC;
END;

END;

We see a minor improvement in the number of logical reads as expected:

a8_table_8

This was the best algorithm that I was able to come up with.

A Few Bad Ideas


I played around with a few other methods of moving the data and feel the need to caution the reader against them. One idea is to define an AFTER INSERT trigger on the target table and to use the special inserted table to save off the value of the last clustered key inserted into that table. This appeared to just be slower than the single scan method and who wants to use triggers when they can be avoided?
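
For reference, here's a sketch of the trigger idea. The one-row helper table dbo.LAST_KEY_INSERTED and the trigger name are hypothetical and not from the original tests:

CREATE TABLE dbo.LAST_KEY_INSERTED (ID1 BIGINT NULL, ID2 BIGINT NULL);
INSERT INTO dbo.LAST_KEY_INSERTED VALUES (NULL, NULL);
GO

CREATE TRIGGER dbo.TR_TARGET_TABLE_LAST_KEY
ON dbo.TARGET_TABLE
AFTER INSERT
AS
BEGIN
	SET NOCOUNT ON;

	-- remember the highest clustered key from the batch that was just inserted
	UPDATE dbo.LAST_KEY_INSERTED
	SET ID1 = i.ID1, ID2 = i.ID2
	FROM (
		SELECT TOP (1) ID1, ID2
		FROM inserted
		ORDER BY ID1 DESC, ID2 DESC
	) i;
END;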

A standard cursor performs extremely poorly because it operates on one row at a time. There’s an impressively poorly-documented alternative called API cursors which can process more than one row at a time. After some struggling I was able to use API cursors to copy data from one table to the other, but there was an intermediate step that loaded data from the cursor into a hidden temp table. Also, the cardinality estimate from the insert came from an EXECUTE and was a single row. Performance was very poor.

Final Thoughts


The point of this post wasn’t to insist that one method of moving data is better than all others. In fact, many different algorithms will likely be suitable depending on the table structure, table size, and time requirements. However, it’s important to know that seemingly minor changes can lead to large differences in performance when moving around data that cannot fit in the buffer pool. These performance problems can become magnified when moving large tables or lots of tables.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

SQL Server Clustered Columnstore Index Partitioning Part 1: Rowgroup Elimination Fragmentation

This is part 1 of a series on columnstore index partitioning. The focus of this post is on rowgroup elimination.

Defining Fragmentation for CCIs


There are many different ways that a CCI can be fragmented. Microsoft considers having multiple delta rowgroups, deleted rows in compressed rowgroups, and compressed rowgroups less than the maximum size all as signs of fragmentation. I’m going to propose yet another way. A CCI can also be considered fragmented if the segments for a column that is very important for rowgroup elimination are packed in such a way that limits rowgroup elimination. That sentence was a bit much but the following table examples should illustrate the point.

The Test Data


First we’ll create a single column CCI with integers from 1 to 104857600. I’ve written the query so that the integers will be loaded in order. This is more commonly done using a loop and I’m relying slightly on undocumented behavior, but it worked fine in a few tests that I did:

DROP TABLE IF EXISTS dbo.LUCKY_CCI;

CREATE TABLE dbo.LUCKY_CCI (
	ID BIGINT NOT NULL,
	INDEX CCI CLUSTERED COLUMNSTORE
);

INSERT INTO dbo.LUCKY_CCI WITH (TABLOCK)
SELECT TOP (100 * 1048576) ROW_NUMBER()
	OVER (ORDER BY (SELECT NULL)) RN
FROM master..spt_values t1
CROSS JOIN master..spt_values t2
CROSS JOIN master..spt_values t3
ORDER BY RN
OPTION (MAXDOP 1);

On my machine, the table takes less than 40 seconds to populate. We can see that the minimum and maximum IDs for each rowgroup are nice and neat from the sys.column_store_segments DMV:

a7_lucky

This table does not have any rowgroup elimination fragmentation. The segments are optimally constructed from a rowgroup elimination point of view. SQL Server will be able to skip every rowgroup except for one or two if I filter on a sequential range of one million integers:

SELECT COUNT(*)
FROM dbo.LUCKY_CCI
WHERE ID BETWEEN 12345678 AND 13345677;

STATISTICS IO output:

Table ‘LUCKY_CCI’. Segment reads 2, segment skipped 98.

Erik Darling was the inspiration for this next table, since he can never manage to get rowgroup elimination in his queries. The code below is designed to create segments that allow for pretty much no rowgroup elimination whatsoever:

DROP TABLE IF EXISTS dbo.UNLUCKY_CCI;

CREATE TABLE dbo.UNLUCKY_CCI (
	ID BIGINT NOT NULL,
	INDEX CCI CLUSTERED COLUMNSTORE
);

INSERT INTO dbo.UNLUCKY_CCI WITH (TABLOCK)
SELECT rg.RN
FROM
(
	SELECT TOP (100) ROW_NUMBER()
		OVER (ORDER BY (SELECT NULL)) RN
	FROM master..spt_values t1
) driver
CROSS APPLY (
	SELECT driver.RN

	UNION ALL

	SELECT 104857601 - driver.RN

	UNION ALL

	SELECT TOP (1048574) 100 + 1048574 * (driver.RN - 1)
		+ ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
	FROM master..spt_values t2
	CROSS JOIN master..spt_values t3
) rg
ORDER BY driver.RN
OPTION (MAXDOP 1, NO_PERFORMANCE_SPOOL);

Each rowgroup will contain an ID near the minimum (1 – 100) and an ID near the maximum (104857501 – 104857600). We can see this with the same DMV as before:

a7_unlucky

This table has pretty much the worst possible rowgroup elimination fragmentation. SQL Server will not be able to skip any rowgroups when querying almost any sequential range of integers:

SELECT COUNT(*)
FROM dbo.UNLUCKY_CCI
WHERE ID BETWEEN 12345678 AND 13345677;

STATISTICS IO output:

Table ‘UNLUCKY_CCI’. Segment reads 100, segment skipped 0.

Measuring Rowgroup Elimination Fragmentation


It might be useful to develop a more granular measurement for rowgroup elimination instead of just “best” and “worst”. One way to approach this problem is to define a batch size in rows for a filter against the column and to estimate how many total table scans would need to be done if the entire table was read in a series of SELECT queries with each processing a batch of rows. For example, with a batch size of 1048576, query 1 would include the IDs from 1 to 1048576, query 2 would include the IDs from 1 + 1048576 to 1048576 * 2, and so on. I will call this number the rowgroup elimination fragmentation factor, or REFF for short. It roughly represents how many extra rowgroups are read that could have been skipped with less fragmentation. Somewhat-tested code to calculate the REFF is below:

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

DECLARE @batch_size INT = 1048576;
DECLARE @global_min_id BIGINT;
DECLARE @global_max_id BIGINT;
DECLARE @segments BIGINT;

DROP TABLE IF EXISTS #MIN_AND_MAX_IDS;
CREATE TABLE #MIN_AND_MAX_IDS (
	min_data_id BIGINT,
	max_data_id BIGINT
);
CREATE CLUSTERED INDEX c ON #MIN_AND_MAX_IDS
(min_data_id, max_data_id);

INSERT INTO #MIN_AND_MAX_IDS
SELECT css.min_data_id, css.max_data_id
FROM sys.objects o
INNER JOIN sys.columns c ON o.object_id = c.object_id
INNER JOIN sys.partitions p ON o.object_id = p.object_id
INNER JOIN sys.column_store_segments css
	ON p.hobt_id = css.hobt_id
	AND css.column_id = c.column_id
INNER JOIN sys.dm_db_column_store_row_group_physical_stats s
	ON o.object_id = s.object_id
	AND css.segment_id = s.row_group_id
	AND s.partition_number = p.partition_number
WHERE o.name = 'LUCKY_CCI'
AND c.name = 'ID'
AND s.[state] = 3;

SET @segments = @@ROWCOUNT;

SELECT
  @global_min_id = MIN(min_data_id)
, @global_max_id = MAX(max_data_id)
FROM #MIN_AND_MAX_IDS;

DROP TABLE IF EXISTS #BUCKET_RANGES;
CREATE TABLE #BUCKET_RANGES (
	min_data_id BIGINT,
	max_data_id BIGINT
);

 -- allows up to 6.4 million pieces
INSERT INTO #BUCKET_RANGES
SELECT
  @global_min_id + (RN - 1) * @batch_size
, @global_min_id + RN * @batch_size - 1
FROM
(
	SELECT ROW_NUMBER() OVER
	(ORDER BY (SELECT NULL)) RN
	FROM master..spt_values t1
	CROSS JOIN master..spt_values t2
) t
WHERE -- number of buckets
t.RN <= CEILING((1.0 * @global_max_id
	- (@global_min_id)) / @batch_size);

 -- number of total full scans
SELECT 1.0 * SUM(cnt) / @segments
FROM (
	SELECT COUNT(*) cnt
	FROM #BUCKET_RANGES br
	LEFT OUTER JOIN #MIN_AND_MAX_IDS m
		ON m.min_data_id <= br.max_data_id AND m.max_data_id >= br.min_data_id
	GROUP BY br.min_data_id
) t;

For the LUCKY_CCI table, if we pick a batch size of 1048576 we get a REFF of 1.0. This is because each SELECT query would read exactly 1 rowgroup and skip the other 99 rowgroups. Conversely, the UNLUCKY_CCI table has a REFF of 100.0. This is because every SELECT query would need to read all 100 rowgroups just to return 1048576 rows. In other words, each query does a full scan of the table and there are 100 rowgroups in the table.

What about DUIs?


Suppose our lucky table runs out of luck and starts getting DUIs. As rows are deleted and reinserted they will be packed into new rowgroups. We aren’t likely to keep our perfect rowgroups for very long. Below is code that randomly deletes and inserts a range of 100k integers. There is some bias in which numbers are processed but that’s ok for this type of testing. I ran the code with 600 loops which means that up to 60% of the rows in the table could have been moved around. There’s nothing special about 60%. That just represents my patience with the process.

DECLARE @target_loops INT = 600,
@batch_size INT = 100001,
@current_loop INT = 1,
@middle_num BIGINT,
@rows_in_target BIGINT;

BEGIN

SET NOCOUNT ON;

SELECT @rows_in_target = COUNT(*)
FROM dbo.LUCKY_CCI;

DROP TABLE IF EXISTS #num;
CREATE TABLE #num (ID INT NOT NULL);

INSERT INTO #num WITH (TABLOCK)
SELECT TOP (@batch_size) -1 + ROW_NUMBER()
	OVER (ORDER BY (SELECT NULL)) RN
FROM master..spt_values t1
CROSS JOIN master..spt_values t2;

WHILE @current_loop <= @target_loops
BEGIN
	SET @middle_num = CEILING(@rows_in_target * RAND());

	DELETE tgt WITH (TABLOCK)
 	FROM dbo.LUCKY_CCI tgt
 	WHERE tgt.ID BETWEEN
 	CAST(@middle_num - FLOOR(0.5*@batch_size) AS BIGINT)
	AND
	CAST(@middle_num + FLOOR(0.5*@batch_size) AS BIGINT);

 	INSERT INTO dbo.LUCKY_CCI WITH (TABLOCK)
 	SELECT @middle_num + n.ID - FLOOR(0.5 * @batch_size)
	FROM #num n
 	WHERE n.ID BETWEEN 1 AND @rows_in_target
	OPTION (MAXDOP 1);

	SET @current_loop = @current_loop + 1;
END;

END;

Our table now has a bunch of uncompressed rowgroups and many compressed rowgroups with lots of deleted rows. Let’s clean it up with a REORG:

ALTER INDEX CCI ON dbo.LUCKY_CCI
REORGANIZE WITH (COMPRESS_ALL_ROW_GROUPS = ON);

This table has a REFF of about 41 for a batch size of 1048576. That means that we can expect to see significantly worse rowgroup elimination than before. Running the same SELECT query as before:

Table ‘LUCKY_CCI’. Segment reads 51, segment skipped 75.

The LUCKY_CCI table clearly needs to be renamed.

The Magic of Partitioning


A partition for a CCI is a way of organizing rowgroups. Let’s create the same table but with partitions that can hold five million integers. Below is code to do that for those following along at home:

CREATE PARTITION FUNCTION functioning_part_function
(BIGINT)
AS RANGE RIGHT
FOR VALUES (
  0
, 5000000
, 10000000
, 15000000
, 20000000
, 25000000
, 30000000
, 35000000
, 40000000
, 45000000
, 50000000
, 55000000
, 60000000
, 65000000
, 70000000
, 75000000
, 80000000
, 85000000
, 90000000
, 95000000
, 100000000
, 105000000
); 

CREATE PARTITION SCHEME scheming_part_scheme
AS PARTITION functioning_part_function
ALL TO ( [PRIMARY] );

DROP TABLE IF EXISTS dbo.PART_CCI;

CREATE TABLE dbo.PART_CCI (
ID BIGINT NOT NULL,
INDEX CCI CLUSTERED COLUMNSTORE
) ON scheming_part_scheme(ID);

INSERT INTO dbo.PART_CCI WITH (TABLOCK)
SELECT TOP (100 * 1048576) ROW_NUMBER()
	OVER (ORDER BY (SELECT NULL)) RN
FROM master..spt_values t1
CROSS JOIN master..spt_values t2
CROSS JOIN master..spt_values t3
ORDER BY RN
OPTION (MAXDOP 1);

5 million was a somewhat arbitrary choice. The important thing is that it’s above 1048576 rows. Generally speaking, picking a partition size below 1048576 rows is a bad idea because the table’s rowgroups will never be able to reach the maximum size.

Using the sys.dm_db_column_store_row_group_physical_stats dmv we can see that rowgroups were divided among partitions as expected:

a7_part
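
A query along these lines produces similar output (a sketch; the exact column list in the screenshot is my guess):

SELECT partition_number, row_group_id, state_desc, total_rows, deleted_rows
FROM sys.dm_db_column_store_row_group_physical_stats
WHERE object_id = OBJECT_ID('dbo.PART_CCI')
ORDER BY partition_number, row_group_id;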

This table has a REFF of 1.9 for a batch size of 1048576. This is perfectly reasonable because the partition size was defined as 5 * 1000000 instead of 5 * 1048576 which would be needed for a REFF of 1.0.

Now I’ll run the same code as before to do 60 million DUIs against the table. I expect the partitions to limit the damage because rows cannot move out of their partitions. After the process finishes and we do a REORG, the table has a REFF of about 3.7. The rowgroups are definitely a bit messier:

a7_part_after_DUI

Running our SELECT query one last time:

SELECT COUNT(*)
FROM dbo.Part_CCI
WHERE ID BETWEEN 12345678 AND 13345677;

We see the following numbers for rowgroup elimination:

Table ‘PART_CCI’. Segment reads 4, segment skipped 3.

Only four segments are read. Many segments are skipped, including all of those in partitions not relevant to the query.

Final Thoughts


Large, unpartitioned CCIs will eventually run into issues with rowgroup elimination fragmentation if the loading process can delete data. A maintenance operation that rebuilds the table with a clustered rowstore index and then rebuilds the CCI with MAXDOP = 1 is one way to restore nicely organized rowgroups, but that is a never-ending battle which grows more expensive as the table gets larger. Picking the right partitioning function can put a cap on rowgroup elimination fragmentation.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

Columnstore Parallel Scan Row Distribution In SQL Server

Parallel rowstore scans are parallel “aware”. This makes them unlike most other operators which work independently on different threads. Columnstore indexes store data in a dramatically different way than rowstore objects, so perhaps we can expect differences in how rows are distributed among threads during a parallel scan. This blog post explores some of the observable behavior around row distribution for parallel columnstore scans.

Methods of Rowstore Scan Parallel Row Distribution


This will be an extremely brief summary of how SQL Server distributes rows among threads for rowstore parallel scans. A parallel page supplier sends pages to each thread on a demand basis. Threads may end up processing different numbers of pages for many reasons. If you like, you can read more about this here and here. In addition, there is some fancy behavior for partitioned tables. The documentation describes this behavior in SQL Server 2008 and it may have changed since then, but this is sufficient to set the stage. The important part is that SQL Server will in some cases give each relevant partition its own thread. In other cases, SQL Server will assign multiple threads to a single partition.

Methods of Columnstore Scan Parallel Row Distribution


I investigated tables with only compressed rowgroups. I did not consider delta stores because real CCIs don’t have delta stores. As far as I can tell, there are at least three different methods that SQL Server can use to assign rows to threads during a parallel CCI scan.

Rowgroup Parallelism

One method of distributing rows is to assign each thread to a relevant rowgroup. Two or more threads will not read rows from the same rowgroup. This strategy can be used if the cardinality estimate from the scan is sufficiently high compared to the DOP used by the query. Rowgroup level parallelism will be used if:

Cardinality Estimate >= MAXDOP * 1048576 * 0.5

To show this, let’s build a CCI with a single column integer that runs from 1 to 1048576 * 10. Naturally, the table will have ten compressed rowgroups of the maximum size:

CREATE TABLE dbo.CCI_SHOW_RG_PARALLELISM (
	ID BIGINT NOT NULL,
	INDEX CCI CLUSTERED COLUMNSTORE
);

INSERT INTO dbo.CCI_SHOW_RG_PARALLELISM WITH (TABLOCK)
SELECT TOP (1048576 * 10) ROW_NUMBER()
	OVER (ORDER BY (SELECT NULL))
FROM master..spt_values t1
CROSS JOIN master..spt_values t2
CROSS JOIN master..spt_values t3
OPTION (MAXDOP 1);

I ran my tests with a simple query that is likely to go parallel on my machine and that doesn’t qualify for aggregate pushdown. With a cardinality estimate of exactly 1000000 and MAXDOP of 2 all of the rows are sent to one thread:

SELECT MAX(SQRT(ID))
FROM dbo.CCI_SHOW_RG_PARALLELISM
WHERE ID <= 1000000
OPTION (MAXDOP 2);

From the actual plan:

a6_RG_1

If we reduce the cardinality estimate by one row, the rows are spread out more evenly on the two threads:

SELECT MAX(SQRT(ID))
FROM dbo.CCI_SHOW_RG_PARALLELISM
WHERE ID <= 999999
OPTION (MAXDOP 2);

From the actual plan:

a6_RG_2

Note that the cardinality estimate displayed in SSMS may be misleading. The first query has a displayed cardinality estimate of 2000000 rows, but it does not use rowgroup parallelism:

SELECT MAX(SQRT(ID))
FROM dbo.CCI_SHOW_RG_PARALLELISM
WHERE ID <= 1999999 -- query plans can round
OPTION (MAXDOP 4);

From the actual plan:

a6_RG_3

But this one does:

SELECT MAX(SQRT(ID))
FROM dbo.CCI_SHOW_RG_PARALLELISM
WHERE ID <= 2000000
OPTION (MAXDOP 4);

From the actual plan:

a6_RG_4

Of course, we can get the query that takes an aggregate of 1999999 rows to use rowgroup level parallelism by bumping up the estimate:

DECLARE @filter BIGINT = 1999999;

SELECT MAX(SQRT(ID))
FROM dbo.CCI_SHOW_RG_PARALLELISM
WHERE ID <= @filter
OPTION (MAXDOP 4);

Here the estimate is:

0.3 * 10485800 = 3145740.0

So we get the expected parallelism strategy:

a6_RG_5

We can show that the rowgroup parallelism strategy is demand-based by deliberately slowing down the thread that grabs the first rowgroup in the table. Here I’m defining first by the ID that’s returned when running a SELECT TOP 1 ID query against the table. On my machine I get an ID of 9437185. The following code will add significant processing time in the CROSS APPLY part for only the row with an ID of 9437185. Every other row simply does a Constant Scan and goes on its merry way:

SELECT COUNT(*)
FROM dbo.CCI_SHOW_RG_PARALLELISM o
CROSS APPLY (
	SELECT TOP 1 1 c
	FROM
	(
		SELECT 1 c
		WHERE o.ID <> 9437185

		UNION ALL

		SELECT 1 c
		FROM master..spt_values t1
		CROSS JOIN master..spt_values t2
		CROSS JOIN master..spt_values t3
		WHERE o.ID = 9437185
		ORDER BY (SELECT NULL)
			OFFSET 100000000 ROWS
			FETCH FIRST 1 ROW ONLY
	) t
) t2
OPTION (MAXDOP 2);

Thread 1 processes nine rowgroups and waits on thread 2 to process its single rowgroup:

a6_RG_6

Split Rowgroup Parallelism

If the CCI has a small number of rows and the cardinality estimate is low enough you may see a different parallel row distribution strategy employed. I couldn’t think of a good name for this, but SQL Server splits up each rowgroup into roughly equal pieces and threads can process those pieces on a demand basis. The number of pieces seems to depend on the number of rows in the rowgroup instead of MAXDOP. For MAXDOP larger than 2 this behavior can be observed if a table has one or two rowgroups. For a MAXDOP of 2 this behavior can be observed if a table has exactly one rowgroup.

The formula for the number of pieces appears to be the number of rows in the rowgroup divided by 104857, rounded down, with a minimum of 1. The maximum rowgroup size of 1048576 implies a maximum number of pieces of 10 per rowgroup. Here’s a table to show all of the possibilities:

a6_table
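
As a sanity check, simple integer division reproduces the piece counts used in the tests below. The 104857 divisor is the observed value from above, not anything documented:

SELECT rowgroup_rows,
	pieces = CASE WHEN rowgroup_rows / 104857 < 1
		THEN 1 ELSE rowgroup_rows / 104857 END
FROM (VALUES (943713), (943712), (733999), (1048576)) v (rowgroup_rows);
-- returns 9, 8, 7, and 10 pieces respectively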

We can see evidence of this behavior in SQL Server with a few tests. As before I’ll need to slow down one thread. First I’ll put 943713 integers into a single compressed rowgroup:

DROP TABLE IF EXISTS dbo.TEST_CCI_SMALL;

CREATE TABLE dbo.TEST_CCI_SMALL (
ID BIGINT NOT NULL,
INDEX CCI CLUSTERED COLUMNSTORE
);

INSERT INTO dbo.TEST_CCI_SMALL WITH (TABLOCK)
SELECT TOP (943713) ROW_NUMBER()
	OVER (ORDER BY (SELECT NULL))
FROM master..spt_values t1
CROSS JOIN master..spt_values t2
CROSS JOIN master..spt_values t3
OPTION (MAXDOP 1);

I will test with a very similar query to a previous test:

SELECT COUNT(*)
FROM dbo.TEST_CCI_SMALL o
CROSS APPLY (
	SELECT TOP 1 1 c
	FROM
	(
		SELECT 1 c
		WHERE o.ID <> 1

		UNION ALL

		SELECT 1 c
		FROM master..spt_values t1
		CROSS JOIN master..spt_values t2
		CROSS JOIN master..spt_values t3
		WHERE o.ID = 1
		ORDER BY (SELECT NULL)
			OFFSET 100000000 ROWS
			FETCH FIRST 1 ROW ONLY
	) t
) t2
OPTION (QUERYTRACEON 8649, MAXDOP 4);

The parallel scan should be split into nine pieces to be divided among threads because the table has a single rowgroup with a row count of 943713. This is exactly what happens:

a6_small_1

If I truncate the table and load one fewer row, the scan is now split up into eight pieces:

a6_small_2

I can also create two compressed rowgroups of a size that should lead to seven pieces per rowgroup:

DROP TABLE IF EXISTS dbo.TEST_CCI_SMALL;

CREATE TABLE dbo.TEST_CCI_SMALL (
	ID BIGINT NOT NULL,
	INDEX CCI CLUSTERED COLUMNSTORE
);

INSERT INTO dbo.TEST_CCI_SMALL WITH (TABLOCK)
SELECT TOP (733999) ROW_NUMBER()
	OVER (ORDER BY (SELECT NULL))
FROM master..spt_values t1
CROSS JOIN master..spt_values t2
CROSS JOIN master..spt_values t3
OPTION (MAXDOP 1);

INSERT INTO dbo.TEST_CCI_SMALL WITH (TABLOCK)
SELECT TOP (733999) 733999 + ROW_NUMBER()
	OVER (ORDER BY (SELECT NULL))
FROM master..spt_values t1
CROSS JOIN master..spt_values t2
CROSS JOIN master..spt_values t3
OPTION (MAXDOP 1);

Thread 1 processes the row with an ID of 734000 so it only gets one piece:

a6_small_3

With more than one rowgroup, the demand-based aspect of piece distribution doesn’t quite work in the same way as with a single rowgroup. I wasn’t able to work out all of the details.

A Third Way?

What about queries that do not meet either of the two above criteria? For example, a query against a not small CCI that has a low cardinality estimate coming out of the scan? In some cases SQL Server will use rowgroup level distribution. In other cases it appears to use a combination of the two methods described above. Most of the time the behavior can be described as each thread gets assigned an entire rowgroup and threads race for pieces of the remaining rowgroups. I wasn’t able to figure out exactly how SQL Server decides which method to use, despite running many tests. However, I will show most of the behavior that I observed. First put 7 rowgroups into a CCI:

DROP TABLE IF EXISTS dbo.MYSTERIOUS_CCI;

CREATE TABLE dbo.MYSTERIOUS_CCI (
	ID BIGINT NOT NULL,
	INDEX CCI CLUSTERED COLUMNSTORE
);

INSERT INTO dbo.MYSTERIOUS_CCI WITH (TABLOCK)
SELECT TOP (1048576 * 7) ROW_NUMBER()
	OVER (ORDER BY (SELECT NULL))
FROM master..spt_values t1
CROSS JOIN master..spt_values t2
CROSS JOIN master..spt_values t3
OPTION (MAXDOP 1);

SELECT TOP 1 ID -- 6291457
FROM dbo.MYSTERIOUS_CCI;

I added local variables to my test query to lower the cardinality estimate. Otherwise I would get rowgroup distribution every time. Here’s an example query:

declare @lower_id INT = 1048576 * 2 + 1;
declare @upper_id INT = 1048576 * 7;

SELECT COUNT(*)
FROM dbo.MYSTERIOUS_CCI o
CROSS APPLY (
	SELECT TOP 1 1 c
	FROM
	(
		SELECT 1 c
		WHERE o.ID <> 6291457

		UNION ALL

		SELECT 1 c
		FROM master..spt_values t1
		CROSS JOIN master..spt_values t2
		CROSS JOIN master..spt_values t3
		WHERE o.ID = 6291457
		ORDER BY (SELECT NULL)
			OFFSET 100000000 ROWS
			FETCH FIRST 1 ROW ONLY
	) t
) t2
WHERE o.ID BETWEEN @lower_id AND @upper_id
OPTION (QUERYTRACEON 8649, MAXDOP 3);

With a MAXDOP of 3 and five processed rowgroups I get rowgroup level parallelism:

a6_3rd_1

With a MAXDOP of 4 and five processed rowgroups, each thread gets a rowgroup and the other three threads race to process the remaining 20 pieces:

a6_3rd_2

With a MAXDOP of 3 and six processed rowgroups we no longer get rowgroup level parallelism:

a6_3rd_3

What about Partitioning?


In tests not reproduced here, I was not able to observe any differences in how rows were distributed for parallel CCI scans when the underlying CCI was partitioned. This seems reasonable if we think about how partitioning for CCIs is different than for rowstore tables. CCI partitions are simply a collection of rowgroups containing rows relevant to the partition. If there’s a need to split up the underlying components of a table we can just assign rowgroups to different threads. For rowstore tables, we can think of each partition as a mini-table. Partitioning for rowstore adds underlying objects which can be distributed to parallel threads.

Fun with Query Plans


With a partial understanding of how SQL Server distributes rows after a parallel scan, we can write queries that show some of the edge cases that can lead to poor performance. The following query is complete nonsense but it shows the point well enough. Here I’m cross joining to a few numbers from the spt_values table. The CROSS JOIN is meant to represent other downstream work done by a query:

SELECT MAX(SQRT(o.ID + t.number))
FROM dbo.MYSTERIOUS_CCI o
CROSS JOIN master..spt_values t
WHERE o.ID <= 1100000 AND
t.number BETWEEN 0 AND 4
OPTION (MAXDOP 2, NO_PERFORMANCE_SPOOL, QUERYTRACEON 8649);

The cardinality estimate and MAXDOP of 2 leads to rowgroup level parallelism being used. Unfortunately, this is very unbalanced:

a6_fun_1

And as a result, the query barely benefits from parallelism:

CPU time = 25578 ms, elapsed time = 24580 ms.

It’s a well-guarded secret that queries run faster with odd MAXDOP, so let’s try a MAXDOP of 3. Now my parallel threads no longer sleep on the job:

CPU time = 30922 ms, elapsed time = 11840 ms.

Here’s the row distribution from the scan:

a6_fun_2

Final Thoughts


This post explored a bit of the observable behavior for how rows are distributed to threads after a parallel columnstore index scan. The algorithms used can in some cases lead to row imbalance on threads which can cause performance issues downstream in the plan. The lack of repartition stream operators in batch mode can make this problem worse than it might be for rowstore scans, but I still expect issues caused by it to be relatively uncommon in practice.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

Trace Flag 610 and SQL Server 2016

SQL Server recently surprised me with unexpected minimal logging. Normally this would be a cause for celebration but I was designing demos for a class. The point of that particular demo was to demonstrate a case in which minimal logging did not occur. The specific test case was inserting into a rowstore clustered index without a TABLOCK hint. Per the documentation, I should not have gotten minimal logging unless trace flag 610 was turned on. I was testing on SQL Server 2016 without trace flag 610.

Trace Flag 610


This trace flag is documented by Microsoft in The Data Loading Performance Guide. Here’s a relevant quote:

Not every row inserted in a cluster index with trace flag 610 is minimally logged. When the bulk load operation causes a new page to be allocated, all of the rows sequentially filling that new page are minimally logged. Rows inserted into pages that are allocated before the bulk load operation occurs are still fully logged, as are rows that are moved as a result of page splits during the load.

In addition, you can see a table of expected logging results if you search for “Summarizing Minimal Logging Conditions”. That table confirms that minimal logging should not happen when inserting into a clustered index without TABLOCK or trace flag 610. However, that’s exactly what I saw on SQL Server 2016.

SQL Server 2014 Testing


In a state of panic I immediately downloaded and installed SQL Server 2014 Express. Well not really, but it makes for a better blog post. For these tests I’m using a recovery model of simple.

Simple Inserts

First I’ll insert 2500 pages into a clustered rowstore table with a few different options. Table schema:

DROP TABLE IF EXISTS dbo.target_ci;

CREATE TABLE dbo.target_ci (
	ID BIGINT NOT NULL,
	FILLER VARCHAR(7700) NOT NULL,
	PRIMARY KEY (ID)
);

I’ll be checking out what’s getting logged with these queries:

SELECT SUM(database_transaction_log_bytes_used)
FROM sys.dm_tran_database_transactions
WHERE DATABASE_ID = 7;
-- you probably have a different database ID

SELECT Operation, COUNT(*)
FROM fn_dblog(NULL, NULL)
GROUP BY Operation
ORDER BY COUNT(*) DESC;

The first test is with TABLOCK, the second is without TABLOCK, and the third is without TABLOCK but with trace flag 610 enabled. I should get minimal logging for the first and the third and not for the second. Here’s the code that I ran:

CHECKPOINT;
BEGIN TRANSACTION;

INSERT INTO dbo.target_ci WITH (TABLOCK)
SELECT TOP (2500)
  ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
, REPLICATE('Z', 7700)
FROM master..spt_values;

-- pause here

COMMIT TRANSACTION;
TRUNCATE TABLE dbo.target_ci;
CHECKPOINT;

BEGIN TRANSACTION;

INSERT INTO dbo.target_ci
SELECT TOP (2500)
  ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
, REPLICATE('Z', 7700)
FROM master..spt_values;

-- pause here

COMMIT TRANSACTION;
TRUNCATE TABLE dbo.target_ci;
CHECKPOINT;

DBCC TRACEON(610);

BEGIN TRANSACTION;

INSERT INTO dbo.target_ci
SELECT TOP (2500)
  ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
, REPLICATE('Z', 7700)
FROM master..spt_values;

-- pause here

COMMIT TRANSACTION;
TRUNCATE TABLE dbo.target_ci;
CHECKPOINT;

According to the transaction log DMV, the first test logged 78,464 bytes, the second test logged 22,157,848 bytes, and the third logged 2,140,124 bytes. These results were expected. Below is a table of results from the undocumented fn_dblog TVF:

a5_2014_min_logging_table

I omitted the (hopefully) irrelevant data. I don’t know what any of it means, but we can clearly see differences between the three tests. Based on this simple testing, SQL Server 2014 appears to match the documentation.

Page Splits

We can also do a simple test to show a case where trace flag 610 doesn’t help. First insert 2500 rows with odd primary keys from 1 to 4999:

CREATE TABLE dbo.NO_NEW_PAGES (
	ID BIGINT NOT NULL,
	DATA VARCHAR(3800),
	PRIMARY KEY (ID)
);

INSERT INTO dbo.NO_NEW_PAGES WITH (TABLOCK)
SELECT TOP (2500)
  -1 + 2 * ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
, REPLICATE('Z', 3800)
FROM master..spt_values;

Then insert 2499 rows with even primary keys from 2 to 4998 with and without TF 610:

DBCC TRACEON(610);

BEGIN TRANSACTION;

INSERT INTO dbo.NO_NEW_PAGES
SELECT TOP (2499)
  2 * ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
, REPLICATE('Z', 3800)
FROM master..spt_values;

-- pause here

ROLLBACK TRANSACTION;

DBCC TRACEOFF(610);

BEGIN TRANSACTION;

INSERT INTO dbo.NO_NEW_PAGES
SELECT TOP (2499)
  2 * ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
, REPLICATE('Z', 3800)
FROM master..spt_values;

-- pause here

COMMIT TRANSACTION;

I expect a lot of page splitting because I can now fit two rows per page. TF 610 shouldn’t handle this scenario well. I logged 16,575,608 bytes with the trace flag and 9,796,224 bytes without the trace flag according to the transaction log DMV.

Nonclustered Indexes

Trace flag 610 can help with nonclustered indexes. Below is a test that inserts 2500 rows with and without trace flag 610 with an index on a new column:

CREATE TABLE dbo.IX_TEST (
	ID BIGINT NOT NULL,
	ID2 BIGINT NOT NULL,
	DATA VARCHAR(3800),
	PRIMARY KEY (ID)
);

CREATE INDEX IX ON IX_TEST (ID2);

DBCC TRACEON(610);

BEGIN TRANSACTION;

INSERT INTO dbo.IX_TEST
SELECT TOP (2500)
  ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
, ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
, REPLICATE('Z', 3800)
FROM master..spt_values;

-- pause here

ROLLBACK TRANSACTION;

DBCC TRACEOFF(610);

BEGIN TRANSACTION;

INSERT INTO dbo.IX_TEST
SELECT TOP (2500)
  ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
, ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
, REPLICATE('Z', 3800)
FROM master..spt_values;

-- pause here

COMMIT TRANSACTION;

There was an order of magnitude difference. With trace flag 610 I logged 1,129,728 bytes and without trace flag 610 I logged 10,120,144 bytes.

SQL Server 2016 Testing


I ran the same tests again on SQL Server 2016 SP1-CU3. With TABLOCK I logged 111,568 bytes according to the transaction log DMV. For both other tests I logged 2,125,376 bytes. Here’s a table of results from fn_dblog:

a5_2016_min_logging_table

Despite what the table says, note that the results with and without the trace flag weren’t exactly the same. However, even when running the same test sometimes the number of logged records per operation varies a little bit. My conclusion was that SQL Server is doing the same thing behind the scenes.

For both the page splitting and the nonclustered index test I got the same results as in SQL Server 2014. This shows that trace flag 610 does have some effect in SQL Server 2016, but not the one you’d expect from reading the documentation.

Estimated Final Thoughts


It appears that Microsoft has added additional minimal logging optimizations for clustered rowstore tables in SQL Server 2016. At least one of these optimizations was previously locked behind trace flag 610. I could not find any mention of this change in the documentation. If you carried over trace flag 610 when upgrading to SQL Server 2016, you should consider retesting that trace flag with your workload. It’s possible to construct a workload that benefits from trace flag 610 in SQL Server 2014 but is harmed by the same trace flag in SQL Server 2016.

Actual Final Thoughts


As of August 1, Microsoft documented that trace flag 610 is a no-op in SQL Server 2016. I revisited my results and found issues with the page splitting and nonclustered index tests. Starting with an empty table, inserting rows, and rolling back that transaction can affect the amount written to the log for future transactions. I don’t know why, but it’s necessary to drop and recreate the table before changing the trace flag to get clean results. It appears to be safe to turn off trace flag 610 in SQL Server 2016.
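
For anyone repeating the tests, the reset between trace flag changes amounts to dropping and recreating the test table rather than relying on TRUNCATE. Shown here for the nonclustered index test; the exact reset steps are my reconstruction:

DROP TABLE IF EXISTS dbo.IX_TEST;

CREATE TABLE dbo.IX_TEST (
	ID BIGINT NOT NULL,
	ID2 BIGINT NOT NULL,
	DATA VARCHAR(3800),
	PRIMARY KEY (ID)
);

CREATE INDEX IX ON IX_TEST (ID2);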

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

SQL Server Clustered Column Store Indexes and String Aggregation

String aggregation is a not uncommon problem in SQL Server. By string aggregation I mean grouping related rows together and concatenating a string column for that group in a certain order into a single column. How will CCIs do with this type of query?

The Data Set


A simple table with three columns is sufficient for testing. ITEM_ID is the id for the rows that should be aggregated together. LINE_ID stores the concatenation order within the group. COMMENT is the string column to be concatenated. The table will have 104857600 rows total with 16 rows per ITEM_ID.

CREATE TABLE dbo.CCI_FOR_STRING_AGG (
	ITEM_ID BIGINT NOT NULL,
	LINE_ID BIGINT NOT NULL,
	COMMENT VARCHAR(10) NULL,
	INDEX CCI CLUSTERED COLUMNSTORE
);

DECLARE @loop INT = 0
SET NOCOUNT ON;
WHILE @loop < 100
BEGIN
	INSERT INTO dbo.CCI_FOR_STRING_AGG WITH (TABLOCK)
	SELECT
	t.RN / 16
	, 1 + t.RN % 16
	, CHAR(65 + t.RN % 16)
	FROM
	(
		SELECT TOP (1048576)
		(1048576 * @loop) - 1 +
		ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) RN
		FROM master..spt_values t1
		CROSS JOIN  master..spt_values t2
	) t
	OPTION (MAXDOP 1);

	SET @loop = @loop + 1;
END;

On my machine this code takes around a minute and a half and the final table size is around 225 MB. I’m inserting batches of 1048576 rows with MAXDOP 1 to get nice, clean rowgroups.
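
A quick way to verify that the rowgroups came out clean (a sketch, not part of the original post):

SELECT state_desc, COUNT(*) AS rowgroups,
	MIN(total_rows) AS min_rows, MAX(total_rows) AS max_rows
FROM sys.dm_db_column_store_row_group_physical_stats
WHERE object_id = OBJECT_ID('dbo.CCI_FOR_STRING_AGG')
GROUP BY state_desc;
-- the setup above should yield 100 COMPRESSED rowgroups of 1048576 rows each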

The Test Query


Let’s concatenate the strings at an ITEM_ID level for all ITEM_IDs with an id between 3276800 and 3342335. All of the data is stored in a single rowgroup and the CCI was built in such a way that all other rowgroups can be skipped with that filter. This should represent the best case for CCI performance. A common way to concatenate a column is with the FOR XML PATH method:

SELECT
  o.ITEM_ID
, STUFF(
	(
		SELECT ','+ i.COMMENT
		FROM dbo.CCI_FOR_STRING_AGG i
		WHERE o.ITEM_ID = i.ITEM_ID
		ORDER BY i.LINE_ID
		FOR XML PATH('')
	)
,1 ,1, '') ALL_COMMENTS
FROM dbo.CCI_FOR_STRING_AGG o
WHERE o.ITEM_ID BETWEEN 3276800 AND 3342335
GROUP BY o.ITEM_ID;

Note that I’m not bothering with the additional arguments for the XML part, but you should whenever using this method in production.

The query plan looks reasonable at first glance:

a4 FOR XML estimated

However, the query takes nearly six minutes to complete. That’s not the lightning fast CCI query performance that we were promised!

Spool Problems


We can see from the actual execution plan that SQL Server scanned all 104.8 million rows from the CCI on a single thread:

a4 actual threads

Those rows were then sent into an index spool which was read from all four threads in the nested loop. The CCI scan and the build of the index spool took the most time in the plan so it makes sense why CPU time is less than elapsed time, even with a MAXDOP 4 query:

CPU time = 305126 ms, elapsed time = 359176 ms.

Thinking back to how parallel nested loop joins work, it seems unavoidable that the index spool was built with just one thread. The rows from the outer result set are sent to the inner query one row at a time on different threads. All of the threads run independently at MAXDOP 1. The eager spool for the index is a blocking operation and the blocking occurs even when the subquery isn’t needed. Consider the following query in which the ITEM_ID = 3400000 filter means that rows from the FOR XML PATH part of the APPLY will never be needed:

SELECT o.ITEM_ID
, ac.ALL_COMMENTS
FROM
(
	SELECT TOP (9223372036854775807) b.ITEM_ID
	FROM dbo.CCI_FOR_STRING_AGG b
	WHERE b.ITEM_ID BETWEEN 3276800 AND 3342335
	GROUP BY b.ITEM_ID
) o
OUTER APPLY (
	SELECT 'TEST'

	UNION ALL

	SELECT STUFF(
		(
			SELECT ','+ i.COMMENT
			FROM dbo.CCI_FOR_STRING_AGG i
			WHERE o.ITEM_ID = i.ITEM_ID
			ORDER BY i.LINE_ID
			FOR XML PATH('')
		)
	,1 ,1, '')
	WHERE o.ITEM_ID =  3400000
)  ac (ALL_COMMENTS)
OPTION (MAXDOP 4, QUERYTRACEON 8649);

The index spool is still built for this query on one thread even though the startup expression predicate condition is never met. It seems unlikely that we’ll be able to do anything about the fact that the index spool is built with one thread and that it blocks execution for the query. However, we know that we only need 1048576 rows from the CCI to return the query’s results. Right now the query takes 104.8 million rows and throws them into the spool. Can we reduce the number of rows put into the spool? The most obvious approach is to simply copy the filter on ITEM_ID into the subquery:

SELECT
  o.ITEM_ID
, STUFF(
	(
		SELECT ','+ i.COMMENT
		FROM dbo.CCI_FOR_STRING_AGG i
		WHERE o.ITEM_ID = i.ITEM_ID
		and i.ITEM_ID BETWEEN 3276800 AND 3342335
		ORDER BY i.LINE_ID
		FOR XML PATH('')
	)
,1 ,1, '') ALL_COMMENTS
FROM dbo.CCI_FOR_STRING_AGG o
WHERE o.ITEM_ID BETWEEN 3276800 AND 3342335
GROUP BY o.ITEM_ID;

This doesn’t have the desired effect:

a4 bad filter with spool

The filter is moved after the index build. We’ll still get all of the rows from the table put into the spool, but no rows will come out unless ITEM_ID is between 3276800 and 3342335. This is not helpful. We can get more strict with the query optimizer by adding a superfluous TOP to the subquery. That should force SQL Server to filter on ITEM_ID before sending rows to the spool because otherwise the TOP restriction may not be respected. One implementation:

SELECT
  o.ITEM_ID
, STUFF(
	(
		SELECT ','+ i.COMMENT
		FROM
		(
			SELECT TOP (9223372036854775807)
			  a.ITEM_ID, a.LINE_ID, a.COMMENT
			FROM dbo.CCI_FOR_STRING_AGG a
			WHERE a.ITEM_ID BETWEEN 3276800 AND 3342335
		) i
		WHERE o.ITEM_ID = i.ITEM_ID
		ORDER BY i.LINE_ID
		FOR XML PATH('')
	)
,1 ,1, '') ALL_COMMENTS
FROM dbo.CCI_FOR_STRING_AGG o
WHERE o.ITEM_ID BETWEEN 3276800 AND 3342335
GROUP BY o.ITEM_ID;

SQL Server continues to outsmart us:

a4 no spool

As you can see in the plan above, the spool has completely disappeared. I was not able to find a way, even with undocumented black magic, to reduce the number of rows going into the spool. Perhaps it is a fundamental restriction of index spools. In fact, we can use the undocumented trace flag 8615 to see that spools are not even considered at any point during optimization of the new query. On the left is the previous query, which did build the spool, with an example highlighted. On the right is the new query. The text shown here is just for illustration purposes, but we can see the spool on the left:

a4 TF diff
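
If you want to reproduce that kind of output yourself, here’s a sketch of how I’d do it. Both trace flags are undocumented, so treat this as test-environment experimentation only; 3604 redirects the trace output to the client so it shows up on the Messages tab. Run each version of the query with the extra hints and diff the text:

SELECT
  o.ITEM_ID
, STUFF(
	(
		SELECT ','+ i.COMMENT
		FROM dbo.CCI_FOR_STRING_AGG i
		WHERE o.ITEM_ID = i.ITEM_ID
		ORDER BY i.LINE_ID
		FOR XML PATH('')
	)
,1 ,1, '') ALL_COMMENTS
FROM dbo.CCI_FOR_STRING_AGG o
WHERE o.ITEM_ID BETWEEN 3276800 AND 3342335
GROUP BY o.ITEM_ID
OPTION (QUERYTRACEON 3604, QUERYTRACEON 8615);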

The important point is that for this query we appear to be stuck.

Rowgroup Elimination Problems


We can try our luck without the spool by relying on rowgroup elimination alone. The spool can’t be eliminated with the NO_PERFORMANCE_SPOOL hint, but another option (other than the TOP trick above) is to use the undocumented QUERYRULEOFF syntax to disable the optimizer rule for building spools:

SELECT
  o.ITEM_ID
, STUFF(
	(
		SELECT ','+ i.COMMENT
		FROM dbo.CCI_FOR_STRING_AGG i
		WHERE o.ITEM_ID = i.ITEM_ID
		ORDER BY i.LINE_ID
		FOR XML PATH('')
	)
,1 ,1, '') ALL_COMMENTS
FROM dbo.CCI_FOR_STRING_AGG o
WHERE o.ITEM_ID BETWEEN 3276800 AND 3342335
GROUP BY o.ITEM_ID
OPTION (QUERYRULEOFF BuildSpool);

The spool is gone but we don’t get rowgroup elimination:

a4 no RG elimination

We can get rowgroup elimination even without hardcoded filters when a bitmap from a hash join is pushed into the scan. Why don’t we get it with a nested loop join? Surely SQL Server ought to be able to apply rowgroup elimination for each outer row as it’s processed by the inner side of the nested loop join. We can explore this question further with tests that don’t do string aggregation. The query below gets rowgroup elimination, but only because the constant filters are pushed down to the CCI scan as a predicate:

SELECT *
FROM (VALUES (1), (2), (3), (4)) AS v(x)
INNER JOIN dbo.CCI_FOR_STRING_AGG a ON v.x = a.ITEM_ID
OPTION (FORCE ORDER, LOOP JOIN);

If we throw the four values into a temp table:

SELECT x INTO #t
FROM (VALUES (1), (2), (3), (4)) AS v(x)
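
We can then run the same join against the temp table. A minimal sketch, reusing the hints from the previous query:

SELECT *
FROM #t v
INNER JOIN dbo.CCI_FOR_STRING_AGG a ON v.x = a.ITEM_ID
OPTION (FORCE ORDER, LOOP JOIN);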

The join predicate is applied in the nested loop operator. We don’t get any rowgroup elimination. Perhaps TOP can help us:

SELECT *
FROM #t v
CROSS APPLY (
	SELECT TOP (9223372036854775807) *
	FROM dbo.CCI_FOR_STRING_AGG a
	WHERE v.x = a.ITEM_ID
) a
OPTION (LOOP JOIN);

Sadly not:

a4 top failure

The join predicate is applied in the filter operation. We still don’t get any rowgroup elimination. Frustratingly, we get the behavior that we’re after if we replace the CCI with a heap:

a4 heap success

I don’t really see an advantage in pushing down the predicate with a heap. Perhaps Microsoft did not program the optimization that we’re looking for into the query optimizer; after all, this is likely to be a relatively uncommon case. Our string aggregation query is simple enough that we filter directly against the CCI, so in theory we can give it a fighting chance by adding a redundant filter on ITEM_ID to the subquery. Here’s the query that I’ll run:

SELECT
  o.ITEM_ID
, STUFF(
	(
		SELECT ','+ i.COMMENT
		FROM dbo.CCI_FOR_STRING_AGG i
		WHERE o.ITEM_ID = i.ITEM_ID
		and i.ITEM_ID BETWEEN 3276800 AND 3342335
		ORDER BY i.LINE_ID
		FOR XML PATH('')
	)
,1 ,1, '') ALL_COMMENTS
FROM dbo.CCI_FOR_STRING_AGG o
WHERE o.ITEM_ID BETWEEN 3276800 AND 3342335
GROUP BY o.ITEM_ID
OPTION (QUERYRULEOFF BuildSpool);

Unfortunately performance is even worse than before. The query finished in around 51 minutes on my machine. It was a proper parallel query and we skipped quite a few rowgroups:

Table ‘CCI_FOR_STRING_AGG’. Segment reads 65537, segment skipped 6488163.
CPU time = 10028282 ms, elapsed time = 3083875 ms.

Skipping trillions of rows is impressive, but 65537 segment reads at up to 1,048,576 rows per rowgroup means that we still read over 68 billion rows from the CCI. That’s not going to be fast. We can’t improve performance further with this method since there aren’t any nonclustered indexes on the CCI, so there’s nothing better to seek against on the inner side of the loop.

Temp Tables


We can use the humble temp table to avoid some of the problems with the index spool. We’re able to insert into it in parallel, build the index in parallel, and we can limit the temp table to only the 1048576 rows that are needed for the query. The following code finishes in less than two seconds on my machine:

CREATE TABLE #CCI_FOR_STRING_AGG_SPOOL (
	ITEM_ID BIGINT NOT NULL,
	LINE_ID BIGINT NOT NULL,
	COMMENT VARCHAR(10) NULL
);

INSERT INTO #CCI_FOR_STRING_AGG_SPOOL WITH (TABLOCK)
SELECT a.ITEM_ID, a.LINE_ID, a.COMMENT
FROM dbo.CCI_FOR_STRING_AGG a
WHERE a.ITEM_ID BETWEEN 3276800 AND 3342335;

CREATE CLUSTERED INDEX CI_CCI_FOR_STRING_AGG_SPOOL
ON #CCI_FOR_STRING_AGG_SPOOL (ITEM_ID, LINE_ID);

SELECT
  o.ITEM_ID
, STUFF(
	(
		SELECT ','+ i.COMMENT
		FROM #CCI_FOR_STRING_AGG_SPOOL i
		WHERE o.ITEM_ID = i.ITEM_ID
		ORDER BY i.LINE_ID
		FOR XML PATH('')
	)
,1 ,1, '') ALL_COMMENTS
FROM dbo.CCI_FOR_STRING_AGG o
WHERE o.ITEM_ID BETWEEN 3276800 AND 3342335
GROUP BY o.ITEM_ID;

The query plan for the last query:

a4 temp table

With this approach we’re not really getting much from defining the underlying table as a CCI. If the table had more columns that weren’t needed then we could skip those columns with column elimination.

Other Solutions


An obvious solution is simply to add a covering nonclustered index to the table. Of course, that comes with the admission that columnstore currently doesn’t have anything to offer with this type of query. Other solutions which are likely to be more columnstore friendly include using a CLR function to do the aggregation and the SQL Server 2017 STRING_AGG function.
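
For example, here’s a sketch of what the STRING_AGG version might look like on SQL Server 2017. I haven’t benchmarked it here, and the CAST to VARCHAR(MAX) is just there to avoid the 8000 byte limit on the aggregated result:

SELECT a.ITEM_ID
, STRING_AGG(CAST(a.COMMENT AS VARCHAR(MAX)), ',')
	WITHIN GROUP (ORDER BY a.LINE_ID) AS ALL_COMMENTS
FROM dbo.CCI_FOR_STRING_AGG a
WHERE a.ITEM_ID BETWEEN 3276800 AND 3342335
GROUP BY a.ITEM_ID;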

Note that recursion is not likely to be a good solution here. Every recursive query that I’ve seen involves nested loops. The dreaded index spool returns:

a4 recursion issue

Final Thoughts


Note that the poor performance of these queries isn’t a problem specific to columnstore. Rowstore tables can have similar performance issues with string aggregation if there isn’t a sufficient covering index. However, if CCIs are used as a replacement for all nonclustered indexes, then the performance of queries that require the table to be on the inner side of a nested loop join may significantly degrade. Some of the optimizations that make querying CCIs fast appear to be difficult to realize for these types of queries. The same string aggregation query, when run against a table with a rowstore clustered index on ITEM_ID and LINE_ID, finished in 1 second. That is an almost 360X improvement over the CCI.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.

Returning All Rows with TABLESAMPLE In SQL Server

This post explores some undocumented behavior with TABLESAMPLE, specifically around the REPEATABLE option. TABLESAMPLE is a very useful tool to get a fast page-based sample of large tables. SQL Server uses it behind the scenes to gather sampled statistics.

Syntax


For this post we only need to be concerned with a subset of the syntax:

TABLESAMPLE SYSTEM (sample_number PERCENT)
[ REPEATABLE (repeat_seed) ]

Here’s a quote from the documentation page about how TABLESAMPLE works internally:

TABLESAMPLE SYSTEM returns an approximate percentage of rows and generates a random value for each physical 8-KB page in the table. Based on the random value for a page and the percentage specified in the query, a page is either included in the sample or excluded.

One theory for how TABLESAMPLE works is that SQL Server generates a sequence of random values and does an allocation order scan that skips a page if the random value for that page does not meet the sample percent threshold. If REPEATABLE is used then SQL Server will generate the same sequence of random values each time, but results may be different if the table’s data has changed:

The REPEATABLE option causes a selected sample to be returned again. When REPEATABLE is specified with the same repeat_seed value, SQL Server returns the same subset of rows, as long as no changes have been made to the table. When REPEATABLE is specified with a different repeat_seed value, SQL Server will typically return a different sample of the rows in the table. The following actions to the table are considered changes: inserting, updating, deleting, index rebuilding, index defragmenting, restoring a database, and attaching a database.

How repeatable is REPEATABLE?


One might expect to get the same sample of pages with the REPEATABLE option even if the underlying data has changed, as long as all of the pages remain in the same physical order. It also seems reasonable to think that if we add a page to the end of the table then the sample should stay the same, except that the new page may or may not be included. We can do some quick tests:

DROP TABLE IF EXISTS dbo.REPEATABLE_TEST;

CREATE TABLE dbo.REPEATABLE_TEST (
ID BIGINT NOT NULL IDENTITY(1, 1),
FILLER VARCHAR(7000),
PRIMARY KEY (ID)
);

INSERT INTO dbo.REPEATABLE_TEST
VALUES (REPLICATE('Z', 7000));
INSERT INTO dbo.REPEATABLE_TEST
VALUES (REPLICATE('Z', 7000));
INSERT INTO dbo.REPEATABLE_TEST
VALUES (REPLICATE('Z', 7000));
INSERT INTO dbo.REPEATABLE_TEST
VALUES (REPLICATE('Z', 7000));
INSERT INTO dbo.REPEATABLE_TEST
VALUES (REPLICATE('Z', 7000));
INSERT INTO dbo.REPEATABLE_TEST
VALUES (REPLICATE('Z', 3000));

-- returns 3 - 6
-- we skipped the first two pages 
-- and included the last four
SELECT ID
FROM dbo.REPEATABLE_TEST
TABLESAMPLE (50 PERCENT)
REPEATABLE (1);

INSERT INTO dbo.REPEATABLE_TEST
VALUES (REPLICATE('Z', 3000));

-- returns 3 - 7, same pages as before
SELECT ID
FROM dbo.REPEATABLE_TEST
TABLESAMPLE (50 PERCENT)
REPEATABLE (1);

INSERT INTO dbo.REPEATABLE_TEST
VALUES (REPLICATE('Z', 7000));

-- returns 3 - 8, includes new page
SELECT ID
FROM dbo.REPEATABLE_TEST
TABLESAMPLE (50 PERCENT)
REPEATABLE (1);

INSERT INTO dbo.REPEATABLE_TEST
VALUES (REPLICATE('Z', 7000));

-- returns 3 - 8, does not include new page
SELECT ID
FROM dbo.REPEATABLE_TEST
TABLESAMPLE (50 PERCENT)
REPEATABLE (1);

So far so good. However, the quote about REPEATABLE also calls out doing an index rebuild. We can see that our table isn’t fragmented at all with this query:

SELECT
  index_level
, page_count
, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats
(DB_ID(), OBJECT_ID(N'dbo.REPEATABLE_TEST')
, NULL, NULL , 'DETAILED');

Result set:

a3_result_set_1

With a MAXDOP 1 rebuild I wouldn’t expect the physical order of pages to change at all. We issue the REBUILD:

ALTER TABLE dbo.REPEATABLE_TEST REBUILD WITH (MAXDOP = 1);

Indeed, the fragmentation results don’t change:

a3_result_set_1

However, now we get a completely different sample, despite the table still having no fragmentation. Pages 1, 2, 3, 5, and 8 are included in the results. The table has the same data and physical order as before. Why should the sample change even with the same REPEATABLE value?

Decoding REPEATABLE


Perhaps the REPEATABLE value is somehow combined with some piece of the table’s metadata, similar to a salt used in hashing. The OBJECT_ID seems like a reasonable guess, except that it doesn’t change after a rebuild, so it can’t explain the different sample. However, the HOBT_ID of the table does change after a REBUILD. We may be able to get a repeatable sample even after a REBUILD if we’re able to factor in the HOBT_ID somehow.
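
A quick way to see that for yourself (a sketch): check the table’s metadata before and after a rebuild and compare the two rows.

-- run before and after ALTER TABLE dbo.REPEATABLE_TEST REBUILD;
-- the object_id stays the same but the hobt_id changes
SELECT p.object_id, p.hobt_id
FROM sys.partitions AS p
WHERE p.object_id = OBJECT_ID('dbo.REPEATABLE_TEST');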

First let’s put 2537 pages into a new testing table:

DROP TABLE IF EXISTS dbo.REPEATABLE_SALT_TEST;

CREATE TABLE dbo.REPEATABLE_SALT_TEST (
ID BIGINT NOT NULL,
FILLER VARCHAR(7000),
PRIMARY KEY (ID)
);

INSERT INTO dbo.REPEATABLE_SALT_TEST WITH (TABLOCK)
SELECT
  ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
, REPLICATE('Z', 7000)
FROM master..spt_values;

REPEATABLE allows values between 1 and the maximum BIGINT value, 9223372036854775807. However, it’s easy to show through testing that a REPEATABLE value returns the same sample as that value plus 4294967296 (2^32). Perhaps picking a REPEATABLE value of hobt_id % 4294967296 will return the same sample even through a REBUILD.
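
For example, here’s a sketch of that kind of test. Both queries below should return the same sample because 4294967297 = 1 + 4294967296:

SELECT COUNT(*), MIN(ID), MAX(ID)
FROM dbo.REPEATABLE_SALT_TEST
TABLESAMPLE (50 PERCENT)
REPEATABLE (1);

SELECT COUNT(*), MIN(ID), MAX(ID)
FROM dbo.REPEATABLE_SALT_TEST
TABLESAMPLE (50 PERCENT)
REPEATABLE (4294967297);

On to the hobt_id % 4294967296 idea: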

SELECT hobt_id % 4294967296
FROM sys.partitions
where object_id = OBJECT_ID('REPEATABLE_SALT_TEST');

SELECT COUNT(*), MIN(ID), MAX(ID)
FROM dbo.REPEATABLE_SALT_TEST
TABLESAMPLE (50 PERCENT)
REPEATABLE (1248067584); -- use query value

I get 1306 pages sampled with a minimum value of 3 and a maximum value of 2537. After a REBUILD I get different results:

ALTER TABLE dbo.REPEATABLE_SALT_TEST
REBUILD WITH (MAXDOP = 1);

SELECT hobt_id % 4294967296
FROM sys.partitions
where object_id = OBJECT_ID('REPEATABLE_SALT_TEST');

SELECT COUNT(*), MIN(ID), MAX(ID)
FROM dbo.REPEATABLE_SALT_TEST
TABLESAMPLE (50 PERCENT)
REPEATABLE (1248133120); -- use query value

Now I get 1272 pages sampled with a minimum value of 6 and a maximum value of 2537. Through trial and error I arrived at one formula which works:

4294967296 - ((hobt_id + @salt) % 4294967296)

Where @salt is a positive integer between 1 and 4294967295. Let’s go back to our current testing table and use a @salt value of 1:

SELECT 4294967296 - ((hobt_id + 1) % 4294967296)
FROM sys.partitions
where object_id = OBJECT_ID('REPEATABLE_SALT_TEST');

SELECT COUNT(*), MIN(ID), MAX(ID)
FROM dbo.REPEATABLE_SALT_TEST
TABLESAMPLE (50 PERCENT)
REPEATABLE (3046768639); -- use query value

Now I get 1268 pages sampled with a minimum value of 1 and a maximum value of 2534. After the rebuild I get the same result:

ALTER TABLE dbo.REPEATABLE_SALT_TEST
REBUILD WITH (MAXDOP = 1);

SELECT 4294967296 - ((hobt_id + 1) % 4294967296)
FROM sys.partitions
where object_id = OBJECT_ID('REPEATABLE_SALT_TEST');

SELECT COUNT(*), MIN(ID), MAX(ID)
FROM dbo.REPEATABLE_SALT_TEST
TABLESAMPLE (50 PERCENT)
REPEATABLE (899219456); -- use query value

The formula can also be expressed as:

4294967296 - ((hobt_id + ABS(2147483648 - @salt)) % 4294967296)

With @salt as an integer from 0 to 2147483647. This works because REPEATABLE values repeat in a cycle of 4294967296. More importantly, picking a @salt value of 1 always returns all pages from a table regardless of the sample size.
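
Here’s a sketch of how to check that claim against the current table, using dynamic SQL so the REPEATABLE value can be computed from the hobt_id. If the claim holds, all 2537 rows come back despite the tiny sample rate:

DECLARE @seed BIGINT, @sql NVARCHAR(500);

SELECT @seed = 4294967296 - ((hobt_id + ABS(2147483648 - 1)) % 4294967296)
FROM sys.partitions
where object_id = OBJECT_ID('REPEATABLE_SALT_TEST');

SET @sql = N'SELECT COUNT(*), MIN(ID), MAX(ID)
FROM dbo.REPEATABLE_SALT_TEST
TABLESAMPLE (1 PERCENT)
REPEATABLE(' + CAST(@seed AS NVARCHAR(20)) + N');';

EXEC sp_executesql @sql;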

I admit that there is probably a simpler formula that I missed. Also, none of the above applies to columnstore tables. I doubt that it applies to partitioned rowstore tables either, but I did not test that.

Practical Applications?


There are a few interesting things that can be done with the new knowledge that we have around how REPEATABLE works.

Force an Allocation Order Scan

Normally you can only get an allocation order scan on a table with 64 pages or more, unless you use TABLESAMPLE with a sample rate of less than 100%. However, now that we have a seed value that returns all rows no matter what, we can force an allocation order scan that returns all rows against a table of any size. Consider a 63 page table written to disk in the wrong order:

DROP TABLE IF EXISTS dbo.FORCE_ALLOCATION_ORDER_SCAN;

CREATE TABLE dbo.FORCE_ALLOCATION_ORDER_SCAN (
	ID BIGINT NOT NULL,
	PAGE_TURNER VARCHAR(5000) NOT NULL,
	PRIMARY KEY (ID)
);

-- add 63 pages with one row per page
-- in reverse clustered index order
DECLARE @i INTEGER = 63;
SET NOCOUNT ON;
WHILE @i > 0
BEGIN
	INSERT INTO dbo.FORCE_ALLOCATION_ORDER_SCAN
	WITH (TABLOCK)
	SELECT @i, REPLICATE('Z', 5000);

	SET @i = @i - 1;
END;

This table is very fragmented from the point of view of SQL Server:

a3_FORCE_ALLOCATION_ORDER_SCAN_frag

Now let’s find the REPEATABLE value which returns all rows:

-- returns 899088385
SELECT 4294967296 - ((hobt_id + ABS(2147483648 - 1)) % 4294967296)
FROM sys.partitions
where object_id = OBJECT_ID('FORCE_ALLOCATION_ORDER_SCAN');

SELECT ID
FROM dbo.FORCE_ALLOCATION_ORDER_SCAN
TABLESAMPLE SYSTEM (99.99999 PERCENT)
REPEATABLE (899088385);

Here is a subset of the results:

a3_FORCE_ALLOCATION_ORDER_SCAN_results

Change Cardinality Estimates

With a REPEATABLE value that returns 100% of the rows in the table, we can adjust the sample percent to change the cardinality estimate for the scan while still getting every row back, as long as we don’t exceed 99.99999 PERCENT or so.

The query below has an estimate of 1 row:

SELECT ID
FROM dbo.FORCE_ALLOCATION_ORDER_SCAN
TABLESAMPLE SYSTEM (0 PERCENT)
REPEATABLE (899088385);

Query plan:

a3_1_row_estimate

And this query has an estimate of 63 rows:

SELECT ID
FROM dbo.FORCE_ALLOCATION_ORDER_SCAN
TABLESAMPLE SYSTEM (99.99999 PERCENT)
REPEATABLE (899088385);

Query plan:

a3_63_row_estimate

Of course, if the hobt_id for the table ever changes then the results from the query will change. TOP with an OPTIMIZE FOR hint is a good alternative, but TOP PERCENT has the overhead of an added table spool.
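
Here’s a sketch of what I mean by the TOP alternative, using a hypothetical @n variable. The optimizer builds the plan as if only one row comes back, but at runtime every row is returned:

DECLARE @n BIGINT = 9223372036854775807;

SELECT TOP (@n) ID
FROM dbo.FORCE_ALLOCATION_ORDER_SCAN
OPTION (OPTIMIZE FOR (@n = 1));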

Consistent Samples

Suppose I want to create a demo that shows TABLESAMPLE missing all pages with a 50% sample rate against a 20 page table. If I find a REPEATABLE value that works against a table on my machine, it likely won’t work for the reader because their copy of the table will have a different hobt_id. First I need to find a REPEATABLE value that works for my table:

DROP TABLE IF EXISTS dbo.SAMPLE_NO_ROWS;

CREATE TABLE dbo.SAMPLE_NO_ROWS (
	PAGE_TURNER VARCHAR(5000) NOT NULL
);

INSERT INTO dbo.SAMPLE_NO_ROWS WITH (TABLOCK)
SELECT TOP 20 REPLICATE('Z', 5000)
FROM master..spt_values t1;

DECLARE @random_seed BIGINT = 0,
@target_num_rows INT = 0,
@found_rows INT,
@stop INT = 0,
@dynamic_sql NVARCHAR(1000);

BEGIN

SET NOCOUNT ON;

WHILE @stop = 0
BEGIN
	SET @random_seed = @random_seed + 1;

	SET @dynamic_sql = N'SELECT @sampled_row_count = COUNT(*)
	FROM dbo.SAMPLE_NO_ROWS
	TABLESAMPLE SYSTEM (50 PERCENT)
	REPEATABLE(' + CAST(@random_seed AS NVARCHAR(20)) + ')';

	EXECUTE sp_executesql @dynamic_sql, N'@sampled_row_count int OUTPUT', @sampled_row_count = @found_rows OUTPUT;

	IF @found_rows = @target_num_rows
	BEGIN
		SET @stop = 1
	END;
END;

SELECT @random_seed;

END;

After a lengthy search I got a REPEATABLE value of 823223 on my machine. The following query returns a count of 0 rows:

SELECT COUNT(*)
FROM dbo.SAMPLE_NO_ROWS
TABLESAMPLE SYSTEM (50 PERCENT)
REPEATABLE(823223);

With a hobt_id value of 72057595285143552 we can figure out the @salt value:

SELECT 4294967296 - ((hobt_id + 3046928457) % 4294967296)
FROM sys.partitions
where object_id = OBJECT_ID('SAMPLE_NO_ROWS');
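
For reference, here’s a sketch of the arithmetic behind that 3046928457 offset. It just rearranges the first formula to solve for the offset, given the REPEATABLE value of 823223 found by the search and the hobt_id of 72057595285143552:

DECLARE @found_seed BIGINT = 823223,
	@hobt_id BIGINT = 72057595285143552,
	@two_pow_32 BIGINT = 4294967296;

-- find offset such that 4294967296 - ((hobt_id + offset) % 4294967296) = 823223
SELECT (@two_pow_32 - @found_seed - (@hobt_id % @two_pow_32) + @two_pow_32) % @two_pow_32;
-- returns 3046928457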

Now I can reproduce the demo from scratch:

DROP TABLE IF EXISTS dbo.SAMPLE_NO_ROWS;

CREATE TABLE dbo.SAMPLE_NO_ROWS (
	PAGE_TURNER VARCHAR(5000) NOT NULL
);

INSERT INTO dbo.SAMPLE_NO_ROWS WITH (TABLOCK)
SELECT TOP 20 REPLICATE('Z', 5000)
FROM master..spt_values t1;

SELECT 4294967296 - ((hobt_id + 3046928457) % 4294967296)
FROM sys.partitions
where object_id = OBJECT_ID('SAMPLE_NO_ROWS');

-- count of 0
SELECT COUNT(*)
FROM dbo.SAMPLE_NO_ROWS
TABLESAMPLE SYSTEM (50 PERCENT)
REPEATABLE(4294545335); -- use query value

Query plan:

a3_demo_0_rows

Final Thoughts


Many of the things that I’ll blog about won’t be suitable for production use, but this should be considered especially unsuitable. Microsoft could change the undocumented behavior described here at any time. However, until it is changed it can be useful for demos or for further experimentation. Thanks for reading!