Are your clustering keys good enough?

Why some of your clustered indexes create performance problems, and how to check for them.

Many physical database implementations suffer performance problems because of a wrong clustering key. If the clustering key is not chosen appropriately, it can lead to:

  • a high level of page split events
  • a high level of index fragmentation
  • an increase in overall object size – of the clustered index and all nonclustered indexes of the table

The results are slow execution of insert statements, overall performance degradation when accessing the affected tables, and growth in the size of the table and of the database as a whole.

One of the important steps of every performance review is an index assessment. It includes steps such as checking for duplicate indexes and statistics, removing unused indexes, checking index usage, missing indexes, index status and fragmentation (explaining all the steps of an index assessment is a subject for a separate post) and, last but not least, paying special attention to clustered index keys. Actually, the right time for choosing the appropriate clustering key is while designing and implementing the physical database model; changing it afterwards is more complicated, but not impossible. Of course you can live with some not-so-well-chosen clustering keys. I am not saying that you should change all your clustering keys – you have to identify only those that cause performance problems and plan to change them.

The goal of this post is to describe the possible damage caused by wrong clustering keys, to give hints for finding problematic clustering keys in your current database and, last but not least, to give the main recommendations on choosing clustering keys. The code and the principles described in this post are valid for all recent SQL Server versions – 2008, 2008 R2 and 2012.

The importance and the role of the clustering key

When you define a Primary Key in SQL Server, it also becomes the Clustering Key by default. If you choose a business key as a primary key, following common relational database design practice, you could put your table in trouble. According to relational database theory, the Primary Key is the field that uniquely identifies the records in a table. The role of the clustering key is a little different from the role of the Primary Key. The clustering key:

  • is “responsible” for the physical arrangement of records in the table. It is the field by which records are placed in the respective data pages of the table. A new record needs to be placed inside a specific page according to the value of its clustering key field (see the leaf level data pages in the next diagram)
  • is part of the index records in the index pages (the non-leaf index levels on the diagram) of the clustered index B-tree structure
  • is part of every nonclustered index record, and as such it is part of every nonclustered index of the table. It participates as a lookup field for searching through the clustered index

With that role description of the clustering key in mind, it is easier to understand where and why the problems occur. Let’s face them.

Page splits – the pain of the wrong clustering key

A page split is an event that occurs when a record is inserted into a full clustered index page (mostly a leaf level page). The records are stored in order, which means that each record has to be inserted into a specific page. If there is not enough space in that page, it is divided in two: roughly half of the records are moved from the old page to a new one, and then the new record is saved.

 

As shown in the diagram, the page split occurs when inserting a new employee named Frank. The record needs to be inserted in page 1:334, but because there is not enough space, a page split occurs and a new page is added – page 1:879. Frank’s record is inserted into the newly created page.

Two things are important here. First, if there is enough room in the data page, the page split does not occur and the statement performs a regular insert into the index. So lowering the fill factor can serve as a workaround against page split events. Second, the page split event is expensive; it leads to two main problems:

  • it makes the respective insert very heavy because of the additional logging
  • it causes clustered index fragmentation, both logical and physical
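As a sketch of the fill factor workaround mentioned above (the index and table names here are placeholders, not objects from this post):

```sql
--Rebuild the clustered index leaving 20% free space in each leaf page,
--so new rows have room and mid-page splits are postponed.
--PK_MyTable and dbo.MyTable are hypothetical names.
ALTER INDEX PK_MyTable ON dbo.MyTable
REBUILD WITH (FILLFACTOR = 80);
```

Keep in mind this only postpones splits: once the free space fills up again, the splits resume until the next rebuild.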

Logging

A page split makes an insert heavy due to additional logging. The page split logging consists of many log records for saving the new page allocation, moving (deleting) records, inserting new index records at the non-leaf level and updating the upper levels of the B-tree structure. Here is an example of the logging of an insert statement with a page split, compared to the logging of a simple insert statement.

--Creating MyUniqueTable with GUID values for the clustering key field
CREATE TABLE MyUniqueTable
(UniqueColumn UNIQUEIDENTIFIER DEFAULT NEWID() PRIMARY KEY,
Characters CHAR(100))
GO
--Inserting rows into MyUniqueTable
DECLARE @i INT
SET @i=1

WHILE @i<80
BEGIN
INSERT INTO MyUniqueTable VALUES (NEWID(), 'def')
SET @i=@i+1
END

--Creating MyUniqueTable2 with ever-increasing values of the clustering key field (int with auto increment)
CREATE TABLE MyUniqueTable2
(UniqueColumn INT IDENTITY(1,1) PRIMARY KEY,
Characters CHAR(100)
)
GO

--Loading it with some data
DECLARE @i INT
SET @i=1

WHILE @i<80
BEGIN
INSERT INTO MyUniqueTable2 VALUES ('def')
SET @i=@i+1
END
GO

--Let's insert a record in the first table and simulate a page split:
INSERT INTO MyUniqueTable VALUES (NEWID(), 'def')

--And check the log records
SELECT operation, context, [Log Record Fixed Length],
[Log Record Length], AllocUnitId, AllocUnitName
FROM fn_dblog(NULL, NULL)
ORDER BY [Log Record Length] DESC;

The following log records describe the insert and the page split. The definitive proof that there was a page split event is the LOP_DELETE_SPLIT operation. This operation appears in the log only when records are deleted from a page because they now exist (are saved) in a new page allocated by the page split operation:

There are 27 log records for inserting one row! The reason is the logging of the page split event, which occurs because the value randomly generated by NEWID() has to be inserted into a specific page of the clustered index.

Let’s insert a record into the second table:

INSERT  INTO MyUniqueTable2 VALUES ('def')

Check log records:

SELECT operation, context, [Log Record Fixed Length],
[Log Record Length], AllocUnitId, AllocUnitName
FROM fn_dblog(NULL, NULL)
ORDER BY [Log Record Length] DESC;

This is all the logging of the insert statement without additional events:

There are 6 log records, compared to 27 in the first example.

When you insert new records into an index that is not ordered by an ever-increasing key field, you can potentially get a page split on every insert (at least until the number of leaf level pages doubles, creating enough room for growth until the next split), which slows down performance.
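If you want a rough numeric indicator of this effect, the sys.dm_db_index_operational_stats DMV exposes allocation counters. This is a sketch; note the counters are cumulative only since the last server restart and include “good” end-of-index allocations as well as splits:

```sql
--Leaf-level page allocations per index; a high count relative to the number
--of rows inserted hints at frequent page splits.
SELECT OBJECT_NAME(ios.object_id) AS TableName,
       ios.index_id,
       ios.leaf_allocation_count,
       ios.nonleaf_allocation_count
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) ios
WHERE ios.object_id IN (OBJECT_ID('MyUniqueTable'), OBJECT_ID('MyUniqueTable2'));
```

This counter cannot distinguish a good end-of-index split from a bad mid-page split; the Extended Events approach described later in this post is needed for that.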

Fragmentation

The other main result of a page split event is index fragmentation. The new page that is created is not contiguous with the previous leaf level page; it is created somewhere else in the file. This increases logical fragmentation. The page density of the pages involved in the page split decreases, meaning the leaf level data pages become physically fragmented. This can create performance problems on reads. In general, a physically fragmented index contains more pages at its leaf level than a non-fragmented one. Here is an example:

Inserting rows in MyUniqueTable:

DECLARE @i INT
SET @i=1

WHILE @i<10000
BEGIN
INSERT INTO MyUniqueTable VALUES (NEWID(), 'def')
SET @i=@i+1
END

Let’s look at the fragmentation of the table MyUniqueTable:

SELECT  object_id,
index_type_desc,
index_level,
avg_fragmentation_in_percent,
page_count,
avg_page_space_used_in_percent,
record_count,
avg_record_size_in_bytes
FROM sys.dm_db_index_physical_stats (db_id(),  OBJECT_ID('MyUniqueTable'), NULL, NULL, 'DETAILED')

Let’s do the same with MyUniqueTable2, inserting the same number of rows and checking the fragmentation.
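Repeating the pattern above for MyUniqueTable2, the insert loop and the DMV check look like this:

```sql
--Load MyUniqueTable2; the identity key always appends at the end of the index
DECLARE @i INT
SET @i=1

WHILE @i<10000
BEGIN
INSERT INTO MyUniqueTable2 VALUES ('def')
SET @i=@i+1
END

--Check the fragmentation of MyUniqueTable2
SELECT object_id,
index_type_desc,
index_level,
avg_fragmentation_in_percent,
page_count,
avg_page_space_used_in_percent,
record_count,
avg_record_size_in_bytes
FROM sys.dm_db_index_physical_stats (db_id(), OBJECT_ID('MyUniqueTable2'), NULL, NULL, 'DETAILED')
```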

Obviously the clustered index key makes the difference in logging and fragmentation. The fragmentation is 97% using a GUID vs. 12.5% using identity; the page space used (page density) is 77% vs. 98%. What’s wrong with a GUID clustering key? It is neither narrow nor ever increasing – it doesn’t follow the insert order. Using a field that is ever increasing (such as an int identity) as the clustering index key makes things much better.

How to find the clustered indexes where page splits occur most often

You can become suspicious of specific indexes if you regularly monitor the level of index fragmentation. You can apply the following rule to select the indexes that need further checking: an index that experiences many page splits fragments faster than other indexes after a rebuild. So check which indexes fragment fastest, and then examine their clustering keys.

Of course you can be more specific and find the indexes that actually generate page splits. You can monitor and catch the page split event using Extended Events. The problem with the page split event is that it also fires when a new page is added to the leaf level of an ever-increasing clustered index. Even adding the extra information about the page split type is not enough to isolate the bad page splits occurring in the middle of the data pages. That is why I wouldn’t go after the page split event. I recommend the approach described by Jonathan Kehayias in the following post – http://www.sqlskills.com/blogs/jonathan/tracking-problematic-pages-splits-in-sql-server-2012-extended-events-no-really-this-time/

It allows tracking down the indexes experiencing the worst mid-page splits in the database. The approach is to monitor the LOP_DELETE_SPLIT operation (operation = 11) of the sqlserver.transaction_log event by creating an Extended Events session. The LOP_DELETE_SPLIT log record is saved in the transaction log when records are deleted from a page that is part of a bad page split, like page 1:334 on the diagram. This log record is specific to and occurs only in this kind of delete. Processing the alloc_unit_id field from the event data points to the specific index.
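Condensed from Jonathan’s post, the session looks roughly like this (the database_id value is a placeholder you must replace with your own database’s id):

```sql
--Track mid-page splits by catching LOP_DELETE_SPLIT (operation = 11)
--log records, bucketing them by alloc_unit_id in a histogram target.
CREATE EVENT SESSION [TrackPageSplits] ON SERVER
ADD EVENT sqlserver.transaction_log(
    WHERE operation = 11        --LOP_DELETE_SPLIT
      AND database_id = 5)      --placeholder: use the value of DB_ID('YourDatabase')
ADD TARGET package0.histogram(
    SET filtering_event_name = 'sqlserver.transaction_log',
        source = 'alloc_unit_id',
        source_type = 0);       --0 = bucket on an event data field
GO
ALTER EVENT SESSION [TrackPageSplits] ON SERVER STATE = START;
```

Reading the histogram target and joining alloc_unit_id to sys.allocation_units and sys.partitions then names the splitting indexes; the full processing query is in the linked post.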

Sizing problems

The clustered index key is part of every nonclustered index record. For big tables with many nonclustered indexes, a large clustered index key can increase the overall size of the nonclustered indexes by 3-4 times or even more, making them less efficient. There is a great post by Kimberly Tripp on what a larger clustering key costs, with calculation examples – http://www.sqlskills.com/blogs/kimberly/how-much-does-that-key-cost-plus-sp_helpindex9/

If you suspect that there are clustered indexes with large clustering keys in your database, you can analyze the key fields of the clustered indexes using the following script:

--analyze clustered index keys
SELECT t.object_id AS ObjectID, s.name AS SchemaName, t.name AS TableName, i.name AS IdxName,
       c.name AS ColName, ic.key_ordinal, c.is_identity,
       c.max_length, st.name AS DataType
FROM sys.tables t
INNER JOIN sys.schemas s ON t.schema_id = s.schema_id
INNER JOIN sys.indexes i ON i.object_id = t.object_id
INNER JOIN sys.index_columns ic ON ic.object_id = i.object_id
                               AND ic.index_id = i.index_id
INNER JOIN sys.columns c ON c.object_id = ic.object_id
                        AND c.column_id = ic.column_id
INNER JOIN sys.types st ON st.user_type_id = c.user_type_id --user_type_id avoids duplicate rows for alias types
WHERE i.index_id = 1           --clustered index
AND i.is_primary_key = 1
AND ic.key_ordinal >= 1
ORDER BY SchemaName, TableName, IdxName, ic.key_ordinal

Executing the code in AdventureWorks2012 database gives the following results:

This is a list of all the clustered indexes that were created as a result of defining primary key constraints (you can remove the PK filter). If the clustering key is composite, there is a row for each key field, and key_ordinal shows the position of the field inside the composite key. To find the overall size of the clustering key, you need to sum the max_length values per ObjectID using SUM and a GROUP BY clause.

You can easily order the result by key length to show the indexes with the largest clustering keys. It is a very easy “health check” of your clustered indexes that you can perform at any time.
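That summarization can be sketched on top of the query above (same filters; note that max_length is reported in bytes and is -1 for (max) types, so treat such rows separately if you have them):

```sql
--Total clustering key size per table, widest keys first
SELECT t.object_id AS ObjectID, s.name AS SchemaName, t.name AS TableName,
       SUM(c.max_length) AS ClusteringKeyBytes,
       COUNT(*) AS KeyColumns
FROM sys.tables t
INNER JOIN sys.schemas s ON t.schema_id = s.schema_id
INNER JOIN sys.indexes i ON i.object_id = t.object_id
INNER JOIN sys.index_columns ic ON ic.object_id = i.object_id
                               AND ic.index_id = i.index_id
INNER JOIN sys.columns c ON c.object_id = ic.object_id
                        AND c.column_id = ic.column_id
WHERE i.index_id = 1           --clustered index
  AND ic.key_ordinal >= 1      --key columns only
GROUP BY t.object_id, s.name, t.name
ORDER BY ClusteringKeyBytes DESC
```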

Clustering key rules of choice

The clustering key should be chosen carefully, especially for tables with a high number of rows and indexes, and for tables that receive a lot of inserts. Some important criteria for choosing the right clustering key are:

  • Ever Increasing
  • Narrow
  • Relatively small data type
  • Unique
  • Static

It is common to choose an identity column for the primary key/clustered index key, as it meets all those criteria. I am not saying that all of your keys should be identities. Identity has its own drawbacks, for example: merging data from different sources, gaps in the sequence, additional logic for preserving identity generation per session, hot spots for inserts generating PAGELATCH contention, etc. So it is not appropriate to choose identity in every case. As this is an interesting topic, it is probably a good idea for a next post 🙂

Anyway, you should consider your clustering keys carefully. And if they were already chosen by somebody else and you just administer the database, you can check whether the clustering keys cause performance problems, find the ones that do, and plan the changes or communicate the problems properly.

Thanks for reading 🙂