How I tune SQL Server queries

I’m a lazy person. I just want to do as little work as possible, and I don’t want to think too much while I work. Yes, I know that sounds terrible and will probably disqualify me as a SQL Server consultant, but in today’s blog posting I want to show you how you can delegate the working and thinking process to the Query Optimizer when you create an indexing strategy for a specific query. Sounds interesting? If yes, then enter my world of index tuning 😉

The problematic query

Let’s have a look at the following query:

DECLARE @i INT = 999

SELECT
	SalesOrderID, 
	SalesOrderDetailID,
	CarrierTrackingNumber, 
	OrderQty, 
	LineTotal
FROM Sales.SalesOrderDetail
WHERE ProductID < @i
ORDER BY CarrierTrackingNumber
GO

As you can see, I use a local variable in combination with an inequality predicate to retrieve some records from the table Sales.SalesOrderDetail. When you run that query and look at the execution plan, you can see some serious problems with it.

The original execution plan

  • SQL Server has to scan the complete Clustered Index of the table Sales.SalesOrderDetail, because there is no supporting Non-Clustered Index. The query needs 1382 logical reads for this scan, and the elapsed time is around 800ms.
  • The Query Optimizer introduced an explicit Filter operator in the query plan, which does a row-by-row comparison to check for qualifying rows (ProductID < @i).
  • Because of the ORDER BY CarrierTrackingNumber, an explicit Sort operator is introduced in the execution plan.
  • The Sort operator spills over to TempDb because of the inaccurate Cardinality Estimation. With an inequality predicate in combination with a local variable, SQL Server estimates a hard-coded 30% of the rows from the base cardinality of the table. In our case the estimate is 36395 rows (121317 * 30%). In reality the query returns 120621 rows, which means the Sort operator has to spill over to TempDb because the requested memory grant is just too small. (A small repro sketch follows this list.)
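
If you want to reproduce these numbers yourself, here is a minimal sketch (the exact figures depend on your AdventureWorks version): switch on the runtime statistics before rerunning the query from the first listing, and compare the actual number of qualifying rows with the hard-coded 30% guess.

SET STATISTICS IO ON
SET STATISTICS TIME ON
GO

-- Actual number of qualifying rows vs. the hard-coded 30% estimate
SELECT COUNT(*) AS ActualRows
FROM Sales.SalesOrderDetail
WHERE ProductID < 999

SELECT CAST(COUNT(*) * 0.30 AS INT) AS EstimatedRows
FROM Sales.SalesOrderDetail
GO

With the numbers from above, that is 120621 actual rows against an estimate of just 36395 rows.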

And now I ask you: how can you improve that query? What are your suggestions? Just take a break and think for a few minutes. How can you improve that query without changing the query itself?

Let’s tune the query!

Of course we have to work on our indexing strategy to make an improvement. Without a supporting Non-Clustered Index, the Clustered Index Scan is the only plan the Query Optimizer can use to run our query. But what is a good Non-Clustered Index for this specific query? Normally I start thinking about possible Non-Clustered Indexes by looking at the search predicate. In our case the search predicate is as follows:

WHERE ProductID < @i

We request rows filtered on the column ProductID. Therefore we want to create a supporting Non-Clustered Index on that column. So let’s create that index.

CREATE NONCLUSTERED INDEX idx_Test ON Sales.SalesOrderDetail(ProductID)
GO

After creating the Non-Clustered Index we have to test our change, so we execute our original query from the first listing again. And guess what? The Query Optimizer is not using the Non-Clustered Index that we just created! We have created a supporting Non-Clustered Index on the search predicate, and the Query Optimizer doesn’t reference it? Normally people give up at this point. But we can hint the Query Optimizer to use the Non-Clustered Index to get a better understanding of *why* it hasn’t chosen the index automatically:

DECLARE @i INT = 999

SELECT
	SalesOrderID, 
	SalesOrderDetailID,
	CarrierTrackingNumber, 
	OrderQty, 
	LineTotal
FROM Sales.SalesOrderDetail WITH (INDEX(idx_Test))
WHERE ProductID < @i
ORDER BY CarrierTrackingNumber
GO

When you now look at the execution plan, you can see the following beast – a parallel plan!

A parallel plan - what a beast!

The query now takes 370244 logical reads! And the elapsed time is almost the same as before, around 800ms. What the heck is going on here? When you look at the execution plan in more detail, you can see that the Query Optimizer has introduced a Bookmark Lookup, because the previously created Non-Clustered Index is not a Covering Non-Clustered Index for this query. The query is over the so-called Tipping Point, because we are retrieving almost all rows with our current search predicate. Therefore it doesn’t make sense to use the Non-Clustered Index in combination with a very expensive Bookmark Lookup.
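
A quick back-of-the-envelope calculation also explains where this enormous number of logical reads comes from (assuming the Clustered Index of Sales.SalesOrderDetail is about three levels deep, which is an assumption on my part that you can verify with sys.dm_db_index_physical_stats):

-- Every row returned by the Non-Clustered Index needs one Bookmark Lookup,
-- and every Bookmark Lookup traverses the Clustered Index from the root page
-- down to a leaf page:
--
--    ~120621 qualifying rows * ~3 logical reads per lookup = ~361863 reads
--
-- Add the reads on the Non-Clustered Index itself, and you end up in the
-- region of the 370244 logical reads we have just measured.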

Instead of thinking about why the Query Optimizer hasn’t chosen the previously created Non-Clustered Index, we have just delegated that thinking process to the Query Optimizer itself and asked it, through the query hint, why that Non-Clustered Index wasn’t chosen automatically. As I said at the beginning: I don’t want to think too much ☺.

To solve that problem with the Non-Clustered Index we have to include the additional requested columns from the SELECT list in the leaf level of the Non-Clustered Index. You can look again at the Bookmark Lookup to see which columns are currently missing in the leaf level:

  • CarrierTrackingNumber
  • OrderQty
  • UnitPrice
  • UnitPriceDiscount
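
UnitPrice and UnitPriceDiscount show up in this list even though they are not part of the SELECT clause, because LineTotal is a computed column that is calculated from OrderQty, UnitPrice, and UnitPriceDiscount. If you want to double-check that, a small lookup against the catalog views does the trick:

SELECT name, definition
FROM sys.computed_columns
WHERE object_id = OBJECT_ID(N'Sales.SalesOrderDetail')
GO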

Let’s recreate that Non-Clustered Index:

CREATE NONCLUSTERED INDEX idx_Test ON Sales.SalesOrderDetail(ProductID)
INCLUDE (CarrierTrackingNumber, OrderQty, UnitPrice, UnitPriceDiscount)
WITH
(
	DROP_EXISTING = ON
)
GO

We have made another change, so we have to test our change again by running our query. But this time we run the query without the query hint, because the Query Optimizer should now choose the Non-Clustered Index automatically. And guess what? The index is now chosen when you look at the execution plan.

Our Non-Clustered Index is now chosen by the Query Optimizer

SQL Server now performs a Seek operation on the Non-Clustered Index, but we still have an explicit Sort operator in the execution plan. And because of the 30% hard-coded Cardinality Estimation the Sort operator still spills over to TempDb. Ouch! Our logical reads have dropped down to 757, but the elapsed time is still around 800ms. What do you do now?

We can now try to put the column CarrierTrackingNumber first in the navigation structure of the Non-Clustered Index. This is the column on which SQL Server performs the Sort operation. When that column comes first in the Non-Clustered Index, our data is physically presorted by it, and therefore the explicit Sort operator should go away. As a positive side-effect there is nothing left to spill over to TempDb, and no operator in the execution plan cares about the wrong Cardinality Estimation anymore. So let’s test that assumption by recreating the Non-Clustered Index again:

CREATE NONCLUSTERED INDEX idx_Test ON Sales.SalesOrderDetail(CarrierTrackingNumber, ProductID)
INCLUDE (OrderQty, UnitPrice, UnitPriceDiscount)
WITH
(
	DROP_EXISTING = ON
)
GO

As you can see from the index definition, we have now physically presorted our data by the columns CarrierTrackingNumber and ProductID. When you rerun the query and have a look at the execution plan, you can see that the explicit Sort operator has gone away, and that SQL Server scans the complete leaf level of the Non-Clustered Index (with a residual predicate for the search predicate).
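
Why is a complete scan of this Non-Clustered Index so much cheaper than the Clustered Index Scan from the beginning? Because its leaf level only stores the index keys and the included columns, it simply consists of far fewer pages. If you want to verify that (just an illustration, not something you need in production), a small sketch against sys.dm_db_index_physical_stats shows the depth and page count of every index on the table:

SELECT
	i.name AS IndexName,
	ps.index_depth,
	ps.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'Sales.SalesOrderDetail'), NULL, NULL, 'LIMITED') AS ps
INNER JOIN sys.indexes AS i ON i.object_id = ps.object_id AND i.index_id = ps.index_id
GO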

And now we have a Missing Index Recommendation!

That plan isn’t that bad! We just need 764 logical reads, and the elapsed time for this query is now down to 600ms. That’s a 25% improvement compared with before! BUT: the Query Optimizer suggests a better Non-Clustered Index to us through the *great* (?) feature of the Missing Index Recommendations! Because we trust the Query Optimizer blindly, we create that recommended Non-Clustered Index:

CREATE NONCLUSTERED INDEX [SQL Server doesn't care about names, why I should care about names?]
ON [Sales].[SalesOrderDetail] ([ProductID])
INCLUDE ([SalesOrderID],[SalesOrderDetailID],[CarrierTrackingNumber],[OrderQty],[LineTotal])
GO

When you now rerun the original query you will see something amazing: the Query Optimizer uses *OUR* previously created Non-Clustered Index, and the Missing Index Recommendation has gone away! You have just created an index that is never used by SQL Server, except for INSERT, UPDATE, and DELETE statements where SQL Server has to maintain it. You have just created *pure* overhead for your database. On the other hand, you have satisfied the Query Optimizer by eliminating the Missing Index Recommendation. But that’s *NOT* the goal: the goal is to create indexes that are *ALSO* used.
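
If you want to see in black and white that the recommended index is dead weight, a small sketch against sys.dm_db_index_usage_stats shows how often each index on the table has actually been used since the last restart of SQL Server, and afterwards we can drop the unused index again:

SELECT
	i.name AS IndexName,
	s.user_seeks,
	s.user_scans,
	s.user_lookups,
	s.user_updates
FROM sys.indexes AS i
LEFT OUTER JOIN sys.dm_db_index_usage_stats AS s ON s.object_id = i.object_id
	AND s.index_id = i.index_id
	AND s.database_id = DB_ID()
WHERE i.object_id = OBJECT_ID(N'Sales.SalesOrderDetail')
GO

-- The recommended index only accumulates user_updates (maintenance overhead),
-- so we get rid of it again
DROP INDEX [SQL Server doesn't care about names, why I should care about names?]
ON Sales.SalesOrderDetail
GO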

Conclusion: never, ever trust the Query Optimizer!

Summary

Today’s blog posting was a little bit controversial, but I wanted to show you how the Query Optimizer can help you when you work on your indexing strategy, and how it can also fool you. Therefore it is very, very important that you make only minor adjustments, and that you immediately test each change by running your query again. And when you use a Missing Index Recommendation from the Query Optimizer, please think about whether the recommendation is a good one. As I have said: I don’t want to think. Ouch...


Thanks for your time,

-TheLazyPersonNamedKlaus

12 thoughts on “How I tune SQL Server queries”

  1. Dimitri Choustov

    Names like [SQL Server doesn’t care about names, why I should care about names?] or, even worse, [idx_Test] make me… They mean not only more thinking but more work later on.
    Klaus, please give a solid example and apply a proper naming convention in your examples.

    Enjoyed anyway 😉

  2. Christian Gräfe

    Hello Klaus,

    a great article that made me smile the whole way through.
    Thank you very much.

    Regards,
    Christian

  3. Very nice article, Klaus. Good “revelation” and great explanations.

    Just to add to the article, the probable reason (I can’t confirm it because I don’t have the execution plans) why the INDEX SEEK produced by the *great* recommendation of the missing index feature caused so many reads is that the INDEX SEEK was probably executed thousands of times (likely once for each row), and that means a trip through multiple pages in the B-TREE and a page read at the leaf level for every row. Seeks for large queries like this one are only really valuable if a single seek is done to find the beginning of the range to be returned, and the engine then scans through the leaf level (reading all the rows on each page instead of just one at a time) until it reaches the end of the range.

    And, I love the bit of humor and irony in an otherwise straight-laced article. It’s like finding a Silver coin in your change jar. It can’t help but bring a smile to your face.

    Well Done.

    –Jeff Moden

  4. Chris Valdivia

    Hi Klaus –

    I came across this article via a SQL blog I often read. Thank you! I’m just starting to get acquainted with some of the more advanced areas of SQL work, including things like performance plans, tuning, etc. Do you have any reading suggestions (e.g. websites, articles, etc.) that explain in a high-level overview manner the basics of such topics? I’m quite familiar with table design, stored procedures, user functions, table parameters, etc., but not so much with optimizing queries for performance. I’d love some suggestions on a few good places to start.

    Thanks again!
    — Chris

  5. Hi Klaus,
    you said: “How can you improve that query without changing the query itself?”

    But now I want to know: if you were allowed to change the query itself, what would be the right way to change it?
