Possible Deadlocks in the Isolation Level Repeatable Read

In today’s blog posting I want to talk about 2 kinds of deadlocks that can occur when you run transactions in the Transaction Isolation Level Repeatable Read. When you set your transaction to the isolation level Repeatable Read, SQL Server holds the Shared Locks acquired for reading data until the end of the transaction (COMMIT or ROLLBACK). If you afterwards try to change the data you have read (through an UPDATE statement), different types of deadlocks can occur when the same transaction runs concurrently in multiple sessions. Let’s have a more detailed look at 2 different kinds of such deadlocks.

Cycle Deadlock

A Cycle Deadlock is one of the most common deadlocks – you just access resources (like various tables) in a different order, and in the end every query waits for another one. With the isolation level Repeatable Read it is even possible to cause a Cycle Deadlock when you work with just one table across multiple transactions. Let’s have a look at the T-SQL code of the first transaction.
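Since the original listings aren’t reproduced here, the following is a minimal sketch of how the first transaction could look. The table name Table1, its columns, and the WAITFOR DELAY (added only to make the timing reproducible) are illustrative assumptions, not the original source code.

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
BEGIN TRANSACTION

-- Read the first data range: the acquired Shared Locks are held
-- until the end of the transaction
SELECT * FROM Table1 WHERE Column1 BETWEEN 1 AND 2

-- Give the second session time to acquire its Shared Locks
WAITFOR DELAY '00:00:05'

-- Try to update the second data range: the Update Lock has to wait
-- for the Shared Lock held by the other session
UPDATE Table1 SET Column2 = 'Updated' WHERE Column1 BETWEEN 4 AND 5

COMMIT TRANSACTION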

And here is the T-SQL code of the second transaction.
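A sketch under the same assumptions – note that this transaction reads and updates the two data ranges in the opposite order:

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
BEGIN TRANSACTION

-- Read the second data range
SELECT * FROM Table1 WHERE Column1 BETWEEN 4 AND 5

WAITFOR DELAY '00:00:05'

-- Try to update the first data range, which the other session has already read
UPDATE Table1 SET Column2 = 'Updated' WHERE Column1 BETWEEN 1 AND 2

COMMIT TRANSACTION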

As you can see from both listings, 2 data ranges are read and finally updated. If both transactions run at the same time, a Cycle Deadlock can occur, because the data ranges are accessed in the wrong order.

Deadlock in Repeatable Read!

When SQL Server starts to run the UPDATE statement, the necessary Update Lock (U) can’t be acquired, because the Update Lock isn’t compatible with the Shared Lock already granted to the other session. Finally both UPDATE statements are waiting for the Shared Lock acquired in the other session – a classical Cycle Deadlock with just one table! In this case you have to rewrite your code to resolve this specific deadlock – just access your data ranges in the same order.

Read/Update Deadlock

A second type of deadlock can occur with the isolation level Repeatable Read if you read data with the intention to update it later. Let’s have a look at the T-SQL code of a simple transaction.
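Again, the original code isn’t reproduced here; a minimal sketch (with the same illustrative names and an artificial delay to force the timing) could look like this:

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
BEGIN TRANSACTION

-- Read the data with the intention to update it later:
-- the Shared Lock is held until the end of the transaction
SELECT * FROM Table1 WHERE Column1 = 1

WAITFOR DELAY '00:00:05'

-- Try to update the same data: the Update Lock can't be acquired,
-- because the other sessions still hold their Shared Locks
UPDATE Table1 SET Column2 = 'Updated' WHERE Column1 = 1

COMMIT TRANSACTION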

To cause this type of deadlock you just need to run the transaction across multiple sessions. You don’t even need to access different data ranges, as you can see from the code. Let’s try to explain what happens here. When this transaction runs concurrently across multiple sessions, all sessions can acquire the Shared Locks for reading the data.

Because in Repeatable Read you hold the Shared Locks until the end of the transaction (COMMIT or ROLLBACK), the subsequent UPDATE statement can’t acquire the necessary Update Locks: they are blocked by the Shared Locks already acquired in the other sessions. Deadlock!

Deadlock in Repeatable Read!

This type of deadlock can be avoided by using the UPDLOCK hint in the SELECT statement to acquire an Update Lock in advance.
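Applied to the sketch from above, only the SELECT statement changes:

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
BEGIN TRANSACTION

-- The UPDLOCK hint acquires an Update Lock instead of a Shared Lock
SELECT * FROM Table1 WITH (UPDLOCK) WHERE Column1 = 1

UPDATE Table1 SET Column2 = 'Updated' WHERE Column1 = 1

COMMIT TRANSACTION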

Therefore only one SELECT statement at a time can acquire the necessary Update Locks at the beginning (an Update Lock is not compatible with another Update Lock), continue with the UPDATE statement, and finally release the acquired locks. Afterwards the second transaction can continue with its SELECT and UPDATE statements.

You have to use the same technique here that SQL Server uses internally in an UPDATE execution plan: when you read data with the intention to update it later, you have to acquire an Update Lock during the reading phase to prevent this type of deadlock in the isolation level Repeatable Read.

Summary

As you have seen in this blog posting, it is very easy to cause various types of deadlocks if you use the isolation level Repeatable Read. Therefore you have to be very careful when you write transactions in this specific isolation level.

Thanks for your time,

-Klaus

Announcing the SQLpassion workshop schedule for 2015

Today I’m very happy and proud to announce the SQLpassion workshop schedule across Europe for 2015. The following locations are currently scheduled:

Working hard to make your future life easier!

I’m really happy to finally include a workshop in Frankfurt/Germany in the schedule, because I received requests for it over and over again. I can listen :-) . In addition to the currently scheduled dates I am also working on planning additional workshops in the following locations:

  • SQL Server Performance Tuning Workshop in London/United Kingdom
  • SQL Server Query Tuning Workshop in Antwerpen/Belgium
  • SQL Server Query Tuning Workshop in Stockholm/Sweden

If you are interested in attending a workshop in one of these locations, please get in touch with me.

You can also learn from past attendees how it feels when you get the SQLpassion in your veins.

Thanks for your time,

-Klaus

SQL Server Quickie #20 – CXPACKET Waits

Today I have uploaded the 20th SQL Server Quickie to YouTube. This time I’m talking about CXPACKET Waits in SQL Server.

Thanks for watching!

-Klaus

Deadlocks caused by Missing Indexes in SQL Server

In today’s blog posting I want to demonstrate how missing indexes on tables can cause deadlocks in SQL Server. To prepare the test scenario, the following code creates 2 tables and populates each of them with 4 records.
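The original script isn’t reproduced here, so the following sketch uses illustrative table and column names. Note that no index is created, so both tables remain heap tables:

CREATE TABLE Table1
(
    Column1 INT,
    Column2 INT
)
GO

CREATE TABLE Table2
(
    Column1 INT,
    Column2 INT
)
GO

INSERT INTO Table1 (Column1, Column2) VALUES (1, 1), (2, 2), (3, 3), (4, 4)
INSERT INTO Table2 (Column1, Column2) VALUES (1, 1), (2, 2), (3, 3), (4, 4)
GO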

Before I show you the actual deadlock repro, the following code shows a simple UPDATE statement that updates a specific row in the first table.
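For example (using the schema sketched above):

UPDATE Table1 SET Column1 = Column1 + 1 WHERE Column2 = 1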

Because we have no index defined on Column2, the Query Optimizer has to choose a Table Scan operator in the execution plan to find the qualifying rows for our UPDATE statement:

Execution Plan for an UPDATE statement

This means that SQL Server has to scan the complete heap table to find the one row we want to update. In that case SQL Server locks the first row in the table with an Exclusive Lock. When you then run a SELECT statement in a different session that references a “later occurring” row in the heap table, the Table Scan operator will block, because it first has to read all the “earlier occurring” rows of the heap table to get to the row that the query has logically requested.

A Table Scan by default means that you have to scan the whole table, and therefore you have to acquire a Shared Lock on every record – even on records that you have not logically requested. This can now lead to a deadlock if the 2 sessions access the 2 tables in a different order, and both write to and read from both tables. The following code shows the transaction of the first session:
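A sketch with the illustrative names from above; the WAITFOR DELAY is only there to force the timing:

BEGIN TRANSACTION

-- The updated row in Table1 is locked with an Exclusive Lock
UPDATE Table1 SET Column1 = Column1 + 1 WHERE Column2 = 1

WAITFOR DELAY '00:00:05'

-- The Table Scan on Table2 blocks on the row that is exclusively
-- locked by the second transaction
SELECT * FROM Table2 WHERE Column2 = 3

COMMIT TRANSACTION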

And the following listing shows the code from the 2nd transaction:
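And the mirrored sketch for the second session:

BEGIN TRANSACTION

-- The updated row in Table2 is locked with an Exclusive Lock
UPDATE Table2 SET Column1 = Column1 + 1 WHERE Column2 = 1

WAITFOR DELAY '00:00:05'

-- The Table Scan on Table1 blocks on the row that is exclusively
-- locked by the first transaction
SELECT * FROM Table1 WHERE Column2 = 3

COMMIT TRANSACTION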

As you can see from both transactions, the two tables are accessed in a different order. If the timing is right, running both transactions at the same time can lead to a deadlock. Imagine the following execution sequence:

  1. The first transaction runs the UPDATE statement on Table1.
  2. The second transaction runs the UPDATE statement on Table2.
  3. The first transaction runs the SELECT statement on Table2. This SELECT statement will block, because the Table Scan operator wants to acquire a Shared Lock on a row that is already exclusively locked by the second transaction.
  4. The second transaction runs the SELECT statement on Table1. This SELECT statement will block, because the Table Scan operator wants to acquire a Shared Lock on a row that is already exclusively locked by the first transaction.

The following picture illustrates this deadlock situation.

Deadlock caused by a missing index

Both transactions are now blocking each other, and therefore you have caused a deadlock in SQL Server. In that case the Deadlock Monitor background process kicks in and performs a rollback of the cheapest transaction (based on the number of bytes that the transaction has written to the Transaction Log).

You can resolve this deadlock very easily by providing an index on Column2 in both tables. In that case SQL Server can perform a Seek operation to find the qualifying rows, and can therefore bypass already locked rows in the leaf level of the index when you perform the SELECT statement:
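For the illustrative schema from above:

CREATE NONCLUSTERED INDEX idx_Table1_Column2 ON Table1 (Column2)
CREATE NONCLUSTERED INDEX idx_Table2_Column2 ON Table2 (Column2)
GO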

The following picture illustrates how the locking situation looks now:

An index provides an alternative data access path

With a Seek operation you can just bypass locked rows in the leaf level of the index, and you can avoid the deadlock that we have been talking about. Therefore it’s always very important that you have a detailed look at your indexing strategy when you see deadlock situations in your database. Indexing is one of the most important things in SQL Server – always remember that!

Thanks for your time,

-Klaus

Recompilations, Part 2

In last week’s blog posting I talked about how the use of temp tables can cause recompilations in SQL Server. As mentioned at the very end of that posting, today I want to concentrate a little bit more on Table Variables, which can avoid the expensive overhead of recompilations. Let’s have a more detailed look at them.

Table Variables

A table variable is always scoped to a batch submitted to SQL Server. When you try to reference a table variable outside of that batch, you will get back an error message. That’s the first big difference compared to temp tables. The following code shows how you can create and use table variables – again in the context of a simple stored procedure.
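A sketch of such a procedure – the names are illustrative, and the shape mirrors the temp table procedure from last week:

CREATE PROCEDURE TestProcTableVariable
AS
BEGIN
    DECLARE @TempTable TABLE
    (
        ID INT PRIMARY KEY,
        FirstName CHAR(100),
        LastName CHAR(100)
    )

    INSERT INTO @TempTable (ID, FirstName, LastName)
    VALUES (1, 'Klaus', 'Aschenbrenner')

    SELECT * FROM @TempTable
END
GO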

The nice thing about table variables is that they don’t cause any recompilations. When you execute the stored procedure and monitor recompilations with the Extended Events session from last week, you will see that no recompilations happen.

Why is this possible with table variables? First of all a table variable is just a variable – as the name implies. This means that you don’t change your database schema when you define your table variable. Therefore recompilations based on database schema changes can be avoided. And in addition a table variable has no statistics. Therefore statistics don’t have to be maintained, and the second cause of recompilations just disappears.

Both things sound great at first, but we will find huge drawbacks when we look further. Let’s have a look at them. A table variable is just a variable – almost! A table variable is still persisted in TempDb! Yes, you read that correctly: when you work with a table variable, physical I/O in TempDb is involved. This is very easy to prove with the Dynamic Management View sys.dm_db_session_space_usage, which tracks TempDb usage on the session level. Let’s have a look at the following code:
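A minimal sketch – the exact page counts you get back depend on the row size and the number of rows you insert:

-- Declare a table variable, insert a few wide rows, and let it go out of scope
DECLARE @TempTable TABLE
(
    ID INT PRIMARY KEY,
    SomeData CHAR(4000)
)

INSERT INTO @TempTable (ID, SomeData)
VALUES (1, 'x'), (2, 'x'), (3, 'x'), (4, 'x'),
       (5, 'x'), (6, 'x'), (7, 'x'), (8, 'x')
GO

-- Check the TempDb usage of the current session
SELECT user_objects_alloc_page_count, user_objects_dealloc_page_count
FROM sys.dm_db_session_space_usage
WHERE session_id = @@SPID
GO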

As you can see from the output of the DMV, 5 pages were allocated in TempDb for the table variable. Because the table variable is already out of scope, these 5 pages are already marked for deallocation. Be aware of that side-effect!

Table variables also have no statistics. Therefore no recompilations happen here. But as a side-effect the Query Optimizer always estimates just 1 row. And that can be very, very bad. Imagine you join your table variable to another table in your database. In that case the Query Optimizer will introduce the Nested Loop Join operator in the execution plan, and will reference the table variable as the outer table, because of the estimation of 1 row. If in reality 10000 or even more rows are returned, the whole execution plan isn’t really optimal anymore. Let’s have a look at the following example:
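A sketch of such a query, assuming the AdventureWorks sample database:

DECLARE @Orders TABLE
(
    SalesOrderID INT PRIMARY KEY,
    CustomerID INT
)

-- Tens of thousands of rows end up in the table variable...
INSERT INTO @Orders (SalesOrderID, CustomerID)
SELECT SalesOrderID, CustomerID FROM Sales.SalesOrderHeader

-- ...but the Query Optimizer estimates just 1 row for it and makes it
-- the outer table of a Nested Loop Join
SELECT o.SalesOrderID, d.ProductID
FROM @Orders o
INNER JOIN Sales.SalesOrderDetail d ON o.SalesOrderID = d.SalesOrderID
GO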

When you have a more detailed look at the Clustered Index Scan operator, you can see that the cardinality estimation is here just 1 row and is nowhere near the actual number of rows returned by the operator.

Wrong Cardinality Estimation

You can fix that wrong cardinality estimation by using a Statement-Level Recompilation that was introduced back with SQL Server 2005.
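Applied to the join from above:

SELECT o.SalesOrderID, d.ProductID
FROM @Orders o
INNER JOIN Sales.SalesOrderDetail d ON o.SalesOrderID = d.SalesOrderID
OPTION (RECOMPILE)

With the statement-level recompilation the statement is compiled after the table variable has been populated, so the Query Optimizer can see the actual row count and produce a more suitable plan.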

But this approach is a little bit counter-productive, because you again introduce a recompilation overhead that you wanted to avoid with the use of table variables…

Summary

With table variables you can avoid the overhead of recompilations in SQL Server – with some side-effects. The largest one is the wrong cardinality estimation of just 1 row. Therefore table variables only make sense when you deal with a small number of rows, because then the wrong cardinality estimation doesn’t really matter and doesn’t affect your performance. But with a larger set of rows it would hurt your performance because of the inefficient execution plan that is generated.

As a general rule of thumb you should use temp tables for large amounts of data, and table variables for small amounts of data. But you really have to benchmark it with your own workload to decide when a temp table and when a table variable makes sense.

Thanks for your time,

-Klaus

Recompilations, Part 1

This week and next week I want to talk about a very important performance tuning topic in SQL Server: Recompilations. As you will see, they can happen even when you execute very simple stored procedures. Today I want to lay out the foundations of recompilations in SQL Server and why they have to happen, and next week I will show you how you can avoid them by rewriting your stored procedures in a different way.

Recompilations – what?

Before I talk about the specifics of recompilations in SQL Server, I want you to have a more detailed look at the following simple stored procedure.
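The original procedure isn’t reproduced here; it was along these lines (the names are illustrative):

CREATE PROCEDURE TestProcTempTable
AS
BEGIN
    CREATE TABLE #TempTable
    (
        ID INT PRIMARY KEY,
        FirstName CHAR(100),
        LastName CHAR(100)
    )

    INSERT INTO #TempTable (ID, FirstName, LastName)
    VALUES (1, 'Klaus', 'Aschenbrenner')

    SELECT * FROM #TempTable

    DROP TABLE #TempTable
END
GO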

As you can see from the code, there is nothing special about it. In the first step a temp table is created. Afterwards some records are inserted into the temp table, and finally the inserted data is retrieved with a simple SELECT statement. Code that you have already written hundreds – maybe thousands – of times in SQL Server. Let’s create in the first step an Extended Events session that tracks recompilations. The following code shows the definition of this event session.
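A minimal sketch of such a session, based on the sql_statement_recompile event and a ring buffer target (the session name is illustrative):

CREATE EVENT SESSION TrackRecompilations ON SERVER
ADD EVENT sqlserver.sql_statement_recompile
(
    ACTION (sqlserver.sql_text)
)
ADD TARGET package0.ring_buffer
GO

ALTER EVENT SESSION TrackRecompilations ON SERVER
STATE = START
GO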

When you now execute the stored procedure, you will see that 2 recompilations have happened during its execution.

2 recompilations for a simple stored procedure

A recompilation means that SQL Server recompiled the execution plan during the execution itself. This introduces additional CPU overhead that will ultimately decrease the throughput of your workload. But for now the real question is: why did these recompilations happen?

SQL Server mainly performs recompilations under the following 2 circumstances:

  • Schema Changes
  • Statistics Updates

Because of these 2 circumstances we have triggered the recompilations in the previously executed stored procedure. When you look again at the stored procedure definition, you can see that in the first step we created a temp table. When you create a temp table in TempDb, you change your database schema, and that is why the first recompilation happened.

And after the creation of the temp table, you have inserted some rows into it. Therefore SQL Server had to update the underlying statistics object of the Clustered Index of the temp table – the Clustered Index was created through the PRIMARY KEY constraint by SQL Server. A simple stored procedure caused 2 recompilations in SQL Server – wow!

Summary

Temp tables can have a huge performance overhead caused by recompilations. On the other hand, temp tables have accurate statistics that help the Query Optimizer to produce better performing execution plans. Therefore temp tables only make sense when you deal with a larger set of data. For a smaller set of data the CPU overhead caused by the recompilations would be larger than the performance gain through the statistics.

In next week’s blog posting I will introduce Table Variables to you, and we will see how they can avoid the nasty side-effects of recompilations – and what other performance problems they can introduce. Stay tuned.

Thanks for your time,

-Klaus

How to feel the SQLpassion in your veins

The last month was really time-consuming for me: I ran my brand-new SQL Server Query Tuning Workshop 3 times across Europe: in Zurich/Switzerland, London/United Kingdom, and finally in Vienna/Austria. The workshop runs for 4 days, and I teach almost everything you need to know to write better and faster performing SQL queries for SQL Server.

I don’t want to talk too much about the workshop itself, because Arun, Ina, Bernd, and Thomas were kind enough to provide some feedback about the workshop from their perspective as attendees. So let’s sit back and enjoy listening to them.

Arun from London/United Kingdom

“My day-to-day writing of SQL has gone up a level – I am able to finely tune individual statements or underlying database objects to get the kind of query plan I want. Moreover I know now what sort of plan I am looking for. I have been already able to use this knowledge to address performance issues that came up this week in capacity testing.”

Arun now feels the SQLpassion in his veins

Ina from Oberhaching/Germany

“Anyone faced with the task to implement queries on complex ERP data structures – for reporting, providing views, or for statistical analysis – comes quickly to its limits without knowledge of the operation of SQL Server. The workshop provides an excellent insight into the technical world of SQL Server, explained with many examples of queries and their optimization.”

Ina now feels the SQLpassion in her veins

Bernd from Monheim/Germany

“After attending the 4 days long workshop you know very exactly how SQL Server executed SQL queries on the logical and physical level through execution plans. In addition you will know how you can improve queries by applying the right indexing strategy. This in-deep knowledge was very well explained and presented by Klaus.” 

Bernd now feels the SQLpassion in his veins

Thomas from St. Margrethen/Switzerland

“I was lucky enough to attend the 4 day long SQL Server Query Tuning Workshop from Klaus in Zurich. He has presented in a very easy way the performance related internals of SQL Server that will help you to write better performing SQL queries. Based on that knowledge I finally know now why sometimes queries are fast, and sometimes are slow. It was an amazing workshop, which makes the day-to-day work with SQL Server now very easy.”

Thomas now feels the SQLpassion in his veins

Wow, that’s really amazing! It makes me really proud to see the SQLpassion in their faces, and how the workshop has already helped them in their day-to-day work with SQL Server. If you also want to feel the SQLpassion in your veins when working with SQL Server, you can sign up for the last SQL Server Query Tuning Workshop of 2014 in Utrecht/Netherlands from November 17 – 21.

If you are attending the SQLPASS Summit in Seattle this week, make sure to say hi to me. I will try to spend as much time as possible at the Community Zone.

Nice greetings from the always rainy Seattle ;-)

-Klaus

SQL Server Quickie #19 – The Halloween Problem

Today I have uploaded the 19th SQL Server Quickie to YouTube. This time I’m talking about the Halloween Problem in SQL Server.

Thanks for watching!

-Klaus

Parameter Sniffing, Part 2

In last week’s blog posting I introduced the basic concept and the problems behind Parameter Sniffing in SQL Server. As you have seen, it can lead to serious performance problems when a cached plan is blindly reused by SQL Server. Today I want to show you how you can deal with this problem, and how to overcome it using a number of different techniques.

Indexing

The underlying root cause of the Parameter Sniffing problem that we discussed last week is the fact that in some cases the SQL statement produced a Bookmark Lookup and in other cases a Table/Clustered Index Scan operator in the execution plan. If you are able to change the indexes in your database, the easiest way is to provide a Covering Non-Clustered Index for this specific query. In our case you have to include the additional requested columns from the Bookmark Lookup in the leaf level of the Non-Clustered Index. If you do that, you have also achieved so-called Plan Stability: regardless of the provided input parameters, the Query Optimizer always compiles the same execution plan – in our case an Index Seek (Non Clustered) operator.
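Assuming the illustrative schema from last week’s example (Table1 with a Non-Clustered Index on Column2 and an additional payload column Column3), the covering index could look like this:

CREATE NONCLUSTERED INDEX idx_Table1_Column2 ON Table1 (Column2)
INCLUDE (Column3)
WITH (DROP_EXISTING = ON)
GO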

A Covering Non-Clustered Index is accessed

If you don’t have a chance to work on your indexing strategy (maybe you deal with a 3rd party application, where you are not allowed to make indexing changes), you can work with a number of “transparent” SQL Server options that I will describe in the following sections.

Recompilations

The first option that SQL Server offers you is a recompilation of the execution plan. SQL Server provides 2 different options for you to use:

  • A recompilation of the whole, complete stored procedure
  • A recompilation of the problematic SQL statement – a so-called Statement Level Recompilation (available since SQL Server 2005)

Let’s have a more detailed look at both options. The following code shows how you can apply a recompilation of the whole stored procedure with the RECOMPILE option.
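A sketch with the illustrative procedure from last week:

CREATE PROCEDURE GetRows
(
    @Col2Value INT
)
WITH RECOMPILE
AS
BEGIN
    SELECT * FROM Table1 WHERE Column2 = @Col2Value
END
GO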

When you run such a stored procedure, the Query Optimizer always recompiles the stored procedure at the beginning of the execution. Therefore you always get an execution plan which is optimized for the currently provided input parameter values. As a side-effect the execution plan no longer gets cached, because it doesn’t make sense to cache a query plan which is recompiled every time. When you have a large, complicated stored procedure a RECOMPILE query hint at the stored procedure level doesn’t always make sense, because your whole stored procedure is recompiled.

Maybe you have a Parameter Sniffing problem in just one specific SQL statement. In that case the overhead for the recompilation of the whole stored procedure would be too much. For that reason SQL Server, since version 2005, offers a so-called Statement Level Recompilation. You are able to mark a specific SQL statement for recompilation instead of the complete stored procedure. Let’s have a look at the following code.
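A sketch – here only the second statement is marked for recompilation:

CREATE PROCEDURE GetRowsStatementRecompile
(
    @Col2Value INT
)
AS
BEGIN
    -- This statement is compiled once and its plan is cached
    SELECT COUNT(*) FROM Table1

    -- This statement is recompiled on every execution
    SELECT * FROM Table1 WHERE Column2 = @Col2Value
    OPTION (RECOMPILE)
END
GO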

In that example the second SQL statement is recompiled every time that the stored procedure is executed. The first statement is compiled during the initial execution, and the generated plan is cached for further reuse. That’s the preferred way to deal with Parameter Sniffing when you have no influence on the indexing strategy of your database.

OPTIMIZE FOR

In addition to the recompilation of a stored procedure or the SQL statement, SQL Server also offers you the OPTIMIZE FOR query hint. With that query hint you are able to tell the Query Optimizer for which specific parameter values the generated query plan should be optimized. Let’s have a more detailed look at the following example.
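Again a sketch with the illustrative names from last week:

CREATE PROCEDURE GetRowsOptimizeFor
(
    @Col2Value INT
)
AS
BEGIN
    SELECT * FROM Table1 WHERE Column2 = @Col2Value
    OPTION (OPTIMIZE FOR (@Col2Value = 1))
END
GO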

As you can see from the stored procedure definition, the execution plan of the SQL statement is always optimized for an input parameter value of 1 for the parameter @Col2Value. Regardless of which input value you provide for this parameter, you will always get a plan compiled for the value of 1. With this approach you are already working with a sledgehammer against SQL Server, because the Query Optimizer no longer has any choice – it must always produce a plan optimized for the parameter value of 1. You can implement this query hint when you know that a query plan optimized for a specific parameter value should almost always be generated. You will be able to predict your query plans when you restart SQL Server or when you perform a cluster failover.

If you are going to go down this route, you really have to know your data distribution, and you also need to know when your data distribution changes. If the data distribution changes, you also have to review your query hint to see if it’s still appropriate. You can’t rely on the Query Optimizer, because you have just overruled it with the OPTIMIZE FOR query hint. You must always keep this in mind! In addition to the OPTIMIZE FOR query hint, SQL Server also offers the OPTIMIZE FOR UNKNOWN query hint. If you decide to use that query hint, the Query Optimizer uses the Density Vector of the underlying statistics object to derive the cardinality. The plan that the Query Optimizer generates for you depends on the data distribution. If your logical reads are over the Tipping Point, you end up with a Table/Clustered Index Scan…

Summary

In this blog posting I have shown you multiple ways to deal with the Parameter Sniffing problem in SQL Server. One of the most common root causes of this specific problem is a bad indexing strategy, where with a selective parameter value the Query Optimizer introduces a Bookmark Lookup into the execution plan. If such a plan gets reused in the future, your I/O costs will explode. I’ve already seen execution plans in production that generated more than 100 GB of logical reads, just because of this problem. A simple RECOMPILE query hint on the statement level fixed the problem, and the query produced just a few logical reads.

If you can’t influence the indexing strategy of the database, you can work with the RECOMPILE query hint on the stored procedure or SQL statement level. As a side-effect the recompiled plan will no longer be cached. In addition to these query hints, SQL Server also offers you the OPTIMIZE FOR and OPTIMIZE FOR UNKNOWN query hints. If you work with these query hints, you really have to know your data and your data distribution, because you are overruling the Query Optimizer. Always be aware of this fact!

Thanks for your time,

-Klaus

Parameter Sniffing, Part 1

In today’s blog posting I want to talk about one major performance problem that I see very often during SQL Server consulting engagements – the so-called Parameter Sniffing problem. This problem occurs when you work with parameterized SQL statements like stored procedures in combination with the plan caching mechanism of SQL Server. In the first part of this blog posting I will introduce the scope of this problem to you, and in the second part next week I will show you several techniques for solving it.

Parameter Sniffing – what?

When you execute a parameterized SQL query in SQL Server, the Query Optimizer compiles an execution plan based on the first provided parameter values. Afterwards the generated query plan is cached in the plan cache for further reuse. This means that SQL Server will reuse the plan afterwards without regard for the parameter values that you provide at the time. We need to recognize 2 kinds of parameter values:

  • Compile time values
  • Runtime values

The Compile time parameter values are the values that are used by the Query Optimizer to generate a physical execution plan. And the Runtime values are the values that you provide for the execution of the query plan. For the first execution these values are identical, but for subsequent executions, these values will probably be different. This may introduce serious performance problems, because your execution plan is always optimized for the Compile time values, and not for the various Runtime values that you subsequently provide.

Imagine that during the first query execution you provide a very selective value, and the Query Optimizer chooses a Non-Clustered Index Seek operation followed by a Bookmark Lookup operation to retrieve additional requested columns from your base table. Such an execution plan only makes sense with a selective value, because with a non-selective value your logical reads would be too high, and SQL Server would just scan the whole table and discard non-qualifying rows through a residual predicate in the Scan operator. The point where SQL Server decides between the two plan shapes is the so-called Tipping Point.

Imagine now what happens if a query plan with a Bookmark Lookup gets cached, and afterwards gets blindly reused with a non-selective value. In that case SQL Server doesn’t implement any protection mechanism and just executes the query plan found in the plan cache. As a side-effect your I/O costs (your logical reads) will just explode, and the query will end up with very bad performance. To show you a concrete example of this behavior, the following code creates a simple table with an uneven data distribution in the second column.
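The original script isn’t reproduced here; a sketch that produces the described data distribution could look like this (all names are illustrative):

CREATE TABLE Table1
(
    Column1 INT IDENTITY PRIMARY KEY,
    Column2 INT,
    Column3 CHAR(100)
)
GO

CREATE NONCLUSTERED INDEX idx_Table1_Column2 ON Table1 (Column2)
GO

-- The value 1 occurs only once...
INSERT INTO Table1 (Column2, Column3) VALUES (1, 'Some data')
GO

-- ...and the value 2 occurs 1499 times
DECLARE @i INT = 1
WHILE @i <= 1499
BEGIN
    INSERT INTO Table1 (Column2, Column3) VALUES (2, 'Some data')
    SET @i += 1
END
GO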

When you have a look at the table, you can see that the value 1 occurs only once, and that the value 2 occurs 1499 times. This is a simple example of an uneven data distribution, but you will often see it in your own tables: you have a lot of customers in your own country, and just a few customers in other countries. Based on that uneven data distribution, a Non-Clustered Index on this column, and the Tipping Point, you get 2 different execution plans for the same logical query:

2 different Execution Plans

When you now create a stored procedure, the Query Optimizer will generate the execution plan based on the first provided parameter values, and afterwards the query plan will be blindly reused by subsequent executions.
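A sketch of such a procedure, together with the two executions described below:

CREATE PROCEDURE GetRows
(
    @Col2Value INT
)
AS
BEGIN
    SELECT * FROM Table1 WHERE Column2 = @Col2Value
END
GO

-- The first execution compiles and caches a plan with a Bookmark Lookup
EXEC GetRows @Col2Value = 1
GO

-- The second execution blindly reuses the cached plan
EXEC GetRows @Col2Value = 2
GO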

When you now execute the stored procedure with the parameter value of 1, where just one record is returned, the Query Optimizer will choose a Bookmark Lookup in the execution plan. The query itself produces just 3 logical reads. But when you next execute the stored procedure – this time with a parameter value of 2 – the cached plan gets reused, and the Bookmark Lookup is done 1499 times – over and over again. For every returned record! The query now results in 1505 logical reads. That’s a huge difference from the previous execution. When you look at the properties window of the SELECT operator, you can also see that we have a parameter list. For every parameter SQL Server shows the Compile time and Runtime values.

Different Compile-Time and Runtime Values

And as you can see they are different. The Compile time value is 1, and the Runtime value is 2. This means that the execution in front of you was optimized for an input parameter value of 1, but actually you executed the stored procedure with a parameter value of 2. That’s the Parameter Sniffing problem in SQL Server.

Continue reading in part 2.

Summary

As you have seen it is very easy in SQL Server to hit that specific problem. Every time that you use parameterized SQL queries (such as in stored procedures) you have to deal with this side-effect when you have an uneven data distribution in your table, and your provided Non-Clustered Index is not a covering one. Now that I have introduced you to the problem, I will show you next week how you can deal with this specific problem, and which solutions SQL Server provides for you.

Thanks for your time,

-Klaus
