Become an expert in SQL Server Performance Tuning & Troubleshooting!

(Be sure to check out the FREE SQLpassion Performance Tuning Training Plan - you get a weekly email packed with all the essential knowledge you need to know about performance tuning on SQL Server.)

Since September 2012 I have been a Microsoft Certified Master for SQL Server 2008, which is the highest technical certification for SQL Server currently available. Because I want to share my performance tuning & troubleshooting knowledge, and because of the huge demand for performance tuning, I’m again running my popular SQL Server Performance Tuning & Troubleshooting Workshop across Europe this year in an extended version – 4 days of performance tuning & troubleshooting at SQLMCM level!

Based on the feedback from past attendees I have extended the workshop by one additional day. On that 4th day we take a real SQL Server workload and execute it on our database server. Throughout the whole day we dig into SQL Server and try to optimize it based on the knowledge we have acquired during the previous 3 days. At the end of the day we have a better-performing SQL Server which handles a larger workload than before.

I have already presented the shiny new content of the 4th day at various conferences around the world, like the SQLPASS Summit last year in Seattle in front of about 130 participants. Here is some feedback that shows what you can expect:

  • “The presentation was like listening to a story, not just technical info. Even though it was demo based it had practical and application real-world applications.”
  • “Although I already knew a lot of the principles that was gone over, I also learned a lot that I did not know. It will be good to take the additional knowledge back to work and see if I can improve our production environment.”
  • “The speaker provided plenty of information and techniques for SQL Server performance troubleshooting that are not evident in standard SQL Server training and from available online information.”
  • “My notes have a bunch of TODOs for follow up research and implementation. Mostly it’s a bunch of settings to check and probably tweak to suit our environment, but I also found a couple new tools to check out.”
  • “Baselining is the key to troubleshooting. Solving a problem can lead to more opportunities and sometimes solving problems can create more problems, without a baseline you’ll never know.”

I’m running the workshop at the following locations and dates:

  • April 22 – 25, Vienna/Austria
  • May 13 – 16, Zurich/Switzerland
  • May 27 – 30, Utrecht/Netherlands

You can find further information about the registration at http://www.SQLpassion.at/events.html. There is also an early-bird price available until the end of February. So don’t lose time – register and become an expert in SQL Server Performance Tuning & Troubleshooting!

Here is the detailed outline of the agenda for the 4 days.

  • Database Internals
    • Structures
    • Fixed Length Data Types
    • Variable Length Data Types
    • Sparse Columns
    • LOB Columns
    • Data Modifications
  • Execution Plans
    • Understanding Execution Plans
    • Physical Operators
    • Plan Generation & Caching
    • Plan Reuse & Recompilations
    • Plan Cache Pollution
    • Parameter Sniffing
  • Indexing
    • Table Scan/Index Scan/Index Seek
    • Clustered/Non-Clustered Indexes
    • Bookmark Lookups
    • Index Intersection
    • Filtered Indexes
    • Indexed Views
    • Page Fragmentation
    • Page Splits
    • Fill Factor
    • Searchable Arguments
    • Index Maintenance
  • Statistics
    • Overview
    • Working with Statistics
    • Multi-Column Statistics
    • Histogram/Density Vectors
    • Statistics Maintenance
  • Concurrency
    • Pessimistic Concurrency
    • Optimistic Concurrency
    • Isolation Levels
    • Troubleshooting Locking
    • Viewing Locks
    • Lock Granularity
    • Lock Hierarchy
    • Lock Escalations
    • Lock Compatibility
  • Deadlocking
    • Detection
    • Avoidance
    • Deadlocking Types
    • Troubleshooting
  • Latches & Spinlocks
    • Latch Architecture
    • Latch Types
    • Latch Modes
    • Hash Partitioning
    • Spinlocks
    • Troubleshooting
  • Performance Monitoring & Troubleshooting
    • Performance Monitoring Methodology
    • PAL Tools
    • Windows Performance Monitor
    • SQL Server Profiler
    • Establishing a Baseline
    • Wait Statistics
    • OS & Storage Configuration
    • Database Configuration
    • Memory Management
    • Parallelism
    • TempDb

Thanks

-Klaus

Do you like this post? Why not share it with the SQL Server community?

SQL Server Quickie #5 – Bookmark Lookup Deadlocks

Today I have uploaded the 5th SQL Server Quickie to YouTube. This time Santa Claus talks about Bookmark Lookup Deadlocks in SQL Server.

Like or share to get the source code.

Thanks for watching!

-Santa Claus

Practical SQL Server Performance Troubleshooting Day

Due to the huge demand for SQL Server troubleshooting know-how and techniques, I’m running my 1-day workshop called “Practical SQL Server Performance Troubleshooting Day” on January 29 (Vienna/Austria) and January 31 (Zurich/Switzerland) – see http://www.SQLpassion.at/events.html for further information.

During this 1-day workshop we will take a default installation of SQL Server and run an OLTP workload with several hundred users to generate our initial baseline for performance tuning/troubleshooting. Throughout the day we will work across various areas of SQL Server to implement different performance optimizations, and check how those changes impact the throughput of our test workload. At the end of the day we will have a well-performing SQL Server which can handle a much larger workload than the initial (default) installation.

The following areas are covered in this workshop:

  • Windows OS Settings
  • Storage Configuration
  • SQL Server Instance Settings
  • Database Settings
  • Index/Statistics Maintenance
  • Locking/Blocking/Deadlocking
  • Memory Management

I have also presented this content in November at the SQLPASS Summit in Seattle/USA as a precon with around 130 attendees. Here is some feedback from them:

  • “The presentation was like listening to a story, not just technical info. Even though it was demo based it had practical and application real-world applications.”
  • “Although I already knew a lot of the principles that was gone over, I also learned a lot that I did not know. It will be good to take the additional knowledge back to work and see if I can improve our production environment.”
  • “The speaker provided plenty of information and techniques for SQL Server performance troubleshooting that are not evident in standard SQL Server training and from available online information.”
  • “My notes have a bunch of TODOs for follow up research and implementation. Mostly it’s a bunch of settings to check and probably tweak to suit our environment, but I also found a couple new tools to check out.”
  • “Baselining is the key to troubleshooting. Solving a problem can lead to more opportunities and sometimes solving problems can create more problems, without a baseline you’ll never know.”

So if you are also interested in improving your SQL Server performance/throughput, don’t hesitate to register at http://www.SQLpassion.at/events.html. There is also an early-bird price available until the end of this year.

See you soon!

-Klaus

SQL Server Quickie #4 – IAM Pages

Today I have uploaded the 4th SQL Server Quickie to YouTube. In this episode I’m talking about IAM Pages in SQL Server.

Thanks for watching!

-Klaus

Nested Partitioned Views

A few weeks ago I had a customer engagement where the customer wanted to implement Nested Partitioned Views. Before we go into the details of this approach and whether it could work, I want to give you an overview of the requirements and why the customer wanted Nested Partitioned Views.

The main idea was that the customer had to support their SQL Server database on both Standard and Enterprise Edition. With the Enterprise Edition the customer used a combination of Partitioned Views and Partitioned Tables, as you can see from the following picture.


Some of you might ask why the heck you would want to implement such a scenario. The answer is easy: Partitioned Views have some disadvantages, and on the other hand Partitioned Tables also have some disadvantages. But when you combine them, you get the benefits of BOTH worlds. The following list describes the pros and cons of Partitioned Views in SQL Server.

  • Pros
    • Available on any Edition of SQL Server
    • Each table has its own, more accurate Statistics
    • Index Rebuilds of individual tables are ONLINE operations (when done on Enterprise Edition)
    • Each table can be indexed independently
      • Operational Data
      • Historical Data
  • Cons
    • Lots of tables to administer
    • Indexes must be created on individual tables
    • Check Constraints are needed for Table Elimination
    • Gaps and overlapping values are possible

The following list describes the pros and cons of Partitioned Tables in SQL Server.

  • Pros
    • Only one table to administer
    • Gaps and overlapping values are NOT possible
    • Completely transparent
  • Cons
    • Available only on the Enterprise Edition of SQL Server
    • Table-Level Statistics
      • Less accurate on larger Partitioned Tables
      • Filtered Statistics can help here…
    • Partition Level Index Rebuilds are OFFLINE operations
      • Only the whole Partitioned Table can be rebuilt ONLINE
    • Supports Partitioning only over a single column
      • Persisted Computed Columns are needed

As you can see from this list, there are a lot of disadvantages to Partitioned Tables, especially in the area of maintenance (less accurate Statistics, OFFLINE Partition-Level Index Rebuilds). And as you have seen in the previous figure, you can achieve a very powerful solution when you combine Partitioned Views with Partitioned Tables.

Because the customer also had to support the database on the Standard Edition of SQL Server (where Partitioned Tables are not available), they just wanted to substitute the Partitioned Tables with Partitioned Views – so in the final solution you just have Partitioned Views inside Partitioned Views: Nested Partitioned Views. The following picture illustrates this concept.


So the question now was whether this could be done with SQL Server. Of course you can build it this way, but what about performance? With a single Partitioned View, SQL Server can use Table Elimination when each participating table has a corresponding CHECK constraint defined. Therefore SQL Server only has to query the necessary tables – it’s almost the same concept as Partition Elimination with Partitioned Tables. But would Table Elimination work with Nested Partitioned Views? After all, there is no way to define a CHECK constraint on the inner Partitioned Views…

Before I give you the ultimate answer to that question, I want to walk through a complete scenario of how you can set up and implement Nested Partitioned Views as illustrated in the last picture. In the first step we create a new database and add 2 new file groups to it, where the data of 2007 and 2008 is stored.
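Since the original listing is not reproduced here, the following is a minimal sketch of that first step; the database name, file names, and paths are illustrative assumptions, not the customer’s actual names:

```sql
-- Sketch only: database name, file names, and paths are illustrative
CREATE DATABASE NestedPVDemo;
GO

ALTER DATABASE NestedPVDemo ADD FILEGROUP FG2007;
ALTER DATABASE NestedPVDemo ADD FILEGROUP FG2008;
GO

ALTER DATABASE NestedPVDemo ADD FILE
    (NAME = N'Sales2007Data', FILENAME = N'D:\Data\Sales2007Data.ndf', SIZE = 100MB)
TO FILEGROUP FG2007;

ALTER DATABASE NestedPVDemo ADD FILE
    (NAME = N'Sales2008Data', FILENAME = N'D:\Data\Sales2008Data.ndf', SIZE = 100MB)
TO FILEGROUP FG2008;
GO
```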


In each file group we create a Partitioned View which subdivides the data into 4 separate tables, one for each quarter (Q1, Q2, Q3, Q4). The following listing shows how to set up the Partitioned View for the 2007 sales data.
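A minimal sketch of such a Partitioned View (table and column names are my own illustrations; the important part is the CHECK constraint on the partitioning column of every member table):

```sql
-- One member table per quarter, stored on the 2007 file group.
-- The CHECK constraint on SalesDate is what enables Table Elimination.
CREATE TABLE Sales2007_Q1
(
    SalesID INT NOT NULL,
    SalesDate DATE NOT NULL,
    Amount DECIMAL(18, 2) NOT NULL,
    CONSTRAINT PK_Sales2007_Q1 PRIMARY KEY (SalesDate, SalesID),
    CONSTRAINT CHK_Sales2007_Q1 CHECK (SalesDate BETWEEN '20070101' AND '20070331')
) ON FG2007;
-- ...Sales2007_Q2, Sales2007_Q3, and Sales2007_Q4 follow the same pattern,
-- each with a CHECK constraint covering its own quarter...
GO

CREATE VIEW Sales2007
AS
SELECT SalesID, SalesDate, Amount FROM Sales2007_Q1
UNION ALL
SELECT SalesID, SalesDate, Amount FROM Sales2007_Q2
UNION ALL
SELECT SalesID, SalesDate, Amount FROM Sales2007_Q3
UNION ALL
SELECT SalesID, SalesDate, Amount FROM Sales2007_Q4;
GO
```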


As you can see from the listing, each individual table has a corresponding CHECK constraint, so that SQL Server can eliminate the tables which don’t have to be accessed when we query our Partitioned View. The next listing shows how to set up the Partitioned View for the 2008 sales data.


Now that we have set up the Partitioned Views for 2007 and 2008, we finally create another Partitioned View which just unions the data of the other 2 Partitioned Views together:
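Assuming the two inner Partitioned Views are named Sales2007 and Sales2008 (illustrative names), the outer view is a simple UNION ALL over them:

```sql
-- Sketch: Sales2007 and Sales2008 are the two inner Partitioned Views
CREATE VIEW Sales
AS
SELECT SalesID, SalesDate, Amount FROM Sales2007
UNION ALL
SELECT SalesID, SalesDate, Amount FROM Sales2008;
GO
```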


So now we have created our Nested Partitioned View: Partitioned Views within Partitioned Views. The final thing we have to check is whether SQL Server is able to do Table Elimination with our Nested Partitioned View. When we talk about Table Elimination (or Partition Elimination with Partitioned Tables), we have to differentiate between 2 types of elimination:

  • Static Elimination
  • Dynamic Elimination

Static Elimination means that you provide a static value within your query, so during the compilation of the Execution Plan the Query Optimizer already knows which data you are accessing. In that case SQL Server generates an Execution Plan which only references the relevant data. Look at the following query:
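A query with hard-coded values against the outer view could look like this (view and column names are illustrative):

```sql
-- Static Elimination: the date range is known at compile time,
-- so only the Sales2007_Q3 member table has to be referenced
SELECT SalesID, SalesDate, Amount
FROM Sales
WHERE SalesDate BETWEEN '20070801' AND '20070831';
```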


We are providing hard-coded values here, so the Query Optimizer can sniff them and generate an Execution Plan which only accesses the Sales2007_Q3 table. So Static Elimination works without any problems with Nested Partitioned Views, as you can see in the following Execution Plan:


Static Elimination is fine if you are doing demos, but in a production database, when you call your query with different values through parameters, SQL Server has to do Dynamic Elimination. Dynamic Elimination means that the Query Optimizer has to generate an Execution Plan which can’t take into account at compile time which tables you are accessing, because those tables are only determined at runtime – hence the name Dynamic Elimination. So let’s imagine we execute the following parameterized query:
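The parameterized variant of the same query might look like this (again with illustrative names):

```sql
-- Dynamic Elimination: the values are only known at runtime
DECLARE @StartDate DATE = '20070801';
DECLARE @EndDate   DATE = '20070831';

SELECT SalesID, SalesDate, Amount
FROM Sales
WHERE SalesDate BETWEEN @StartDate AND @EndDate;
```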


In that case SQL Server can’t determine which data is accessed, because the referenced variables are runtime constructs – they are not available during the compilation phase of our query. The above query produces the following Execution Plan:


When you look at the first step in that plan, you might think that SQL Server has to seek into each table to get the corresponding records. The Query Optimizer estimates each Clustered Index Seek at around 12%. But that’s only half the truth of that Execution Plan! The real power lies in the Filter Operator which comes before each Clustered Index Seek. When you look at the Tooltip window of the Filter Operator inside SQL Server Management Studio, you can see that the operator has a so-called Startup Expression Predicate:


As you can see from the picture, the Startup Expression Predicate has the same value as the CHECK constraint on the table itself. So the Filter Operator only calls the Clustered Index Seek operator on the underlying table when the Startup Expression Predicate evaluates to true! You can also cross-check this with the property Actual Executions on the Clustered Index Seek operators:


In our case SQL Server only executed the Clustered Index Seek operator on the Sales2007_Q3 table. On all other tables the Filter Operator prevented the execution of the Clustered Index Seeks through the Startup Expression Predicate! That’s a really powerful concept for Dynamic Table Elimination inside Partitioned Views.

As you can see from this example, Nested Partitioned Views work without any problems in SQL Server, and SQL Server uses a Startup Expression Predicate inside a Filter Operator to drive Dynamic Table Elimination when you work with Partitioned Views – it doesn’t matter whether those Views are nested or not. It is really important that you check the whole Execution Plan to get a better understanding of what SQL Server is actually doing for your query. When you just glance at the plan (without going into the details), you might think that SQL Server has to access every table, but that’s not really true!

A big Thank You goes to Paul White (Weblog, Twitter) who helped me to understand that behavior and implementation strategy of SQL Server. You can find the whole sample of Nested Partitioned Views here for download.

Thanks for reading

-Klaus

SQL Server Quickie #3 – Allocation Units

Today I have uploaded my 3rd SQL Server Quickie to YouTube. In this episode I’m talking about Allocation Units in SQL Server.


 

Thanks for reading & watching

-Klaus

My upcoming speaking schedule

The summer goes by, and in a few weeks the conference season starts again. The last weeks were a little stressful, but I finally had some time to prepare my sessions for the upcoming SQL Server conferences where I’m speaking over the next months.

24 Hours of PASS, September 12

At this online event (http://www.sqlpass.org/24hours/fall2012) I’m doing one session titled “Fast your Seatbelt – Troubleshooting the hardest SQL Server Problems“. I’m talking about things like Thread Pool Starvation, Spinlock Contention, and VMware Memory Balloon Driver issues – expect this to be a Level 400 session.

SQLdays, September 20 – 21, Rosenheim/Germany

At this German speaking conference (http://www.sqldays.net) I’m doing 3 sessions about various SQL Server topics:

  • Troubleshooting SQL Server with Extended Events
  • Fast your Seatbelt – Troubleshooting the hardest SQL Server Problems
  • Migrating to SQL Server 2012 AlwaysOn

PASS SQLRally Nordic, October 1 – 3, Copenhagen/Denmark

It’s the first time that I’m speaking at a SQLRally event (http://www.sqlpass.org/sqlrally/2012/nordic), and I’m really looking forward to going to Denmark – I’ve never been there! In Copenhagen I’m presenting one session about Troubleshooting SQL Server with Extended Events.

SQLPASS Summit, November 6 – 9, Seattle/USA

It’s now my 6th SQLPASS Summit where I’m presenting (2006, 2007, 2008, 2010, 2011, 2012), and I’m really proud to be able to do a day-long precon about Practical SQL Server Performance Troubleshooting. In this precon I’m using a TPC-E workload to generate an initial baseline for performance improvements, and over the day I’m showing various techniques to improve SQL Server performance. But read the precon abstract for yourself:

Learn how to configure the Windows OS, SQL Server, and your database for optimal performance. In this pre-conference workshop, we’ll start by taking a default installation of SQL Server and running an OLTP workload with several hundred users to generate our initial baseline for performance tuning and troubleshooting. Throughout the day, we’ll work with various areas of SQL Server to implement different performance optimizations and then see how those changes impact the throughput of our initial test workload. At the end of the day, we’ll have a well-performing SQL Server that can handle a much larger workload than the initial (default) installation.

Topics for the day include:
• Windows OS settings
• Storage configuration
• SQL Server instance settings
• Database settings
• Index and statistics maintenance
• Locking, blocking, and deadlocking
• Memory management

Besides the precon I’m presenting the following 2 regular conference sessions:

  • Advanced SQL Server Troubleshooting
  • ColumnStore Indexes – the Turbobooster in SQL Server 2012

Besides all these conference engagements, I’m also running my “SQL Server Performance Tuning & Troubleshooting Workshop” in Austria, Germany, Switzerland, and Ireland! See http://www.SQLpassion.at/events.html for further details. You can also expect some new SQL Server Quickies over the next weeks on my YouTube channel: http://www.youtube.com/user/sqlpassion. Autumn 2012 will be very busy for me (the same as last year), and I’m looking forward to seeing you at one of these great SQL Server events!

Thanks for reading

-Klaus

Initialize a Transactional Replication from a Database Backup

During the last days I have been working on Replication – especially Transactional Replication. The main idea of Transactional Replication is that you have a so-called Log Reader Agent running on your Publisher database that analyzes the Transaction Log file and synchronizes changes to so-called Articles through a Distribution database down to your Subscription databases.

When you set up your Transactional Replication through SQL Server Management Studio, you can only initialize the Subscription database through a so-called Snapshot. A snapshot is just a point-in-time “picture” of your database, which is dumped out to a file share through bcp.exe. Now imagine that you want to publish parts of a VLDB (Very Large Database) to various Subscribers. In that case Transactional Replication must take a snapshot of the Articles that you want to replicate from your VLDB. That’s not very practical, because it can take a very long time to create that snapshot and to initialize the Subscribers from it.

Fortunately there is also an option to initialize a Subscriber from a backup file, but unfortunately this option isn’t available through SQL Server Management Studio – you have to choose it through the Replication Stored Procedures that SQL Server uses in the background to implement the actual Replication. The idea of initializing a Transactional Replication from a backup file is not new, and there are other blog postings on this topic, but I wasn’t able to find a step-by-step tutorial on how to implement this approach. The most important blog posting is from Chris Skorlinski (http://blogs.msdn.com/b/repltalk/archive/2010/03/16/deep-dive-on-initialize-from-backup-for-transactional-replication.aspx). With the information provided in that posting I was finally able to do the initialization from a backup file. In this blog posting I want to show you step-by-step how you can achieve that functionality.

In my distributed scenario Transactional Replication is running on 3 different VMs:

  • SQL2008HADR1: Publisher
  • SQL2008HADR2: Distributor
  • SQL2008HADR3: Subscriber

Running Replication in a fully distributed scenario is more realistic, but setting up the needed security is a topic of its own. Fortunately Joe Sack from SQLskills.com wrote a very nice article a few weeks ago on how to set up security – see http://www.sqlskills.com/blogs/joe/post/SQL-Server-Pro-articlee28093e2809cGetting-Started-with-Transactional-Replicatione2809d.aspx for further details.

To get started I have created a completely new database on SQL2008HADR1 which acts as the Publisher for Transactional Replication:
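The original code is not reproduced here; a minimal sketch of that setup (the database name, backup path, and the columns of Foo are my own assumptions) could be:

```sql
CREATE DATABASE PublisherDb;
GO
USE PublisherDb;
GO

CREATE TABLE Foo
(
    ID INT IDENTITY(1, 1) PRIMARY KEY,
    Payload VARCHAR(100) NOT NULL
);
INSERT INTO Foo (Payload) VALUES ('Record 1'), ('Record 2'), ('Record 3');
GO

-- Full database backup
BACKUP DATABASE PublisherDb TO DISK = N'\\FileShare\Backups\PublisherDb.bak';
GO

-- These records are not yet contained in any backup
INSERT INTO Foo (Payload) VALUES ('Record 4'), ('Record 5');
GO
```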

As you can see from the code, I have created a simple table called Foo and inserted several records into it. I have also created a full database backup, and afterwards I inserted several more records, which are currently not contained in any backup. In the next step you have to create your Publication on the Publisher. This can be done completely through the UI provided by SQL Server Management Studio. The only thing you must skip is the creation of an initial snapshot, because we don’t need that snapshot.


In my case I have scripted out the creation of the Publication, which resulted in the following T-SQL code:
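The generated script is not reproduced here; a stripped-down sketch of such a Publication script (the publication name and option values are illustrative, not the exact output of the wizard) could look like this:

```sql
USE PublisherDb;
GO

-- Enable the database for publishing
EXEC sp_replicationdboption
    @dbname = N'PublisherDb',
    @optname = N'publish',
    @value = N'true';

-- Create the transactional publication
EXEC sp_addpublication
    @publication = N'PublisherDbPublication',
    @sync_method = N'concurrent',
    @repl_freq = N'continuous',
    @status = N'active';

-- Publish the whole table Foo without any filter
EXEC sp_addarticle
    @publication = N'PublisherDbPublication',
    @article = N'Foo',
    @source_owner = N'dbo',
    @source_object = N'Foo';
```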

In that publication I’m just replicating the whole table Foo; I have not applied any filter. After you have created your Publication, you must change one setting through the UI of SQL Server Management Studio. I was not able to figure out whether this change can also be done through T-SQL. You have to allow the initialization from backup files for Subscribers. To change that setting you go to the Local Publication on the Publisher, then to the Properties window, and finally to the page “Subscription Options“. There you set the option “Allow initialization from backup files” to “True“.


This setting allows the Subscribers to be initialized from a backup. In the next step I’m taking a final Transaction Log backup, so that all transactions are backed up.
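Such a final log backup is a one-liner (the path is illustrative):

```sql
BACKUP LOG PublisherDb TO DISK = N'\\FileShare\Backups\PublisherDb.trn';
```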

Note: You need to take this backup after the Publication has been configured on the Publisher. Otherwise the initialization from backup will not work!

When you are using Replication in a distributed scenario you also have to make sure that the Distribution Agent can access your Publisher database.

In the next step you restore the taken database backups on the Subscriber, in my case SQL2008HADR3.
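A sketch of the restore sequence on the Subscriber (the database name and paths are assumptions):

```sql
-- Restore the full backup first, leaving the database ready for further restores
RESTORE DATABASE PublisherDb
FROM DISK = N'\\FileShare\Backups\PublisherDb.bak'
WITH NORECOVERY;

-- Then restore the final Transaction Log backup and bring the database online
RESTORE LOG PublisherDb
FROM DISK = N'\\FileShare\Backups\PublisherDb.trn'
WITH RECOVERY;
```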

Now you are ready to create the Subscription. When you create a new Subscription you have to make changes on both the Publisher and the Subscriber. Therefore you execute the following T-SQL code on the Publisher (SQL2008HADR1) to create the Subscription.
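The call on the Publisher could be sketched like this (server, database, and publication names are illustrative):

```sql
USE PublisherDb;
GO

EXEC sp_addsubscription
    @publication = N'PublisherDbPublication',
    @subscriber = N'SQL2008HADR3',
    @destination_db = N'PublisherDb',
    @subscription_type = N'pull',
    @sync_type = N'initialize with backup',
    @backupdevicetype = N'disk',
    @backupdevicename = N'\\FileShare\Backups\PublisherDb.trn';
```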

When you call the stored procedure sp_addsubscription on the Publisher, you can now specify that you want to initialize the Subscriber through a database backup. For that reason you set the parameter @sync_type to “initialize with backup“. You also have to supply the *last* backup that you restored on the Subscriber through the parameter @backupdevicename. In my case this is the Transaction Log backup that I took previously. This backup is then opened by sp_addsubscription to get the last LSN (Log Sequence Number) of the backup. That LSN is used by the Distribution Agent to replicate the commands from the msrepl_transactions table to the Subscriber that occurred afterwards.

Note: You have to call sp_addsubscription directly from T-SQL, because setting these options is not possible through SQL Server Management Studio.

When you have created the Subscription on the Publisher, you are finally ready to create it on the Subscriber (SQL2008HADR3).
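On the Subscriber, a pull subscription could be created like this (names are illustrative; the security parameters of the agent job are omitted for brevity):

```sql
USE PublisherDb;  -- the restored database on SQL2008HADR3
GO

EXEC sp_addpullsubscription
    @publisher = N'SQL2008HADR1',
    @publisher_db = N'PublisherDb',
    @publication = N'PublisherDbPublication';

EXEC sp_addpullsubscription_agent
    @publisher = N'SQL2008HADR1',
    @publisher_db = N'PublisherDb',
    @publication = N'PublisherDbPublication',
    @distributor = N'SQL2008HADR2';
```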

After completing these steps you have a fully working Transactional Replication which was initialized from a database backup instead of a snapshot. For VLDBs this can be a huge advantage compared to creating a very big snapshot that must be transferred over the network to the Subscriber and finally applied. The drawback of this approach is that you have to restore the whole database on the Subscriber, not only the Articles that you are publishing. Additionally, you have the whole data on the Subscriber, even when you replicate Articles with Filters. For these reasons you have to manually delete the unnecessary data on the Subscriber afterwards.

Thanks for reading!

-Klaus

SQL Server Quickie #2 – Extents

Yesterday I uploaded my 2nd SQL Server Quickie to YouTube. In this episode I’m talking about Extents, and how SQL Server manages them through GAM and SGAM pages.

Thanks for reading & watching!

-Klaus

SQL Server Quickies – a new way to learn about SQL Server

We are all living in a fast-moving world, especially in the IT industry. We have to learn new technologies every few months and must keep our knowledge up-to-date to be competitive. A lot of us learn through reading, and a lot of us also learn through watching web casts and training videos. I’m a big fan of the latter, because it is a completely new way to learn.

Unfortunately web casts and training videos always last for at least an hour, and you are just staring at prepared PowerPoint slides while listening to your trainer in the background. And sometimes it’s just impossible to watch web casts, because you only have a few minutes, not a whole hour – like when you are on the road waiting for a train. For that reason a new idea was born: SQL Server Quickies!

The goal of my new SQL Server Quickies video series is that you can learn about specific advanced SQL Server topics within 5 – 10 minutes by listening to me working hard on the flipchart explaining various SQL Server topics. And at the end we dig into SQL Server Management Studio, exploring the illustrated concepts with a practical example.

Today I’m very proud to announce my first SQL Server Quickie, about Data Pages in SQL Server and how you can analyze them within SQL Server Management Studio. My goal is to deliver around 2 – 3 SQL Server Quickies per month about various more advanced SQL Server topics – so stay tuned and enjoy the new learning experience. You can find all posted SQL Server Quickies on my YouTube channel at http://www.youtube.com/user/sqlpassion.

Thanks for reading & watching!

-Klaus
