Improved Temp Table Caching in SQL Server 2014

Under certain preconditions SQL Server is able to cache Temp Tables. Caching Temp Tables means that SQL Server doesn’t have to recreate them when you create the same Temp Table over and over again. This can improve the throughput of your workload tremendously, because SQL Server doesn’t have to access the specialized allocation pages in memory (PFS, GAM, SGAM), which would otherwise lead to Latch Contention under a heavy workload. Paul White has an excellent blog posting which describes that behavior in *more* detail ;-)

One of the requirements of Temp Table Caching is that you can’t mix DML with DDL statements in a Stored Procedure. Imagine the following code:
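
A minimal sketch of such a procedure (the procedure, table, and column names are assumed for illustration):

```sql
CREATE PROCEDURE dbo.MixDDLWithDML
AS
BEGIN
    -- DDL: create the Temp Table
    CREATE TABLE #TempTable
    (
        Col1 INT NOT NULL,
        Col2 INT NOT NULL
    );

    -- Additional DDL after the CREATE TABLE: this is what prevents Temp Table Caching
    CREATE UNIQUE CLUSTERED INDEX idx_Col1 ON #TempTable(Col1);

    -- DML against the Temp Table
    INSERT INTO #TempTable (Col1, Col2) VALUES (1, 1);
END
GO
```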

Here you are creating an index through a separate DDL statement (CREATE UNIQUE CLUSTERED INDEX), which means you are mixing DDL with DML statements. For that reason SQL Server is not able to cache your Temp Table. You can prove that behavior by tracking the performance counter Temp Tables Creation Rate from the DMV sys.dm_os_performance_counters, as in the following example:
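
A sketch of how that can be measured, reusing the procedure sketched above (GO followed by a count is the SSMS batch-repeat shortcut):

```sql
-- Counter value before the test
SELECT cntr_value AS TempTablesCreationRate
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Temp Tables Creation Rate';
GO

-- Execute the Stored Procedure 1000 times
EXEC dbo.MixDDLWithDML;
GO 1000

-- Counter value after the test
SELECT cntr_value AS TempTablesCreationRate
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Temp Tables Creation Rate';
GO
```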

When you are running that code, SQL Server has to create 1000 individual Temp Tables, as you can see from the output window in SQL Server Management Studio:

It’s very easy to overcome that problem by enforcing the UNIQUE CLUSTERED INDEX through a PRIMARY KEY constraint. That way you are not mixing DDL with DML statements anymore, and SQL Server is finally able to cache your Temp Table.
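
The same procedure, sketched with the uniqueness enforced inside the CREATE TABLE statement instead of a separate DDL statement:

```sql
ALTER PROCEDURE dbo.MixDDLWithDML
AS
BEGIN
    -- The PRIMARY KEY constraint replaces the separate CREATE UNIQUE CLUSTERED INDEX
    CREATE TABLE #TempTable
    (
        Col1 INT NOT NULL PRIMARY KEY CLUSTERED,
        Col2 INT NOT NULL
    );

    INSERT INTO #TempTable (Col1, Col2) VALUES (1, 1);
END
GO
```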

When you rerun the code from above which tracks the relevant performance counter, you can now see that SQL Server creates the Temp Table only once and reuses it:

Consequently this also means that SQL Server is not able to cache Temp Tables when you create additional Non-Clustered Indexes, because you are then again mixing DDL with DML statements within your Stored Procedure.

But with SQL Server 2014 you can overcome this limitation, because you are now able to create indexes inline within your CREATE TABLE statement. Look at the following code:
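
A sketch of that inline syntax (index and column names are just placeholders):

```sql
CREATE PROCEDURE dbo.InlineIndexes
AS
BEGIN
    CREATE TABLE #TempTable
    (
        Col1 INT NOT NULL PRIMARY KEY CLUSTERED,
        Col2 INT NOT NULL INDEX idx_Col2 NONCLUSTERED,
        Col3 INT NOT NULL INDEX idx_Col3 NONCLUSTERED
    );

    INSERT INTO #TempTable (Col1, Col2, Col3) VALUES (1, 1, 1);
END
GO
```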

As you can see I have created 2 additional Non-Clustered Indexes on the Temp Table directly during the creation of the Temp Table itself. This again means we are not mixing DDL with DML statements, and SQL Server is again able to cache and reuse your Temp Table:

That’s a very nice addition and positive side effect of defining indexes inline on Temp Tables in SQL Server 2014.

What do you think about that new feature? Please feel free to leave a comment.

Thanks for reading

-Klaus

Non-Clustered Indexes on Table Variables in SQL Server 2014

Earlier today I saw a Tweet from Paul White, in which he mentioned that SQL Server 2014 supports Non-Unique Clustered and Non-Clustered Indexes on Table Variables. So I had to start up my virtual machine and try it out, because this would be an awesome new feature. Table Variables are great, because you can avoid excessive recompilations with them. They have no statistics, and when you create them, you are not changing the database schema. They are just variables, but they are still persisted in TempDb.

One drawback was that you were not able to create Non-Clustered Indexes on them, which is bad when you are dealing with a larger dataset. But with SQL Server 2014 CTP1 that behavior has now changed. Just have a look at the following code:
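
A minimal sketch of the new syntax (names and values are assumptions):

```sql
DECLARE @TempTable TABLE
(
    Col1 INT NOT NULL PRIMARY KEY CLUSTERED,
    Col2 INT NOT NULL INDEX idx_Col2 NONCLUSTERED
);

INSERT INTO @TempTable (Col1, Col2) VALUES (1, 100);

-- The predicate on Col2 can be satisfied through the additional Non-Clustered Index
SELECT Col1, Col2 FROM @TempTable WHERE Col2 = 100;
```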

When you look at the Execution Plan of the SELECT statement, SQL Server is executing a Non-Clustered Index Seek operator. As you can see, you are now able to create additional Non-Clustered Indexes on the Table Variable. Each created Non-Clustered Index has no Statistics object attached to it. It’s a very nice, easy syntax which is also supported on “normal” database tables that you create. Let’s have a look at the following table definition:
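
The same inline syntax on a regular table, as a sketch:

```sql
CREATE TABLE dbo.NormalTable
(
    Col1 INT NOT NULL PRIMARY KEY CLUSTERED,
    Col2 INT NOT NULL INDEX idx_Col2 NONCLUSTERED
);
GO
```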

Furthermore it’s also possible to create composite indexes with the new syntax:
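
A sketch of a composite index with the inline syntax; because it spans more than one column it is defined at the table level (this works for Table Variables and regular tables alike):

```sql
DECLARE @TempTable TABLE
(
    Col1 INT NOT NULL PRIMARY KEY CLUSTERED,
    Col2 INT NOT NULL,
    Col3 INT NOT NULL,
    INDEX idx_Col2_Col3 NONCLUSTERED (Col2, Col3)
);

SELECT Col1 FROM @TempTable WHERE Col2 = 1 AND Col3 = 1;
```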

Very nice :-)

Thanks for reading

-Klaus

First steps with Extreme Transaction Processing – Hekaton

A few hours ago Microsoft released the first public CTP of SQL Server 2014. The download and the installation went very smoothly; the only new restriction is that you are not allowed to install the CTP1 version side by side with any other SQL Server version. Aaron Bertrand has blogged about that new behavior.

In this blog posting I want to give a brief overview of how to make your first steps with Extreme Transaction Processing (XTP), formerly known as Hekaton. If you want a good overview of XTP, I recommend Kalen Delaney’s whitepaper about it, and the research whitepaper titled High-Performance Concurrency Control Mechanisms for Main-Memory Databases (can be downloaded here) that was released by Microsoft Research.

All the versioning history that XTP maintains transparently for you in the background is stored in a FILESTREAM filegroup within your SQL Server database. So the first thing that you have to do when you want to work with XTP is to add a new FILESTREAM filegroup to the corresponding database. That new filegroup must also be marked with the property MEMORY_OPTIMIZED_DATA:
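
A sketch of that step (the database and filegroup names are assumptions):

```sql
ALTER DATABASE HekatonTest
ADD FILEGROUP XTPFileGroup CONTAINS MEMORY_OPTIMIZED_DATA;
GO
```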

After you have created the new FILESTREAM filegroup, you have to add a new file to that filegroup:
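
Again a sketch; for a MEMORY_OPTIMIZED_DATA filegroup the FILENAME points to a folder (a container), and the path used here is an assumption:

```sql
ALTER DATABASE HekatonTest
ADD FILE
(
    NAME = N'HekatonContainer',
    FILENAME = N'C:\SQLData\HekatonContainer'
)
TO FILEGROUP XTPFileGroup;
GO
```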

After we have prepared our database for XTP, we can finally add our Memory Optimized Table – that’s how SQL Server refers to XTP tables:

From the above screenshot you might expect a very cool table wizard, but the only thing that we currently get is a T-SQL script template – nothing more. That reminds me of my Service Broker days, and we still have no wizard for Service Broker in SQL Server 2014 – 8 years have gone by without any improvements in this area. Hopefully Microsoft invests a little bit more into XTP… Here is the necessary T-SQL script to create an XTP table:
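
A minimal sketch of such a table (column names and the bucket count are assumptions):

```sql
CREATE TABLE dbo.TestTable
(
    Col1 INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024),
    Col2 INT NOT NULL,
    Col3 INT NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON);
GO
```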

Every table in XTP needs a primary key constraint that must be a Non-Clustered Hash Index. Clustered Hash Indexes are currently not supported. You also have to specify the bucket count with the BUCKET_COUNT clause. And finally you have to mark your table as MEMORY_OPTIMIZED. Congratulations, you have created your first Memory Optimized Table – it wasn’t that hard!
Working with the new table is also very easy. Let’s insert a simple record:
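
For example (the values are placeholders):

```sql
INSERT INTO dbo.TestTable (Col1, Col2, Col3) VALUES (1, 2, 3);
GO
```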

But the real power of XTP shows in combination with concurrent users, because there is no Locking/Blocking/Latching anymore. Even Exclusive Locks (X) are just gone. Normally, when you run the above INSERT statement against a “normal” table, you get an IX lock on the table and on the page, and an X lock on the record itself. But with XTP all those locks are just gone. See the following query:
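
A rough sketch of how that can be checked: run the INSERT inside an open transaction and look at sys.dm_tran_locks for the current session (depending on the CTP build, the statement may already need the WITH (SNAPSHOT) table hint that is discussed further below):

```sql
BEGIN TRANSACTION;

INSERT INTO dbo.TestTable (Col1, Col2, Col3) VALUES (2, 2, 2);

-- Which locks does this session currently hold?
SELECT resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID;

-- COMMIT TRANSACTION;  -- deliberately left open for the next step
```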

The following screenshot shows the output from sys.dm_tran_locks. As you can see there is only a Schema Stability Lock (Sch-S) on the table itself, but the IX and X locks are just gone – very nice:

When you run the above transaction without committing it immediately, you can also retrieve the data without blocking from a different session through a simple SELECT statement. SQL Server uses an Index Scan (NonClusteredHash) operator in the Execution Plan:

Before XTP this behavior was only possible by enabling Optimistic Concurrency within the database – Read Committed Snapshot Isolation or Snapshot Isolation, which were introduced back in SQL Server 2005.
Let’s try to run an UPDATE statement:
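
For example, as a simple autocommit statement:

```sql
UPDATE dbo.TestTable SET Col2 = 20 WHERE Col1 = 1;
GO
```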

When we look at the Update Execution Plan, there is nothing special here except the Index Seek (NonClusteredHash) operator. So XTP is able to both Scan and Seek Hash Indexes. Let’s try to run the UPDATE statement in an explicit transaction:
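
The same UPDATE, sketched inside an explicit transaction:

```sql
BEGIN TRANSACTION;

UPDATE dbo.TestTable SET Col2 = 30 WHERE Col1 = 1;

COMMIT TRANSACTION;
```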

In this case, SQL Server gives us the following error message:

Accessing memory optimized tables using the READ COMMITTED isolation level is supported only for autocommit transactions. It is not supported for explicit or implicit transactions. Provide a supported isolation level for the memory optimized table using a table hint, such as WITH (SNAPSHOT).

So when you are using explicit transactions, you have to hint SQL Server. But WITH (SNAPSHOT) isn’t the same as the Transaction Isolation Level Snapshot, because when I change the Isolation Level to Snapshot and try to rerun the transaction, I get the following error message:

Msg 41332, Level 16, State 0, Line 3
Memory optimized tables and natively compiled stored procedures cannot be accessed or created when the session TRANSACTION ISOLATION LEVEL is set to SNAPSHOT.

So let’s add a query hint to the query itself:
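
The UPDATE again, this time with the suggested table hint applied (a sketch):

```sql
BEGIN TRANSACTION;

UPDATE dbo.TestTable WITH (SNAPSHOT) SET Col2 = 30 WHERE Col1 = 1;

COMMIT TRANSACTION;
```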

Now the transaction commits, and sys.dm_tran_locks again only shows the Sch-S lock. The next thing that I wanted to try is to run 2 UPDATE statements in parallel by not committing the 1st transaction. So let’s just run the following query in 2 different sessions, and make sure you do not commit either transaction. Without XTP, the 2nd transaction would block because of the X lock that is held by the 1st session:
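
A sketch of the statement to run in both sessions (leave both transactions open):

```sql
BEGIN TRANSACTION;

UPDATE dbo.TestTable WITH (SNAPSHOT) SET Col2 = 40 WHERE Col1 = 1;

-- COMMIT TRANSACTION;  -- deliberately not committed in either session
```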

Unfortunately SQL Server gives us in the 2nd transaction the following error message:

Msg 41302, Level 16, State 110, Line 3
The current transaction attempted to update a record that has been updated since this transaction started. The transaction was aborted.
Msg 3998, Level 16, State 1, Line 1
Uncommittable transaction is detected at the end of the batch. The transaction is rolled back.
The statement has been terminated.

We just ran into an Update Conflict, and SQL Server rolled back the 2nd transaction. I hadn’t expected that, but I have to think a little bit more about it.
The next thing that XTP provides you in the CTP1 version are so-called Natively Compiled Stored Procedures:

Again, there is no wizard, just a simple T-SQL script template that you can use. Native Compilation means that SQL Server compiles the whole stored procedure down to C/C++ code in the background – the performance will be awesome, because we are now executing native code directly within SQL Server. The following script shows what a simple XTP stored procedure looks like:
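
A minimal sketch of such a procedure against the table from above (the procedure name and body are assumptions):

```sql
CREATE PROCEDURE dbo.InsertTestRecord
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
(
    TRANSACTION ISOLATION LEVEL = SNAPSHOT,
    LANGUAGE = N'us_english'
)
    INSERT INTO dbo.TestTable (Col1, Col2, Col3) VALUES (4, 5, 6);
END
GO
```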

There are a lot of prerequisites that must be met:

  • The Stored Procedure must be created with SCHEMABINDING and EXECUTE AS
  • The Stored Procedure must be marked with NATIVE_COMPILATION
  • You have to specify the ATOMIC block, where you set the Transaction Isolation Level, and the language

When you have created the stored procedure, you can simply execute it:
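
For example, using the sketched procedure from above:

```sql
EXEC dbo.InsertTestRecord;
GO
```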

Because you are now dealing with native code, there is no Execution Plan anymore! Have fun with performance tuning… An ALTER PROCEDURE is also not supported (which is understandable, given the natively generated code), which means you have to DROP and CREATE the stored procedure again if you want to make changes.

This was a short walkthrough of XTP, based on what I encountered in the 1st hour after installing the CTP1 version of SQL Server 2014. More information and additional content will definitely come. You can download the whole T-SQL code that I have shown in this blog posting here.

Call to action: Download the CTP1 of SQL Server 2014, play around with XTP and all the other nice cool features. Please feel free to leave a comment about your experience and your opinions about Extreme Transactional Processing.

Thanks for reading

-Klaus

My upcoming SQL Server Trainings

The 1st half of 2013 is almost over, so it’s time to announce my upcoming SQL Server trainings for the rest of 2013. Besides my traditional SQL Server Performance Tuning & Troubleshooting Workshop I have added 2 brand new workshops to the schedule:

  • DBA Essential Days
  • Without a Trace – Extended Events in SQL Server

The goal of the DBA Essential Days is to provide, in a 2-day course, all the essential DBA skills that Accidental or Involuntary SQL Server DBAs need. In this course I’m talking about Hardware Selection, Installation, Configuration, Backup & Recovery Strategies, Database Maintenance, Baselining, and Performance Monitoring for SQL Server. This brand-new workshop runs at the following locations and dates:

  • October 28 – 29 in Zurich/Switzerland
  • November 11 – 12 in Vienna/Austria
  • November 25 – 26 in Kontich/Belgium

Further information about the workshop, the detailed agenda, and the registration details can be found here.

Besides the DBA Essential Days I’m also introducing the new 1-day workshop Without a Trace – Extended Events in SQL Server. As you might know, SQL Server Profiler and SQL Trace are deprecated features as of SQL Server 2012, which means they will be removed from the SQL Server product over the next release cycles. The follow-up technology, introduced with SQL Server 2008, is Extended Events, a new lightweight tracing infrastructure. The goal of that 1-day workshop is to introduce Extended Events to you, describe the architecture, and show you with a lot of different demos and examples how you can migrate from SQL Server Profiler/SQL Trace to Extended Events, and how to do performance troubleshooting with it. I’m running this workshop at the following locations and dates:

  • October 30 in Zurich/Switzerland
  • November 13 in Vienna/Austria

Further information about the workshop, the detailed agenda, and the registration details can be found here.

Because of the high demand for high-quality SQL Server training in Belgium, I’m also running my SQL Server Performance Tuning & Troubleshooting Workshop from November 18 – 22 in Kontich/Belgium (near Antwerp). The workshop now runs for 5 days, and includes the 1-day workshop about Extended Events, because troubleshooting SQL Server performance problems without Extended Events has become almost impossible since SQL Server 2012 :-). You can find further information about the workshop in Belgium here.

I’m looking forward to seeing some of you at one of the upcoming workshops to share my SQLpassion with you :-)

Thanks for reading

-Klaus

SQL Server Tipping Games – Why Non-Clustered Indexes are just ignored!

Almost every time I’m doing SQL Server consulting engagements, DBAs show me queries and their Execution Plans where SQL Server just ignores a well-defined Non-Clustered Index. Imagine for example the following table and index definition:
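
A minimal sketch of such a table and index (the column names and sizes are assumptions, chosen so that a row is roughly 400 bytes wide):

```sql
CREATE TABLE Customers
(
    CustomerID INT NOT NULL PRIMARY KEY CLUSTERED,
    CustomerName CHAR(100) NOT NULL,
    Address CHAR(100) NOT NULL,
    Comments CHAR(185) NOT NULL,
    Value INT NOT NULL
);
GO

-- The Non-Covering Non-Clustered Index that will later be ignored
CREATE NONCLUSTERED INDEX idx_Value ON Customers(Value);
GO
```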

When you populate the table with 80.000 records and run the following query, SQL Server just ignores your Non-Clustered Index and scans the whole table:
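
For example a query along these lines (the predicate values are assumptions; it touches only a small fraction of the rows, yet still results in a Clustered Index Scan):

```sql
SELECT * FROM Customers
WHERE Value BETWEEN 1 AND 2000;
```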

Most people are now proud, because they think they have found a bug in SQL Server, and sometimes they try to hint the Query Optimizer with an index hint, which then produces the following (expected) Execution Plan:

As you can see from the picture, SQL Server has to do a Bookmark Lookup, because you don’t have a Covering Non-Clustered Index defined for that specific query. SQL Server is actually doing you a big favor when it scans the whole Clustered Index: doing the Bookmark Lookup for every retrieved record would be too expensive, so SQL Server scans the whole table instead, which produces fewer I/Os for your query and reduces the CPU consumption, because the Bookmark Lookup is always done through a Nested Loop operator.

This behavior or “safety net” is called the Tipping Point in SQL Server. Let’s now have a more detailed look at that concept. In short, the Tipping Point defines whether SQL Server does a Bookmark Lookup or a whole Table/Clustered Index Scan. This also implies that the Tipping Point is only relevant for Non-Covering Non-Clustered Indexes. An index which acts as a Covering Non-Clustered Index for a specific query doesn’t have a Tipping Point, so the problems described in this blog posting are not relevant to it.

When you have a specific query in front of you which produces a Bookmark Lookup, the number of retrieved pages decides whether SQL Server does a full scan or uses the Bookmark Lookup. Yes, you have read that correctly: the number of retrieved pages dictates whether it’s good or not to do a Bookmark Lookup! It is therefore completely *irrelevant* how many records a specific query returns; the only thing that matters is the number of pages. The Tipping Point is somewhere between 24% – 33% of the pages that a specific query has to read.

Below that range the Query Optimizer chooses a Bookmark Lookup; once the query has tipped over, the Query Optimizer produces an Execution Plan with a full scan of the table (with a Predicate inside the Scan operator). This also means that the size of your records defines where the Tipping Point lies. With very small records you can only retrieve a small set of records from your table; with larger records you can retrieve a much larger set of records before the query tips over and does a full scan. The following picture illustrates this behavior in more detail:

Let’s have a look at some concrete examples of that concept. Let’s just populate the table definition from earlier with 80.000 records:
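
A sketch that fills the table with 80.000 rows, where Value simply runs from 1 to 80.000:

```sql
SET NOCOUNT ON;

DECLARE @i INT = 1;
WHILE @i <= 80000
BEGIN
    INSERT INTO Customers (CustomerID, CustomerName, Address, Comments, Value)
    VALUES (@i, 'CustomerName ' + CAST(@i AS VARCHAR(10)), 'Address', 'Comments', @i);
    SET @i = @i + 1;
END
GO
```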

In our case every record is 400 bytes long, therefore 20 records can be stored on one page of 8kb. When we do a full scan of the table (SELECT * FROM Customers), SQL Server produces 4.016 logical reads (as can be seen with the session option SET STATISTICS IO ON):

In our case the table consists of 4.000 data pages in the leaf level of the Clustered Index, which means the Tipping Point is somewhere between 1.000 and 1.333 pages read for that specific table. This means you can read about 1,25% – 1,67% (1000/80000, 1333/80000) of the records from the table before the Query Optimizer decides to do a full scan of the table. Imagine the following query:
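
A sketch of a query that stays below the Tipping Point; with the data distribution from the population script above it returns 1.061 rows (the predicate values are assumptions):

```sql
SET STATISTICS IO ON;

SELECT * FROM Customers
WHERE Value BETWEEN 1 AND 1061;
```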

In that case SQL Server decides to do a Bookmark Lookup, because the query itself produces 3.262 logical reads. Imagine that: you retrieve 1.061 records out of 80.000, and the query already needs 3.262 logical reads – a complete scan of the table costs you a constant 4.016 logical reads (regardless of the number of records that the query returns).

As you can see from these numbers, Bookmark Lookups get really expensive (regarding I/O costs, and also regarding CPU costs!) when you retrieve a large number of records through them. Your logical reads would just explode, and therefore SQL Server implements the Tipping Point, which simply discards that inefficient Execution Plan and scans the whole table. Imagine the following, almost identical query:
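
The same sketch with a slightly wider range, just enough to tip over into a scan (again, the values are assumptions):

```sql
SELECT * FROM Customers
WHERE Value BETWEEN 1 AND 1500;
```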

In that case SQL Server discards the Execution Plan with the Bookmark Lookup and does a full scan of the table, which produces a constant 4.016 logical reads. Two almost identical queries, but 2 different Execution Plans! A huge problem for performance tuning, because you have no Plan Stability anymore.

Based on your input parameters you get 2 different plans! That’s also one of the biggest drawbacks of Bookmark Lookups: with Bookmark Lookups you can’t achieve Plan Stability! When such a cached plan gets reused with a larger number of records (or your statistics are out-of-date), you will really have performance problems, because the inefficient plan with the Bookmark Lookup gets blindly reused by SQL Server! I have seen examples of that problem where queries were running for minutes instead of a few seconds.

As we have already said, the Tipping Point depends on the number of pages you read for your specific query. Imagine we change our table definition a little bit, and each record is now only 40 bytes long. In that case 200 records can be stored on one page of 8kb:
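
A sketch of the narrower table (again, the column names and sizes are assumptions, chosen so that a row is roughly 40 bytes wide):

```sql
CREATE TABLE CustomersSmall
(
    CustomerID INT NOT NULL PRIMARY KEY CLUSTERED,
    CustomerName CHAR(20) NOT NULL,
    Value INT NOT NULL
);
GO

CREATE NONCLUSTERED INDEX idx_Value ON CustomersSmall(Value);
GO
```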

When we populate the table again with 80.000 records, we need only 400 pages in the leaf level of the Clustered Index. In this case the Tipping Point is somewhere between 100 and 133 read pages, which means you can only retrieve around 0,125% – 0,167% of the records through the Non-Clustered Index. This is almost NOTHING! Your Non-Covering Non-Clustered Index is now just useless!!!

The following query produces a Bookmark Lookup, and the second one a Clustered Index Scan:
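
Two sketched queries against the narrow table (populated the same way, with Value running from 1 to 80.000):

```sql
-- Returns 100 rows: below the Tipping Point, SQL Server uses the Bookmark Lookup
SELECT * FROM CustomersSmall
WHERE Value BETWEEN 1 AND 100;

-- Returns 157 rows: already over the Tipping Point, SQL Server scans the Clustered Index
SELECT * FROM CustomersSmall
WHERE Value BETWEEN 1 AND 157;
```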

As you can see from the second query, you only retrieve 157 records out of 80.000, which means your query is very, very, very selective, but SQL Server still ignores your perfect (?) Non-Clustered Index. For that query your Non-Clustered Index isn’t really perfect, because it’s not a Covering Non-Clustered Index. Imagine what happens when you hint SQL Server to use the Non-Clustered Index and retrieve every record from the table through the inefficient Bookmark Lookup:
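
A sketch of forcing the Non-Clustered Index while reading the whole table:

```sql
SELECT * FROM CustomersSmall WITH (INDEX(idx_Value))
WHERE Value > 0;
```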

That query produces 165.151 logical reads – really far away from the logical reads that the Clustered Index Scan produces. As you can see from this example, the Tipping Point is just a safety net in SQL Server, and prevents queries with a Bookmark Lookup from getting really expensive. But it doesn’t depend on the number of records. In both examples we dealt with the same number of records in the table – 80.000. The only thing that we changed is the size of the records, and therefore the size of the table; finally the Tipping Point changed, and SQL Server just ignored our almost perfect Non-Clustered Index.

Moral of the story: a Non-Clustered Index which isn’t a Covering Non-Clustered Index has a very, very, very, very, very narrow use case in SQL Server! Think about that the next time you are working on your indexing strategy.

Call to action: have you also encountered scenarios in your workloads where SQL Server just ignored your almost perfect Non-Clustered Index, and you wondered what was going on with that specific query? Please feel free to leave a comment with your stories…

Thanks for reading

-Klaus

My 2 cents on SQL Server 2014

Yesterday, during the opening keynote of TechEd North America, Microsoft released the first official details about the next version of SQL Server: SQL Server 2014. There are huge improvements in the areas of High Availability, Performance, and Scalability, on which I want to concentrate in this blog posting.

Disclaimer: This blog posting is based on the first official information that Microsoft released yesterday (June 3, 2013) about SQL Server 2014. This posting is not a complete feature reference for SQL Server 2014, nor may it be 100% accurate, because there is not really much information currently available.

SQL Server AlwaysOn

With SQL Server 2012 Microsoft released the first implementation of AlwaysOn, essentially a completely new High Availability technology based on the principles of Database Mirroring. With the SQL Server 2014 release Microsoft now supports up to 8 different Secondaries, which gives you a huge scale-out in comparison to the 4 Secondaries that are currently possible with SQL Server 2012. The question for me is who will really pay for it. I’m working with a lot of different customers who are currently not able to migrate to AlwaysOn because of the additional licensing costs (AlwaysOn is only part of the Enterprise Edition of SQL Server). Scaling out a Read-Only workload to Secondaries is a perfect concept, but almost none of my customers will actually pay for that nice feature (as soon as you use a Secondary, you have to fully license it).

Online Database Operations

Online Database Operations are simply a foundation for some SQL Server based shops. Unfortunately Microsoft hasn’t made our life very easy here in the past. SQL Server 2012 already made a huge step forward in this area. With SQL Server 2014 Microsoft takes an additional step and provides Online Index Operations at the Partition Level, which will be a very nice feature for VLDBs (Very Large Databases).

Backups to Azure

The first time I read about Backups to Azure, I thought Microsoft was kidding us. No, they really did it: you can now store your backup files directly in Azure. In my opinion this is just a crazy idea. It introduces a lot of overhead regarding the involved latency times, and you have no direct access to your backup files anymore. Just imagine your ISP (Internet Service Provider) goes offline, your connection to the cloud is cut off, and in the meantime your database crashes and you have to restore it from backup files that are currently not accessible. Just a horror story – nothing more. Not recommended from my side, because you have no direct control over the RTO (Recovery Time Objective) anymore.

Smart Backups

With Smart Backups Microsoft regularly takes backups of your databases or your whole SQL Server instance to Windows Azure, whenever it is necessary. Yes, you read correctly: whenever necessary. Normally when I’m doing consulting and training, I say that your backup strategy is only as good as your recovery strategy. This means you should define how long it should take until your database is online again after a crash/disaster recovery, and from that period of time you should derive your backup strategy. But with Smart Backup you are just doing it the opposite way. Also not really recommended.

SQL Server with Windows Azure Storage

This is a really funny feature which makes sure that I can also do a lot of consulting engagements over the next years, which indirectly drives the improvements of the realism of my Boeing 737-800 flight simulator ;-). With SQL Server 2014 you are now finally able to store your data and transaction log files directly in Windows Azure Storage. Very cool regarding performance, because you just introduce a huge overhead in latency times for the files of your SQL Server installation. Almost all of my customers currently have problems with high latency times to their locally based SANs; imagine what happens when you go over the wire through the internet to the cloud to access your data files, or to write out transaction log records for committing your in-flight transactions. I hope sys.dm_io_virtual_file_stats has BIGINT data types for the stall time columns – I have to check that afterwards ;-)

Hekaton

With Hekaton SQL Server 2014 provides us with a true Multi Version Concurrency Control (MVCC) system integrated directly within the good old traditional relational engine of SQL Server. The main goal of MVCC is to avoid traditional locking and latching in the main memory of the SQL Server process. With Hekaton you can completely avoid locking when accessing and changing data. Yes, even the Exclusive Locks are completely gone when changing data – really awesome work!

In addition to that, Hekaton also completely avoids the traditional latching architecture that relational databases must use for synchronizing concurrent resource access in main memory. Because of that big shift, parts of the relational and storage engine of SQL Server have been completely rewritten to take advantage of these new approaches for dealing with large-scale OLTP workloads. Hekaton can be interesting, but you first have to actually hit the limits of SQL Server before it becomes interesting for you. In my opinion customers mostly hit limits in SQL Server because of a bad indexing strategy, or because the transaction log can’t cope with the current amount of transaction log records (just think about Page Splits). Over the years I have only seen a few customers who have really hit the limits of SQL Server because of Latch Contention. If you are hitting those limits, Hekaton can be interesting for you. But on the other hand, there is also a long list of restrictions that you have to keep in mind:

  • Row Sizes can’t be larger than 8060 bytes (incl. Variable Length Columns!)
  • LOB Data Types are not supported
  • No Foreign Key Constraints.

When I currently look into database designs, they are mostly not qualified for Hekaton because of the restrictions listed above. Just imagine you have a table with 2 VARCHAR(4500) fields: in sum you are over the 8060 byte limit, and you can’t use Hekaton for that specific table without doing some reengineering of that table…

ColumnStore Indexes

Yes, they did it in SQL Server 2014: Updateable Clustered ColumnStore Indexes! Nothing more to say – just WELL DONE! :-)

Statistics Improvements

Statistics are now maintained at the partition level, but they are still created at the table level. Maintaining them at the partition level means that a 20% change in the specific column at the partition level triggers the update of the underlying statistics object. Also very nice, but I would have preferred Partition Level Statistics…

Resource Governor

Up to SQL Server 2012 we were only able to throttle CPU and Memory (Query Memory). With SQL Server 2014 they have finally added IO Throttling: you can now specify the maximum IOPS for a specific Resource Pool. Also a very nice addition :-)

Conclusion

As you can see, there are a lot of new enhancements in the areas of performance, scalability, and high availability in SQL Server 2014. But for some enhancements you really have to think about whether they make sense for your environment, and whether you can live with the introduced side effects – like higher latency times. All in all I’m really looking forward to the first public CTP version to try out all that nice stuff.

Call to action: What do you think about the various enhancements in SQL Server 2014, and do you think you will migrate to it after the RTM version is released? Please feel free to leave a comment with your feelings/concerns about it…

Thanks for reading

-Klaus
