Running a TPC-C workload on SQL Server

(Be sure to check out the FREE SQLpassion Performance Tuning Training Plan - you get a weekly email packed with all the essential knowledge you need to know about performance tuning on SQL Server.)

I have a lot of customers with impressive SQL Server workloads: databases several hundred GB in size, with several hundred simultaneous users – just great environments for performance tuning and troubleshooting. Some weeks ago I thought about how I could simulate such workloads in my own environment, so that I can make my performance troubleshooting sessions and workshops more realistic and impressive. I'm not a big fan of using the (small) AdventureWorks database with a few prepared queries for real performance troubleshooting. Unfortunately, none of my customers wants to share their workload and their specific problems with me – and the whole rest of the world – in public ;-)

Because of that, over the last few days I was looking for free tools with which I can simulate large OLTP workloads with hundreds of different users, and which also let me scale the workload out to thousands of users – if I want to, and have the requirement to do it. During the weekend I finally found a very interesting tool that is free and simulates a whole TPC-C workload: it's called Hammerora, a free open-source Oracle load-test tool that simulates an OLTP-based TPC-C workload. Yes, you read that correctly: the tool was written for Oracle, but fortunately it also supports SQL Server! You can find the tool on the Hammerora project page. In this weblog posting I want to give you a quick overview of how to use the tool, and show how easy it is to run a whole TPC-C workload on your own SQL Server within a few minutes.

After the installation of the tool, you can start it through the provided batch file hammerora.bat, which is stored in the installation directory.

When you want to simulate a TPC-C based workload, you have to do two different things:

  • Create the necessary database with the initial data
  • Run the TPC-C workload against the created database

Let's have a more detailed look at both of these steps. Before you can create the actual database, you have to tell the tool which database system you are working with. Hammerora supports the following database systems:

  • Oracle
  • MySQL
  • Microsoft SQL Server

You can select your database system through the menu option Benchmark/Benchmark Options:

When you are working with Oracle or MySQL, Hammerora also supports a TPC-H (Data Warehouse) based workload. After you have set your database system to SQL Server, you can go to the menu Benchmark/TPC-C/TPC-C Schema Options.

This dialog allows you to set all the options that Hammerora needs to create the actual TPC-C database schema. In the first part of the dialog you specify your SQL Server connection info, and in the following part you define the number of warehouses (parameter Number of Warehouses) that Hammerora creates in the database. You can find more detailed information about the database structure on the official TPC website, which also describes the concept of a warehouse as used by the TPC-C benchmark.

With the parameter Virtual User to Build Schema you specify how many sessions Hammerora should use during the creation of your database. You can think of a Virtual User as a session in SQL Server. This option allows you to create your TPC-C database in parallel with simultaneous sessions. When you have specified the necessary configuration options, you are ready to create your database: just hit the Create TPC Schema symbol in the toolbar (12th symbol from the left). One warehouse needs around 130 MB in the database, so it's also very easy to create a database with several hundred GB of data in it – just increase the number of warehouses. But trust me – it will take some time! :-)

After the creation of the TPC-C database, you have to configure which kind of test suite Hammerora executes against your database. The test suite itself is implemented as a so-called Driver Script. When you go again to the menu Benchmark/TPC-C/TPC-C Schema Options, you can see that Hammerora supports two kinds of driver scripts:

  • Standard Driver Script
  • Timed Test Driver Script

The Standard Driver Script just executes a continuous workload against your SQL Server database: when one query is finished, the next query is submitted without any pause. Your SQL Server gets hammered with a continuous workload, and you can see how far your SQL Server installation will scale. With the Timed Test Driver Script, Hammerora introduces so-called Keying and Thinking Times, which are also part of the official TPC-C benchmark. With this option, Hammerora submits only about 3 queries per minute to your SQL Server, because it tries to simulate real users, who have to key in their data (Keying Time) and need some time to think (Thinking Time). With the Timed Test Driver Script you therefore need a very high number of Virtual Users to get a reasonable workload on your SQL Server. Further information about the various options that the Timed Test Driver Script accepts can be found in the Hammerora documentation.

For my scenarios I have just used the Standard Driver Script, because I want a continuous workload against SQL Server, so that I can demonstrate various performance tuning and troubleshooting techniques. When you have selected the corresponding driver script, you have to generate the actual script that is used for creating the workload against SQL Server. You can generate that script through the menu option Benchmark/TPC-C/TPC-C Driver Script. When you have selected that option, Hammerora shows the driver script as output in the main window:

The only thing left to configure is the number of Virtual Users that Hammerora uses to execute the Driver Script against your SQL Server database. You can set the number of Virtual Users through the menu option Virtual Users/Vuser Options:

As a last step you need to create those Virtual Users inside Hammerora by using the Create Virtual Users symbol (10th symbol from the left) in the toolbar. You can see the created Virtual Users in the middle part of the main window. Finally you can use the Run Hammerora Loadtest symbol (11th symbol from the left) in the toolbar to start the TPC-C workload against SQL Server. It takes a few seconds for Hammerora to create all the necessary sessions, but then the actual TPC-C workload is executed against SQL Server:

As you can see from this weblog posting, it is very easy to create an OLTP-based workload and execute it against SQL Server. With this approach it is also very easy to demonstrate key performance problems inside SQL Server, like incorrect Transaction Log usage, Locking/Blocking scenarios, Memory Management, misconfigured I/O subsystems, etc.

On the other hand, you can also demonstrate other advanced concepts like Clustering, Database Mirroring, Log Shipping, Replication, etc. with real workloads, and show how these features affect the availability of your database. The possible scenarios are endless, and you are working with a real OLTP workload that can scale very easily to thousands of concurrent users – if you want to, and if you have the resources… I think this approach can give our conference sessions and workshops a new feel when we demonstrate SQL Server concepts and performance troubleshooting :-)

Thanks for reading


Do you like this post? Why not share it with the SQL Server community?

My new event series across Europe: SQL Server 2012 Deep Dive Days


I'm very proud to announce today that next year I'm running a new event series across Europe: the SQL Server 2012 Deep Dive Days. In this 3-day event you can learn all about what's new in SQL Server 2012. The event series is currently planned for the following cities across Europe:

  • March 3 – 5, 2012 in Vienna/Austria
  • March 12 – 14, 2012 in Munich/Germany
  • March 19 – 21, 2012 in Zürich/Switzerland
  • April 16 – 18, 2012 in Cologne/Germany

You can find further detailed information on the event website, where you can also register for the various locations. If you are not from the German-speaking part of Europe, I'm also planning two events in English in May 2012 at the following locations:

  • Brussels/Belgium
  • London/United Kingdom

In this 3-day event I cover the following SQL Server 2012 topics:

Day 1: SQL Server 2012 Development

  • Project Juneau
  • T-SQL Enhancements
  • Spatial Data Types
  • File Table
  • Fulltext Indexes
  • Semantic Search
  • Service Broker
  • SQL Express Edition – LocalDB
  • Contained Databases
  • Metadata Discovery
  • SQL Server Integration Services

Day 2: SQL Server 2012 Administration

  • Extended Events
  • Columnstore Indexes
  • LOB Online Index Rebuilds
  • Adding Online Non-NULL Values
  • Custom Server Roles
  • Resource Governor
  • Database Recovery Advisor
  • Distributed Replay
  • Memory Management
  • Partitioned Tables
  • Troubleshooting Enhancements
  • Windows Server Core Support
  • Windows Server 8 & SQL Server 2012

Day 3: SQL Server 2012 High Availability

  • Failover Clustering Enhancements
    • Local TempDb
    • Health Check Policies
    • Failover Policy Levels
    • Indirect Checkpoints
  • AlwaysOn
    • Availability Groups
    • Availability Group Listeners
    • Monitoring & Troubleshooting
  • Multiple Secondaries
    • Automatic Failovers
    • Read-Only Connections
    • Backups

If you are interested in attending one of these events, don't hesitate to register. There is also an Early-Bird price available until the end of this year.

BTW: in case you have missed the news, Microsoft released the RC0 version of SQL Server 2012 a few hours ago – you can download the bits from Microsoft.

Thanks for reading!



Why SQL Server is using so LITTLE memory


Today I want to talk about a phenomenon regarding memory management in SQL Server. A few weeks ago I had a consulting customer where we made some SQL Server performance improvements. As a side note, the customer asked me why SQL Server wasn't using as much memory as possible on their production system. Their database was about 500 GB, and when we looked into the Windows Task Manager, we saw that SQL Server was consuming just a few hundred MB:

At first this behavior of SQL Server was a little surprising, but we found the answer very quickly when we looked into the details of their production configuration. The reason was that the customer had granted the Locked Pages in Memory privilege to the service account under which SQL Server was running. You can check whether you have granted this privilege by looking into the current SQL Server Error Log. If the privilege is granted, you can see the following informational message during the startup of SQL Server:
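If you don't want to scroll through the Error Log manually, you can also search it from T-SQL. A minimal sketch, using the undocumented system procedure xp_readerrorlog (first parameter: log number, 0 = current; second: log type, 1 = SQL Server log; third: a search string):

```sql
-- Search the current SQL Server Error Log for the
-- "Using locked pages" startup message.
EXEC sys.xp_readerrorlog 0, 1, N'locked pages';
```

If the privilege is in effect, this should return the informational startup message mentioned above.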

In the first part of this posting I want to explain why and when you should use that privilege for SQL Server, and in the second part you will see how SQL Server uses that privilege and why Task Manager is lying to you regarding memory consumption.

Every time the Windows OS gets into memory pressure, it raises a so-called Memory Resource Notification Event to all processes currently running on the box (see the Win32 API function CreateMemoryResourceNotification). SQL Server subscribes to this event and processes it internally through a component called the Resource Monitor. Additional information about the Resource Monitor component is reported through the Ring Buffer type RING_BUFFER_RESOURCE_MONITOR, which is exposed through the DMV sys.dm_os_ring_buffers. When SQL Server receives the Memory Resource Notification Event, it must trim its internal caches, like the Buffer Pool or the Plan Cache, to relieve the memory pressure. But there are some scenarios in which SQL Server reacts too slowly to memory pressure. In that case the Windows OS performs a so-called Hard Working Set Trim, and pages the process (in our case SQL Server) out to the page file. The Windows OS just wants to survive and get out of its memory pressure. Imagine that: the Buffer Pool and the Plan Cache – in-memory buffers – are paged out to the page file! Accessing the Buffer Pool – our in-memory cache – then means reading pages from the physical disk! You can imagine that the performance of SQL Server will decrease massively…
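The Resource Monitor notifications mentioned above can be inspected directly. A minimal sketch of a query against sys.dm_os_ring_buffers:

```sql
-- List the most recent Resource Monitor ring buffer entries.
-- The record column contains XML describing each memory notification
-- (indicators for system-level and process-level memory pressure).
SELECT timestamp, record
FROM sys.dm_os_ring_buffers
WHERE ring_buffer_type = N'RING_BUFFER_RESOURCE_MONITOR'
ORDER BY timestamp DESC;
```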

Another case in which a Hard Working Set Trim can occur is when you have buggy device drivers, or when the Windows OS itself has bugs that lead to Hard Working Set Trims. Under Windows Server 2003 several bugs were reported that caused Hard Working Set Trims, triggered for example by:

  • Copying large files from the SQL Server box
  • Establishing a Remote Desktop Connection to your SQL Server box

But how can you find out whether your performance problems are caused by Hard Working Set Trims? When you are running a version of SQL Server prior to SQL Server 2005 SP2, you have to monitor the overall performance of your SQL Server instance; there are no messages or indications inside SQL Server that tell you whether a Hard Working Set Trim degraded its performance. Beginning with SQL Server 2005 SP2, Microsoft has added an error message to the SQL Server Log as soon as a Hard Working Set Trim occurs, like:

” A significant part of sql server process memory has been paged out. This may result in a performance degradation. Duration: 0 seconds. Working set (KB): 1086400, committed (KB): 2160928, memory utilization: 50%”

In such cases you can enable the Locked Pages in Memory privilege to get rid of this issue. With this privilege, the Windows OS is not allowed to page out pages that SQL Server has allocated for the Buffer Pool. Yes, you read that correctly: Locked Pages in Memory is only used for Buffer Pool allocations inside SQL Server!

But on the other hand, you also have to investigate WHY a Hard Working Set Trim occurred on your system. A Hard Working Set Trim is just the result of another problem on your Windows box. So you should find the root cause and eliminate it, instead of using the shortcut (enabling Locked Pages in Memory) indefinitely. I know a lot of DBAs who use Locked Pages in Memory by default, and there are several pros and cons to this which I don't want to discuss here again. Glenn Berry has a great article describing more of those pros and cons. One thing I do want to mention here again is that you should limit the memory that SQL Server can allocate for the Buffer Pool when you use Locked Pages in Memory. You can limit the memory through the Max Server Memory setting of your SQL Server instance.
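Setting that limit is done through sp_configure. A minimal sketch – the 16 GB value below is just an example; size it appropriately for your own box:

```sql
-- Cap the memory SQL Server may allocate for the Buffer Pool.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Example value only: limit max server memory to 16 GB.
EXEC sp_configure 'max server memory (MB)', 16384;
RECONFIGURE;
```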

By now you know when and how you can enable Locked Pages in Memory for your SQL Server instance. But why the heck is Task Manager reporting a wrong Working Set size when SQL Server is using that privilege? For this explanation we have to dig a little deeper into the Win32 API. By default (without Locked Pages in Memory), SQL Server allocates memory for the Buffer Pool through the VirtualAlloc function of the Win32 API. But when the SQL Server service account has the Locked Pages in Memory privilege, SQL Server internally uses the AllocateUserPhysicalPages Win32 API function for Buffer Pool allocations. This function can only be called by a process that holds the SeLockMemoryPrivilege – in other words, Locked Pages in Memory, which is its user-friendly name. AllocateUserPhysicalPages is part of the Address Windowing Extensions (AWE) API. Therefore those memory allocations can't be paged out by the Windows OS, because AWE memory regions are not pageable by design. Those memory regions are simply locked in memory – hence the name of the privilege. When you have Locked Pages in Memory enabled for your SQL Server service account, it also means that you are using AWE indirectly – funny, isn't it? As you can see from this explanation, Locked Pages in Memory is a Windows OS concept rather than a SQL Server concept. It has nothing directly to do with SQL Server; SQL Server is just a consumer of it.

With this basic knowledge in hand, it is now very easy to explain why Task Manager doesn't show the correct Working Set of the SQL Server process: Task Manager simply does not report memory allocations that are made through the Win32 API function AllocateUserPhysicalPages – that's the whole phenomenon.

But how can you now find out how much memory SQL Server is really using? There are several possibilities. Inside SQL Server you can use the DMV sys.dm_os_process_memory and its column physical_memory_in_use_kb, which returns the physical memory that SQL Server is currently using, including AWE memory allocations. If you want a more detailed breakdown of SQL Server's memory consumption, you can use the DMV sys.dm_os_memory_clerks. A memory clerk is a component inside SQL Server that tracks memory allocations for a specific SQL Server component; SQL Server gives you a memory clerk for each major component on each available NUMA node in your system. The column awe_allocated_kb shows you the AWE memory allocations in KB that SQL Server made through the Win32 API function AllocateUserPhysicalPages.
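A minimal sketch of the two DMV queries just described:

```sql
-- Total physical memory in use by the instance,
-- including AWE (Locked Pages in Memory) allocations:
SELECT physical_memory_in_use_kb
FROM sys.dm_os_process_memory;

-- Per-clerk breakdown; awe_allocated_kb shows memory allocated
-- through AllocateUserPhysicalPages, per clerk and NUMA node:
SELECT type, memory_node_id, awe_allocated_kb
FROM sys.dm_os_memory_clerks
ORDER BY awe_allocated_kb DESC;
```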

Aaron Bertrand mentioned that it would also make sense to reference a Performance Monitor counter to track the memory consumption of SQL Server. So here it is:

Outside of SQL Server you can use the Performance Monitor counter SQLServer:Memory Manager\Total Server Memory (KB) to track how large the Buffer Pool currently is.
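The same counter is also exposed inside SQL Server through the DMV sys.dm_os_performance_counters, in case you prefer reading it with T-SQL instead of Performance Monitor. A minimal sketch:

```sql
-- Read the Total Server Memory counter from inside SQL Server.
SELECT object_name, counter_name, cntr_value AS total_server_memory_kb
FROM sys.dm_os_performance_counters
WHERE counter_name = N'Total Server Memory (KB)';
```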

As you have seen in this posting, memory management inside SQL Server is a really complicated topic, and we have only touched the tip of the iceberg. So don't trust Task Manager blindly regarding the memory consumption of SQL Server – you really have to know how SQL Server is configured to get the correct picture.

Thanks for reading



Correction of my unfair and incorrect words against


On October 18, 2011 I made unfair and incorrect statements on Twitter, Facebook, LinkedIn, and Xing against the owners of. Because of this misbehavior of mine, I received a so-called cease-and-desist letter from, which I signed today; I now commit myself not to make such unfair and incorrect statements in the future against.

I also want to take the chance to correct my unfair and incorrect statements of October 18, 2011. It was incorrect and wrong of me to have blamed for the following things:

  • Unfair Business Conduct (“Some people are not playing fair”)
  • Sending of threatening emails and letters (“They wrote me and Adam Machanic hard threating emails, because we were presenting on memory spills at sqlpass. At least I and Adam Machanic got a sqlsue email, because we have presented on memory spills at sqlpass.”)
  • Organizing and running workshops of bad quality and of oversized prices (“They are charging horrible amounts for their workshops. Talked to some customers and they really suck. The attendees are not really amazed. Content is not really that bright…”)
  • They pressured me to remove my query memory spills demo from my various session downloads.

I apologize for any inconvenience that I have caused with these unfair and incorrect statements.

