The 3 Configuration Options that you always have to change in SQL Server!
You install your brand-new SQL Server using the installation wizard, and at the very end you hit the Finish button. Woohoo – now we can go into production with our server! Sorry, but that’s not really true, because your brand-new SQL Server is misconfigured by default!
Yes, you read that correctly: a default installation of SQL Server is misconfigured in so many different areas. In today’s blog posting I want to show you the 3 configuration options that you have to change immediately after the installation for faster performance. Let’s start!
Max Server Memory
Disclaimer: If you run your SQL Server these days on a 32 bit system, please throw away your hardware, buy a 64 bit system, install the 64 bit version of SQL Server, and continue reading here.
By now you should have a 64 bit SQL Server in front of you. 64 bit means that you can theoretically address a main memory size of 2^64 bytes – that is 16 exabytes (around 16 billion gigabytes)! Because nobody needs that huge amount of memory yet, hardware vendors currently limit the address bus of a 64 bit system to “only” 48 bits – implementing the whole 64 bits doesn’t really make sense. With an address space of 48 bits you can address up to 256 TB of memory – that’s still a huge amount of space.
You can use the Max Server Memory configuration option to configure how much memory SQL Server can consume. The following picture shows this configuration option after a default installation of SQL Server on a 64 bit system.
As you can see from the previous picture, SQL Server is configured by default to consume up to 2147483647 MB – that is 2 Petabytes! Hmm, with an address bus of 48 bits we can physically address only up to 256 TB of memory, and now SQL Server can consume up to 2 Petabytes? Something is wrong here… The Max Server Memory setting is just the largest possible 32 bit integer value – 2147483647. Nothing more. So SQL Server is allowed to consume more memory than can be addressed physically? Not really a good default configuration. By default, SQL Server can eat up your whole physical memory!
You should always change this configuration option, so that you also give your OS some memory to live and breathe. Normally (without any other programs/processes on the server) you should give the OS at least 10% of the physical memory. This means that you have to lower the Max Server Memory setting. With 64 GB of physical memory I would configure the Max Server Memory setting with around 56 GB, so that the OS can consume and work with the remaining 8 GB.
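As a sketch, with 64 GB of physical memory the change could look like this through sp_configure (the value is specified in MB, so 56 GB = 57344 MB):

```sql
-- Enable advanced options so that "max server memory (MB)" becomes visible
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Leave ~8 GB of the 64 GB to the OS: cap SQL Server at 56 GB (57344 MB)
EXEC sp_configure 'max server memory (MB)', 57344;
RECONFIGURE;
```

Max Server Memory is a dynamic option, so the change takes effect immediately – no restart of SQL Server is needed.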
Cost Threshold for Parallelism
The next configuration option that you have to change has to do with how SQL Server handles parallelism. Parallelism means that SQL Server is able to run operators in an execution plan across multiple worker threads. The goal of parallelism is to improve the throughput of your queries. The first configuration option that influences parallelism in SQL Server is the so-called Cost Threshold for Parallelism:
The number that you configure here defines the query cost at which the Query Optimizer tries to find a cheaper parallel execution plan. If the parallel plan it finds is cheaper, that plan is executed; otherwise the serial plan is executed. As you can see from the previous picture, a default configuration of SQL Server uses a cost threshold of 5: when the query cost of your serial plan is larger than 5, the Query Optimizer runs the query optimization again to find a possibly cheaper parallel execution plan.
Unfortunately a cost threshold of 5 is a very small number these days, so SQL Server tries to parallelize your execution plans far too early. But parallelism only makes sense when you deal with larger queries – like in a reporting or Data Warehousing scenario. In a pure OLTP scenario a parallel plan is more often an indication of a bad indexing strategy: when you have a missing index, SQL Server has to scan your complete Clustered Index (in combination with a Filter operator or a residual predicate), your query costs get larger, they cross the cost threshold, and finally the Query Optimizer gives you a parallel plan. And people always get worried when they see parallel execution plans! But the root cause was just a missing Non-Clustered Index.
During my various consulting engagements and SQL Server Health Checks, I always recommend a Cost Threshold for Parallelism of at least 20 or even 50. That way you can make sure that SQL Server only tries to parallelize larger queries for you. And even if you have a parallel plan in front of you, you should think about whether you can make this query plan cheaper by adding a supporting Non-Clustered Index. And as I have already covered in a previous SQL Server Quickie, CXPACKET waits are not an indication that you have problems with parallelism in your system!
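As a sketch, raising the Cost Threshold for Parallelism to 50 via sp_configure looks like this:

```sql
-- Enable advanced options so that the setting becomes visible
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Only consider a parallel plan when the serial plan cost exceeds 50
EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;
```

This option is also dynamic, so newly compiled execution plans pick up the new threshold right away.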
Max Degree of Parallelism (MAXDOP)
When an execution plan goes parallel in SQL Server, the Max Degree of Parallelism defines how many worker threads each parallel operator in the execution plan can use. The following picture shows the default configuration of this option.
As you can see, SQL Server uses the default value of 0. This value means that SQL Server tries to parallelize your execution plan across all CPU cores that are assigned to SQL Server (and by default all cores are assigned to SQL Server!). As you might expect, this setting also doesn’t make sense, especially on a larger system with a huge number of CPU cores. Parallelism itself introduces overhead, and this overhead gets larger and larger the more worker threads you use.
A common recommendation is to set the Max Degree of Parallelism to the number of cores that you have in one NUMA node. That way SQL Server tries to keep a parallel plan within a single NUMA node during query execution, which avoids expensive remote memory access and therefore also improves performance.
Sometimes you will also see the suggestion to set the Max Degree of Parallelism option to 1. This is really bad advice, because it forces *every* query on your SQL Server into a serial plan! Even maintenance operations (like Index Rebuilds) are then executed single-threaded, and that will seriously hurt performance! Of course there are also some “award winning” products which dictate that you use a MAXDOP of 1…
After you have installed your SQL Server, the real work for the DBA begins: you have to adapt your SQL Server installation to your hardware layout. As you have seen in this blog posting, a default installation of SQL Server is simply misconfigured. Therefore it is very important to change these configuration options immediately after the installation. I have already seen SQL Servers in production with the default options that I have mentioned here, because they “will be configured later”. And “later” never ever happened…
So please do yourself a favour today, and configure your SQL Server for maximum performance and throughput!
Thanks for your time,