Perform a Rolling Upgrade to SQL Server 2008 R2 SP2 in a Database Mirroring Scenario

(Be sure to check out the FREE SQLpassion Performance Tuning Training Plan - you get a weekly email packed with all the essential knowledge you need about performance tuning on SQL Server.)

Yesterday evening (European time zone) Microsoft released SP2 for SQL Server 2008 R2. You can download the Service Pack here:

Because I’m currently working on a Database Mirroring test lab based on SQL Server 2008 R2, I thought it would be a good exercise to perform a Rolling Upgrade to SP2 and document the correct sequence in this blog posting. In my lab I have 3 Virtual Machines performing the following roles:

  • VM1: Principal
  • VM2: Mirror
  • VM3: Witness

When you perform a Rolling Upgrade you have to do 2 failovers (Principal => Mirror => Principal), so you should schedule the upgrade for a time when the user impact of those failovers is minimal. The idea of a Rolling Upgrade in a Database Mirroring scenario is very straightforward:

  • Disable the Witness (when you are using Automatic Failovers)
  • Install the Service Pack on the Mirror
  • Failover to the upgraded Mirror
  • Install the Service Pack on the old Principal
  • Fail back to the upgraded old Principal (which is now the Mirror)
  • Install the Service Pack on the Witness
  • Add the Witness back to the Mirroring session

Let’s have a more detailed look at these steps.

1. Disable the Witness

In my lab I have a dedicated VM which acts as a Witness to provide automatic Failovers. In that case, you have to remove the Witness from your Database Mirroring session:
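The removal itself is a single statement, run on the Principal — shown here as a sketch, where "MirroredDB" is a placeholder for your mirrored database name:

```sql
-- Remove the Witness from the Database Mirroring session
-- (run on the Principal; "MirroredDB" is a placeholder name)
ALTER DATABASE MirroredDB SET WITNESS OFF;
GO
```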

Before you remove the Witness, write down its network address, because you will need that address later when you want to add the Witness back to the Database Mirroring session. You can use the view sys.database_mirroring to find the network address of the Witness:
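A query along these lines returns the Witness address (the column names come from the sys.database_mirroring catalog view):

```sql
-- Record the Witness network address before removing it
SELECT
    DB_NAME(database_id) AS database_name,
    mirroring_witness_name,          -- the TCP address you need later
    mirroring_witness_state_desc
FROM sys.database_mirroring
WHERE mirroring_witness_name IS NOT NULL;
```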

2. Install the Service Pack on the Mirror

In the next step you can install the Service Pack on the Mirror. The update itself takes just a few minutes, and during the process the SQL Server Instance on the Mirror is restarted.

During the restart your Principal runs in the EXPOSED state, which means that if you lose the Principal in that timeframe, your database is down.

As soon as your SQL Server Instance is restarted, the upgraded Mirror is automatically synchronized with the Principal:

3. Failover to the upgraded Mirror

In the next step you have to failover to the upgraded Mirror, which will become the new Principal in your Database Mirroring session.
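The manual failover is issued on the current Principal, and requires the Mirror to be synchronized; "MirroredDB" is again a placeholder name:

```sql
-- Run on the current Principal; the Mirror must be in the
-- SYNCHRONIZED state for a manual failover to succeed
ALTER DATABASE MirroredDB SET PARTNER FAILOVER;
GO
```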

During the failover your applications have to reconnect to your SQL Server databases, which means you could have a short outage (it mainly depends on how well-written your applications are…). During the failover the new Principal (the old Mirror) also has to redo all transactions from its REDO queue, which also affects how long the failover actually takes.

Note: Only SQL Server Enterprise Edition provides multiple REDO threads: you get 1 REDO thread for every 4 CPU cores assigned to your SQL Server (per database). In Standard Edition the REDO is always done with just 1 thread (per database).

As soon as the failover has completed, your new Principal will run on the new Service Pack level:
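You can verify this on the new Principal with a simple version check:

```sql
-- Check the build/Service Pack level of the instance
SELECT @@VERSION;
```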

Microsoft SQL Server 2008 R2 (SP2) – 10.50.4000.0 (X64) Jun 28 2012 08:36:30 Copyright (c) Microsoft Corporation Developer Edition (64-bit) on Windows NT 6.1 (Build 7601: Service Pack 1) (Hypervisor)

4. Install the Service Pack on the Mirror (the old Principal)

After you have applied the new Service Pack to the old Mirror, you just do the same on the old Principal.

5. Failover to the upgraded Mirror

After you have installed the Service Pack on the old Principal, you fail back to it, so that the upgraded old Principal becomes the Principal again.

Now both the Principal and the Mirror are running on the new Service Pack level, and you are almost done with your Rolling Upgrade. The only thing left is the upgrade of the Witness, if you are using one to provide Automatic Failovers.

6. Install the Service Pack on the Witness

There is nothing special to note here.

7. Add the Witness back to the Mirroring session

As soon as your Witness is also upgraded, you can add the Witness back to your Database Mirroring session:
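A sketch of the statement — the TCP address is a placeholder, so use the address you recorded earlier from sys.database_mirroring:

```sql
-- Re-add the Witness to the mirroring session (run on the Principal);
-- the address below is a placeholder for your recorded Witness address
ALTER DATABASE MirroredDB SET WITNESS = 'TCP://vm3.yourdomain.local:5022';
GO
```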


As you have seen in this blog posting, installing a new Service Pack through a Rolling Upgrade in a Database Mirroring scenario is not a big deal. But you have to plan it carefully, because you have to do 2 failovers, and those failovers will impact your applications.

Thanks for reading!


Do you like this post? Why not share it with the SQL Server community?

SQL Server 2012 AlwaysOn Availability Groups – Part 4: Adding new Replicas


In the last 2 blog postings you have learned how to set up your first Availability Group with SQL Server 2012. Today I want to show you how to join an additional Replica to an existing Availability Group. If you have followed the last 3 blog postings, you should now have an Availability Group called TestAG that spans the following 2 nodes:

  • AG-NODE1
  • AG-NODE2

In the next step we are adding a 3rd Replica on AG-NODE3 to our existing Availability Group, so that we can later demonstrate various failover scenarios between them. SQL Server Management Studio 2012 also provides a Wizard for this task. In the first step of the Wizard you have to connect to your existing Replicas, because SQL Server must change the security settings on each Replica so that the Service Account of every Replica has the CONNECT permission on the endpoint used by AlwaysOn. If the new Replica runs under a separate Service Account, the Wizard will also create the corresponding login on every other Replica for you. After you have connected to each existing Replica, the Wizard shows a page where you can actually add the new Replica to the existing Availability Group.

It’s almost the same page as when you create a new Availability Group. Here you can again specify whether you want synchronous or asynchronous commit for the new Replica, whether you want Automatic Failover, and whether the Replica should allow read-only access in the Secondary role. For the 3rd Replica I have specified Asynchronous Commit and no Automatic Failover, because you can only have Automatic Failover between 2 Replicas (in my case between AG-NODE1 and AG-NODE2).

In the next step you can configure how the initial data synchronization for the new Replica should happen. You have the same 3 options as when creating a new Availability Group:

  • Full
  • Join Only
  • Skip Initial Data Synchronization

Please refer back to the previous blog postings for more information about these options. Today I’m assuming that we have already prepared our databases on the new Replica, so I’m using the Join Only option. With that option you have to restore a full backup of your databases WITH NORECOVERY, plus all needed transaction log backups WITH NORECOVERY (everything since the last full backup, if you are not working with differential backups). This is the preferred option for larger databases, where you can’t just take a full backup and restore it during the process of adding the new Replica. You just have to make sure that your transaction log isn’t backed up in the meantime, because then you would also have to restore that backup on the new Replica. When you have added your new Replica to the Availability Group, you can check through the Dashboard whether everything is fine with your AlwaysOn deployment.
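For reference, the Join Only preparation on the new Replica looks roughly like this (database and backup file names are placeholders):

```sql
-- Prepare a database on the new Replica for the "Join Only" option:
-- restore the last full backup and all subsequent log backups,
-- all of them WITH NORECOVERY (paths/names are placeholders)
RESTORE DATABASE TestDatabase1
    FROM DISK = N'\\backupshare\TestDatabase1_Full.bak'
    WITH NORECOVERY;

RESTORE LOG TestDatabase1
    FROM DISK = N'\\backupshare\TestDatabase1_Log.trn'
    WITH NORECOVERY;
GO
```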

As you can see from the picture, the first 2 Replicas are in the Synchronized state, because we are using Synchronous Commit between them. The 3rd Replica that we have added is in the Synchronizing state, because we are using Asynchronous Commit. That Replica will remain in the Synchronizing state as long as you are using Asynchronous Commit. One important thing to mention here is that you have to configure BOTH Replicas for Synchronous Commit to actually get Synchronous Commit. Look at the following table:
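If you prefer T-SQL over the Dashboard, a query like the following against the AlwaysOn DMVs shows the same synchronization information:

```sql
-- Show commit mode and synchronization state per replica/database
SELECT
    ar.replica_server_name,
    ar.availability_mode_desc,        -- SYNCHRONOUS_COMMIT / ASYNCHRONOUS_COMMIT
    drs.synchronization_state_desc    -- SYNCHRONIZED / SYNCHRONIZING
FROM sys.dm_hadr_database_replica_states AS drs
INNER JOIN sys.availability_replicas AS ar
    ON drs.replica_id = ar.replica_id;
```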


                           Primary Synchronous     Primary Asynchronous

Secondary Synchronous      Synchronous Commit      Asynchronous Commit

Secondary Asynchronous     Asynchronous Commit     Asynchronous Commit
As soon as one of your Replicas uses Asynchronous Commit, you will get Asynchronous Commit. So you have to configure both Replicas correctly to get the desired behavior. That’s one of the first pitfalls you will encounter when setting up a new Availability Group: you configure just one Replica for Synchronous Commit and then wonder why the synchronization state is not Synchronized. So be aware of that fact!

Note: Normally you are using Synchronous Commit for local High Availability and Asynchronous Commit for remote Disaster Recovery. In our case AG-NODE1 and AG-NODE2 will be in our local data center, and AG-NODE3 will be in a remote data center.

In the next blog posting you will then see how Failovers are working with AlwaysOn – so stay tuned.

Thanks for reading!



SQL Server 2012 AlwaysOn Availability Groups – Part 3: Setting up AlwaysOn through T-SQL


In the last blog posting about AlwaysOn you have seen how to deploy your first Availability Group through the AlwaysOn Wizard provided by SQL Server Management Studio 2012. This approach is very handy during development or testing, but as soon as you want to move your Availability Group into production, you need a more automated approach. For that reason you can also deploy and configure Availability Groups through T-SQL statements. As I have already mentioned in the previous blog posting, the Wizard gives you the option to script the whole setup before you hit the Finish button.

Before you actually create your Availability Group through T-SQL, you have to make the same preparations as when you do it through the Wizard, for example making sure the databases use the FULL recovery model. You also have to restore your databases on the other Replicas WITH NORECOVERY, so that they can be joined into your Availability Group afterwards. In the first step you have to configure security between the Replicas that are involved in the Availability Group. AlwaysOn provides the same authentication mechanisms as Database Mirroring:

  • Security through Windows Authentication
  • Security through Certificate Authentication

Certificate Authentication was used with Database Mirroring when both partners were not in the same Windows domain. But as I have already mentioned in the first blog posting about AlwaysOn, ALL your Replicas must be in the SAME Windows domain, so setting up security through Windows Authentication is the most common and preferred scenario in AlwaysOn.

When all SQL Server Instances of your Replicas run under the same Windows service account, you don’t have to bother with separate logins in each SQL Server Instance. But when your Instances run under different service accounts, you have to create on each Replica the logins for all the other service accounts that connect to that Replica. Today I’m assuming 2 Replicas, where each Replica runs under a separate service account:

  • Replica 1: sqlpassion0\ag-node1_sqlsvc
  • Replica 2: sqlpassion0\ag-node2_sqlsvc

So you have to create a login for the other service account on each Replica.
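Using the service accounts listed above, that means running the first statement on AG-NODE1 and the second on AG-NODE2:

```sql
-- On Replica 1 (AG-NODE1): login for the service account of Replica 2
CREATE LOGIN [sqlpassion0\ag-node2_sqlsvc] FROM WINDOWS;
GO

-- On Replica 2 (AG-NODE2): login for the service account of Replica 1
CREATE LOGIN [sqlpassion0\ag-node1_sqlsvc] FROM WINDOWS;
GO
```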

In the next step you have to create a new SQL Server endpoint for Database Mirroring. You need that endpoint on both Replicas. Through that endpoint AlwaysOn handles the communication between the Replicas involved in your Availability Group.
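A typical endpoint definition looks like this — the endpoint name and port 5022 are common defaults, not requirements:

```sql
-- Create the Database Mirroring endpoint used by AlwaysOn
-- (run on each Replica; name and port are common defaults)
CREATE ENDPOINT [Hadr_Endpoint]
    STATE = STARTED                        -- start the endpoint explicitly
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (ROLE = ALL);
GO
```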

When you are planning to create Availability Groups between SQL Server Instances on the same physical machine, you have to make sure that you are using different ports for each endpoint.

Note: Please also make sure to open the corresponding port number of the endpoint on your firewall.

As you can see from the previous listing, the endpoint must also be explicitly started. In the next step you have to grant the CONNECT permission on the endpoint to the previously created login.
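For example, on AG-NODE1 (the endpoint name is whatever you chose when creating the endpoint):

```sql
-- Allow the service account of the other Replica to connect
-- to the AlwaysOn endpoint (here on AG-NODE1)
GRANT CONNECT ON ENDPOINT::[Hadr_Endpoint]
    TO [sqlpassion0\ag-node2_sqlsvc];
GO
```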

When you install a SQL Server 2012 Instance, the installation program also configures an Extended Events session for AlwaysOn. This event session is disabled by default. It’s good practice to enable it, because it records some critical events about AlwaysOn that can help you troubleshoot your deployment.
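Enabling and autostarting the session can be done like this (AlwaysOn_health is the session name created by Setup):

```sql
-- Make the AlwaysOn_health session start automatically with the instance
ALTER EVENT SESSION [AlwaysOn_health] ON SERVER
    WITH (STARTUP_STATE = ON);
GO

-- Start the session right now
ALTER EVENT SESSION [AlwaysOn_health] ON SERVER
    STATE = START;
GO
```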

As you can see from the listing, the event session will also be started automatically as soon as you restart your SQL Server Instance. By now you have configured all the security settings for AlwaysOn, and both Replicas are able to communicate with each other.

In the next step you have to create the actual Availability Group. SQL Server 2012 provides the CREATE AVAILABILITY GROUP T-SQL statement for this task. See the description in Books Online for further information about it:

Note: You have to call CREATE AVAILABILITY GROUP from that Replica that should be the initial Primary Replica.

The following T-SQL code shows how to create your Availability Group between 2 Replicas, where the Availability Group contains the databases TestDatabase1 and TestDatabase2.
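A sketch of that statement — the FQDNs in the ENDPOINT_URLs are assumptions, so substitute the addresses of your own endpoints:

```sql
-- Create the Availability Group on the initial Primary Replica (AG-NODE1);
-- the FQDNs in the endpoint URLs are placeholders for your environment
CREATE AVAILABILITY GROUP [TestAG]
FOR DATABASE [TestDatabase1], [TestDatabase2]
REPLICA ON
    N'AG-NODE1' WITH
    (
        ENDPOINT_URL       = N'TCP://ag-node1.sqlpassion0.local:5022',
        AVAILABILITY_MODE  = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE      = AUTOMATIC
    ),
    N'AG-NODE2' WITH
    (
        ENDPOINT_URL       = N'TCP://ag-node2.sqlpassion0.local:5022',
        AVAILABILITY_MODE  = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE      = AUTOMATIC
    );
GO
```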

The property ENDPOINT_URL must match the endpoint that you created earlier through the CREATE ENDPOINT T-SQL statement, so you have to make sure that the port numbers are identical. To join the other Replica into the Availability Group, you have to run the following statement on it:
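That statement is:

```sql
-- Run on the other Replica (AG-NODE2) to join it to the group
ALTER AVAILABILITY GROUP [TestAG] JOIN;
GO
```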

By now you have set up the Availability Group between both Replicas, but you also have to explicitly include the databases on the other Replica into the Availability Group. For that, you have to restore the databases on the other Replica (WITH NORECOVERY) and finally execute an ALTER DATABASE statement.
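The final step looks like this on the Secondary Replica, assuming the databases have already been restored WITH NORECOVERY:

```sql
-- Run on the Secondary Replica for each restored database
ALTER DATABASE [TestDatabase1] SET HADR AVAILABILITY GROUP = [TestAG];
ALTER DATABASE [TestDatabase2] SET HADR AVAILABILITY GROUP = [TestAG];
GO
```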

After that final step your Availability Group is up and running, and you can check its status through the Dashboard, as I have shown you in the previous blog posting.

I hope this blog posting gave you a good overview of how to deploy Availability Groups through T-SQL. This should be your preferred option when you finally deploy your Availability Group into production. In the next installment of this series we will talk about Failovers in AlwaysOn.

Stay tuned and thanks for reading!



Moving from Windows to Mac – a true love story

Since yesterday it is official: I own my first Mac computer – the new MacBook Air that Apple released a few weeks ago. I already own a Lenovo W510 notebook for my “serious” SQL Server work (and a few other servers in the rack in the basement of our house), but I wanted something small for travelling. The 13″ MacBook Air is the ideal solution for me. I’ve ordered the MacBook Air with the following hardware configuration:

  • 13″ MacBook Air
  • Intel i7 2GHz Processor
  • 8 GB RAM
  • 512 GB SSD

I simply upgraded every piece of hardware to its maximum. Besides the customized hardware configuration I’ve also bought a VGA adapter (which I really need for my presentations/consulting engagements) and an Ethernet adapter (the MBA has wireless, but I wanted to be sure…). Additionally I have ordered a second power adapter that will always travel with me – the original one will stay at home. In this blog posting I want to share the MacBook Air experience of my first 24 hours, and how easy or hard it was to move from a Windows to a Mac system.

The first problem I was confronted with came right after placing the order through the Apple Store. My MBA (MacBook Air) was a CTO one (Configured To Order), which means the MBA was assembled in Shanghai and finally sent to Vienna/Austria. But as I have already mentioned, I had also ordered other hardware like the VGA and Ethernet adapters. My mistake was that I ordered everything in a SINGLE order! Don’t do this with Apple! They have to merge every single part of your order together, and this can take some time, because the adapters were coming from somewhere else in the world… So it took around 2 weeks until I finally got my MBA. First lesson learned…

My MBA finally arrived yesterday around 2pm, which means I had plenty of time to get familiar with this new device and set it up the way I need it for my day-to-day work. The first amazing thing was opening the box in which the MBA was shipped from Shanghai to Vienna. Inside the carrier box you find a very nice box in the traditional white Apple style, and opening it is a very nice ceremony.

After I had carefully opened the box, I finally had my baby in my hands. It is very lightweight (around 1.3 kg) and the design is just awesome. It took me around a minute to find the power key to start the MBA – it’s located in the top right corner of the keyboard. After a few seconds the MBA had booted, and the configuration screen appeared. In the first step you have to configure your wireless LAN connection to get a connection to the Internet. That was the easy part, but in the next step you have to select your country/region for your regional settings – it’s the same concept as when you are installing Windows. But hey, I wasn’t able to find Austria in the list box – I searched for Österreich, Oesterreich, Austria, but I didn’t find it – really strange. So I selected Germany ;-) I have to check this afterwards…

In the next step you have to enter your Apple ID to connect your MBA with it. You get your Apple ID as soon as you order through the Apple Store. Mainly it’s your email address with a chosen password – the same concept as the Windows Live ID. So I started to enter my email address, but hey, the MBA has a completely different keyboard layout than a traditional Windows PC! It took me around 2 minutes to find the @ character for entering my email address. On my German keyboard layout this character is on the L key, but you also have to press ALT – so you have to press ALT + L to get an @. When you know it, it is easy…

After you have configured your Apple ID, you have to set up the user account for your MBA. It’s the same as in Windows, where you can also have multiple user accounts. But trust me: I will NOT share this piece of hardware with anyone. You can also choose a picture for your account, and the great thing here is that you can directly take a shot of yourself with the integrated webcam – Windows doesn’t have such functionality. After these few steps my MBA was up and running within a few minutes. So it’s not a big deal to get your MBA configured.

But as soon as you enter the Mac desktop everything changes, and you feel like an 80-year-old grandpa working with a computer for the first time in his life. You are completely lost, and you have no idea where to start. The first thing I wanted to do was check which hardware is present in my MBA. But how can you do that? There is no Start Menu, no Control Panel, nothing. The only thing you have is the so-called Dock at the bottom of your desktop.

The Dock is something similar to the Start Menu in Windows – you can pin your installed applications to the Dock for faster access. By default there are a few applications in the Dock. Your friend here is the 3rd icon – the Launchpad.

With the Launchpad you get access to all of your installed applications. There is one folder called Utilities, and inside it you find System Information. System Information is your friend for finding out which hardware the MBA has. Fortunately, the hardware I had ordered was indeed inside my MBA. Besides the Launchpad there is also a big difference in how running applications interact with the desktop. In Windows each application has its own menu bar. Mac OSX uses another concept here: there is one centralized menu bar at the top of your desktop, and this menu bar shows the menus of the currently active application. That’s a little bit tricky in the first few hours, but you will get used to it.

Closing or shutting down applications on Mac OSX also works a little differently than on Windows. When you have the window of an application in front of you, you have some icons on the top left side, where you can minimize (yellow button) and maximize (green button) your window. There is also a red button with an X. For me as a Windows user this means closing the application. But on Mac OSX the application just disappears from the desktop and keeps running in the background. This means that your application is still in memory and consumes memory. For the first 3 – 4 hours I “closed” applications in this way, until I discovered through the Activity Monitor that they were all still running!

The Activity Monitor itself is nothing other than the Task Manager on Windows. You can see which applications are running, how much memory they are consuming, what your current CPU utilization is, how much traffic you have on the network, and how much space is left on your hard disk. So it is a really handy tool. After I had seen that I had not actually closed my applications, I did some “research” on the Internet: your application is only really closed when you go to the menu bar (at the top of your screen), choose the first menu option, and then Quit. This option really closes your application and removes it from memory. So that’s a huge difference from Windows!

Installing new applications on the MBA also works like a charm. Mac OSX uses the concept of DMG files here. DMG stands for Disk Image, and you can think of it like an ISO file. You can download DMG files for software directly from the software vendor. After you have the DMG file on your local hard disk, you can mount it with a double-click, and with a simple drag and drop to your Applications folder you install the specific application. When you want to uninstall an application, you just move it to the Trash icon in the Dock. That’s so easy compared to Windows. Brent Ozar also has a video where he describes how you can work with DMG files on a Mac.

So here’s the software I currently have installed on my MBA:

  • Microsoft Office 2011 for the Mac (I’m writing this blog posting in Winword for Mac)
  • TweetDeck
  • Skype
  • Amazon Kindle
  • Spotify

As soon as you install Microsoft Office 2011, you also get a Remote Desktop Connection client. With that software it is very easy to connect to your Windows servers or your Windows-based VMs. So I can completely control my HP DL180 G6, which hosts 30+ VMs (for various lab scenarios), directly from my MBA while I’m sitting on the couch. That’s awesome.

By now I had installed almost every application that I need for my day-to-day life except SQL Server. Without SQL Server my business would not run ;-) . So how can I run SQL Server on an MBA? Mainly you have 3 options:

  • Porting SQL Server to Mac OSX (not really a good option)
  • Using Windows with Boot Camp
  • Using Virtualization

Besides the first (silly) option, running Windows with Boot Camp on a Mac is a viable solution. The drawback here is that you have to dual-boot your MBA, once into Mac OSX and once into Windows. So it’s not a really good experience when you want to move a little bit away from Windows (have I already told you that I’m NOT a big fan of Metro and Windows 8…?). So for me the 3rd option, using virtualization software, is the best choice, because I can stay in Mac OSX and still run my Windows-based applications. When you want to virtualize on Mac OSX, you mainly have 2 paid software choices (besides some freeware applications):

  • Parallels Desktop
  • VMware Fusion

There is a huge debate going on about which software is the better one. For me it was easy to choose: I have already worked with VMware, and VMware is a huge player in the virtualization market. Installing VMware Fusion is also very easy: you just download a DMG file, and that’s it! And it costs around € 40, so it’s really cheap. The big surprise for me came when I actually installed Windows 7 x64. It only took around 15 minutes, and applying SP1 took another 5 – 10 minutes. That MBA is soooo blazing fast!!! Thomas Kejser (ex PM in the SQLCAT team) told me that the underlying SSD in the MBA can achieve around 300 – 400 MB/sec. With a SATA notebook drive (like in the Lenovo W510) you are not able to achieve this throughput…

Installing SQL Server 2012 and SQL Server 2008 R2 + SP1 also just took around 15 – 20 minutes. I received my MBA around 02:00pm, and I was able to connect to my SQL Server in a VM around 05:45pm – that’s awesome! When you are running VMware Fusion on Mac OSX, you have 2 choices regarding the interaction with the desktop. You can run your VM standalone, which means your VM is a separate window, as you know it from Windows.

But VMware Fusion also supports Unity. When you enable Unity, the applications inside your VM become part of your Mac OSX desktop, and you can’t tell which application is a native Mac OSX one and which is a Windows one (besides the look and feel). The following screenshot shows SQL Server Management Studio directly on the Mac OSX desktop, with Unity enabled.

Brent also has a video which goes into more detail on this behavior. When you are using your VMs with Unity, you don’t really care about Windows anymore. When you shut down your MBA, VMware Fusion will save the state of your VM, and when you restart the MBA, your VM will also be started. You can also directly start applications from your Windows-based VM through the menu bar on the desktop:

When the VM isn’t booted up yet, VMware Fusion will boot it and finally launch your Windows application. With Unity, your VM (with another OS on it) with its applications and Mac OSX just melt together. You can stay in Mac OSX, but you still have the choice to execute Windows-based applications without ever leaving Mac land.

Printing from the MBA is also very easy. I have 2 printers at home, and installing them on Windows is sometimes a bit of a PITA, because you need special drivers for Windows, and sometimes Windows doesn’t find them. So I thought: installing my printers on the MBA will be impossible. Wrong! You just turn on your printer, Mac OSX detects it, installs the correct driver (without ANY interaction from your side), and finally you can use your printer. It took me around 5 minutes to install both printers – on Windows it took me hours to do the same work, sometimes without any success.

The last thing I’m currently struggling with is how to synchronize my data between my Lenovo W510 and the MBA. My goal is as follows: I want to use the MBA for the basic stuff that I’m doing:

  • Writing (Articles, Blog Postings)
  • Presenting “basic” SQL Server sessions (no HA/DR stuff)
  • Doing my Consulting/Training Stuff
  • Internet Surfing
  • Accessing my Lab VMs, that are running on a HP DL180 G6

As soon as I need more power, or when I have to take my Hyper-V VMs with me, I’m planning to use my Lenovo W510, for example for my upcoming SQLPASS precon in November, where I’m talking a whole day about Practical SQL Server Troubleshooting. Such sessions/workshops/precons can’t be done on an MBA; there is just too little power and hardware for it. But I’m still thinking about accessing the VMs on the Lenovo W510 through a Remote Desktop Connection instead of working directly on it – I haven’t made a final decision on that yet. I have to do some “proof of concepts” on that idea…

But as soon as I’m working on 2 computers, there is a lot of data that must be synchronized between both of them. Currently I’m looking into Microsoft SkyDrive for synchronizing almost all of my data between both machines (besides videos like the MCM training videos). The great thing about SkyDrive is that you can upgrade to 100 GB of storage for just around € 40 per year. That’s not too much, and this approach also gives you a disaster recovery solution, because you are storing your critical data somewhere else (yes, I’m doing fairly regular backups of my stuff, but those backups remain in our house… in a safe location…). I’ve also looked into Dropbox, but it’s just too expensive – you can get the same cheaper with SkyDrive. SkyDrive provides the necessary synchronization applications for both Windows and Mac OSX. The crazy thing here is that the application works without any problems on Mac OSX, but crashes every few minutes on my Windows machine. I have to investigate this a little bit more…

Wow, I’ve now almost finished writing my 1st blog posting on the MBA. This was also a test drive for WinWord on the Mac, and I really liked it. It’s a little bit different – especially the keyboard shortcuts – but at the end of the day you will like it. The amazing thing about the last 24 hours was also that you don’t hear the fan of the MBA. The MBA is just silent. The only time I heard it was when I installed my Windows VM. Even when I run the VM, I can’t hear the fan – that’s also amazing.

So, I hope you have enjoyed this blog posting, and as you have seen, it’s actually quite easy to move from a Windows system to a Mac OSX system. Of course, a lot of things are different, but at the end of the day both systems are just OSes with pros and cons. It just takes some time to get familiar with it. But Mac OSX is really a great alternative to Windows 8, if you don’t want that silly touch/Metro stuff on your working machine…

Thanks for reading!


SQL Server 2012 AlwaysOn Availability Groups – Part 2


It’s been a long time since I blogged about how to configure your Windows and SQL Servers for SQL Server 2012 AlwaysOn. In the last 3 months I was on the road (or better: in the air) almost every week, visiting a lot of different clients across Europe and helping them improve the performance and scalability of their SQL Server deployments.

Now I’m finally at home for at least 8 weeks (till the beginning of September), which means I also have some time for blogging and other things. In today’s blog posting you will see how to deploy your first AlwaysOn Availability Group (AG), and how to monitor the AG through the integrated Dashboard. As I have already described in the previous blog posting about AlwaysOn, I’m running my AlwaysOn lab on 5 different Hyper-V VMs with the following network configuration:

  • Node1
    • DNS Name:
    • IP Address:
    • SQL Service Account:
  • Node 2
    • DNS Name:
    • IP Address:
    • SQL Service Account:
  • Node 3
    • DNS Name:
    • IP Address:
    • SQL Service Account:
  • Node 4
    • DNS Name:
    • IP Address:
    • SQL Service Account:
  • Node 5
    • DNS Name:
    • IP Address:
    • SQL Service Account:

As soon as you have enabled AlwaysOn for your SQL Server Instances through the SQL Server Configuration Manager (which requires a restart of the SQL Server Instance), you are ready to deploy your first AG. For that purpose SQL Server Management Studio 2012 provides the new node AlwaysOn High Availability:

Before you can create your AG, you also need at least 1 database in the FULL recovery model, which will be part of that AG. In my case I’m creating 2 databases with 1 table in each database.

Another prerequisite is that you have to do at least 1 full backup of each database that will be part of your AG:
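Both prerequisites can also be handled through a few T-SQL statements. The following sketch uses example names from my lab (the database name and backup path are just placeholders):

```sql
-- Create a demo database, make sure it uses the FULL recovery model,
-- and take the initial full backup that AlwaysOn requires.
CREATE DATABASE Database1;
GO

ALTER DATABASE Database1 SET RECOVERY FULL;
GO

BACKUP DATABASE Database1 TO DISK = N'C:\Backups\Database1.bak';
GO
```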

In the next step you can start the Availability Group Wizard from SQL Server Management Studio that asks you in the first step for the name of your new AG:

In the next step of the wizard you can choose which databases will be part of your new AG. The wizard also tells you whether each database has met the required prerequisites.

In the next step you can specify how many replicas you want for your AG, whether you want automatic failover between them (supported for up to 2 replicas), whether you want synchronous or asynchronous data movement between the replicas, and whether you want read-only access on a secondary replica. So there are a lot of things to configure in this wizard step.

In my scenario I’m initially spanning the AG across the first 2 nodes. Both replicas support automatic failover, which means the data movement between them must also be set to Synchronous Commit. When a replica is in the Secondary role, I’m also allowing read-only access.

Bear in mind that you have to configure each replica here for both sides of its role: the Primary role and also the Secondary role!

In subsequent blog postings you will see how you can add additional replicas to your AG, and how you can configure Backup Preferences and an Availability Group Listener for it. In the next step you have to configure how you want to join your Secondary Replicas to your AG. There are 3 options available:

  • Full
  • Join Only
  • Skip Initial Data Synchronization

Full means that the wizard takes a full database backup and a transaction log backup, and restores both backups WITH NORECOVERY on the Secondary Replicas. This is the preferred option for very small databases, but it doesn’t really work for larger ones: it takes time to create the full backup, copy it over the network, and finally restore it on the Secondary Replicas. In that case you can prepare the databases on the Secondary Replicas yourself and use the option Join Only. Join Only assumes that the database on the other replicas is already restored WITH NORECOVERY. You can prepare the database on the other replicas for example through Log Shipping or Database Mirroring, and then finally join it to your AG.
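When you go for Join Only, the manual preparation of a secondary could look like the following sketch (the database name and the file share are hypothetical):

```sql
-- On the primary: take a full backup and a transaction log backup.
BACKUP DATABASE Database1 TO DISK = N'\\FileShare\Database1.bak';
BACKUP LOG Database1 TO DISK = N'\\FileShare\Database1.trn';

-- On the secondary: restore both WITH NORECOVERY, so the database
-- stays in the Restoring state and can later be joined to the AG.
RESTORE DATABASE Database1 FROM DISK = N'\\FileShare\Database1.bak' WITH NORECOVERY;
RESTORE LOG Database1 FROM DISK = N'\\FileShare\Database1.trn' WITH NORECOVERY;
```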

In the final step the wizard validates all your configured settings, and you can create your first AlwaysOn Availability Group. On the final screen you also have the possibility to script out the whole setup of your AG, so that you can deploy it fully automatically. In the next blog posting I will show you how to configure an AG through T-SQL.
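For reference, the script that the wizard generates is based on the CREATE AVAILABILITY GROUP statement. A heavily simplified sketch for my two-node scenario (the AG name, node names, and endpoint URLs are placeholders) could look like this:

```sql
-- Executed on the Primary Replica (Node 1):
CREATE AVAILABILITY GROUP TestAG
FOR DATABASE Database1, Database2
REPLICA ON
    N'NODE1' WITH (
        ENDPOINT_URL = N'TCP://node1.mydomain.local:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC,
        SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL)),
    N'NODE2' WITH (
        ENDPOINT_URL = N'TCP://node2.mydomain.local:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC,
        SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL));
GO

-- Executed on the Secondary Replica (Node 2):
ALTER AVAILABILITY GROUP TestAG JOIN;
ALTER DATABASE Database1 SET HADR AVAILABILITY GROUP = TestAG;
ALTER DATABASE Database2 SET HADR AVAILABILITY GROUP = TestAG;
GO
```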

If everything goes fine, SQL Server Management Studio will show your newly created AG.

You can also right-click your AG and display the Dashboard. The Dashboard shows you at a glance the overall health of your AG, and whether every replica works as expected. It’s almost the same as the Database Mirroring Monitor that you know from previous versions of SQL Server.

You can now work very easily with your created AG. Just insert some records into the table Foo on your Primary Replica. The generated transaction log records will be transferred to your Secondary Replica, where they are finally redone from the Redo Queue. It’s almost the same concept as with Database Mirroring.
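To see the data movement in action you can insert a few rows on the primary and then check the synchronization state through a DMV. A sketch (it assumes the demo table Foo has a single integer column):

```sql
-- On the Primary Replica: generate some transaction log records.
INSERT INTO Foo VALUES (1), (2), (3);
GO

-- Check whether the databases in the AG are synchronized.
SELECT
    DB_NAME(database_id) AS database_name,
    synchronization_state_desc,
    synchronization_health_desc
FROM sys.dm_hadr_database_replica_states;
GO
```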

Because you configured read-only access during the setup of the AG, you can now run a simple SELECT against the table Foo on the Secondary Replica to read the records. That was not possible with Database Mirroring, where you needed a separate Database Snapshot to get a point-in-time view of your mirrored database. Read-only access is a huge improvement of AlwaysOn over Database Mirroring!

Over the next blog postings we will now enhance our AG, and I will show several new use-cases that were earlier not possible with traditional Database Mirroring.

Stay tuned!


Do you like this post? Why not share it with the SQL Server community?

Service Broker Adventures: Stuck Queue Monitors


This week I’m in London helping a client with its large-scale Service Broker deployment, mainly troubleshooting some strange problems with Service Broker and helping to improve the overall scalability and messaging throughput. Today I want to share a strange scenario with you: how a badly written Activated Stored Procedure isn’t really activated by Service Broker. Let’s assume the following basic Service Broker setup, where we are sending messages from the Initiator Service to the Target Service:

For the Target Service an Activated Stored Procedure was written, and this Stored Procedure was driven by a configuration table. The configuration table specified whether the Stored Procedure should retrieve messages from the underlying queue or not. Have a look at the following code:
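The pattern described above can be sketched as follows; the table, procedure, and queue names are hypothetical stand-ins for the client’s actual code:

```sql
-- Simplified sketch of the configuration-driven Activated Stored Procedure.
CREATE TABLE QueueConfig
(
    QueueName SYSNAME PRIMARY KEY,
    ProcessingEnabled BIT NOT NULL
);
GO

CREATE PROCEDURE TargetQueueHandler
AS
BEGIN
    -- The problematic part: when processing is disabled in the config
    -- table, the procedure exits WITHOUT ever executing a RECEIVE.
    IF NOT EXISTS (SELECT 1 FROM QueueConfig
                   WHERE QueueName = N'TargetQueue'
                     AND ProcessingEnabled = 1)
        RETURN;

    DECLARE @handle UNIQUEIDENTIFIER;
    DECLARE @messageBody VARBINARY(MAX);

    WHILE (1 = 1)
    BEGIN
        WAITFOR
        (
            RECEIVE TOP (1)
                @handle = conversation_handle,
                @messageBody = message_body
            FROM TargetQueue
        ), TIMEOUT 1000;

        IF (@@ROWCOUNT = 0)
            BREAK;

        -- Process the message here...
    END
END
GO
```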

This approach is a little bit strange, because you could simply disable activation instead, which leads to the same result: the Activated Stored Procedure doesn’t do anything anymore (as soon as the queue is empty). This scheme was invented because sometimes it was impossible to disable the Service Broker queues due to a locking/blocking scenario. The locking/blocking occurred because there were uncommitted transactions from the Activated Stored Procedure, which means that some locks were held forever in SQL Server, which finally blocked the disabling of the queue (see my previous blog posting on that phenomenon). So when you changed the entry in the config table, the Activated Stored Procedure just exited – so far so good, in theory…

The real truth about that “solution” was that the Activated Stored Procedure was never processing messages at all. You might now ask why. The answer to that question is very simple: as soon as your Stored Procedure gets activated by Service Broker, you MUST process messages from your queue through the RECEIVE statement; otherwise Service Broker assumes that your Stored Procedure has encountered a problem, and considers the Stored Procedure to have failed. See the following remarks from Books Online:

An activated stored procedure must receive messages from the queue that activated the procedure. If the stored procedure exits without receiving messages or the queue monitor detects that the stored procedure is not receiving messages after a short time-out, the queue monitor considers the stored procedure to have failed. In this case, the queue monitor stops activating the stored procedure.

But how can you now troubleshoot this specific problem and find out whether Service Broker has considered your Stored Procedure to have failed? There’s the DMV sys.dm_broker_queue_monitors, which shows the so-called Queue Monitors. Those components are responsible for activating your Stored Procedure. A Queue Monitor can be in 3 different states:

  • INACTIVE
  • NOTIFIED
  • RECEIVES_OCCURRING

When no Stored Procedure is activated for your queue, the corresponding Queue Monitor is in the INACTIVE state. As soon as the Queue Monitor has started your Stored Procedure, the Queue Monitor goes into the NOTIFIED state. And finally the Queue Monitor goes into the RECEIVES_OCCURRING state when the Activated Stored Procedure receives messages. This means that the Queue Monitor remains in the NOTIFIED state when your Stored Procedure isn’t receiving any messages. As long as your Queue Monitor is stuck in the NOTIFIED state, you are not processing any messages from your queue!
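You can check the current state of the Queue Monitors with a query like the following sketch, which joins to sys.service_queues just to resolve the queue name:

```sql
-- Show each Queue Monitor in the current database
-- together with the name of its queue.
SELECT
    q.name AS queue_name,
    qm.state,
    qm.last_activated_time,
    qm.tasks_waiting
FROM sys.dm_broker_queue_monitors qm
INNER JOIN sys.service_queues q
    ON qm.queue_id = q.object_id
WHERE qm.database_id = DB_ID();
```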

In this specific scenario the Queue Monitor also doesn’t move into the RECEIVES_OCCURRING state when you change the entry in the user-defined configuration table – it just remains in the NOTIFIED state. The only solution is to disable and re-enable the queue in order to restart the corresponding Queue Monitor. But when your queue is blocked through locks from an uncommitted transaction… you see, one problem leads to the next one… You can find here the download of the script with which you can reproduce the stuck Queue Monitor within Service Broker.
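Restarting the Queue Monitor by disabling and re-enabling the queue is a one-liner each way (TargetQueue is again a hypothetical queue name):

```sql
-- Disabling and re-enabling the queue restarts its Queue Monitor.
ALTER QUEUE TargetQueue WITH STATUS = OFF;
GO
ALTER QUEUE TargetQueue WITH STATUS = ON;
GO
```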

What’s the moral of this story? Read the f… manual and code your Activated Stored Procedures in a very robust way. I’ve talked to a lot of different people about Service Broker over the last years. Everyone is scared of Service Broker, because it seems such an overcomplicated technology. In my opinion Service Broker isn’t really complicated, but you have to know the various design patterns behind it, and it’s up to your Activated Stored Procedures what you do with Service Broker, and how healthy your Service Broker solution will be. Service Broker itself is very robust (it’s just sending messages from A to B, nothing more) and will not cause any problems for you – most of the time you yourself are the trigger of the problems…

Thanks for reading

