Why I don’t (yet) recommend In-Memory OLTP to my customers

Ok, some of you will hate me after this blog posting, but it has to be said. Over the last few months I have done a lot of work in the area of In-Memory OLTP, and I have also had a lot of customer requests regarding this amazing new technology. Unfortunately, none of them were really excited about In-Memory OLTP – after I had talked to them.

What is wrong with In-Memory OLTP?

NOTHING! Really! I like this amazing new technology, but I can’t yet recommend it to any of my customers. That’s it! A lot of my customers are currently running on SQL Server 2008 (R2), and they just want to move on – to SQL Server 2014. And there is that amazing new technology that gives them a throughput improvement of 100 times. So let’s use it! Unfortunately it’s not that easy.

My customers are paying me a hell of a lot of money to advise them whether the switch to SQL Server 2014 makes sense for their workload and their environment. And based on my recommendations, they will make a decision which will last for at least 5 years. Therefore I have a huge responsibility here. And I have to make sure that I will not propose a technology which doesn’t fit 100% into their environment. Unfortunately In-Memory OLTP doesn’t fit in 99% of all environments.

So what is wrong with In-Memory OLTP, so that I can’t recommend it to my valuable customers? Let’s start…

No Foreign Keys

When Microsoft first showed In-Memory OLTP to the public, it was just amazing: you flip the switch, and afterwards the throughput of your workload is 100 times faster. You remember the needle that immediately goes to the right-hand side, don’t you? But the reality looks completely different. Imagine you have an existing database, and you want to move some of its tables to In-Memory OLTP.

It will not work, because In-Memory OLTP currently doesn’t support foreign keys. And hopefully your database design has referential integrity. If not, please go back to the basics of database design and learn about referential integrity. This is one of the most common show-stoppers that customers run into: they would love to migrate some tables to In-Memory OLTP, but they don’t want to lose their referential integrity in the process…
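
Just to make this concrete, here is a minimal sketch of such a migration attempt (the table and column names are made up for this example). On SQL Server 2014 the REFERENCES clause is simply rejected for Memory Optimized Tables:

    -- A hypothetical child table that we would like to migrate to
    -- In-Memory OLTP. On SQL Server 2014 this CREATE TABLE fails,
    -- because FOREIGN KEY constraints are not supported on
    -- Memory Optimized Tables.
    CREATE TABLE dbo.Orders
    (
        OrderID INT NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
        CustomerID INT NOT NULL
            REFERENCES dbo.Customers (CustomerID) -- <- not supported
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);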

No Schema Changes

Imagine now that you have made your peace with not having foreign keys on your tables, and you have moved them to In-Memory OLTP. So far so good. How do you deal with schema changes on your Memory Optimized Tables? ALTER TABLE statements are not supported, and you can’t create additional indexes later on if you need them (hash and range indexes). When you define a hash index, you always have to specify the hash bucket count at table creation time.

This means that you have to project your data growth and how your data will change over time. When you want to change your hash bucket count later on, because you have realized that you have a huge number of hash collisions, you have to drop your Memory Optimized Table and recreate it. What do you do with your data in the meantime? You have to transfer it to another table so that you don’t lose it. Where will you store that data in the meantime? Will users be able to access the data in the meantime? A lot of questions, but no really helpful answers…
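
To make this concrete, here is a minimal sketch (the table name is made up): the bucket count is baked into the CREATE TABLE statement, and afterwards the DMV sys.dm_db_xtp_hash_index_stats is the place to check whether you sized it badly – a long average chain length combined with few empty buckets points to too few buckets:

    -- The bucket count is fixed at creation time; SQL Server 2014
    -- offers no ALTER TABLE to change it afterwards.
    CREATE TABLE dbo.TicketReservations
    (
        ReservationID INT NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576)
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

    -- Check afterwards how well the bucket count was sized.
    SELECT OBJECT_NAME(object_id) AS table_name,
           total_bucket_count,
           empty_bucket_count,
           avg_chain_length,
           max_chain_length
    FROM sys.dm_db_xtp_hash_index_stats;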

No Execution Plans

Ok, you are now fine with Memory Optimized Tables, and you want to use the Native Compilation feature of stored procedures. Wow, now the real fun begins! Natively Compiled Stored Procedures are really, really fast, because you just execute native machine code – C code that was compiled down to machine instructions! Awesome! But during query execution there is no actual execution plan anymore, because you just execute those machine instructions instead of the traditional, slower interpreted execution plans.

What do you do when you encounter performance problems? You can’t look at the actual execution plan to spot the problem. SQL Server only provides you with the estimated plan, and the estimated plan doesn’t tell you anything about the reality of the execution. It’s just an estimate, nothing more. Do you really want to dig into the generated C code to find out why your Natively Compiled Stored Procedure is so slow? I don’t think so.
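
For reference, this is roughly what such a procedure looks like on SQL Server 2014 (the names are made up; NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS, and the BEGIN ATOMIC block are all mandatory). You can still request an estimated plan for it, but an actual plan simply never exists:

    -- A minimal Natively Compiled Stored Procedure on SQL Server 2014.
    -- It may only access Memory Optimized Tables, like the
    -- dbo.TicketReservations table from the sketch above.
    CREATE PROCEDURE dbo.InsertReservation
        @ReservationID INT
    WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
    AS
    BEGIN ATOMIC WITH
        (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
        -- Runs as compiled native code: no actual execution plan is
        -- produced for this INSERT at runtime.
        INSERT INTO dbo.TicketReservations (ReservationID)
        VALUES (@ReservationID);
    END;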

No Recompilations

In addition to the missing actual execution plans, there are also no recompilations. When your data distribution changes, you still run with the same compiled execution plan. Natively Compiled Stored Procedures can’t be recompiled at runtime. As I have said, you just execute machine instructions at runtime – they can’t be recompiled.

How can you get rid of this problem? Drop and recreate your stored procedure, because an ALTER PROCEDURE statement is not supported. And when you drop your stored procedure, you also lose all the granted security permissions. Do you really want to lose them? In addition, your Natively Compiled Stored Procedure must be created with SCHEMABINDING. Therefore you can only drop your Memory Optimized Table (e.g. when you want to change the hash bucket count) after you have dropped the stored procedure itself. Ouch…
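
A minimal sketch of that dance (the SalesAppUser principal is hypothetical):

    -- No ALTER PROCEDURE for Natively Compiled Stored Procedures on
    -- SQL Server 2014: to change anything you have to drop it first...
    DROP PROCEDURE dbo.InsertReservation;
    GO

    -- ...recreate it with the full, changed definition...
    CREATE PROCEDURE dbo.InsertReservation
        @ReservationID INT
    WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
    AS
    BEGIN ATOMIC WITH
        (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
        INSERT INTO dbo.TicketReservations (ReservationID)
        VALUES (@ReservationID);
    END;
    GO

    -- ...and re-grant every permission that was lost with the drop:
    GRANT EXECUTE ON dbo.InsertReservation TO SalesAppUser;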

Native Compiled Business Logic

That’s one of my favourite ones: running complicated, CPU-intensive business logic in stored procedures with Native Compilation. Wow, I would like to have that large wallet of yours! Running CPU-intensive business logic in SQL Server is one of the most expensive ideas you could have: Enterprise Edition is licensed at around USD 7,000 per core. Therefore I don’t want to run any CPU-intensive work in SQL Server! That’s just not economical. Of course, from a Microsoft perspective it makes very good sense.

If you really have some complex CPU intensive business logic in SQL Server, move it to an Application Server (e.g. Web Services), because an Application Server is licensed in a completely different – cheaper – way. You mainly have an architectural problem, nothing more. I never ever want to run business logic inside SQL Server.

“640K Ought to be Enough for Anyone”

Haha, that’s one of the most famous quotes in computer history, commonly attributed to Bill Gates. Fortunately it was wrong, as you can see when you look at the hardware specification of the device on which you are currently reading this blog posting. When you work with In-Memory OLTP you need a huge amount of RAM. Microsoft recommends provisioning twice as much RAM as the size of your Memory Optimized Tables, e.g. 300 GB RAM for 150 GB of table data. The additional RAM is used for the row versioning that In-Memory OLTP uses internally. Imagine now that you drop your table and in the meantime want to transfer the data into another Memory Optimized Table: in that case you should have around 600 GB of RAM! Are you prepared for that?
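
If you want to know where you currently stand, the DMV sys.dm_db_xtp_table_memory_stats shows the memory consumed by each Memory Optimized Table and its indexes in the current database – a quick sanity check before you attempt such a migration:

    -- How much memory do the Memory Optimized Tables in the current
    -- database actually consume (tables plus their indexes)?
    SELECT OBJECT_NAME(object_id) AS table_name,
           memory_allocated_for_table_kb,
           memory_used_by_table_kb,
           memory_allocated_for_indexes_kb,
           memory_used_by_indexes_kb
    FROM sys.dm_db_xtp_table_memory_stats
    ORDER BY memory_allocated_for_table_kb DESC;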

It’s about OLTP!

In the first week of March I presented at the SQLRally conference in Copenhagen, Denmark, and at SQLBits in London, UK. At both conferences I covered some In-Memory OLTP material in my sessions, and of course I also covered some of the disadvantages that you have just read about. After my sessions, people came up to me and asked why I was so negative about this amazing new technology. No, I’m not really negative; I just want to show you both sides of the story, so that you are able to make the right decision.

And some people told me that In-Memory OLTP is so cool, because it can be used for staging tables in ETL processes, and we don’t even need temp tables anymore in SQL Server. Whew! Let’s spell out the name of this technology again – very slowly: In-Memory O L T P! It’s about OLTP – Online Transaction Processing. Of course it can also make sense in other scenarios, but the main focus should be OLTP scenarios! Just think about that. If you currently have problems with your TempDb, work on your workload and try to reduce the impact that the workload has on TempDb. As I always say during my performance troubleshooting sessions: our ultimate goal is to solve the underlying root cause, not to create a workaround for it (e.g. setting MAXDOP to 1 when you see CXPACKET waits).

Summary

In-Memory OLTP is an amazing new technology. But it’s a v1 implementation, and there are too many restrictions in place that prevent a large number of people from using it in production environments yet. I have outlined some of these restrictions in this blog posting, so that you think twice about whether you want to integrate In-Memory OLTP into your specific scenarios.

And I’m more than happy to come back to this blog posting with the next version of SQL Server and mark every paragraph as no longer relevant, because Microsoft has removed these limitations. It’s up to you, Microsoft, to drive this awesome technology forward!

Thanks for your time,

-Klaus

23 thoughts on “Why I don’t (yet) recommend In-Memory OLTP to my customers”

  1. George Walkey

    IM-OLTP is a NICE 1.0 feature,
    but hopefully, like Columnstores, the limitations will be lifted in SQL 2016/17/18/vNext/Azure of the month

    1. Klaus Aschenbrenner

      Hello George,

      Thanks for your comment.
      Yeah, I’m also already looking forward to a vNext of In-Memory OLTP, and I’m praying that some of the limitations will be removed 🙂

      Thanks,

      -Klaus

  2. Just out of curiosity… how many of these problems are common to all in-memory OLTP DB engines? I followed a MOOC about SAP HANA, and there was no mention of anything similar.
    Of course, some things they may have “glossed over” (I don’t know/remember if they mentioned anything about query plans), but I would be very surprised if foreign keys didn’t work in a fully in-memory columnar DB.

    1. Klaus Aschenbrenner

      Hello Paolo,

      Thanks for your comment.
      I have no idea about other In-Memory technologies, but of course In-Memory is quite a new technology, and therefore the database vendors currently have to live with some limitations.
      Hopefully this will change over time.

      Thanks,

      -Klaus

  3. Thomas Franz

    There are some more limitations:
    – you cannot use any UNIQUE or CHECK constraints
    – DEFAULT values are not supported
    – natively compiled procedures do not support many built-in T-SQL functions, such as REPLACE, so you are very limited (unless you use them only for very basic queries)

    http://www.mssqltips.com/sqlservertip/3080/workaround-for-lack-of-support-for-constraints-on-sql-server-memoryoptimized-tables/ shows a view-based workaround for some of the limitations (foreign keys / constraints) but it is still a workaround.

  4. Christian van Eeden

    With all the limitations, my thinking was that it is a good intermediate step to speed up larger temp tables. They are isolated, don’t have foreign keys, and can take advantage of the faster write speeds and reduced locking.

    Above you mention this is a bad idea, but when dealing with fixed system constraints (ie. “you have to use SAN”), what other options are there?

    Thanks for your insight.

    1. Klaus Aschenbrenner

      Hello Christian,

      Thanks for your comment.
      It’s not a bad idea to use Memory Optimized Tables for temp tables – it’s just a very limited use case (currently).
      As you have said, temp tables are used in a very limited scope, and therefore In-Memory OLTP makes perfect sense in this area.
      But almost nobody wants to use In-Memory OLTP in a broader scope at the moment, because of the restrictions.

      If you deal with an existing system with fixed constraints, you just have to choose: do you want to use In-Memory OLTP and pay its price (the limitations), or are you able to improve your workload so that there is no need for In-Memory OLTP?

      Thanks,

      -Klaus

  5. Thomas Franz

    I forgot another limitation:
    – In-Memory tables always use a BIN2 collation for their index key columns and are therefore case sensitive -> it needs a lot of code changes if you want to search e.g. for a name (you have to add an UPPER() or LOWER() to your code and indexes)

  6. Klaus Aschenbrenner

    Hello Thomas,

    Thanks for your comment.
    Yes, the need for the BIN2 collation is also a very *big* issue.

    But you remember: In-Memory OLTP is part of SQL Server, and it just works! 🙂

    Thanks!

    -Klaus

  7. Alexis Cedeno

    This is a great article.
    Everything has advantages and disadvantages. I have been researching for around three months, trying to incorporate the In-Memory OLTP tables, as well as the new clustered columnstore index features, in order to obtain better performance.
    In my humble opinion, neither of them presents a clear solution. Missing some of the main concepts like primary keys and foreign keys, having to recreate tables and stored procedures, and some more restrictions make a database designer feel uncomfortable.

  8. alberto Tapia

    Hello Klaus,

    Thanks for the article. It is refreshing to read about the parts of SQL Server that are not that good, and it helps to put things in perspective.

    I’d really love to know your opinion about the improvements in SQL 2016. Will they be sufficient for you to change your mind and recommend In-Memory OLTP to your customers, or are there still missing points that are so important to you that they make it a no-go yet?

    Regards,

    A.Tapia

    Disclosure: I am a Microsoft Employee

  9. Satyaki Chakraborty

    This is a great article. I want to get your suggestion about ERP development: should we implement In-Memory OLTP? Is it good in terms of database maintenance and design? If you get a chance, please give me your ideas.

    1. Klaus Aschenbrenner

      Hello Satyaki,

      Thanks for your comment.
      The goal of In-Memory OLTP is not to move your *complete* database to In-Memory OLTP. The idea is that you move those tables and stored procedures where you have specific performance problems.

      Thanks,

      -Klaus

  10. Hi Klaus,

    Thank you for your wonderful blog – I wanted to share it on LinkedIn, but there is no option available.

    I would appreciate it if LinkedIn were also added as an option.

    Thank you.

    Br,
    Anil Kumar

  11. Nice article….
    Really appreciate your research on In-Memory OLTP. SQL Server 2016 has made a lot of new additions to this feature.

    I am curious to know your opinion from a SQL Server 2016 perspective.
