Paul Larson
SQL Down Under Show 62 - Guest: Paul Larson - Published: 4 Apr 2014
SDU Show 62 features Microsoft Principal Researcher Paul Larson discussing the in-memory table and clustered columnstore technologies behind SQL Server 2014.
Details About Our Guest
Paul Larson is a principal researcher in the Database Group at Microsoft Research. Paul's research is in the area of database systems, focusing on main-memory databases, column store technology, query processing and query optimization.
Show Notes And Links
Paul's research page and links to his papers are here (http://research.microsoft.com/en-us/people/palarson/)
Show Transcript
Greg Low: Introducing Show 62 with guest Paul Larson.
Welcome, our guest today is Paul Larson. Paul is a principal researcher in the Database Group at Microsoft Research. Paul's research is in the area of database systems, focusing on main-memory databases, column store technology, query processing and query optimization. So welcome Paul!
Paul Larson: Thank you, glad to be here.
Greg Low: What I do with everyone first up is just ask: how did you ever come to get involved with SQL Server in the first place?
Paul Larson: Well, that is a fairly long story, but let me start out by saying I came from academia, where I was working on database research. I joined Microsoft Research in 1996 and have worked on and off with the SQL Server group, and quite closely over the last five years, both on column store improvements and on Hekaton.
Greg Low: Indeed. Actually, one question I have with that is, how closely do the research and product groups work together? I am mindful that many, many years ago, in another lifetime, I was sort of working with HP and I was also doing work in academia at the time. I just remember their very strong relationship between the two, and it is not something I see in a lot of companies. I was just wondering how much does that happen, and how effective is it at present?
Paul Larson: It varies from time to time and from group to group. I have had projects that were not closely or directly related to a product group, but on this particular one I have worked very, very closely with the product group for the last five years.
Greg Low: Excellent. So, I suppose this is a version of the product where there has been a fairly radical shift in some of the internal capabilities. What are some of the things that you felt caused the need for that?
Paul Larson: Well, it was quite clear, even five years ago, that we could not stay much longer with the traditional SQL Server architecture. It is an architecture that is 30 years old by now.
Greg Low: Yes.
Paul Larson: It hasn't fundamentally changed, but the underlying hardware platform has changed and is continuing to change. So we couldn't imagine going ahead and staying with the same architecture that we have had for 20 years.
Greg Low: Yes, and what in particular in the underlying hardware do you think has had the biggest effect there?
Paul Larson: The really biggest effect is definitely much larger main memories. You've got to remember, back in the early 80s when the architectures of all the main commercial database systems were designed, memories were very, very small. Disks weren't all that big either, but they were the only place that you could actually store the amount of data that you needed. So we basically had to come up with more or less a paging kind of system. It was optimized under the assumption that the data lived on disk and you occasionally bring it in when you need to do something with it.
Greg Low: Well yes, that seems to me to be the shift, isn't it? We had small amounts of memory, we would bring things into memory off disk to do the processing, but we didn't have much room to keep much of it there. Really what that meant as well, I suppose, is that this is why we were so bound to I/O in terms of it being the bottleneck most of the time.
Paul Larson: Absolutely, I mean that was the big problem: how to reduce the number of I/Os that you had to do. Now with plenty of main memory you just turn that assumption on its head and say, well, today most of the time the data is going to live in memory. So let's optimize for that case, and if we have to put it on disk, well, it is going to cost us a little bit more to actually get it to disk and from disk, so be it.
Greg Low: Yes. How much do you think that is also driven, not just by the amount of memory though, but by the compression technologies that have appeared?
Paul Larson: I don't think that the compression technologies have had that dramatic an effect.
Greg Low: I was thinking more in terms of memory-based compression, like column stores and things like that, and I was just wondering what effect that has. When I look at things like data warehouses, the approach now more and more seems to be to pull the whole thing into memory, compress the life out of it and then do everything almost by brute force.
Paul Larson: Oh definitely, and it is more urgent if you imagine you are going to store all of your data warehouse data in memory, yes, then you have to worry about compression. But even if you store it on disk and bring it in occasionally, or do some cached version of it, you want to compress it in any case. It reduces the amount of data that you transfer and the amount of scanning that you have to do. It is not just a matter of saving space, it is also a matter of saving time.
Greg Low: No, that's excellent. So now, in terms of SQL Server 2014 and the Hekaton story, what are the fundamental changes that have been made there?
Paul Larson: It is a very, very fundamental change. The architecture of Hekaton is completely optimized for main memory and for large numbers of cores, because that is what the future holds. I mean, where does the name Hekaton come from? Hekaton means 100.
Greg Low: Yes.
Paul Larson: Nothing else, just 100 in Greek, and that was the aspirational goal that we set for ourselves. We didn't expect that we would reach that, but what it meant was that it forced us to rethink everything from the ground up. You just couldn't reach into the old toolbox and say, here is this tool we have used for many years.
Greg Low: Another little tweak that we could apply. Yes, sort of, if I am aiming for something really, really high then I have to rethink everything that I am doing.
Paul Larson: Absolutely, absolutely. If you are going to go a hundred times faster, that means you have to get rid of 99% of the instructions.
Greg Low: That is a great way of putting it.
Paul Larson: Yes, yes it is, and it just highlights the fact that it is very, very hard and you cannot do it just by tweaking existing things.
Greg Low: What are the main design changes that had to happen to allow that?
Paul Larson: Well, first of all, the way we store data in memory is optimized for the assumption that it lives in memory. It is not a disk-optimized structure. We use B-trees as one of the indexing structures, but it is a version of the B-tree that is completely optimized for in-memory data and so on. That is the first thing, and the point is you can use completely different data structures and not have to worry about pages, latching pages, searching on pages and so on. That is actually much faster.
The other fundamental change is that we use only lock-free data structures, and that is guided by the fact that we expect to run on newer machines with lots of cores, where a traditional sort of latching-type approach just won't scale.
Greg Low: Mostly, is that because previously you were also often holding those latches, or holding those locks, for longer periods because you were waiting on I/Os to occur, and you are now presuming most of those things happen very fast?
Paul Larson: You have to distinguish between latches and locks.
Greg Low: Sure.
Paul Larson: Latches are short-term things that are used to protect changes to internal data structures, whilst locks are logical and protect transactions against each other. Even though you think of latches as being very short-term, they hit very hard when you are running 100 cores concurrently.
Greg Low: Yes, and so the more things that are happening concurrently with that, the more collisions are occurring all the time.
Paul Larson: The more collisions you are going to have, right. Your scalability is completely going to be determined by how much time you spend in these critical sections. Even 1 or 2 percent of your time spent in critical sections is going to severely limit your scalability.
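(As a back-of-the-envelope Amdahl's law illustration of that point, with purely illustrative numbers that are not from the conversation: if a fraction s of the work is serialized inside critical sections, the achievable speedup on p cores is

$$\text{speedup}(p) = \frac{1}{s + \frac{1-s}{p}},\qquad s = 0.02,\ p = 100 \;\Rightarrow\; \text{speedup} \approx 33.6,\qquad \lim_{p\to\infty}\text{speedup} = \frac{1}{s} = 50,$$

so even 2% of time spent serialized caps 100 cores at roughly a 34-times speedup, and no number of cores can ever exceed 50 times.)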
Greg Low: And so is there a big push then... I remember I used to do a lot of operating system internals work, and the size of the critical regions was always the thing that would determine the overall responsiveness of the operating system in a lot of ways. Are the critical regions quite small in SQL Server now?
Paul Larson: Well, you have the classic SQL Server engine and of course that remains as it is. We are talking about Hekaton here, and in Hekaton there is only one critical section. Well, let me be a little more precise.
Greg Low: Yes.
Paul Larson: In the core Hekaton engine itself, there is only one critical section, and that critical section is one instruction long. Right?
Greg Low: Awesome!
Paul Larson: There are other critical sections because we are interfacing with parts of the traditional SQL Server engine. For example the log: there are latches in that and so on, and those of course remain. But in the core Hekaton engine, there is only one critical section.
Greg Low: And of course I suppose that leads into the logging changes. Clearly the logging has fundamentally changed, in that when using the Hekaton structures you are aiming for a different style of logging?
Paul Larson: Yes, there is how the Hekaton logging is designed, and then you have the issue of how that is integrated into SQL Server. Hekaton logs only changes to user data; it doesn't log any changes to indexes or anything else, just changes to user data. It doesn't write log records at all while the transaction is running; it only writes out the changes in one go when we determine that the transaction is actually ready to commit. And an aborted transaction logs nothing.
Greg Low: Yes, I think that is something I find a lot of people don't get with the current, or sorry, the now previous versions of SQL Server: the degree of logging, where people think it's only committed data that is being written to the log.
Paul Larson: No.
Greg Low: But it is just this endless stream of things going out to the log.
Paul Larson: Exactly, exactly. All changes to indexes and so on are written out, trickled out while the transaction is running, in sort of bits and pieces, whereas we do it in one single log write at the end.
Greg Low: Is there anything more efficient about how that log write itself is done as well, given the fact you know the whole outcome at that point?
Paul Larson: Yes and no. There is a significant difference in that Hekaton per se doesn't require any particular order. It doesn't rely on the order in the log itself, because the serialization order is completely determined by the end timestamps of the transactions.
Greg Low: Yes
Paul Larson: So we can in principle parallelize log writes to our heart's content and just spread them out across any number of devices. We just need to get it off to persistent storage somewhere; that is how the logging is designed for Hekaton.
Greg Low: Yes.
Paul Larson: In the actual implementation, we decided that we are actually going to log to the traditional SQL Server log, to give users a completely transparent recovery story. You just recover SQL Server and everything is taken care of; that didn't change at all. We didn't want to have users worry about an additional set of log files for Hekaton and trying to manage those.
Greg Low: Yes, and that is good. I think one of the beauties of what has been done with the product is how well it integrates with the existing structures that were there. What I see with a lot of the databases that are trying to take advantage of in-memory technologies is that you have to make the choice between having an in-memory database or a traditional product, and at least with this, people can move table by table, or a bit of functionality at a time, across as it makes sense.
Paul Larson: Oh yes. In the early days of the project, once we had convinced ourselves that we could actually build a fast in-memory database engine, the question was, well, in what form are we going to ship it? As a separate product, or are we going to try and integrate it into SQL Server? Shipping it as a separate product would have been much, much easier.
Greg Low: Yes.
Paul Larson: But that clearly was not what was preferred from the customer's point of view. As you said, customers want an integrated story; they don't want to have a second system to worry about.
Greg Low: Yes.
Paul Larson: And so on. So the decision was to integrate it into SQL Server, and I can tell you that has been an enormous amount of work on the part of the development team.
Greg Low: No, it looks huge in terms of what actually had to be done there in terms of the structure. I suppose, from somebody's coding point of view, what are the things they need to think about differently when writing code to run against that style of engine?
Paul Larson: Are you now talking about application writers, or...?
Greg Low: Well, I suppose people writing T-SQL and/or, I suppose, now obviously native stored procedures as well.
Paul Larson: One big difference is that we are using optimistic concurrency control, so there are two things you need to pay attention to as an application writer because of that. One is that a transaction may abort at a higher rate than you are used to from traditional SQL Server. In traditional SQL Server a transaction can abort because of deadlocks, but here, especially in high-contention situations, you may see more aborts. Your application should be designed to deal with the aborted transactions and retry. You just retry.
Greg Low: And again, they are things I see as well: when people first design applications they don't tend to see many, but as the concurrency increases they can increase exponentially if they haven't paid attention to this. Yes.
Paul Larson: And the other part is, well, Hekaton is designed for short transactions. You can have long read transactions, they are just fine, because we are using multi-versioning, so they will read the old versions and won't interfere with anyone else. Update transactions, though, should be short, single, sharp transactions.
Greg Low: Actually, that is an interesting thing. What changes in regards to isolation levels?
Paul Larson: Well, in Hekaton we currently support three isolation levels: snapshot isolation, repeatable read, and serializable. We could support all the isolation levels that SQL Server currently supports, but those are the three that we went with in this release.
Greg Low: How much of an impact do you think serializable has in this sort of scenario?
Paul Larson: One of the good things, and this is very different from what you have elsewhere in SQL Server where you have a locking system, is where the cost falls. If a transaction requires a higher isolation level, the only transaction that pays is that particular transaction; nobody else is impacted. It means that that transaction has to do a bit more work.
Greg Low: Yes.
Paul Larson: So if you run at snapshot isolation, there is no validation, there are no locks, there is nothing.
Greg Low: Yes, that is awesome, because I must admit, in the existing engine, if we do see things like a lot of deadlocks, one of the very first things we often go looking for is things that are using serializable that actually don't need to be, because there are many, many things like that. In fact, Component Services, when you install DLLs, defaults to serializable. Things like BizTalk adapters all default to serializable. There are just so many of these things that people install today that default to serializable, and yet they have such a big impact at the back end.
Paul Larson: Well, so for snapshot isolation, as I said, there is no locking, no latching, nothing. For repeatable read, before the transaction commits, it goes and checks the timestamps of all the records that it read.
Greg Low: Yes.
Paul Larson: To make sure that they have not been updated. Some say it is expensive, but in reality it isn't, because those records that you have just read are most likely sitting in your cache in any case.
Greg Low: yes, exactly.
Paul Larson: They haven't gone anywhere; it is just checking in the cache. If you request serializable, then in addition to checking the timestamps of the records that you read, you also repeat your scans.
Greg Low: Yes.
Paul Larson: And check, well, have any new records shown up that you were supposed to see?
Greg Low: Yes.
Paul Larson: Yes, I admit that is a little bit more expensive, but again it is only the transaction that requested it that pays for it.
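To make the three levels concrete in T-SQL terms, these are the table hints used when touching a memory-optimized table from regular, interpreted T-SQL (a minimal sketch; dbo.Accounts and its columns are hypothetical names, while the hints themselves are the documented ones). Natively compiled procedures instead declare the level on their ATOMIC block.

```sql
-- Snapshot: no locks, no validation at commit.
SELECT Balance FROM dbo.Accounts WITH (SNAPSHOT) WHERE AccountId = 42;

-- Repeatable read: the read set is re-validated at commit time.
SELECT Balance FROM dbo.Accounts WITH (REPEATABLEREAD) WHERE AccountId = 42;

-- Serializable: read-set validation plus repeating the scans to detect phantoms.
SELECT Balance FROM dbo.Accounts WITH (SERIALIZABLE) WHERE AccountId BETWEEN 5 AND 10;
```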
Greg Low: Yes, that is excellent, because that is right. To avoid phantom reads and things in the existing engine, for people who are not familiar: maybe we read, give me the values from 5 to 10, and we don't want that to change. In the previous engine it would actually lock that range so that somebody can't now insert eight when it wasn't there before, and that has an effect on the other people rather than on this transaction.
Paul Larson: Exactly. The term I would like to use is that we are not penalizing bystanders.
Greg Low: I love that, that's great! In fact that is a huge improvement, because again, with a lot of the applications, the reason why they dive straight into serializable is that they want the most consistent view of the database that they can get, pretty much without caring about other users or innocent bystanders. And so this architecture completely changes that equation, in terms of you pay the cost rather than them paying the cost.
Paul Larson: Exactly, and if you have read-only transactions, well, snapshot isolation will give you a consistent view of the world.
Greg Low: Yes, so is there much penalty with snapshot isolation there, or is it just that the versions are kept and it follows the version chain?
Paul Larson: No, we don't actually chain versions together. A lot of people can't understand why we don't have to do that, but we don't. Instead, what we do is carry in each record two timestamps, the begin timestamp and the end timestamp, and every transaction has a read time. For snapshot isolation, you take the read time to be the time when the transaction starts, and when you scan down a hash chain, you first check the timestamps to see: does the valid time for this record intersect with my read time? If it doesn't, then you ignore it, because I am not supposed to see this one.
Greg Low: Ha, so basically when I start, I know what I am supposed to look at, and if something else appears in the meantime I just ignore it anyway.
Paul Larson: exactly, you won’t even see it.
Greg Low: that is awesome.
Paul Larson: The lowest level of the system will skip it for you.
Greg Low: Now, what you were saying there as well, though, is that it is the application itself that needs to deal with this. I suppose because it is so concurrent, and assuming optimistic concurrency, the number of failures related to that could increase?
Paul Larson: It could. Again, our fallback, if something goes wrong, is always: well, we abort the transaction and then you need to restart it.
Greg Low: But again, it is the client that needs to do that.
Paul Larson: It's the client that does that. You can do it straight in the application, or you can write an outer stored procedure, a regular SQL Server stored procedure, that does the retry.
Greg Low: Indeed, yes, I see both approaches today around deadlocks. I mean, again, there is a lot that you can handle within the stored procedure, but I also think people need to think about doing this at the client level as well, because I look at things like high availability systems, failover and things like that. Again, they need to be dealt with somewhere, and that is going to be back at the client regardless.
Paul Larson: Exactly. When a failover occurs you typically lose some transactions, and the client has to be able to deal with that, right?
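As a sketch of the "outer stored procedure that does the retry" pattern Paul describes (the procedure and parameter names are hypothetical; the error numbers are the documented In-Memory OLTP write-conflict, validation and commit-dependency errors, which are the retryable ones):

```sql
CREATE PROCEDURE dbo.usp_DoWork_WithRetry
    @AccountId INT, @Delta MONEY
AS
BEGIN
    DECLARE @retry INT = 0;
    WHILE @retry < 5
    BEGIN
        BEGIN TRY
            -- Hypothetical natively compiled procedure doing the actual work.
            EXEC dbo.usp_DoWork_Native @AccountId, @Delta;
            RETURN 0;
        END TRY
        BEGIN CATCH
            -- 41302 = write conflict, 41305 = repeatable read validation failure,
            -- 41325 = serializable validation failure, 41301 = commit dependency failure.
            IF ERROR_NUMBER() NOT IN (41302, 41305, 41325, 41301)
            BEGIN
                THROW;                       -- not retryable: surface it to the caller
            END;
            SET @retry += 1;
            WAITFOR DELAY '00:00:00.010';    -- brief back-off before retrying
        END CATCH;
    END;
    THROW 50000, N'Transaction failed after maximum retries.', 1;
END;
```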
Greg Low: Yes. Now, I think one of the other things that is sort of interesting here is that it introduces the concept of non-durable data in a much more formal way, where you have got a type of table where you are basically only maintaining the schema across a failure. What was the key thinking around that?
Paul Larson: That really came from the early customers we were talking to, who said it would be great because they have lots of cases where the data doesn't need to be durable. They can afford to lose it. Think, for example, of an ETL kind of workflow, where you have intermediate data; if you lose it for some reason, you can always rerun the workload.
Greg Low: Yes, I can think of lots of scenarios where we would use this, actually. I think back to applications we had where we used to track who was currently connected to our application for licensing purposes.
Paul Larson: exactly.
Greg Low: If the server restarted, the first thing we would always do was have a start-up procedure that went and cleared that, because we were actually storing it in a table. And so the idea of a table where, when the server is restarted, all the data just vanishes, that is really, really useful for things like that.
Paul Larson: Exactly. I mean, everybody who has been around for a while has applications that run into these types of scenarios, where the data doesn't need to be persisted.
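A minimal sketch of that kind of non-durable table (the names are hypothetical, and it assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup). DURABILITY = SCHEMA_ONLY keeps the table definition across a restart but not the rows, which is exactly the "who is currently connected" scenario above:

```sql
CREATE TABLE dbo.CurrentSessions
(
    SessionId   INT           NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserName    NVARCHAR(128) NOT NULL,
    ConnectedAt DATETIME2     NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
```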
Greg Low: I was saying even more so, you simply don't want it to be persisted. But yes, that is a huge thing. And so I suppose the next thing is the native stored procedures, and I suppose we should talk about the table structures too. When you create a table, this is basically creating a DLL or equivalent under the covers, and so this limits a bit what changes you can make to the table, but it is basically defining these structures for the table that live in memory?
Paul Larson: It defines the structures, and it also creates a number of callback procedures, because the Hekaton engine itself of course has no idea how the records are structured and where the various fields are, and you need access to the fields and so on.
Greg Low: Yes.
Paul Larson: You need a way of comparing keys, for example, and so that is one callback. If you are using a hash index, you need a hash function: what do you need to compute the hash function over, what is the hash function, and so on? There are a lot of these auxiliary callback procedures that are needed when you actually run queries. Then there are the functions that are needed for interop, meaning when you scan a Hekaton table from a regular SQL Server query, there is an operator at the bottom that needs to scan the table and understand what the fields are and where you can find them.
Greg Low: Yes, and we should mention here, that's right, if you come from T-SQL there is an interop layer that needs to get you into this in-memory structure.
Paul Larson: exactly!
Greg Low: Now that's great. In terms of altering a table, then, this really means we fundamentally just replace the table?
Paul Larson: Yes, that is basically what it means, at least today.
Greg Low: Yes, excellent! I like "at least today", indeed that's good. If we then have code accessing this, we have the ability to come from a traditional procedure through the interop layer, or we have the ability to create code that itself then gets compiled as native code.
Paul Larson: right.
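A minimal sketch of such a natively compiled procedure (the table and names are hypothetical, and dbo.Accounts would itself have to be memory-optimized). In the 2014 release the procedure must specify NATIVE_COMPILATION, SCHEMABINDING, an explicit EXECUTE AS context, and an ATOMIC block with an isolation level and language; the body is then compiled into a DLL, just like the table definitions:

```sql
CREATE PROCEDURE dbo.usp_UpdateBalance_Native
    @AccountId INT, @Delta MONEY
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    UPDATE dbo.Accounts
    SET Balance = Balance + @Delta
    WHERE AccountId = @AccountId;
END;
```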
Greg Low: And so has there been a real challenge... I am imagining this has been a case of rewriting almost every function and everything that lives out in T-SQL land, or migrating it into something that can then be done inside this native mode?
Paul Larson: It is mixed. I mean, when you say compiling to native code, yes, but you don't necessarily compile every last bit. If you have a sort of generic function, you don't necessarily want to generate code for it every time. You write it once and you call it; it is just part of the system. So there are plenty of those functions that are just called from the native code, in what we call the runtime in Hekaton. There are also times when it actually calls out to SQL Server, to the traditional SQL Server functions. That is for functions that are quite complicated, where you have to be very careful to retain full compatibility with the classical engine.
Greg Low: Yes.
Paul Larson: So you don't necessarily want to rewrite them.
Greg Low: So they are basically just left as calls to the existing internal code in that case.
Paul Larson: exactly, exactly.
Greg Low: OK. And so now, each of these stored procedures, when we build and compile it as a procedure, we again effectively end up with a DLL for that as well. What sort of limitations do you think there are at present in terms of what we can do inside that, compared to writing traditional stored procedures?
Paul Larson: Well, there are a fair number of limitations in this first version and we are aware of it. An analogy I like to use, so let me repeat it, is that we know that eventually we are going to build a skyscraper. If you know you are going to build a skyscraper, you have to make sure that you have the foundation right. Well, what that meant is that we had to invest so much in building the foundation that we could only afford to build the first few floors of this skyscraper.
Greg Low: Yes, indeed, and there is time as well.
Paul Larson: Oh yes, I mean, time to market is a feature.
Greg Low: Yes, it absolutely is.
Paul Larson: Right, so we are fully aware of the limitations in the T-SQL surface area, but that will be expanded as rapidly as we can.
Greg Low: I noticed through the pre-release phase the surface area for that kept increasing non-stop. That has been good.
Paul Larson: But right now the surface area, we think, is sufficient for a lot of OLTP types of applications, and we know that it is not for everybody, but it was quite carefully selected.
Greg Low: yes that is great.
Paul Larson: One of your favorite features is going to be in there.
Greg Low: Yes, indeed. Listen, one of the discussions we often get into is that a lot of people have been trying to push to not have stored procedures, and more and more you see people trying to remove procedures from their applications. I am not a fan of that, but I do hear it a lot, and I just wonder, do you think this changes that a little bit, in terms of you are going to get better performance now literally by having procedures in there in many of these cases?
Paul Larson: Yes, yes. I also think that it is a bad idea, specifically for Hekaton. Now, how should I explain it: the worst applications for Hekaton are very chatty applications that interact with the database server many, many times and just ask for trivial things, because then the part that Hekaton can actually accelerate is so small that it doesn't really matter. You spend most of the time coming in and out.
Greg Low: Yes, the interaction and the round trip is the thing that is way too expensive.
Paul Larson: Exactly.
Greg Low: I often see that as one of the challenges where we see people moving to Azure SQL Database. There is this sort of perception that you can just point the connection string there and it will just go, and that is true in many, many cases, but the minute you have applications that are incredibly chatty, it really highlights that. I saw an application a little while ago, a Windows app, where from the minute they started it to when the first screen came up they had done nearly 90,000 remote procedure calls. Adding that sort of latency in the middle, it could take two weeks to start up instead of a few seconds. I mean, it is a tribute to SQL Server that you could do all that in a few seconds when they are closely connected. One of the things I used to love: I was at a place years ago where people were doing web development, and instead of having the web server on the box where the SQL Server was, they made them connect with a dial-up modem. At least that way they became very painfully aware of every call they made to the database.
Paul Larson: There is no reason not to push application logic into the database server, because we run it at native-code speed.
Greg Low: Yes. Look, the other reason I am a fan of having at least one layer of abstraction inside the database anyway is that, for me, it is a big thing for agility. I go into organizations all the time where the people looking after the database feel they can never change anything, and the reason is they have no visibility whatsoever into the code that is touching it. It will be Access databases and Excel spreadsheets and Reporting Services reports and Integration Services packages; everything has just got T-SQL embedded all through it. And the minute they go to change anything, they know somebody is going to scream at them, but they just have no idea who.
Paul Larson: Right, right.
Greg Low: Yes, I always like to have at least one layer sitting in there anyway, which I can then also wrap tests around. Again, the other advantage to me is that I can make a change, run a whole series of tests, and without having to run every spreadsheet, every report, every whatever, still know whether I have broken things.
Paul Larson: right, right.
Greg Low: Now, another big change: have you been involved with the clustered columnstore changes in this version?
Paul Larson: Yes, I've been involved.
Greg Low: I had a feeling you would have been.
Paul Larson: Ever since it started.
Greg Low: And so, in 2012 I thought it was a really interesting technology, but once we built it in place I suppose there were two issues. We would put it in place and the table became read-only, which meant that for updating we ended up having to do a lot of fiddling with partitions and switching partitions and all that sort of thing. The other big one seemed to be that it was fairly difficult to get the queries into batch mode rather than row mode, and of course both of those things seem to be addressed in this version.
Paul Larson: Exactly. Yes, we were fully aware of the limitations of columnstore indexing in SQL Server 2012, but that was again all we had time to do for that release, and we really wanted to get it out to plant a stake in the ground.
Greg Low: Yes.
Paul Larson: For a lot of people it was actually very helpful; for other users, yes, the fact that you couldn't update them, and that you had to plan things just right to get batch processing to work, was a problem for some.
Greg Low: Yes.
Paul Larson: A lot of that has been addressed now in this release with the clustered columnstore.
Greg Low: Yes, this in itself is going to make a huge difference simply to the size of data warehouses. Previously, to use it you had the original data and then you had a non-clustered columnstore index sitting beside it, and although the index was often only a fraction of the size of the original data, with the clustered one the index can now actually be the data.
Paul Larson: Exactly, exactly. So that helps with the total cost and the amount of storage that you have to buy and so on. So yes, it is a good thing!
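For reference, creating the updatable clustered columnstore is a one-liner (the fact table name here is hypothetical); the index then is the table's storage rather than a copy sitting beside it:

```sql
-- The clustered columnstore index becomes the table's only storage.
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales ON dbo.FactSales;

-- Unlike the 2012 non-clustered columnstore, the table stays updatable;
-- new rows land in delta rowgroups, as discussed below.
INSERT INTO dbo.FactSales (DateKey, ProductKey, Quantity, Amount)
VALUES (20140404, 17, 3, 29.85);
```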
Greg Low: In terms of achieving the updatability of that, I gathered from my looking at it that it is done via delta stores and so on. How tricky was that to put in place?
Paul Larson: Conceptually it was not tricky. Column stores have been around for a long time, there are a few approaches that work, and they have been around long enough that it is really hard to greatly innovate at a conceptual level. It was a matter of choosing what trade-offs you were willing to make and what to put the emphasis on. So we settled on this solution with delta stores.
Greg Low: Yes.
Paul Larson: Because building these columnstore segments and so on is a pretty expensive process, and updating them in place is prohibitively expensive. So that is just something you could not do.
Greg Low: How much of an impact is there, though, in having to check both the columnstore and the delta stores, or do you somehow try to pull the two together?
Paul Larson: For typical data warehouse queries, that means many, many millions of rows.
Greg Low: Yes.
Paul Larson: Checking the delta, reading the delta store, is going to be down at about the noise level.
Greg Low: The presumption in that, though, is that the amount of stuff in the delta store will be a small percentage of the size of the table itself.
Paul Larson: Yes, delta stores are created on demand, and as soon as a delta store exceeds 1 million rows, it gets compressed right away.
Greg Low: Yes. What I suppose I was getting at is that if somebody wanted to do an UPDATE right across the whole table, well, that would suddenly become a much more painful operation.
Paul Larson: Yes, of course it is; you can do it. It is not really... how should I put it? If you do an update across a complete table, that is an expensive operation, but then should that table really be in your data warehouse?
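You can watch the delta stores and compressed rowgroups Paul describes through the sys.column_store_row_groups DMV that ships with SQL Server 2014 (dbo.FactSales is the same hypothetical table as above):

```sql
SELECT OBJECT_NAME(object_id) AS table_name,
       row_group_id,
       state_description,   -- OPEN/CLOSED delta rowgroups vs. COMPRESSED segments
       total_rows,
       deleted_rows         -- rows logically removed via the deleted bitmap
FROM sys.column_store_row_groups
WHERE object_id = OBJECT_ID('dbo.FactSales');
```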
Greg Low: Yes, that's right, that is a questionable operation to be doing in the first place. Listen, the other thing that appeared in this version was the columnstore archive compression option as well. I was wondering, what sorts of things are different there? I noticed in my own testing that it does seem to drop the size of the data down by another few percent, and sometimes even a bit more than that. I was just wondering, is it slower again to update or something? I was wondering why it isn't the default.
Paul Larson: No, it is not slow to update, not at all, because the column segments, once they are created, are immutable; they do not change. Updates basically work by having a bitmap on the side that can mask out rows: when you are scanning, ignore this row.
Greg Low: Oh, OK, so you don't actually touch the segment. Yes, because it is being updated elsewhere.
Paul Larson: Yes, it has been updated and the new version gets put into the delta store. The old version is just basically logically deleted by adding something to...
Greg Low: To the bitmap. So what is different about the columnstore archive compression as opposed to the regular columnstore compression?
Paul Larson: It is basically in the way it is written out to disk: you apply essentially a zip on it, the same sort of algorithm but our own implementation of it. You just run that across each segment before you write it out to disk.
Greg Low: Oh, so literally that is just an additional operation on the persisting of that row segment, all of the pieces for those particular column segments. Interesting!
How efficient is that, in terms of...?
Paul Larson: It does not add very much.
Greg Low: Yes, I was going to say, the things that are already done at the column level are pretty spectacular in terms of the degree to which they compress it down.
Paul Larson: I mean, it varies with the data, but seeing it shrink by 50% or so is not unusual, and in some cases you see less. But it does help.
Greg Low: So where do you pay for that? Do you pay for it just when the segment is loaded?
Paul Larson: You pay, first of all, when it is written out the first time, when you create it.
Greg Low: Yes.
Paul Larson: Then you pay for it when you bring it into memory.
Greg Low: But the in-memory version is the standard columnstore; it is only the persisted version that is in the archive format?
Paul Larson: So in memory we have a new cache where we store these column segments; they are not stored in the buffer pool. That is just the way they are cached: any column or column segment that is heavily used will stay in memory.
Greg Low: Yes, no, that's great. So the main thing with the columnstore archive compression is nothing to do with what fits in memory and things like that; it is more a case of how much space it occupies on the drive.
Paul Larson: Exactly.
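A sketch of turning archive compression on (and back off) for the same hypothetical table; it only changes how the segments are compressed when they are persisted, which matches Paul's description:

```sql
-- Rebuild with the heavier archive compression for the on-disk format.
ALTER TABLE dbo.FactSales
REBUILD PARTITION = ALL
WITH (DATA_COMPRESSION = COLUMNSTORE_ARCHIVE);

-- Revert to the standard columnstore compression.
ALTER TABLE dbo.FactSales
REBUILD PARTITION = ALL
WITH (DATA_COMPRESSION = COLUMNSTORE);
```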
Greg Low: No, that is awesome! So listen, for probably the last few things, I would love your thoughts on where you think the industry is generally heading in this sort of direction?
Paul Larson: That is very hard to say. We are now in the midst of, I would say, creating a new generation of database systems, because things have changed so rapidly that we cannot stay with the old architecture that we had for many years.
Greg Low: Yes.
Paul Larson: We are seeing a proliferation of specialized engines. So we have specialized in-memory database engines, specialized column stores, key-value stores and so on. The question of course is, what is going to happen? Well, the first thing, of course, is that if you have a specialized engine, yes, you can run some applications faster than you can on a general-purpose system, there is no question about that. But what we have done with SQL Server is that the product remains one product, and we are building these specialized engines into it, so we can cover a very broad class of applications with the specific engines that we have, because customers really don't want multiple products to deal with.
Greg Low: Indeed. One thing too: in the currently shipped product now there is also a tool that allows you to assess whether or not tables might be good candidates for going across to in-memory.
Paul Larson: Exactly.
Greg Low: I was wondering how effective you think that tool is, and what sort of things does it look for?
Paul Larson: It is not a tool that I am totally familiar with, and it is not perfect, but it looks first for tables that are heavily accessed and of a reasonable size, so that you can put them in memory. That is what it is looking for first, by analyzing the workload.
Greg Low: That is a point, because currently you also have a limit on the size too, that's right. At shipping, was that 256 GB?
Paul Larson: It was 256 GB for this release. We wanted to go out carefully, because there are some parts where we know we have bottlenecks that we haven't been able to address in this particular version. One, for example, is the log, so going with a 2 TB in-memory database felt too risky.
Greg Low: Indeed, and look, that will come though. Those things will come down the track.
Paul Larson: Yes, we will increase those limits as we gain more confidence and address some of the bottlenecks that we know exist, and yes, as quickly as we can, but they will definitely go up. Remember, it is only 256 GB, but that is not your whole database, that's just...
Greg Low: Hot tables.
Paul Larson: Yes, the hot tables.
Greg Low: No, that is awesome. I suppose one of the other things is that at least this architecture probably is much more scalable as the number of processors goes up, or the number of cores in the processors.
Paul Larson: Yes, that is what it is designed to be; it is designed for speed and scalability.
Greg Low: Yes, I notice even Intel talking about things like 60 and 80 core processors being commonplace in the next year or so. It will be interesting to see how that sort of thing scales up.
Paul Larson: Well, we will have to see how quickly they increase the number of cores. There is a lot of risk associated with that, but yes, 25 cores per socket would not be in the too distant future. Are we going to see 80? I don't know.
Greg Low: Yes, I think that might have been optimistic, but yes.
Paul Larson: I don’t know!
Greg Low: But yes, it is certainly a big trend. I think it is going to be interesting: in a lot of the servers that are being put out at the moment, the amount of memory tends to be fairly limited, but I notice the new servers are certainly providing much more capability to have larger amounts of memory now. So I think it's going to be really, really important going forward to be able to get servers that have quite large amounts of memory.
Paul Larson: You can, and at a surprisingly affordable price. Somebody showed us at a presentation that they had priced out a server with 32 cores and a terabyte of memory on Dell's website, and it came to $63,000.
Greg Low: yes which is amazing.
Paul Larson: yes it is amazing.
Greg Low: Yes, I think one of the white papers I was involved with a little while ago was the hardware sizing guide for Analysis Services Tabular, and they were saying to us, do you think it is okay to suggest, you know, a starting point of like a terabyte of memory for the servers? My first reaction was, wow, that sounds like quite a bit, but I think going forward that sort of thing will become more and more common.
Paul Larson: Oh yes, oh yes.
Greg Low: No, that is awesome. So listen, thank you so very much for your time today Paul; those are just amazing insights into things. I notice you have white papers; I have read a couple of those papers up on your website, which I will put a link to in the show notes. Is there anywhere else where people might see you coming up, or could follow your writing with interest?
Paul Larson: Well, I guess I am one of the backroom guys; I don't show up out front very often.
Greg Low: no that’s great.
Paul Larson: But yes, of the papers that we have written on Hekaton, there are two papers I have been involved with, and I also did two papers on the columnstore indexes; those are the best way to get the information.
Greg Low: Yes, I read through those, an excellent lot of insights into what was going on there, that's great! I'll put a link to those in the materials.
Listen, again, thanks so very much for your time today.
Paul Larson: Thank you, my pleasure!
Greg Low: All good.