Kalen Delaney
SQL Down Under Show 1 - Guest: Kalen Delaney - Published: 23 May 2005
In this show SQL Server MVP Kalen Delaney discusses the history of the SQL Server product and shares with us her top features for SQL Server 2005.
Details About Our Guest
Kalen Delaney has been working with SQL Server since 1987, when she joined Sybase Corporation in Berkeley, California. Kalen worked for Sybase in the technical support department and then for five years with the training organization. She was an independent trainer and consultant from 1992 until she created Solid Quality Learning, where she's now the CTO and a principal mentor. As a consultant, Kalen has worked with both Microsoft and Sybase Corporations, developing courses and providing internal training for their technical support staff. She's taught Microsoft official curriculum courses as well as her own independently developed courses to clients around the world. In addition, she's been writing regularly about SQL Server since 1995. Her latest book, Inside SQL Server 2000 from Microsoft Press, was released in November 2000. Kalen is also a contributing editor and columnist for SQL Server Magazine and has been a SQL Server MVP since 1995.
Show Notes And Links
Show Transcript
Greg Low: Introducing show number one with Kalen Delaney. So welcome. Our guest this evening is Kalen Delaney. Kalen has been working with SQL Server since 1987, when she joined Sybase Corporation in Berkeley, California. Kalen worked for Sybase in the technical support department and then for five years with the training organization. She was an independent trainer and consultant from 1992 until she created Solid Quality Learning, where she's now the CTO and a principal mentor. As a consultant, Kalen has worked with both Microsoft and Sybase Corporations, developing courses and providing internal training for their technical support staff. She's taught Microsoft official curriculum courses as well as her own independently developed courses to clients around the world. In addition, she's been writing regularly about SQL Server since 1995. Her latest book, Inside SQL Server 2000 from Microsoft Press, was released in November 2000. Kalen is also a contributing editor and columnist for SQL Server Magazine and has been a SQL Server MVP since 1995. So welcome Kalen.
Kalen Delaney: Well thank you, Greg.
Greg Low: When I started doing training with SQL Server, which seems many years ago, I recall installing it from floppy disks, so it does seem a while ago. At the time I joined the trainer group, they had a private newsgroup, and the person I always asked the toughest questions was Kalen. You've clearly been around and involved in the industry for a long time, and I thought you'd be the ideal guest for a bit of history of the product and where the product has come from. So could you give us some idea of your own background and how you've seen the product develop over the years?
Kalen Delaney: Well, it's been a lot of years, as Greg said. As you're probably aware, because I've worked with both Sybase and Microsoft, there's a common ancestry between the products. Sybase started in 1984, the first public product was released in early 1987, and I joined the company right as that was starting to hit the market, which was version 3.0.
Greg Low: What platform?
Kalen Delaney: A number of platforms. It was developed on the UNIX operating system. I can't even remember what hardware they were running on. The other was DEC VMS, but then over the next few years it expanded and ran on Stratus machines, early Solaris machines; I know there were at least ten or twelve different operating systems. Oh, and IBM AIX was another one. And then as personal computers became more prevalent, that's when they started looking at porting the product to a personal-machine kind of operating system. Even before PCs became prevalent, I remember one day somebody brought in a prototype of a Japanese computer that was like a Sun workstation running UNIX. But it was portable. And it had the Sybase database engine running on it, and I remember thinking that this was the coolest thing I had ever seen. I could pick it up and take it home with me and have my very own Sybase database engine running at home if I wanted. I had no idea what was coming down the tracks, but I saw that as a real breakthrough, that people could take a database engine and do whatever they needed to do with it.
Greg Low: What was your first exposure to Microsoft then?
Kalen Delaney: Well, Sybase and Microsoft formed a partnership that I wasn't really involved in. I first found out about it when they brought some personal computers into the technical support area and loaded this OS/2 operating system. That was the first I'd ever heard of the system, when we had those computers in our labs. Then there was this version of SQL Server running on PCs that seemed similar but different. I had only heard about Microsoft from programming languages; I had heard of Microsoft BASIC up till that point. We weren't in the Seattle area, we were down in the San Francisco Bay area, and Microsoft wasn't a big company back then. So I hadn't really heard much about them, but here was this thing that looked like the Sybase engine running on a different machine. I got to play around with it. I ran some tests and test installations, and installed OS/2 off floppy disks as well. When you bought the first version of Microsoft SQL Server, the box also came with floppy disks to install OS/2. I kept them a long time until we just had no storage space. I was trying to keep my box of all these floppies.
Greg Low: Kept this stack that was foot and a half high.
Kalen Delaney: I can't remember what I did with them, but we had this massive cleaning day to get rid of things we didn't need, so…
Greg Low: I had some people in a company I used to work for try to get rid of 5¼ inch floppy disks. We didn't have a disk reader for them any more, but we had a great attachment to those disks, yeah.
Kalen Delaney: When my kids were using computers, they thought the 5¼ inch ones were what floppy disks were, because they were softer, and they called the 3½ inch disks hard disks because they were hard.
Greg Low: The ancient ones were even floppier.
Kalen Delaney: I don’t think I used them but remember seeing them.
Greg Low: Recall them being very noisy.
Kalen Delaney: Well, things change. So Microsoft and Sybase were in a partnership where Microsoft was developing the engine. At that time Microsoft would develop it for PC systems and Sybase kept the non-PC based operating systems, or what they considered to be for bigger companies, bigger installations.
Greg Low: They saw the Microsoft one as being for small installations.
Kalen Delaney: For a few people, a small office, because it was just PCs. Although they could see the database engine might grow, they weren't looking at the fact that hardware would evolve, so our PCs now can run at enterprise scale. It's still a PC, but it's got however many terabytes of storage and however many CPUs and dozens of GB of RAM, but "Oh, it's a PC!"
Greg Low: What I often tell people is there's a great video on the channel9.msdn.com site that takes people through the product group. They show HP Superdome machines with 64-bit processors and a couple of terabytes of RAM. I suppose it's a PC.
Kalen Delaney: If it's made by a PC manufacturer… After the first couple of versions, there was some fuzziness in the agreement there: if Microsoft kept writing for PC-based systems, there would be overlap in the market as to who Sybase was trying to serve and who Microsoft was trying to serve. What the actual politics were, I don't want to get into that, it wasn't my department. I knew Microsoft handled PCs, and as Microsoft grew, the world of possibilities for Microsoft grew. After I left Sybase and went independent, I still worked with both products. My first big assignment was to go back to Sybase to develop training and train their technical support staff. I was also starting to work with Microsoft people in the Seattle area, and I tried to do both for a while because the products were still very similar; this was 1991-1992. Then the products started to diverge around the Microsoft 6.0 version, which was a big departure, and Sybase version 10…
Greg Low: What were essential differences in 6.0?
Kalen Delaney: Oh… Um… Okay, you stumped me. Well, they were just starting to diverge. Architecturally they were very similar in the way memory was managed, with configured memory and pieces of memory pre-allocated, pages were still 2K, the locking metaphors, isolation levels… They were starting to have different syntax and different possibilities, each adding new things without consulting the other. The only major difference I can think of off-hand was the introduction of replication into the Microsoft product. Sybase added a replication engine, which was a separate product; Microsoft built it in, so if you bought SQL Server you got replication along with it. For Sybase it was an add-on product that was so difficult to use that if you bought the replication engine, you had to buy two weeks' worth of training, because there was nothing you could do out of the box; you needed an expert to get it up and running.
Greg Low: Even replication in SQL Server 6.5 I thought was still at times challenging…
Kalen Delaney: It wasn't really for the enterprise yet, but then a whole lot of the product wasn't. But if you were doing basic transactional replication and not anything fancy, from one publisher to a handful of subscribers, you could get going pretty well.
Greg Low: I found if I was teaching admin classes at that stage, you could tell which students could follow instructions, because they were the ones who got the replication working…
Kalen Delaney: If you could follow instructions you could get it working. If you tried to do anything fancy or didn’t follow directions exactly, that’s when you have problems.
Greg Low: My feeling on the product at the time was that it required a lot more hands-on management than other database products I'd worked with at the time. One of the things I liked when they moved to 7.0 was that it addressed most of the reasons why people would constantly need to manage the server. The tuning features and things like that they put in, and I think also the fact that disk files could grow automatically.
Kalen Delaney: In 7.0 yes…
Greg Low: By the time we got to 7.0, yes…
Kalen Delaney: Big changes in 7.0. In fact, when 6.5 was released, so many of the what-had-been-difficult features became easily managed. It introduced the precursor of Enterprise Manager, a very nice graphical interface. Internally 6.0 and 6.5 were similar, but it introduced graphical tools to make it much easier to manage, starting the move towards 7.0's ease of management. Around the time of 6.5 I reached a point professionally where I realized I would have to make a choice between the Microsoft product line and Sybase. I knew they were diverging, and I knew if I wanted to maintain an edge of expertise I had to choose, so I spent a little time exploring, really trying to be honest in evaluating the pros and cons of going in each direction. It helped that I'd moved up to the Seattle area, so I was closer to the Microsoft engineers… But Microsoft just made it so much easier to be an independent working with the product. The early versions or precursors of MSDN and TechNet gave me the tools I needed right in my own hands, development versions, the workshops they held; it was so much easier to get my hands on the tools I needed without paying a month's salary to get them. I really appreciated how Microsoft was making products available to other people to go out and spread the word. It became the obvious choice after I studied what the options were.
Greg Low: Move to 7.0 very significant move?
Kalen Delaney: It was a traumatic move for me. In the 6.5 time frame I actually thought I was an expert. I thought I knew everything about that product! "I can rest, start exploring other areas." Then they came out with 7.0 and I realized I had to go back to school. I had a head start because I got a big project to develop internal training at Microsoft for technical support people upgrading to 7.0, so while the whole product was still in early beta I got my hands on it and worked with the engineers as I was writing the courseware. So I had a head start, but it was back to the beginning. The engine stuff, the way data is organized and stored on disk, the way indexes are organized and used to access data, the optimizer, what plans are stored, how and where they're stored, when recompilation takes place, how to tune queries: everything was different except for the SQL language. It almost seemed like a brand new product. One thing that stayed the same was the locking paradigm. The way locks and transactions were related to each other stayed the same. It did gain row-level locking, but the way locks and transactions relate still stayed the same, and still stayed different from a lot of other products that didn't share the same roots.
Greg Low: I remember reading the eBook you had on locking, which we must talk about. It would make a very interesting discussion.
Kalen Delaney: Worth whole talk on itself.
Greg Low: Move to 2000 then?
Kalen Delaney: Originally, 2000 was just seen as a 7.5, just like 6.0 got to be 6.5. It wasn't drastic as far as my writing my books went; I'd barely finished the 7.0 book. I basically got enough time to catch my breath before having to write the 2000 book. Very fast. I know there was a lot of internal debate about whether to give it a whole version number or not. Some specific changes and features internally warranted it being a whole new number. I wasn't involved in those decisions and I can't remember what they all were. It was close to being a 7.5…
Greg Low: I was expecting a 7.5 and I was quite surprised when they gave it a new thing…
Kalen Delaney: New special things made it new version…
Greg Low: It was also the year when they started naming things after years.
Kalen Delaney: That might've been part of it; they just wanted a brand new name. One of the features was indexed views and the…
Greg Low: Did you see that widely adopted? From memory, it's only in Enterprise Edition…
Kalen Delaney: That's a fuzzy area. What's only in Enterprise is the magical behavior of the optimizer: you can write a query and never reference the indexed view, and the optimizer says "Sounds familiar; oh look, I've got an indexed view with the answer already prepared." It can go and find the information in a view with an index on it when you've never referred to the view. In Standard Edition you can directly access the indexed view. You do need to use a particular hint called NOEXPAND, which says don't expand the view into its underlying tables and underlying select statement. But you can select from an indexed view and use it in Standard Edition. You can get the performance benefit, but you have to know the view is there.
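For anyone following along, a minimal sketch of what Kalen describes might look like this (the table, view and column names are purely illustrative):

-- Illustrative only: assumes a dbo.Sales table with CustomerID and Amount columns.
CREATE VIEW dbo.SalesSummary
WITH SCHEMABINDING
AS
SELECT CustomerID,
       SUM(ISNULL(Amount, 0)) AS TotalAmount,
       COUNT_BIG(*) AS OrderCount
FROM dbo.Sales
GROUP BY CustomerID;
GO
-- The unique clustered index is what materializes (and indexes) the view.
CREATE UNIQUE CLUSTERED INDEX IX_SalesSummary ON dbo.SalesSummary (CustomerID);
GO
-- Outside Enterprise Edition, reference the view directly and add NOEXPAND
-- so the stored results are used rather than expanding back to the base table.
SELECT CustomerID, TotalAmount
FROM dbo.SalesSummary WITH (NOEXPAND)
WHERE CustomerID = 42;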
Greg Low: I suppose, what's your verdict on 2000? It has been stable since it was released. I think it shipped in 1999… It must've been close to, or just prior to… Maybe it was when they had the conferences just prior to release.
Kalen Delaney: I think it was 2000… August 2000 I’m remembering. Release party at Microsoft. Pretty sure it was 2000…. Year of the name…
Greg Low: Didn’t seem to drag on…
Kalen Delaney: I think it was April 1999 when my SQL 7.0 book was done. So it was just a little over a year later, but this was a long time ago and my memory might be fading here. It's a lot of years now, and there was a whole new book to work on.
Greg Low: Do you think the fact that they haven't released a product in the meantime has cost them in terms of market share or things like that? Given that a minute is a long time in this business…
Kalen Delaney: It's not an area I'm looking at all the time, but from the customers I'm working with, I don't think it's the worst thing in the world to have it be obvious that it's been around, it's stable, they're introducing service packs for it, and there's a lot of expertise out there about it. I know when 2000 came on the heels of 7.0, that was a negative thing as far as sales go. I would've thought a 7.5 wouldn't have shocked people the way a whole new product you've got to go and spend money on did. The fact that it's been out there for so long and the product is stable, running incredible applications, I don't think that's a bad thing at all. But yes, it has been five years now, so now is the time for a new version.
Greg Low: And so, over that period, you generated a second version of the book, the Inside SQL Server 2000 version…
Kalen Delaney: Right, and because there wasn't a new version after that, I wrote the eBook on locking you mentioned, which does overlap with the Inside book, but the eBook has some things I've learned since the time the 2000 book was written. I'm always learning new things… I keep playing with the product and keep learning new things, troubleshooting and teaching classes… It includes things I've learned since the product shipped, it's a lot more hands-on, and it looks at how a particular lock can be generated and what scripts to run to see what's occurring, so it's much more intended to be hands-on and even deeper than the Inside book.
Greg Low: The thing that impressed me was actually how open I found Microsoft were with the details of how the product works. I recall, for example, about the time the product came out, Jeff Clark at the local Microsoft office would put on the .NET Shows before the local user group, and we'd sit and watch a lot of the details on it. I recall something like 40 hours of .NET Shows on just how the query optimizer worked. I sat there and was almost bored…
Kalen Delaney: Some of it’s very deep, yeah.
Greg Low: I was also torn, because I was almost completely in awe of how they came up with the schemes.
Kalen Delaney: Even more hours of that for the next version of product. A lot is available, you can get webcasts, downloadable or with resource kits that give lots of juicy details.
Greg Low: We do need to move on then, and probably talk about the new product. I think one of the things that most amazed me is how many things have been enhanced or changed. I have a slide in a PowerPoint presentation I've used a couple of times, and I like it because almost every tiny little part of the slide is covered with the detail of some new feature in little type. I've got a mate who's a PowerPoint MVP who would tell me it's the worst slide ever created.
Kalen Delaney: It depends on what your purpose is. If you want them to learn what's on there… I think I know what slide you're talking about. It wants to overwhelm someone, and I think it does.
Greg Low: So thoughts on next version of product… You excited about it?
Kalen Delaney: Of course I'm really excited about it. The new things are getting my blood flowing again; there are things I take a look at and want to stop and go play with for a few hours or days. I'm working on a paper on the concurrency enhancements, and I'll talk more about that in a little bit. What I like is that the real guts of the engine have not changed like they did between 6.5 and 7.0. The upgrade path is much less painful than it was, but the amount of change in the product is just as big. Many of the existing features are enhanced, and there are many new features, but existing applications should be much more easily portable. Many of them will just work under the new version as you start to explore what you can add to them and ways to make them even better.
Greg Low: I've found in the groups I deal with, in areas like the ADO groups, I've been very impressed by their commitment to making sure that things still work like they did before. In fact, for some of the things I look at, like remoting formats for DataSets in the ADO area, the changes they've made could be considered bug fixes in one or two areas. They've maintained the existing behavior just in case someone is depending on it, and enhanced it in a different way. You see that from ADO right through to the SQL end. You see a super commitment…
Kalen Delaney: There are very few things that just won't work. I haven't found any that plain won't work. There may be some that are a bit awkward, but most 2000 applications will be upgradable and runnable.
Greg Low: The only thing I've come across is in the security area. I've found some things that weren't good practice before that they've fixed. With the catalog views, if you were depending on users being able to see objects even when they don't have permissions on them, that's not going to work anymore.
Kalen Delaney: The security model is much more secure out of the box, so if you had built an application assuming security was loose, there might be something that doesn't work. But as an administrator you can go in and grant permissions and enable things. You can go in and enable them if you want, and you won't have to change the application; just enable what the application users need the ability to do, and then they'll be able to run the application.
Greg Low: I think security is the only area where you might potentially need to do something… I think SQL Server logins were previously treated as deprecated, and they used words like "for backwards compatibility only", and now they've dealt with the fact that they're going to be around for a while and fixed all the issues.
Kalen Delaney: Password enforcement: you can set password policy and expiration on SQL logins, just like with the Windows logins you use…
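In the new syntax that's roughly the following (the login name and password are just placeholders):

-- A SQL login that picks up the Windows password policy and expiration rules.
CREATE LOGIN SalesAppLogin
WITH PASSWORD = 'Pl@ceholder-0nly!',
     CHECK_POLICY = ON,
     CHECK_EXPIRATION = ON;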
Greg Low: I think the fact that they've tightened it up means the default behavior is tighter than what was there before. If you were dependent on the looser behavior, that's the only area where I've seen you might need to change things…
Kalen Delaney: My point is you don't have to change the application. An administrator could go in and open up security holes and make it looser. If you want the application to run, it can, by making other changes behind the scenes.
Greg Low: What are you most excited about with product?
Kalen Delaney: That's a real tough question. I just got back from two weeks of giving keynotes in Microsoft's Great Lakes district on my favorite features. Not only was it difficult to come up with a short list of favorite features, but as I was preparing examples, I'd read something about one of the things I was looking at, hyperlink through Books Online, read about another feature I hadn't heard about, and jump to somewhere else. Even after I came up with a list of ten it was really difficult, so I actually cheated on a few. "One of my top ten features is the enhancements to the security model." And then I'd have five areas within security that were really changes for the better, things like security of metadata, password policy enforcement, and having triggers on data definition statements… It's been a frequently requested feature to be able to react when someone creates or drops an object. We could actually create triggers on system tables in 7.0, but they wouldn't fire on a CREATE TABLE, because SQL Server wasn't really doing an insert into the system table behind the scenes. SQL 2000 told you, "No, you cannot create triggers on system tables." So what do you do if you want to know every time someone creates or drops a table? Now there are specific triggers for "alter table", "drop a view"…
Greg Low: One of my favorites is the DDL "all events" option, for two reasons. One is that I'm sure I'm not the only person in the world who has run scripts across the master database when they were meant to be running against another database…
Kalen Delaney: Script has drop table and…
Greg Low: Now what I do is build a DDL trigger in the master database that says if I'm not altering the trigger itself, just roll back, and it completely protects my master database from stray DDL. Stunning. I love it.
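A minimal sketch of that kind of guard trigger, created in master (the trigger name is made up, and the exception for altering or dropping triggers mirrors Greg's "if not altering the trigger" rule):

USE master;
GO
CREATE TRIGGER GuardMaster
ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
BEGIN
    DECLARE @eventType nvarchar(100);
    SET @eventType = EVENTDATA().value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(100)');
    -- Allow changes to the guard trigger itself; roll back any other DDL in master.
    IF @eventType NOT IN (N'ALTER_TRIGGER', N'DROP_TRIGGER')
    BEGIN
        RAISERROR('DDL blocked in master - check which database you are in.', 16, 1);
        ROLLBACK;
    END
END;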
Kalen Delaney: Oh! So creating demo scripts but doing it in master instead of test database. Absolutely.
Greg Low: You load a script, forget to change the database, run it in master, and otherwise you'd sit there weeding out objects to get it back the way it was.
Kalen Delaney: One hint about that: there is the USE statement you can put in a script, and I've learned the hard way to always go and check for a USE statement in a script, even if I think it's already going to be there. It doesn't hurt to USE a database if you're already there.
Greg Low: DDL triggers let me fix it now in a flash; it's great. The other thing I think is great is auditing, if I want to build an audit table and log all the changes…
Kalen Delaney: It's much cheaper and much more specific. Whatever you want to know about the changes people are making, it's much more straightforward and not as expensive as having to turn on the full auditing capability, or even write your own trace to do it, which is a little indirect. This is a direct way to know when something's happened.
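As a sketch of that style of auditing (the audit table and trigger names are invented), EVENTDATA() hands the trigger an XML description of the event that can be shredded into a logging table:

CREATE TABLE dbo.DDLAudit
(
    AuditID     int IDENTITY PRIMARY KEY,
    EventTime   datetime DEFAULT GETDATE(),
    LoginName   nvarchar(128),
    EventType   nvarchar(100),
    CommandText nvarchar(max)
);
GO
CREATE TRIGGER AuditDDL
ON DATABASE
FOR CREATE_TABLE, ALTER_TABLE, DROP_TABLE
AS
BEGIN
    DECLARE @e xml;
    SET @e = EVENTDATA();
    -- Pull the interesting pieces out of the event XML and log them.
    INSERT INTO dbo.DDLAudit (LoginName, EventType, CommandText)
    VALUES (@e.value('(/EVENT_INSTANCE/LoginName)[1]', 'nvarchar(128)'),
            @e.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(100)'),
            @e.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)'));
END;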
Greg Low: What I did ask the product group about, though, is that I'd like to be able to change the data in the trigger. At the moment, the event data is a read-only thing you get that tells you what's in there, but they said, "That's a little tough." I talked to them about giving us an "instead of" trigger.
Kalen Delaney: For DDL triggers, yes…
Greg Low: I realized you could use it for something like CREATE PROC; you could have it tidy up the formatting before it stores it in the database…
Kalen Delaney: Actually change trigger itself…. Oh.
Greg Low: It occurred to me that in other triggers you can change the data during the trigger, but in DDL triggers you can't, and I thought that would be a nice enhancement. I think some of the things you could then enforce would be considered bizarre, but you could enforce naming conventions, for example.
Kalen Delaney: There you go. Interesting. So you’ve made a strong business case to Microsoft. They will respond if you have a strong business case for your request.
Greg Low: What else do you like the look of in the new version?
Kalen Delaney: Not just security on metadata, but all the metadata that is out there. The dynamic metadata is going to be awesome. The dynamic management objects come in two categories: dynamic management views and a couple of dynamic management functions. The functions are table-valued, so both the views and the functions are treated as if they were tables, but they give us data that up to this point was never available except in some obscure DBCC commands. Now it's available as a table, so we can filter it, save it, inspect it, whatever we need to do. We've got dynamic management objects to tell us all of the active transactions, all the blocked transactions, long-running queries… A lot of this was available in sysprocesses before, but not nearly as friendly or as extensive. My absolute favorites, as soon as I get high-powered queries and applications running so I can start looking at the metadata, are the dynamic management objects that show us index usage. That's been a big hole in the product up to this point that I've wished I could fill… A lot of the consulting I do is on already-running applications, where I'm called in to fix things that are broken. 7.0 and 2000 introduced the Index Tuning Wizard, which does give us a report of which indexes are used in a workload that we supply to it, but if the workload we supply doesn't cover everything, it isn't going to return useful information to us. The new dynamic management views aren't perfect, but because they're managed internally, they're much more direct. There will be one that, for any index whose usage information is still in cache, can tell us how often it's been scanned or updated. Nothing like this has been available before, and I really look forward to using it in some real testing situations.
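One of the objects that exposes this in the shipping product is sys.dm_db_index_usage_stats; a rough sketch of the kind of query it allows (noting it only covers indexes touched since the server last started):

-- Reads versus writes for each index in the current database.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name AS index_name,
       s.user_seeks + s.user_scans + s.user_lookups AS reads,
       s.user_updates AS writes
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id
 AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
ORDER BY reads DESC;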
Greg Low: I attended two indexing sessions that Kim Tripp ran at the recent conference in Munich, and she was showing things on index usage with the dynamic management views; it was completely stunning. Excellent. What's your thought on EXECUTE AS for setting execution context, as to whether it's a good or bad idea in procedures?
Kalen Delaney: I see that as another security feature, being able to have control over the context a stored procedure runs in. I lump it in with the security changes, which in general are a really good thing. I haven't done a lot of testing with it so far, but I know that, if used well, it's going to replace the concept of chaining that had to be enabled and that was awkward to use in SQL 2000.
Greg Low: Chaining has been difficult concept for a lot of people…
Kalen Delaney: It hasn't always worked right. It's been mis-documented in a lot of situations, which makes it more difficult to use when the documentation doesn't tell you what's going on with it.
Greg Low: We should point out to people that they should go and download the latest copies of Books Online…
Kalen Delaney: Something new is that Books Online has been updated along with most service packs, so you can get an updated set of documentation.
Greg Low: I noticed a discussion about indexing bit fields or bit columns, and what was interesting was that at the point the product released, the documentation said you can't build an index on bit columns, but you could; the updated Books Online now has that right.
Kalen Delaney: They don't come with the service packs, but they are available for free from the Microsoft website, so it's something that needs to be on your regular to-do list.
Greg Low: Need blog feed or something…
Kalen Delaney: Big skywriting thing “New books online available” *laughs*
Greg Low: Apart from security? Other features?
Kalen Delaney: The dynamic management views are a wonderful thing, and a big piece that covers a lot of different aspects of the product, both old features being enhanced and new ones, is an underlying technology called row-level versioning. It allows support for a new feature called snapshot isolation, which gives a new way to control concurrency without locking. Usually people lump this in with locking, but it's an alternative to locking: older versions of rows are maintained so you can get at data without having to lock, or be blocked by locks someone else is holding.
Greg Low: In general it's used to avoid readers blocking writers?
Kalen Delaney: It avoids readers being blocked by writers, and if you're running in the higher isolation levels, readers won't block writers either. You still have the potential for writers blocking writers, so the downsides of the new snapshot isolation are these. If you do have a situation with two concurrent writers, instead of one blocking the other (we're not using locking anymore), if they're trying to write the same data, one of them will receive an error message and be kicked out. So that needs to be tested for in the application layer. The other downside is that once you've enabled snapshot isolation, whether or not you're actually using it (you just enable it and say, "I want to be able to use it if someone needs it"), as soon as you've enabled that possibility, every process doing updates stores old versions of rows and will bear the overhead. Every update, whether or not anyone wants these special non-blocking reads, will have to store the older version, take up space, and incur the extra overhead of writing those old versions.
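In syntax terms, enabling and then using it looks roughly like this (the database and table names are placeholders):

-- Enable the option; from this point every update pays the versioning cost.
ALTER DATABASE SalesDB SET ALLOW_SNAPSHOT_ISOLATION ON;
GO
-- In a session that wants consistent, non-blocking reads:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
    SELECT SUM(Amount) FROM dbo.Orders;  -- reads row versions, takes no shared locks
COMMIT;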
Greg Low: It's been implemented as another transaction isolation level…
Kalen Delaney: It's a whole new isolation level, and also a different flavor of the existing read-committed isolation level. It's a big topic. Not only could I talk about locking for hours, I could talk for a long time about this new snapshot isolation too. It's built on a technology called row-level versioning, which SQL Server now provides to store older versions of rows, and that technology comes up in other areas. There's a new feature which allows indexes to be built completely online, including clustered indexes; that uses the row-level versioning mechanism. You can update data in an index while it's being rebuilt because the changes are versioned. Old features like triggers have been re-engineered: they look and act the same but are built on row-level versioning, so the inserted and deleted tables are now the older versions of rows stored in tempdb… Another feature that uses row-level versioning is the new multiple active result sets. That's more of a client-side issue, so I'm not going to go into details; the point I wanted to make is that row-level versioning is an underlying structure a whole bunch of features are being built upon.
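Two of the pieces mentioned there look roughly like this in the new syntax (again, the names are placeholders):

-- The versioned flavor of read committed:
ALTER DATABASE SalesDB SET READ_COMMITTED_SNAPSHOT ON;
GO
-- Rebuild indexes, including a clustered index, while the table stays available:
ALTER INDEX ALL ON dbo.Orders REBUILD WITH (ONLINE = ON);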
Greg Low: Some of the discussions I've seen about that tend to compare it to Oracle, but it's implemented differently to Oracle. A lot of people feel this will ease migration from Oracle.
Kalen Delaney: One of the reasons is to make migration easier, but it's also for cases where you've got heavy, critical read operations that need to get done, and in the current version those end up being delayed because of blocking. Running big reports, or building cubes in an Analysis Services situation: if you've got to do heavy reads to build cubes, you don't want to wait for someone else's locks while you're doing these massive read and analysis operations. So snapshot isolation can be a big help for Analysis Services.
Greg Low: I get the impression from a lot of enterprise customers that Microsoft now better understands these sorts of higher-end needs, in particular things like being able to build indexes without taking the system down for great lengths of time. There does seem to be a push for increased availability, which on big systems has been a big difficulty.
Kalen Delaney: I hear that a lot, especially from people coming from other products, that Microsoft wasn't able to handle the enterprise or handle real applications before, but if you look at some of the case studies Microsoft has done, I don't think that was true. For many versions there have been major applications running 24/7, servicing thousands of concurrent customers, using whatever the current version of SQL Server was. If you were willing to learn how SQL Server works and not expect it to work like some other product you were using before, you could get it to do anything. This is what I've seen my job as being: teach people how SQL Server works so they can take advantage of every little detail of it. I don't convince people to switch to Microsoft from another product; I take people already using the Microsoft product who want to use it the best way they can and get the most out of it, and I teach them the way it works.
Greg Low: It's an interesting misconception. I know I saw a slide talking about some of the largest websites in the world. I saw a list of the top 16, and a significant number of those are running SQL Server today.
Kalen Delaney: Some of the biggest applications, with the most users, the most data, and the highest throughput, are using Microsoft SQL Server. It has been capable of those kinds of applications, but there is a learning curve, a pain point in making the transition, so I think some of the features are there to ease the minds of people making the transition. It's not that it couldn't be done before, but it makes it easier to see.
Greg Low: Did database mirroring make list of top favorite things?
Kalen Delaney: Not mirroring so much, but database snapshots, which work very well with mirroring, so we could put mirroring in as a sub-bullet… I talked about snapshots in my top ten list, but they work so well with mirroring that mirroring could get in there also. Snapshots are much broader than just mirroring. You can take a regular snapshot of a database just to have a historical record of what the database looked like each week or month. The beauty of a snapshot is that it's instantaneous, because it doesn't copy any data at the time of the snapshot. The way it plays nicely with mirroring is that mirroring keeps two simultaneous copies of the database. The second copy is inaccessible, but you can make a snapshot of the mirror and access the snapshot, so snapshots are what make mirroring more powerful.
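Creating and using a snapshot looks roughly like this (the database name, logical file name and path are placeholders):

-- The snapshot's file is an NTFS sparse file, so creation is nearly instantaneous.
CREATE DATABASE SalesDB_Snap_Monday
ON (NAME = SalesDB_Data,
    FILENAME = 'C:\Snapshots\SalesDB_Snap_Monday.ss')
AS SNAPSHOT OF SalesDB;
GO
-- Query it like any other (read-only) database:
SELECT COUNT(*) FROM SalesDB_Snap_Monday.dbo.Orders;
-- And, as comes up a little later, the source can be reverted to the snapshot point:
-- RESTORE DATABASE SalesDB FROM DATABASE_SNAPSHOT = 'SalesDB_Snap_Monday';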
Greg Low: Mirroring seems almost fault tolerance for the masses.
Kalen Delaney: Just like fault tolerance, that's only an availability tool; database mirroring and clustering both fit that category. Availability only. Once you add snapshots onto mirroring, then it becomes a performance tool and a scalability tool as well as an availability tool.
Greg Low: I've seen a lot of people excited about mirroring, where it looks like it's going to be in the Standard Edition of the product, whereas previously to do clustering you needed to be on Enterprise Edition… Plus it also allows the possibility of having the complete systems in totally different buildings, because there's no shared hardware.
Kalen Delaney: The data is not shared. I don't think we want to get people's hopes up about what's in the different editions, because traditionally those are the last things they decide upon. The indexed views we were talking about weren't decided until just a few weeks before they froze the final…
Greg Low: Many are still lobbying them about certain aspects of product as to what might be in versions…
Kalen Delaney: Jury is still out. Can submit vote, but decision what’s going to be in standard, Enterprise…
Greg Low: So, database snapshots… I've noticed that if you create a snapshot of a 500 MB database you get a 500 MB sparse file: if you look in Explorer, the size of the file shows as 500 MB, but the size on disk might be 100 KB…
Kalen Delaney: So you really see what you're getting, yeah. Looking at the time to create it or query it, it's the original data until you start making changes; then the snapshot starts to get data in it as the source changes.
Greg Low: I think it's interesting. I've dealt with a lot of applications where people have worried about end-of-period processing and things like that, and tried to make the system so that not too much work has to be done at the end of the period. But invariably, after the end of the period, you had to report backwards on what the situation looked like at the end of the period, and the beauty of this is you could create a snapshot and report on it for weeks. Possibly for recovering data too?
Kalen Delaney: You can recover data from the snapshot back into the database somewhere, or just use the snapshot as your source of operations at any point.
Greg Low: It’s a read only…
Kalen Delaney: Right, as long as it's officially a snapshot it will be read-only, but once you copy it out to someplace else it can be a read-write database…
Greg Low: Are there other parts of the product you're excited about?
Kalen Delaney: One of the things that surprised me when I discovered it, with all of the big commotion about the introduction of the CLR into SQL Server programmable units, the CLR in triggers, functions and stored procedures, is that some people are under the impression T-SQL is going the way of the dinosaur. If you look at what Microsoft has done with the SQL language, it's absolutely obvious they are not neglecting it. It's parallel to any of the changes in the availability features or performance features… What you can do with the language is so rich now. My partner Itzik Ben-Gan, who is a T-SQL expert, wrote a 90-page white paper on the new features. He's co-writing a volume of Inside SQL Server 2005. It will be multiple volumes, and one will be all T-SQL. He's been doing incredible things with the changes.
Greg Low: How large is the new book?
Kalen Delaney: It will be in multiple volumes, and it looks like the T-SQL volume, which I'll be working on with my partner Itzik, will be out at the time of product release or shortly after. Then there will be a volume on the storage engine, with information on how data is accessed and stored, talking about snapshots and isolation levels in that volume, along with the smaller changes to the current content. And then there will be a third volume that we will delay, to have input from out in the field about what really works out there. I'll have my ears open for all the tips and tricks, so that I don't have to just write "this is what Microsoft says it's going to do" but can find out what it actually does.
Greg Low: So many changes…
Kalen Delaney: Doing it in multiple volumes gives us flexibility. We'll have the volume on the storage engine and data organization soon after release, but the tuning material will be delayed maybe up to six months to get good feedback.
Greg Low: Itzik Ben-Gan presented at our user group, and in his session one of the things he showed was new uses for the new ROW_NUMBER function. It seemed like a minor feature, but there are some nice things you can do with it.
Kalen Delaney: He does things with that language that the developers at Microsoft didn't know were possible. They finally figured out that every time he's presenting at a conference, they should make sure to send someone to listen to what he's saying, because they learn things from it. Some of the things they're adding are, as you mentioned, the ROW_NUMBER feature, and recursive queries with common table expressions…
Greg Low: CTEs are an interesting one. I saw a webcast where the person doing them referred to them as if they were just temporary tables, and I felt like he sort of missed that the real point wasn't tables, it was recursive queries, so…
Kalen Delaney: It's tricky to give a presentation on any programming topic. Programming isn't a verbal activity, it's a coding activity, and that isn't really exciting to a lot of people. You want to get your hands on it and write your own; listening to or looking at somebody else's code isn't the most exciting thing there is. When Itzik talks about T-SQL, though, you want to hear what he has to say about it. A lot of the features are exciting, but still, you're not going to appreciate them until you can really look at dozens of examples and see the power of what you're getting. You started to mention PIVOT and UNPIVOT, to turn columns into rows and rows into columns, and then there's error handling within T-SQL, probably the biggest hole in the language up to this point, and what we'll have in SQL 2005 makes it a much more robust programming environment.
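Quick sketches of two of the features just mentioned (the tables and columns are invented for illustration):

-- ROW_NUMBER: number each customer's orders, newest first.
SELECT CustomerID, OrderID, OrderDate,
       ROW_NUMBER() OVER (PARTITION BY CustomerID ORDER BY OrderDate DESC) AS rn
FROM dbo.Orders;

-- Recursive CTE: walk an employee hierarchy from the top down.
WITH OrgChart AS
(
    SELECT EmployeeID, ManagerID, 0 AS Level
    FROM dbo.Employees
    WHERE ManagerID IS NULL
    UNION ALL
    SELECT e.EmployeeID, e.ManagerID, o.Level + 1
    FROM dbo.Employees AS e
    JOIN OrgChart AS o ON e.ManagerID = o.EmployeeID
)
SELECT EmployeeID, ManagerID, Level FROM OrgChart;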
Greg Low: This is where the new BEGIN TRY and BEGIN CATCH come in…
Kalen Delaney: TRY and CATCH, and you know, I've read Itzik's writing, but I haven't written a lot of my own code yet. All I can do is read what he's done and test his examples, but… it's on my list to make sure I get good at writing the new queries myself.
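The basic shape of the new error handling, as a sketch (the table is a placeholder):

BEGIN TRY
    BEGIN TRAN;
    UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountID = 1;
    UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountID = 2;
    COMMIT;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK;
    -- The ERROR_* functions are only meaningful inside the CATCH block.
    SELECT ERROR_NUMBER() AS err_number, ERROR_MESSAGE() AS err_message;
END CATCH;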
Greg Low: I was listening to Kim Tripp a week or so ago, and she was talking about how there was a time you could know most things about the product, but it's got to the point now that the product is so broad…
Kalen Delaney: As I mentioned, in 6.5 I thought I knew everything; now I have to focus so much… When SQL 7.0 came out and had the new Profiler tool, I thought, "I could just focus on that. I could spend my whole professional time working with Profiler in depth and coming up with the best trace definitions for what I need to figure out." It's so big you've got to pick and choose areas of interest. Speaking of Profiler, there are a couple of changes there I want to mention from my top ten list. The graphical interface in Profiler for setting up the events and data you want is much cleaner and more intuitive. There's a matrix now with events along one side and data columns along the other, so you have rows of events and data columns along the top, and you just click the intersection where they meet; you don't have to remember which data goes with which event. If the data isn't meaningful for an event, there's no box to click. The trace engine behind Profiler has been enhanced in that you can now grant permission for somebody to run a trace. It was a big hole in the product in SQL 2000 that only administrators could define traces; it really crippled a lot of people. There are a lot of new permissions like this in the security enhancements, so there are many possibilities with specific permissions to grant.
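For example, letting a non-administrator define and run traces now comes down to a single grant (the login name is a placeholder):

GRANT ALTER TRACE TO SalesAppLogin;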
Greg Low: Any others?
Kalen Delaney: How much time do we have here? *laughs* In the tracing you can do, if you capture the showplan for a statement or batch, you can capture it as an XML string… That's something I requested back in 7.0, when the first really nice showplan came out. It was so wonderful to see the plan in a nice graphical environment showing what the plan looked like. "This would be great to pass to somebody else!" I couldn't email the graphical output; I could take a screenshot, but that wouldn't show the details… And I said, "Since you can build this graphical display from the information in the plan, why can't you build an interface where I could submit the text-based output and regenerate the graphical output?" Everyone back in 7.0 said, "Sounds like a great idea!" They never did anything with it, obviously, and in 2000 I mentioned it again! I wanted to take the text output of a plan and generate the graphical output so I could email it to someone, store it, and share it with other people. It turned out what they were waiting for was XML. In SQL 2005 you can save the plan as XML, and the XML plan can be displayed graphically whether or not you have the database available. It will be a nice way to do shared troubleshooting, to collaborate with others.
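A sketch of capturing an XML plan straight from a query window (the query itself is just an example); saved with a .sqlplan extension, the output reopens as a graphical plan in Management Studio with no database needed:

SET SHOWPLAN_XML ON;
GO
-- The query is not executed; its estimated plan comes back as a single XML document.
SELECT c.CustomerID, SUM(o.Amount) AS TotalAmount
FROM dbo.Customers AS c
JOIN dbo.Orders AS o ON o.CustomerID = c.CustomerID
GROUP BY c.CustomerID;
GO
SET SHOWPLAN_XML OFF;
GO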
Greg Low: Pass information to Microsoft.
Kalen Delaney: Collaborate with coworkers or remote support sites, it’s going to be a nice thing.
Greg Low: And then you get deadlock graphs?
Kalen Delaney: Deadlock graphs also. There are graphical displays in Profiler of the execution plan and of deadlocks, and Profiler is also integrated with Performance Monitor. If you have a Performance Monitor log that spans the same time period as a trace, Profiler can display the log in its own graphical format as a Performance Monitor chart, and as you click on an event it will show you where in the Performance Monitor graph that event occurred, so you can see what else was happening at the time the event ran slowly. How much I/O was being used, how much CPU was being used for other purposes, whatever you've chosen in the Performance Monitor log, you can see what's happening, and in many cases it gives you a big clue: "Oh well, that's why this query ran slowly, because there was so much I/O going on not related to my process…"
Greg Low: Well, look, there are lots of things to look forward to in the product. Hopefully we'll see it fairly soon, so we'll be able to start working with the real product. I gather from our discussions, and I've been telling people, that you're coming to see us fairly soon, so what's on the agenda?
Kalen Delaney: Exciting for me! My first trip to Australia will be in July. Coming down for two weeks, teaching class…
Greg Low: On what topic?
Kalen Delaney: SQL Server Internals and Query Tuning. It's a SQL 2000 based course, but for a lot of topics I will be able to mention the major changes during the course. It's not officially a 2005 course, but some of the biggest changes will be brought up. It's a four-day course that will run July 12-15 in Melbourne. I'll be there a second week, and I may be teaching a second run of the course or doing one-day seminars like Itzik did; that's not yet decided. Go to solidqualitylearning.com.au, or just the US-based site, and the schedule of activities and classes can be found there.
Greg Low: And writing and publications obviously, the new book coming? Or books?
Kalen Delaney: The book that's currently in the works is the T-SQL one, but I'm also writing a white paper on the concurrency enhancements that will include the internals of the way the new snapshot isolation works, as well as how the new row-level versioning gets applied to triggers and online index rebuilds. That will form part of the basis of the next volume, the storage engine volume of the book, so that's definitely planned…
Greg Low: You've been involved in the beta programs…
Kalen Delaney: Great to see what’s coming, yes…
Greg Low: One of the things I like most about that is it avoids me having to learn everything about the new product come the day of release…
Kalen Delaney: Little bit of a head start…
Greg Low: Learn little bit each day…
Kalen Delaney: Every time learn about one thing, I’ll look at a couple related features, and make sure I try to discover a couple new things every time I use it.
Greg Low: Lots of people use your books as their reference for finding out how things work in the product. Where do you most look to find things out?
Kalen Delaney: Interesting. I think it'll be different in SQL 2005. I do have connections on the development team, obviously. I live within driving distance of the main Microsoft campus, so I spend a lot of time on campus, at least once a week, meeting with engineers… In all the versions up to and including 2000, a lot of what I learned, I learned by playing around with the underlying metadata. The challenge in the new version is that the metadata is not quite as accessible, or meaningful even when you can get to it, so it's going to be challenging. But by this point I think I have enough contacts on the dev team that my questions can be answered, and enough of the metadata can be tested on my own…
Greg Low: The dynamic management views will keep us learning for a long time…
Kalen Delaney: They're a tool for me and you and anyone who wants to learn more about the product… It's a way for you to see what SQL Server is really doing.
Greg Low: Thank you very much Kalen, we’ve run out of time, but look forward to when you come.
Kalen Delaney: Looking forward to it, great to talk with you today Greg, thank you.
Greg Low: Thank you!