David Campbell
SQL Down Under Show 27 - Guest: David Campbell - Published: 11 Jan 2008
In this show SQL Server Product Group legend David Campbell discusses the process of building SQL Server 2008 and what he's most looking forward to in it.
Details About Our Guest
David Campbell is a Technical Fellow for Strategy, Infrastructure, and Architecture of Microsoft SQL Server. David graduated with a Master's degree in Mechanical Engineering (Robotics) from Clarkson University in 1984 and began working on robotic work cells with Sanders Associates, later a division of Lockheed Corporation. In 1990 he joined Digital Equipment Corporation, where he worked on the CODASYL database product, DBMS, as well as the relational database product, Rdb. Upon joining Microsoft in 1994, David was a developer and architect on the SQL Server Storage Engine team, principally responsible for rewriting the core engine for SQL Server version 7. David holds several patents in the data management, schema, and software quality realms. He is a frequent speaker at industry and research conferences on a wide variety of data management and software development topics.
Show Notes And Links
Show Transcript
Greg Low: Introducing Show 27 with guest David Campbell.
Greg Low: Our guest today is David Campbell. David is a Technical Fellow for Strategy, Infrastructure, and Architecture of Microsoft SQL Server. David graduated with a Master's degree in Mechanical Engineering (Robotics) from Clarkson University in 1984 and began working on robotic work cells with Sanders Associates, later a division of Lockheed Corporation. In 1990 he joined Digital Equipment Corporation, where he worked on the CODASYL database product, DBMS, as well as the relational database product, Rdb. Upon joining Microsoft in 1994, David was a developer and architect on the SQL Server Storage Engine team, principally responsible for rewriting the core engine for SQL Server version 7. David holds several patents in the data management, schema, and software quality realms. He is a frequent speaker at industry and research conferences on a wide variety of data management and software development topics. Welcome David!
David Campbell: Thanks Greg.
Greg Low: Really pleased to have you on the show. I often mention that I have heroes in the SQL Server realm. With the work you’ve done bringing it forward from older versions, you qualify as one of the heroes.
David Campbell: It's been a long and interesting ride, with changes in both the team and the product over ten to fifteen years.
Greg Low: What I do with most guests is ask how they came to be involved with SQL Server in the first place.
David Campbell: I was at Digital, and Oracle bought the product I was working on. A few of us looked around and decided to go work for Bill Gates and crew. Some went to Informix; I have friends all over the industry. I came from Digital in the summer of 1994; others came from IBM, Sybase, and Oracle. We came together and remade SQL Server. It was an interesting and fun time.
Greg Low: I suppose the big change was SQL Server 7?
David Campbell: Yes. A major architectural change. The team was starting work on a query processor from the ground up, and I was on the storage engine team. We started looking at the next version of SQL Server after 6.5. The market was beating us up over row-level locking, an Oracle feature, and we looked into putting it into the Sybase architecture. Very difficult, as that entire architecture is based on page locking. We put together a design, but we made compromises just to get it on paper, and we came to the conclusion that the architecture did not have a lot of headroom. Even if we could shoehorn in row-level locking, in five years the product wouldn't be relevant. It was a gut-wrenching decision, but it ultimately paid off.
Greg Low: The challenge when doing a rewrite is that behaviors in the previous product won't match the documentation. You always run the chance that you break a whole lot of things as you move forward.
David Campbell: Absolutely. It needed an under-the-hood look. Sybase did a great job designing the database it did, but by the mid-1990s hardware had moved so far that the design was limiting. Every page in a Sybase table was linked together: you had to read a page to find out what the next page was, so you couldn't do deep read-ahead efficiently. Also, the Sybase page size was only 2KB, great for page locking, but not great as processors became more efficient. We changed the on-disk format and optimized how we allocated pages and kept track of them, so we were able to do page scans without following page pointers. In SQL 6.5 and earlier, if you did a SELECT * from a table with a clustered index, rows would be returned in clustered index order. That was a side effect of the way it was implemented; the SQL standard does not specify any order unless you put an ORDER BY on it, but people relied on that quirk. To do efficient scans we did not want to maintain that, so we changed it for SQL 7.0, but we put in a backward compatibility flag. When a database was in that mode, it retained the old behavior.
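To make the quirk concrete, a minimal sketch (the table and column names are illustrative, not from the show):

-- A table with a clustered index on OrderID.
CREATE TABLE dbo.Orders (OrderID int PRIMARY KEY CLUSTERED, CustomerID int);

-- On 6.5 and earlier this happened to come back in OrderID order,
-- purely as a side effect of following the page chain.
SELECT * FROM dbo.Orders;

-- From 7.0 on, with allocation-order scans, only this form is guaranteed.
SELECT * FROM dbo.Orders ORDER BY OrderID;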
Greg Low: I've seen that with ordered views: people putting SELECT TOP 100 PERCENT with an ORDER BY in a view. I noticed that in 2005 that stopped working, but I notice a recent hotfix that seems to put the behavior back. Thoughts?
David Campbell: There's always a struggle between the benefit and the pain people are feeling. We look at it with respect to the standards, but if people are relying on a behavior, we might want to put it back. For 7.0, we felt the potential benefits were so great that we wanted to make the new behavior the default.
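The view-based version of the same trap, sketched with illustrative names; from SQL Server 2005 on, the engine is free to ignore the ordering baked into the view:

-- The old trick: smuggling ORDER BY into a view via TOP 100 PERCENT.
CREATE VIEW dbo.RecentOrders AS
SELECT TOP 100 PERCENT OrderID, OrderDate
FROM dbo.Orders
ORDER BY OrderDate DESC;
GO
-- The reliable form: order at the point of use.
SELECT OrderID, OrderDate FROM dbo.RecentOrders ORDER BY OrderDate DESC;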
Greg Low: An on-disk format question. The 8KB page size is much more useful than 2KB, but the whole concept of mixed extents seems strange to me given the size of today's disks and objects.
David Campbell: Great question. When we were working on SQL 7.0, we thought we might use the engine as an alternate engine for Access at some point, so we wanted the ability to place a database on a floppy disk, at that time 1.4MB. In versions 6.0 and 6.5, databases were implemented in a simple file system built on top of files, with storage allocated up front. In 7.0 we separated that, so a database was just files that could be attached without pre-allocating storage. Mixed extents came about to make small tables more efficient: they allow a small table to consume individual 8KB pages rather than a full extent.
Greg Low: Does it add a level of complexity?
David Campbell: It adds a little complexity to the code, and we optimize: if we're creating an index we know will take more than 64KB, we start out with extent allocation. Knowledge of all that is pretty much localized in the allocation manager.
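A rough way to watch this from the catalog views (SQL Server 2005 and later; the table name is illustrative). A table that stays under eight pages gets them from mixed extents:

-- Page counts for one table's in-row data.
SELECT o.name, au.type_desc, au.total_pages, au.used_pages
FROM sys.allocation_units AS au
JOIN sys.partitions AS p ON au.container_id = p.hobt_id
JOIN sys.objects AS o ON p.object_id = o.object_id
WHERE o.name = 'SmallTable';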
Greg Low: On to SQL 2008. One of the biggest changes most people have noticed is the CTP process, how you're building the product.
David Campbell: It's a major change from 2005. When we first came and started, it was a small team; we could work things out in real time. As the scope of the product grew and the team grew, we factored things into code and teams. SQL 2005 had specific large teams working on client access, protocols, the storage engine, and so on. Each team had a process it would follow for a feature: writing, reviewing, then checking things in. The problem was that it was a serial process. You'd design a feature, the storage guys would do their piece and check in; several weeks later the query guys would work on it and find the interface wasn't right. It took a lot of back-and-forth negotiation, which is not efficient, and complex features with lots of interdependencies did not work out the way people had envisioned. You'd start with one of five component teams, get to the third team, and it would be time to ship the release. We had situations where things weren't complete end to end, or weren't surfaced completely; we did not have time at the end.

The new development process shifts things around to form improvement teams. Some improvements are internal, leading to more efficiency or cleaning things up inside the code. We put together cross-discipline teams of development, test, and program management, spanning components. The team figures out what it needs to build, sticks together until the thing is ready to RTM, and then integrates into the source management system. They complete it end to end and validate it with customers before it reaches the mainline of the source code. In past betas we had hundreds of features, each working at 50 percent; at Beta 2, 70 percent each. It was too much to finish, so we'd cut things, and you'd see how that surfaced externally: a whole bunch of features done, others going away, some things fully implemented in the engine but not in the tools. With this release, you see a smaller set of features CTP by CTP, but our expectation is that each will be pretty much complete end to end and of high quality. It looks much different, and the release is very high quality with good feedback. The challenge, of course, is that once a feature is at the point where we put it in the code system, how do we get feedback from customers?
Greg Low: That's a major challenge. In a number of areas, the very first time we see something, even prerelease, we come back and say we don't like it. The response is that it's too late.
David Campbell: It is a challenge. We're dealing with it by involving MVPs and key customers up front as part of the design process, not the validation process: folks who are a representative sample of those very interested in a particular improvement. It becomes joint design and validation; they've already had a hand in designing it. The other thing is private CTPs for more complex features where we realize we need feedback. We've put together private CTPs and shown them to representative people to get feedback before completion. We recognize the need, are working on these processes during the course of SQL 2008 development, and will refine them for subsequent releases.
Greg Low: It is a challenge. The other thing, continuity-wise, is changing product managers. You have many discussions with a product manager, only to have that person replaced. The new product manager likes the idea, but it's like the previous discussions never happened. There needs to be continuity across versions in the people heading up areas.
David Campbell: Until this release, the process was people-centric. As we've gotten more formal, the questions become: how do we gather requirements and thoughtfully go through them? How do we know we finished and followed through on things release after release? For the SQL Server 2008 engine, we were more thoughtful up front about what the market needed: things people had been asking for, for a number of releases, and we got them done well. Separate date and time types. The MERGE statement for combined update and insert. A number of things like this.
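A quick sketch of two of the improvements he mentions, with illustrative table names; the inline DECLARE initialization is also new in 2008:

-- Separate date and time types.
DECLARE @d date = '2008-01-11';
DECLARE @t time(0) = '14:30:00';

-- MERGE: insert-or-update against a target in one statement.
MERGE dbo.Prices AS target
USING dbo.StagedPrices AS source
    ON target.ProductID = source.ProductID
WHEN MATCHED THEN
    UPDATE SET target.Price = source.Price
WHEN NOT MATCHED THEN
    INSERT (ProductID, Price) VALUES (source.ProductID, source.Price);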
Greg Low: A comment I've heard about the new process is that the quality now is much higher than in previous releases. How do you deal with dependencies, then, if all these things are built in different trees? Does that mean you can't really take a dependency on one in another?
David Campbell: Great question. We knew it would be an issue at design time, and it's one reason why we came up with the notion of an improvement versus a feature. We're conscious of dependencies. Some people call an improvement an engineering transaction: it goes into the product, and it might be in support of other improvements, but we believe that when we finish one, we can ship with it in the product. We never have to back one out. In a case where three features had interdependencies, we would factor out the set of changes that needs to happen for the others first and do those as separate improvements. You'd never see them, but if we had to ship at that point we could, and do the other dependent ones later.
Greg Low: Features appear more in a rush of later CTPs rather than flowing through piece by piece. I presume that as you continue this process, future versions will even out?
David Campbell: Yes. The advantage of the improvement model is that we integrate things when they're complete. In the older process, we were held hostage by features that were in the product but not yet completed, not in a shippable state. If we're only incorporating things that are clean and complete, then any time the product has the right value proposition, we can stop, finish it off, do the final steps, and ship. There's not a mad rush when a feature misses a release, because we continuously develop things and say the train is leaving: close the doors, finish it up, get going. People can continue their work for the next train. It becomes more continuous.
Greg Low: How often do you see new releases coming out? The tradeoff between seeing all the new things and stability is important.
David Campbell: Enterprise customers don't want them all that frequently. How often are they going to pick them up? Think about the lifecycle of an application and how many times you upgrade the database under it. Enterprises are comfortable with three to four years, not every 18 to 24 months. They can't take them all in; if we released that frequently, they would have to pick and choose which to upgrade to.
Greg Low: Leapfrogging versions. I see a major difference between ISVs and enterprises. An enterprise with its own application can upgrade much more readily than one running an application from an ISV, which is not in their control. That's a much harder situation.
David Campbell: Great point. If we release more frequently, ISVs will have more of our releases to support, which increases their servicing cost. There is a balance. My sense is three years for the engine. Maybe we don't rev the engine itself that often, but as we look up the stack at the value, that can move faster, and projects can adopt that. We've thought of a model with a release every 18 months where we only rev the engine every other release.
Greg Low: That makes good sense. Years ago, I was an ISV, and I had a testing problem. I'd go through a huge cycle of testing on a particular version, and when I went to buy licenses to supply, I'd be told it was no longer available. As an ISV, the upgrade offered me nothing, because I'd already been through the testing cycle; it just introduced the chance that something would break. I could have had a big downside from things working differently. I see in the ISV community a requirement to lock in on a particular version and continue to buy that version.
David Campbell: It's a balance, a challenge. There are things that pull people forward and things that pull people back with every release.
Greg Low: Has the response to the new mechanism been positive?
David Campbell: Inside the building, the people who designed the process thought the big wins would come at the end. They felt that when teams got into a rhythm, things would come together quite nicely, and we're seeing that right now. There was a start-up cost for people trying to figure out roles and what it really meant, and a lot of stress on the team. We're through that and will refine the process going forward. People are seeing the value inside the building. Outside is the other challenge: the release profile looks different, and we need to refine how we incorporate design feedback.
Greg Low: A third group would be competitive feedback. You must get some feedback about competitors' features.
David Campbell: The new process was designed with that in mind. There's a feature in SQL Server 2008 where, mid-cycle in 2005, we saw trends in the industry that meant we needed it. We set a team to design it and scoped it; it was too complex for SQL Server 2005. The new process has a queue of improvements. We keep in flight the things we can do end to end, not all 400 features offered up. If the same situation arises, we can look at it, respond competitively, design it, pop it up the queue, and run it through quite naturally. It's very agile.
Greg Low: The whole process seems to make good sense. If there's enough value proposition at a release date, just go. Are there some things you simply can't ship without?
David Campbell: Yes, there are must-haves in releases. But the ability to react and respond mid-flight is so much better.
Greg Low: Is there life outside SQL Server? Have you lived in Seattle long?
David Campbell: Yes, there's life outside. I've been in Seattle since 1994; I moved out with my wife and kids when they were four and five years old. Yesterday I dropped my oldest back at college, and the youngest heads out next year. The next step is figuring out what to do as empty nesters. Hobbies? I do some photography and love to travel. Our life list includes visiting all the U.S. national parks; we've done 23 or 24 so far. That will keep us busy for a few years.
Greg Low: Are there literally thousands of them?
David Campbell: There are many national historical monuments, but only 57 or 58 are officially designated as national parks. They invent a new one every couple of years.
Greg Low: If you run out of them, we have a large number of them in Australia. The size of some of them is breathtaking.
David Campbell: Looking forward to that!
Greg Low: A question on operating system dependencies. You don't build in total isolation. Do you ever take a dependency on the operating system being built at the same time, or would that be too awkward?
David Campbell: We try not to. It's the same challenge an ISV has with us: people will upgrade the database but stay on the last version of the operating system. We have to be careful which features we take dependencies on; they need to be fairly well deployed, or of such value that we'd want to make them a prerequisite. The interesting thing is that as the operating system has evolved, we've made use of its capabilities: scatter/gather I/O, TSS, some of the security features.
Greg Low: Sparse files and database snapshots: operating system technology used in other areas, not just snapshots. With SQL 2008, what's the real story?
David Campbell: The interesting thing for me is that when we came, Microsoft was number five or six in a three-horse race. People inside Microsoft did not know SQL Server. We have come a long way. For the first few releases we were chasing the tail lights of the leaders; we knew what we needed to do. SQL Server 2008, honestly, is the first release where, across the board, we told our own story, differentiating ourselves and our value against the original guys. It's a great opportunity to start that.
Greg Low: Microsoft has a strong part of the company working with developers all the time, but that's just a different type of community than the SQL Server community. It's the reason I started these podcasts: I saw things working in the developer community that hadn't infiltrated the SQL Server side of things. I still see a bit of a disconnect with extensibility, and Microsoft has a major chance there. When I look at developer products, every product has large amounts of extensibility and public interfaces. I get frustrated with SQL Server discussions where extensibility is always coming in the future. If you were a little more prepared to open the extensibility story, it would allow a much richer community to build around it. Off the top, thoughts?
David Campbell: When we did SQL CLR for 2005, it was a tremendous amount of work. I think of it as a down payment; we haven't fully realized the value from it. You'll see things in SQL Server 2008 where we use it, and it's the basis for more extensibility in the database. It serves two communities: SQL Server itself, allowing a broader group of people to provide more value above the core engine, and it also makes that available to others to extend and provide value. I was looking through your podcasts and saw you did an interview with Jim Gray. He always told us we were nuts: the SQL CLR stuff was amazing and we weren't telling the story. He was right. As we talked with more people in other communities, once they realized what it does they were blown away. We have to tell the story, complete the story, and get it out there. We're in discussions on how to do that aggressively.
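For readers who haven't seen SQL CLR, a minimal sketch of the registration side; the assembly path, names, and method are hypothetical:

-- Requires: sp_configure 'clr enabled', 1; RECONFIGURE;
-- Register a compiled .NET assembly inside the database.
CREATE ASSEMBLY StringUtilities
FROM 'C:\clr\StringUtilities.dll'
WITH PERMISSION_SET = SAFE;
GO
-- Expose one of its static methods as an ordinary T-SQL function.
CREATE FUNCTION dbo.Slugify (@input nvarchar(4000))
RETURNS nvarchar(4000)
AS EXTERNAL NAME StringUtilities.[StringUtilities.Functions].Slugify;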
Greg Low: I went to a session on declarative management. Facets. I'm the one holding up a hand, asking how to build a facet, and they don't know. Internally they will have well-defined interfaces, all the things required to do that, and the SQL CLR mechanisms would provide the right basis. It strikes me that that thinking is down the track. Version 1 will have the biggest holes, and if you allow for extensibility, people can rush in to fill them.
David Campbell: I agree in principle. The challenge we have, peculiar to databases, is inherent non-linearities: the slightest things will perturb the optimizer. If we offer extensibility, we have to offer explanations of how the optimizer can reason over the extensions. We can architect for extensibility, but then how do we constrain it so we know all policies will converge and be consistent? We tend to be more conservative, building it out over the course of releases, rather than making extensibility the way to fill in the gaps.
Greg Low: That's usually the approach. It contrasts with the ASP.NET guys, who build things, and if you don't like them, there's an interface to work your own in. We take a hit in the tooling: I can't build a supported add-in for Management Studio. When you have more and more developers who think they can do that in Visual Studio, why not in Management Studio? What you end up with is everyone still doing it as hacks rather than in a supported way.
David Campbell: Interesting feedback. We can consider that going forward.
Greg Low: Big picture: is SQL Server 2008 in a leading position?
David Campbell: Yes. It's pretty cool. The declarative management framework is going to be very interesting as it plays out over releases. Another piece is the entity framework. On declarative management: lots of people have lots of SQL Servers. It's easy to deploy; people are surprised at how many they have. Each one is easy to manage, but when you have a lot, it's difficult work. If you have 1,000 SQL Servers, there are really only five or six classes of servers: backup, mission critical, and so on. If you could define a policy for each class and then bind the policy to the servers, that's one way to explain the declarative management framework. You write your own policies. You get the first glimpse of this in SQL 2008, and we'll build out from there. On the entity framework: ten years ago, all of the features were expressed and built in terms of the logical relational schema, tables and rows. Since SQL Server 7.0 with merge replication, some of the new services should have been built on higher-level concepts: sync or replicate orders, not headers and line items.
Greg Low: The endless question of how to send an order to a stored procedure. Now there's a good answer with table-valued parameters.
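A sketch of the table-valued parameter pattern (type, procedure, and column names are illustrative):

-- A table type lets a client pass a whole order in one round trip.
CREATE TYPE dbo.OrderLineList AS TABLE
    (ProductID int NOT NULL, Quantity int NOT NULL);
GO
CREATE PROCEDURE dbo.SubmitOrder
    @CustomerID int,
    @Lines dbo.OrderLineList READONLY   -- TVPs must be declared READONLY
AS
INSERT dbo.OrderLines (CustomerID, ProductID, Quantity)
SELECT @CustomerID, ProductID, Quantity
FROM @Lines;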
David Campbell: Yes. Or designing cubes in Analysis Services: we have the UDM, but you want to build cubes out of things that make sense to the business, describing orders by region and quarter, not worrying about joins or tables. The same goes for reporting. Within SQL Server we had many ways to describe these things, and then we started to pull them all together with the entity framework. It's surfaced first as a development feature; in the future, SQL Server will be recast in terms of entities. What's really interesting is that we can tag attributes of entities and write policies against that. Take an entity with an attribute like a credit card: I can tag it as financially sensitive and write a financially-sensitive policy against it. For any database with a schema or entity model containing one of these attributes, all backups must be on encrypted media, or all clients connecting must do so over an encrypted channel. It becomes very interesting to write policies against the model of the data rather than one instance of the data. Very interesting moving forward.
Greg Low: Yes. I'd want to write policies that say when I have a column with the words "credit card", it's dealt with in a different way. Another thing that looks like a missing piece: DDL triggers are after triggers, not instead-of or before triggers. Say I wanted a policy where you can't reindex tables in the middle of the day. There are so many things I can't undo, like CREATE DATABASE in a trigger, and even if I could undo them, I wouldn't want to. There seems to be a need to run code instead of what was intended. Another aspect: I'd like to write code loosely, formatting-wise, and when there's a CREATE PROC statement, have a trigger automatically format it nicely to my standards, with comments and my own signature. The ability to modify the statement on the fly, like a DML trigger can, would make it powerful.
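The after-trigger half of this works today; a minimal sketch along the lines Greg describes (trigger name and hours are illustrative):

-- A database-scoped DDL trigger fires after the statement but inside its
-- transaction, so it can roll back an ill-timed index rebuild.
CREATE TRIGGER trg_no_daytime_reindex ON DATABASE
FOR ALTER_INDEX
AS
IF DATEPART(HOUR, GETDATE()) BETWEEN 8 AND 17
BEGIN
    RAISERROR('Index maintenance is not allowed during business hours.', 16, 1);
    ROLLBACK;
END;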
David Campbell: Yes. Look at it from an architect's perspective: separation of mechanism and policy. The challenge in doing something large and broad, such as the declarative management framework, is making sure the architecture has legs so you can build it out over several releases. In the first release, you get the most value with the least amount of change in the product: reuse existing mechanisms, focus on getting the model right, then build out over several releases. We've done that since 7.0. Online page restore: we wanted the feature and made sure the architecture was consistent, but we didn't deliver it until SQL Server 2005. You'll see the same with DMF over the next releases as we fill it out on a consistent architecture.
Greg Low: I love the move in SQL Server 2005 to doing things with standard DDL statements rather than depending on system stored procs. In 2008 I'm seeing more system stored procs again. If I go to turn on change data capture, I now have interestingly named procs. The concern is that I then can't write a policy that disallows this on credit card columns.
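The procs in question, for reference; enabling change data capture in 2008 is procedure-based rather than DDL-based (the table name is illustrative):

-- Enable CDC for the database, then for one table.
EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'Customers',
    @role_name     = NULL;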
David Campbell: That is a challenge. As the group scales, fundamentals around security, naming, and so on get put into the process, and they will be refined. In Sybase, everything was stored procs; over time we added more DDL, which is more expensive to do end to end. Often it came back to one person pushing to move something to DDL. It's something that needs to be captured going forward as a fundamental, as a matter of process.
Greg Low: Otherwise, there's the chance of one thing undoing another: a policy saying you can't name tables a particular way, then someone renames one, and nothing catches the rename. Rename should be replaced with an ALTER on the object; that fits with standard DDL triggers. There's a flow-on effect: there's good work in DMF, but if it only works off DDL, and someone else isn't building on DDL, that's a challenge.
David Campbell: I see us refining this going forward, settling on one direction. I like DDL myself.
Greg Low: In terms of the engine, everyone seems to give good performance feedback on SQL 2008. Is there anything generating particularly strong discussion?
David Campbell: Where to start? The notion of scenarios is central to the process being effective: end-to-end value propositions. In SQL 2008, data warehousing at scale was one of the big bets for the product. We found that we were doing well in transaction processing and had room to grow in the data warehousing back end, so we looked at what it would take end to end. We defined a scenario in terms of the size of the data warehouse, how much data turned over daily, and the performance expectations. Before beginning to design features, we had in mind what performance we wanted to get end to end, across all components: ETL, the query processor, parallelism, partitioning, optimizer improvements. There's a lot of improvement there. Full-text search saw dramatic speed-ups; MERGE statements, dramatic improvements. Lots of things to talk about.
Greg Low: Row constructors, too. In a single statement, you're able to do multiple operations.
David Campbell: That's a great use of it. ISVs have been asking for that for a long time.
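The row constructor syntax, for reference (table and values are illustrative):

-- Multiple rows in a single INSERT statement.
INSERT dbo.Colors (ColorID, Name)
VALUES (1, 'Red'),
       (2, 'Green'),
       (3, 'Blue');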
Greg Low: What is the limit on the size of a statement? Say an INSERT generated on the fly with more and more rows in one statement. There are limits, I suppose? I'm sure it's large.
David Campbell: Seven years ago I might have known.
Greg Low: What are you most proud of with this version, overall?
David Campbell: The product team is proud of the polish it will have when it goes out. End to end, things are going to be there. The funny thing inside: since 7.0, leadership looks at the value proposition. It took some time to get the process up and running, remaking the process while redoing the product. Is it going to be meaningful enough? Four to six months ago, we looked at the velocity going in and the feedback. We know we've got the quality. People are going to feel good when it goes out cleanly and works well for everyone.
Greg Low: The point where analysis services and data mining were first added was a significant turning point. Do you think spatial will be the same?
David Campbell: It has been interesting. The PM on spatial I hired two years ago; he has been doing spatial work for about 20 years. He came to me a couple of months ago: the spatial work and the feedback have been amazing, a blowout. He was stunned. The interesting thing about Microsoft is that if you do something big, it's big; it impacts a wide range of people. A number of factors are coming together that make spatial important, and the fact that SQL Server just has it in there will make it that much more approachable. My cell phone has a GPS receiver. Inside the building, someone's putting together a web service with SQL Server that we can subscribe to by pushing our GPS position up.
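A taste of the new geography type (coordinates are approximate and purely illustrative):

-- Two points on the ellipsoid and the distance between them, in kilometers.
DECLARE @seattle geography = geography::Point(47.6062, -122.3321, 4326);
DECLARE @wagga   geography = geography::Point(-35.1082, 147.3598, 4326);
SELECT @seattle.STDistance(@wagga) / 1000.0 AS distance_km;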
Greg Low: Even with phones that don't have that, there are services that will try to do triangulation based on cell towers.
David Campbell: Some days I think that's a really good thing; some days I think it's really bad.
Greg Low: Do you think it has a chance to be a big turning point?
David Campbell: In that realm it will. A sleeper will be the sync framework work that we took on. Think about the experience people really want: what you have with Exchange and Outlook on your device, where I can get a browser-based experience and a rich client experience, all over the same data, synchronized. People would love to build those applications; ISVs too. Mobility is hot, but it's so hard to do right now. Some of the features in 2008 and Orcas will make it so much easier that many more people can go out and build those applications.
Greg Low: The applications aren't that hard to build, but the plumbing is.
David Campbell: Great point. We often talk about how, over time, what we want to do is move things from the app to the platform. In the history of computing, that happens all the time. Some of these services people would love to get, but how many years does it take to make them work? Tremendous effort. Make that part of the platform, make it accessible to developers, and that will be powerful.
Greg Low: What about the BI story? We got an early taste in 2005. It seems to be a good story.
David Campbell: The BI story is amazing. I came from the engine side; when we started, SQL Server was the engine, and we had the OLAP guys. Over the last several years, people have been clamoring for BI. They realize there's a competitive advantage in getting more information out of their data; for many businesses, that is the difference. Our BI is very easy to deploy and get up and running, and there's a lot of stuff coming in SQL 2008: performance enhancements across Analysis Services, Reporting Services, and Integration Services. It's driving a lot of SQL Server deployment at this point.
Greg Low: I'm interested in the whole concept of SQL Server as an app platform, or not. In the Jim Gray video, he mentioned multi-tiered applications as opposed to the two-tier model, and argued that simplicity usually wins. There's always discussion about how much of an application platform SQL Server should be, as opposed to a database engine.
David Campbell: I wrote a provocative paper on this topic; Jim Gray featured in it as well. In the early 1990s, there was a point where, if you built an application, you used a TP monitor, a transaction processing monitor. Databases came along and took on some of the things the TP monitor had done: thread pooling, transaction coordination. The debate was two-tier versus three-tier, and there was a play on that, calling the database approach "TP Lite". I did a follow-up a couple of years ago, "App Server Lite": putting the CLR into the database server, serving web services from the database server, the case for collapsing things down. Some scenarios warrant one or the other of the various tiers; sometimes it's simpler or more efficient when the TP monitor capabilities come into play. The same thing will play out here. It's just a matter of what makes sense in what environment.
Greg Low: There's the whole idea of getting multiple things happening, multi-threading, because of the number of processors involved. SQL Server was designed for environments with large numbers of processors. I talked with Jim Gray and liked his description of writing multi-threaded code: an early phase where it looks like voodoo, then a phase where you think you understand it, then a third phase where you actually get wise and realize it's a lot harder than you thought and you didn't understand it. The challenge is to let people write single-threaded code yet take advantage of large numbers of processors. SQL Server does that by nature.
David Campbell: If you think of SQL as a language, how you express the problem can aid you. SQL is a declarative language: I express what I want as a query and get an answer. That allows the machinery to factor and parallelize the work and marry it to the resources. You express the query; the engine can look at it, factor it, and coordinate the results. If it were expressed at a lower level, we might not be able to do all that. How do you capture that elsewhere? There are cases where you have multi-threaded run times and the programmer gets callouts, event handlers. We can look at that model for code running inside the server.
Greg Low: The app server discussion is interesting. One of his points was that if you have two machines interacting, each in different states, with asynchronous callbacks, you get 100 possible states. When you get into that discussion, you want to quickly get back to the 20 states. Queues in between are one way to do that: put things in a queue, take them out. Service Broker seems a logical fit for that. In 2005 I was excited to see it implemented in a larger number of sites, but the tooling was always missing. What are your feelings on the take-up of Service Broker?
David Campbell: Service Broker is very interesting technology; I was one of the early folks on it. Mechanism, policy, how to express it: our focus in the first release was making sure we had the right mechanisms, and we envisioned several programming models over it. For SQL Server 2005, the experience was that people wondered, what is this thing? It was interesting to watch those who stuck with it until they had an "a-ha" moment: they stepped back, rethought a whole bunch of things, and have done very interesting things with it. By putting durable queues between distributed services, you address what lots of people worry about: transferring service state in a consistent way, transactions spanning systems. I did the resource manager for SQL Server while Pat was working on distributed transactions, two-phase commit. We did things with Service Broker because distributed transactions are great when things are very close, but as they get further apart, you want to pass work off in a reliable way that isn't tightly coupled. Service Broker fits that well.
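A miniature of the plumbing being described, using the default contract and message type; all object names are illustrative:

-- A durable, transactional queue with a service in front of it.
CREATE QUEUE dbo.OrderQueue;
CREATE SERVICE OrderService ON QUEUE dbo.OrderQueue ([DEFAULT]);
GO
-- Open a conversation and send a message; delivery is reliable
-- even if the receiving side is down right now.
DECLARE @h uniqueidentifier;
BEGIN DIALOG CONVERSATION @h
    FROM SERVICE OrderService
    TO SERVICE 'OrderService'
    WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @h (N'<Order id="42"/>');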
Greg Low: Decoupling is the thing I keep stressing. The traditional approach is building bigger procs, super tightly coupled: a house of cards as you keep adding and coupling. Fragile. Is any tooling coming for Service Broker?
David Campbell: We did not take the programming model that far. We did work on admin, set-up, and monitoring, and will get back to it after 2008. Decoupling in terms of space, time, and security context allows you to reason about the problem in smaller pieces, get each one right, and connect them in a reliable fashion so it works at scale. You go through the effort to work it out; the reward is significant.
Greg Low: My a-ha moment was Roger Wolter's book. I liked that it was readable in a short period of time and had insights I wouldn't have picked up by reading other material, things like the difference between a conversation and a dialog, which I struggled with at first.
David Campbell: There are interesting subtleties. What you want in a transaction processing system is that as things get more active, you naturally trade latency for throughput: as more work comes in, the system responds gracefully. In the transaction log, it's group commit. You can only write the transaction log around 150 times a second, but you can do more transactions than that by naturally batching them up. Service Broker gives you the same form of behavior: as more messages pile up, you can do more work with each gulp. It naturally trades latency for throughput, so it really scales.
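That batching shows up directly in how you drain a queue; a sketch against the illustrative queue from earlier:

-- One transaction takes up to 1,000 queued messages per gulp,
-- trading latency for throughput as load rises.
BEGIN TRANSACTION;
RECEIVE TOP (1000)
    conversation_handle, message_type_name, message_body
FROM dbo.OrderQueue;
COMMIT;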
Greg Low: On the changes in Reporting Services: it no longer sits behind IIS, yet it still has HTTP endpoints exposed. Where do you see that side of things heading? It's part of the app server discussion.
David Campbell: Through the history of databases in the enterprise, they are in some sense operating systems themselves: give us space and I/O. We built a very interesting run time at the bottom of SQL Server, an interesting component for high-scale, multi-threaded server applications. We decided to take Reporting Services and rehost it on that component for this release. A big change.
Greg Low: That will help with deployments. A good story.
David Campbell: The operating system guys did a great job; that made it all come together.
Greg Low: It has allowed many applications to become clients of that driver. That brings us up to time. Thank you; the insights have been excellent. Where will we be seeing you?
David Campbell: I'm headed for launch activities over the next couple of months, worldwide launches. I've never been south of the equator; maybe I'll head your way.
Greg Low: That would be great. We have an outstanding event in October. Wagga Wagga, SQL Down Under Code Camp. If you want to see the outer part of the country, it’s in the middle of nowhere with SQL people for a whole weekend.
David Campbell: Sounds like fun!
Greg Low: Thanks again, David. It’s been excellent.
David Campbell: My pleasure, Greg. Good chatting with you.