Gert Drapers
SQL Down Under Show 17 - Guest: Gert Drapers - Published: 8 Jun 2006
In this show Microsoft architect and development manager Gert Drapers discusses the upcoming Visual Studio Team Edition for Database Professionals (formerly known as "Data Dude").
Details About Our Guest
Gert Drapers is an architect and development manager on the Visual Studio Team Edition for Database Professionals product, formerly known as "Data Dude".
Show Notes And Links
Show Transcript
Greg Low: Introducing show number 17 with guest Gert Drapers.
Greg Low: Our guest today is Gert Drapers. Gert is an architect and development manager on the Visual Studio Team Edition for Database Professionals product, formerly known as "Data Dude". So welcome, Gert.
Gert Drapers: Well thank you. I’m happy to be here.
Greg Low: That's great! I met you at the PASS conference in Munich last year and I really enjoyed your session; it was great.
Gert Drapers: I remember that first time, when we had just started working on the project and I couldn't tell you anything about it. But we're there now!
Greg Low: This is sort of an interesting one. Anyway, for people who haven't come across you, could I get you to first just tell us how you ever got to be involved with SQL Server at all?
Gert Drapers: That's a long story. It dates back to 1988, when I was working for a company called Ashton-Tate — you might remember dBase III and dBase IV. I was working there, and one of the things I worked on was dBase IV Server Edition. The tool was supposed to be a client to something new called the Microsoft, Sybase and Ashton-Tate SQL Server for OS/2. My first introduction to SQL Server was a brown box which contained a bunch of dark brown manuals, old-style Microsoft manuals, and pre-releases of OS/2, LAN Manager, and SQL Server. The only other thing we got was a white paper about client-server computing and the API for DB-Library. That's what we received. From there on it went downhill for me — it has been SQL Server all my life after that. I continued to work for Ashton-Tate until 1991, when Borland bought them. By coincidence we were having a board meeting that same day. When we got the call "We're being bought by Borland," everyone ran to the phone to call their stockbroker. The same day, I resigned. I had planned that ahead of time, so I had no prior knowledge of the acquisition. I joined Microsoft in 1991 to start evangelizing SQL Server. My role was actually to evangelize LAN Manager and SQL Server, but the only thing I knew was SQL Server; LAN Manager was just the necessity you needed to get connected to SQL Server. For the first five years I worked in a team called the Developer Relations Group, and my job was to help ISVs, independent software vendors, implement Microsoft technology — Windows NT, the Windows API, SQL Server. I did that for five years and then it was like, "Time to move on and do something more interesting." So I joined the SQL Server development team, where I became a developer. I worked on the conversion of 6.5 databases to 7.0 — the on-page conversion from the 2K format to the 8K format and the DBCC infrastructure around that. While doing that I also worked on bulk insert, which was my favorite area. From there on I've been hopping between different jobs in SQL Server. I took on a job in the tools team after that, responsible for SQL-DMO and Query Analyzer, so I built Query Analyzer for SQL Server 2000.
Greg Low: People seem to love Query Analyzer still. They yearn for the performance it had, even today.
Gert Drapers: Yeah, today it's still one of my primary tools, together with File Manager. If you look at my desktop, the two tools I use are File Manager… and Query Analyzer. It's still there, it still works, I still love it. After that I managed the SQL Server tools team. Managing wasn't really my thing, so I went back to being an architect and started working on the new version of DTS, which is now called SSIS. I did that, and also worked on the foundation for SMO, the infrastructure underneath it. I left for two years to work in the .NET team, where I worked on something called System.Transactions, a new programming model for transactional programming in .NET. In 2003 I went back to SQL Server to join Mark Souza's team, the SQL Server Customer Advisory Team. That's what I was doing when we met in Munich.
Greg Low: Heavily involved in large deployments?
Gert Drapers: Yep. Our mission was to help customers implement the largest databases around the world — the most challenging projects. It was a great job. The only thing is, you're never home.
Greg Low: I know that sort of job well. *laughs*
Gert Drapers: That sort of forced me into finding another position, because I wanted to be home more, to be with my three kids and my wife. At that time, an opportunity came up to start a new team inside Visual Studio that was going to build SQL Server developer tools, and that was the gig for me. I'm passionate about it — there's a huge opportunity to change the world by providing better tooling for SQL Server database developers — so that's what I'm doing right now!
Greg Low: Outstanding news. I was fortunate enough to be at a software design review for this product, one of the ones held back last year, and I must admit I was very excited about all the things I saw; this particular product had amazing potential. I must admit some of the things discussed at the time haven't appeared yet, but maybe down the track we'll see what happens. I think it has amazing potential to change how things are done. In several previous shows I remember discussing with people that a lot of the things developers take for granted, like refactoring tools, haven't been available to database people in a very easy-to-use way. I often wonder if part of the reluctance to do some sort of continuous improvement in databases is because the tools aren't up to it. Do you think so?
Gert Drapers: That's exactly the reason why we exist. It confirms the findings we've run into just by talking to DBAs and database developers. Today there's a variety of small tools they buy or download or build themselves, but nobody really provides them with an end-to-end integrated solution. If you really look at what we're doing, it's providing a tool set for the database developer or DBA to manage schema in a continuous fashion and to make it part of the development life cycle that other developers have been using for years. If you go to a database developer today, or a DBA, somebody responsible for the schema, and ask, "Hey, what is the truth? What is the latest version? The latest state, the state before that?" most of the time they point to a database: "Well, production is over there. That has the latest state." But is it the same as the test environment? The staging environment? What are your developers referencing? On one of the projects I was doing while working in the Customer Advisory Team, I was called in by an ISV and they asked me, "Can you look at these performance problems? We have query performance problems. We just deployed to this bank in New York and we have performance problems." And I was like, "Great, give me the schema and give me the statistics so I can look at the query plans." They gave me what they thought was the production schema — there was a facilitator on site at the bank who sent me the schema. "Why are five tables missing, and twelve indexes?" "Uh, well, they should not be missing." It turned out they'd had to restore one of the servers and lost schema. And that's not the one and only real-life example I have demonstrating this problem. There is a real need, that's for sure. The other thing which became very apparent is that there's a reluctance to trust a tool. DBAs are skeptical by nature, rightfully so, so they want to see what these tools are doing. They really only trust you if you can give them SQL scripts and say, "Here's what we're going to deploy to the server. You can read it, tweak it, and if you don't like it, throw it away." I think there's a level of trust that needs to be established. These people have been deprived of a lot of tooling that could make their lives easier. That's exactly what we're targeting with the tool set we just released.
Greg Low: As you say, they do tend to trust scripts. One of the things I've been discussing with the Belgian SQL CLR people — one of the things I struggle with in terms of the relationship between developers and DBAs is the "black box" nature of assemblies at the moment as a unit of deployment. When I look at a database script that involves an assembly, by its nature at the moment you end up with what is basically a hexadecimal dump of the assembly. I think one of the things we need to move to is something where the code is visible in the script rather than just a hex dump. Anyway, along the same lines, I really want to be able to see those sorts of things in the script, and I think anything that reeks of magic in the background won't go down well. It needs to be something that translates quite directly to scripts.
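As a rough illustration of the "hex dump" deployment Greg is describing, a SQL CLR deployment script today looks something like the sketch below (the assembly, class, and function names are hypothetical, and the binary literal is truncated):

    -- The assembly itself arrives as an opaque hexadecimal literal.
    CREATE ASSEMBLY StringUtilities
    FROM 0x4D5A90000300000004000000FFFF0000  -- the real literal runs for many kilobytes
    WITH PERMISSION_SET = SAFE;
    GO
    -- Only the binding to the CLR method is readable in the script.
    CREATE FUNCTION dbo.ProperCase (@input nvarchar(4000))
    RETURNS nvarchar(4000)
    AS EXTERNAL NAME StringUtilities.[StringUtilities.Functions].ProperCase;
    GO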
Gert Drapers: The changes we've been making don't directly facilitate this, but you could attack this exact problem in two ways. You could say, "I'd like server-side compilation of assemblies," so the user submits CLR source — C# or VB — as scripts. You could build that in an easy way. I'm surprised nobody in the community has actually done that.
Greg Low: One of the projects I've had on the back burner is to build a CLR compile function, basically — something that takes a little bit of source code and throws back what is effectively an assembly as a hex dump. Looking at that, I would have loved to go even simpler: maybe the database could have the concept of a default assembly. If you could just say "create function", from CLR-compiled VB or C#, "here's the code just for the function", I think that'd be really sweet. It could be added to the default assembly for the database. I think you would remove an amazing amount of fear, because people could see the few-line C# function that was embedded…
Gert Drapers: Visual Studio already submits the source into the database anyhow.
Greg Low: Yes. Anyway, we're getting off track — so tell us about the new product: what's the breadth of coverage for it, and what can it do?
Gert Drapers: We'll start with the elaborate name you tried to pronounce when introducing the show. We are part of the Team System suite, and our product is called Visual Studio Team Edition for Database Professionals, also known as the Data Dude project. Just a little story for amusement: a lot of people think I'm the Data Dude. I'd like to be, but really the name came from our Senior VP Eric Rutter. We were in a meeting discussing how Visual Studio was ignoring a large and important group, the database developers. He always wants to tack a name onto something, and we were looking for a name, like "Who is this person?" And he's like, "Well, he's the Data Dude." "Alright, OK." From that day on, the project was called Data Dude. For what we're delivering in our first version, we looked around and said, "What are we going to do?" Everybody yelled, "You should be doing modeling." We said, "That's true; from a developer perspective, you want to see a modeling solution. But let's look at what is fundamental in terms of developing your SQL Server application." We always came back to, "Well, you need to have a handle on how to manage the schema." Instead of building modeling first as an isolated thing, like ERwin does, or ER/Studio, or all the other products, we want to get a handle on how we're going to manage the schema that you would actually be modeling. How are we going to manage schema changes on top of that schema, and how are you going to deploy it? Are we going to give you tools to detect differences in schema versions between one server and the project, or between two databases? How are we allowing you to make changes to the schema — what we'd call refactoring? How are you going to test your schema? I was amazed to find how few people… Testing isn't a big industry anyhow; not a lot of people like it. Most people are doing some level of testing against the application, but how many people are testing database objects? Not many.
Greg Low: Adam Machanic had a good session at the PASS conference in Dallas last year where he was talking about an NUnit equivalent for T-SQL and so on. Of course there are significant challenges. One of the things I think is most exciting about SQL Server 2005 is database snapshots. Getting the system back into a known state is always the difficulty with unit testing procs and things like that. I must admit it's something I've been meaning to write an article on. The ability to quickly create a database snapshot, run the tests, then restore from the snapshot I think is just outstanding; otherwise it's often too difficult to deal with production-size amounts of data and get back to a known state for each test.
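A minimal T-SQL sketch of the snapshot-reset pattern Greg describes, assuming a test database called TestDB (the database and file names are hypothetical):

    -- Capture the known-good state before the test run.
    CREATE DATABASE TestDB_Snapshot
    ON (NAME = TestDB_Data, FILENAME = 'C:\Data\TestDB_Snapshot.ss')
    AS SNAPSHOT OF TestDB;
    GO
    -- ...run the unit tests against TestDB...
    -- Revert to the known state, then drop the snapshot.
    USE master;
    RESTORE DATABASE TestDB FROM DATABASE_SNAPSHOT = 'TestDB_Snapshot';
    DROP DATABASE TestDB_Snapshot;
    GO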
Gert Drapers: So we approach that from another angle: in combination with database unit testing, we introduce another feature, the data generator. We actually have a facility that allows you to generate meaningful test data. When I say meaningful, it means we understand the relationships in the schema and the domain constraints, and we allow you to generate data values in a repeatable fashion. What that means is you create a data generation definition that's actually repeatable, and because it's repeatable it's useful from a unit-testing perspective. You can say, "I have my schema, I'm deploying the schema to the test server, now generate me a data set of size x," because you can vary the size in terms of rows or how big the database needs to be. Then you say, "Run the unit tests against that." That way you have not just unit tests but also representative test data that mimics production data very closely, without the security or privacy concerns of the production data, but with a very close correspondence in the distribution of the data, so the query plans are going to be very close — which is normally the problem with generated data. You can generate data, but it isn't distributed like the production data, so the query plan might be way off.
Greg Low: Really interesting, because one of the things I've often sat and looked at is that whenever people are doing unit testing, by nature they test the boundary conditions and not so much the standard conditions. What had me intrigued — sitting back thinking about an old programming language — is that one of the things in Simula is that every time you got to a decision point, like an "if" statement, you could write the odds in there, the likelihood of taking each of the branches. I started to think in terms of testing, like load testing, whether in an attribute-based way you could somehow define the relative distribution of data that would occur in a test. But anyway, that's a thing for another day. I just thought it would make an interesting adaptation where you could push on the load testing as well to get the right proportions of data and still make sure you test the boundaries.
Gert Drapers: We look at two dimensions. One is ratios — purely the row-count ratio between tables. We understand what your table relationships are and therefore we can infer what the ratios are, so we can say, "Every order has on average ten order lines." We can infer that level of information. At the column level we abstract the statistics histogram from SQL Server, and we try to generate the same value distributions for a specific column according to the statistics information we pulled from SQL Server. That way we get very close to giving you a data set that has the relative distribution characteristics of the production data. It's very close for most data types; some are a little challenging — CLR UDTs, for example, where right now we're still figuring out how to do this, and the XML data type is a bit more complex, so we don't have that either, but…
Greg Low: You may need to get the developers to help a bit there. With a UDT you could apply an attribute or a method or something that generates what you want.
Gert Drapers: That’s definitely possible.
Greg Low: I often think somebody writing UDTs is in a good position to build what you need to generate sample data.
Gert Drapers: True, as long as they have some notion of the distribution. At that point it would be very helpful, yes. It makes sense, yeah.
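The raw material Gert mentions is visible directly in SQL Server; a sketch of the kind of information a generator could draw on (the table, index, and column names below are just examples):

    -- Column value distribution recorded by SQL Server for one statistic.
    DBCC SHOW_STATISTICS ('Sales.SalesOrderDetail', 'IX_SalesOrderDetail_ProductID')
        WITH HISTOGRAM;
    -- Row-count ratio between a parent and child table ("about n lines per order").
    SELECT (SELECT COUNT(*) FROM Sales.SalesOrderDetail) * 1.0 /
           (SELECT COUNT(*) FROM Sales.SalesOrderHeader) AS AvgLinesPerOrder;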
Greg Low: So, in the product at the moment, what are the main pillars of Data Dude — or of Visual Studio Team Edition for Database Professionals?
Gert Drapers: It all pivots around the database project. Yes, there is a database project in Visual Studio today, but in reality no one uses it; it's just a container of files and connections. What we've created is a database project which holds the whole schema as DDL fragments. So if you have a database today — you have a schema today — you'd say, "Create a new project," and you'd reverse engineer the schema into the project. What happens at that time is we take every object in the database, generate the smallest possible DDL fragments, and store them as tiny .sql files in the project container. While doing this, we're parsing all the SQL statements. We build an understanding: "Hey, this is a table. It has columns, a column has a name and a type." But we're also looking into the body text of, for example, stored procedures, so we can do interesting things like, "Hey, you're referencing this table from this stored procedure." We know real dependencies between objects that SQL Server doesn't know of today. We even know facts like: you're assigning a column to a variable and the variable's type is different from the column definition. That's the level of granularity of understanding we have of the schema, because we completely parse it and understand it down to the variable and data type definitions — everything that goes into that schema.
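As a rough idea of what those fragments look like, each object becomes its own small .sql file in the project; the file and object names below are purely illustrative:

    -- dbo.Customer.table.sql
    CREATE TABLE dbo.Customer (
        CustomerId int           NOT NULL PRIMARY KEY,
        Name       nvarchar(100) NOT NULL
    );

    -- dbo.GetCustomer.proc.sql (the project records that this depends on dbo.Customer)
    CREATE PROCEDURE dbo.GetCustomer @CustomerId int
    AS
        SELECT CustomerId, Name
        FROM dbo.Customer
        WHERE CustomerId = @CustomerId;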
Greg Low: I did put a suggestion on the Ladybug site which came from a student I had in a course a while ago. He was keen to see something similar to what you have in Oracle: the ability to declare a variable as a column's type instead of as a specific type.
Gert Drapers: I would love that feature. I've been bugging a lot of the engine developers who would be responsible for it to do this. Because we were always involved with customer projects, we kept running into features that, from their perspective, are low-end but are high-value for the customer — defining parameters or variables as a column's type, or simple things like "create or replace", another construct which people really like. There are a bunch of these constructs we'd like to see in the engine.
Greg Low: Glad to hear you’re keen on it. *laughs*
Gert Drapers: Once you have everything inside the project, that's now your source of truth — everything pivots around the project. Optionally you put it under source control so you can start tracking changes. Then we have tooling around this to make changes. A new tool is the refactoring tool, which allows you to refactor code. There are different types of refactoring and we're a long way from supporting all the variations, but the main thing we allow is, for example: take a column inside a table that's referenced inside a function, a view, a foreign key definition, and inside another table. We will make the change for you as an atomic change, in an all-or-nothing fashion, in all of those places. You can preview the changes: "Did we do the right thing? Did we identify the right places?" That's the way you can make changes to the schema. We've also reworked the SQL editor inside Visual Studio — for those of you who have used it, you know it deserved the label "lame". It wasn't doing what people expected and didn't handle outputs very well, so we're bringing it up to par with the SSMS execution environment: it has grids, outputs, client statistics, all the bells and whistles the SSMS execution engine has, but truly inside Visual Studio, with advantages like keyboard mapping and macro recording — all the aspects of being inside the IDE help us as well. We have a whole facility for unit testing and test data generation, two other parts of the feature set. Schema compare and data compare are other features, and of course, at the end of the day, when you're done developing the schema, the last two features that finish it off are Build and Deploy. What Build does is take the information inside the project and build SQL scripts. One represents the creation of a new database — that of course is the easy approach: it collects the fragments in the project and puts them in the right dependency order in a script file. It also allows you to say, "I have a server over there with database x; go update that and make sure its schema is in sync with what I have in the project." In that case, we're building an incremental update script for you. That feeds into the Deploy step, and the reason the two are separate is that there might be a security boundary: you might need different credentials, or a different person might be the one who deploys the script you generated. Then we're full circle. That's what we are about in V1.
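A hand-written sketch of the kind of incremental update script the Build step produces — not the tool's actual output, just the shape of it, with hypothetical object names:

    -- Add the new column only if the target database doesn't have it yet.
    IF NOT EXISTS (SELECT * FROM sys.columns
                   WHERE object_id = OBJECT_ID('dbo.Customer') AND name = 'Email')
        ALTER TABLE dbo.Customer ADD Email nvarchar(256) NULL;
    GO
    -- Re-create dependent objects in dependency order after the table change.
    IF OBJECT_ID('dbo.GetCustomer', 'P') IS NOT NULL
        DROP PROCEDURE dbo.GetCustomer;
    GO
    CREATE PROCEDURE dbo.GetCustomer @CustomerId int
    AS
        SELECT CustomerId, Name, Email
        FROM dbo.Customer
        WHERE CustomerId = @CustomerId;
    GO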
Greg Low: Well listen, it's probably a good point to take just a short break.
*Break*
Greg Low: So, welcome back from the break. First up, what I might get you to do is tell us a bit about yourself — hobbies or things you're interested in?
Gert Drapers: Where to start? Probably the biggest hobby is called my three kids. They take up all the time, which is great, but they deserve a lot of attention. I have twins who are almost three now and a six-year-old, and they consume a lot of time.
Greg Low: That’s a handful, yep.
Gert Drapers: If I'm not doing that, I like to cook. I spend a lot of time cooking, preparing dinner, figuring things out. Going through my wine collection is the next related activity, which comes in handy if you like cooking. I like to make some music, but I don't really have enough time to do that…
Greg Low: Music, what do you play?
Gert Drapers: I used to play bass. But the problem with bass is, if you're alone, playing bass is really lonely; it doesn't make a lot of sense. I used to play in a band. I've switched back to guitar actually, but I still want to go back to bass, because that's where my roots are and that's what I understand — actually I'm not good at guitar. Then I have a keyboard that I play, but it's just for my own entertainment really. I'd like to be in that pretty "off the road" band that the company plays in…
Greg Low: Ah, the "Band on the Run".
Gert Drapers: Ah, thank you for correcting me.
Greg Low: I was trying to think of what Carl was talking about on .NET Rocks! — talking about Band on the Run. Indeed. What I'm interested in is how effective the column renaming and things like that are, because people are a bit coy about things like sp_depends, since there are ways to fool it and so on. I'm just wondering, how effective is the renaming?
Gert Drapers: So, I can claim it's 100 percent effective, because we can't be fooled — that's why we parse the T-SQL ourselves. We don't rely on sysdepends information. Objects can be created out of order, and therefore sysdepends will always have the opportunity to be wrong. Since we parse every object and look inside the statement text of the objects coming out of syscomments — views, triggers, procedures, and functions — we understand the relationships inside the schema. So within the database we are 100 percent right with regard to renaming the right thing. Even if you had a column C1 and a table C1 and you renamed the column, we would not rename the table C1, because we know from context that this is the column C1 and not the table C1. We know that level of difference; it's not a blunt search-and-replace. Rename refactoring actually goes beyond what's inside the project schema. We also allow you to put script files inside the project container, and if you indicate that the context of the script file is that user database, we can refactor into that script as well. We'll tell you, "Inside this script you have a select statement that's referencing the column; do you want to update this?" We'll also drag it into your data definition files and refactor it into your unit tests. This is just a preview of where we're going. You can imagine down the road we can say, "You're making a change in a column definition; this affects application code," and we can push that to the other side of the house: "Hey, Mr. Application Developer, we have a schema change for you that you need to pick up." That's what the future's going to look like.
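The out-of-order creation problem Gert mentions is easy to reproduce; a small sketch (object names are hypothetical):

    -- Deferred name resolution lets a procedure be created before the table it references,
    -- so sysdepends records nothing for it and sp_depends can't see the dependency.
    CREATE PROCEDURE dbo.GetInvoices
    AS
        SELECT * FROM dbo.Invoice;   -- dbo.Invoice doesn't exist yet
    GO
    CREATE TABLE dbo.Invoice (InvoiceId int NOT NULL PRIMARY KEY);
    GO
    EXEC sp_depends 'dbo.GetInvoices';   -- reports no dependencies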
Greg Low: Interesting. One of the things raised on a newsgroup I was on was the fact that even if you do this, you've then got the issue that people have CLR-based procs or things in assemblies and so on, which typically have the names of objects in them. Again, the ability to find and replace those will be a challenge. The other one I was looking at — I sat and watched with interest Pablo Castro and the guys working on ADO.NET 3.0 or ADO.NET 3.5, or whatever the next version is, who put a video up talking about a mapping layer that lives in XML configuration files outside the database. I'm sort of wondering, if you have things like column mappings carried in a layer outside the database, then that's another challenge…
Gert Drapers: Exactly right. We just started working with Pablo and his colleagues on how that integration should be dealt with from a schema-change perspective, because they are mapping entities onto classes, the classes are used by the application developers, and the entities do the mapping between the entity and the underlying database. We know about changes in the database, so for us it's trivial to push that into the entity layer, which could then be reflected in the class definitions. We're actively talking about how that problem should get resolved. We're completely aware that this is the next thing we need to start looking at and bring to life. We're there.
Greg Low: What intrigued me was one of the arguments they were making for having the mapping layer outside the database. I was struck by the fact that 99 percent of what they were talking about could have been done inside the database rather than in a mapping layer, except when it traversed multiple databases. What had me intrigued was that they were saying, "If the DBA changed all these things, then you wouldn't have to change the application, because we could change the mapping layer." But then you've got the DBA changing things in one spot and somebody else responsible for the mapping layer, whereas if the mapping had been done via procs or views inside the database, then one person making the change would have the whole view of the change. That made me a little nervous. I've been hoping to sit down with Pablo — hopefully he'll be at TechEd — I'd like to see what his thinking is there.
Gert Drapers: I'm not exactly sure what the answer is on that specific question. I do think that ultimately it can't be such that one person sits on the whole change. The biggest problem right now, even if you take it back to today's technology like typed DataSets, is that you already have a problem in terms of the dependency on the XSD file that you use to generate the classes from, which needs to match the table definitions inside the database. With these technologies there's already a fair amount of disconnect. If you want to support that inside the development life cycle — whether it's one person doing everything on his own notebook or a development team with separate responsibilities — that scenario needs to be supported from an end-to-end perspective, not just as one piece of the puzzle. That's what we're trying to do in Visual Studio Team System: piece all the links together so that, no matter whether it's a single person or three people or three roles of responsibility, you can seamlessly bridge these inside the development life cycle.
Greg Low: What I was thinking, though, is that even today with a typed DataSet and its XSD, the issue is that it's tied to tables. If there were an abstraction layer, like stored procs or views, at the top end of the database, and that's what the typed DataSet were tied to, then you wouldn't have the dependency on the tables and so on underneath. You could literally change all of those without changing…
Gert Drapers: You would have one extra level of indirection.
Greg Low: Yes. Which many do.
Gert Drapers: As they commonly say, we solve every problem by adding one more layer of indirection.
Greg Low: *laughs* I'm just thinking of places where people don't have direct access to the tables, only to stored procs and views. They don't tend to have that problem anyway, because you can change the structure of a table in many ways without affecting the proc or view. So…
Gert Drapers: True to a certain degree, but it tends to become the same problem as interface versioning. At a certain point, depending on where you want to make the change, you will have a problem somewhere. It's like: do you have a process to do this in a structured fashion? We thought about what the next steps in refactoring would be. We have rename refactoring; you can certainly think of type refactoring, which is another obvious one; but then you get into, "Are normalization, denormalization, or splitting tables refactorings?" Yes, they are. At that point you also start to deal with, "If I'm going to take a table and split it up into two tables, am I automatically going to generate a view that takes the place of the original table as a fallback facility for the application, so it doesn't break?" That's the type of strategy we're trying to think through now: what comes next after this? What approaches are we going to put in place for people to do this in a safe way? If we provide them with a mechanism and say, "Take the table, split it in two; this is the key, so duplicate the key and make sure the links are there," are we also going to generate that old-style view that represents the old world, so that if there were references to the old objects, they don't immediately have to be fixed up?
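A sketch of the split-plus-fallback-view strategy Gert describes, with made-up table names:

    -- The original dbo.Customer table is split in two...
    CREATE TABLE dbo.CustomerCore (
        CustomerId int           NOT NULL PRIMARY KEY,
        Name       nvarchar(100) NOT NULL
    );
    CREATE TABLE dbo.CustomerAddress (
        CustomerId int          NOT NULL PRIMARY KEY
            REFERENCES dbo.CustomerCore (CustomerId),
        City       nvarchar(60) NULL
    );
    GO
    -- ...and a view with the original name keeps old references working.
    CREATE VIEW dbo.Customer
    AS
        SELECT c.CustomerId, c.Name, a.City
        FROM dbo.CustomerCore AS c
        LEFT JOIN dbo.CustomerAddress AS a ON a.CustomerId = c.CustomerId;
    GO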
Greg Low: Yeah. And I suppose more than that, where you've got procs, it might require intricate changes to the procs themselves. Can you make that, yeah…?
Gert Drapers: That’s where refactoring becomes really interesting.
Greg Low: Big challenges, that’s good.
*Break*
Greg Low: What about testing? How far is the testing in the product?
Gert Drapers: Sorry can you repeat that?
Greg Low: I'm wondering how far you've got with facilitating testing in this version?
Gert Drapers: We can basically test every schema object inside the environment, and you basically have two choices. We went back and forth on "How do we want to do database unit testing?" There's one class of users that says, "My only language is T-SQL. That's what I live in and understand, that's what I'm comfortable with." We made very sure we delivered an environment where this user can create his unit tests. We allow him to write a suite of unit tests using only T-SQL: writing test assertions, server-side test assertions, or point-and-click assertions inside the tool. He can set up a whole test environment for his procs, generate skeleton frameworks for procs, triggers, and functions, but also test inserts, updates, and deletes if he wants to. It's his T-SQL, plus assertions about what the results of the statements should be. We even have assertions about things like, "We anticipate this statement returns in x number of milliseconds" — so there are even performance assertions in there…
Greg Low: Performance assertions are wonderful. Yeah.
Gert Drapers: You can say, "This stored procedure really has to run in five milliseconds," and if it doesn't meet that bar, the test will fail. The test infrastructure is an extension of the existing enterprise testing infrastructure in Visual Studio Team System, which is C#-based. Users who are familiar with that can take it one step further and say, "I'm going to deep-dive into the source code," because you can dive into the generated source code behind the unit tests and do additional work there. You can make the test environment completely data-driven — that's a standard feature of the unit testing environment — or fan it out between multiple test servers. It's a very complete database unit testing environment. The thing we wanted to do more of, but don't have time for, is putting in more test assertions. Right now we have simple assertions like "The return code of the proc should be x, the output variable should be y, it should return ten rows." We wanted to do things like "Go validate that the result set you sent back is actually this result set" — a deep comparison between result sets — and that's something we couldn't do for this release.
Greg Low: What about schema-wise? When I call this proc, can you say, "These are the columns I should get back"?
Gert Drapers: Yes.
Greg Low: Okay, so not just number of rows, yep.
Gert Drapers: Yep, those are assertions you put on output parameters. As long as it's in the singleton area, we're able to cope with it right now; we don't do a great job of handling the result set, although you could do that yourself if you're proficient in C# or VB.NET. One of the things we might do if we have extra time is a checksum assertion, so you could actually say, "The checksum of the result set, or of the row I get back, is x." That's one of the things we're leaning toward.
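A T-SQL-only test along the lines Gert describes might look like the sketch below; the procedure name, expected values, and the checksum baseline are all hypothetical:

    DECLARE @start datetime, @rc int;
    DECLARE @result TABLE (OrderId int, OrderDate datetime, Total money);

    SET @start = GETDATE();
    INSERT @result EXEC @rc = dbo.GetCustomerOrders @CustomerId = 1;

    -- Return-code and row-count assertions.
    IF @rc <> 0
        RAISERROR('Expected return code 0, got %d', 16, 1, @rc);
    IF (SELECT COUNT(*) FROM @result) <> 10
        RAISERROR('Expected 10 rows back', 16, 1);
    -- Performance assertion.
    IF DATEDIFF(ms, @start, GETDATE()) > 5
        RAISERROR('Procedure exceeded the 5 ms budget', 16, 1);
    -- A cheap stand-in for the result-set comparison: checksum against a known baseline.
    IF (SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM @result) <> 123456789
        RAISERROR('Result set checksum differs from the baseline', 16, 1);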
Greg Low: What about build and deployment then? What will you get in the product for that?
Gert Drapers: Build takes all the fragments we have inside the project and constructs them into deployment scripts. There's a notion of pre- and post-deployment scripts you can manually insert in there. That's also where you can put security — adding logins, for example, or adding users to the database and granting permissions — things you need to do via the deployment scripts. What Build really does is construct the creation of the schema objects and build a single script that you can then run against a database or server to create the database or to update it. Creation is fairly simple; it understands things like all the SET options, collation options, full-text options — all the SQL aspects are supported. You no longer have to worry, "Are my SET options consistent?" If you specify SET options at the project level, we make sure they get propagated consistently across the whole script. The same with collation: if you specify it at the project level and don't override it at the table or column level, there's no need to ever change it or worry about keeping it consistent. If you do an incremental update, at construction time the Build step compares the current state of the target server and builds the script on the fly, with the goal of getting in sync with the information you have in the project. There are two approaches. One is "I really want to be in sync," which also means objects I don't have in the project should get removed — that's not the default. The default is, "Here are the objects; if they exist, they need to get altered into the new state; if they don't exist, they need to get created." Existing objects don't get dropped if they're not in the project, but you can choose that option if you really want a full sync. Then we have the schema diff, of course, for schema objects, which allows you to visualize, "Are the databases really in sync? Are they reflecting the same schema state?" That's how all these pieces fit together.
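The SET-option handling Gert mentions shows up in a generated script as a consistent header in front of each object; a trimmed-down, hypothetical example:

    -- Options pinned from the project settings, repeated consistently per object.
    SET ANSI_NULLS ON;
    SET QUOTED_IDENTIFIER ON;
    GO
    CREATE PROCEDURE dbo.GetOrderTotal @OrderId int
    AS
        SELECT SUM(LineTotal)
        FROM Sales.SalesOrderDetail
        WHERE SalesOrderID = @OrderId;
    GO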
Greg Low: That's magic. Well, I'm really looking forward to seeing the whole thing. I did see a short webcast on it the other week which was very interesting, and certainly a lot of us are excited about it. So, what have you got coming up? Are you going to talk about this at TechEd?
Gert Drapers: So TechEd, that's really next week already. We're going to have our first real public appearance — we're in the keynote for a few minutes — and then we're in a couple of other sessions. We're in Dave Campbell's session that's going to talk about the database vision, the data vision for Microsoft, which we are part of, and then we have our own sessions that talk about the product at various stages — I think in four different sessions. We're available to talk to people at the booths, and people can come and grab a CD or download it. That's what's coming in the first week. At the development team level, we're in a stabilization phase, where for the next four weeks we're working on stabilizing and fixing problems, and we'll re-release a CTP at some point, so after that there will be a new CTP. Then we go back to feature work. One of the reasons we do it this way is that we're releasing CTPs and we want input from users. So instead of doing all the bug fixing and stabilization at the end, we do it in the middle, and we try to stay at a high-quality, stable bar, so we also have time to take feedback from users and at least have a shot at incorporating it. That's why we'll have a very small number of CTPs, and then, if everything goes well, at the end of the year it should be available for everybody to use in final form — which was one of the main attractions for me in joining this team. As I wrote on my blog, I started in the team July first, 2005, and we started developing September first. We will be done this year and have a V1 product from scratch in nine months, really, because the real development began in January. That's the exciting part. We will be busy, which is good. We're already getting a lot of feedback, but I want a lot more feedback — people, connect to my blog and post your questions; that would be great.
Greg Low: Looked like you had a really good team of people doing the work as well so I was really pleased to see that. That’s great.
Gert Drapers: One familiar person is Richard Waymire, who is on our team; he's a familiar face in the SQL Server community…
Greg Low: Indeed. Our friend Richard hasn't been out to Australia for a while, but he used to come out occasionally, so we certainly got to know him some years back; he's a great guy. It's good to see those people on the team — I certainly had high hopes for the team, so that's outstanding. So apart from that, is there anything else coming up in your world you want to mention? I presume this is the main thing?
Gert Drapers: My world revolves around what's going on in Visual Studio land. Of course we're working on the next release of Visual Studio, but that all goes on in parallel; this is the main thing we're trying to get done first. V1 makes SQL Server a first-class citizen in Visual Studio, and that's our main priority right now.
Greg Low: That's great. Well listen, I'm really looking forward to it; I will certainly be at the sessions at TechEd unless I'm dragged off somewhere else to do something. I must admit I mentioned on my blog that I drew the short straw — I've got the 8:00 a.m. Thursday speaking slot, so… *laughs* I'll probably have a tame crowd that morning. Fortunately I'll be back in Seattle the following week for the internal lab on this stuff. I don't know if you'll be there for that, but…
Gert Drapers: I’ll be there, and I’ll see you there.
Greg Low: I’m really looking forward to getting some time with this product so that’s outstanding. So listen, thanks for spending your time with us today Gert, that’s just outstanding.
Gert Drapers: No problem it was a pleasure.