Code Sample | TFS | TFS 2010
September 09, 2011 7:51 AM
TFS is a great tool – don’t get me wrong – but there are some things conspicuously missing from TFS out-of-the-box and the TFS Power Tools. One of these is a way to update work items published across a large number of projects. Certainly, you can use the witadmin command-line tool to import work items or, if the command line frightens you, you can use the Work Item Import tool in the TFS Power Tools. While both of these work fine, they only import a single work item into a single project. And here’s where TFS fails the TFS administrator who is responsible for an implementation with many projects – you could spend all day importing a single work item across 100 projects. And if you have to import multiple work items? Well, there goes your vacation.
So … let’s break this down. You have a highly repetitive task that differs by only a couple of variables and that you want to do over and over and over again. Sounds like a good application for our old friend, the looping construct, right? In fact, it certainly is. Let’s start with what we need to do to import a single work item into a single project:
private void ImportWorkItem(string workItemFilePath, Microsoft.TeamFoundation.WorkItemTracking.Client.Project project)
{
    // Read the raw XML for the work item definition.
    string workItemXml = System.IO.File.ReadAllText(workItemFilePath);

    // Import the definition into the target project.
    project.WorkItemTypes.Import(workItemXml);
}
Really … it is that simple.
The fun part comes when you want to take this simple concept and wrap it up in a nice pretty bow that allows a TFS Admin to update multiple work items across multiple projects. It’s not hard … except for the fact that I am UI-challenged, and building anything that is both functional and somewhat usable is not my strong suit. But … I did manage to get it done. Here it is:
Running the tool is pretty straightforward. Start the EXE. It will then prompt you to select a Team Project Collection, which gets you to this screen. Use “Browse…” to select the work item files that you want to import. These should be in the same format as you would use in the project template. Then you select the projects that you want them imported into. Hit OK and away it goes. It will take a few minutes, so go get yourself some coffee. Or Mountain Dew. Then relax and surf the web a bit … after all, you did tell the boss that this was a tedious, time-consuming process, right?
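Under the hood there is no magic – it is just the looping construct mentioned earlier, wrapped around the single-item import. As a rough sketch (the method and collection names here are illustrative stand-ins for the tool’s UI selections, not its actual code):

```csharp
using System.Collections.Generic;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

// Hypothetical sketch: run the single-project import across every
// combination of selected work item file and selected project.
// ImportWorkItem is the method shown earlier; the parameter names
// are illustrative, not the tool's actual code.
static void ImportAll(IEnumerable<string> workItemFilePaths, IEnumerable<Project> selectedProjects)
{
    foreach (Project project in selectedProjects)
    {
        foreach (string filePath in workItemFilePaths)
        {
            ImportWorkItem(filePath, project);
        }
    }
}
```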
You can download this tool on MSDN Code Samples.
Community | Events | TFS 2010 | User Groups
October 01, 2010 10:35 AM
It’s almost time for TechFest again … it’s amazing how quickly the time flies! It seems like only yesterday that I told Michael what my sessions would be, thinking, of course, that I’d have plenty of time to do them. No, it wasn’t yesterday either – it was a few months ago. But … I’m just now starting to do the presentations. While there is a blurb about the sessions on the TechFest site, I also wanted to post them here – with some more information.

Customizing TFS Process Templates – Level 300

Or maybe it’s level 400. This is not going to be your everyday talk about customizing process templates; there is plenty of that out there. While I may mention (and even, very briefly, show) the TFS Process Template Editor Power Tool, that’s not what I’m going to focus on. I’ll be digging into the core of what makes these templates work and how they are configured behind the scenes. This means that we’ll be playing in the XML behind the templates. We will look – deeply – at how the templates are constructed and where all of the pieces are. I’ll also share some code utilities that I’ve developed to help create process templates from an existing Team Project; specifically, I’ll show code to export work items and work item queries. I’ll also mention how to do custom plugins for the Project Creation Wizard, allowing you to really “kick it up a notch!”

Team Foundation Build 2010

With Team Foundation Server 2010, the build services got a complete redesign, moving away from an MSBuild-based build process to a more flexible and extensible one based on Windows Workflow Foundation 4.0. I will talk about TFS Build architecture and how to configure build controllers and agents. From there, I’ll be digging into the build process templates: how to customize and extend them with out-of-the-box activities as well as custom activities, plus some tips and tricks about how to manage extensions and your build environment.
Finally, I’ll talk about the upgrade path from TFS 2005/2008 builds (yes, there is one) and some of the gotchas that you may experience along the way. I’ve not yet decided which activity to use as an example, but I’m leaning towards the one that I wrote to change the build workspace as part of the process. Or the zip activity. Or both? We’ll see.

Keep in mind … this is the kind of stuff that I do all day, every day, day in and day out. Those of you who know me also know that I’m not one to settle for scratching the surface; I get deep into the technical aspects … these sessions will be no different. They will not be your typical TFS sessions. There will be code – and a goodly chunk of it – that works with the mysterious and very poorly documented TFS APIs.
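As a small taste of the export utilities mentioned above, a minimal sketch against the TFS 2010 client object model might look something like this (the class and parameter names are illustrative assumptions, not the actual session code):

```csharp
using System;
using System.Xml;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

// Hypothetical sketch: export a work item type definition from an
// existing Team Project to an XML file, ready to drop into a
// process template.
static class WorkItemTypeExporter
{
    public static void ExportType(string collectionUrl, string projectName,
                                  string typeName, string outputPath)
    {
        var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri(collectionUrl));
        var store = collection.GetService<WorkItemStore>();
        Project project = store.Projects[projectName];
        WorkItemType type = project.WorkItemTypes[typeName];

        // Export(false) leaves global list definitions out of the XML.
        XmlDocument definition = type.Export(false);
        definition.Save(outputPath);
    }
}
```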
TFS 2010 | TFS Build
September 15, 2010 10:01 AM
Since Microsoft doesn’t seem to be interested in publishing docs on the built-in TFS Build 2010 activities, I’ll be publishing blog posts about activities that I’ve worked with and/or experimented with. The titles will be the full name of the activity class, in the hope that they’ll come up near the top in searches. The information here is the result of analyzing the activity with .NET Reflector, plus testing and experimentation in the context of a TFS 2010 Build Server, and should not be construed as “official documentation”.

Description

This activity expands environment variables specified in the standard MSBuild property syntax. The syntax is similar to an MSBuild property: for example, to expand the %WinDir% environment variable, use $(WinDir), which will be expanded to the Windows directory on the current machine. Note that this may be different for the controller and the agent and will reflect the current machine context based on where it is executed in the build process. Because of that, be careful if you evaluate or cache results between these contexts. Also note that this activity only includes standard environment variables by default and does not include any default MSBuild or TF Build (pre-2010) variables. Internally, it uses a regular expression to identify tokens of the form $(VarName). It first tries to resolve each match using System.Environment::GetEnvironmentVariable; if that fails, it then looks in the dictionary supplied in the AdditionalVariables property.

Comments

While this activity does have its uses, some key variables are missing – for example, any and all of the variables that were available in TF Build prior to 2010. This makes it very difficult to have a single template that can be used across many projects.
To help with this, you can use the AdditionalVariables property and add any (custom) expansion variables that you want, making this activity far more useful. I will be creating activities to do this that will be available as part of the Community TFS Build Extensions and may also be available separately. I’m planning a single, code-based activity that handles one variable at a time as well as a composite activity that handles the standard TFS Build 2008 properties. In the meantime, you can download an example build process template here. It adds a couple of those TF Build <= 2008 standard properties as AdditionalVariables and also shows how to call a standard environment variable. Once you figure this activity out, it’s pretty simple and can be highly useful. However, the lack of documentation (are you listening, Microsoft DevDiv?) makes it far more complicated than it really needs to be. One should not need to use Reflector to understand how a Microsoft-supplied component works.
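To make the mechanism concrete, here is a rough approximation in plain C# of what the activity appears to do internally, based on the Reflector analysis above (the class and method names are mine, not the activity’s):

```csharp
using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;

// Hypothetical sketch of the expansion logic described above:
// match $(VarName) tokens, try the environment first, then fall
// back to an AdditionalVariables-style dictionary.
static class ExpansionSketch
{
    public static string Expand(string input, IDictionary<string, string> additionalVariables)
    {
        return Regex.Replace(input, @"\$\((?<name>[^)]+)\)", match =>
        {
            string name = match.Groups["name"].Value;

            // First chance: a real environment variable.
            string value = Environment.GetEnvironmentVariable(name);
            if (value != null)
                return value;

            // Second chance: the caller-supplied dictionary.
            if (additionalVariables != null && additionalVariables.TryGetValue(name, out value))
                return value;

            // Unresolved tokens are left as-is in this sketch.
            return match.Value;
        });
    }
}
```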
TFS 2010
March 12, 2010 7:39 AM
I went to the “Next Generation Testing with Visual Studio 2010” event in Houston yesterday. And … while it is somewhat strange going to a Microsoft event where I’m a “mere attendee” … I did really enjoy the experience. There was a TON of information there and, for me, one of the coolest technologies revolved around Lab Management and the automation of Hyper-V based environments. Abel (I hope that I spelled his name correctly) talked very deeply and extensively about his experience working with Lab Manager; very good stuff, as he talked about the real-world challenges around it. Plus he’s a biker … he and I chatted for quite some time after the event about motorcycles and motorcycling and that, in itself, is worth 20 points (out of 100) in my book.

One thing that he mentioned was the cost of copying VHDs. There is only so much that can be done about that but … it struck me … as he was talking about it, it seemed that he was using his test servers as AD controllers. Maybe not … but the expense of AD controllers, etc. … was mentioned. And yes, little things like AD controllers (and DNS and DHCP) are very important when creating a dev/test environment.

And then … and this may be just me … I like to keep dev/test environments as isolated as possible from production environments. A dev/test environment is meant to be blown up. It should be a place where crazy what-if scenarios (as well as normal testing) can be tried and evaluated without any fear of disrupting real work. This includes AD domains … the test environment should have its own, dedicated AD. If there is a trust, then it’s a one-way trust and all traffic goes through a firewall between the test and production environments. For Lab Management to work effectively and smoothly, you’ll need to do this. This type of environment is hard to set up. TFS 2010 certainly makes the test virtual servers easy to set up and configure, but it doesn’t seem to help with the core networking services that need to be there.
But then, if I were a PM on that team, I’d put that task well “out of scope”. Probably forever … putting that “in scope” would require dealing with a core network environment that is completely unpredictable. That’s a recipe for Ugly.

But … configure the core network services and it’s all easier. Core network services include Active Directory (authentication and authorization), DNS (name resolution), DHCP (IP address independence) and RRAS (routing outside the test network). These are core network services that developers just expect to “work” – as well they should – but most developers never actually work with them. I learned this stuff “way back in the day” when you needed to understand the details of the underlying specs (can you say IUnknown?) to debug stuff. Fortunately, development has moved beyond the point where that is important on a daily basis. So … some details …

Windows Server 2008+ Server Core

It doesn’t matter if it’s Win2K8 or 2K8 R2 – Server Core is your friend for any and all test environments. Create a virtual machine with Server Core and you’ve got AD, DNS and DHCP services for your test environment at a pretty low (virtual machine) cost. I run my Server Core VM at 256MB … I’ve tried less but it didn’t work well; 256MB seems to be the practical lower limit. On this Server Core VM, add Active Directory Services, DNS Services and DHCP Services. It’s all command-line stuff that’s pretty well documented on TechNet. Connect to the Server Core instance with the remote administration tools and set up your DNS and DHCP environments. Dealing with IP settings on individual machines with static addresses is far too painful and time-consuming for an effective test environment. Plus, I can never remember what IP is where anyway. Then, set this little Server Core VM to run all the time. Set all of your virtual machines to use your internal network with DHCP enabled.
If you do want a static address, use DHCP to create a reservation that assigns the same IP address to the same NIC based on its physical (MAC) address.

SysPrep

For your virtual machines, Sysprep is your friend. Setting up the core OS can be a pain, especially when the same roles and role services are needed over and over and over and over again. Wouldn’t it be nice to just do all of this common stuff once? Sysprep lets you do that … set up your server roles (Application Server, DNS, DHCP, IIS …) and then run sysprep (%WinDir%\System32\Sysprep\sysprep.exe). Select “Enter System Out-of-Box Experience” with “Generalize” checked. Save the VHD and use it as the base VHD for your test servers – not differencing disks, but a copy-rename-attach process. Then use your sysprep’d VHD to start your test virtual machines. They will go through a mini-setup and then you’ll have a running server that’s ready for its specific functionality in just a couple of minutes.

Tip: when you are creating your sysprep’d image, make sure that you apply all of the updates before sysprepping. Applying them won’t break anything, and it’s helpful to have them already in place. And … you can also install some software before sysprepping. Antivirus, Visual Studio and Office are my typical candidates; I do try to avoid server products (e.g., SQL Server, MOSS) in the sysprepped image. Be prepared to rev your base sysprep’d image from time to time, usually with just new updates.

RRAS

RRAS stands for “Routing and Remote Access Services”. In Windows Server 2008, this is under “Network Policy and Access Services”. It allows you to bridge a private virtual network to the public Internet. It also allows you to route between two different isolated test networks. Finally, you can use it to allow access to wireless networks from a Hyper-V virtual network.
Install NPAS on the host machine and set it up as either a regular router for a complex, enterprise implementation with domain trusts and the like or as a NAT router just to get Internet access for your virtual machines. Just to repeat … you can also use this to allow Hyper-V virtual networks to use a wireless connection for connecting to the Internet. All of this … I won’t say it’s simple. But it’s really not all that hard, either … there are a lot of “dotting i’s and crossing t’s”. It helps – tremendously – if you know the basics of how networking, TCP/IP and routing work. You do not need to know this in the depth required to implement a 30,000+ desktop Active Directory implementation; that’s a whole different ballgame.