WinDbg Commands

Posted in .NET | Debugging at Saturday, 24 July 2010 22:45 W. Europe Daylight Time

I just finished watching Ingo Rammer’s sessions on debugging from NDC 2010.

While I consider myself experienced in debugging with Visual Studio, I still didn’t know the Ctrl+B trick Ingo shows in the first session to create breakpoint groups, for example to break on all methods named WriteLine.

Ingo’s second session goes into detail on how to get started with WinDbg. During his talk, Ingo wrote down quite a lot of WinDbg commands, which I copied and extended a bit for my own reference.

# Use debugger according to architecture that is being debugged.

# Drag exe onto WinDbg to start debugging.

# Debugging services:
# 1. Using Global Flags
#  - On "Image File" tab, enter service exe
#  - Set debugger to cdb.exe -server tcp:port=1234
# 2. Start service
# 3. Start WinDbg
#  - connect to remote session: tcp:server=localhost,port=1234
# Also works (unsecured) over networks


.loadby sos mscorwks # CLR 2
.loadby sos clr      # CLR 4, both after the debuggee has loaded the CLR
.chain               # Shows loaded extensions

sxe <event code> # Stop
sxn <event code> # Notify
sxi <event code> # Ignore
# ... on <event code> exceptions (for example, <event code> = clr)

g            # Go
.cls         # Clear screen

!pe          # Print exception
!clrstack    # Display stack trace
!clrstack -a # Stack trace with additional information (parameters and locals)
# If there is no stack information, the JIT optimized the code away (e.g. by inlining).
!dumpstack   # Another way to get the stack trace

!u <address> # Unassemble code at <address>
# Look for calls into managed code (to the right) to find the line/call that caused the exception.
# <assembly>_ni = Native image

!do <address> # Dump object
!da <address> # Dump array
# To copy addresses: Left double-click a numeric value, double right-click to copy it to the command line.

~            # Show all (managed and unmanaged) threads
!threads     # Show managed threads
~2s          # Switch to thread 2 (#2 in the unnamed column)
!runaway     # Show thread execution times (user-mode) - to find hanging threads

!dumpheap    # Show heap information, 1 line per instance
!dumpheap -stat # Heap statistics, most memory-consuming at the bottom. MT = class "pointer"
!dumpheap -stat -type TextBox # Show instances of classes containing "TextBox"
!dumpheap -type TextBox # Dump all instances of classes whose name contains "TextBox"
!dumpheap -mt <MT>   # Dump all instances of the class with MethodTable <MT>
!gcroot <address> # Why is the instance at <address> in memory?
# Domain = new GC root that references <address> (~ static instance)
# Ignore WeakReferences, look for (pinned) references

# Create dumps from code:
[DllImport("DbgHelp.dll", SetLastError = true)]
static extern bool MiniDumpWriteDump(
    IntPtr hProcess,
    int processId,
    IntPtr fileHandle,
    int dumpType, // 0x0 or 0x6 for managed code
    IntPtr exceptionInfo,
    IntPtr userInfo,
    IntPtr extInfo);

How To Set Up A Git Server On Windows Using Cygwin And Gitolite

Posted in Git at Sunday, 28 March 2010 16:48 W. Europe Daylight Time

Updated on 2012-03-06 to reflect the changes to the Gitolite installation process.

For obvious reasons, a couple of weeks ago my team made the switch to Git. Hosting a Git server on Windows is by all means possible, and there are two options:

  1. Gitosis is a Python-based solution that provides basic Git hosting with per-repository permissions. Shannon Cornish has an excellent two-part guide on how to set that up.
  2. Gitolite, a Perl-based rewrite of Gitosis, is a more advanced Git server that has a lot more configuration options. For example, it’s possible to specify who is able to force a push to a Git branch, an operation that is possibly problematic when working in teams.

A notable aspect of both solutions is that repository configuration and permissions management is done through Git itself. Over time, you will build a versioned history of the server configuration. Without further ado, let’s get started!


You’ll see that we mostly have to deal with Cygwin and SSH. Gitolite’s installation is pretty easy and does not require a lot of work by itself. Getting the Windows server into a state where it can handle SSH takes most of our time.

  1. Installing Cygwin
  2. Connecting Cygwin to Windows Security
  3. Setting Up the SSH Server
  4. Enabling SSH Client Access
  5. Verifying SSH Password Access
    1. Creating Your SSH Identity
  6. Making the SSH Server Aware of Your SSH Identity
  7. Installing Gitolite

What You Need

  1. A Windows Server (I’m using Windows Server 2008 x86) with permissions to log in as an Administrator.
  2. An internet connection to download Cygwin.

Installing Cygwin

  1. Download the Cygwin setup program to C:\Cygwin and launch it. For this guide, I’ve used the current version 1.7.2.
  2. Select “Install from Internet”, click Next.
  3. Leave Root Directory as the default, C:\Cygwin, and install for all users. Click Next.
  4. Select C:\Cygwin\pkg as the Local Package Directory. Actually it doesn’t really matter what the directory is, you can delete it after the installation. Click Next.
  5. Select the Internet Connection you prefer: Direct, IE Settings or enter a manual proxy. Click Next.
  6. Select a mirror near your location, click Next.
  7. Acknowledge the “Setup Alert” warning about your installation.
  8. In the packages list, select the following packages by clicking the “Skip” text in the “New” column. Once clicked, the version that will be installed is displayed instead of “Skip”.
    • Net | openssh
    • Devel | git
    • Editors | vim
  9. Click Next and wait for the installer to complete.
  10. You may choose to add icons to the Desktop and Start Menu. Click Complete.

I recommend leaving the setup.exe in place, as you can use the installer to add, remove or upgrade Cygwin packages later.

Repeat the process on your local machine, this time with an extended set of packages to install:

  • Net | openssh
  • Devel | git
  • Devel | git-completion (optional)
  • Devel | git-gui (optional)
  • Devel | git-svn (optional, if you want to commit to SVN)
  • Devel | gitk (optional)

Connecting Cygwin to Windows Security

In preparation for the SSH server installation in the next section, we need to provide Cygwin with the means to impersonate an SSH user as a Windows user with public key authentication. You can read more about integrating with Windows Security in the Cygwin documentation.

  1. On the server, open C:\Cygwin in Explorer.
  2. Locate Cygwin.bat, right-click and choose “Run as Administrator”.
    Copying skeleton files.
    These files are for the user to personalise their cygwin experience.
    They will never be overwritten nor automatically updated.
    `./.bashrc' -> `/home/Administrator//.bashrc'
    `./.bash_profile' -> `/home/Administrator//.bash_profile'
    `./.inputrc' -> `/home/Administrator//.inputrc'
    Administrator@GIT-SERVER ~
  3. Execute /bin/cyglsa-config
    Warning: Registering the Cygwin LSA authentication package requires
    administrator privileges!  You also have to reboot the machine to
    activate the change.
    Are you sure you want to continue? (yes/no)
  4. Type yes.
    Cygwin LSA authentication package registered.
    Activating Cygwin's LSA authentication package requires to reboot.
  5. Reboot the machine.

Setting Up the SSH Server

SSH will encrypt and authenticate connections to your Git repositories, using public key authentication to check whether the user is permitted to access the server. Once the user has gotten past the SSH security check, Gitolite takes over handling the request.

When the Git server has finished rebooting:

  1. Open a new Cygwin Bash prompt by running C:\Cygwin\Cygwin.bat as Administrator.
  2. Execute ssh-host-config
    Administrator@GIT-SERVER ~
    $ ssh-host-config
    *** Info: Generating /etc/ssh_host_key
    *** Info: Generating /etc/ssh_host_rsa_key
    *** Info: Generating /etc/ssh_host_dsa_key
    *** Info: Creating default /etc/ssh_config file
    *** Info: Creating default /etc/sshd_config file
    *** Info: Privilege separation is set to yes by default since OpenSSH 3.3.
    *** Info: However, this requires a non-privileged account called 'sshd'.
    *** Info: For more info on privilege separation read /usr/share/doc/openssh/README.privsep.
    *** Query: Should privilege separation be used? (yes/no)
  3. Type yes.
    *** Info: Note that creating a new user requires that the current account have
    *** Info: Administrator privileges.  Should this script attempt to create a
    *** Query: new local account 'sshd'? (yes/no)
  4. Type yes.
    *** Info: Updating /etc/sshd_config file
    *** Warning: The following functions require administrator privileges!
    *** Query: Do you want to install sshd as a service?
    *** Query: (Say "no" if it is already installed as a service) (yes/no)
  5. Type yes.
    *** Query: Enter the value of CYGWIN for the daemon: []
  6. Just hit the Return key.
    *** Info: On Windows Server 2003, Windows Vista, and above, the
    *** Info: SYSTEM account cannot setuid to other users -- a capability
    *** Info: sshd requires.  You need to have or to create a privileged
    *** Info: account.  This script will help you do so.
    *** Info: You appear to be running Windows 2003 Server or later.  On 2003
    *** Info: and later systems, it's not possible to use the LocalSystem
    *** Info: account for services that can change the user id without an
    *** Info: explicit password (such as passwordless logins [e.g. public key
    *** Info: authentication] via sshd).
    *** Info: If you want to enable that functionality, it's required to create
    *** Info: a new account with special privileges (unless a similar account
    *** Info: already exists). This account is then used to run these special
    *** Info: servers.
    *** Info: Note that creating a new user requires that the current account
    *** Info: have Administrator privileges itself.
    *** Info: No privileged account could be found.
    *** Info: This script plans to use 'cyg_server'.
    *** Info: 'cyg_server' will only be used by registered services.
    *** Query: Do you want to use a different name? (yes/no)
  7. Type no.
    *** Query: Create new privileged user account 'cyg_server'? (yes/no)
  8. Type yes.
    *** Info: Please enter a password for new user cyg_server.  Please be sure
    *** Info: that this password matches the password rules given on your system.
    *** Info: Entering no password will exit the configuration.
    *** Query: Please enter the password:
  9. Type and confirm a secure password for the SSH service account. This account will later fork processes on behalf of the user logged in via SSH. You will see another slew of text (which you should read) and then a blinking prompt.
  10. Open the Windows Firewall and create an exception for port 22/tcp. Thanks to Marcel Hoyer for the command line equivalent:
    netsh advfirewall firewall add rule dir=in action=allow localport=22 protocol=tcp name="Cygwin SSHD"
  11. Execute sc start sshd

Enabling SSH Client Access

Next we will enable SSH access for the git user that will be used to access repositories.

  1. Create a new Windows user account named git with a secure password. That user should have no password expiration. You can also delete any group membership.
  2. In the Cygwin Bash prompt, execute mkpasswd -l -u git >> /etc/passwd
  3. Close the Bash prompt (Ctrl + D) and log off from that machine. The rest of the setup process will be done from your machine.
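The mkpasswd call appends a Cygwin-style passwd entry for the Windows git account, following the standard name:passwd:uid:gid:gecos:home:shell layout. The entry below is illustrative only; the actual SID, numeric IDs and machine name will differ on your server:

```
git:unused:1003:513:U-GIT-SERVER\git,S-1-5-21-...:/home/git:/bin/bash
```

If the home directory or shell turns out wrong, you can edit /etc/passwd by hand.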

Verifying SSH Password Access

  1. On your workstation, open a Cygwin shell.
  2. Execute ssh git@git-server
    you@YOUR-MACHINE ~
    $ ssh git@git-server
    The authenticity of host 'git-server (' can't be established.
    RSA key fingerprint is 13:16:ba:00:d3:ac:d6:f2:bf:36:f4:28:df:fc:d5:26.
    Are you sure you want to continue connecting (yes/no)?
  3. Type yes.
    Warning: Permanently added 'git-server,' (RSA) to the list of known hosts.
    git@git-server's password:
  4. Enter the password for the git account and you will be presented with a prompt from git-server.
    Copying skeleton files.
    These files are for the user to personalise their cygwin experience.
    They will never be overwritten nor automatically updated.
    `./.bashrc' -> `/home/git//.bashrc'
    `./.bash_profile' -> `/home/git//.bash_profile'
    `./.inputrc' -> `/home/git//.inputrc'
    git@git-server ~
  5. Press Ctrl + D or execute logout to end the session and you’ll be back on your machine’s prompt.

Creating Your SSH Identity

The next steps create two SSH identities. The first is required to access the soon-to-be Git server, the second will be used to install and update Gitolite. Execute the following commands on your local machine.

  1. We’re about to generate a private and public key pair for you that will be used to authenticate SSH connections. Execute ssh-user-config
    *** Query: Shall I create an SSH1 RSA identity file for you? (yes/no)
  2. Type no.
    *** Query: Shall I create an SSH2 RSA identity file for you? (yes/no)
  3. Type yes.
    *** Info: Generating /home/agross/.ssh/id_rsa
    Enter passphrase (empty for no passphrase):
  4. Type and confirm a passphrase. You can omit the passphrase if you want, but that makes you less secure should you lose your private key file.
    *** Query: Do you want to use this identity to login to this machine? (yes/no)
  5. Type no. (Unless you want to remotely log in to your workstation with that key. Don't worry, this can be enabled later.)
    *** Query: Shall I create an SSH2 DSA identity file for you? (yes/no)
  6. Type no.
    *** Info: Configuration finished. Have fun!
  7. Repeat the steps above using ssh-keygen -f ~/.ssh/gitolite-admin. We need that key for the installation process.

Making the SSH Server Aware of Your SSH Identity

In order to be able to log in to the Git server as the git user using your gitolite-admin SSH identity, execute ssh-copy-id -i ~/.ssh/gitolite-admin git@git-server. This adds the gitolite-admin public key to the list of authorized keys for the git account.

$ ssh-copy-id -i ~/.ssh/gitolite-admin git@git-server
git@git-server's password:
Now try logging into the machine, with "ssh 'git@git-server'", and check in:


to make sure we haven't added extra keys that you weren't expecting.

To verify that public key authentication works, log in again: this time you should not be asked for git@git-server’s password.

$ ssh -i ~/.ssh/gitolite-admin git@gitserver
Last login: Fri Mar 26 02:04:40 2010 from your-machine

git@git-server ~
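As a convenience (not part of the original setup), you can add an entry to ~/.ssh/config on your workstation to bind the gitolite-admin identity to the host, so you don’t have to pass -i every time. The alias name gitadmin is made up:

```
Host gitadmin
    HostName git-server
    User git
    IdentityFile ~/.ssh/gitolite-admin
```

Afterwards, ssh gitadmin is equivalent to ssh -i ~/.ssh/gitolite-admin git@git-server.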

You are now ready to install Gitolite!

Installing Gitolite

The Gitolite installation process documentation is sufficient to get you started. There's just one more thing that you need to do on Windows.

Upgrades to newer versions of Gitolite are easy and run like the first-time installation. That is, you can just repeat the process outlined below, probably with a new Gitolite version. This installation method requires an SSH login, but we’ve just set things up that way.

  1. Before proceeding, we need to copy the non-admin public SSH key to the server.
    $ scp -i ~/.ssh/gitolite-admin ~/.ssh/
  2. Connect to the Git server by executing ssh -i ~/.ssh/gitolite-admin git@gitserver
  3. We need to prepare your .bashrc file for the installation process to succeed. We'll do it with the Vim editor, which might seem a bit basic at first. Actually, it's very powerful.
    1. On the command prompt, type vim .bashrc to open up the editor.
    2. Depending on whether someone created the .bashrc file before, it might not be empty. Navigate to the bottom by pressing G (uppercase is important).
    3. Press the letter o to enter Vim’s insert mode on a new line (o = "open a line").
    4. Type the following into the text file: PATH=/home/git/bin:$PATH
    5. Press ESC to leave Vim’s insert mode.
    6. Type :wq and hit Return to save the file and close Vim. To dismiss any changes made in the last step and exit Vim, type :q! and hit the Return key.
  4. Back on the command prompt, type source .bashrc to update the PATH environment variable.
  5. Next, clone the Gitolite bits as outlined in the installation documentation.
    git@gitserver ~
    $ git clone git://
    Cloning into 'gitolite'...
    remote: Counting objects: 5360, done.
    remote: Compressing objects: 100% (1806/1806), done.
    remote: Total 5360 (delta 3708), reused 5118 (delta 3498)
    Receiving objects: 100% (5360/5360), 1.79 MiB | 655 KiB/s, done.
    Resolving deltas: 100% (3708/3708), done.
    git@gitserver ~
    $ gitolite/src/gl-system-install
    using default values for EUID=1005:
    /home/git/bin, /home/git/share/gitolite/conf, /home/git/share/gitolite/hooks
    git@gitserver ~
    $ gl-setup -q ~/
    creating gitolite-admin...
    Initialized empty Git repository in /home/git/repositories/gitolite-admin.git/
    creating testing...
    Initialized empty Git repository in /home/git/repositories/testing.git/
    [master (root-commit) 3725b39] gl-setup -q /home/git/
     2 files changed, 8 insertions(+), 0 deletions(-)
     create mode 100644 conf/gitolite.conf
     create mode 100644 keydir/
  6. gl-setup will create the .gitolite.rc config file that needs our attention. We'll use Vim again.
    1. On the command prompt, type vim .gitolite.rc to open up the editor.
    2. Press the letter O (uppercase this time) to enter Vim’s insert mode on a new line before the current line.
    3. Type the following into the text file: $ENV{PATH} = "/usr/local/bin:/bin:/usr/bin";
    4. Press ESC to leave Vim’s insert mode.
    5. Type :w and hit Return to save the file.
    6. Apply any changes to the well-commented configuration you want to make.
      You can navigate using the cursor keys, and enter insert mode by pressing i. Leave insert mode by hitting ESC.
    7. Type :wq and hit Return to save the file and exit Vim. To dismiss any changes made in the last step and exit Vim, type :q! and hit the Return key.
  7. Leave the SSH session by pressing Ctrl + D.

Once the installation is finished, you can clone the gitolite-admin repository to your desktop.

$ git clone git@gitserver:gitolite-admin.git

To add repositories or change permissions on existing repositories, please refer to the Gitolite documentation. The process uses Git itself, which is awesome:

  1. Make changes to your copy of the gitolite-admin repository in your home directory.
  2. Commit changes locally.
  3. Push to the Gitolite server and let it handle the updated configuration.
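The three steps above can be sketched as shell commands. The repository name awesome-app and the user alice are made up, and since no real server is at hand here, a throwaway local repository stands in for your gitolite-admin clone and the final push is left commented out:

```shell
#!/bin/sh
# Simulate a local clone of the gitolite-admin repository. Against a
# real server you would start with:
#   git clone git@gitserver:gitolite-admin.git
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q gitolite-admin
cd gitolite-admin
git config user.name "Your Name"
git config user.email you@example.com
mkdir -p conf

# 1. Make changes to your copy of the gitolite-admin repository:
#    grant the (hypothetical) user alice read/write access to a new repo.
cat >> conf/gitolite.conf <<'EOF'
repo awesome-app
    RW+ = alice
EOF

# 2. Commit the change locally.
git add conf/gitolite.conf
git commit -qm "Add awesome-app, writable by alice"

# 3. Push to the Gitolite server, which creates the repository and
#    applies the permissions (requires the real remote):
# git push origin master

git log --oneline -1
```

Gitolite picks up the pushed configuration on the server side; no manual steps are needed there.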

If you ever want to update or manage the Gitolite server, you can still SSH into the server with

$ ssh -i ~/.ssh/gitolite-admin git@gitserver

Wrapping Up

This guide has been pretty long, longer than I wish it had been. Following Shannon Cornish’s example, I wanted it to be too verbose rather than too short. At least, I appreciated the detail of Shannon’s instructions when I installed Gitosis back in December. I’ve just begun to grasp the power of Unix: leveraging a set of tiny programs to orchestrate a system.

With the setup you have now in place, you can do anything you like – it’s a complete Git server. However, if you want to publish your server on the internet there’s more you will want to take care of. I will go into that in a future post, detailing some of Cygwin’s security features that helped us reduce the number of attacks on our server. Also, I will take a look at how you can enable the Gitweb Repository Browser using the lighttpd web server.

Recipe: Stew (Eintopf)

Posted in Recipes (German) at Saturday, 27 March 2010 14:33 W. Europe Standard Time
  • 1 bunch of soup vegetables (leek, celery, carrots, parsley)
  • 300 g ground meat
  • 400 g potatoes
  • Noodles or spaetzle

Peel the potatoes and cut them into bite-sized pieces. Boil the potato pieces and the noodles separately in salted water. Brown the sliced leek, chopped onions and the meat. Dust with a little flour and stir. Set aside.

Cook the remaining chopped vegetables in a large pot. Once the vegetables are done, add the other ingredients and mix well. Let everything steep, and you’re done.

Machine.Specifications Templates For ReSharper

Posted in BDD | MSpec | ReSharper at Wednesday, 03 March 2010 14:56 W. Europe Standard Time

A couple of days ago Hadi Hariri posted his set of MSpec (Machine.Specifications) templates for ReSharper. ReSharper’s templating system helps you type less repeated code. On top of that, ReSharper templates are much richer when compared to what’s built into Visual Studio. Plus, you edit them with a decent editor instead of hacking XML files.

Like Hadi, I also created a couple of MSpec-specific templates over the course of the last year or so, and I often found that they reduce the amount of text I have to write. ReSharper templates are divided into three categories, with at least one MSpec template in each.


  • foo denotes an editable part of the template
  • | denotes where the cursor will be put upon expansion

File Template

Basically, this is just a new C# file with a single MSpec context in it.

using System;

using Machine.Specifications;

namespace ClassLibrary1
{
  public class When_Context
  {
    Establish context = () => { | };

    Because of = () => { };

    It should_ = () => { };
  }
}

Live Templates (a.k.a. Snippets)

Live Templates provide expansion of keyword-like identifiers. For example, typing cw followed by Tab expands to Console.WriteLine();

  • spec: A new context, similar to what the File Template above creates.
  • est: Establish context = () => { | };
  • bec: Because of = () => { | };
  • it: It should_observation = () => { | };
  • fail: It should_fail = () => Exception.ShouldNotBeNull(); plus a static Exception Exception; field
  • l: () => | ; (only valid for assignments; for example, var x = l followed by Tab expands to var x = () => | ;)
  • ll: () => { | }; (only valid for assignments)

Surround Templates

Surround Templates are useful when you want to wrap a block of code with other code, for example, an if statement (this is one that’s built-in).

Unit testing frameworks almost always have a means to assert that a particular test should fail with a specific exception, for example by marking the test method with the ExpectedExceptionAttribute.

The MSpec way of handling/expecting exceptions is to surround the code in Because with Catch.Exception:

public class When_a_negative_amount_is_deducted
{
  static Exception Exception;
  static Account Account;

  Establish context =
    () => { Account = new Account(); };

  Because of =
    () => { Exception = Catch.Exception(() => Account.Deduct(-1)); };

  It should_fail =
    () => Exception.ShouldNotBeNull();
}

There’s a surround template named Catch.Exception that we can use to wrap the call to Account.Deduct:

  1. Create the context with just the Account field, Establish and the Because. Select the call to Account.Deduct(-1).
    public class When_a_negative_amount_is_deducted
    {
      // Account field and Establish cut for brevity.
      Because of =
        () => { Account.Deduct(-1); };
    }
  2. Press the shortcut for ReSharper | Edit | Surround With Template, select "Catch.Exception" from the list of available templates.
    public class When_a_negative_amount_is_deducted
    {
      // Account field and Establish cut for brevity.
      Because of =
        () => { Exception = Catch.Exception(() => { Account.Deduct(-1); }); };
    }
  3. Navigate out of the Because field, for example by pressing the (End) key. Type fail (a Live Template, see above) and press (Tab).
    public class When_a_negative_amount_is_deducted
    {
      // Account field and Establish cut for brevity.
      Because of =
        () => { Exception = Catch.Exception(() => { Account.Deduct(-1); }); };
    }
  4. Marvel at the amount of code you didn't have to write.
    public class When_a_negative_amount_is_deducted
    {
      // Account field and Establish cut for brevity.
      Because of =
        () => { Exception = Catch.Exception(() => { Account.Deduct(-1); }); };

      It should_fail =
        () => Exception.ShouldNotBeNull();

      static Exception Exception;
    }


How We Practice Continuous Integration And Deployment With MSDeploy

Posted in Build | Deployment | PowerShell at Saturday, 06 February 2010 18:35 W. Europe Standard Time

About two years ago I quit the pain of CruiseControl.NET’s XML hell and started using JetBrains TeamCity for Continuous Integration. While I may be a bit biased here, I have to admit that every JetBrains product I have looked at is absolutely killer and boosts my productivity on a daily basis.

I’ve been a fan of Continuous Integration ever since. I figured the next step in improving our practice was not only to automate building/compiling/testing the application, but also to deploy it, either by clicking a button or based on a schedule. For example, updates to this blog’s theme and the .NET Open Space web sites are automated by clicking the “Run” button on my local TeamCity instance.

Deployment Build Configurations in TeamCity

Compare that button click to what we are forced to do manually for some projects at work. Every time we roll out a new version someone will:

  • Build the deployment package with TeamCity.
  • Download the deployment package, which is usually a ZIP containing the application and database migrations.
  • RDP into the production server.
  • Upload the deployment package.
  • Shut down the web application, Windows services, etc.
  • Overwrite the binaries and configuration files with the current versions from the deployment package.
  • Sometimes we have to match up and edit configuration files by hand.
  • Upgrade the database by executing *.sql files containing migrations in SQL Server Management Studio.
  • Restart web application and Windows services, etc.
  • Hope fervently that everything works.

I believe you can imagine that the manual process outlined has a lot of rope to hang yourself with. An inexperienced developer might simply miss a step. On top of that, implicit knowledge of which files need to be edited increases the bus factor. From a developer and business perspective you don’t want to deal with such risks. Deployment should be well documented, automated and easy to do.

Deployment Over Network Shares Or SSH

When I first looked into how I could do Continuous Deployment there were not many free products available on the Windows platform. In a corporate environment you could push your application to a Windows network share and configure the web application through scripts running within a domain account’s security context.

A different story is deployment over an internet connection. You would want a secure channel like an SSH connection to copy files remotely and execute scripts on the server. This solution requires SSH on the server and tools from the PuTTY suite (e.g. psftp) to make the connection. I had such a setup in place for this blog and the .NET Open Space web sites, but it was rather brittle: psftp doesn’t provide synchronization, integration with Windows services like IIS is not optimal, and you’re somewhat limited in what you can do on the server.


Last year, Microsoft released MSDeploy 1.0, which was updated to version 1.1 last week. MSDeploy is targeted at helping with application deployment and server synchronization. In this article, I will focus exclusively on the deployment aspects. Considering the requirements for deployment, MSDeploy had everything I asked for. MSDeploy either

  • runs as the Web Deployment Agent Service providing administrators unrestricted access to the remote machine through NTLM authentication, or
  • runs as the Web Deployment Handler together with the IIS Management Service to let any user run a specified set of operations remotely.

Both types of connections can be secured using HTTPS, which is great and, in my opinion, a must-have.

I won’t go into the details of how MSDeploy can be set up because these are well documented. What I want to talk about is the concepts we employ to deploy applications.

The Deployment Workflow

With about three months of experience with MSDeploy under our belts, we divide deployments into four phases:

  1. Initial, minimal manual preparation on the target server
  2. Operations to perform in preparation for the update
  3. Updating binaries
  4. Operations to perform after the update has finished

The initial setup to be done in phase 1 is a one-time activity that only occurs if we decide to provision a new server. This involves actions like installing IIS, SQL Server and MSDeploy on the target machine such that we can access it remotely. In phase 1 we also create web applications in IIS.

Further, we put deployments into two categories: Initial deployments and upgrade deployments. These only differ in the operations executed before (phase 2) and after (phase 4) the application files have been copied (phase 3). For example, before we can update binaries on a machine that is running a Windows service, we first have to stop that service in phase 2. After updating the binaries, that service has to be restarted in phase 4.

Over the last couple of weeks, we have identified a set of operations that we are likely to execute in phases 2 and 4.

Operation         | Description                                                                                 | During Initial Deployment | During Upgrade | Before Or After Deployment
Set-WebAppOffline | Shuts down a web application by recycling the Application Pool and creating App_Offline.htm | No                        | Yes            | Before
Set-WebAppOnline  | Deletes App_Offline.htm                                                                     | No                        | Yes            | After
Create-Database   | Creates the initial database                                                                | Yes                       | No             | After
Update-Database   | Runs migrations on an existing database                                                     | No                        | Yes            | After
Import-SampleData | Imports sample data to an existing database for QA instances                                | Yes                       | No             | After
Install-Service   | Installs a Windows service, for example one that runs nightly reports                       | Yes                       | Yes            | After
Uninstall-Service | Stops and uninstalls a Windows service                                                      | No                        | Yes            | Before

It’s no coincidence that the operations read like PowerShell Verb-Noun cmdlets. In fact, we run operations with PowerShell on the server side.

The deployment directory that will be mirrored between the build server and the production machine looks like the one depicted in the image to the right.

The root directory contains a PowerShell script that implements the operations above as PowerShell functions. These might call other scripts inside the deployment directory. For example, we invoke Tarantino (created by Eric Hexter and company) to have our database migrations done.


$scriptPath = Split-Path -parent $MyInvocation.MyCommand.Definition

# Change into the deployment root directory.
Set-Location $scriptPath

function Create-Database()
{
    & ".\SQL\create-database.cmd" /do_not_ask_for_permission_to_delete_database
}

function Import-SampleData()
{
    & ".\SQL\import-sample-data.cmd"
}

function Upgrade-Database()
{
    & ".\SQL\update-database.cmd"
}

function Install-Service()
{
    & ".\Reporting\deploy.ps1" Install-Service
    & ".\Reporting\deploy.ps1" Run-Service
}

function Uninstall-Service()
{
    & ".\Reporting\deploy.ps1" Uninstall-Service
}

function Set-WebAppOffline()
{
    Copy-Item -Path "Web\App_Offline.htm.deploy" -Destination "Web\App_Offline.htm" -Force
}

function Set-WebAppOnline()
{
    Remove-Item -Path "Web\App_Offline.htm" -Force
}

# Runs all command line arguments as functions.
$args | ForEach-Object { & $_ }

# Hack: MSDeploy would otherwise run PowerShell endlessly.
Get-Process -Name "powershell" | Stop-Process

The last line is actually a hack, because PowerShell 2.0 hangs after the script has finished.

Rake And Configatron

As you might remember from last week’s blog post we use Rake and YAML in our build scripts. Rake and YAML (with Configatron) allow us to

  • build the application,
  • generate configuration files for the target machine, thus eliminating the need to make edits, and
  • formulate MSDeploy calls in a legible and comprehensible way.
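
The second bullet, generating configuration files rather than editing them on the target machine, can be sketched with Ruby’s standard-library ERB and YAML. The template and setting names below are made up for illustration; our real settings live in the YAML files that Configatron parses.

```ruby
require "erb"
require "yaml"

# Hypothetical per-environment settings, standing in for the YAML that
# Configatron loads in our build scripts.
settings = YAML.load(<<YAML)
address: BLUEPRINT-X86
user: deployer
YAML

# A hypothetical config-file template for the target machine.
template = ERB.new(%q{<connection address="<%= settings["address"] %>" user="<%= settings["user"] %>" />})
puts template.result(binding)
# => <connection address="BLUEPRINT-X86" user="deployer" />
```

Because the values come from one YAML file per environment, nobody has to touch configuration files on the server after the sync.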

Regarding the last point, please consider the following MSDeploy command line that synchronizes a local directory with a remote directory (think phase 3). PowerShell operations will be performed before (-preSync, phase 2) and after (-postSyncOnSuccess, phase 4) the sync operation.

"tools/MSDeploy/msdeploy.exe" -verb:sync -postSyncOnSuccess:runCommand="powershell.exe -NoLogo -NoProfile -NonInteractive -ExecutionPolicy Unrestricted -Command C:/Crimson/deploy.ps1 Create-Database Import-SampleData Install-Service Set-WebAppOnline ",waitInterval=60000 -allowUntrusted -skip:objectName=filePath,skipAction=Delete,absolutePath=App_Offline\.htm$ -skip:objectName=filePath,skipAction=Delete,absolutePath=\\Logs\\.*\.txt$ -skip:objectName=dirPath,skipAction=Delete,absolutePath=\\Logs.*$ -preSync:runCommand="powershell.exe -NoLogo -NoProfile -NonInteractive -ExecutionPolicy Unrestricted -Command C:/Crimson/deploy.ps1 Set-WebAppOffline Uninstall-Service ",waitInterval=60000 -usechecksum -source:dirPath="build/for-deployment" -dest:wmsvc=BLUEPRINT-X86,username=deployer,password=deployer,dirPath=C:/Crimson

The command line is convoluted and hard to read, isn’t it? Now please consider the following Rake snippet that was used to generate it.

remote = Dictionary[]
if configatron.deployment.connection.exists?(:wmsvc) and configatron.deployment.connection.wmsvc
    remote[:wmsvc] = configatron.deployment.connection.address
    remote[:username] = configatron.deployment.connection.user
    remote[:password] = configatron.deployment.connection.password
else
    remote[:computerName] = configatron.deployment.connection.address
end

preSyncCommand = "exit"
postSyncCommand = "exit"

if configatron.deployment.operations.before_deployment.any?
    preSyncCommand = "\"powershell.exe -NoLogo -NoProfile -NonInteractive -ExecutionPolicy Unrestricted -Command #{"deploy.ps1".in(configatron.deployment.location)} #{configatron.deployment.operations.before_deployment.join(" ")} \""
end

if configatron.deployment.operations.after_deployment.any?
    postSyncCommand = "\"powershell.exe -NoLogo -NoProfile -NonInteractive -ExecutionPolicy Unrestricted -Command #{"deploy.ps1".in(configatron.deployment.location)} #{configatron.deployment.operations.after_deployment.join(" ")} \""
end

MSDeploy.run \
    :tool => configatron.tools.msdeploy,
    :log_file => configatron.deployment.logfile,
    :verb => :sync,
    :allowUntrusted => true,
    :source => Dictionary[:dirPath, configatron.dir.for_deployment.to_absolute.escape],
    :dest => remote.merge({
        :dirPath => configatron.deployment.location
    }),
    :usechecksum => true,
    :skip => [
        Dictionary[
            :objectName, "filePath",
            :skipAction, "Delete",
            :absolutePath, "App_Offline\\.htm$"
        ],
        Dictionary[
            :objectName, "filePath",
            :skipAction, "Delete",
            :absolutePath, "\\\\Logs\\\\.*\\.txt$"
        ],
        Dictionary[
            :objectName, "dirPath",
            :skipAction, "Delete",
            :absolutePath, "\\\\Logs.*$"
        ]
    ],
    :preSync => Dictionary[
        :runCommand, preSyncCommand,
        :waitInterval, 60000
    ],
    :postSyncOnSuccess => Dictionary[
        :runCommand, postSyncCommand,
        :waitInterval, 60000
    ]

It’s a small Rake helper class that transforms a Hash into a MSDeploy command line. That helper also includes console redirection that sends deployment output both to the screen and to a log file. The log file is also used to find errors that may occur during deployment (see below).
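The core idea behind that helper is simply flattening a Hash into MSDeploy’s `-name:key=value,key=value` argument syntax. The following is a minimal sketch of that idea only; the method name and option handling are made up for illustration and are not the helper’s real API.

```ruby
# Sketch: turn a Hash of MSDeploy options into a command-line argument string.
# "msdeploy_args" is a hypothetical name, not the actual helper's API.
def msdeploy_args(options)
  options.flat_map do |name, value|
    case value
    when true
      "-#{name}"                                  # bare switch, e.g. -allowUntrusted
    when Hash
      "-#{name}:" + value.map { |k, v| "#{k}=#{v}" }.join(",")
    when Array                                    # repeated arguments, e.g. -skip
      value.map { |entry| "-#{name}:" + entry.map { |k, v| "#{k}=#{v}" }.join(",") }
    else
      "-#{name}:#{value}"
    end
  end.join(" ")
end

args = msdeploy_args(
  :verb => :sync,
  :allowUntrusted => true,
  :source => { :dirPath => "build/for-deployment" },
  :skip => [{ :objectName => "dirPath", :skipAction => "Delete" }]
)
puts args
# => -verb:sync -allowUntrusted -source:dirPath=build/for-deployment -skip:objectName=dirPath,skipAction=Delete
```

Real values would of course need quoting and escaping (the actual helper escapes paths and regular expressions), but the Hash-in, arguments-out transformation is the gist of it.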

For your convenience, these are the relevant parts of the configuration, expressed in YAML and parsed with Configatron.

deployment:
    location: C:/Crimson
    operations:
        before_deployment: [Set-WebAppOffline, Uninstall-Service]
        after_deployment: [Create-Database, Import-SampleData, Install-Service, Set-WebAppOnline]
    connection:
        wmsvc: true
        address: BLUEPRINT-X86
        user: deployer
        password: deployer

What I Haven’t Talked About

What’s missing? An idea that got me interested was to partition the application into roles like database server, reporting server, web application server, etc. We mostly do single-server deployments, so I haven’t built that yet (YAGNI). Eric Hexter talks about application roles in a recent blog entry.

Another aspect where MSDeploy unfortunately doesn’t shine is error handling. Since we run important operations through the runCommand provider (used by -preSync and -postSyncOnSuccess), we want the deployment to fail when something bad happens. Unfortunately MSDeploy, to this day, ignores errorlevels that indicate errors. So we’re back to console redirection and string parsing. This functionality is already built into my MSDeploy helper for Rake, so you can rely on it to a certain degree. Still, I recommend manually scanning the log files for errors, at least for the first couple of automated deployments.
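
The string-parsing fallback amounts to something like the following sketch. The error patterns here are assumptions for illustration; they are not necessarily what my helper matches.

```ruby
# Assumed failure patterns; the real helper's patterns may differ.
ERROR_PATTERN = /\b(error|exception|failed)\b/i

# Return the log lines that look like failures, since MSDeploy swallows
# the errorlevel of commands run through the runCommand provider.
def deployment_errors(log)
  log.each_line.select { |line| line =~ ERROR_PATTERN }
end

log = "Info: syncing files\nError: create-database.cmd exited with code 1\n"
puts deployment_errors(log)
# => Error: create-database.cmd exited with code 1
```

A Rake task can then fail the build whenever `deployment_errors` returns a non-empty list for the captured MSDeploy output.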

Since we’re leveraging PowerShell on the server, why should we have to build the PowerShell script handling operations ourselves? I can imagine deploying the PowerShell-based PSake build tool together with a PSake build script whose tasks are the operations. This would allow for common build-script usage scenarios like task inspection (administrators will want that), task dependencies, error handling and so on.

Wrapping Up

In this rather long post, I hope I have given you an idea of how MSDeploy can be used to deploy your applications automatically. Over the last couple of weeks, MSDeploy in combination with our Rakefiles has helped us tremendously in deploying an application that is currently under development: delivering current versions to the customer has gone from a pain to a breeze.