Page 1 of 1 in the Build category

How We Practice Continuous Integration And Deployment With MSDeploy

Posted in Build | Deployment | PowerShell at Saturday, February 06, 2010 6:35 PM W. Europe Standard Time

About two years ago I quit the pain of CruiseControl.NET’s XML hell and started using JetBrains TeamCity for Continuous Integration. While being a bit biased here, I have to admit that every JetBrains product I looked at is absolutely killer and continues to provide productivity on a daily basis.

I’ve been a fan of Continuous Integration ever since. I figured the next step in improving our practice was not only to automate building/compiling/testing the application, but also deploy it either by clicking a button or based on a schedule. For example, updates to this blog’s theme and the .NET Open Space web sites are automated by clicking the “Run” button on my local TeamCity instance.

Deployment Build Configurations in TeamCity

Compare that button click to what we are forced to do manually for some projects at work. Every time we roll out a new version someone will:

  • Build the deployment package with TeamCity.
  • Download the deployment package, which is usually a ZIP containing the application and database migrations.
  • RDP into the production server.
  • Upload the deployment package.
  • Shut down the web application, Windows services, etc.
  • Overwrite the binaries and configuration files with the current versions from the deployment package.
  • Sometimes we have to match up and edit configuration files by hand.
  • Upgrade the database by executing *.sql files containing migrations in SQL Server Management Studio.
  • Restart web application and Windows services, etc.
  • Hope fervently that everything works.

I believe you can imagine that the manual process outlined has a lot of rope to hang yourself with. An inexperienced developer might simply miss a step. On top of that, implicit knowledge of which files need to be edited increases the bus factor. From a developer and business perspective you don’t want to deal with such risks. Deployment should be well documented, automated and easy to do.

Deployment Over Network Shares Or SSH

When I first looked into how I could do Continuous Deployment there were not many free products available on the Windows platform. In a corporate environment you could push your application to a Windows network share and configure the web application through scripts running within a domain account’s security context.

Deployment over an internet connection is a different story. You want a secure channel, like an SSH connection, to copy files remotely and execute scripts on the server. This solution requires SSH on the server and tools from the PuTTY suite (e.g. psftp) to make the connection. I had such a setup in place for this blog and the .NET Open Space web sites, but it was rather brittle: psftp doesn’t provide synchronization, integration with Windows services like IIS is not optimal, and you’re somewhat limited in what you can do on the server.

MSDeploy

Last year, Microsoft released MSDeploy 1.0, which was updated to version 1.1 last week. MSDeploy aims to help with application deployment and server synchronization. In this article, I will focus exclusively on the deployment aspects. Considering my requirements for deployment, MSDeploy had everything I asked for. MSDeploy either

  • runs as the Web Deployment Agent Service providing administrators unrestricted access to the remote machine through NTLM authentication, or
  • runs as the Web Deployment Handler together with the IIS Management Service to let any user run a specified set of operations remotely.

Both types of connections can be secured using HTTPS, which is great and, in my opinion, a must-have.

I won’t go into the details of how MSDeploy can be set up because these are well documented. What I want to talk about is the concepts we employ to deploy applications.

The Deployment Workflow

With about three months of experience with MSDeploy under our belts, we divide deployments into four phases:

  1. Initial, minimal manual preparation on the target server
  2. Operations to perform in preparation for the update
  3. Updating binaries
  4. Operations to perform after the update has finished

The initial setup to be done in phase 1 is a one-time activity that only occurs if we decide to provision a new server. This involves actions like installing IIS, SQL Server and MSDeploy on the target machine such that we can access it remotely. In phase 1 we also create web applications in IIS.

Further, we put deployments into two categories: Initial deployments and upgrade deployments. These only differ in the operations executed before (phase 2) and after (phase 4) the application files have been copied (phase 3). For example, before we can update binaries on a machine that is running a Windows service, we first have to stop that service in phase 2. After updating the binaries, that service has to be restarted in phase 4.
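The phase ordering above maps naturally onto chained build tasks. As a hypothetical illustration (not our actual Rakefile), an upgrade deployment could be expressed like this, with each phase depending on the previous one:

```ruby
# Hypothetical sketch: the deployment phases as chained Rake tasks.
require 'rake'
include Rake::DSL

PHASES_RUN = []

# Phase 2: operations before the update (stop services, take the web app offline).
task :before_update do
  PHASES_RUN << 'before_update'
end

# Phase 3: mirror the binaries onto the target machine (the MSDeploy sync).
task :update_binaries => :before_update do
  PHASES_RUN << 'update_binaries'
end

# Phase 4: operations after the update (migrate the database, restart services).
task :deploy => :update_binaries do
  PHASES_RUN << 'after_update'
end

Rake::Task[:deploy].invoke
```

Invoking the final task pulls in its prerequisites, so the phases always run in the correct order.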

Over the last couple of weeks, we have identified a set of operations that we are likely to execute in phases 2 and 4.

Operation         | Description                                                                                 | During Initial Deployment | During Upgrade | Before Or After Deployment
Set-WebAppOffline | Shuts down a web application by recycling the Application Pool and creating App_Offline.htm | No                        | Yes            | Before
Set-WebAppOnline  | Deletes App_Offline.htm                                                                     | No                        | Yes            | After
Create-Database   | Creates the initial database                                                                | Yes                       | No             | After
Update-Database   | Runs migrations on an existing database                                                     | No                        | Yes            | After
Import-SampleData | Imports sample data to an existing database for QA instances                                | Yes                       | No             | After
Install-Service   | Installs a Windows service, for example one that runs nightly reports                       | Yes                       | Yes            | After
Uninstall-Service | Stops and uninstalls a Windows service                                                      | No                        | Yes            | Before

It’s no coincidence that the operations read like PowerShell Verb-Noun cmdlets. In fact, we run operations with PowerShell on the server side.

The deployment directory that will be mirrored between the build server and the production machine looks like the one depicted in the image to the right.

The root directory contains a PowerShell script that implements the operations above as PowerShell functions. These might call other scripts inside the deployment directory. For example, we invoke Tarantino (created by Eric Hexter and company) to have our database migrations done.

 

$scriptPath = Split-Path -parent $MyInvocation.MyCommand.Definition

# Change into the deployment root directory.
Set-Location $scriptPath

function Create-Database()
{
    & ".\SQL\create-database.cmd" /do_not_ask_for_permission_to_delete_database
}

function Import-SampleData()
{
    & ".\SQL\import-sample-data.cmd"
}

function Update-Database()
{
    & ".\SQL\update-database.cmd"
}

function Install-Service()
{
    & ".\Reporting\deploy.ps1" Install-Service
    & ".\Reporting\deploy.ps1" Run-Service
}

function Uninstall-Service()
{
    & ".\Reporting\deploy.ps1" Uninstall-Service
}

function Set-WebAppOffline()
{
    Copy-Item -Path "Web\App_Offline.htm.deploy" -Destination "Web\App_Offline.htm" -Force
}

function Set-WebAppOnline()
{
    Remove-Item -Path "Web\App_Offline.htm" -Force
}

# Runs all command line arguments as functions.
$args | ForEach-Object { & $_ }

# Hack, MSDeploy would run PowerShell endlessly.
Get-Process -Name "powershell" | Stop-Process

The last line is actually a hack, because PowerShell 2.0 hangs after the script has finished.

Rake And Configatron

As you might remember from last week’s blog post we use Rake and YAML in our build scripts. Rake and YAML (with Configatron) allow us to

  • build the application,
  • generate configuration files for the target machine, thus eliminating the need to make edits, and
  • formulate MSDeploy calls in a legible and comprehensible way.

Regarding the last point, please consider the following MSDeploy command line that synchronizes a local directory with a remote directory (think phase 3). PowerShell operations are performed before (-preSync, phase 2) and after the sync operation (-postSyncOnSuccess, phase 4).

"tools/MSDeploy/msdeploy.exe" -verb:sync -postSyncOnSuccess:runCommand="powershell.exe -NoLogo -NoProfile -NonInteractive -ExecutionPolicy Unrestricted -Command C:/Crimson/deploy.ps1 Create-Database Import-SampleData Install-Service Set-WebAppOnline ",waitInterval=60000 -allowUntrusted -skip:objectName=filePath,skipAction=Delete,absolutePath=App_Offline\.htm$ -skip:objectName=filePath,skipAction=Delete,absolutePath=\\Logs\\.*\.txt$ -skip:objectName=dirPath,skipAction=Delete,absolutePath=\\Logs.*$ -preSync:runCommand="powershell.exe -NoLogo -NoProfile -NonInteractive -ExecutionPolicy Unrestricted -Command C:/Crimson/deploy.ps1 Set-WebAppOffline Uninstall-Service ",waitInterval=60000 -usechecksum -source:dirPath="build/for-deployment" -dest:wmsvc=BLUEPRINT-X86,username=deployer,password=deployer,dirPath=C:/Crimson

The command line is convoluted and overly complex, isn’t it? Now please consider the following Rake snippet that was used to generate the command line above.

remote = Dictionary[]
    
if configatron.deployment.connection.exists?(:wmsvc) and configatron.deployment.connection.wmsvc
    remote[:wmsvc] = configatron.deployment.connection.address
    remote[:username] = configatron.deployment.connection.user
    remote[:password] = configatron.deployment.connection.password
else
    remote[:computerName] = configatron.deployment.connection.address
end

preSyncCommand = "exit"
postSyncCommand = "exit"

if configatron.deployment.operations.before_deployment.any?
    preSyncCommand = "\"powershell.exe -NoLogo -NoProfile -NonInteractive -ExecutionPolicy Unrestricted -Command #{"deploy.ps1".in(configatron.deployment.location)} #{configatron.deployment.operations.before_deployment.join(" ")} \""
end

if configatron.deployment.operations.after_deployment.any?
    postSyncCommand = "\"powershell.exe -NoLogo -NoProfile -NonInteractive -ExecutionPolicy Unrestricted -Command #{"deploy.ps1".in(configatron.deployment.location)} #{configatron.deployment.operations.after_deployment.join(" ")} \""
end

MSDeploy.run \
    :tool => configatron.tools.msdeploy,
    :log_file => configatron.deployment.logfile,
    :verb => :sync,
    :allowUntrusted => true,
    :source => Dictionary[:dirPath, configatron.dir.for_deployment.to_absolute.escape],
    :dest => remote.merge({
        :dirPath => configatron.deployment.location
        }),
    :usechecksum => true,
    :skip =>[
        Dictionary[
            :objectName, "filePath",
            :skipAction, "Delete",
            :absolutePath, "App_Offline\\.htm$"
        ],
        Dictionary[
            :objectName, "filePath",
            :skipAction, "Delete",
            :absolutePath, "\\\\Logs\\\\.*\\.txt$"
        ],
        Dictionary[
            :objectName, "dirPath",
            :skipAction, "Delete",
            :absolutePath, "\\\\Logs.*$"
        ]
    ],
    :preSync => Dictionary[
        :runCommand, preSyncCommand,
        :waitInterval, 60000
    ],
    :postSyncOnSuccess => Dictionary[
        :runCommand, postSyncCommand,
        :waitInterval, 60000
    ]

It’s a small Rake helper class that transforms a Hash into a MSDeploy command line. That helper also includes console redirection that sends deployment output both to the screen and to a log file. The log file is also used to find errors that may occur during deployment (see below).
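To give you an idea of what such a helper does, here is a heavily stripped-down sketch. The class name, the quoting rules, and the handling of nested options are my assumptions for illustration; the real helper also takes care of logging and console redirection:

```ruby
# Hypothetical sketch: turn a Hash of options into an MSDeploy argument string.
class MSDeployArgs
  def self.build(options)
    options.map { |key, value|
      case value
      when true then "-#{key}"                          # flag, e.g. -allowUntrusted
      when Hash then "-#{key}:#{pairs(value)}"          # settings, e.g. -dest:wmsvc=...,dirPath=...
      when Array then value.map { |v| "-#{key}:#{pairs(v)}" }
      else "-#{key}:#{value}"                           # simple value, e.g. -verb:sync
      end
    }.flatten.join(' ')
  end

  # Render a Hash as comma-separated key=value pairs.
  def self.pairs(hash)
    hash.map { |k, v| "#{k}=#{v}" }.join(',')
  end
end

puts MSDeployArgs.build(
  :verb => :sync,
  :allowUntrusted => true,
  :source => { :dirPath => 'build/for-deployment' },
  :dest => { :wmsvc => 'BLUEPRINT-X86', :dirPath => 'C:/Crimson' }
)
# -verb:sync -allowUntrusted -source:dirPath=build/for-deployment -dest:wmsvc=BLUEPRINT-X86,dirPath=C:/Crimson
```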

For your convenience, these are the relevant parts of the configuration, expressed in YAML and parsed with Configatron.

some_config:
  deployment:
    location: C:/Crimson
    operations:
      before_deployment: [Set-WebAppOffline, Uninstall-Service]
      after_deployment: [Create-Database, Import-SampleData, Install-Service, Set-WebAppOnline]
    connection:
      wmsvc: true
      address: BLUEPRINT-X86
      user: deployer
      password: deployer

What I Haven’t Talked About

What’s missing? An idea that got me interested was to partition the application into roles like database server, reporting server, web application server, etc. We mostly do single-server deployments, so I haven’t built that yet (YAGNI). Eric Hexter talks about application roles in a recent blog entry.

Another aspect where MSDeploy unfortunately doesn’t shine is error handling. Since we run important operations using the runCommand provider (used by -preSync and -postSyncOnSuccess) we would want to fail when something bad happens. Unfortunately MSDeploy, to this day, ignores errorlevels that indicate errors. So we’re back to console redirection and string parsing. This functionality is already in my MSDeploy helper for Rake, so you can rely on it to a certain degree. Manually scanning log files for errors, at least for the first couple of automated deployments is recommended, though.
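Such a log scan can be as simple as matching the redirected output against a few error patterns. The patterns and the function name below are hypothetical simplifications; the actual patterns in our Rake helper may differ:

```ruby
# Hypothetical sketch: fail the build if the MSDeploy log mentions errors.
ERROR_PATTERNS = [/error/i, /exception/i, /failed/i]

def assert_deployment_succeeded(log_file)
  # Collect every log line that matches one of the error patterns.
  offending = File.readlines(log_file).select do |line|
    ERROR_PATTERNS.any? { |pattern| line =~ pattern }
  end

  unless offending.empty?
    raise "Deployment log #{log_file} contains errors:\n#{offending.join}"
  end
end
```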

Since we’re leveraging PowerShell on the server, why should we have to build the PowerShell script handling operations ourselves? I can imagine deploying the PowerShell-based PSake build tool and a PSake build script containing operations turned build targets. This will allow for common build script usage scenarios like task inspection (administrators would want that), having task dependencies, error handling and so on.

Wrapping Up

In this rather long post, I hope I was able to show you how MSDeploy can be used to deploy your applications automatically. Over the last couple of weeks, MSDeploy in combination with our Rakefiles has helped us tremendously in deploying an application that’s currently under development: delivering current versions to the customer has gone from a pain to a breeze.

Rake, YAML and Inherited Build Configuration

Posted in Build | Ruby at Saturday, January 30, 2010 3:03 PM W. Europe Standard Time

We’ve been using Rake for quite a while at work. Sometime last year I sat down and converted our ~30 KB NAnt build scripts to Rake, a light-weight Ruby build framework with low friction and no XML. Since then I have written a bunch of Rake tasks to support our builds (we use TeamCity).

I started a bit out of the blue, because frameworks like Albacore didn’t exist back then and other .NET-specific task collections didn’t fit our needs or simply were inconvenient to use.

Without prior Ruby experience it was also a great opportunity to learn Ruby and give the language and design concepts a spin. I have to admit, I like the fluent style of Ruby, it’s almost like the language tries to stay out of your way.

YAML

Soon after I started building the first Rake script I needed to configure the build for different environments. Like: in production, we have to use another database server. You want to externalize such information into a configuration file. Having database connection strings hard coded in your application’s App.config will make tailoring the application for deployment tedious and error-prone. I’ve been there, and I don’t recommend it!

I came across YAML, which is an intuitive notation for configuration files (among other things):

development:
  database:
    server: (local)
    name: Indigo

qa:
  database:
    server: DB
    name: Indigo_QA

production:
  database:
    server: DB
    name: Indigo_Production

Is that legible? I think so!

We use the configatron Ruby Gem to read such files and dereference configuration information in the build script.

configatron.configure_from_yaml 'properties.yml', :hash => 'production'

puts configatron.database.server
# => 'DB'

puts configatron.database.name
# => 'Indigo_Production'

YAML’s “Inheritance”

Another useful aspect of YAML is that it supports a simple form of inheritance by merging hashes.

qa: &customer_config
  database:
    server: DB
    name: Indigo_QA

production:
  <<: *customer_config
  database:
    name: Indigo_Production

Unfortunately this kind of inheritance has some subtleties and doesn’t work as you would expect. I read the snippet above as: the production configuration inherits all values from qa and overrides database.name. Let's see:

configatron.configure_from_yaml 'properties.yml', :hash => 'production'

puts configatron.database.server.nil?
# => true
# Huh? That should be "DB".

puts configatron.database.name
# => 'Indigo_Production'

There is actually an article describing the problem with merging hashes in YAML files, which I found after our build broke in interesting ways due to loading an incomplete configuration. The proposed solution is either to duplicate all configuration information between qa and production, or to use more anchors (&foo) and merge references (<<: *foo). I think both clutter a YAML file unnecessarily.
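The underlying problem is that YAML’s merge key is shallow: a section’s own key replaces the inherited value wholesale instead of being merged into it. A quick demonstration in Ruby (on newer Psych versions, aliases must be enabled explicitly):

```ruby
require 'yaml'

yaml = <<-YML
qa: &customer_config
  database:
    server: DB
    name: Indigo_QA

production:
  <<: *customer_config
  database:
    name: Indigo_Production
YML

# Aliases are disabled by default in newer Psych versions.
config = YAML.safe_load(yaml, aliases: true)

# The merge is shallow: production's own "database" key replaces the
# inherited hash wholesale instead of being merged into it.
config['production']['database']
# => {"name"=>"Indigo_Production"} -- the "server" key is gone
```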

Custom Inheritance

Having identified why composition doesn’t work as one would expect, let’s see what we can do about it.

I went with a solution based on the convention that inheritance should be defined using a default_to configuration entry.

qa:
  database:
    server: DB
    name: Indigo_QA

production:
  default_to: qa
  database:
    name: Indigo_Production

The default_to entry in the production section refers to another section that the configuration will be inherited from. You could also build inheritance chains like production → qa → default and additionally use ordinary YAML hash merges.

Instead of initializing configatron from the YAML file, we’ll preprocess the deserialized YAML (basically, a Hash), evaluate the configuration inheritance chain and then pass the Hash to configatron:

yaml = Configuration.load_yaml 'properties.yml', :hash => 'production', :inherit => :default_to
configatron.configure_from_hash yaml

puts configatron.database.server.nil?
# => false

puts configatron.database.server
# => 'DB'

puts configatron.database.name
# => 'Indigo_Production'

The code for the Configuration class that accounts for evaluating the inheritance chain is up on GitHub.
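The core of that evaluation boils down to a recursive deep merge along the default_to chain. The following is a minimal sketch of the idea, with names and API chosen for illustration; they differ from the actual code on GitHub:

```ruby
require 'yaml'

# Hypothetical sketch of the inheritance evaluation, not the actual GitHub code.
module Configuration
  # Deep-merge +child+ over +parent+: nested hashes are merged
  # recursively, scalar values in the child win.
  def self.deep_merge(parent, child)
    parent.merge(child) do |_key, old, new|
      old.is_a?(Hash) && new.is_a?(Hash) ? deep_merge(old, new) : new
    end
  end

  # Resolve a named section, following its inheritance entry (e.g. default_to).
  def self.resolve(sections, name, inherit_key)
    section = sections.fetch(name.to_s)
    parent_name = section.delete(inherit_key.to_s)
    return section unless parent_name
    deep_merge(resolve(sections, parent_name, inherit_key), section)
  end
end

sections = YAML.safe_load(<<-YML)
qa:
  database:
    server: DB
    name: Indigo_QA

production:
  default_to: qa
  database:
    name: Indigo_Production
YML

config = Configuration.resolve(sections, 'production', :default_to)
# => the server is inherited from qa, the name is overridden
```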

How To Set Up Secure LDAP Authentication with TeamCity

Posted in Build | Networking at Monday, February 02, 2009 5:35 PM W. Europe Standard Time

Last week we got a TeamCity Enterprise license at work. After using this great product for about a year we found ourselves running out of available build configurations. (There are 20 in the fully-functional free Professional edition which should be enough to evaluate the product. I recommend giving it a try.) There are a couple of advanced features in the TeamCity Enterprise edition we were looking forward to, for example authentication against a LDAP directory, an Active Directory in our case (AD = LDAP + DNS + a bunch of other stuff).

TeamCity uses LDAP to determine if a user should be able to access the TeamCity web interface. It does that through the LDAP bind operation, asking LDAP to validate the username and password combination entered at the login page.

TeamCity Login Dialog

After you hit the login button, TeamCity connects to the LDAP server, essentially taking the text entered in the dialog above and passing it to the LDAP bind operation. If the server accepts the username/password combination, access is granted. Some things to take into consideration when using LDAP authentication are:

  • TeamCity does not authenticate against an organizational unit in Active Directory (X.500 address). It just determines if the user (authenticated by username and password) exists anywhere in the directory. You can vote on this ticket to get that fixed.
  • Because TeamCity does not try to get additional information on the user’s groups memberships it is currently (as of TeamCity 4.0) not possible to automatically assign TeamCity roles to an LDAP user.
  • If you use the default LDAP configuration settings as shown in the TeamCity documentation, the LDAP connection will be unsecured, making the username and password vulnerable to eavesdropping by anyone who knows how to use a packet sniffer.
  • Specific to Windows: You do not need an Active Directory infrastructure with a Domain Controller in place. Windows also supports Active Directory Application Mode (ADAM) on Windows Server 2003, renamed to Active Directory Lightweight Directory Services (AD LDS) in Windows Server 2008.

Given the points above, what are your options for securing the LDAP connection? You could change the authentication scheme to not use “simple” LDAP authentication, but choose from a variety of SASL options instead. I didn’t go down that road, because when I started to configure LDAP for TeamCity I knew basically nothing about either LDAP or SASL.

Using LDAPS (LDAP over SSL), which is also supported by Windows servers running some AD mode, appeared to be a viable option to enforce secure communication between TeamCity and the LDAP server.

Installing The LDAP Server

Setting Up LDAPS with Active Directory (Domain Controller mode)

There’s not much set up needed with this configuration. When you install Active Directory in Domain Controller mode you should also get an instance of Certificate Services that will create a self-signed certificate for your domain controller. This certificate will be used for LDAPS connections to the directory server, which is typically the domain controller.

As an aside, I’m no expert in setting up AD; please refer to your network administrator.

Setting Up LDAPS with Active Directory Application Mode (ADAM) or Active Directory Lightweight Directory Services (AD LDS)

As noted above, this setup is supported on any Windows Server and does not require the full-blown “Domain Controller” version of Active Directory. ADAM/LDS supports user authentication either against the ADAM/LDS instance (users created in the directory) or against local Windows accounts (through a user proxy, see below).

Installing ADAM or AD LDS

Installing ADAM/LDS differs depending on which Windows Server version you have. I did it with Windows Server 2003:

  1. Navigate to the Control Panel and open up the Software control panel applet, appwiz.cpl
  2. Click “Add or remove Windows features”
  3. Select Active Directory Services, click on the Details… button and select Active Directory Application Mode. Close the window.
  4. Scroll down to Certificate Services entry and check it. (IIS will also be installed as part of Certificate Services to support web registrations.)
  5. Click Next.
  6. On the next dialog, you will be asked what type of Root Certificate Authority (CA) to install. Select “stand-alone“ CA and also check the “Create key pair” option.
  7. The next dialogs allow you to select different options for the Root CA keys and the CA itself. I went with the defaults.
  8. Certificate Services and ADAM will be installed.
  9. Under Programs in the Start Menu there will be a new folder named “ADAM”. Click on the “Create ADAM instance” link.
  10. The ADAM wizard pops up, click Next.
  11. Choose “Create new unique instance” and click Next.
  12. Enter the name of the ADAM instance. I chose “TeamCity”, because we’re using ADAM to authenticate TeamCity users. Click Next.
  13. Write down the ports that are presented in the next step. The default LDAP port is 389, and the port for LDAPS is 636. Click Next.
  14. In the next step, choose to create a directory partition. Mine is called CN=TeamCity, DC=test, DC=local. Click Next until you reach the “Import LDIF files” dialog.
  15. Import at least the MS-User.ldf and MS-UserProxy.ldf schemas to enable the creation of local directory users and user proxies for Windows accounts.
  16. Click Next and wait for ADAM to be configured.

Setting Up ADAM or AD LDS to Accept SSL Connections

There are two good tutorials that I used to enable SSL on ADAM, so I won’t reiterate them here. I suppose the process of enabling SSL on LDS is similar.

User Management

You now have a LDAP server running that will serve requests for the LDAP and LDAPS protocols. Next, you would have to add users to the directory, which could either be

  • Local directory users: user and password stored in the directory; used with “simple” bindings, or
  • Windows users: the password is stored by the local Windows account manager or in a full-blown AD domain; used with “proxied” bindings (from the outside, these also appear as “simple” bindings).

Windows users require a user proxy in the directory. The link between the proxy and the Windows account is established through the account’s Security Identifier (SID), which must be supplied when the proxy is created. Setting up user proxies is a bit complicated and well worth another post.

Please note that by default, authenticating users through their respective proxies (proxied binding) requires a secure connection, unless you explicitly disable this requirement. Unfortunately, the attribute to change is not given in the linked article: it is msDS-Other-Settings. You can require or relax security for simple and proxied bindings by setting RequireSecureSimpleBind (defaults to 0) and RequireSecureProxyBind (defaults to 1) to either 0 or 1.

The net result of the default ADAM configuration (RequireSecureProxyBind=1) together with the default TeamCity configuration (ldap://some-server, which is unsecured) is that authentication requests for user proxies will always fail.

Setting Up TeamCity

Setting Up TeamCity to Use The LDAP Server

The easiest way is to start with the default TeamCity configuration in <TeamCity data directory>/config/ldap-config.properties:

java.naming.referral=follow
java.naming.provider.url=ldap://ldap.test.local:389
java.naming.security.authentication=simple

Unless you want to require your users to enter their login in the DOMAIN\username format I recommend adding the loginFilter property:

java.naming.referral=follow
java.naming.provider.url=ldap://ldap.test.local:389
java.naming.security.authentication=simple
loginFilter=.+

Now we need to set up the correct "user name" string to present it to the LDAP server. This string is created from the text entered in the "Username" text box on the login screen ($login$) and differs depending on whether you use LDAP with AD or ADAM/LDS:

java.naming.referral=follow
java.naming.provider.url=ldap://ldap.test.local:389
java.naming.security.authentication=simple
loginFilter=.+

# AD - authenticate against the TEST domain
formatDN=TEST\\$login$

# ADAM and presumably AD LDS - users will have to reside in the CN=Users,CN=TeamCity,DC=test,DC=local container
formatDN=CN=$login$,CN=Users,CN=TeamCity,DC=test,DC=local

Setting Up LDAPS Security

Enabling LDAPS is pretty easy from a TeamCity perspective. You just have to change line 2 of the configuration above to use the secure LDAP protocol:

java.naming.referral=follow
java.naming.provider.url=ldaps://ldap.test.local:636
java.naming.security.authentication=simple
loginFilter=.+
formatDN=<some value>

Changing the protocol to ldaps:// will not work right away, and users will not be authenticated. Why?

Trusting The Certificate

What does LDAPS mean from a Java perspective? If you work on a domain (AD) or use ADAM/LDS with SSL you are very likely to work with self-signed SSL certificates. Such certificates are inherently untrusted as they are not issued by some trusted party (and this trusted party will charge money). Nevertheless they are perfectly okay for your environment.

When TeamCity establishes the SSL connection to your LDAP server, it is first presented with that untrusted certificate – and bails. Here’s a snippet from the TeamCity log files:

[2009-01-27 16:14:39,864]  ERROR - Side.impl.auth.LDAPLoginModule - 
 
javax.naming.CommunicationException: simple bind failed: ldap.test.local:636
[Root exception is javax.net.ssl.SSLHandshakeException:
sun.security.validator.ValidatorException: PKIX path building failed:
sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target]

To establish LDAPS connections successfully, you have to tell Java to trust your LDAP server’s certificate. Andreas Sterbenz has created a little Java utility called InstallCert that helps in that regard. Unfortunately you will have to compile it yourself against the Java 1.5 runtime, so here’s a compiled version that works with TeamCity.

Place the files from the ZIP in your <TeamCity root>\jre\bin directory. Open a command prompt and enter

java InstallCert ldap.test.local:636

Following the procedure described in Andreas' post, the utility will create a file called jssecacerts in the same directory. Overwrite <TeamCity root>\jre\lib\security\cacerts with that file.

After re-starting the TeamCity web server, it is now able to establish secured connections to the LDAP server. The user names and passwords transmitted over these connections will not be visible to outsiders.

Wrapping It Up

In this article I’ve shown you how to enable and secure TeamCity’s LDAP authentication in any Windows environment, be it an Active Directory domain or a couple of stand-alone Windows Servers. In both scenarios user management is centralized, either through the AD console or through the LDAP console in combination with the Windows user management console.

Figuring all that out took me a considerable amount of time; hopefully this post saves you a couple of minutes that you can spend outside in the sun.

Debugging/Printing Custom NAnt Properties When Building A Project

Posted in Build at Monday, March 10, 2008 9:06 PM W. Europe Standard Time

A while ago, after reading Jean-Paul Boodhoo's excellent NAnt starter series, I switched my builds to NAnt (before I had just used VS to build). After a couple of hours of figuring out how to organize the structure of the build script, I was delighted how easy it could be to build whole Visual Studio solutions on the command line. NAnt also makes the development environment a lot easier to manage (and more fun) in regard to different configurations on the developer machine vs. test and production. Several projects come to mind in which I would have loved to use NAnt for a better experience when working in a team of developers.

NAnt allows us to apply different settings (think of a database connection string) for various environments and configurations, for which it relies heavily on the easy to grasp concept of properties. Properties may be defined anywhere within a NAnt project file, but I like to keep them separate in a file called default.properties, which might look like this:

<?xml version="1.0"?>
<project xmlns="http://nant.sf.net/release/0.86-beta1/nant.xsd">
    <property name="db.connectionstring" value="Data Source=CRM03; Integrated Security=true" />
</project>

In the contrived example above, there is one property defined, db.connectionstring. A developer might want to use another database name on his machine, so he creates another properties file, local.properties, which is loaded by the build script and overwrites the default value of db.connectionstring.

<?xml version="1.0"?>
<project xmlns="http://nant.sf.net/release/0.86-beta1/nant.xsd">
    <property name="db.connectionstring" value="Data Source=CRM_Database; Integrated Security=true" />
</project>

The build script loads both files (in case they exist):

<?xml version="1.0"?>
<project name="Project"
         default="all"
         xmlns="http://nant.sf.net/release/0.86-beta1/nant.xsd">

    <!-- Load default configuration. -->
    <if test="${file::exists('default.properties')}">
        <echo message="Loading default.properties" />
        <include buildfile="default.properties" />
    </if>

    <!-- Load developer-specific configuration. -->
    <if test="${file::exists('local.properties')}">
        <echo message="Loading local.properties" />
        <include buildfile="local.properties" />
    </if>
    ...
</project>

Read more about this concept in part 6 of Jean-Paul's series.

The interesting part is now how to output the properties when you run a build. It's easy to print property values using NAnt's built-in <echo> task:

<?xml version="1.0"?>
<project name="Project"
         default="all"
         xmlns="http://nant.sf.net/release/0.86-beta1/nant.xsd">

    <!-- Load default configuration. -->
    <if test="${file::exists('default.properties')}">
        <echo message="Loading default.properties" />
        <include buildfile="default.properties" />
    </if>

    <!-- Load developer-specific configuration. -->
    <if test="${file::exists('local.properties')}">
        <echo message="Loading local.properties" />
        <include buildfile="local.properties" />
    </if>

    <echo message="Build configuration:" />
    <echo message="db.connectionstring: ${db.connectionstring}" />
    ...
</project>

In a larger project the NAnt properties file might grow pretty quickly, depending on how many aspects of the system depend on environment specifics or build configurations. If you have 25 properties, it becomes a hassle to write an echo task for each of them. Also, when adding a new property, the developer has to remember to add the corresponding echo task.

This is where the NAnt <script> task comes in handy: it allows us to iterate over all properties and print them using an echo task we create on the fly. Note that the properties also include NAnt's own, starting with "nant.", which will be excluded from the output.

<?xml version="1.0"?>
<project name="Project"
         default="all"
         xmlns="http://nant.sf.net/release/0.86-beta1/nant.xsd">

    <!-- Load default configuration. -->
    <if test="${file::exists('default.properties')}">
        <echo message="Loading default.properties" />
        <include buildfile="default.properties" />
    </if>

    <!-- Load developer-specific configuration. -->
    <if test="${file::exists('local.properties')}">
        <echo message="Loading local.properties" />
        <include buildfile="local.properties" />
    </if>

    <echo message="Build configuration:" />
    <script language="C#">
        <code>
            <![CDATA[
                public static void ScriptMain(Project project)
                {
                    // Sort the properties by name for stable output.
                    System.Collections.Generic.SortedDictionary<string, string> sortedByKey =
                        new System.Collections.Generic.SortedDictionary<string, string>();
                    foreach (DictionaryEntry de in project.Properties)
                    {
                        sortedByKey.Add(de.Key.ToString(), de.Value.ToString());
                    }

                    NAnt.Core.Tasks.EchoTask echo = new NAnt.Core.Tasks.EchoTask();
                    echo.Project = project;
                    foreach (System.Collections.Generic.KeyValuePair<string, string> kvp in sortedByKey)
                    {
                        // Exclude NAnt's own properties.
                        if (kvp.Key.StartsWith("nant."))
                        {
                            continue;
                        }

                        echo.Message = String.Format("{0}: {1}", kvp.Key, kvp.Value);
                        echo.Execute();
                    }
                }
            ]]>
        </code>
    </script>
    ...
</project>