Page 1 of 2 in the PowerShell category Next Page

How We Practice Continuous Integration And Deployment With MSDeploy

Posted in Build | Deployment | PowerShell at Saturday, February 06, 2010 6:35 PM W. Europe Standard Time

About two years ago I quit the pain of CruiseControl.NET’s XML hell and started using JetBrains TeamCity for Continuous Integration. While being a bit biased here, I have to admit that every JetBrains product I looked at is absolutely killer and continues to provide productivity on a daily basis.

I’ve been a fan of Continuous Integration ever since. I figured the next step in improving our practice was to automate not only building, compiling and testing the application, but also deploying it, either at the click of a button or on a schedule. For example, updates to this blog’s theme and the .NET Open Space web sites are deployed by clicking the “Run” button on my local TeamCity instance.

Deployment Build Configurations in TeamCity

Compare that button click to what we are forced to do manually for some projects at work. Every time we roll out a new version someone will:

  • Build the deployment package with TeamCity.
  • Download the deployment package, which is usually a ZIP containing the application and database migrations.
  • RDP into the production server.
  • Upload the deployment package.
  • Shut down the web application, Windows services, etc.
  • Overwrite the binaries and configuration files with the current versions from the deployment package.
  • Sometimes we have to match up and edit configuration files by hand.
  • Upgrade the database by executing *.sql files containing migrations in SQL Server Management Studio.
  • Restart web application and Windows services, etc.
  • Hope fervently that everything works.

I believe you can imagine that the manual process outlined above offers a lot of rope to hang yourself with. An inexperienced developer might simply miss a step. On top of that, the implicit knowledge of which files need to be edited increases the bus factor. From a developer and business perspective you don’t want to deal with such risks. Deployment should be well documented, automated and easy to do.

Deployment Over Network Shares Or SSH

When I first looked into how I could do Continuous Deployment there were not many free products available on the Windows platform. In a corporate environment you could push your application to a Windows network share and configure the web application through scripts running within a domain account’s security context.

Deployment over an internet connection is a different story. You would want a secure channel like an SSH connection to copy files remotely and execute scripts on the server. This solution requires SSH on the server and tools from the PuTTY suite (e.g. psftp) to make the connection. I had such a setup in place for this blog and the .NET Open Space web sites, but it was rather brittle: psftp doesn’t provide synchronization, integration with Windows services like IIS is not optimal, and you’re somewhat limited in what you can do on the server.


Last year, Microsoft released MSDeploy 1.0, which was updated to version 1.1 last week. MSDeploy is targeted at application deployment and server synchronization. In this article, I will focus exclusively on the deployment aspects. Considering our requirements, MSDeploy had everything I asked for. MSDeploy either

  • runs as the Web Deployment Agent Service providing administrators unrestricted access to the remote machine through NTLM authentication, or
  • runs as the Web Deployment Handler together with the IIS Management Service to let any user run a specified set of operations remotely.

Both types of connections can be secured using HTTPS, which is great and, in my opinion, a must-have.

I won’t go into the details of how MSDeploy can be set up because these are well documented. What I want to talk about is the concepts we employ to deploy applications.

The Deployment Workflow

With about three months of experience with MSDeploy under our belts, we divide deployments into four phases:

  1. Initial, minimal manual preparation on the target server
  2. Operations to perform in preparation for the update
  3. Updating binaries
  4. Operations to perform after the update has finished

The initial setup to be done in phase 1 is a one-time activity that only occurs if we decide to provision a new server. This involves actions like installing IIS, SQL Server and MSDeploy on the target machine such that we can access it remotely. In phase 1 we also create web applications in IIS.

Further, we put deployments into two categories: Initial deployments and upgrade deployments. These only differ in the operations executed before (phase 2) and after (phase 4) the application files have been copied (phase 3). For example, before we can update binaries on a machine that is running a Windows service, we first have to stop that service in phase 2. After updating the binaries, that service has to be restarted in phase 4.

Over the last couple of weeks, we have identified a set of operations that we typically execute in phases 2 and 4.

| Operation | Description | During Initial Deployment | During Upgrade | Before Or After Deployment |
| --- | --- | --- | --- | --- |
| Set-WebAppOffline | Shuts down a web application by recycling the Application Pool and creating App_Offline.htm | No | Yes | Before |
| Set-WebAppOnline | Deletes App_Offline.htm | No | Yes | After |
| Create-Database | Creates the initial database | Yes | No | After |
| Update-Database | Runs migrations on an existing database | No | Yes | After |
| Import-SampleData | Imports sample data into an existing database for QA instances | Yes | No | After |
| Install-Service | Installs a Windows service, for example one that runs nightly reports | Yes | Yes | After |
| Uninstall-Service | Stops and uninstalls a Windows service | No | Yes | Before |

It’s no coincidence that the operations read like PowerShell Verb-Noun cmdlets. In fact, we run operations with PowerShell on the server side.

The deployment directory that will be mirrored between the build server and the production machine looks like the one depicted in the image to the right.

The root directory contains a PowerShell script that implements the operations above as PowerShell functions. These might call other scripts inside the deployment directory. For example, we invoke Tarantino (created by Eric Hexter and company) to have our database migrations done.


$scriptPath = Split-Path -parent $MyInvocation.MyCommand.Definition

# Change into the deployment root directory.
Set-Location $scriptPath

function Create-Database()
{
    & ".\SQL\create-database.cmd" /do_not_ask_for_permission_to_delete_database
}

function Import-SampleData()
{
    & ".\SQL\import-sample-data.cmd"
}

function Upgrade-Database()
{
    & ".\SQL\update-database.cmd"
}

function Install-Service()
{
    & ".\Reporting\deploy.ps1" Install-Service
    & ".\Reporting\deploy.ps1" Run-Service
}

function Uninstall-Service()
{
    & ".\Reporting\deploy.ps1" Uninstall-Service
}

function Set-WebAppOffline()
{
    Copy-Item -Path "Web\App_Offline.htm.deploy" -Destination "Web\App_Offline.htm" -Force
}

function Set-WebAppOnline()
{
    Remove-Item -Path "Web\App_Offline.htm" -Force
}

# Runs all command line arguments as functions.
$args | ForEach-Object { & $_ }

# Hack, MSDeploy would run PowerShell endlessly.
Get-Process -Name "powershell" | Stop-Process

The last line is actually a hack, because PowerShell 2.0 hangs after the script has finished.

Rake And Configatron

As you might remember from last week’s blog post, we use Rake and YAML in our build scripts. Rake and YAML (with Configatron) allow us to

  • build the application,
  • generate configuration files for the target machine, thus eliminating the need to make edits, and
  • formulate MSDeploy calls in a legible and comprehensible way.

Regarding the last point, please consider the following MSDeploy command line that synchronizes a local directory with a remote directory (think phase 3). PowerShell operations will be performed before (-preSync, phase 2) and after the sync operation (-postSyncOnSuccess, phase 4).

"tools/MSDeploy/msdeploy.exe" -verb:sync -postSyncOnSuccess:runCommand="powershell.exe -NoLogo -NoProfile -NonInteractive -ExecutionPolicy Unrestricted -Command C:/Crimson/deploy.ps1 Create-Database Import-SampleData Install-Service Set-WebAppOnline ",waitInterval=60000 -allowUntrusted -skip:objectName=filePath,skipAction=Delete,absolutePath=App_Offline\.htm$ -skip:objectName=filePath,skipAction=Delete,absolutePath=\\Logs\\.*\.txt$ -skip:objectName=dirPath,skipAction=Delete,absolutePath=\\Logs.*$ -preSync:runCommand="powershell.exe -NoLogo -NoProfile -NonInteractive -ExecutionPolicy Unrestricted -Command C:/Crimson/deploy.ps1 Set-WebAppOffline Uninstall-Service ",waitInterval=60000 -usechecksum -source:dirPath="build/for-deployment" -dest:wmsvc=BLUEPRINT-X86,username=deployer,password=deployer,dirPath=C:/Crimson

The command line is convoluted and overly complex, isn’t it? Now please consider the following Rake snippet that was used to generate the command line above.

remote = Dictionary[]
if configatron.deployment.connection.exists?(:wmsvc) and configatron.deployment.connection.wmsvc
    remote[:wmsvc] = configatron.deployment.connection.address
    remote[:username] = configatron.deployment.connection.user
    remote[:password] = configatron.deployment.connection.password
else
    remote[:computerName] = configatron.deployment.connection.address
end

preSyncCommand = "exit"
postSyncCommand = "exit"

if configatron.deployment.operations.before_deployment.any?
    preSyncCommand = "\"powershell.exe -NoLogo -NoProfile -NonInteractive -ExecutionPolicy Unrestricted -Command #{"deploy.ps1".in(configatron.deployment.location)} #{configatron.deployment.operations.before_deployment.join(" ")} \""
end

if configatron.deployment.operations.after_deployment.any?
    postSyncCommand = "\"powershell.exe -NoLogo -NoProfile -NonInteractive -ExecutionPolicy Unrestricted -Command #{"deploy.ps1".in(configatron.deployment.location)} #{configatron.deployment.operations.after_deployment.join(" ")} \""
end

MSDeploy.run \
    :tool => configatron.tools.msdeploy,
    :log_file => configatron.deployment.logfile,
    :verb => :sync,
    :allowUntrusted => true,
    :source => Dictionary[:dirPath, configatron.dir.for_deployment.to_absolute.escape],
    :dest => remote.merge({
        :dirPath => configatron.deployment.location
    }),
    :usechecksum => true,
    :skip => [
        Dictionary[
            :objectName, "filePath",
            :skipAction, "Delete",
            :absolutePath, "App_Offline\\.htm$"
        ],
        Dictionary[
            :objectName, "filePath",
            :skipAction, "Delete",
            :absolutePath, "\\\\Logs\\\\.*\\.txt$"
        ],
        Dictionary[
            :objectName, "dirPath",
            :skipAction, "Delete",
            :absolutePath, "\\\\Logs.*$"
        ]
    ],
    :preSync => Dictionary[
        :runCommand, preSyncCommand,
        :waitInterval, 60000
    ],
    :postSyncOnSuccess => Dictionary[
        :runCommand, postSyncCommand,
        :waitInterval, 60000
    ]
It’s a small Rake helper class that transforms a Hash into a MSDeploy command line. That helper also includes console redirection that sends deployment output both to the screen and to a log file. The log file is also used to find errors that may occur during deployment (see below).

For your convenience, these are the relevant parts of the configuration, expressed in YAML and parsed with Configatron.

deployment:
  location: C:/Crimson
  operations:
    before_deployment: [Set-WebAppOffline, Uninstall-Service]
    after_deployment: [Create-Database, Import-SampleData, Install-Service, Set-WebAppOnline]
  connection:
    wmsvc: true
    address: BLUEPRINT-X86
    user: deployer
    password: deployer

What I Haven’t Talked About

What’s missing? An idea that got me interested was to partition the application into roles like database server, reporting server, web application server, etc. We mostly do single-server deployments, so I haven’t built that yet (YAGNI). Eric Hexter talks about application roles in a recent blog entry.

Another aspect where MSDeploy unfortunately doesn’t shine is error handling. Since we run important operations using the runCommand provider (used by -preSync and -postSyncOnSuccess), we want the deployment to fail when something bad happens. Unfortunately MSDeploy, to this day, ignores errorlevels that indicate errors. So we’re back to console redirection and string parsing. This functionality is already part of my MSDeploy helper for Rake, so you can rely on it to a certain degree. Manually scanning log files for errors is recommended, though, at least for the first couple of automated deployments.

Since we’re leveraging PowerShell on the server, why should we have to build the PowerShell script handling operations ourselves? I can imagine deploying the PowerShell-based PSake build tool together with a PSake build script containing operations turned into build targets. This would allow for common build script usage scenarios like task inspection (administrators will want that), task dependencies, error handling and so on.

Wrapping Up

In this rather long post, I hope I could give you an idea of how MSDeploy can be used to deploy your applications automatically. For us, over the last couple of weeks, MSDeploy in combination with our Rakefiles has helped tremendously in deploying an application that’s currently under development: Delivering current versions to the customer has become a breeze.

Creating Remote Desktop Connection Files On The Fly With PowerShell

Posted in .NET | PowerShell at Friday, February 13, 2009 6:28 PM W. Europe Standard Time

About a month ago I got myself an IBM Lenovo X301 laptop. It’s the third machine I regularly use: I have a desktop at work, a workstation at home and now there’s also the sweet X301. Even with only the second machine in place I found it crucial to keep my work environments in sync automatically. Live Mesh has been a great help with that. Despite being a CPU hog at times, it’s a great tool to synchronize your files across machines.

Now with Remote Desktop Connection files (*.rdp) there is actually a bit more work involved than just syncing files. My workstations both have two monitors, and I like Remote Desktop windows to be placed on the second monitor. The laptop, on the other hand, has a single display (surprise!) and less screen real estate. Because of the varying screen resolutions, an RDP window sized for my home workstation will not fit the display area at work without obscuring other UI elements like the taskbar. On the laptop, the situation gets worse as the display is simply not large enough to fit my default window size of 1500x1024 pixels.


The dimensions and location of the Remote Desktop window are stored in the plain-text RDP file itself. A conceivable solution might be to create three RDP files per server, one for each of my machines. But that would mean touching three files every time I need to change some value for the connection. Horrible.

Fortunately there is PowerShell to the rescue. There’s even a script that you can use to create RDP files on the fly. You’ll have to patch line 178 to make it work (see below). Also make the Set-RDPSetting function global by uncommenting lines 87 and 216.

# Old
$content += @("${tempname}:$($datatypes[$tempname]):$tempvalue")

# New
$content += @("${tempname}:$($datatypes[$tempname]):$tempvalue" + [System.Environment]::NewLine)

Now that we're set to use the Set-RDPSetting function, let's create a script that writes an RDP file to the system’s temporary folder. See the highlighted lines below; there are three hashtables:

  • $workstations for your workstations (mine are AXL, FIRIEL and EOWYN),
  • $servers for the RDP servers you want to connect to and
  • $defaults for default connection properties.
. c:\Path\To\Set-RDPSetting.ps1

$workstations = @{
	'AXL' = @{
		'desktopwidth' = 1500
		'desktopheight' = 1024
	}
	'FIRIEL' = @{
		'desktopwidth' = 1300
		'desktopheight' = 800
	}
	'EOWYN' = @{
		'desktopwidth' = 1300
		'desktopheight' = 800
	}
}

$servers = @{
	'' = @{
		'session bpp' = 24
		'domain' = 'DEVELOPMENT'
	}
	'host.with.ssl.certificate' = @{
		'session bpp' = 24
		'authentication level' = 2
		'disable wallpaper' = $true
		'desktopwidth' = 1280
		'desktopheight' = 1024
	}
}

$defaults = @{
	'allow desktop composition' = $true
	'allow font smoothing' = $true
	'alternate shell' = $null
	'audiomode' = 2
	'authentication level' = $false
	'auto connect' = $true
	'autoreconnection enabled' = $true
	'bitmapcachepersistenable' = $true
	'compression' = $true
	'connect to console' = $false
	'desktopheight' = $null
	'desktopwidth' = $null
	'disable cursor setting' = $false
	'disable full window drag' = $false
	'disable menu anims' = $true
	'disable themes' = $false
	'disable wallpaper' = $false
	'displayconnectionbar' = $true
	'domain' = $null
	'drivestoredirect' = '*'
	'full address' = $args[0]
	'keyboardhook' = 1
	'maximizeshell' = $false
	'negotiate security layer' = $true
	'prompt for credentials' = $false
	'promptcredentialonce' = $true
	'redirectclipboard' = $true
	'redirectcomports' = $false
	'redirectdrives' = $false
	'redirectposdevices' = $false
	'redirectprinters' = $false
	'redirectsmartcards' = $false
	'remoteapplicationmode' = $false
	'screen mode id' = 1
	'server port' = 3389
	'session bpp' = 32
	'shell working directory' = $null
	'smart sizing' = $true
	'username' = 'agross' # Does not really matter what's in here.
	'winposstr' = '0,3,2046,129,3086,933'
}

Next we check that the local machine has a configuration section in the $workstations hashtable and that the script has been called with arguments.

if ($workstations.Keys -inotcontains $Env:ComputerName)
{
	"The local computer is not configured."
	exit
}

if ($args -eq $null -or $args.Length -eq 0)
{
	"No arguments. Supply the RDP server name as the first argument."
	exit
}

Note the Patch-Defaults function and how we use it to add and replace keys in the $defaults hashtable. The replacement values come from $workstations and $servers, with the server settings having precedence. This way, you can configure the connection profile according to the current machine and the server to which the connection will be made. Flexibility!

function Patch-Defaults
{
	param(
		[Hashtable] $defaults = $(Throw "Defaults hashtable is missing"),
		[Hashtable] $patch = $(Throw "Patch hashtable is missing")
	)

	if ($patch -ne $null)
	{
		$patch.GetEnumerator() | ForEach-Object { $defaults[$_.Name] = $_.Value }
	}
}

Patch-Defaults -Defaults $defaults -Patch $workstations[$Env:ComputerName.ToUpper()]
Patch-Defaults -Defaults $defaults -Patch $servers[$args[0].ToLower()]
$defaults.GetEnumerator() | Sort-Object -Property Name

Now that we have all connection properties in place, we create a temporary connection file from the hashtable key/value pairs and start mstsc.exe with that file.

$tempFile = [System.IO.Path]::GetTempFileName()
$defaults.GetEnumerator() | Sort-Object -Property Name | ForEach-Object { Set-RDPSetting -Path $tempFile -Name $_.Name -Value $_.Value }

# For debugging purposes.
#"Temporary file: " + $tempFile
#Get-Content $tempFile

$MstscCommand = $Env:SystemRoot + "\system32\mstsc.exe"
&$MstscCommand $tempFile

How do we use the script we just created?

You can either create batch files calling the script or use a tool like SlickRun to execute PowerShell with the script.

@powershell.exe -noprofile -command .\Open-CustomRDP.ps1 your.server.example

Another tedious task has been automated!

RSS Enclosure Downloader In PowerShell

Posted in PowerShell at Friday, July 25, 2008 11:07 PM W. Europe Daylight Time

I used this one today to download all images of a Picasa album:

$feed = [xml](New-Object System.Net.WebClient).DownloadString("http://the/rss/feed/url")

foreach ($i in $ {
	$url = New-Object System.Uri($i.enclosure.url)
	(New-Object System.Net.WebClient).DownloadFile($url, $url.Segments[-1])
}

General Considerations For Securing Windows Servers On The Internet (And Anywhere Else)

Posted in IIS | Networking | PowerShell | Security | SQL Server | Windows at Friday, February 08, 2008 5:04 PM W. Europe Standard Time

By now there are a couple of Windows Servers that I actively manage, or, in the case of projects, I touched while moving the project forward. Most of these servers have an Internet connection. Since I've been asked how to make servers more secure and manageable, here are a couple of management rules I applied. Consider it a checklist.

  • Use a firewall and configure it accordingly.
  • Enable automatic Windows Updates and upgrade to Microsoft Update to receive updates of other Microsoft products like SQL Server.

Okay, the two above should have been obvious.

  • Keep the machine clean.
    Don't install any unnecessary software and don't leave any temporary files on the server. I've seen certificate requests lingering on drive C: and "test" folder remnants. A clean system might reveal hacker activity early if intruders leave unfamiliar files behind.
  • Leverage the principle of least privilege.
    All users and service accounts should only have the minimum rights they need. Configure the file system such that system services can only access the files and folders they are in charge of. Typical example: Use a dedicated service account for SQL Server. (Setting this up on SQL Server 2005 is even simpler.)
  • Rename the Administrator account.
    Rename the Administrator account and make it match your preferred user naming scheme (e.g. agross). You might apply this to your whole organization if you use Active Directory. This adds another hurdle to guessing the Administrator account from a list of user accounts and works well with account lockout enabled (see below).
  • Create a new "Administrator" account
    and give it a very strong throw-away password. I typically use two or three concatenated GUIDs that I immediately forget. Disallow password changes for this user, remove all group memberships and disable the account.
  • Audit the hell out of the machine.
    Windows uses the Security event log to record security-related events. Configure auditing using secpol.msc and enable success and failure logging at least for
    • Account logon events,
    • Logon events and
    • Policy change.
    The last option is for tracking changes to the policy you just set.
  • Enable complexity criteria for passwords and account lockout.
    To be found in the same MMC snap-in as above. For account lockout I often go with the default values of 5 attempts and 30 minutes for both threshold and duration.
  • Deactivate file sharing/Microsoft Networking on the WAN connection.
    Because it's most likely unneeded.
  • Secure RDP sessions using a certificate.
    Torsten has a nice write-up on how to leverage SSL to secure the RDP handshaking/authentication phase to overcome man-in-the-middle and ARP spoofing attacks. His article is available in German only, so here's a two-sentence recap: On the server's RDP security tab, enable SSL and choose the same certificate you use for HTTPS encryption. On the client side, enable server authentication.
  • Extra: Allow RDP sessions only from white-listed workstations.
    For a server that was hacked a while ago using an ARP spoofing attack (see the bullet above) I wrote a PowerShell script that forces RDP sessions to originate from a list of known workstations. Each RDP user can have multiple allowed workstations. If a logon attempt occurs from another machine, that RDP session is killed immediately.
    # Alert.ps1
    # Logon script for users with known RDP client names.

    # Hashtable of users with their known workstations.
    $userWorkstations = @{
    	"user2" = @("VALIDCOMPUTERNAME3")
    }

    # Logoff executable.
    $logoffCommand = $Env:SystemRoot + "\system32\logoff.exe"

    # Trim the user name.
    $currentUser = $Env:UserName.Trim()

    # Cancel if a user that's not contained in $userWorkstations logs on.
    if ($userWorkstations.Keys -inotcontains $currentUser)
    {
    	exit
    }

    # Send alert e-mail and log off if the user logs on from an unknown workstation.
    if ($userWorkstations[$currentUser] -inotcontains $Env:ClientName.Trim())
    {
    	$subject = $("Unknown RDP client '{0}' for user '{1}'" -f $Env:ClientName.Trim(), $currentUser)

    	$message = New-Object System.Net.Mail.MailMessage
    	$message.From = ""
    	$message.IsBodyHtml = $false
    	$message.Priority = [System.Net.Mail.MailPriority]::High
    	$message.Subject = $subject
    	$message.Body = $subject

    	$smtp = New-Object System.Net.Mail.SmtpClient
    	$smtp.Host = "localhost"
    	$smtp.Send($message)

    	# Force logoff.
    	& $logoffCommand
    }
    Set the script as a logon script for your users using the Alert.cmd helper script below.
    rem Alert.cmd - runs the Alert.ps1 Powershell script.
    @powershell.exe -noprofile -command .\Alert.ps1
  • Enable SQL Server integrated authentication.
    I don't see a need for SQL Server Authentication in most scenarios, especially if you're running/hosting .NET applications. However, in some projects I've worked on there seems to be a tendency towards SQL Server Authentication for no special reason.
  • Configure IIS log detail and directories.
    I tend to enable full IIS logs, that is, all information regarding a request should be logged. Also, I like my logs residing in "speaking" folders, so instead of %windir%\system32\LogFiles\W3SVC<Some number> they should be placed in %windir%\system32\LogFiles\IIS <Site name>\W3SVC<Some number>. This makes it easy to find the logs you're interested in.
  • Use SSH to connect remotely.
    There's a little free SSH server out there that should fit most users' needs. Besides a secure shell environment, freeSSHd offers SFTP and port tunneling capabilities to tunnel insecure protocols. Authentication works natively against Windows accounts.
  • Deploy a server monitoring tool.
    I like to use the free MRTG tool, but of course any other tool that lets you quickly spot uncommon activity will do.
  • Use a dedicated management network interface, if possible.
    You should configure strict firewall rules for that interface. Allow access only from a known management subnet.

What rules do you apply to make your servers more secure and manageable?

Now playing [?]: Morcheeba – Dive Deep – Enjoy the ride (feat. Judy Tzuke)

Updating ISA Server's Local IPSec Endpoint Address for Dial-Up Site-To-Site VPNs Through PowerShell

Posted in Networking | PowerShell at Sunday, September 16, 2007 11:50 PM W. Europe Daylight Time

Wow, what a mouthful of a title.

What do we have here? We've got a Site-To-Site VPN to be established from an ISA Server with a dynamic IP address. Dynamic IP addresses are assigned by ISPs for consumer cable and DSL internet connections and are renewed at least every 24 hours. That is, every time your internet connection is established, your public IP changes. This prevents you from hosting internet servers on top of your cheapo DSL service.

Site-To-Site VPNs basically connect two private networks over a secured internet tunnel. ISA Server supports Site-To-Site VPNs but requires a static IP address for the local VPN gateway. Since the local VPN gateway is your ISA Server with its dynamic IP address, you either have to purchase a static IP address (expensive) or update the local endpoint's IP address every time your internet connection is established. A pitfall I ran into when setting up the VPN connection together with my very helpful colleague Timo from Synexus was that we were able to connect to the remote site successfully until the DSL connection dropped. All further VPN connection attempts failed because, due to the reconnect, ISA Server's public IP had changed. This isn't reflected in the remote site configuration; the dropdown box is just emptied:

Local Endpoint IP is empty after reconnects

Now you'll either have to change the local endpoint address using the ISA Server management console every time your DSL line is connected or you could automate that using ISA's rich COM object model.

Scripting the ISA Server COM objects with Windows PowerShell is easy. It actually takes just six lines of code to set the new public IP for all Site-To-Site VPN connections.

param([string] $newIP = $(throw "Update-IsaServerIPSecPublicIp: New IP address is missing"))

$root = New-Object -comObject "FPC.Root" -strict
$server = $root.Arrays | Select-Object -first 1
$publicNetwork = $server.NetworkConfiguration.Networks | Where-Object { $_.NetworkType -eq 3 }
$ipSecSite2SiteNetworks = $server.NetworkConfiguration.Networks | Where-Object { $_.NetworkConnectionType -eq 4 }

$ipSecSite2SiteNetworks | ForEach-Object { $_.VpnConfiguration.IPSecSettings.LocalServerAddress = $newIP; $_.VpnConfiguration.IPSecSettings.Save() }

The next step will be to integrate this script into an application that detects changes to the IP address of certain network interfaces – we use DirectUpdate to keep our domains current. But any other tool, like the free DynDNS Updater, that is able to spawn a process when a new IP is detected will certainly do.

C:\WINDOWS\system32\WindowsPowerShell\v1.0\powershell.exe -noprofile -command C:\Tools\Update-IsaServerIPSecPublicIp.ps1 <new IP address>

If you haven't used PowerShell until now, take a look at Scott Hanselman's most recent dnrTV webcast. Scott has recorded a really nice piece where he starts by downloading the PowerShell bits and walks you through the basic concepts and some of the more interesting details.
