Introduction to Kitchen-DSC

This is the second article in a “getting started with DSC Development” series, covering tools and workflow. The previous post covered general Windows automation techniques, some of them around since the Windows XP days and before!

This time we’ll look at the tools, techniques and workflow that will enable us to make changes iteratively, using the scientific method.


For the demo of my session at WinOps London I was looking for the simplest way someone could experiment with DSC and Windows configuration technologies, while also having a glimpse of a sustainable workflow.

The Test-Kitchen framework and its different components make it very convenient to get started quickly and easily, and to make progress iteratively.
This post is an attempt to introduce you to Windows server automation development, starting from scratch (a Windows Evaluation VHD) and manual steps (with the GUI), incrementally moving towards a more streamlined workflow with PowerShell, Test-Kitchen, Kitchen-DSC, kitchen-hyperv, Pester, git and whatever comes in handy!


This step-by-step guide assumes a few things:

  • Your dev system is running Windows
  • WMF 5 or greater is installed
  • Hyper-V is installed and configured
  • You have a Hyper-V vSwitch configured
  • You have IP (DHCP?), DNS and Internet access from your vSwitch
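As a quick sanity check, the assumptions above can be verified from an elevated PowerShell prompt (a sketch; adapt to your setup):

```powershell
# WMF / PowerShell version (expect 5 or greater)
$PSVersionTable.PSVersion

# Hyper-V feature state (expect 'Enabled')
Get-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V |
    Select-Object FeatureName, State

# List the available vSwitches and pick the one with the connectivity you need
Get-VMSwitch | Select-Object Name, SwitchType
```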


  1. Kitchen DSC development workflow
    a. The Release Pipeline Model
    b. Mapping Test-Kitchen to TRPM
    c. Test-kitchen Architecture
  2. Setup Test-Kitchen Environment
  3. Bootstrap Test-Kitchen
    a. SCM, cuz you need it
    b. .kitchen.yml basics
    c. Pester tests
  4. Putting it together
    a. Testing the Setup
    b. Making your first change


The Kitchen DSC development workflow

a. The Release Pipeline Model

When trying to automate the management of your server estate, it might be overwhelming to go from the injunction “manage your servers like cattle, not pets” to actually figuring out what that means, what the benefits are, how to implement it in a Windows environment, and what tools might be useful along the way.

The good news is that Microsoft, in collaboration with Chef Software (Michael Greene and Steven Murawski), has released a brilliant whitepaper describing those concepts in detail: The Release Pipeline Model (TRPM hereafter).

Michael Greene also presented it during the WinOps conference 2016: The Release Pipeline Model (also on Channel9).

Although this model encompasses the whole delivery pipeline of an IT Operations team, Test-Kitchen allows you to run a simplified version of this workflow on your dev machine, giving you similar benefits and a very short feedback loop, and easing your transition when you scale out to production.

In TRPM, the simplified flow is Source → Build → Testing → Release.

The use of Test-Kitchen I describe here sits, within a global Release Pipeline, in the Source step: I focus on what a developer would experiment with locally (or at least in an isolated environment).

You will develop your configuration locally by writing your source, building, and testing as you progress until you’re happy with a change, then release it to your environment via commit / push / pull request (whatever your workflow is) to the environment’s source repository. In turn, this change will trigger the system-wide build / test / release: your environment’s global delivery pipeline.

Although you don’t have to do this locally, the target audience of this article is someone who wants to start experimenting quickly with little infrastructure and a very short feedback loop, hence my recommendation to work locally.


Test-Kitchen can also be leveraged in a global Release pipeline, but it is out of scope for this post and would also require other components. This could be the next thing someone would try to set up once they’ve got their local playground working and get their first deliverable ready.

b. Mapping the Kitchen workflow to TRPM

There’s a very good intro to test-kitchen here, which I found quite late in my journey, but some parts are focused on Chef (to be fair, the site is there to “learn Chef”).

Bear in mind that what I describe here is my interpretation, and only one of the ways of working with it. I’m sure there are other approaches, and it would be interesting to get feedback on them (please blog them and post a link in the comments). In particular, there are a few challenges and work-arounds when you want to scale this up to a bigger system (multi-node), or integrate with other “variables” (team, technology…).

Here I assume the following scenario:

  • You (or your customer, even internal) have an idea of what the need is, configuration-wise
  • You are starting to configure a single node system
  • You know how to configure it manually (i.e. through the GUI)
  • You can find how to test the setting in PowerShell
  • You can find how to configure the setting in PowerShell
  • You are using TDD, because you should 🙂

Within that inner loop, for each user story (single, value-adding, configuration item) here’s what you’d do:

  1. Experiment with the configuration (i.e. manually)
  2. Write the Pester test that confirms the setting is what you expect
    (you can try that on a manually configured VM)
  3. Run that test on an un-configured VM and confirm it fails
    (Because you should “Never trust a test that you haven’t seen fail“)
  4. Write the configuration of your configuration item
  5. Try to apply your configuration definition to the base image (converge)
  6. Run the Pester test you wrote and confirm it passes (Verify)
  7. Destroy the VM to start afresh (destroy)
  8. Run your test suite end to end (that’s a full test: converge – verify – destroy)
  9. Commit to SCM (i.e. git)
  10. To add another item, go back to step 1, adding more tests and configuration one at a time


You can roughly see the mapping of those steps with TRPM:

  • 1 to 4 are the SOURCE step
  • 5 could be seen as the BUILD (I know, you may see it as part of testing phase)
  • 6 – 8 the TESTING
  • 9 the RELEASE

Note: Especially if you’re starting with Policy Driven Infrastructure (aka Infrastructure as Code) or DSC, make sure you start with something very simple that you are sure will work. A small starting scope makes learning the platform and debugging easier. Only then can you iteratively add small changes and test them often. This loop will let you add variables, and potentially breaking changes, one at a time, in small batches.
This gives you instant and targeted feedback on specific changes when something does not work.
The same goes for implementing TRPM: if you try to do everything from the start, you have too many sources of conflict/failure. Follow the KISS principle!

Test-Kitchen handles steps 5, 6 and 7 automatically when you run a test.
Calling kitchen test will actually run the following commands in sequence:

  kitchen converge
  kitchen verify
  kitchen destroy

That is, unless an exception is raised, in which case the process stops (so that you can troubleshoot the issue).
Under the hood, here is an overview of what’s happening during that process.

(Diagram: the kitchen test workflow.)

c. The Kitchen Architecture

Apologies for the quick and dirty hand drawings, but they are a quick way to illustrate. Below I’ve represented part of the architecture of Test-Kitchen and the components we are using for getting started with DSC.
To make the principle work across different OSes and technologies, the design of Test-Kitchen abstracts each specific technology into layers, achieving low coupling and high cohesion.
The obvious benefit is that in my example I use Kitchen-HyperV as my Test-Kitchen driver, but you could use any of the available alternatives, or develop your own, and the rest would still work unchanged.
For instance, Steven Murawski in his example uses Vagrant to drive the virtualization layer.

Our case leverages DSC for the Provisioning (converging to our expected state), but you could also use Chef.

Pester is our test framework used as the verifier, but should we want to create and use another one, it is possible to do so without affecting the other layers.


Note: It is worth noting that despite their name and the diagram, the communication between test-kitchen and the different components of the virtualization solution requires connectivity. In that regard, I have assumed (and drawn) a LAN that has Internet access via a Layer 3 device that also provides IP (via DHCP?) and DNS to both the dev machine and the test VM. It is not a hard requirement, but it is the easiest scenario to be in.

Note: Not all layers or components are represented in the diagram; in particular, one is missing between Transport and Driver: the PLATFORM. To keep things simple for now I have left it out, and will explain more when looking at the .kitchen.yml configuration file.


Setup Test-Kitchen Environment

This bit is easy and has been covered by Steven Murawski on his blog: Getting Started With Test-Kitchen and DSC.
That’s what triggered me, not too long ago, to give it a go and write this post.

Install-Package chefdk

Note: once you’ve installed the ChefDK via PowerShell Package Management (or Chocolatey), the Path may not be updated in your current session (so you won’t be able to call the chef command). [choco v9.10 fixes that, not PPM]
Either open a fresh console, or add the ChefDK path to your Path environment variable:

$env:Path += ';C:\opscode\chefdk\bin\'

You can now call the chef command to install the required gems:

chef gem install kitchen-dsc kitchen-pester kitchen-hyperv


Bootstrap your Test-Kitchen for DSC

To get you started quickly and simply, you may start by watching the Mississippi PowerShell User Group recording on that subject: Acceptance Testing PowerShell Desired State Configuration with Test-Kitchen

If you’re more the kind of person who likes to play with technology before the explanation, you can try the following:


a. SCM, you need it. Period.

It might be obvious for many, but for the rest: You need a Source Code Management software.
Git is probably the most popular these days, and it’s pretty easy to install (remember: Install-Package git with WMF 5).

The whys and hows are beyond the scope of this post, but you will want to version the changes you make during your development. Best is to sync them to a central location (such as GitHub / Bitbucket or any other) to serve as a backup, or even better, to enable collaboration!

If you use git and github, you may want to fork my repo to your account, and clone it to your desktop.
To fork, start by browsing to the repo:

Click on ‘Fork’ on the top right corner:
Click on ‘Fork’ in the top right corner, then clone your repository to your workstation:
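From a PowerShell prompt, the clone looks something like this (the account and folder names below are placeholders; replace them with your own fork’s URL and your preferred path):

```powershell
# Clone your fork locally, then move into it
git clone https://github.com/<your-account>/<your-fork>.git C:\src\kitchens\TestScript
Set-Location C:\src\kitchens\TestScript
```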

What you get is a folder with all the files needed to get started, my slide deck from the WinOps Conf London (you won’t need it, feel free to delete it), and a README with some light information.

Note: The .gitignore is configured to ignore .iso files (see my trick for unattend.xml) and .kitchen/ folders (the bits test-kitchen creates that you won’t need to version, like VM diff disks).

The interesting bit for you is what’s in the TestKitchen folder.

The YAML file .kitchen.yml is your test configuration; more details follow.

b. kitchen.yml basics

You can find the documentation for the configuration file .kitchen.yml here:
I’ll detail the specifics of Test-Kitchen on Windows using Kitchen-Hyperv, Kitchen-DSC and Kitchen-Pester.

Note: Ruby does not like BOMs. This is annoying because PowerShell natively creates files in UTF-8 with BOM, which causes issues for Test-Kitchen.
Here’s the issue I got:

If you use Visual Studio Code for your edits, the encoding should show at the bottom right in the status bar:
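If a file has ended up with a BOM (VS Code can re-save without one, but so can PowerShell), one way to rewrite it as BOM-less UTF-8 is via the .NET encoding classes (a sketch; the path is an example):

```powershell
$path = 'C:\src\kitchens\TestScript\.kitchen.yml'  # example path
$content = Get-Content -Path $path -Raw

# UTF8Encoding($false) means 'UTF-8, no BOM'
[System.IO.File]::WriteAllText($path, $content, (New-Object System.Text.UTF8Encoding($false)))
```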

The file is a YAML definition with different sections for the different components.


Hyper-V Driver:

As we’re using Hyper-V for this example, here’s what the driver section looks like.
The driver settings are the defaults, unless overridden per platform in the Platforms section.

Bear in mind that different drivers might support different functionalities. For instance, the boot_iso_path is an addition I made (Merged by Steven Murawski in kitchen-hyperv 0.2.0) so that I could do the Unattend.xml ISO trick.

    driver:
      name: hyperv
      parent_vhd_folder: C:\src\hyperv\WIN2012r2WMF5
      parent_vhd_name: WIN2012r2_WMF5.vhd
      boot_iso_path: C:\src\kitchens\TestScript\Unattendxml.iso
      #iso_path: C:\src\kitchens\TestScript\other.iso
      memory_startup_bytes: 1073741824
      vm_switch: NAT
      dns_servers: ['']

Remember that you should ensure the vm_switch has the right connectivity to communicate with your development machine.
After some issues with DHCP at work, I finally opted to use a NAT vSwitch available in Windows 10 or Win2016, here’s a good post on the subject.

As we did manually in the other post, the Hyper-V driver for Test-Kitchen will:
  1. Create a differencing disk from the parent VHD
  2. Create a VM based on that diff disk and the other configuration
  3. Attach the ISO to the DVD device
  4. Start the VM


WinRM Transport:
The transport mechanism is in essence the protocol used to communicate with the VM.
SSH is the de-facto standard for Linux VMs and WinRM is the standard for Windows.
We can imagine that one day, someone may develop a PowerShell Direct transport for Test-Kitchen.

   transport:
     name: winrm
     username: Administrator
     password: P@ssw0rd  # matches the Admin password set in the Unattend.xml file

This is what Test-Kitchen will use to transfer files and invoke commands remotely.

DSC Provisioner:
The provisioner we chose is kitchen-dsc, so this section lets you configure the DSC engine (the LCM) on the VM. Bear in mind that we’re expecting WMF5 (the config settings change slightly between wmf4 and wmf4_with_update).
It is good to note that Kitchen-DSC now supports reboots initiated by DSC.

    provisioner:
      name: dsc
      dsc_local_configuration_manager_version: wmf5
      dsc_local_configuration_manager:
        reboot_if_needed: true
        #configuration_mode_frequency_mins: 30
        #debug_mode: none
      configuration_script_folder: examples
      configuration_script: dsc_configuration.ps1
      #modules_path: .
      #configuration_data_variable: configData
      # - xPSDesiredStateConfiguration
      # - PackageManagementProviderResource


The verifier is the component used to check whether the convergence was a success; in our case that’s kitchen-pester: it runs a series of Pester tests (all it can find) and if no test fails, the result is considered successful. Here I specify the path to my test files, but this folder is what Kitchen-Pester uses by default anyway.

   verifier:
     name: pester
     test_folder: Tests/Integration


This is the list of platforms available to run test suites against. The most obvious use case is to run a test suite on different platforms, such as Windows 2012 R2 and Windows 2016.

    platforms:
      - name: 2012r2_WMF5
        os_type: windows
        shell: powershell
        parent_vhd: C:\src\hyperv\WIN2012R2\WIN2012r2_WMF5.vhd
        dsc_local_configuration_manager_version: wmf5
      - name: WIN2016
        parent_vhd: C:\src\hyperv\WIN2016\WIN2016.vhd
        dsc_local_configuration_manager_version: wmf5

As you can see, you can override the driver’s defaults per platform.

Note: When a platform is named win*, Test-Kitchen assumes Windows if the shell/os_type settings are not specified. Should you omit shell and os_type and name your platform without the win prefix, Test-Kitchen will fail.


Finally, the suites section lists the test suites you would like to run, with their order and target platforms.
We’re keeping it simple for now, so we’re just using a default suite without parameters.

    suites:
      - name: default
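For reference, assembling the sections described above, a minimal but complete .kitchen.yml could look like the following (a sketch using the example values from this post; adapt the paths, switch name and password to your environment):

```yaml
driver:
  name: hyperv
  parent_vhd_folder: C:\src\hyperv\WIN2012r2WMF5
  parent_vhd_name: WIN2012r2_WMF5.vhd
  vm_switch: NAT
  memory_startup_bytes: 1073741824

transport:
  name: winrm
  username: Administrator
  password: P@ssw0rd   # matches the unattend.xml Admin password

provisioner:
  name: dsc
  dsc_local_configuration_manager_version: wmf5
  configuration_script_folder: examples
  configuration_script: dsc_configuration.ps1

verifier:
  name: pester
  test_folder: Tests/Integration

platforms:
  - name: 2012r2_WMF5

suites:
  - name: default
```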


c. Pester tests

Pester is a test framework for PowerShell. Using its DSL coupled with PowerShell’s ability to query a system, you can leverage it to test whether a system is configured as you expect.
In the Test-Kitchen and DSC context, it allows you to validate that the configuration made by DSC after convergence is what you expect.

The test I use for this demo is very basic, as the only goal is to demonstrate how a test pairs with the configuration.
My dsc_configuration.ps1 example is a simple Script resource that will simply create a file C:\winops.txt on the tested platform.
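The configuration and test pair could look something like this (a sketch consistent with that description; the exact contents of the repo’s files may differ):

```powershell
# dsc_configuration.ps1 - a Script resource creating C:\winops.txt
Configuration WinOpsDemo {
    Import-DscResource -ModuleName PSDesiredStateConfiguration
    Node 'localhost' {
        Script WinOpsFile {
            GetScript  = { @{ Result = (Test-Path -Path 'C:\winops.txt') } }
            TestScript = { Test-Path -Path 'C:\winops.txt' }
            SetScript  = { Set-Content -Path 'C:\winops.txt' -Value 'WinOps!' }
        }
    }
}

# Tests/Integration/winops.tests.ps1 - the matching Pester test
Describe 'WinOps demo configuration' {
    It 'creates C:\winops.txt' {
        Get-Item -Path 'C:\winops.txt' | Should Not BeNullOrEmpty
    }
}
```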

The test above expects to resolve C:\WinOps.txt and return the object.
As we start from a ‘clean’ image every time, should the DSC configuration not work, this test would fail.

We’ll dig further into how that workflow works end to end once we’re set up.

Kitchen + Pester + DSC: Putting it together

I’ve spent a long time on the purpose, explaining the workflow and the components, because I believe it’s something people miss far too often when experimenting with a new tool. Now that we have the background and all the pieces, let’s connect the dots with an example.

a. Testing the setup

Assuming you’ve cloned my example, the first step is to adapt the configuration file to match your environment:
specifically, you need to update your VHD details, unattendxml.iso, vm_switch and IP addressing.
Don’t forget to change the parent_vhd in the platforms section.

When this is done, you can try a kitchen converge. If you see a VM being created and started, you should be good to carry on.

If you did not change dsc_configuration.ps1, the converge should be successful; make sure you test before making further configuration changes. If it is successful, commit and push your changes to your git repo.

Now that you’ve configured your machine, you want to ensure the configuration is what you expect: you will run the test that verifies that the file exists on the system.

Simply run kitchen verify; it will upload and execute the tests on your test VM.

If it worked (no sea of red text in your console), you’re ready to go to the next step.
As you’ve successfully tested that a clean machine could converge to a configuration, and that this configuration does what you expect, you can tear down the VM so that your next iteration starts again from a fresh VM. If you wonder why you should always start from a clean VM, please either watch my session at the WinOps conference London or read my post on Idempotence and Immutability.
Running kitchen destroy will do that for you, destroying only the VM and the diff disk test-kitchen created for you.


Sometimes you want this sequence to happen automatically all the way through, unless something fails along the way. This is what the kitchen test command does for you.
It always starts fresh: deleting previously created VMs, creating new ones, and running converge – verify – destroy.
In contrast, kitchen converge only creates the VM if it does not exist. Calling it multiple times just forces convergence to your configuration, which is very useful when you’re trying things out. When you think you have it, destroy your VM and try to converge from your base, or try a kitchen test, so that you’ll be ready for your next iteration if it works.

Note: Running this on a very raw VM, performing the unattended setup step every time, will add delay to your feedback loop; at the end of the day, if you run your tests many times, it will slow you down noticeably. You should probably look into creating a lighter VM for your tests (reducing footprint, removing services and the GUI, pre-installing Windows…), but regularly test against raw VMs too, so that you know you are not missing steps. One way to do this while saving some disk space is to chain your diff disks (but be careful: changes to parents invalidate children).

Also, if you have lots of CPU-intensive or memory/disk-hungry applications running, this will massively slow your tests down (that’s why the last recording is so loooong).

b. Making your first changes

In the previous step we merely adapted the configuration and ensured it was working in your environment. Now that we’ve validated it works, we can start making changes to actually do some configuration.
But keep the steps small! We’ll start with a simple change to discover the workflow.

Let’s set as our objective to create another file next to the one already created by the configuration. We are not removing the previous configuration yet; we want it to be our ‘scientific control’, so that we can compare the output or logs with what we just ran. Remember that we want to limit the number of variables in our changes.

  1. Because you’re making incremental changes in configuration, you should spin up a new VM that is up to date with your current development: after ensuring previous VMs are destroyed, simply run kitchen converge to create a clean VM and converge it to your current expected state.
  2. Make your change, manually: Create a text file with some content in the right location.
  3. Create the test that validates the file is as you expect.
  4. Make sure your new test fails on a cleanly converged VM (we haven’t changed the configuration yet): kitchen destroy; kitchen verify
  5. Automate your change by editing dsc_configuration.ps1
  6. Make sure your configuration is valid by running kitchen converge
  7. When the converge is successful, run kitchen verify to validate the state against your test
  8. If you’re happy with the result, try an end to end configuration by running kitchen test
  9. If it works, commit your change to SCM.

You’re ready to add a new change, going through the same process.


(Apologies: step 4 passed during the recording because of my Pester tests, but it shouldn’t have… I did not follow my own advice to limit variability for the recording! I’ll release anyway and revisit later.)


I hope I gave enough detail for you to get started with DSC using Test-Kitchen, and provided a workflow that can set you on a solid incremental wheel.

Test-Kitchen and its components like kitchen-dsc and kitchen-hyperv keep improving, so do revisit periodically. If you find a bug, post it in the issues of the GitHub repo, with details to reproduce. It’s actively maintained!

Thanks to Chef for making this tool open source, and to the community for contributing! And special thanks to Steven Murawski for helping me get started!

Let me know if I missed anything… like multiple configuration files, test suites, what happens next in the release pipeline, multiple nodes and so on…


Preparing an Image for DSC development

This is the first article in a series that aims to explore one way of getting started with DSC development, covering tools and workflow.

In very short and abstract terms, the life cycle of a Windows server could look like:

  • Deployment (aka OSD)
  • Configuration
  • Maintenance
  • Decommission

This series focuses on the Configuration part, mainly in its development phase, but also covers the basics of creating a Windows image for experimentation purposes. The idea is to get started experimenting quickly, but I would not use this approach to image creation for something that needs to be maintained.

The Test-Kitchen framework and its different components make it very convenient to get started quickly and easily and to make progress iteratively, but I wanted to start from the very beginning, for those who start from scratch with little automation experience.
Below I explain how someone can get started, by creating a base VHD image that will be used for experimentation.


If you want to understand the ultimate goal, and what DevOps means for some parts of an IT Operations team in a Microsoft environment, I recommend watching Michael Greene’s talk at WinOps London about The Release Pipeline Model, and reading the whitepaper he published in collaboration with Steven Murawski.

For this series, I run my development workflow on my laptop running Windows 10, with WMF 5.1 (I’m on the Insider Fast Ring).
My virtualization platform for now is Hyper-V, because it’s available locally (once you enable the feature). The main advantage besides having it readily available is the very quick feedback loop: you can spin up a clean server, configure it, assert its configuration and destroy it in a very short time. I assume you have a vSwitch configured to provide Internet access and DHCP to your VMs.
I use git for source control.

Below we’ll see how to:

  1. Download Windows 2012 R2 Evaluation VHD
  2. Prepare Windows 2012 R2
    a. Install WMF5
    b. Sysprep
    c. Test our base image (with diff disk VM and Pester)
  3. Automating VM Setup/Install
    a. Unattend.xml
    b. DISM – Modify and Capture wim Image
    c. Windows System Image Manager


Download Windows 2012 R2 Eval vhd

As the goal is to learn as quickly as possible how to deliver some value for a production system, we’ll be using Windows 2012 R2, but the principles are applicable to most versions of Windows since at least 2008 R2.
By default Windows 2012 R2 comes with WMF 3 (which includes PowerShell 3), so we’ll have a look at configuring it quickly to our needs, as you would do for your production image.
There are also a few tweaks that should be done to the image to ease management.

To do our testing and development, let’s download the evaluation VHD from the Microsoft Download Center. Even though the evaluation runs out after 180 days, you’ll be able to recreate one easily.

Prepare Windows 2012 R2

With this VHD, we can create a VM the ‘hard way’: manually, using the GUI, and prepare it. I do it this way so that you can get a feel for the process, but I might show how to do it automatically in another post.


Now that our VM is ready to be customized, we can bring it up to our standard; in this case I’m only interested in installing WMF5 and running Windows Update, so that future deployments take less time.

a. Install WMF5

To install WMF5 without thinking too much about it, Ryan Yates created a sweet little script to manage that for you.
As I expect your VM to have Internet access (through an External vSwitch, or NAT), you can simply run it from his GitHub account.

Or if you want to use his shortened url:
iex (New-Object Net.WebClient).DownloadString('')

This will install the right WMF5 Microsoft update for your system, and restart your machine.
When you’ve done this, you can patch the system and sysprep it to make it ready to be redeployed fresh, but with the latest updates and WMF5.

b. Sysprep

As I tend to re-image my VM every now and then after tweaking it, I like to create a batch/ps1 file under the Sysprep folder that I can just execute when needed.
It’s never big, but it allows for some basic clean-up before sysprep’ing; you’ll see later what for.
For now, I only run this: Sysprep.exe /generalize /oobe /mode:vm
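A minimal version of that prepare script could look like this (a sketch; the registry clean-up relates to the legal-notice tip at the end of this post, and the value names are the standard Windows policy entries):

```powershell
# prepare.ps1 - run from an elevated prompt before re-imaging
# Remove the legal notice so autologon is not blocked on next boot
Remove-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System' `
    -Name 'legalnoticecaption', 'legalnoticetext' -ErrorAction SilentlyContinue

# Generalize the image; /mode:vm skips hardware re-detection for VMs
& "$env:windir\System32\Sysprep\Sysprep.exe" /generalize /oobe /mode:vm
```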


Now we have an up-to-date base image, ready to be automated and do whatever we want with it.
It’s worth taking a copy, just in case, although Test-Kitchen actually makes changes to a diff disk, so there’s little margin for error.

We have not baked any application/binary into it, and I’d recommend not doing so yet.


c. Test base image: Diff VHD and Pester

We have our image and we believe it meets our expectation (WMF5), but we should validate that when we deploy that image to a new VM, it still has the configuration we expect.

One good way to do so is to create a new VM with a copy of the hard disk, start it, and check we have the right version in $PSVersionTable.

A better way would be to take a snapshot of that disk before starting so that we can always revert.

The best way is to create a differencing disk based on our image, so that changes are committed to another disk that we can simply destroy when we don’t need it anymore. No changes will be committed to the base image we created, and we can create several VMs with diff disks off that base (so you can test different scenarios without copying the base image several times).

Another reason to show you this is that it’s what Test-Kitchen (or more accurately, Kitchen-Hyperv) does under the hood.

Create VM with Diff Disk
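With PowerShell, this looks roughly like the following (a sketch; paths, VM name and switch name are examples to adapt):

```powershell
# Create a differencing disk on top of the sysprep'ed base image
New-VHD -Path C:\src\hyperv\test\test-diff.vhd `
        -ParentPath C:\src\hyperv\WIN2012r2WMF5\WIN2012r2_WMF5.vhd -Differencing

# Create a VM using the diff disk, then start it
New-VM -Name 'BaseImageTest' -MemoryStartupBytes 1GB `
       -VHDPath C:\src\hyperv\test\test-diff.vhd -SwitchName 'NAT'
Start-VM -Name 'BaseImageTest'
```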


Create and Run Pester Test

Pester is a Test framework for PowerShell.
On Windows 10, WMF 5 and Pester are bundled with the OS by default, but when you have installed WMF 5 from the MSU, Pester is not present on your system.
WMF 5 includes the PackageManagement module, which is a package management management tool (yes, management × 2, not a typo).
This module lets you interact with the PowerShell Gallery, where Pester is available for download, although it requires NuGet (another provider) under the hood.

Note: You should first bootstrap NuGet; and even after you import the provider, you won’t be able to use it until you restart your PowerShell session:
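The bootstrap could look like this (a sketch):

```powershell
# Bootstrap the NuGet provider used by PackageManagement under the hood
Install-PackageProvider -Name NuGet -Force

# After restarting the PowerShell session, install Pester from the Gallery:
Install-Module -Name Pester -Force
```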

You can now create a quick Pester test that ensures the PSVersion.Major from $PSVersionTable is the one expected.
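A minimal version of that test could be (a sketch, using the Pester v3 syntax of the time):

```powershell
# baseimage.tests.ps1 - validate the base image ships WMF 5 or greater
Describe 'Base image' {
    It 'runs PowerShell 5 or greater' {
        ($PSVersionTable.PSVersion.Major -ge 5) | Should Be $true
    }
}
```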

And run it in ISE with F5 (execute the current file).

Remember to “Never trust a test that you haven’t seen fail“, so at least ensure that the test fails when you change the logic (replace 5 by 6, or -ge by -lt if your image is correct).


Feel free to delete that test VHD and the VM once the test is successful. Otherwise, fix the setup and sysprep again.

You probably noticed that when it boots, you still have to configure some bits and pieces and click a couple of times; we’ll address this next.

Automating VM Setup/Install

a. Unattend.xml – Unattended Windows Install

As you saw in the previous steps, when booting a sysprep’ed image you need to go through the setup again, and some elements of your base image might not be configured to your taste (Server Manager popping up, WinRM not configured, firewall settings, RDP, administrator password not set and so on…).

Although it is possible to bake all your default configuration into your image, you should be very careful. Whenever you make changes to an image, even if it becomes your base, it is no longer your starting point. The most un-configured image you can get is what Microsoft gives you, such as the VHD or, better, the ISO. This is the RAW state of a machine, the absolute starting point.

The set of changes you apply to create your base image from RAW should be documented and ideally automated, because your base is a transitional point between what you get (RAW) and where you want to be (your end state).

The traditional way to automate a Windows installation (or first run/OOBE) is the use of implicit unattend files (or Autounattend.xml) placed in one of the default locations the installation process looks in.

Note: If you have different files in different places, remember the Implicit Answer File Search Order.

In short, if you place an autounattend.xml at the root of a removable media, the setup process will automatically pick it up and apply it. You can do that with a floppy drive, but I prefer using DVD drives, as it’s easier to find support for them (i.e. in virtualization environments) and it’s easier to create an ISO programmatically.

I found this gem ‘hidden’ in the TechNet Gallery that allows you to create an ISO from a given list of files or folders.
I’ve added it to my repo as scripts\IsoFile.ps1, so that the function can be dot sourced and the ISO creation automated.
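Once dot sourced, creating the ISO is a one-liner (a sketch: New-IsoFile is the function from that gallery script; the parameters and file names shown are assumptions to adapt):

```powershell
# Load the function, then wrap the answer file into a data ISO
. .\scripts\IsoFile.ps1
Get-Item .\autounattend.xml | New-IsoFile -Path .\Unattendxml.iso -Media CDR
```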


Once you have your iso, you can create a VM to check that the settings are applied to your taste.

With the new VM booting with the unattend.xml on the removable media, the setup steps should be skipped and the machine configured for you.
I started from Matt Wrock’s unattend.xml, from his post about creating Windows images with Packer.

You can manually edit the file, but when you want to design your own and see what settings are available, you’ll soon need to do it the right way, which I’ll show you next.

b. DISM – modify and capture your image .wim

Here are a couple of tricks that can prove useful when working on images.

Note: WIM files can store several images while reducing their footprint via single-instance storage (data de-duplication). Make sure you refer to the correct one when updating (the Index parameter, usually 1).

Updating an offline image (VHD or WIM)

We’ve seen earlier how to install WMF5 and run Windows update before Sysprep’ing an image to ensure it’s up to date before deployment.
In the case of WIM or VHD(X) Windows images, some of this management can be done without creating and booting a VM, thanks to the DISM cmdlets.
You can apply a patch or update (msu or cab, no exe), add package (appx), or apply unattend.xml file.
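As a sketch of that offline servicing with the DISM cmdlets (image, mount and package paths are placeholders):

```powershell
# Mount image index 1 of the WIM, inject an update and an answer file, then commit the changes.
$mountPath = 'C:\Mount'
Mount-WindowsImage -ImagePath 'C:\Images\install.wim' -Index 1 -Path $mountPath
Add-WindowsPackage -Path $mountPath -PackagePath 'C:\Updates\update.msu'
Use-WindowsUnattend -Path $mountPath -UnattendPath 'C:\Answers\unattend.xml'
Dismount-WindowsImage -Path $mountPath -Save
```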

Note: If you want to reduce the footprint of your image (to speed-up your feedback loop), you can:


Capturing a WIM Image

If you want to deploy your created image through WDS, or if in our case you need to create a catalogue for Windows SIM (System Image Manager), here’s how you extract a WIM image from your VHD (you can also do it on a running machine).
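A sketch of the capture using the storage and DISM cmdlets (paths and image name are placeholders; this assumes the VHD’s largest volume is the Windows one):

```powershell
# Mount the VHD read-only, capture its Windows volume into a WIM, then detach.
$vhdPath = 'C:\Images\base.vhdx'
$disk = Mount-VHD -Path $vhdPath -ReadOnly -Passthru | Get-Disk
$drive = ($disk | Get-Partition | Get-Volume |
    Sort-Object -Property Size -Descending | Select-Object -First 1).DriveLetter
New-WindowsImage -CapturePath "${drive}:\" -ImagePath 'C:\Images\captured.wim' -Name 'BaseImage'
Dismount-VHD -Path $vhdPath
```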

Note: When you download an installation media (the ISO), it contains two WIM files, a boot.wim and an install.wim. The latter is similar to what we’re generating in the capture process.

c. Windows System Image Manager – unattend.xml authoring

When authoring unattend.xml files, tiny changes can be done by hand, but more extensive changes might invalidate the XML, breaking the Sysprep or Setup process.
Using Windows System Image Manager allows easy editing and ensures validity.

The SIM tool needs a catalogue extracted from a WIM Image, here’s how to extract it:
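The catalogue (.clg) itself is generated from within Windows SIM (File > Select Windows Image), but SIM needs write access next to the WIM, so the first step is getting install.wim off the read-only media. A sketch, with placeholder paths:

```powershell
# Copy install.wim off the installation ISO so Windows SIM can build a catalogue next to it.
$isoPath = 'C:\ISOs\WindowsServer2012R2.iso'
Mount-DiskImage -ImagePath $isoPath
$drive = (Get-DiskImage -ImagePath $isoPath | Get-Volume).DriveLetter
Copy-Item -Path "${drive}:\sources\install.wim" -Destination 'C:\Images\install.wim'
Set-ItemProperty -Path 'C:\Images\install.wim' -Name IsReadOnly -Value $false
Dismount-DiskImage -ImagePath $isoPath
```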


You can now edit the Unattend.xml using Windows SIM.


Although the plan is to use DSC for provisioning, the default settings of Windows 2012 R2 do not really facilitate it. For instance, WinRM and the firewall are not configured, and so on…
Most of the configuration can be done in this answer file, but some changes are impractical and need to be scripted.

One way to go about that is to set up autologon temporarily (using LogonCount), and create FirstLogonCommands.
Those commands run as the Administrator, after logon but before the desktop is shown.

You can call this, and point to a command or a script file, that you can for instance bundle in your ISO, or call remotely from a web service:
powershell.exe -ExecutionPolicy Bypass -Command "& { <your cmd here> }"
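In the answer file, that command ends up as a FirstLogonCommands entry under the Microsoft-Windows-Shell-Setup component of the oobeSystem pass; something along these lines (the script path is a placeholder, here pointing at a file bundled on the ISO):

```xml
<FirstLogonCommands>
  <SynchronousCommand wcm:action="add">
    <Order>1</Order>
    <CommandLine>powershell.exe -ExecutionPolicy Bypass -File D:\scripts\bootstrap.ps1</CommandLine>
    <Description>Bootstrap provisioning script</Description>
  </SynchronousCommand>
</FirstLogonCommands>
```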

Note: If you add the legal disclaimer before this step, it will block the automation as someone will have to press OK manually before the autologon kicks in. This is why in my Sysprep prepare script, I delete the Registry entry beforehand.

REG DELETE "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v LegalNoticeCaption /f
REG DELETE "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v LegalNoticeText /f



This should have given you most of the information needed to create your base image and an ISO you can use to bootstrap your Windows Server. Please let me know if I missed something.

I listed a few tips and tricks that I’ve found along the way, and I introduced a few concepts, tools and techniques I will refer back to later.

Wrap .Net method in PowerShell the lazy way

This is a work through of some exploration I implemented in PowerObject and MethodHelpers, the gist shows an example of the latter.

[edit: Code updated and a new gist to illustrate bundling two methods in one cmdlet, should you need to. This will work only if there is no method overload conflict (two methods with the same signature) or if variables have the same name but different types within or across method overloads.]


Some time ago I started implementing log4net for PowerShell for my Log4ps module, and I started to wrap some methods and object creation in cmdlets to make them more PowerShell-y-ish. There are many classes and methods, so that quickly became annoying and repetitive, even though I did not need all of them…
And I’m a bit of a DRY adept, when I can, so I started exploring…

The first helper I wrote was a function similar to New-Object, but a bit more clever based on the type you were giving as a parameter.

The hypothesis was that if you know which type (class) you wanted to instantiate, DotNet is clever enough to already have the metadata about the constructors and the writable properties of the object:

PS C:\> [System.Drawing.Rectangle].GetConstructors() |
    Select-Object Name, @{N='params'; E={($_.GetParameters() | ForEach-Object { "[$($_.ParameterType)] $($_.Name)" }) -join ','}}

Name params 
---- ------ 
.ctor [int] x,[int] y,[int] width,[int] height 
.ctor [System.Drawing.Point] location,[System.Drawing.Size] size

With that information, you could define your different cmdlet signatures, or ParameterSets: you’d have one parameter set with parameters x, y, width and height, and one with location and size.
Keeping the order is important for the next step, and all parameters are mandatory in each constructor/parameterset they’re defined in.

So with only the type, I could find all the metadata I needed to create New-Object parameters specific to that type on the fly, using a DynamicParam { } block.
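To give a feel for what that looks like, here is a stripped-down sketch handling a single constructor only (the real New-PowerObject turns every constructor into its own parameter set; the function name here is just for illustration):

```powershell
function New-Rectangle {
    [CmdletBinding()]
    Param ()
    DynamicParam {
        Add-Type -AssemblyName System.Drawing
        # One dynamic parameter per constructor argument, keeping the original order.
        # Simplification: assumes the (x, y, width, height) constructor is listed first.
        $dictionary = New-Object System.Management.Automation.RuntimeDefinedParameterDictionary
        $position = 0
        foreach ($ctorParam in [System.Drawing.Rectangle].GetConstructors()[0].GetParameters()) {
            $attribute = New-Object System.Management.Automation.ParameterAttribute
            $attribute.Mandatory = $true
            $attribute.Position  = $position++
            $attributeCollection = New-Object 'System.Collections.ObjectModel.Collection[System.Attribute]'
            $attributeCollection.Add($attribute)
            $dictionary.Add($ctorParam.Name, (New-Object System.Management.Automation.RuntimeDefinedParameter(
                $ctorParam.Name, $ctorParam.ParameterType, $attributeCollection)))
        }
        $dictionary
    }
    process {
        # The Position attributes guarantee the order matches the constructor signature.
        New-Object -TypeName System.Drawing.Rectangle -ArgumentList @(
            $PSBoundParameters['x'], $PSBoundParameters['y'],
            $PSBoundParameters['width'], $PSBoundParameters['height'])
    }
}
```

Even though the Param() block is empty, tab completion after `New-Rectangle -` offers x, y, width and height, because the dynamic parameters are resolved at parse time.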

So while I was down there, I thought let’s explore further ‘down the rabbit hole’…

Second assumption was that each type defines properties that can be set once the object is instantiated. Wouldn’t it be cool, if you could add those writable properties as parameters, and in one command instantiate the object and set properties straight after that.

[System.Drawing.Rectangle].GetProperties().Where{$_.CanWrite} | Select-Object Name, PropertyType

Name PropertyType 
---- ------------ 
Location System.Drawing.Point
Size System.Drawing.Size 
X System.Int32 
Y System.Int32 
Width System.Int32 
Height System.Int32

So that’s cool, but those properties are already set by the constructor, so I don’t need to set them again.

Looking at other examples:

[System.Drawing.Bitmap].GetProperties().Where{$_.CanWrite} | Select-Object Name, PropertyType
Name PropertyType 
---- ------------ 
Tag System.Object 
Palette System.Drawing.Imaging.ColorPalette

I found properties that are writable and not set by any constructor, so, doing the same thing as with the constructor parameters, you can dynamically add those as parameters via the DynamicParam{} block, and set those properties after the object is instantiated in the process block.

As we kept the parameters of each parameterset in order (via the Position attribute of the Parameter), we can simply invoke the original New-Object command with the parameters in ArgumentList.

$instanceOfObject = New-Object -TypeName $type.ToString() -ArgumentList $parameters

The whole thing is wrapped up in a function called New-PowerObject; the parameters populate and show in IntelliSense as soon as you have typed the desired class.


Finally, the second helper used similar techniques: find the overload definitions of a public static method, use regex to extract types and names, and return the dynamic parameters to be inserted in the DynamicParam{} block of a wrapper cmdlet. In the process block of the wrapper, you call some code that invokes the right method overload, finding it based on the ParameterSetName in use and passing along the $PSBoundParameters.

The end result allows you to quickly create a wrapper function with very few lines, and the ability to extend on the functionality of that method:

function Resolve-DNSHost {
    Param ()
    DynamicParam {
        Get-DynamicParamForMethod -method ([System.Net.Dns]::Resolve)
    }
    process {
        Invoke-MethodOverloadFromBoundParam -method ([System.Net.Dns]::Resolve) -parameterSet $PSCmdlet.ParameterSetName -Parameters $PSBoundParameters
    }
}


I had that in my repo for a while and used it a bit, and today thought it could be useful to others!

Let me know if you think it’s useful and if you use it!



Reading list

I wanted to write down somewhere which technical books I’ve read, or want to read in the future.

Here’s my list, please comment to suggest some more! A couple of them haven’t been released yet, but I want to grab them either when they’re released, or through MEAP.

Adding a few suggested by BenH following DSCCamp:

That’s all I have in mind at the moment, but my books are all stacked in a corner of the living room, waiting for the bookshelf to be built…

What do you think, anything worth adding?