Getting Started with LogStash on Windows using Docker and nxlog

Summary

Today we’re going to be doing the absolute bare minimum to get up and running using LogStash in an ELK (Elasticsearch, Logstash, Kibana) setup to aggregate Windows event logs.

To do this we’re going to use:

Why nxLog instead of logstash-forwarder? Just simplicity at this stage, we’re doing the bare minimum to get up and running, and an installer and a plethora of predefined Windows nxLog configs made that easy. I hope to do some research into a more in-depth comparison of nxLog vs logstash-forwarder on Windows at some point, but that’s not today!

Setting up Docker & the ELK container

Setup - Boot2Docker for Windows

Install VirtualBox from here.

Install Boot2Docker from here.

Open a new PowerShell window and run

boot2docker up

Then execute the 3 lines of environment variables it prompts you to run
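Exactly what it prints depends on your Boot2Docker version, but on recent builds the three lines look something like this (your IP, username, and certificate path will differ):

$Env:DOCKER_HOST = "tcp://192.168.59.103:2376"
$Env:DOCKER_CERT_PATH = "C:\Users\<you>\.boot2docker\certs\boot2docker-vm"
$Env:DOCKER_TLS_VERIFY = "1"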

boot2docker-up

Now, because the Boot2Docker VM automatically shares C:\Users\ on your host machine as /c/Users/ internally, we're going to create a "logstash" folder in your Documents folder, and a logstash.conf text file within that.
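If you'd rather do that from the same PowerShell window, something like this works (assuming your Documents folder is in the default location):

New-Item -ItemType Directory "$env:USERPROFILE\Documents\logstash" -Force
New-Item -ItemType File "$env:USERPROFILE\Documents\logstash\logstash.conf" -Force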

logstash folder

Populate that config file with the following very simple config:

input {

  udp {
    type => "WindowsLog"   
    codec => json 
    port => 3515
  }

}

output {
  elasticsearch { 
     host => "127.0.0.1"
     cluster => "logstash"
  }
}

Excellent! We can now download and start our container! In the same PowerShell window enter the following (editing line 2 to reflect your documents path):

 
docker run -p 80:80 -p 3515:3515/udp `
      -v /c/Users/smartin/Documents/logstash:/etc/logstash `
      willdurand/elk

docker run
(Yours will look different if you haven't downloaded willdurand/elk previously.)

Excellent! So long as logstash isn’t restarting every few seconds (which suggests you have a syntax error in your config) we can proceed to setup our Windows log agent!
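If you want to double-check, open a second PowerShell window, set the same three Docker environment variables as before, and run something like the following (the container ID placeholder is whatever docker ps reports for the willdurand/elk image):

docker ps
docker logs <container id>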

Setting up NxLog

NXLOG-CE Setup

I chose to install NxLog on my host computer to save spinning up a Windows VM which takes forever. You can install it on a VM if you prefer, just make sure that you give it a NIC which has access to VirtualBox’s Host-Only Adapter, as we’ll be using that to communicate with LogStash!

Before we start, we need the IP of the boot2docker VM. In a PowerShell window in the same security context as the one you ran boot2docker in (i.e. administrative or non-administrative), run the following command:

boot2docker ip

Boot2Docker ip

Now download and install NxLog from here.

Open up C:\Program Files (x86)\nxlog\data\nxlog.conf and enter the following

## This is a sample configuration file. See the nxlog reference manual about the
## configuration options. It should be installed locally and is also available
## online at http://nxlog.org/nxlog-docs/en/nxlog-reference-manual.html
 
## Please set the ROOT to the folder your nxlog was installed into,
## otherwise it will not start.
 
 
#define ROOT C:\Program Files\nxlog
define ROOT C:\Program Files (x86)\nxlog
 
Moduledir %ROOT%\modules
CacheDir %ROOT%\data
Pidfile %ROOT%\data\nxlog.pid
SpoolDir %ROOT%\data
LogFile %ROOT%\data\nxlog.log
LogLevel INFO
 
<Extension charconv>
    Module      xm_charconv
    AutodetectCharsets utf-8, euc-jp, utf-16, utf-32, iso8859-2
</Extension>
 
<Extension json>
    Module  xm_json
</Extension>
 
<Input in>
    Module      im_msvistalog
    Query       <QueryList>\
                    <Query Id="0">\
                        <Select Path="Application">*</Select>\
                        <Select Path="System">*</Select>\
                        <Select Path="Security">*</Select>\
                    </Query>\
                </QueryList>
    Exec    convert_fields("AUTO", "utf-8");

# For windows 2003 and earlier use the following:
#   Module      im_mseventlog
    Exec        to_json();
</Input>
 
<Output out>
    Module      om_udp
    Host        192.168.59.103
    Port        3515
</Output>

<Output file_out_test>
	Module 		om_file
	File		'C:\Program Files (x86)\nxlog\data\nxlogtestout.log'
</Output>
 
<Route 1>
    Path        in => out
</Route>

<Route logstash_debug>
	Path		in => file_out_test
</Route>

Customise the Host value to reflect the IP you got earlier from the boot2docker ip command.

The logstash_debug route and its file output aren't strictly necessary, but they're helpful to confirm that nxlog is actually sending log entries somewhere. Be sure to clean this up before using this anywhere that matters!

Once that’s done, start the service (you’ll need to be in an administrative PowerShell window naturally).

Start-Service nxlog

You should immediately see C:\Program Files (x86)\nxlog\data\nxlogtestout.log be created and start filling up.
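If it doesn't appear, the nxlog service state and nxlog's own log file are the first things to check, for example:

Get-Service nxlog
Get-Content 'C:\Program Files (x86)\nxlog\data\nxlog.log' -Tail 20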

Browsing Kibana

Okay, hopefully all’s gone well so far! Now we can get to the good stuff (in this case, your logs in Kibana).

Copy and paste the IP you retrieved earlier using boot2docker ip into your browser and… voila!

Settings - Kibana 4 - Google Chrome

Select the only time-field name option available to you, click create and you are live.

To see your logs, click the discover button.

Discover - Kibana 4 - Google Chrome

While it’s a relatively daunting interface to start off with, you’ll soon get used to it.

Before we do anything else though, let’s test it out by creating a new log entry in the Application Windows Event Log.

In your PowerShell window execute the following:

New-EventLog -LogName Application -Source "Hello LogStash" -ErrorAction SilentlyContinue
Write-EventLog -LogName Application -Source "Hello LogStash" -EntryType Information -EventID 1 -Message "Don't panic!"

Now go back to your Kibana browser window and enter the following (capitalisation is important):

Message:panic

Then click the search icon!

Discover - Kibana 4 - Google Chrome -  don't panic

Hurrah! We’re aggregating live logs🙂

Now go and figure out how to search Kibana properly with https://www.mjt.me.uk/posts/kibana-101/.

Then go and figure out how to consume other log types.

Further Reading

The above draws heavily from the below articles. All I did was tie them together.

Getting Started with Chef on Windows Server – Part 3a – Packer, Vagrant, and Vagrant Omnibus

Introduction – HashiCorp Atlas

It’s been a fair few months since my last post in this series (or at all for this matter) and I haven’t made anywhere near the progress with Chefifying (definitely a word) my environment as I would like due to new more urgent projects.

Since my last post, HashiCorp has announced Atlas as a tech preview, which (in part) replaces VagrantCloud which I referenced in part 3. HashiCorp are placing Atlas as a solution to a large portion of your DevOps deployment lifecycle.

how-it-works

This is powered by four components:

  • Consul (monitoring & service discovery)
  • Packer (VM template management)
  • Vagrant (Repeatable development environment creation)
  • Terraform (Cloud agnostic infrastructure-as-code)

Atlas looks to be the wrapper that ties all these together to make it easy to manage vagrant boxes, terraform deployments, packer images and consul configurations in a single place as a coherent pipeline.

Terraform looks very interesting to me as it’s placed in a CloudStack style space but looks much lighter weight. But currently we’re only interested in Vagrant and by extension Vagrant boxes and Vagrant Cloud.

So what happened to Vagrant Cloud?

vagrantcloudandatlas

Well the good news is that all existing VagrantCloud Boxes and URLs still work, so in theory Part 3 will still work for everyone!

However, for some reason, when you search Atlas for “Windows” the kensykora/windows_2012_r2_standard box we were using no longer shows up!

windows search on hashicorp atlas

No idea why as it still exists, but anyway this gave me a good excuse to play with Packer!

Rolling your own Windows 2012 R2 Trial Vagrant box with Packer & Joe Fitzgerald

Why? Well, initially I thought kensykora’s Vagrant box had disappeared which was what caused me to look into this in the first place, but there are a few additional reasons:

  1. kensykora’s box took a very long time to download (I imagine because the chap was dutifully hosting it on his own hardware in which case power to him and we shouldn’t complain about conveniences we get for free!)
  2. Packer allows you to roll your own boxes from the Windows Trial ISOs, meaning you’re not dependent on other people keeping their boxes up-to-date with fresh trials
  3. Packer is a useful tool in its own right. It allows you to create your own golden images (even if they’re just Windows with the latest updates installed) and deploy them pretty much anywhere, AWS, VMWare, VirtualBox, Azure, you name it.

Like pretty much every tool in this space it has its origins in the *nix world, but fortunately all the hard work of creating Windows packages has been done for us already by Joe Fitzgerald in his freaking awesome Windows Packer GitHub repo.

There’s a lot of stuff going on with Packer, but we’re not trying to explore Packer in depth just yet. All we’re trying to do is roll a Windows 2012 R2 trial Vagrant box with the minimum amount of fuss.

You will need

  1. VirtualBox
  2. Vagrant
  3. Some knowledge of the above two (the previous two parts of this blog series would be helpful! :))

Step 1) Install Packer

Well… I say install, more like extract and add to your path environment variable.

Download the Windows version from https://packer.io/downloads.html, extract it anywhere you like, and add that path to your PATH environment variable like so:

[Environment]::SetEnvironmentVariable("path", "$($env:path);C:\Users\smartin\Documents\packer", "machine")

Where “C:\Users\smartin\Documents\packer” is the path you extracted packer to.
You could of course set it using the GUI like a normal person if you prefer!
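Whichever way you set it, open a fresh PowerShell window so the updated PATH is picked up, then confirm packer is reachable:

packer version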

Step 2) Download and extract Joe’s Packer-Windows repo

Go to https://github.com/joefitzgerald/packer-windows and hit the “Download Zip” button.

download zip github

Then extract the Zip to wherever you like (though preferably on a hard disk with some space as this directory will be where the ISO is downloaded to AND where the resultant box file is created).

Step 3) Disable Windows Updates, Disable Headless Mode, and Build

Okay you don’t have  to disable Windows updates, in fact, if you’re building a golden image for a production environment you absolutely should not do this step. However, if you’re just trying to get a 2012 R2 Vagrant box built as fast as possible (i.e. in less than an hour!) then follow Joe’s steps to disable Windows Updates, which are:

Open the directory to which you extracted packer-windows-master.

packer-windows-master

Find answer_files\2012_r2\Autounattend.xml and uncomment the section that starts “WITHOUT WINDOWS UPDATES” and comment out the section that says “WITH WINDOWS UPDATES”. It should look like the following screenshot once you’re done:

comment out windows updates

This will cut out a lot of time for the packer build!

Next, open up packer-windows-master\windows_2012_r2.json and change the virtualbox-iso headless value to false. This isn’t strictly necessary, but helps you see what’s going on and appreciate how awesome it is that you don’t have to do all this by hand!

headless false

Save that, open a PowerShell window, cd to your packer-windows-master directory, and run:

packer validate windows_2012_r2.json

then once that returns successfully

packer build windows_2012_r2.json

This will take a while to run as it does the following:

  1. Downloads the Windows 2012 R2 Trial ISO
  2. Installs Windows 2012 R2 Trial to a temporary VirtualBox VM
  3. Packages that VM up into a Vagrant box and dumps it into your packer-windows-master directory

packer build

*snore*

finished building vagrant box

This will throw errors about the VMware application not existing, but that's fine, we don't want a VMware template.

Finished? Then let’s proceed to register it.

Step 4) Register the box with Vagrant

Now, you’ve not had to do this in any of the previous guides because we always used Vagrant Cloud for our Vagrant boxes. But this time we need to tell Vagrant about the new box we’ve just created!

vagrant box add C:\Users\smartin\Downloads\packer-windows-master\windows_2012_r2_virtualbox.box --name "smartin-2012r2"

add vagrant box

The name is completely arbitrary, do change it for your purposes, and the location obviously has to be amended to reflect where your packer-windows-master directory is!
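You can confirm the box is registered with:

vagrant box list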

 Step 5) Install Vagrant Omnibus and Create a Vagrant Project

Vagrant Omnibus is a Vagrant Plugin that ensures Chef is installed on your Vagrant box as part of the provisioning process (much neater than my messy PowerShell script in Part 3).

vagrant plugin install vagrant-omnibus

install vagrant omnibus
With that done you can create a new folder (wherever you like) for your vagrant project. Mine’s called vagrant-base-2012R2 but it really doesn’t matter.

In that new folder, create a file named VagrantFile (no extension) and paste the following into it:

# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  # All Vagrant configuration is done here. The most common configuration
  # options are documented and commented below. For a complete reference,
  # please see the online documentation at vagrantup.com.

  config.vm.provider "virtualbox" do |v|
    v.gui = true
    v.memory = 1024
    v.cpus = 1
  end
  # Every Vagrant virtual environment requires a box to build off of.
  config.vm.box = "smartin-2012r2"
  
  # Install the latest chef
  config.omnibus.chef_version = :latest
   
end

Most of this should be familiar from previous parts, but the important bit to change is the name in config.vm.box as this must match whatever name you passed as part of vagrant box add.

Once that’s created, VAGRANT UP!

vagrant up #1

vagrant up #2

Et Voila! One repeatable build from ISO > Vagrant Box with Chef installed!

Getting Started with Chef on Windows Server – Part 3 – Vagrant, Windows, and Managed Chef

In the previous two parts (Intro and Chef Server & Bootstrapping) we used a plain old VirtualBox VM with Windows 2012 R2 as our Chef client, which required downloading VHDs, registering them as individual VMs and then installing Chef manually. Part 2 even required that you still had your old VM from the first session lying around in order to start where you left off!

This is not very chicken farm of us, and, I’ve since learned, really doing it the hard and old-fashioned way. So what’s the easy way?

Vagrant

Vagrant is a tool for building complete development environments. With an easy-to-use workflow and focus on automation, Vagrant lowers development environment setup time, increases development/production parity, and makes the “works on my machine” excuse a relic of the past.

About Vagrant

For anyone that has familiarity with AWS, I describe Vagrant as (loosely) CloudFormation for VirtualBox (other hypervisors are supported!).

It allows you to easily spin up an environment based on any template found on VagrantCloud.com, bootstrap it, test it, throw it away and start again.

So go and download it, I’m sure you’ve already got VirtualBox installed, but if not, download that too.

vagrantup

Exercise

Prerequisites

  1. Vagrant
  2. Virtualbox
  3. Chef Client/DK
  4. Some awareness of what Chef is
  5. Some familiarity with VirtualBox
  6. Some familiarity with scripting/cmdline

We’re going to use Vagrant to setup a Windows 2012 R2 virtual machine, install Chef client on it, and apply a basic cookbook. Once you’ve done this you’ll have a great platform for creating and testing your own cookbooks without having to manage redeploying VMs manually.

1) Setup Managed Chef

For the purposes of this trial run of Chef inside Vagrant, we’re going to use Managed Chef.

Managed Chef is Chef hosted by OpsCode, sorry Chef (the company), relieving you of the necessity to setup your own server and host it yourself. If you’re interested in setting up your own Chef Server, see Getting Started with Chef on Windows Server – Part 2 – Chef Server & Bootstrapping.

Visit manage.opscode.com and register for a free account (up to 5 nodes).

manage.opscode

Once you’ve signed in, download the starter kit and extract the contents to a new directory called “vagrant-chef-windows” somewhere in your My Documents folder.

Important: It is imperative that you create this folder in your My Documents, or some other subfolder within your user’s home directory. Vagrant, Chef, and other tools which have their roots in Linux, use the current working directory and sometimes the user’s home directory in order to figure out where to look for their configuration files. Always be aware of your CWD when executing Vagrant and Chef commands, as it’s surprisingly important!

Download Starter Kit

chef-repo

Now we’re setup, you’re ready to start with Vagrant!

2) Setup Windows Chef with Vagrant

Windows in Vagrant is pretty tried and tested now it seems. Although support for Windows guests was only officially added in April 2014, it was available as a plugin for quite a while before that.

Nonetheless, the selection of “Boxes” (VM templates) on vagrantcloud.com is pretty limited right now, presumably due to licensing concerns.

vagrantcloudsearch

The most popular Windows 2012 R2 box is currently one provided by OpenTable; however, it seems to have issues with password expiry, so we'll go with the second most popular, the one by kensykora.

If you open up the link to that box, you’ll see a handy command in a textbox, ready for you to copy out.

vagrantcloudcommand

Copy that command, open a new PowerShell window on your computer, create a new folder in your My Documents called “vagrant-chef-windows”, then execute the command:

vagrant init kensykora/windows_2012_r2_standard

vagrant init

This creates a Vagrantfile in the directory in which you've executed the command.

2.1) Setup Initial Vagrant Configuration

Open the Vagrantfile in your favourite text editor, and replace the contents with the following:

VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
	# Every Vagrant virtual environment requires a box to build off of.
	config.vm.box = "kensykora/windows_2012_r2_standard"
	
	# Forward ports
	config.vm.network "forwarded_port", guest: 80, host: 8080
	
	config.vm.provider "virtualbox" do |vb|
		# Don't boot with headless mode
		vb.gui = true
	end

	# Shell Provisioning
	config.vm.provision "shell" do |shell|
		shell.path = "install-chef.ps1"
	end
	
end

The configuration file is Ruby based, and does several things.

  1. Provisions the VM based on kensykora/windows_2012_r2_standard (downloading it if necessary)
  2. Forwards port 80 in the guest machine to port 8080 on your machine (the host)
  3. Pops up a Virtualbox window with the guest’s console for simplicity’s sake
  4. Executes install-chef.ps1 in the guest

Take a few moments to pair up the list above with the lines in the configuration file, once you have, you’ll wonder “where the hell is it getting install-chef.ps1 from?”. At the moment, it isn’t.

2.2) Use PowerShell Bootstrapping to Install Chef

Create a new file in your vagrant-chef-windows directory called install-chef.ps1 and populate it with the following:

$progressPreference = 'silentlyContinue';
$chefInstaller = 'C:\vagrant\chef-windows-11.16.2-1.windows.msi';
$chefInstallerUri = "https://opscode-omnibus-packages.s3.amazonaws.com/windows/2008r2/x86_64/chef-windows-11.16.2-1.windows.msi";
 
if(!(test-path $chefInstaller)){
    Write-Host "$(Get-Date) Downloading Chef...";
    Invoke-WebRequest -Uri $chefInstallerUri -outfile $chefInstaller;
}
 
 
if(!(Test-Path "C:\chef")){
    Write-Host " $(Get-Date) Installing Chef";
    Start-Process -Wait -FilePath 'C:\\Windows\\system32\\msiexec.exe' -ArgumentList @('-i',$chefInstaller,'/quiet','/log','C:\\tmp\\chef-client-install.log')
    Write-Host " $(Get-Date) Installation Complete"
}else{
    Write-Host " $(Get-Date) Chef is already installed!";
}

Ideally, we wouldn’t need to do this as Chef would already be installed in the Box we got from VagrantCloud.com, however, at the time of writing there are no Windows 2012 R2 boxes with Chef pre-installed.
Your folder should now look like this:
folder with install chef.ps1

2.3) Power On – Vagrant Up

Now, ensure you’re in your vagrant-chef-windows folder in the PowerShell console, then execute:

vagrant up

vagrant up #1

It will scurry off, download the kensykora 2012 R2 box (not shown as I already had it), power up a new VM and execute your ps1. Once complete, you should have a VirtualBox console pop up and allow you to sign in (right ctrl + del = Ctrl + Alt + Delete).

Username: Vagrant
Password: vagrant

2012 vagrant VM

If you log in, you'll see C:\chef exists, and if you browse into C:\vagrant, you'll see that the entirety of your vagrant-chef-windows folder is available within the VM!

see c vagrant

This is important because almost all file paths you’ll set in your Vagrantfile configuration will be relative to this directory.

2.4) Setup Vagrant Chef Provisioning Configuration

Now it’s time to actually use Chef. But we’re not going to just open up a PS console inside the VM and run chef-client. Oh no, we’re going to use Vagrant’s chef-client provisioning functionality!

That means that every time we deploy a new VM, our PS1 file will install Chef, then Vagrant will run chef-client for us, with the configuration we’ve defined in the Vagrantfile.

Add the following lines to the end of your Vagrantfile (but before the final "end").

	# Chef Provisioning
	config.vm.provision "chef_client" do |chef|
		chef.chef_server_url = "https://api.opscode.com/organizations/orgname"
		chef.node_name = "node20141019"
		chef.validation_client_name = "orgname-validator"
		chef.validation_key_path = "chef-repo\\.chef\\orgname-validator.pem"
		chef.add_recipe "learn_chef_iis"
	end

You will, of course, need to replace orgname with your organisation name wherever it appears, and amend the node_name if you like.

Your Vagrantfile should now look like this:

Final Vagrantfile

This code uses the Chef Client we’ve already installed and the orgname-validator.pem which came with our Starter Kit in order to add this guest as a node to our managed Chef environment.

2.5) Upload the Cookbook

But wait, we haven’t got the cookbook learn_chef_iis (a simple Windows/IIS example used by the learnchef.com/windows walkthroughs)! CD into your chef-repo directory and execute:

knife cookbook site download learn_chef_iis

download learn_chef_iis

Now extract the resulting tar.gz into your cookbooks subdir.

learn_chef_iis extracted

And finally, upload it to your managed Chef environment.

 knife upload learn_chef_iis

knife upload learn_chef_iis

2.6) Vagrant Provision

Excellent! The cookbook’s ready to go. Now CD up a level into your vagrant directory and run:

vagrant provision

vagrant provision

Vagrant has now kicked off a chef-client run with the learn_chef_iis cookbook as its runlist. Once it’s finished (and in combination with the forwarded port we setup earlier) you should now be able to open your favourite browser on your host machine and go to http://localhost:8080 and see…

localhost8080

Voila!

You’re seeing the results of the IIS webserver that Chef configured in the VirtualBox that Vagrant deployed and bootstrapped for you! *Phew*

2.7) Redeploy from Scratch

Now for the moment of truth. Delete the node from the managed Chef environment, destroy the VM and redeploy a fresh one based on the configuration we’ve provided!

delete node

vagrant destroy -f
vagrant up

vagrant destroy

Wait a little while for Vagrant and Chef to finish doing their thing and you should be able to go back to localhost:8080 again and see exactly the same thing on a fresh VM!

You can use this environment to test the custom cookbooks we created in Part 1, but I’ll leave that to you to figure out in combination with what we’ve done today!

Further Reading

Chef Manage

Vagrant Chef Client Provisioner

Vagrant Getting Started

Vagrant Cloud

Enforcing AWS Multi-Factor Authentication with IAM, PowerShell and PRTG

Introduction: MFA

Multi-Factor Authentication as utilised by AWS uses a TOTP (Time-based One-Time Password) setup with either a hardware or 'virtual' MFA device. The virtual device is the most commonly used, allowing you to use applications like Google Auth on your smartphone to generate passwords that are only viable for a short window (roughly 30 seconds).

This means that if you have MFA enabled, even if someone has your password, so long as they don’t also have access to your (hardware or virtual) MFA device, they’re unable to login to your account.

Introduction: AWS MFA

MFA as utilised by AWS is pretty straightforward to set up: scan a QR code, type in a couple of PINs, job done. So long as you have the right permissions.

In order to allow your IAM users to even set up their MFA device you need to set a policy against their user (preferably indirectly, using a group). Something like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowUsersToCreateDeleteTheirOwnVirtualMFADevices",
      "Effect": "Allow",
      "Action": ["iam:*VirtualMFADevice"],
      "Resource": ["arn:aws:iam::123456789012:mfa/${aws:username}"]
    },
    {
      "Sid": "AllowUsersToEnableSyncDisableTheirOwnMFADevices",
      "Effect": "Allow",
      "Action": [
        "iam:DeactivateMFADevice",
        "iam:EnableMFADevice",
        "iam:ListMFADevices",
        "iam:ResyncMFADevice"
      ],
      "Resource": ["arn:aws:iam::123456789012:user/${aws:username}"]
    },
    {
      "Sid": "AllowUsersToListVirtualMFADevices",
      "Effect": "Allow",
      "Action": ["iam:ListVirtualMFADevices"],
      "Resource": ["arn:aws:iam::123456789012:mfa/*"]
    },
    {
      "Sid": "AllowUsersToListUsersInConsole",
      "Effect": "Allow",
      "Action": ["iam:ListUsers"],
      "Resource": ["arn:aws:iam::123456789012:user/*"]
    }
  ]
}

Where 123456789012 is your AWS account ID.

Okay, so far so good. Your AWS users can set up their own MFA devices. But currently whatever other privileges you've given them are usable even if they haven't set up an MFA device for their account, meaning their account is a security vulnerability. Best put paid to that!

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*",
      "Condition":
      {
          "Null":{"aws:MultiFactorAuthAge":"false"}
      }
    }
  ]
}

Now we’re giving the user full access to everything but only if they have authenticated with MFA. So if they login with just a password and try to access, e.g. EC2, they’ll get a big fat access denied.

accessdeniedwithoutmfa

Great! So they go and setup their MFA device, logout, login again with MFA.

loginwithmfa

And voila! Access allowed.

accessallowedwithmfa

Which is great! Really secure; you can't get in under that policy without using MFA.

But what if someone sets up another policy (which itself is lovely and granular, preserves the principle of least privilege) but forgets the MFA constraint? When you get into more numerous and complicated policies attached variously to groups, users, etc. it becomes cumbersome to audit them all for compliance even with automation.

Further, what happens when someone gets woken up on call, forgets all about MFA for this particular AWS account (which may well be one of a dozen or so they're involved with), then gets access denied when they try to log in? Will they know to set up MFA? Or will they wake someone else up to give them "the right access" to the system?

In any case, until AWS allows MFA to be part of the 'password policy' and prompts you to set it up as soon as you log in for the first time (and even potentially afterwards, depending on how paranoid you are), there's a need to ensure all your users have MFA set up from the get-go.

The Monitoring

I have the pleasure of using PRTG for monitoring. A capable little tool, but the following code can be adapted for any tool running on Windows.

[CmdletBinding()]
Param(
    [parameter(Mandatory=$true)]
    [string]$accessKey,
    [parameter(Mandatory=$true)]
    [string]$secretKey
)

# Grab the current working directory of the script for the purposes of loading the DLL
$scriptWorkingDirectory = Split-Path -Path $MyInvocation.MyCommand.Definition -Parent

# Ensure you use the .NET 4.5 DLL not the .NET 3.5 DLL from the AWS .NET SDK
# Load AWS API DLL
$AWSAPIFiles = @(
    "$scriptWorkingDirectory\AWSSDK.dll"
)
foreach($apiFile in $AWSAPIFiles){
    
    # Try loading the DLL
    Write-Verbose "Loading $apiFile";
    try{
        $fileStream = ([System.IO.FileInfo] (Get-Item $apiFile)).OpenRead();
    }catch{
        Write-Error $_.exception.message;
        Exit 1;
    }
    
    # Read the contents of the DLL
    $assemblyBytes = New-Object byte[] $fileStream.Length
    $fileStream.Read($assemblyBytes, 0, $fileStream.Length) | out-null;
    $var= $fileStream.Close()

    # Load the library 
    [System.Reflection.Assembly]::Load($assemblyBytes) | out-null;
}

# Set the AWS Access Key and Secret Key for authentication using the .NET SDK
[System.Configuration.ConfigurationManager]::AppSettings["AWSAccessKey"] = $accessKey
[System.Configuration.ConfigurationManager]::AppSettings["AWSSecretKey"] = $secretKey

# Connect to the AWS API
Write-Verbose "Connecting to AWS API";
$client= New-Object -TypeName Amazon.IdentityManagement.AmazonIdentityManagementServiceClient;

# Fetch the list of users that have passwords but not MFA
Write-Verbose "Fetch users that have passwords, but no MFA";
$mfadevices = @()
$usersWithoutMFA = $client.listUsers().ListUsersResult.Users | ?{
        
        # Ensure the user has a password (if they only have a secret key, they don't need MFA)
        try{
            $client.GetLoginProfile($_.username) | Out-Null;
        }catch{
            return $false;
        }
        
        # Return false if they don't have MFA (otherwise we don't care about them as they're doing the right thing!)
        return !$client.ListMFADevices($_.username).MFADevices;
    }

# Output to PRTG
Write-Verbose "Output in a PRTG friendly format (XML)";
Write-Host "
<prtg>
	<result>
		<channel>Number of users without MFA devices registered</channel>
        <value>$(($usersWithoutMFA | Measure-Object).count)</value>
    </result>
    <Text>$(($usersWithoutMFA | select -expandProperty  "Username") -join "; ")</Text>
</prtg>";

# Return success exit code
exit 0;

In order to execute this you need the following pre-requisites:

  1. The .NET 4.5 AWSSDK.dll from the AWS .NET developer’s SDK must be housed in the same directory as the .ps1
  2. PowerShell 4.0 or higher must be installed on the PRTG Probe
  3. .NET 4.5 must be installed on the PRTG probe executing the custom sensor
  4. A user with at least the following privileges in AWS:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1410864868000",
      "Effect": "Allow",
      "Action": [
        "iam:ListUsers",
        "iam:ListMFADevices",
        "iam:GetLoginProfile"
      ],
      "Resource": [
        "arn:aws:iam::123456789012:*"
      ]
    }
  ]
}

Where, again 123456789012 is replaced with your account ID.

In order to get the .NET 4.5 AWSSDK.dll from the AWS .NET developer’s SDK just install the SDK on your machine, then copy AWSSDK.dll from C:\Program Files (x86)\AWS SDK for .NET\bin\Net45 to the directory your script lives in.

This directory should be under your PRTG probe’s Custom Sensors\ExeXML\ directory.
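For example, assuming a default PRTG install path (adjust both paths to suit your environment):

Copy-Item 'C:\Program Files (x86)\AWS SDK for .NET\bin\Net45\AWSSDK.dll' 'C:\Program Files (x86)\PRTG Network Monitor\Custom Sensors\EXEXML\'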

Once you’ve done that, you can create a Script/Exe custom sensor in PRTG pointing at your new .ps1 file like so:

PRTGsensorMFA

Setting the arguments to reflect the access and secret keys of the AWS user you created earlier.
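The sensor's parameters field ends up looking something like this (the keys below are AWS's documented example values, not real credentials):

-accessKey 'AKIAIOSFODNN7EXAMPLE' -secretKey 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'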

Once that’s done, you’ll have a sensor that shows the names of the users in your AWS account that have a password, but no MFA device. Great! But how do we alert on that? As when that devices goes to an error state, the message will be replaced with an error message!

No problem, just create a factory sensor that references the first sensor, then create a threshold on the channel.

Create Sensor > Factory Sensor > Properties

#<factory sensor channel ID>:<factory sensor name>
Channel(<custom sensor id>,<custom sensor channel>)
#1:Users without MFA on AWS
Channel(10101,2)

Then set the threshold against the channel like so:
mfachannelthreshold
Voila! You will be alerted whenever you have a user that has a password, but no MFA device associated!

How do you handle this issue in your environment? Any suggestions on how to do this better? Please let me know in the comments!

Further Reading

StackOverflow – Can you require MFA for AWS IAM accounts?

AWS Docs – Configuring and Managing a Virtual MFA Device for Your AWS Account (AWS Management Console)

JeffW@AWS – Allow your user to self-manage a virtual MFA

Getting Started with DSC and PowerShell 5.0 – Part 1 – Installing WordPress with Desired State Configuration

So we’ve checked out the basics of Chef on Windows in Part 1 and Part 2 of Chef On Windows, and with the recent release of the Windows Management Framework 5.0 Preview September 2014  I thought it was time to stick a toe into the water of the Desired State Configuration side of configuration management on Windows.
As quite a lot of intros focus very heavily on the theory and don't necessarily show a lot of results up front, I'm going to continue the precedent of the previous Chef articles and show you the shortest path to something tangible, hopefully gaining some familiarity with the tech involved along the way.

In Part 1 we’re going to use the WMF 5.0 preview, DSC, and a little bit of OneGet/PowerShellGet (name seems to be up for discussion the moment), to install WordPress 4.0 on to a blank VM. In order to do this we’re going to follow the guide laid out in the quick-start of the WordPress PowerShell/DSC module, so all credit goes to the wonderful people who created this module for providing our first entry point into DSC!

Important: You don’t need WMF 5.0 to use DSC, it’s been around since PS 4.0, but the WordPress PowerShell/DSC module we’ll be using requires WMF 5.0 for OneGet.

Important #2: This guide uses a WMF 5.0 preview and DSC modules that are labelled x for eXperimental, don’t use these in production🙂

Requirements

  1. Blank Windows 2012 R2 VM 
  2. Powershell Understanding – Basic: Microsoft Virtual Academy – Getting Started With PowerShell

We won’t need the VMs we created in the Chef series as we’ll be focussing on just DSC for today.

1) Preparing the VM

As the  WordPress PowerShell/DSC module we’ll be using requires WMF 5.0 for OneGet, we need to go and grab the September 2014 Preview!

Download WMF 5.0 to your 2012 VM from http://www.microsoft.com/en-us/download/details.aspx?id=44070

WMF 5.0 September 2014 Preview

Now we need to install the xWordPress module and its dependencies.

Whoa whoa whoa, don’t download it from the link! What is this, the 90s? We’ve just  installed PowerShell 5.0 and with it, OneGet, let’s use it!

Open up a PowerShell console and run


Install-Module xWebAdministration -MinimumVersion 1.3.2 -Force

and accept the offer to download NuGet_anycpu.exe.

install-module xWebAdministration

Now install the remaining modules.


Install-Module xPSDesiredStateConfiguration -MinimumVersion 3.0.1 -Force

Install-Module xMySql -MinimumVersion 1.0 -Force

Install-Module xWordPress -MinimumVersion 1.0 -Force

Install-Module xPhp -MinimumVersion 1.0.1 -Force

Excellent! Okay, where did they go?

$env:ProgramFiles\WindowsPowerShell\Modules folder

Program Files WindowsPowerShell Modules

Awesome! Since when has that been a thing? Since WMF 5.0, I assume, but I'm not sure. Getting modules to load automatically has always been a bit of a per-user PITA in the past, so if this is a user-agnostic way of installing PowerShell modules, it's only a good thing!

2) Prepare the Configuration

Now we need to grab the sample files from the xWordPress module and customise them to our needs.

Copy the contents of C:\Program Files\WindowsPowerShell\Modules\xWordPress\samples to your Documents folder

samples in my documents

Open up SingleNodeEndToEndWordPress.ps1 in the PowerShell ISE and check that the Download URLs are still correct for PHP and MySQL.

MySQL and PHP URLs

I only had to change PHP to http://windows.php.net/downloads/releases/archives/php-5.5.14-nts-Win32-VC11-x64.zip, but double check MySQL as well, as it may have changed by the time you read this!

3) Executing the Configuration

Go back to your PowerShell window, cd into your documents folder and execute SingleNodeEndToEndWordPress.ps1.
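For example, assuming you copied the samples straight into your Documents folder:

cd $env:USERPROFILE\Documents
.\SingleNodeEndToEndWordPress.ps1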

This will perform the following tasks (at least):

  1. Install IIS
  2. Install PHP and dependencies
  3. Install MySQL
  4. Install WordPress into IIS with * port 80 HTTP bindings.

 SingleNodeEndToEndWordPress

After some time, your system will restart to complete the installation.

DSC is Restarting the computer

Once it’s restarted, DSC will continue to configure the computer, to see the progress, go to the DSC event log.  (Event Viewer > Applications and Services Log > Microsoft > Windows > Desired State Configuration > Operational)DSC Event Log

Once you see the warning "The local configuration manager was shut down", your new WordPress site should be ready! Check out localhost in IE!

WordPress 4.0 default

Ooh, this is the first time I’ve seen WordPress 4.0 default installation! First impressions are very monochrome, but eh, that’s what themes are for!

Summary

So what have we achieved here?

We’ve used community provided modules for DSC/PowerShell to install WordPress and all its dependencies, including IIS, PHP, and MySQL.

Was this easier than doing all the work ourselves, clicking through installers and typing out config ourselves? Much!

Does it mean we no longer need Chef and all that work we did in the past couple of posts was unnecessary? Not at all!

Does this illustrate the power and flexibility of DSC and OneGet? No, we’re just getting started!

I’ll be writing a subsequent post to dig in and write our own DSC module/template/whatever-the-correct-nomenclature–is but I suspect that bringing what we’ve learned today into Chef with Chef’s new DSC evaluation release recipes will be the post immediately following this one.

Further Reading

Steven Murawski

Everything Else

Getting Started with Chef on Windows Server – Part 2 – Chef Server & Bootstrapping

Now that we’ve done Part 1 – Configure a Package & Service, we can start getting a little more into the meat of Chef: centralisation. In the previous scenario we had defined a single recipe and applied it locally. Very simple, not very useful. In this part, we’re going to create a Chef Server, upload the recipe we created in the previous part to it, and then bootstrap another VM using it.

This is a relatively long winded setup, and if you’re itching to get started I highly recommend running through the LearnChef.com Redhat Enterprise Linux tutorial which even provides you with the VMs and hosted Chef Server, which will get your feet wet and started on the road to Chef. If, however, you’re interested in getting slightly deeper into Chef, step right this way.

Requirements

  1. Ubuntu Server VM
  2. IMPORTANT: The 2012 R2 VM you made in the previous part of this series
  3. Powershell Understanding – Basic: Microsoft Virtual Academy – Getting Started With PowerShell
  4. Basic understanding of what Chef is (ideal, but not required)
  5. Basic Linux knowledge

1) Setting up the Chef Server

“Wait, what? Ubuntu Server? What happened to the “On Windows” part of this? I thought that was the whole point!”

Unfortunately at the time of writing Chef Server is only available on Linux. So in order to manage our Windows servers we’re going to need an Ubuntu Server VM on which we can install Chef Server. Don’t worry, Chef Server isn’t really the focus here, we just need it for configuration centralisation and user management.

There are a couple of alternatives to self-hosting a Chef Server including: Opscode Hosted Chef, and OpsWorks (and probably others). The former looks pretty sexy if you’ve got the cash to splash, and their free trial is crucial in the learnchef.com examples which we’re blatantly ripping off.

  1. Spin up an Ubuntu Server instance, make sure it has its own IP, can talk to our old 2012 VM, and access to the internet
  2. Visit http://www.getchef.com/chef/install, ensure you click "Open Source Chef Server 11" and select the latest version of Chef, then copy the URL the link provides you with into Notepad.
  3. In your Ubuntu Server VM enter:
    wget [downloadURL]

    e.g.

    wget https://opscode-omnibus-packages.s3.amazonaws.com/ubuntu/12.04/x86_64/chef-server_11.1.3-1_amd64.deb

    This will download the chef-server installation file to your current directory.

  4. Once that’s complete, execute the installer using:
    sudo dpkg -i chef-server*.deb
  5. Set up Chef Server using the following command (you won't be asked for any details):
    sudo chef-server-ctl reconfigure
  6. Once the configuration is complete, you're done! You can visit the server in your browser at https://<ip of your ubuntu server>

2) Log on to the Chef Server and Download Credentials

You will need the following private keys in order to set up the workstation we previously created on our 2012 VM to talk to our new Chef Server.

  1. An administrative user (in our case, admin)
  2. A validator user (in our case, chef-validator)

To get these credentials, login to your new Chef Server (https://<ip of ubuntu server>) using the default credentials:

Username: admin
Password: p@ssw0rd1

Note the lowercase p in the password, this is not an MS educational sample!

chef-server

You will be immediately prompted to save the 'admin' user's private key; save this to your desktop as chef-admin.pem.

private-key

Now navigate to

Clients > chef-validator > Edit > Regenerate Private Key

chef-validator

To download the validator’s private key. Save it into a text file called chef-validator.pem on your desktop.

3) Setup the Development Kit in your 2012 VM to Talk to Chef Server

Now we’re going to highlight a distinction that we did not draw in our previous article (mostly because I didn’t really know it existed). That is the difference between a Workstation and a Chef Client.

You’ll remember that we installed both the Development Kit and the Chef Client on to our VM previously, well, as you might imagine, the devkit isn’t something you need on every server, as it is that which we were using to create our recipes and templates. The Development Kit is something you’d (I’d guess) install on a bastion server or RD Gateway allowing you to author your recipes and then upload them to your Chef Server to be deployed elsewhere.

One of the big advantages of configuration management is the fact that you can version control your configuration, and to this end we're going to place our existing recipes into a repository based on the GitHub Chef repo. Why exactly the repo needs to be based on the full Chef repo from OpsCode I'm not sure, but I'm not inclined to contest the official documentation!

On your 2012 VM from the previous article, download GIT from http://www.git-scm.com/download/win ensuring you tick “Use GIT from the Windows Command Prompt” when asked.

git install

Once installed, open PowerShell, cd into your Documents folder, and run:

git clone git://github.com/opscode/chef-repo.git

clone-opscode-chef-repo

This will pull down the latest copy of the Chef repo from Github and form the basis of our new working directory.

Once complete, create a folder inside the new ‘chef-repo’ folder called .chef  (you’ll probably need to use mkdir as the Windows UI won’t let you create a folder starting with a ‘.’) and copy the two pem files you downloaded from the Chef server earlier into it:

.chef

Because these files are secret, we don’t want to sync them with our source repo, so open up .gitignore and check that the .chef folder is already ignored.

Important: Because I didn’t have a domain available to me, I lacked the FQDNs required for communication with the Chef Server. To workaround this for my test environment. I simply added an entry to the hostfile on my 2012 VM with the IP of the Chef Server and named it chef-server.fakedomain, which worked fine. (You will also need to do this on the machine you’re bootstrapping later.)

Now we can configure Knife to talk to our new Chef Server by running

knife configure --initial

Which will prompt for the following info:

Location of Config File: <accept default>
Chef Server URL: https://<ubuntu server IP>
Name for New User: w2k12a
Existing Admin Name: admin
Location of Existing Admin’s Private Key: C:\users\<yourname>\documents\chef-repo\.chef\chef-admin.pem
Validation Client Name: chef-validator
Location of Validation Key: C:\users\<yourname>\documents\chef-repo\.chef\chef-validator.pem
Path to Chef Repo: C:\Users\<yourname>\Documents\chef-repo\
Password for New User: <your choice>

knife configure --initial

And you’re done! Your workstation is now setup to talk to your Chef Server. Next we need to upload the recipe we created previously and bootstrap an unwitting victim server.

4) Upload Recipe & Bootstrap a New Server

In the previous article we created a basic recipe which installed IIS and amended the default.htm to say “Hello World!”, which is perfect for an illustration of how to take a completely blank server and bootstrap it with a specific recipe.

Upload Recipe to Chef Server

Now that your workstation (old 2012 VM) is setup to talk to our Chef server, we can upload the ‘webserver’ recipe we created locally last time.

Copy the “webserver” directory from C:\chef\cookbooks into the repo we just created C:\users\<yourname>\documents\chef-repo\cookbooks\ (if the cookbooks subfolder doesn’t exist, create it).

On the same server, run:

knife cookbook upload webserver

knife cookbook upload webserver

Bam, simple as that! Your webserver recipe is now available to any server configured to talk to our Chef server.

Bootstrap a New Server

Go off and spin yourself up a new 2012 server, I’ll wait.

Once you’re done, we’ll need to

  1. Add your chef server’s FQDN (e.g. chef-server.fakedomain) to the new server’s host file if like me you didn’t have a DNS server to hand.
  2. Enable Windows Remote Management on the new server
  3. Install a plugin for Knife on our workstation (the old VM).

Enable Windows Remote Management

On your fresh 2012 server run the following to allow remote access and set the recommended remoting settings from Chef (I neglected the MaxMemoryPerShellMB setting because W2012’s is higher than 300MB already).

Enable-PSRemoting -force
Set-Item WSMan:\localhost\MaxTimeoutms 1800000
Set-Item WSMan:\localhost\Service\AllowRemoteAccess $true
Set-Item WSMan:\localhost\Service\Auth\Basic $true
Set-item WSMan:\localhost\Service\AllowUnencrypted $true

Why Enable-PSRemoting and not  Set-WSManQuickConfig? Simply because using Invoke-Command from the workstation to the Chef Client is an easy way to troubleshoot connectivity issues.

Important: Do not copy this and use it in your production environment! Use it for testing and PoC and take the time to use proper encrypted auth in your production environment.

On your workstation (old 2012 server) run the following to allow the server to reach out and remote on to the server we’re going to bootstrap.

Set-Item wsman:\localhost\Client\TrustedHosts -value *

Important: Again, don’t copy this straight into production, use a value like *.contoso.com to allow your AD domain’s computers only.

Install Knife-Windows and Bootstrap Server

Hop back onto your old 2012 VM (the one we configured as a workstation with Chef DK) and run the following:

gem install knife-windows

This will call out and download the knife-windows plugin which allows bootstrapping via WinRM instead of the default SSH.

Once that’s done, it’s one simple command (well, kinda) to call out and install Chef Client and execute your Recipe on your new VM!

knife bootstrap windows winrm [new 2012 server ip] -x [windows admin username] -P [password] --node-name node1 --run-list 'recipe[webserver]' -V

(I’ve included -V for verbose because this took nearly ten minutes on my ageing-laptop-powered VMs and wanted some feedback during.)

start bootstrap

Some time later…

finish bootstrap

Knife has now reached out to your blank 2012 VM, downloaded the MSI for Chef Client, installed it, and applied your ‘webserver’ recipe, which in turn installed IIS and populated Default.htm.

Did it work? The moment of truth… put http://<ip of your new server> into your browser

it worked!

Holy crap it actually worked!

Synopsis

So what have we actually achieved here? We’ve taken a recipe for installing IIS and an extremely basic custom website that was previously only applicable locally, and uploaded it to our own locally hosted Chef Server, allowing us to execute it remotely even when Chef isn’t already installed.

We’ve only scratched the surface of Chef here, and there are loads of questions to ask and answer, e.g.:

  1. How does Chef benefit from Desired State Configuration?
  2. How do I define per-server or per-environment settings like connection strings?
  3. How do I manage databases?
  4. How do I manage service account credentials?
  5. How do I deal with my existing executable installers?
  6. How do I manage upgrades?

And so on ad infinitum. Some of these may be answered in upcoming posts about OneGet and Desired State Configuration, others may be the subject of a further introduction-to-concepts blog post, depending on how well I get on with Chef. All are, I’m sure, answerable with appropriate research though. If you know of any useful conceptual introductions on Chef, please share them in the comments!

Further Reading

Install the Server on a Virtual Machine

How to Install a Chef Server, Workstation, and Client on Ubuntu VPS Instances

Managed Reference for WinRM Windows PowerShell Command Classes

Enable and Use Remote Commands in Windows PowerShell

Getting Started with Chef on Windows Server – Part 1 Intro

I’ve never had the opportunity to work with configuration management software, but a recent project has pushed me over the edge from “Wow, that sounds really cool in theory!” to “Well, I’d better get my feet wet!”.

As the learnchef.com’s Windows page is currently under constructionUnder construction, I thought I’d write my efforts up to help anyone who might also be getting their feet wet for the first time in the configuration management space using Chef on Windows.

IMPORTANT: As I’m writing these posts while going along, it’s not to say that any of what’s reported adheres to Chef’s best practices. So if you notice any glaring errors, please say so in the comments!

In this series I intend to explore what I understand to be the glorious trifecta of configuration management on Windows:

  1. Chef: Part 1, Part 2, Part 3
  2. Windows Desired State Configuration: Part 1
  3. Oneget

At the start of this series we will have a very rudimentary/non-existent understanding of the three elements above, and will work through each individually, then tie them together (if possible).

This first post will be dedicated to an introduction to Chef on Windows.

Chef – Configuring a Package and a Service

About Chef

Although LearnChef’s Windows page is under construction, they still have a fantastic introduction on RHEL (Redhat Enterprise Linux) which even provides you with a preconfigured VM! I would highly recommend running through this just to get a basic intuitive feel for Chef if you’re on the fence and not sure if you can be bothered to spin up your own 2012 VM and install things yourself.

Steven Murawski has a good blog post, "Is the Chef Learning Curve Worth it?", which, while obviously a little biased as he's now a community manager at Chef, gives a good overview of why you would use Chef on Windows and answers some of the main questions surrounding Chef on Windows.

Pre-requisites

The following steps will require:

  1. Windows 2012 R2 (in theory this should work on 2008 R2+ so long as you have PowerShell 4.0, but I haven’t tested it)
  2. Powershell Understanding – Basic: Microsoft Virtual Academy – Getting Started With PowerShell
  3. Basic understanding of what Chef is (ideal, but not required).

Steps

We’re going to pretty much steal the exact steps from the RHEL Configure a Package and a Service lesson, mix it with the legacy Windows tutorial, and see what happens!

1) Install Chef & Chef Development Kit

Install the Chef Client and the Chef Development Kit on your 2012 R2 VM.

2) Generate a Cookbook

We’re going to create a cookbook that installs IIS and generates a custom Default.htm to display.

The working directory for Chef in Windows looks to be C:\Chef by default, so

cd c:\chef\cookbooks
chef generate cookbook webserver

chef generate cookbook webserver

This will generate the structure and default files for a cookbook named "webserver".

3) Configure the Default Resource File

Now we need to write the Ruby that will define the following:

  1. Install IIS
  2. Start IIS
  3. Populate Default.htm with our message

To do so we’ll edit default.rb in the recipes directory of the webserver cookbook.

Notepad C:\chef\cookbooks\webserver\recipes\default.rb

Then define the following in the file. EDIT: Amended thanks to @cjeffblaine!

powershell_script 'Install IIS' do
 action :run
 code 'add-windowsfeature Web-Server'
end

service 'w3svc' do
 action [ :enable, :start ]
end

template 'c:\inetpub\wwwroot\Default.htm' do
 source 'Default.htm.erb'
 rights :read, 'Everyone'
end

This will execute Add-WindowsFeature Web-Server in a PowerShell context (installing IIS if necessary), then start IIS, and copy the contents of Default.htm.erb to C:\inetpub\wwwroot\Default.htm and give everyone read access, so we’d better define the contents of Default.htm.erb!

4) Create a Template

Templates allow you to use variables from Knife which include basic info like IP and Hostname by default, but can also be populated with custom information using data bags. An obvious example of a use-case for templates is for populating web.config information like DB connection strings.

chef generate template webserver Default.htm

chef generate template webserver Default.htm

If this throws an error saying Chef was not found, ensure you’ve installed the Chef Development Kit.

Next we need to edit the template file to reflect our custom splash page!

Notepad C:\chef\cookbooks\webserver\templates\default\Default.htm.erb

In this file we just enter a simple web page.

<html>
 <body>
 <h1>Hello World!</h1>
 </body>
</html>

5) Apply the config!

All done! Now we can apply the configuration!

chef-client --local-mode --runlist webserver

chef-client --local-mode --runlist webserver

All this does is kick off the Chef client in local mode with a run list containing the 'webserver' cookbook, but in the background Chef beavers away installing IIS, starting it, and customising the Default.htm page.

working website

Et voilà!

6) Reapply the Configuration

We can now reapply this recipe over and over again, and each time Chef will check the config we’ve declared in the recipe against the actual configuration, and bring it back in line as necessary.

chef-client --local-mode --runlist webserver

So you can delete your default.htm, uninstall IIS, disable the service, but as soon as you run the code above, it will all be reset in accordance with your recipe!
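For example, on a throwaway VM you could deliberately break things and watch Chef put them back:

Remove-Item C:\inetpub\wwwroot\Default.htm
Stop-Service w3svc
chef-client --local-mode --runlist webserver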

Summary

Now those of you familiar with configuration management will be feeling a bit underwhelmed at this point. Where’s the automatic application? Where’s the centralisation? Bootstrapping? You didn’t even define any variables in your template!

Not to worry, we will do that in the next post.

Further Reading

Redhat Enterprise Linux / CentOS Training – LearnChef

Chef Reference – Chef.com

Is the Chef Learning Curve Worth it? – Steven Murawski

Chef Fundamentals Webinar Series – LearnChef