Category Archives: Windows

Getting Started with Chef on Windows Server – Part 3 – Vagrant, Windows, and Managed Chef

In the previous two parts (Intro and Chef Server & Bootstrapping) we used a plain old VirtualBox VM with Windows 2012 R2 as our Chef client, which required downloading VHDs, registering them as individual VMs and then installing Chef manually. Part 2 even required that you still had your old VM from the first session lying around in order to start where you left off!

This is not very chicken farm of us and, I've since learned, really doing it the hard, old-fashioned way. So what's the easy way?

Vagrant

Vagrant is a tool for building complete development environments. With an easy-to-use workflow and focus on automation, Vagrant lowers development environment setup time, increases development/production parity, and makes the “works on my machine” excuse a relic of the past.

About Vagrant

For anyone that has familiarity with AWS, I describe Vagrant as (loosely) CloudFormation for VirtualBox (other hypervisors are supported!).

It allows you to easily spin up an environment based on any template found on VagrantCloud.com, bootstrap it, test it, throw it away and start again.
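In practice, the core workflow boils down to a handful of commands (using the box we'll pick later in this post):

vagrant init kensykora/windows_2012_r2_standard
vagrant up
vagrant provision
vagrant destroy -f

That's it: write a Vagrantfile for a box, build and provision the VM, re-run the provisioners whenever you like, then throw the whole thing away.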

So go and download it. I'm sure you've already got VirtualBox installed, but if not, download that too.


Exercise

Prerequisites

  1. Vagrant
  2. Virtualbox
  3. Chef Client/DK
  4. Some awareness of what Chef is
  5. Some familiarity with VirtualBox
  6. Some familiarity with scripting/cmdline

We’re going to use Vagrant to setup a Windows 2012 R2 virtual machine, install Chef client on it, and apply a basic cookbook. Once you’ve done this you’ll have a great platform for creating and testing your own cookbooks without having to manage redeploying VMs manually.

1) Setup Managed Chef

For the purposes of this trial run of Chef inside Vagrant, we’re going to use Managed Chef.

Managed Chef is Chef hosted by OpsCode, sorry, Chef (the company), relieving you of the necessity to set up and host your own server. If you're interested in setting up your own Chef Server, see Getting Started with Chef on Windows Server – Part 2 – Chef Server & Bootstrapping.

Visit manage.opscode.com and register for a free account (up to 5 nodes).


Once you’ve signed in, download the starter kit and extract the contents to a new directory called “vagrant-chef-windows” somewhere in your My Documents folder.

Important: It is imperative that you create this folder in your My Documents, or some other subfolder within your user’s home directory. Vagrant, Chef, and other tools which have their roots in Linux, use the current working directory and sometimes the user’s home directory in order to figure out where to look for their configuration files. Always be aware of your CWD when executing Vagrant and Chef commands, as it’s surprisingly important!


Now that's all set up, you're ready to start with Vagrant!

2) Setup Windows Chef with Vagrant

Windows in Vagrant is pretty tried and tested now, it seems. Although support for Windows guests was only officially added in April 2014, it was available as a plugin for quite a while before that.

Nonetheless, the selection of “Boxes” (VM templates) on vagrantcloud.com is pretty limited right now, presumably due to licensing concerns.


The most popular Windows 2012 R2 box is currently one provided by OpenTable; however, it seems to have issues with password expiry, so we'll go with the second most popular, the one by kensykora.

If you open up the link to that box, you’ll see a handy command in a textbox, ready for you to copy out.


Copy that command, open a new PowerShell window on your computer, CD into the "vagrant-chef-windows" folder you created earlier, then execute the command:

vagrant init kensykora/windows_2012_r2_standard

This creates a Vagrantfile in the directory in which you've executed the command.

2.1) Setup Initial Vagrant Configuration

Open the Vagrantfile in your favourite text editor, and replace the contents with the following:

VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
	# Every Vagrant virtual environment requires a box to build off of.
	config.vm.box = "kensykora/windows_2012_r2_standard"
	
	# Forward ports
	config.vm.network "forwarded_port", guest: 80, host: 8080
	
	config.vm.provider "virtualbox" do |vb|
		# Don't boot with headless mode
		vb.gui = true
	end

	# Shell Provisioning
	config.vm.provision "shell" do |shell|
		shell.path = "install-chef.ps1"
	end
	
end

The configuration file is Ruby based, and does several things.

  1. Provisions the VM based on kensykora/windows_2012_r2_standard (downloading it if necessary)
  2. Forwards port 80 in the guest machine to port 8080 on your machine (the host)
  3. Pops up a Virtualbox window with the guest’s console for simplicity’s sake
  4. Executes install-chef.ps1 in the guest

Take a few moments to pair up the list above with the lines in the configuration file. Once you have, you'll wonder "where the hell is it getting install-chef.ps1 from?". At the moment, it isn't.

2.2) Use PowerShell Bootstrapping to Install Chef

Create a new file in your vagrant-chef-windows directory called install-chef.ps1 and populate it with the following:

$progressPreference = 'silentlyContinue';
$chefInstaller = 'C:\vagrant\chef-windows-11.16.2-1.windows.msi';
$chefInstallerUri = "https://opscode-omnibus-packages.s3.amazonaws.com/windows/2008r2/x86_64/chef-windows-11.16.2-1.windows.msi";

# Download the Chef MSI into the synced folder if it isn't already there
if(!(Test-Path $chefInstaller)){
    Write-Host "$(Get-Date) Downloading Chef...";
    Invoke-WebRequest -Uri $chefInstallerUri -OutFile $chefInstaller;
}

# C:\chef only exists once Chef has been installed, so use it as our marker
if(!(Test-Path "C:\chef")){
    Write-Host "$(Get-Date) Installing Chef";
    # msiexec won't create the log directory for us, so make sure it exists
    if(!(Test-Path 'C:\tmp')){ New-Item -ItemType Directory -Path 'C:\tmp' | Out-Null; }
    Start-Process -Wait -FilePath 'C:\Windows\system32\msiexec.exe' -ArgumentList @('-i',$chefInstaller,'/quiet','/log','C:\tmp\chef-client-install.log')
    Write-Host "$(Get-Date) Installation Complete"
}else{
    Write-Host "$(Get-Date) Chef is already installed!";
}

Ideally, we wouldn’t need to do this as Chef would already be installed in the Box we got from VagrantCloud.com, however, at the time of writing there are no Windows 2012 R2 boxes with Chef pre-installed.
Your folder should now contain both the Vagrantfile and install-chef.ps1.

2.3) Power On – Vagrant Up

Now, ensure you’re in your vagrant-chef-windows folder in the PowerShell console, then execute:

vagrant up


It will scurry off, download the kensykora 2012 R2 box (not shown as I already had it), power up a new VM and execute your ps1. Once complete, you should have a VirtualBox console pop up and allow you to sign in (right ctrl + del = Ctrl + Alt + Delete).

Username: vagrant
Password: vagrant

If you login, you'll see C:\chef exists, and if you browse into C:\vagrant, you'll see that the entirety of your vagrant-chef-windows folder is available within the VM!


This is important because almost all file paths you’ll set in your Vagrantfile configuration will be relative to this directory.

2.4) Setup Vagrant Chef Provisioning Configuration

Now it’s time to actually use Chef. But we’re not going to just open up a PS console inside the VM and run chef-client. Oh no, we’re going to use Vagrant’s chef-client provisioning functionality!

That means that every time we deploy a new VM, our PS1 file will install Chef, then Vagrant will run chef-client for us, with the configuration we’ve defined in the Vagrantfile.

Add the following lines to the end of your Vagrantfile (but before the final "end").

	# Chef Provisioning
	config.vm.provision "chef_client" do |chef|
		chef.chef_server_url = "https://api.opscode.com/organizations/orgname"
		chef.node_name = "node20141019"
		chef.validation_client_name = "orgname-validator"
		chef.validation_key_path = "chef-repo\\.chef\\orgname-validator.pem"
		chef.add_recipe "learn_chef_iis"
	end

You will, of course, need to replace orgname with your organisation name wherever it appears, and amend the node_name if you like.

Your Vagrantfile should now contain both the shell and chef_client provisioning sections.

This code uses the Chef Client we’ve already installed and the orgname-validator.pem which came with our Starter Kit in order to add this guest as a node to our managed Chef environment.

2.5) Upload the Cookbook

But wait, we haven’t got the cookbook learn_chef_iis (a simple Windows/IIS example used by the learnchef.com/windows walkthroughs)! CD into your chef-repo directory and execute:

knife cookbook site download learn_chef_iis


Now extract the resulting tar.gz into your cookbooks subdir.
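If you have a tar on your path you can do that from the console (recent Windows versions ship bsdtar; on 2012-era machines you'll likely want 7-Zip instead). The version number in the filename below is illustrative, so check what knife actually downloaded:

tar -xzf .\learn_chef_iis-0.2.0.tar.gz -C .\cookbooks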


And finally, upload it to your managed Chef environment.

knife upload learn_chef_iis

2.6) Vagrant Provision

Excellent! The cookbook’s ready to go. Now CD up a level into your vagrant directory and run:

vagrant provision

Vagrant has now kicked off a chef-client run with the learn_chef_iis cookbook as its runlist. Once it’s finished (and in combination with the forwarded port we setup earlier) you should now be able to open your favourite browser on your host machine and go to http://localhost:8080 and see…

Voila!

You're seeing the results of the IIS webserver that Chef configured in the VirtualBox VM that Vagrant deployed and bootstrapped for you! *Phew*

2.7) Redeploy from Scratch

Now for the moment of truth. Delete the node from the managed Chef environment, destroy the VM and redeploy a fresh one based on the configuration we’ve provided!


vagrant destroy -f
vagrant up


Wait a little while for Vagrant and Chef to finish doing their thing and you should be able to go back to localhost:8080 again and see exactly the same thing on a fresh VM!

You can use this environment to test the custom cookbooks we created in Part 1, but I’ll leave that to you to figure out in combination with what we’ve done today!

Further Reading

Chef Manage

Vagrant Chef Client Provisioner

Vagrant Getting Started

Vagrant Cloud

Enforcing AWS Multi-Factor Authentication with IAM, PowerShell and PRTG

Introduction: MFA

Multi-Factor Authentication as utilised by AWS uses a TOTP (Time-based One-Time Password) setup with either a hardware or 'virtual' MFA device. The virtual device is the most commonly used, allowing you to use applications like Google Authenticator on your smartphone to generate passwords that are only viable for a short window (typically 30 seconds).

This means that if you have MFA enabled, even if someone has your password, so long as they don’t also have access to your (hardware or virtual) MFA device, they’re unable to login to your account.

Introduction: AWS MFA

MFA as utilised by AWS is pretty straightforward to set up: scan a QR code, type in a couple of PINs, job done. So long as you have the right permissions.

In order to allow your IAM users to even set up their MFA device you need to set a policy against their user (preferably indirectly, using a group). Something like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowUsersToCreateDeleteTheirOwnVirtualMFADevices",
      "Effect": "Allow",
      "Action": ["iam:*VirtualMFADevice"],
      "Resource": ["arn:aws:iam::123456789012:mfa/${aws:username}"]
    },
    {
      "Sid": "AllowUsersToEnableSyncDisableTheirOwnMFADevices",
      "Effect": "Allow",
      "Action": [
        "iam:DeactivateMFADevice",
        "iam:EnableMFADevice",
        "iam:ListMFADevices",
        "iam:ResyncMFADevice"
      ],
      "Resource": ["arn:aws:iam::123456789012:user/${aws:username}"]
    },
    {
      "Sid": "AllowUsersToListVirtualMFADevices",
      "Effect": "Allow",
      "Action": ["iam:ListVirtualMFADevices"],
      "Resource": ["arn:aws:iam::123456789012:mfa/*"]
    },
    {
      "Sid": "AllowUsersToListUsersInConsole",
      "Effect": "Allow",
      "Action": ["iam:ListUsers"],
      "Resource": ["arn:aws:iam::123456789012:user/*"]
    }
  ]
}

Where 123456789012 is your AWS account ID.
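If you prefer the command line to the console, attaching this as an inline group policy with the AWS CLI might look something like the following sketch (it assumes the CLI is installed and configured, and that the policy JSON above is saved locally; the group name, policy name, and filename are all illustrative):

# Illustrative names; assumes the policy JSON above is saved as self-manage-mfa.json
aws iam put-group-policy --group-name AllUsers --policy-name SelfManageMFA --policy-document file://self-manage-mfa.json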

Okay, so far so good. Your AWS users can set their own MFA devices. But currently whatever other privileges you've given them are usable even if they haven't set up an MFA device for their account, meaning their account is a security vulnerability. Best put paid to that!

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*",
      "Condition":
      {
          "Null":{"aws:MultiFactorAuthAge":"false"}
      }
    }
  ]
}

Now we're giving the user full access to everything, but only if they have authenticated with MFA (the "Null" condition checks that the aws:MultiFactorAuthAge key is present, i.e. not null, which is only the case when MFA was used at sign-in). So if they login with just a password and try to access, e.g. EC2, they'll get a big fat access denied.


Great! So they go and set up their MFA device, log out, and log in again with MFA.


And voila! Access allowed.


Which is great! Really secure; can't get in with that policy without using MFA.

But what if someone sets up another policy (which itself is lovely and granular, and preserves the principle of least privilege) but forgets the MFA constraint? When you get into more numerous and complicated policies attached variously to groups, users, etc., it becomes cumbersome to audit them all for compliance, even with automation.

Further, what happens when someone gets woken up on call, forgets all about MFA for this particular AWS account (which may well be one of a dozen or so they're involved with), then gets access denied when they try to login? Will they know to set up MFA? Or will they wake someone else up to give them "the right access" to the system?

In any case, until AWS allows MFA to be part of the ‘password policy’ and prompts you to set it up as soon as you login for the first time (and even potentially afterwards depending on how paranoid you are), there’s a need to ensure all your users have MFA setup from the get-go.

The Monitoring

I have the pleasure of using PRTG for monitoring. A capable little tool, but the following code can be adapted for any tool running on Windows.

[CmdletBinding()]
Param(
    [parameter(Mandatory=$true)]
    [string]$accessKey,
    [parameter(Mandatory=$true)]
    [string]$secretKey
)

# Grab the current working directory of the script for the purposes of loading the DLL
$scriptWorkingDirectory = Split-Path -Path $MyInvocation.MyCommand.Definition -Parent

# Ensure you use the .NET 4.5 DLL not the .NET 3.5 DLL from the AWS .NET SDK
# Load AWS API DLL
$AWSAPIFiles = @(
    "$scriptWorkingDirectory\AWSSDK.dll"
)
foreach($apiFile in $AWSAPIFiles){
    
    # Try loading the DLL
    Write-Verbose "Loading $apiFile";
    try{
        $fileStream = ([System.IO.FileInfo] (Get-Item $apiFile)).OpenRead();
    }catch{
        Write-Error $_.exception.message;
        Exit 1;
    }
    
    # Read the contents of the DLL
    $assemblyBytes = New-Object byte[] $fileStream.Length
    $fileStream.Read($assemblyBytes, 0, $fileStream.Length) | Out-Null;
    $fileStream.Close();

    # Load the library 
    [System.Reflection.Assembly]::Load($assemblyBytes) | out-null;
}

# Set the AWS Access Key and Secret Key for authentication using the .NET SDK
[System.Configuration.ConfigurationManager]::AppSettings["AWSAccessKey"] = $accessKey
[System.Configuration.ConfigurationManager]::AppSettings["AWSSecretKey"] = $secretKey

# Connect to the AWS API
Write-Verbose "Connecting to AWS API";
$client= New-Object -TypeName Amazon.IdentityManagement.AmazonIdentityManagementServiceClient;

# Fetch the list of users that have passwords but not MFA
Write-Verbose "Fetch users that have passwords, but no MFA";
$mfadevices = @()
$usersWithoutMFA = $client.listUsers().ListUsersResult.Users | ?{
        
        # Ensure the user has a password (if they only have a secret key, they don't need MFA)
        try{
            $client.GetLoginProfile($_.username) | Out-Null;
        }catch{
            return $false;
        }
        
        # Return false if they don't have MFA (otherwise we don't care about them as they're doing the right thing!)
        return !$client.ListMFADevices($_.username).MFADevices;
    }

# Output to PRTG
Write-Verbose "Output in a PRTG friendly format (XML)";
Write-Host "
<prtg>
	<result>
		<channel>Number of users without MFA devices registered</channel>
        <value>$(($usersWithoutMFA | Measure-Object).count)</value>
    </result>
    <Text>$(($usersWithoutMFA | select -expandProperty  "Username") -join "; ")</Text>
</prtg>";

# Return success exit code
exit 0;
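
Before wiring the script into PRTG, you can sanity-check it from a console. The filename here is whatever you saved the script as, and the keys are placeholders; -Verbose works thanks to [CmdletBinding()] and surfaces the Write-Verbose progress messages:

.\aws-users-without-mfa.ps1 -accessKey 'AKIAEXAMPLE' -secretKey 'example-secret' -Verbose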

In order to execute this you need the following pre-requisites:

  1. The .NET 4.5 AWSSDK.dll from the AWS .NET developer’s SDK must be housed in the same directory as the .ps1
  2. PowerShell 4.0 or higher must be installed on the PRTG Probe
  3. .NET 4.5 must be installed on the PRTG probe executing the custom sensor
  4. A user with at least the following privileges in AWS:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1410864868000",
      "Effect": "Allow",
      "Action": [
        "iam:ListUsers",
        "iam:ListMFADevices",
        "iam:GetLoginProfile"
      ],
      "Resource": [
        "arn:aws:iam::123456789012:*"
      ]
    }
  ]
}

Where, again 123456789012 is replaced with your account ID.

In order to get the .NET 4.5 AWSSDK.dll from the AWS .NET developer’s SDK just install the SDK on your machine, then copy AWSSDK.dll from C:\Program Files (x86)\AWS SDK for .NET\bin\Net45 to the directory your script lives in.

This directory should be under your PRTG probe’s Custom Sensors\ExeXML\ directory.
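As a sketch (both paths are assumptions based on default install locations, and the script folder name is illustrative; adjust for your environment):

Copy-Item 'C:\Program Files (x86)\AWS SDK for .NET\bin\Net45\AWSSDK.dll' -Destination 'C:\Program Files (x86)\PRTG Network Monitor\Custom Sensors\EXEXML\aws-mfa\'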

Once you’ve done that, you can create a Script/Exe custom sensor in PRTG pointing at your new .ps1 file like so:


Setting the arguments to reflect the access and secret keys of the AWS user you created earlier.

Once that's done, you'll have a sensor that shows the names of the users in your AWS account that have a password, but no MFA device. Great! But how do we alert on that? Because when that sensor goes into an error state, the message will be replaced with an error message!

No problem, just create a factory sensor that references the first sensor, then create a threshold on the channel.

Create Sensor > Factory Sensor > Properties

#<factory sensor channel ID>:<factory sensor name>
Channel(<custom sensor id>,<custom sensor channel>)
#1:Users without MFA on AWS
Channel(10101,2)

Then set the threshold against the channel in the factory sensor's channel settings. Voila! You will be alerted whenever you have a user that has a password, but no MFA device associated!

How do you handle this issue in your environment? Any suggestions on how to do this better? Please let me know in the comments!

Further Reading

StackOverflow – Can you require MFA for AWS IAM accounts?

AWS Docs – Configuring and Managing a Virtual MFA Device for Your AWS Account (AWS Management Console)

JeffW@AWS – Allow your user to self-manage a virtual MFA

Getting Started with DSC and PowerShell 5.0 – Part 1 – Installing WordPress with Desired State Configuration

So we've checked out the basics of Chef on Windows in Part 1 and Part 2 of Chef On Windows, and with the recent release of the Windows Management Framework 5.0 Preview September 2014, I thought it was time to stick a toe into the water of the Desired State Configuration side of configuration management on Windows.
As quite a lot of intros focus very heavily on the theory and don't necessarily show a lot of results up front, I'm going to continue the precedent of the previous Chef articles and show you the shortest path to something tangible, hopefully gaining some familiarity with the tech involved along the way.

In Part 1 we're going to use the WMF 5.0 preview, DSC, and a little bit of OneGet/PowerShellGet (the name seems to be up for discussion at the moment) to install WordPress 4.0 on to a blank VM. In order to do this we're going to follow the guide laid out in the quick-start of the WordPress PowerShell/DSC module, so all credit goes to the wonderful people who created this module for providing our first entry point into DSC!

Important: You don’t need WMF 5.0 to use DSC, it’s been around since PS 4.0, but the WordPress PowerShell/DSC module we’ll be using requires WMF 5.0 for OneGet.

Important #2: This guide uses a WMF 5.0 preview and DSC modules that are labelled x for eXperimental, don’t use these in production 🙂

Requirements

  1. Blank Windows 2012 R2 VM 
  2. Powershell Understanding – Basic: Microsoft Virtual Academy – Getting Started With PowerShell

We won’t need the VMs we created in the Chef series as we’ll be focussing on just DSC for today.

1) Preparing the VM

As the  WordPress PowerShell/DSC module we’ll be using requires WMF 5.0 for OneGet, we need to go and grab the September 2014 Preview!

Download WMF 5.0 to your 2012 VM from http://www.microsoft.com/en-us/download/details.aspx?id=44070

Now we need to install the xWordPress module and its dependencies.

Whoa whoa whoa, don't download it from the link! What is this, the 90s? We've just installed PowerShell 5.0 and with it OneGet, so let's use it!

Open up a PowerShell console and run


Install-Module xWebAdministration -MinimumVersion 1.3.2 -Force

and accept the offer to download NuGet_anycpu.exe.


Now install the remaining modules.


Install-Module xPSDesiredStateConfiguration -MinimumVersion 3.0.1 -Force

Install-Module xMySql -MinimumVersion 1.0 -Force

Install-Module xWordPress -MinimumVersion 1.0 -Force

Install-Module xPhp -MinimumVersion 1.0.1 -Force

Excellent! Okay, where did they go?

$env:ProgramFiles\WindowsPowerShell\Modules folder


Awesome! Since when has that been a thing? WMF 5.0, I assume, but I'm not sure. Getting modules to load automatically has always been a bit of a per-user PITA in the past, so if this is a user-agnostic way of installing PowerShell modules, it's only a good thing!

2) Prepare the Configuration

Now we need to grab the sample files from the xWordPress module and customise them to our needs.

Copy the contents of C:\Program Files\WindowsPowerShell\Modules\xWordPress\samples to your Documents folder


Open up SingleNodeEndToEndWordPress.ps1 in the PowerShell ISE and check that the Download URLs are still correct for PHP and MySQL.


I only had to change PHP to http://windows.php.net/downloads/releases/archives/php-5.5.14-nts-Win32-VC11-x64.zip, but double check MySQL as well, as it may have changed by the time you read this!

3) Executing the Configuration

Go back to your PowerShell window, cd into your documents folder and execute SingleNodeEndToEndWordPress.ps1.

This will compile and apply the DSC configuration, performing the following tasks (at least); see the sketch after this list for what's happening under the hood:

  1. Install IIS
  2. Install PHP and dependencies
  3. Install MySQL
  4. Install WordPress into IIS with * port 80 HTTP bindings.
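
If you're curious what actually happens when you run that script, here's a minimal sketch. It assumes the sample declares a DSC configuration named SingleNodeEndToEndWordPress (the cmdlets are real; the paths and invocation details are illustrative): the configuration is compiled into a MOF document, which is then handed to the Local Configuration Manager to enact.

# Dot-source the sample to load the configuration (illustrative paths)
. .\SingleNodeEndToEndWordPress.ps1

# Compile the configuration into .\Mof\<nodename>.mof
SingleNodeEndToEndWordPress -OutputPath .\Mof

# Ask the Local Configuration Manager to enact it, streaming progress to the console
Start-DscConfiguration -Path .\Mof -Wait -Verbose -Force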


After some time, your system will restart to complete the installation.


Once it's restarted, DSC will continue to configure the computer. To see the progress, go to the DSC event log (Event Viewer > Applications and Services Logs > Microsoft > Windows > Desired State Configuration > Operational).

Once you see “Warning” “The local configuration manager was shut down”, your new WordPress site should be ready! Check out Localhost in IE!


Ooh, this is the first time I’ve seen WordPress 4.0 default installation! First impressions are very monochrome, but eh, that’s what themes are for!

Summary

So what have we achieved here?

We’ve used community provided modules for DSC/PowerShell to install WordPress and all its dependencies, including IIS, PHP, and MySQL.

Was this easier than doing all the work ourselves, clicking through installers and typing out config ourselves? Much!

Does it mean we no longer need Chef and all that work we did in the past couple of posts was unnecessary? Not at all!

Does this illustrate the power and flexibility of DSC and OneGet? No, we’re just getting started!

I'll be writing a subsequent post to dig in and write our own DSC module/template/whatever-the-correct-nomenclature-is, but I suspect that bringing what we've learned today into Chef with Chef's new DSC evaluation release recipes will be the post immediately following this one.

Further Reading

Steven Murawski

Everything Else

Getting Started with Chef on Windows Server – Part 2 – Chef Server & Bootstrapping

Now that we’ve done Part 1 – Configure a Package & Service, we can start getting a little more into the meat of Chef: centralisation. In the previous scenario we had defined a single recipe and applied it locally. Very simple, not very useful. In this part, we’re going to create a Chef Server, upload the recipe we created in the previous part to it, and then bootstrap another VM using it.

This is a relatively long winded setup, and if you’re itching to get started I highly recommend running through the LearnChef.com Redhat Enterprise Linux tutorial which even provides you with the VMs and hosted Chef Server, which will get your feet wet and started on the road to Chef. If, however, you’re interested in getting slightly deeper into Chef, step right this way.

Requirements

  1. Ubuntu Server VM
  2. IMPORTANT: The 2012 R2 VM you made in the previous part of this series
  3. Powershell Understanding – Basic: Microsoft Virtual Academy – Getting Started With PowerShell
  4. Basic understanding of what Chef is (ideal, but not required)
  5. Basic Linux knowledge

1) Setting up the Chef Server

“Wait, what? Ubuntu Server? What happened to the “On Windows” part of this? I thought that was the whole point!”

Unfortunately at the time of writing Chef Server is only available on Linux. So in order to manage our Windows servers we’re going to need an Ubuntu Server VM on which we can install Chef Server. Don’t worry, Chef Server isn’t really the focus here, we just need it for configuration centralisation and user management.

There are a couple of alternatives to self-hosting a Chef Server including: Opscode Hosted Chef, and OpsWorks (and probably others). The former looks pretty sexy if you’ve got the cash to splash, and their free trial is crucial in the learnchef.com examples which we’re blatantly ripping off.

  1. Spin up an Ubuntu Server instance, make sure it has its own IP, can talk to our old 2012 VM, and access to the internet
  2. Visit http://www.getchef.com/chef/install, ensure you click "Open Source Chef Server 11" and select the latest version of Chef, then copy the URL the link provides you with into Notepad.
  3. In your Ubuntu Server VM enter:
    wget [downloadURL]

    e.g.

    wget https://opscode-omnibus-packages.s3.amazonaws.com/ubuntu/12.04/x86_64/chef-server_11.1.3-1_amd64.deb

    This will download the chef-server installation file to your current directory.

  4. Once that’s complete, execute the installer using:
    sudo dpkg -i chef-server*.deb
  5. Setup Chef Server using the following command (you won't be asked for any details):
    sudo chef-server-ctl reconfigure
  6. Once the configuration is complete, you're done! You can visit the server in your browser at https://<ip of your ubuntu server>

2) Log on to the Chef Server and Download Credentials

You will need the following private keys in order to set up the workstation we previously created on our 2012 VM to talk to our new Chef Server.

  1. An administrative user (in our case, admin)
  2. A validator user (in our case, chef-validator)

To get these credentials, login to your new Chef Server (https://<ip of ubuntu server>) using the default credentials:

Username: admin
Password: p@ssw0rd1

Note the lowercase p in the password, this is not an MS educational sample!


You will be immediately prompted to save the 'admin' user's private key; save this to your desktop as chef-admin.pem.


Now navigate to

Clients > chef-validator > Edit > Regenerate Private Key


to download the validator's private key. Save it into a text file called chef-validator.pem on your desktop.

3) Setup the Development Kit in your 2012 VM to Talk to Chef Server

Now we’re going to highlight a distinction that we did not draw in our previous article (mostly because I didn’t really know it existed). That is the difference between a Workstation and a Chef Client.

You'll remember that we installed both the Development Kit and the Chef Client on to our VM previously. Well, as you might imagine, the devkit isn't something you need on every server; it's what we were using to author our recipes and templates. The Development Kit is something you'd (I'd guess) install on a bastion server or RD Gateway, allowing you to author your recipes and then upload them to your Chef Server to be deployed elsewhere.

One of the big advantages of configuration management is that you can version control your configuration, and to this end we're going to place our existing recipes into a repository based on the GitHub Chef repo. Why exactly the repo needs to be based on the full Chef repo from OpsCode I'm not sure, but I'm not inclined to contest the official documentation!

On your 2012 VM from the previous article, download GIT from http://www.git-scm.com/download/win ensuring you tick “Use GIT from the Windows Command Prompt” when asked.


Once installed, open Powershell and CD into your Documents folder and run:

git clone git://github.com/opscode/chef-repo.git


This will pull down the latest copy of the Chef repo from Github and form the basis of our new working directory.

Once complete, create a folder inside the new ‘chef-repo’ folder called .chef  (you’ll probably need to use mkdir as the Windows UI won’t let you create a folder starting with a ‘.’) and copy the two pem files you downloaded from the Chef server earlier into it:

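For example, from your Documents folder in PowerShell (a sketch; adjust the paths to wherever you saved the keys):

New-Item -ItemType Directory -Path .\chef-repo\.chef | Out-Null
Copy-Item "$home\Desktop\chef-admin.pem", "$home\Desktop\chef-validator.pem" -Destination .\chef-repo\.chef\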

Because these files are secret, we don’t want to sync them with our source repo, so open up .gitignore and check that the .chef folder is already ignored.

Important: Because I didn't have a domain available to me, I lacked the FQDNs required for communication with the Chef Server. To work around this in my test environment, I simply added an entry to the hosts file on my 2012 VM with the IP of the Chef Server, named chef-server.fakedomain, which worked fine. (You will also need to do this on the machine you're bootstrapping later.)

Now we can configure Knife to talk to our new Chef Server by running

knife configure --initial

Which will prompt for the following info:

Location of Config File: <accept default>
Chef Server URL:
 https://<ubuntu server IP>
Name for New User: w2k12a
Existing Admin Name: admin
Location of Existing Admin’s Private Key: C:\users\<yourname>\documents\chef-repo\.chef\chef-admin.pem
Validation Client Name: chef-validator
Location of Validation Key: C:\users\<yourname>\documents\chef-repo\.chef\chef-validator.pem
Path to Chef Repo: C:\Users\<yourname>\Documents\chef-repo\
Password for New User: <your choice>


And you’re done! Your workstation is now setup to talk to your Chef Server. Next we need to upload the recipe we created previously and bootstrap an unwitting victim server.

4) Upload Recipe & Bootstrap a New Server

In the previous article we created a basic recipe which installed IIS and amended the default.htm to say “Hello World!”, which is perfect for an illustration of how to take a completely blank server and bootstrap it with a specific recipe.

Upload Recipe to Chef Server

Now that your workstation (old 2012 VM) is setup to talk to our Chef server, we can upload the ‘webserver’ recipe we created locally last time.

Copy the “webserver” directory from C:\chef\cookbooks into the repo we just created C:\users\<yourname>\documents\chef-repo\cookbooks\ (if the cookbooks subfolder doesn’t exist, create it).
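In PowerShell that copy is something like this (a sketch; adjust paths to taste):

# Create the cookbooks subfolder if needed, then copy the cookbook into the repo
New-Item -ItemType Directory -Path "$home\Documents\chef-repo\cookbooks" -ErrorAction SilentlyContinue | Out-Null
Copy-Item C:\chef\cookbooks\webserver -Destination "$home\Documents\chef-repo\cookbooks\" -Recurse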

On the same server, run:

knife cookbook upload webserver


Bam, simple as that! Your webserver recipe is now available to any server configured to talk to our Chef server.

Bootstrap a New Server

Go off and spin yourself up a new 2012 server, I’ll wait.

Once you’re done, we’ll need to

  1. Add your chef server’s FQDN (e.g. chef-server.fakedomain) to the new server’s host file if like me you didn’t have a DNS server to hand.
  2. Enable Windows Remote Management on the new server
  3. Install a plugin for Knife on our workstation (the old VM).

Enable Windows Remote Management

On your fresh 2012 server run the following to allow remote access and set the recommended remoting settings from Chef (I neglected the MaxMemoryPerShellMB setting because W2012’s is higher than 300MB already).

Enable-PSRemoting -force
Set-Item WSMan:\localhost\MaxTimeoutms 1800000
Set-Item WSMan:\localhost\Service\AllowRemoteAccess $true
Set-Item WSMan:\localhost\Service\Auth\Basic $true
Set-item WSMan:\localhost\Service\AllowUnencrypted $true

Why Enable-PSRemoting and not  Set-WSManQuickConfig? Simply because using Invoke-Command from the workstation to the Chef Client is an easy way to troubleshoot connectivity issues.
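For instance, a quick connectivity sanity check from the workstation might look like this (the IP is illustrative, and the Basic/unencrypted auth matches the lab-only settings above):

$cred = Get-Credential    # the Windows admin account on the new server
Invoke-Command -ComputerName 192.168.1.20 -Credential $cred -Authentication Basic -ScriptBlock { $env:COMPUTERNAME }

If that returns the new server's hostname, WinRM is working and the bootstrap should be able to connect too.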

Important: Do not copy this and use it in your production environment! Use it for testing and PoC and take the time to use proper encrypted auth in your production environment.

On your workstation (old 2012 server) run the following to allow the server to reach out and remote on to the server we’re going to bootstrap.

Set-Item wsman:\localhost\Client\TrustedHosts -value *

Important: Again, don’t copy this straight into production, use a value like *.contoso.com to allow your AD domain’s computers only.

Install Knife-Windows and Bootstrap Server

Hop back onto your old 2012 VM (the one we configured as a workstation with Chef DK) and run the following:

gem install knife-windows

This will call out and download the knife-windows plugin which allows bootstrapping via WinRM instead of the default SSH.

Once that’s done, it’s one simple command (well, kinda) to call out and install Chef Client and execute your Recipe on your new VM!

knife bootstrap windows winrm [new 2012 server ip] -x [windows admin username] -P [password] --node-name node1 --run-list 'recipe[webserver]' -V

(I’ve included -V for verbose because this took nearly ten minutes on my ageing-laptop-powered VMs and wanted some feedback during.)


Some time later…

Knife has now reached out to your blank 2012 VM, downloaded the MSI for Chef Client, installed it, and applied your ‘webserver’ recipe, which in turn installed IIS and populated Default.htm.

Did it work? The moment of truth… put http://<ip of your new server> into your browser


Holy crap it actually worked!

Synopsis

So what have we actually achieved here? We’ve taken a recipe for installing IIS and an extremely basic custom website that was previously only applicable locally, and uploaded it to our own locally hosted Chef Server, allowing us to execute it remotely even when Chef isn’t already installed.

We’ve only scratched the surface of Chef here, and there are loads of questions to ask and answer, e.g.:

  1. How does Chef benefit from Desired State Configuration?
  2. How do I define per-server or per-environment settings like connection strings?
  3. How do I manage databases?
  4. How do I manage service account credentials?
  5. How do I deal with my existing executable installers?
  6. How do I manage upgrades?

And so on ad infinitum. Some of these may be answered in upcoming posts about OneGet and Desired State Configuration, others may be the subject of a further introduction-to-concepts blog post, depending on how well I get on with Chef. All are, I’m sure, answerable with appropriate research though. If you know of any useful conceptual introductions on Chef, please share them in the comments!

Further Reading

Install the Server on a Virtual Machine

How to Install a Chef Server, Workstation, and Client on Ubuntu VPS Instances

Managed Reference for WinRM Windows PowerShell Command Classes

Enable and Use Remote Commands in Windows PowerShell

Getting Started with Chef on Windows Server – Part 1 Intro

I’ve never had the opportunity to work with configuration management software, but a recent project has pushed me over the edge from “Wow, that sounds really cool in theory!” to “Well, I’d better get my feet wet!”.

As learnchef.com's Windows page is currently under construction, I thought I'd write my efforts up to help anyone who might also be getting their feet wet for the first time in the configuration management space using Chef on Windows.

IMPORTANT: As I’m writing these posts while going along, it’s not to say that any of what’s reported adheres to Chef’s best practices. So if you notice any glaring errors, please say so in the comments!

In this series I intend to explore what I understand to be the glorious trifecta of configuration management on Windows:

  1. Chef: Part 1, Part 2, Part 3
  2. Windows Desired State Configuration: Part 1
  3. OneGet

At the start of this series we will have a very rudimentary/non-existent understanding of the three elements above, and will work through each individually, then tie them together (if possible).

This first post will be dedicated to an introduction to Chef on Windows.

Chef – Configuring a Package and a Service

About Chef

Although LearnChef’s Windows page is under construction, they still have a fantastic introduction on RHEL (Redhat Enterprise Linux) which even provides you with a preconfigured VM! I would highly recommend running through this just to get a basic intuitive feel for Chef if you’re on the fence and not sure if you can be bothered to spin up your own 2012 VM and install things yourself.

Steven Murawski has a good blog post, "Is the Chef Learning Curve Worth it?", which, while obviously a little biased as he's now a community manager at Chef, gives a good overview of why you would use Chef on Windows and answers some of the main questions surrounding Chef on Windows.

Pre-requisites

The following steps will require:

  1. Windows 2012 R2 (in theory this should work on 2008 R2+ so long as you have PowerShell 4.0, but I haven’t tested it)
  2. Powershell Understanding – Basic: Microsoft Virtual Academy – Getting Started With PowerShell
  3. Basic understanding of what Chef is (ideal, but not required).

Steps

We’re going to pretty much steal the exact steps from the RHEL Configure a Package and a Service lesson, mix it with the legacy Windows tutorial, and see what happens!

1) Install Chef & Chef Development Kit

Install the Chef Client and the Chef Development Kit on your 2012 R2 VM.

2) Generate a Cookbook

We’re going to create a cookbook that installs IIS and generates a custom Default.htm to display.

The working directory for Chef in Windows looks to be C:\Chef by default, so

cd c:\chef\cookbooks
chef generate cookbook webserver

This will generate the structure and default files for a cookbook named "webserver".

3) Configure the Default Resource File

Now we need to write the Ruby that will define the following:

  1. Install IIS
  2. Start IIS
  3. Populate Default.htm with our message

To do so we’ll edit default.rb in the recipes directory of the webserver cookbook.

Notepad C:\chef\cookbooks\webserver\recipes\default.rb

Then define the following in the file. EDIT: Amended thanks to @cjeffblaine!

powershell_script 'Install IIS' do
 action :run
 code 'add-windowsfeature Web-Server'
end

service 'w3svc' do
 action [ :enable, :start ]
end

template 'c:\inetpub\wwwroot\Default.htm' do
 source 'Default.htm.erb'
 rights :read, 'Everyone'
end

This will execute Add-WindowsFeature Web-Server in a PowerShell context (installing IIS if necessary), then start IIS, and copy the contents of Default.htm.erb to C:\inetpub\wwwroot\Default.htm and give everyone read access, so we’d better define the contents of Default.htm.erb!

4) Create a Template

Templates allow you to use variables from Knife which include basic info like IP and Hostname by default, but can also be populated with custom information using data bags. An obvious example of a use-case for templates is for populating web.config information like DB connection strings.

chef generate template webserver Default.htm


If this throws an error saying Chef was not found, ensure you’ve installed the Chef Development Kit.

Next we need to edit the template file to reflect our custom splash page!

Notepad C:\chef\cookbooks\webserver\templates\default\Default.htm.erb

In this file we just enter a simple web page.

&amp;lt;html&amp;gt;
 &amp;lt;body&amp;gt;
 &amp;lt;h1&amp;gt;Hello World!&amp;lt;/h1&amp;gt;
 &amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;

5) Apply the config!

All done! Now we can apply the configuration!

chef-client --local-mode --runlist webserver


All this does is kick off the Chef client in local mode with a run list containing the 'webserver' cookbook, but in the background Chef beavers away installing IIS, starting it, and customising the Default.htm page.


Et voilà!

6) Reapply the Configuration

We can now reapply this recipe over and over again, and each time Chef will check the config we’ve declared in the recipe against the actual configuration, and bring it back in line as necessary.

chef-client --local-mode --runlist webserver

So you can delete your default.htm, uninstall IIS, disable the service, but as soon as you run the code above, it will all be reset in accordance with your recipe!

Summary

Now those of you familiar with configuration management will be feeling a bit underwhelmed at this point. Where’s the automatic application? Where’s the centralisation? Bootstrapping? You didn’t even define any variables in your template!

Not to worry, we will do that in the next post.

Further Reading

Redhat Enterprise Linux / CentOS Training – LearnChef

Chef Reference – Chef.com

Is the Chef Learning Curve Worth it? – Steven Murawski

Chef Fundamentals Webinar Series – LearnChef

Scripting Backup Exec 2012 – P3 – Customising and Adding Backup Tasks

Introduction

Part 3 will cover creating and customising backup tasks for our newly created backup definition. So far we’ve only accepted the defaults, so unless you’re happy with the predefined schedule, retention period and secondary backup type, you’re going to have to start customising your tasks!

Customising and Adding Backup Tasks

As we’ve already created a backup definition from the BackupToDisk defaults, we’ll have a new definition  containing a full backup and an incremental backup.

It's important to note that a backup definition will always contain a full backup task; you cannot delete it, only modify it. And in order to do so, we must first obtain our definition as a variable. There are two options for doing this: either you can assign your new definition to a variable as you create it, like so:

$backupDefinition = New-BEBackupDefinition -BackupJobDefault BackupToDisk -FileSystemSelection $selection -SystemStateSelection Excluded -Name "New Backup Definition" -AgentServer $agentServer

Or, you can retrieve one you made earlier using:

$backupDefinition = Get-BEBackupDefinition -Name "New Backup Definition"

Gotcha: In theory, these two are interchangeable, but on a number of occasions I found that using the second method (get-BEBackupDefinition) immediately after saving or creating a new backup definition, caused the cmdlet to complain it couldn’t find a definition by that name. So on the whole I would recommend the former over the latter.

Cleaning Up and Scheduling

As I’ve already mentioned, you can only ever customise the full backup task, but as we also have an incremental task in our definition, let’s delete that before we get started.

$backupDefinition = $($backupDefinition | Remove-BEBackupTask * -Confirm:$false | Save-BEBackupDefinition -Confirm:$false | Get-BEBackupDefinition)

In this section of code, we're piping the $backupDefinition variable to Remove-BEBackupTask, which is using a wildcard ("*") to delete all backup tasks (except full, as you can't delete that!). This then pipes the modified backup definition to the Save-BEBackupDefinition cmdlet, which commits the change.

Gotcha: Note the $() around the piped commands, without this, you won’t get the modified definition back, just the output of the first pipe.

Double Gotcha: Since writing this post I’ve discovered that occasionally the definition would not be passed correctly to the variable, causing subsequent cmdlets that I passed it to to claim that it was null. This is why I’ve put the “| Get-BEBackupDefinition” at the end, as this, oddly, seems to resolve the problem.

Now that we’ve cleared out the cobwebs, let’s see what we need in order to customise a task:

  • Schedule
  • Storage
  • Retention Period

The schedule we create using:

$fullBackupSchedule = New-BESchedule -WeeklyEvery "Saturday" -StartingAt "8am"

We’re going to use this to modify our full backup to run weekly on a Saturday at 8am. There are, naturally, a huge range of options in the New-BESchedule cmdlet, I recommend you consult the help file mentioned in Part 1.

Gotcha: Where the help file says you can specify multiple days by comma separating them, you cannot. You must insert them into an array first like so:

[…] New-BESchedule -WeeklyEvery @("Friday", "Saturday", "Sunday") […]

Modifying and Creating Tasks

The schedule is the only tricky part of modifying or creating a backup task. For this purpose there are six commands you need to know:

  • Set-BEFullBackupTask
  • Set-BEIncrementalBackupTask
  • Set-BEDifferentialBackupTask
  • Add-BEFullBackupTask
  • Add-BEIncrementalBackupTask
  • Add-BEDifferentialBackupTask

You can probably guess: set is for modifying existing tasks, add is for creating new ones.

So, first up is modifying our existing task using the $schedule we created earlier.

$backupDefinition = $(Set-BEFullBackupTask -BackupDefinition $backupDefinition -Name * -DiskStorageKeepForHours 672 -Schedule $fullBackupSchedule -Storage "Full Backup Storage Pool" | Save-BEBackupDefinition -Confirm:$false)

This passes the existing backup definition from the $backupDefinition variable, using the asterisk wildcard to set ALL full backup tasks for this definition (there’s only one) to keep the storage for 4 weeks (672 hours) according to our previously defined schedule in $fullBackupSchedule and to backup to the “Full Backup Storage Pool”. It then pipes the modified definition to the Save-BEBackupDefinition cmdlet which saves the change. All this is encapsulated by $() which ensures all the piped commands are executed fully prior to saving the output (the newly altered and committed definition) into $backupDefinition.

The Script So Far

Phew! So far we’ve got a script that creates a new backup definition, deletes the incremental default and modifies the full backup task to our specifications. So what does it look like in full?

# Select the correct server
$agentServer = Get-BEAgentServer "server11*"

# Include the following directories
$selection = @(New-BEFileSystemSelection -Path "C:\*" -PathIsDirectory -Recurse)
$selection += New-BEFileSystemSelection -Path "Z:\*" -PathIsDirectory -Recurse

# Exclude the following directories
$selection += New-BEFileSystemSelection -Path "C:\Windows\*" -PathIsDirectory -Exclude

# Create the backup definition based on defaults, keeping hold of it in a variable
$backupDefinition = New-BEBackupDefinition -BackupJobDefault BackupToDisk -FileSystemSelection $selection -SystemStateSelection Excluded -Name "New Backup Definition" -AgentServer $agentServer

# Delete the default incremental
$backupDefinition = $($backupDefinition | Remove-BEBackupTask * -Confirm:$false | Save-BEBackupDefinition -Confirm:$false | Get-BEBackupDefinition)

# Create a full backup schedule
$fullBackupSchedule = New-BESchedule -WeeklyEvery "Saturday" -StartingAt "8am"

# Modify the default full backup to reflect our new schedule and retention period
$backupDefinition = $(Set-BEFullBackupTask -BackupDefinition $backupDefinition -Name * -DiskStorageKeepForHours 672 -Schedule $fullBackupSchedule -Storage "Full Backup Storage Pool" | Save-BEBackupDefinition -Confirm:$false)

# Create a differential backup schedule
$differentialBackupSchedule = New-BESchedule -WeeklyEvery @("Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday") -StartingAt "9pm"

# Add a differential backup task with our new schedule and retention period
$backupDefinition = $(Add-BEDifferentialBackupTask -BackupDefinition $backupDefinition -Name * -DiskStorageKeepForHours 168 -Schedule $differentialBackupSchedule -Storage "Differential Backup Storage Pool" | Save-BEBackupDefinition -Confirm:$false)

As you can see, I added a new differential backup task running every day that the full doesn’t run, at 9pm with a retention period of 1 week. Hopefully from the explication of the previous cmdlets you can figure out how to customise it to suit your tastes.

Future Posts

That’s it for now! I will shortly post two new entries on Backup Exec 2012 on the following topics:

  • Integrating Backup Exec with PRTG Monitoring
  • Creating a more robust job creation script

Take care, and don’t forget to leave a comment!

Scripting Backup Exec 2012 – P2 – Creating a New Backup with Powershell

Introduction

In Part 2 of Scripting Backup Exec 2012, I’ll be illustrating how to create a Windows file backup from scratch using Powershell. For a basic rundown of the components, concepts and gotchas for this process, please see Part 1.

Prerequisites:

  1. A fully patched and up to date installation of Backup Exec 2012
  2. Powershell and (advantageous) basic Powershell knowledge
  3. An unrestricted execution policy
  4. A pre-installed and trusted Windows Agent added to the Backup Exec server.

The Script

As mentioned in Part 1, to successfully create a backup job we need to programmatically create the following components:

  • Backup Definition (superordinate container)
    • Backup Agent (the server we’re backing up)
    • Filesystem Selection (what files we’re backing up)
    • System State Selection (whether we’re backing up the system state)
    • Backup Job Default (the default template which we will base this on)
    • Backup Definition Name
  • Backup Task
    • Backup Definition
    • Retention Period (in hours)
    • Schedule

Items in bold must be created using a cmdlet, items in italics can be specified as a string/int, or as an array/cmdlet for extra flexibility, and normal items are only specified by string or integer.

Creating the Backup Definition

The cmdlet used to create a new definition is

New-BEBackupDefinition

But before we do that, we need to specify which server we’re going to be backing up to, which will be done using:

Get-BEAgentServer "server11*"

You'll note the asterisk at the end. This is to allow for the possibility of a fully qualified server name (i.e. a server name with the domain on the end). Unfortunately Backup Exec doesn't seem to be consistent in adding servers with either simply the hostname or the fully qualified name.

So a quick and simple bringing together of these commands would be the following:

$agentServer = Get-BEAgentServer "server11*"

New-BEBackupDefinition -BackupJobDefault BackupToDisk -FileSystemSelection "C:\*" -SystemStateSelection Excluded -Name "New Backup Definition" -AgentServer $agentServer

As you can probably guess, this will create a backup-to-disk definition called “New Backup Definition” that backs up the C:\ drive, but not the system state, of a server called “server11”.

Gotcha: If you have a slightly myopic naming convention, this will also match “server111”, there are easy ways around this, but I’ll cover that in a later part.
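One sketch of such a workaround, assuming the agent server objects expose a Name property (untested, so treat it as illustrative):

# Match "server11" exactly, with or without a trailing domain suffix
$agentServer = Get-BEAgentServer | Where-Object { $_.Name -match '^server11(\.|$)' }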

Multiple Directory Selection & Exclusions

The code above is all well and good if you just want to backup the C:\ drive, but what if you want to backup, say, the Z:\ drive as well? Simple! Put them in an array!

$selection = @("C:\*", "Z:\*")
New-BEBackupDefinition […] -FileSystemSelection $selection […]

Easy, right? But what if you want to exclude a directory? Then you’ll need to use our new friend:

New-BEFileSystemSelection

For example, if we want to include C:\ and Z:\, but exclude C:\Windows, we would do something like the following:

# Include the following directories
$selection = @(New-BEFileSystemSelection -Path "C:\*" -PathIsDirectory -Recurse)
$selection += New-BEFileSystemSelection -Path "Z:\*" -PathIsDirectory -Recurse

# Exclude the following directories
$selection += New-BEFileSystemSelection -Path "C:\Windows\*" -PathIsDirectory -Exclude

And then pass it to New-BEBackupDefinition.

The Script so Far

So far we haven’t done anything especially complicated, we’ve created a backup definition from defaults, with customised selection and pointed at the client we specified. So far our script looks like this:

# Select the correct server
$agentServer = Get-BEAgentServer "server11*"

# Include the following directories
$selection = @(New-BEFileSystemSelection -Path "C:\*" -PathIsDirectory -Recurse)
$selection += New-BEFileSystemSelection -Path "Z:\*" -PathIsDirectory -Recurse

# Exclude the following directories
$selection += New-BEFileSystemSelection -Path "C:\Windows\*" -PathIsDirectory -Exclude

# Create the backup definition based on defaults
New-BEBackupDefinition -BackupJobDefault BackupToDisk -FileSystemSelection $selection -SystemStateSelection Excluded -Name "New Backup Definition" -AgentServer $agentServer

It's important to note that at this point, the definition will contain the tasks defined in the default "BackupToDisk" job, namely a Full and an Incremental with fairly short retention periods.

How do we customise the tasks? You'll see in Part 3, Customising and Adding Backup Tasks!
How do we customise the tasks? You’ll see in Part 3 Customising and Adding Backup Tasks!