Category Archives: DevOps

Enforcing AWS Multi-Factor Authentication with IAM, PowerShell and PRTG

Introduction: MFA

Multi-Factor Authentication as utilised by AWS uses a TOTP (Time-based One-Time Password) setup with either a hardware or ‘virtual’ MFA device. The virtual device is the more commonly used of the two, allowing you to use applications like Google Authenticator on your smartphone to generate one-time passwords that are only valid for a brief window (typically 30 seconds).

This means that if you have MFA enabled, even if someone has your password, so long as they don’t also have access to your (hardware or virtual) MFA device, they’re unable to log in to your account.

Introduction: AWS MFA

MFA as utilised by AWS is pretty straightforward to set up: scan a QR code, type in a couple of PINs, job done. So long as you have the right permissions, that is.

In order to allow your IAM users even to set up their MFA devices, you need to set a policy against their user (preferably indirectly, using a group). Something like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowUsersToCreateDeleteTheirOwnVirtualMFADevices",
      "Effect": "Allow",
      "Action": ["iam:*VirtualMFADevice"],
      "Resource": ["arn:aws:iam::123456789012:mfa/${aws:username}"]
    },
    {
      "Sid": "AllowUsersToEnableSyncDisableTheirOwnMFADevices",
      "Effect": "Allow",
      "Action": [
        "iam:DeactivateMFADevice",
        "iam:EnableMFADevice",
        "iam:ListMFADevices",
        "iam:ResyncMFADevice"
      ],
      "Resource": ["arn:aws:iam::123456789012:user/${aws:username}"]
    },
    {
      "Sid": "AllowUsersToListVirtualMFADevices",
      "Effect": "Allow",
      "Action": ["iam:ListVirtualMFADevices"],
      "Resource": ["arn:aws:iam::123456789012:mfa/*"]
    },
    {
      "Sid": "AllowUsersToListUsersInConsole",
      "Effect": "Allow",
      "Action": ["iam:ListUsers"],
      "Resource": ["arn:aws:iam::123456789012:user/*"]
    }
  ]
}

Where 123456789012 is your AWS account ID.

Okay, so far so good. Your AWS users can set up their own MFA devices. But currently whatever other privileges you’ve given them are usable even if they haven’t set up an MFA device for their account, meaning their account is a security vulnerability. Best put paid to that!

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*",
      "Condition":
      {
          "Null":{"aws:MultiFactorAuthAge":"false"}
      }
    }
  ]
}

Now we’re giving the user full access to everything, but only if they have authenticated with MFA. So if they log in with just a password and try to access, e.g., EC2, they’ll get a big fat access denied.

[Screenshot: access denied without MFA]

Great! So they go and setup their MFA device, logout, login again with MFA.

[Screenshot: logging in with MFA]

And voila! Access allowed.

[Screenshot: access allowed with MFA]

Which is great! Really secure; with that policy in place you can’t get in without using MFA.

But what if someone sets up another policy (which itself is lovely and granular, and preserves the principle of least privilege) but forgets the MFA constraint? When you get into more numerous and complicated policies attached variously to groups, users, etc., it becomes cumbersome to audit them all for compliance, even with automation.

Further, what happens when someone gets woken up on call, forgets all about MFA for this particular AWS account (which may well be one of a dozen or so they’re involved with), and then gets access denied when they try to log in? Will they know to set up MFA? Or will they wake someone else up to give them “the right access” to the system?

In any case, until AWS allows MFA to be part of the ‘password policy’ and prompts you to set it up as soon as you login for the first time (and even potentially afterwards depending on how paranoid you are), there’s a need to ensure all your users have MFA setup from the get-go.

The Monitoring

I have the pleasure of using PRTG for monitoring. A capable little tool, but the following code can be adapted for any tool running on Windows.

[CmdletBinding()]
Param(
    [parameter(Mandatory=$true)]
    [string]$accessKey,
    [parameter(Mandatory=$true)]
    [string]$secretKey
)

# Grab the current working directory of the script for the purposes of loading the DLL
$scriptWorkingDirectory = Split-Path -Path $MyInvocation.MyCommand.Definition -Parent

# Ensure you use the .NET 4.5 DLL not the .NET 3.5 DLL from the AWS .NET SDK
# Load AWS API DLL
$AWSAPIFiles = @(
    "$scriptWorkingDirectory\AWSSDK.dll"
)
foreach($apiFile in $AWSAPIFiles){
    
    # Try loading the DLL
    Write-Verbose "Loading $apiFile";
    try{
        $fileStream = ([System.IO.FileInfo] (Get-Item $apiFile)).OpenRead();
    }catch{
        Write-Error $_.exception.message;
        Exit 1;
    }
    
    # Read the contents of the DLL
    $assemblyBytes = New-Object byte[] $fileStream.Length
    $fileStream.Read($assemblyBytes, 0, $fileStream.Length) | out-null;
    $var= $fileStream.Close()

    # Load the library 
    [System.Reflection.Assembly]::Load($assemblyBytes) | out-null;
}

# Set the AWS Access Key and Secret Key for authentication using the .NET SDK
[System.Configuration.ConfigurationManager]::AppSettings["AWSAccessKey"] = $accessKey
[System.Configuration.ConfigurationManager]::AppSettings["AWSSecretKey"] = $secretKey

# Connect to the AWS API
Write-Verbose "Connecting to AWS API";
$client= New-Object -TypeName Amazon.IdentityManagement.AmazonIdentityManagementServiceClient;

# Fetch the list of users that have passwords but not MFA
Write-Verbose "Fetch users that have passwords, but no MFA";
$mfadevices = @()
$usersWithoutMFA = $client.listUsers().ListUsersResult.Users | ?{
        
        # Ensure the user has a password (if they only have a secret key, they don't need MFA)
        try{
            $client.GetLoginProfile($_.username) | Out-Null;
        }catch{
            return $false;
        }
        
        # Return false if they don't have MFA (otherwise we don't care about them as they're doing the right thing!)
        return !$client.ListMFADevices($_.username).MFADevices;
    }

# Output to PRTG
Write-Verbose "Output in a PRTG friendly format (XML)";
Write-Host "
<prtg>
	<result>
		<channel>Number of users without MFA devices registered</channel>
        <value>$(($usersWithoutMFA | Measure-Object).count)</value>
    </result>
    <Text>$(($usersWithoutMFA | select -expandProperty  "Username") -join "; ")</Text>
</prtg>";

# Return success exit code
exit 0;

In order to execute this you need the following pre-requisites:

  1. The .NET 4.5 AWSSDK.dll from the AWS .NET developer’s SDK must be housed in the same directory as the .ps1
  2. PowerShell 4.0 or higher must be installed on the PRTG Probe
  3. .NET 4.5 must be installed on the PRTG probe executing the custom sensor
  4. A user with at least the following privileges in AWS:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1410864868000",
      "Effect": "Allow",
      "Action": [
        "iam:ListUsers",
        "iam:ListMFADevices",
        "iam:GetLoginProfile"
      ],
      "Resource": [
        "arn:aws:iam::123456789012:*"
      ]
    }
  ]
}

Where, again 123456789012 is replaced with your account ID.

In order to get the .NET 4.5 AWSSDK.dll from the AWS .NET developer’s SDK just install the SDK on your machine, then copy AWSSDK.dll from C:\Program Files (x86)\AWS SDK for .NET\bin\Net45 to the directory your script lives in.

This directory should be under your PRTG probe’s Custom Sensors\ExeXML\ directory.
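Before creating the sensor, it’s worth a quick manual test run from a PowerShell prompt on the probe server. A minimal sketch, assuming you’ve saved the script as Get-AWSUsersWithoutMFA.ps1 (the name is arbitrary) and substituting the real keys of the monitoring user you created above (the keys shown are AWS’s documentation placeholders):

.\Get-AWSUsersWithoutMFA.ps1 -accessKey 'AKIAIOSFODNN7EXAMPLE' -secretKey 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY' -Verbose

If everything is in order, you should see the PRTG-friendly XML printed to the console, with the count and names of any offending users.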

Once you’ve done that, you can create a Script/Exe custom sensor in PRTG pointing at your new .ps1 file like so:

[Screenshot: PRTG Script/Exe custom sensor settings]

Set the arguments to reflect the access and secret keys of the AWS user you created earlier.

Once that’s done, you’ll have a sensor that shows the names of the users in your AWS account that have a password, but no MFA device. Great! But how do we alert on that? After all, when that sensor goes into an error state, the message will be replaced with an error message!

No problem, just create a factory sensor that references the first sensor, then create a threshold on the channel.

Create Sensor > Factory Sensor > Properties

#<factory sensor channel ID>:<factory sensor name>
Channel(<custom sensor id>,<custom sensor channel>)
#1:Users without MFA on AWS
Channel(10101,2)

Then set the threshold against the channel like so:
[Screenshot: channel threshold settings]
Voila! You will be alerted whenever you have a user that has a password, but no MFA device associated!

How do you handle this issue in your environment? Any suggestions on how to do this better? Please let me know in the comments!

Further Reading

StackOverflow – Can you require MFA for AWS IAM accounts?

AWS Docs – Configuring and Managing a Virtual MFA Device for Your AWS Account (AWS Management Console)

JeffW@AWS – Allow your user to self-manage a virtual MFA


Getting Started with DSC and PowerShell 5.0 – Part 1 – Installing WordPress with Desired State Configuration

So we’ve checked out the basics of Chef on Windows in Part 1 and Part 2 of Chef On Windows, and with the recent release of the Windows Management Framework 5.0 Preview September 2014, I thought it was time to stick a toe into the water of the Desired State Configuration side of configuration management on Windows.
As quite a lot of intros focus very heavily on the theory and don’t necessarily show a lot of results up front, I’m going to continue the precedent of the previous Chef articles and show you the shortest path to something tangible, hopefully gaining some familiarity with the tech involved along the way.

In Part 1 we’re going to use the WMF 5.0 preview, DSC, and a little bit of OneGet/PowerShellGet (the name seems to be up for discussion at the moment), to install WordPress 4.0 on to a blank VM. In order to do this we’re going to follow the guide laid out in the quick-start of the WordPress PowerShell/DSC module, so all credit goes to the wonderful people who created this module for providing our first entry point into DSC!

Important: You don’t need WMF 5.0 to use DSC, it’s been around since PS 4.0, but the WordPress PowerShell/DSC module we’ll be using requires WMF 5.0 for OneGet.

Important #2: This guide uses a WMF 5.0 preview and DSC modules that are labelled x for eXperimental, don’t use these in production 🙂

Requirements

  1. Blank Windows 2012 R2 VM 
  2. Powershell Understanding – Basic: Microsoft Virtual Academy – Getting Started With PowerShell

We won’t need the VMs we created in the Chef series as we’ll be focussing on just DSC for today.

1) Preparing the VM

As the  WordPress PowerShell/DSC module we’ll be using requires WMF 5.0 for OneGet, we need to go and grab the September 2014 Preview!

Download the WMF 5.0 September 2014 Preview to your 2012 VM from http://www.microsoft.com/en-us/download/details.aspx?id=44070

Now we need to install the xWordPress module and its dependencies.

Whoa whoa whoa, don’t download it from the link! What is this, the 90s? We’ve just installed PowerShell 5.0 and, with it, OneGet; let’s use it!

Open up a PowerShell console and run


Install-Module xWebAdministration -MinimumVersion 1.3.2 -Force

and accept the offer to download NuGet_anycpu.exe.

[Screenshot: Install-Module xWebAdministration]

Now install the remaining modules.


Install-Module xPSDesiredStateConfiguration -MinimumVersion 3.0.1 -Force

Install-Module xMySql -MinimumVersion 1.0 -Force

Install-Module xWordPress -MinimumVersion 1.0 -Force

Install-Module xPhp -MinimumVersion 1.0.1 -Force

Excellent! Okay, where did they go?

$env:ProgramFiles\WindowsPowerShell\Modules folder

[Screenshot: Program Files\WindowsPowerShell\Modules]

Awesome! Since when has that been a thing? WMF 5.0, I assume, but I’m not sure. Getting modules to load automatically has always been a bit of a per-user PITA in the past, so if this is a user-agnostic way of installing PowerShell modules, it’s only a good thing!
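If you’d rather confirm from the console than browse the folder, a quick check against the module names we just installed should do the trick:

Get-Module -ListAvailable xWebAdministration, xPSDesiredStateConfiguration, xMySql, xWordPress, xPhp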

2) Prepare the Configuration

Now we need to grab the sample files from the xWordPress module and customise them to our needs.

Copy the contents of C:\Program Files\WindowsPowerShell\Modules\xWordPress\samples to your Documents folder

[Screenshot: samples copied to my Documents folder]

Open up SingleNodeEndToEndWordPress.ps1 in the PowerShell ISE and check that the Download URLs are still correct for PHP and MySQL.

[Screenshot: MySQL and PHP download URLs]

I only had to change PHP to http://windows.php.net/downloads/releases/archives/php-5.5.14-nts-Win32-VC11-x64.zip, but double check MySQL as well, as it may have changed by the time you read this!
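Incidentally, if you’ve never seen what a DSC configuration actually looks like, here’s a minimal sketch of my own (nothing to do with the xWordPress sample itself) that simply ensures IIS is installed. You declare the desired state, compile it to a MOF, and the Local Configuration Manager makes it so:

Configuration JustIIS
{
    # Built-in resource module; ships with WMF 4.0 and later
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost'
    {
        # Declare the state we want; DSC works out whether anything needs doing
        WindowsFeature IIS
        {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }
    }
}

# Compile the configuration to a MOF, then ask the LCM to apply it
JustIIS -OutputPath "$env:TEMP\JustIIS"
Start-DscConfiguration -Path "$env:TEMP\JustIIS" -Wait -Verbose

The WordPress sample we’re about to run is doing essentially the same thing, just with a much bigger shopping list of resources.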

3) Executing the Configuration

Go back to your PowerShell window, cd into your documents folder and execute SingleNodeEndToEndWordPress.ps1.

This will perform the following tasks (at least):

  1. Install IIS
  2. Install PHP and dependencies
  3. Install MySQL
  4. Install WordPress into IIS with * port 80 HTTP bindings.

[Screenshot: SingleNodeEndToEndWordPress running]

After some time, your system will restart to complete the installation.

[Screenshot: DSC restarting the computer]

Once it’s restarted, DSC will continue to configure the computer. To see the progress, go to the DSC event log (Event Viewer > Applications and Services Logs > Microsoft > Windows > Desired State Configuration > Operational).

[Screenshot: DSC event log]

Once you see “Warning” “The local configuration manager was shut down”, your new WordPress site should be ready! Check out Localhost in IE!

[Screenshot: WordPress 4.0 default installation]

Ooh, this is the first time I’ve seen WordPress 4.0 default installation! First impressions are very monochrome, but eh, that’s what themes are for!

Summary

So what have we achieved here?

We’ve used community provided modules for DSC/PowerShell to install WordPress and all its dependencies, including IIS, PHP, and MySQL.

Was this easier than doing all the work ourselves, clicking through installers and typing out config ourselves? Much!

Does it mean we no longer need Chef and all that work we did in the past couple of posts was unnecessary? Not at all!

Does this illustrate the power and flexibility of DSC and OneGet? No, we’re just getting started!

I’ll be writing a subsequent post to dig in and write our own DSC module/template/whatever-the-correct-nomenclature-is, but I suspect that bringing what we’ve learned today into Chef with Chef’s new DSC evaluation release recipes will be the post immediately following this one.

Further Reading

Steven Murawski

Everything Else

Getting Started with Chef on Windows Server – Part 2 – Chef Server & Bootstrapping

Now that we’ve done Part 1 – Configure a Package & Service, we can start getting a little more into the meat of Chef: centralisation. In the previous scenario we had defined a single recipe and applied it locally. Very simple, not very useful. In this part, we’re going to create a Chef Server, upload the recipe we created in the previous part to it, and then bootstrap another VM using it.

This is a relatively long winded setup, and if you’re itching to get started I highly recommend running through the LearnChef.com Redhat Enterprise Linux tutorial which even provides you with the VMs and hosted Chef Server, which will get your feet wet and started on the road to Chef. If, however, you’re interested in getting slightly deeper into Chef, step right this way.

Requirements

  1. Ubuntu Server VM
  2. IMPORTANT: The 2012 R2 VM you made in the previous part of this series
  3. Powershell Understanding – Basic: Microsoft Virtual Academy – Getting Started With PowerShell
  4. Basic understanding of what Chef is (ideal, but not required).
  5. Basic Linux knowledge

1) Setting up the Chef Server

“Wait, what? Ubuntu Server? What happened to the “On Windows” part of this? I thought that was the whole point!”

Unfortunately at the time of writing Chef Server is only available on Linux. So in order to manage our Windows servers we’re going to need an Ubuntu Server VM on which we can install Chef Server. Don’t worry, Chef Server isn’t really the focus here, we just need it for configuration centralisation and user management.

There are a couple of alternatives to self-hosting a Chef Server including: Opscode Hosted Chef, and OpsWorks (and probably others). The former looks pretty sexy if you’ve got the cash to splash, and their free trial is crucial in the learnchef.com examples which we’re blatantly ripping off.

  1. Spin up an Ubuntu Server instance, make sure it has its own IP, can talk to our old 2012 VM, and access to the internet
  2. Visit http://www.getchef.com/chef/install, ensure you click “Open Source Chef Server 11” and select the latest version of Chef, then copy the URL the link provides you with into notepad.
  3. In your Ubuntu Server VM enter:
    wget [downloadURL]

    e.g.

    wget https://opscode-omnibus-packages.s3.amazonaws.com/ubuntu/12.04/x86_64/chef-server_11.1.3-1_amd64.deb

    This will download the chef-server installation file to your current directory.

  4. Once that’s complete, execute the installer using:
    sudo dpkg -i chef-server*.deb
  5. Set up Chef Server using the following command (you won’t be asked for any details):
    sudo chef-server-ctl reconfigure
  6. Once the configuration is complete, you’re done! You can visit the server in your browser at https://<ip of your ubuntu server>

2) Log on to the Chef Server and Download Credentials

You will need the following private keys in order to set up the workstation we previously created on our 2012 VM to talk to our new Chef Server.

  1. An administrative user (in our case, admin)
  2. A validator user (in our case, chef-validator)

To get these credentials, login to your new Chef Server (https://<ip of ubuntu server>) using the default credentials:

Username: admin
Password: p@ssw0rd1

Note the lowercase p in the password, this is not an MS educational sample!

[Screenshot: Chef Server login page]

You will be immediately prompted to save the ‘admin’ user’s private key; save this to your desktop as chef-admin.pem.

[Screenshot: private key prompt]

Now navigate to

Clients > chef-validator > Edit > Regenerate Private Key

[Screenshot: chef-validator client]

to download the validator’s private key. Save it into a text file called chef-validator.pem on your desktop.

3) Setup the Development Kit in your 2012 VM to Talk to Chef Server

Now we’re going to highlight a distinction that we did not draw in our previous article (mostly because I didn’t really know it existed). That is the difference between a Workstation and a Chef Client.

You’ll remember that we installed both the Development Kit and the Chef Client on to our VM previously. Well, as you might imagine, the devkit isn’t something you need on every server, as it’s what we were using to create our recipes and templates. The Development Kit is something you’d (I’d guess) install on a bastion server or RD Gateway, allowing you to author your recipes and then upload them to your Chef Server to be deployed elsewhere.

One of the big advantages of configuration management is the fact that you can version control your configuration, and to this end we’re going to place our existing recipes into a repository based on the Github Chef repo. Why exactly the repo needs to be based on the full Chef repo from Opscode I’m not sure, but I’m not inclined to contest the official documentation!

On your 2012 VM from the previous article, download GIT from http://www.git-scm.com/download/win ensuring you tick “Use GIT from the Windows Command Prompt” when asked.

[Screenshot: Git installation options]

Once installed, open PowerShell, cd into your Documents folder, and run:

git clone git://github.com/opscode/chef-repo.git

[Screenshot: cloning the Opscode chef-repo]

This will pull down the latest copy of the Chef repo from Github and form the basis of our new working directory.

Once complete, create a folder inside the new ‘chef-repo’ folder called .chef  (you’ll probably need to use mkdir as the Windows UI won’t let you create a folder starting with a ‘.’) and copy the two pem files you downloaded from the Chef server earlier into it:

[Screenshot: .chef folder with the two .pem files]

Because these files are secret, we don’t want to sync them with our source repo, so open up .gitignore and check that the .chef folder is already ignored.

Important: Because I didn’t have a domain available to me, I lacked the FQDNs required for communication with the Chef Server. To work around this in my test environment, I simply added an entry to the hosts file on my 2012 VM with the IP of the Chef Server and named it chef-server.fakedomain, which worked fine. (You will also need to do this on the machine you’re bootstrapping later.)
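If you need to do the same, a one-liner from an elevated PowerShell prompt will add the entry (the IP below is just an example; use your Chef Server’s actual address):

# Append a hosts file entry for the Chef Server (run as administrator)
Add-Content -Path "$env:windir\System32\drivers\etc\hosts" -Value "`n192.168.1.10 chef-server.fakedomain"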

Now we can configure Knife to talk to our new Chef Server by running

knife configure --initial

Which will prompt for the following info:

Location of Config File: <accept default>
Chef Server URL: https://<ubuntu server IP>
Name for New User: w2k12a
Existing Admin Name: admin
Location of Existing Admin’s Private Key: C:\users\<yourname>\documents\chef-repo\.chef\chef-admin.pem
Validation Client Name: chef-validator
Location of Validation Key: C:\users\<yourname>\documents\chef-repo\.chef\chef-validator.pem
Path to Chef Repo: C:\Users\<yourname>\Documents\chef-repo\
Password for New User: <your choice>

[Screenshot: knife configure --initial]

And you’re done! Your workstation is now setup to talk to your Chef Server. Next we need to upload the recipe we created previously and bootstrap an unwitting victim server.

4) Upload Recipe & Bootstrap a New Server

In the previous article we created a basic recipe which installed IIS and amended the default.htm to say “Hello World!”, which is perfect for an illustration of how to take a completely blank server and bootstrap it with a specific recipe.

Upload Recipe to Chef Server

Now that your workstation (old 2012 VM) is setup to talk to our Chef server, we can upload the ‘webserver’ recipe we created locally last time.

Copy the “webserver” directory from C:\chef\cookbooks into the repo we just created C:\users\<yourname>\documents\chef-repo\cookbooks\ (if the cookbooks subfolder doesn’t exist, create it).
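If you prefer to do the copy from PowerShell, something like this works (paths as used in the previous article; adjust to taste):

# Create the cookbooks folder if it doesn't exist, then copy the recipe into it
New-Item -ItemType Directory -Force -Path "$env:USERPROFILE\Documents\chef-repo\cookbooks" | Out-Null
Copy-Item -Path 'C:\chef\cookbooks\webserver' -Destination "$env:USERPROFILE\Documents\chef-repo\cookbooks\" -Recurse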

On the same server, run:

knife cookbook upload webserver

[Screenshot: knife cookbook upload webserver]

Bam, simple as that! Your webserver recipe is now available to any server configured to talk to our Chef server.

Bootstrap a New Server

Go off and spin yourself up a new 2012 server, I’ll wait.

Once you’re done, we’ll need to

  1. Add your chef server’s FQDN (e.g. chef-server.fakedomain) to the new server’s host file if like me you didn’t have a DNS server to hand.
  2. Enable Windows Remote Management on the new server
  3. Install a plugin for Knife on our workstation (the old VM).

Enable Windows Remote Management

On your fresh 2012 server run the following to allow remote access and set the recommended remoting settings from Chef (I neglected the MaxMemoryPerShellMB setting because W2012’s is higher than 300MB already).

Enable-PSRemoting -force
Set-Item WSMan:\localhost\MaxTimeoutms 1800000
Set-Item WSMan:\localhost\Service\AllowRemoteAccess $true
Set-Item WSMan:\localhost\Service\Auth\Basic $true
Set-item WSMan:\localhost\Service\AllowUnencrypted $true

Why Enable-PSRemoting and not  Set-WSManQuickConfig? Simply because using Invoke-Command from the workstation to the Chef Client is an easy way to troubleshoot connectivity issues.

Important: Do not copy this and use it in your production environment! Use it for testing and PoC and take the time to use proper encrypted auth in your production environment.

On your workstation (old 2012 server) run the following to allow the server to reach out and remote on to the server we’re going to bootstrap.

Set-Item wsman:\localhost\Client\TrustedHosts -value *

Important: Again, don’t copy this straight into production, use a value like *.contoso.com to allow your AD domain’s computers only.
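With both ends configured, a quick sanity check from the workstation confirms WinRM is actually reachable before we let Knife loose (the IP is an example; the credential is a local administrator on the new server):

# Hypothetical connectivity test - should return the remote machine's name
$cred = Get-Credential
Invoke-Command -ComputerName 192.168.1.50 -Credential $cred -ScriptBlock { $env:COMPUTERNAME }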

Install Knife-Windows and Bootstrap Server

Hop back onto your old 2012 VM (the one we configured as a workstation with Chef DK) and run the following:

gem install knife-windows

This will call out and download the knife-windows plugin which allows bootstrapping via WinRM instead of the default SSH.

Once that’s done, it’s one simple command (well, kinda) to call out and install Chef Client and execute your Recipe on your new VM!

knife bootstrap windows winrm [new 2012 server ip] -x [windows admin username] -P [password] --node-name node1 --run-list 'recipe[webserver]' -V

(I’ve included -V for verbose because this took nearly ten minutes on my ageing-laptop-powered VMs and I wanted some feedback in the meantime.)

[Screenshot: bootstrap starting]

Some time later…

[Screenshot: bootstrap complete]

Knife has now reached out to your blank 2012 VM, downloaded the MSI for Chef Client, installed it, and applied your ‘webserver’ recipe, which in turn installed IIS and populated Default.htm.

Did it work? The moment of truth… put http://<ip of your new server> into your browser

[Screenshot: it worked!]

Holy crap it actually worked!

Synopsis

So what have we actually achieved here? We’ve taken a recipe for installing IIS and an extremely basic custom website that was previously only applicable locally, and uploaded it to our own locally hosted Chef Server, allowing us to execute it remotely even when Chef isn’t already installed.

We’ve only scratched the surface of Chef here, and there are loads of questions to ask and answer, e.g.:

  1. How does Chef benefit from Desired State Configuration?
  2. How do I define per-server or per-environment settings like connection strings?
  3. How do I manage databases?
  4. How do I manage service account credentials?
  5. How do I deal with my existing executable installers?
  6. How do I manage upgrades?

And so on ad infinitum. Some of these may be answered in upcoming posts about OneGet and Desired State Configuration, others may be the subject of a further introduction-to-concepts blog post, depending on how well I get on with Chef. All are, I’m sure, answerable with appropriate research though. If you know of any useful conceptual introductions on Chef, please share them in the comments!

Further Reading

Install the Server on a Virtual Machine

How to Install a Chef Server, Workstation, and Client on Ubuntu VPS Instances

Managed Reference for WinRM Windows PowerShell Command Classes

Enable and Use Remote Commands in Windows PowerShell

Using the LeanKit API with PowerShell

As I alluded to in an earlier post, I’ve been using PowerShell to interact with the LeanKit API. You can find the rationale and overarching methodology in the post linked above. Here we’ll be dealing with the nuts and bolts.

Approach 1 – Using the .NET framework provided by LeanKit (FIXED by John Mathias)

Initially I attempted to perform this task by importing the LeanKit API Client Library for .NET into PowerShell using [System.Reflection.Assembly]::LoadFile(), but ultimately couldn’t get it to authenticate successfully.

The code snippet in question is below, if anyone can point out where I went wrong, I would be most grateful.

UPDATED: Fixed! John Mathias from LeanKit was kind enough to point out that I was mistakenly using the entire URL to populate the ‘hostname’ field. Change it to using just the subdomain, and it works a treat!

$scriptWorkingDirectory = Split-Path -Path $MyInvocation.MyCommand.Definition -Parent

# Define variables
$boardID = 01234;
$boardAddress = "subdomain"; # Your leankit subdomain, e.g. JUST the 'subdomain' part of https://subdomain.leankit.com
$boardUser = "user@example.com";

# Load LeanKit API Library dependencies
[System.Reflection.Assembly]::LoadFile("$scriptWorkingDirectory\LeanKit.API.Client.Library\LeanKit.API.Client.Library.dll") | out-null
[System.Reflection.Assembly]::LoadFile("$scriptWorkingDirectory\LeanKit.API.Client.Library\Newtonsoft.Json.dll") | out-null
[System.Reflection.Assembly]::LoadFile("$scriptWorkingDirectory\LeanKit.API.Client.Library\RestSharp.dll") | out-null

# Create Authentication object so we can feed it into our factory object's creation
$leanKitAuth = New-Object LeanKit.API.Client.Library.TransferObjects.LeanKitAccountAuth -Property @{
    Hostname = $boardAddress;
    Username = $boardUser;
    Password = $([Runtime.InteropServices.Marshal]::PtrToStringAuto([Runtime.InteropServices.Marshal]::SecureStringToBSTR($(read-host "Pass" -assecurestring))));
}

# Connect and authenticate
Write-Verbose "Connecting to $($leanKitAuth.Hostname)";
$leankitApi = $(New-Object LeanKit.API.Client.Library.LeanKitClientFactory).Create($leanKitAuth);

# Create a new card
$newCard = @{
    Title = "New card!!!";
    Description =  "Oh my yes, it's a new card!";
    TypeID=01234;
}
$newCard = New-Object LeanKit.API.Client.Library.TransferObjects.Card -Property $newCard;
$leankitApi.AddCards($boardID,[LeanKit.API.Client.Library.TransferObjects.Card[]]@($newCard), "Automation!!!")

# Get the board
$leankitBoard = $leankitAPI.GetBoard($boardID);

# Get the card we just added
$card = $leankitBoard.alllanes().cards[0];

# Convert it to a card rather than a view
$card = $card.toCard()

# Change it slightly
$card.Title = "That's no card!!!"

# Update it!
$leankitApi.UpdateCards($boardID,[LeanKit.API.Client.Library.TransferObjects.Card[]]$card,"Automation");

Originally, the above code resulted in a very long wait at the final step, where it would (according to Fiddler2) make several calls to a blank HTTPS address, so I could only assume that the $leanKitAuth object wasn’t getting properly passed to the .Create() method.

With the hostname fix above, the code now works properly! Thanks John!
It also uses the plural versions of UpdateCards with the appropriate typing so you can pass an array of card objects when you have multiple cards to update.

Approach 2 – Invoke-RestMethod

Ultimately, PowerShell’s Invoke-RestMethod is absolutely perfect for the job anyway, so I decided to leave this approach here as an example, even though the code above now works.

Step 1) Getting your board

I created two very basic functions in order to get a board.

Set-LeanKitAuth

function Set-LeanKitAuth{
    param(
        [parameter(mandatory=$true)]
        [string]$url,

        [parameter(mandatory=$true)]
        [System.Management.Automation.PSCredential]$credentials
    )
    $script:leanKitURL = $url;
    $script:leankitCreds = $credentials
    return $true;
}

Get-LeanKitBoard

function Get-LeanKitBoard{
    param(
        [parameter(mandatory=$true)]
        [int]$BoardID
    )

    if(!($script:leanKitURL -and $script:LeanKitCreds)){
        throw "You must run set-leankitauth first"
    }

    [string]$uri = $script:leanKitURL + "/Kanban/Api/Boards/$boardID/"
    return Invoke-RestMethod -Uri $uri  -Credential $script:leankitCreds
}

The idea here is that you only have to call Set-LeanKitAuth once at the beginning of the script, then your credentials are pervasive throughout the subsequent calls.

So to use the above functions, you would have a snippet like so:

Set-LeanKitAuth -url "https://myteam.leankit.com" -credentials (Get-Credential)

$leankitBoard = Get-LeanKitBoard -BoardID 1234567890

(Obviously replacing the URL and BoardID as appropriate.)
This will prompt you for your username and password (namely your email address and password), and then put the resulting board in $leanKitBoard.

Data to get you started

  • Lanes: $leanKitBoard.ReplyData.Lanes
  • Backlog: $leanKitBoard.ReplyData.BackLog
  • Lane Cards: $leanKitBoard.ReplyData.Lanes.Cards
  • Backlog Cards: $leanKitBoard.ReplyData.BackLog.Cards
  • CardTypes: $leanKitBoard.ReplyData.cardtypes
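To dig out the IDs you’ll need for the calls below (lane IDs, card type IDs, and so on), you can simply poke around that response. For example (the property names here are what the API returned for me at the time of writing, so treat them as assumptions and inspect the objects yourself):

$leanKitBoard.ReplyData.Lanes | Select-Object Id, Title
$leanKitBoard.ReplyData.CardTypes | Select-Object Id, Name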

Step 2) Adding Cards

In order to add cards using PowerShell, I whipped up another function, similar to the first.

Add-LeanKitCards

function Add-LeanKitCards{

    param(
        [parameter(mandatory=$true)]
        [int]$boardID,

        [parameter(mandatory=$true)]
        [ValidateScript({
            if($_.length -gt 100){
                #"You cannot pass greater than 100 cards at a time to add-LeankitCards"
                return $false;
            }
           if(
                ($_ |?{$_.UserWipOverrideComment}).length -lt $_.length
               ){
                # "All cards must have UserWipOverrideComment when passing to Update-LeankitCards";
                return $false;
            }
            return $true;
        })]
        [hashtable[]]$cards
    )

    [string]$uri = $script:leanKitURL + "/Kanban/Api/Board/$boardID/AddCards?wipOverrideComment=Automation"
    return Invoke-RestMethod -Uri $uri  -Credential $script:leankitCreds -Method Post -Body $(ConvertTo-Json $cards ) -ContentType "application/json" 

}

This requires you to pass a hashtable (or an array of hashtables) with the appropriate values to the parameter -cards.

Here’s an example:

$cardArray = @();
$cardArray += @{
    Title = "This is a new card";
    Description =  "Oh my, so it is. A fresh card!";
    TypeID=1234567890;
    laneID=1234567890
    UserWipOverrideComment = "Automation! Yeah!"
}

 Add-LeanKitCards -boardID 1234567890 -cards $cardArray

Again, obviously, replacing the numbers with IDs meaningful to your environment (use the examples in Data to get you started above to help you find what these IDs would be in your environment).

Step 3) Updating Cards

Updating cards is a little trickier, as you must provide more data. But before we get into that, here’s the function we’ll use (almost identical to the one above).

Update-LeankitCards

function Update-LeankitCards{

    param(
        [parameter(mandatory=$true)]
        [int]$boardID,

        [parameter(mandatory=$true)]
        [ValidateScript({
            if($_.length -gt 100){
                # "You cannot pass greater than 100 cards at a time to Update-LeankitCards"
                return $false;
            }
            if(
                ($_ |?{$_.UserWipOverrideComment}).length -lt $_.length
               ){
                # "All cards must have UserWipOverrideComment when passing to Update-LeankitCards";
                return $false;
            }
             if(
                ($_ |?{$_.ID}).length -lt $_.length
               ){
                # "All cards must have an ID when passing to Update-LeankitCards";
                return $false;
            }
            return $true;
        })]
        [hashtable[]]$cards
    )

    [string]$uri = $script:leanKitURL + "/Kanban/Api/Board/$boardID/UpdateCards?wipOverrideComment=Automation"
    return Invoke-RestMethod -Uri $uri  -Credential $script:leankitCreds -Method Post -Body $(ConvertTo-Json $cards) -ContentType "application/json"
}

Obviously we’ll have to pass the card ID in the array, but while playing around with this, it seemed like you needed more than that. In the end I just created a new hashtable of all the properties of the card I’m updating and then changed the ones I wanted to update. Like so:

# Get a card from a previous board response (I'm getting the first one here with '[0]', you'll probably want to choose your card more carefully)
$card = $leanKitBoard.ReplyData.lanes.cards[0]

# Create the hashtable
$updatedCard = @{UserWipOverrideComment = "No override"};

# Populate the hashtable with all the properties of the card we selected previously.
$card | gm | ?{$_.membertype -eq "NoteProperty"} | %{$updatedCard.add($_.name, $card.$($_.name))}

# Change the parameters you want to change
$updatedCard.LaneID = 01234567890;

# Add the card to an array
$cardArray = @($updatedCard);

# Submit it!
Update-LeankitCards -boardID 1234567890 -cards $cardArray

I shouldn’t need to say it again, but obviously, change the board ID etc to reflect your environment.

And that’s it! Easy really. There’s a lot more that you can do, for which I suggest you see the additional reading section.
Any questions, just leave them in the comments.

Additional Reading

https://support.leankit.com/forums/20153741-LeanKit-API-Application-Programming-Interface-

https://support.leankit.com/entries/20265038-API-Basics

http://technet.microsoft.com/en-us/library/hh849898.aspx

LeanKit integration with ticketing system (using PowerShell)

I’m no expert on Kanban by any means, but ever since reading The Phoenix Project, I’ve been dying to try it out in the workplace.

For me, there are four key things that I think Kanban can help us do that our current tools can’t:

  1. Identify queue time & bottlenecks
  2. Visualise Work In Process (WIP)
  3. Prioritise work (Queued or In Process)
  4. Promote mono-tasking by enforcing WIP limits

History

However, we’ve recently come out the far side of a meta-project to improve the visibility and predictability of project work by introducing a tool called Clarizen, which has resulted in the tool being scrapped.

While I think there are a lot of things that the Clarizen team could do to improve the usability of the product, it could be the easiest to use, most visual, and most efficiency-promoting tool in the world, and it still wouldn’t have gained widespread adoption in my team.

Why? Because it’s Yet Another Input.

We already have:

  1. Emails (as much as we try to funnel work through the ticket system, this will always be an input).
  2. IM/Lync (as emails)
  3. Ticket system
  4. Monitoring system (alerts don’t raise tickets currently, though I’m toying with proposing the idea. It’s a matter of how to reduce spam).
  5. Personal to-do list (some of the team use Wunderlist, others use Outlook, I’ve got a couple of post-its hanging round in addition).
  6. Meetings (hopefully these feed into 3 or 5, or at least 1)

Adding Clarizen on top of that (which, to be truly effective, must be kept up to date as the tasks progress), especially as another location ‘to look’ for work, turned out to be too much. The team tried their best to keep it up to date, but usually the frequency and reaction was “Oh damn, I really should check my Clarizen queue”, me being one of the worst offenders in this regard.

Rationale

With this in mind, for Kanban to succeed, it needs to reduce the total amount of work without increasing the amount of meta-work/distraction for the team. This means that we need a single place, a canonical source, to look at for deciding what to work on next.

Ideally, this might be Kanban. As I alluded to at the start, our existing ticketing system is poxy awful for assisting us in prioritisation, visualisation and enforcing work in process. So why not use Kanban as canon and the ticket system as reflective? Three reasons:

  1. Inertia (as much as we hate our current ticket system, it’s wormed its way solidly into our workflow)
  2. Customer facing (customers raise work through our tickets, and we can’t expect them to derive data from our Kanban board as to progress!)
  3. Richness of progress – Kanban lanes are great for a birds’ eye view, but sometimes you just have to scribble down an IP address

So, given that for the above reasons we have to keep the ticket system, (and thereby if we want to run Kanban, do it in tandem), how do we minimise the meta-work of synchronising the state of work between the Kanban board and the ticket system?

PowerShell!

Why PowerShell? Our team is a Windows-centric operations team; the first half of that description shies us away from Python, Ruby, etc. and the second half from C#, VB.net, etc., as we want something as script-like as possible.

I did originally try bringing the LeanKit API Client Library for .NET into PowerShell using [System.Reflection.Assembly]::LoadFile(), and while I initially couldn’t get it to authenticate, I ultimately succeeded with @JohnDMathis’ help!

The full post detailing the technical aspects can be found here; this one covers the idea behind it, the basic methodology, and the limitations of the current setup from a Kanban perspective.

Methodology

Tickets to Cards

Our ticket system is not tailored for our purposes, so the only metadata we can use is:

  • Title
  • Assignee
  • Last Updated
  • Ticket number
  • Priority

Still, that gives us reasonable flexibility to do as we will with the card once it’s in our Kanban board.

Naturally, we need to turn on the “Card ID” option on our Leankit board so that we can identify which cards correspond with which tickets. It also gives us a nice header across the top showing the ticket number.

Because we have such limited metadata concerning the actual status of the ticket, we’re very limited as to the complexity of the lanes we can have. Our Kanban board is currently just “Queue”, “Assigned” (broken down into team-members), and “Done”.

This doesn’t reflect anywhere near the potential of Kanban to achieve our first target from the beginning of this post (identify queue time & bottlenecks), but it’s a start for the other objectives. Greater granularity to unleash the full potential of Kanban will have to wait until Phase 2.

Temporarily, we’ve assigned an arbitrary amount of time to add to the “last updated” value from the ticket and set that as the “due date” of the card, to give us an indication of the ‘freshness’ of the task. There may be a better way of visualising this, but we’re still experimenting.
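For illustration, the ‘freshness’ calculation is nothing more sophisticated than something like this ($ticket and the date format are assumptions specific to our ticket system and LeanKit account settings):

# A week's grace on top of the ticket's last-updated time becomes the card's due date
$dueDate = ([datetime]$ticket.LastUpdated).AddDays(7).ToString('dd/MM/yyyy')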

The workflow of the script goes as follows:

  1. Get tickets and cards
  2. Identify new tickets which don’t have cards
  3. Create cards for new tickets
  4. Refresh list of cards
  5. Loop through cards, identifying those which need updating
  6. Submit updated cards

Simple, but crucially, in step #5 only the lane and priority are updated. This means we can rename cards, change their type, expedite them, etc., and the changes won’t get overwritten the next time the script runs.

This allows the people who need to understand the overall view to manipulate the WIP in a way that’s meaningful to them, without either burdening the team members with updating metadata which is not useful to them, nor requiring the people interested in the enriched data to go into individual tickets and update them themselves.
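For the curious, here’s a heavily simplified sketch of that loop. It assumes the Set-LeanKitAuth/Get-LeanKitBoard/Add-LeanKitCards/Update-LeankitCards functions from the previous post are already loaded, that Get-Ticket is a hypothetical wrapper around our ticket system’s API, and that the board’s “Card ID” (ExternalCardID) option is enabled so cards can be matched to ticket numbers. All IDs are placeholders.

# --- Broker script sketch: one-way sync from ticket system to LeanKit ---
$boardID   = 1234567890
$queueLane = 1234567890                                       # lane ID for "Queue"
$laneMap   = @{ 'alice' = 1111111111; 'bob' = 2222222222 }    # per-team-member lanes

Set-LeanKitAuth -url "https://myteam.leankit.com" -credentials (Get-Credential) | Out-Null

# 1. Get tickets and cards
$tickets = Get-Ticket -Status Open           # hypothetical ticket system cmdlet
$board   = Get-LeanKitBoard -BoardID $boardID
$cards   = @($board.ReplyData.Lanes.Cards) + @($board.ReplyData.BackLog.Cards)

# 2 & 3. Create cards for tickets that don't have one yet (matched on ExternalCardID)
$newCards = $tickets | ?{ $cards.ExternalCardID -notcontains $_.TicketNumber } | %{
    @{
        Title                  = $_.Title;
        ExternalCardID         = $_.TicketNumber;
        TypeID                 = 1234567890;
        LaneID                 = $queueLane;
        UserWipOverrideComment = "Created by broker script";
    }
}
if($newCards){ Add-LeanKitCards -boardID $boardID -cards @($newCards) | Out-Null }

# 4. Refresh the list of cards
$cards = @((Get-LeanKitBoard -BoardID $boardID).ReplyData.Lanes.Cards)

# 5. Identify cards whose lane or priority no longer matches their ticket
$updatedCards = foreach($card in $cards){
    $ticket = $tickets | ?{ $_.TicketNumber -eq $card.ExternalCardID }
    if(!$ticket){ continue }

    $targetLane = if($ticket.Assignee -and $laneMap[$ticket.Assignee]){ $laneMap[$ticket.Assignee] }else{ $queueLane }
    if($card.LaneID -eq $targetLane -and $card.Priority -eq $ticket.Priority){ continue }

    # Copy every property of the existing card, then change only lane and priority
    $update = @{ UserWipOverrideComment = "Synced from ticket system" }
    $card | gm -MemberType NoteProperty | %{ $update[$_.Name] = $card.$($_.Name) }
    $update.LaneID   = $targetLane
    $update.Priority = $ticket.Priority
    $update
}

# 6. Submit updated cards
if($updatedCards){ Update-LeankitCards -boardID $boardID -cards @($updatedCards) }

Anything beyond lane and priority is deliberately left alone, which is what lets people enrich the cards by hand without the script stomping on their changes.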

Downsides

Without modification, there are a few downsides to this idea. We can’t achieve any of the following:

  • Bottleneck identification (source data not rich enough to auto-populate more than 3 lanes)
  • Wait-time identification (source data not rich enough)
  • Rich workflow representation (source data not rich enough)

All of which are problems stemming from the fact that we’re updating the lane from the ticket. If we were to simply use the broker script as a way of inputting the cards initially, we would be able to increase the lane complexity and the above problems would go away.

However, they would be replaced with the synchronicity problem, something we’re not ready to tackle yet.

Phase 2

Ultimately, we want to be able to support a more workflow-reflective Kanban board based on the ticket metadata. This could be done either by having meaningful ticket categorisation and statuses, or by achieving full buy-in from every member of the team as to the importance and efficacy of Kanban. But I suspect the latter option would, as observed with Clarizen, come at the price of one system or another.

Hopefully the former option will be realised with future updates to (or replacement of) our ticket system. But if anyone has any other ideas on how to achieve lane complexity without increasing the work of the team in maintaining two systems, I’m all ears!