Using the LeanKit API with PowerShell

As I alluded to in an earlier post, I’ve been using PowerShell to interact with the LeanKit API. You can find the rationale and overarching methodology in the post linked above. Here we’ll be dealing with the nuts and bolts.

Approach 1 – Using the .NET framework provided by LeanKit (FIXED by John Mathias)

Initially I attempted to perform this task by importing the LeanKit API Client Library for .NET into PowerShell using [System.Reflection.Assembly]::LoadFile(), but ultimately couldn’t get it to authenticate successfully.

The code snippet in question is below, if anyone can point out where I went wrong, I would be most grateful.

UPDATED: Fixed! John Mathias from LeanKit was kind enough to point out that I was mistakenly using the entire URL to populate the ‘hostname’ field. Change it to using just the subdomain, and it works a treat!

$scriptWorkingDirectory = Split-Path -Path $MyInvocation.MyCommand.Definition -Parent

# Define variables
$boardID = 01234;
$boardAddress = "subdomain"; # Your leankit subdomain, e.g. JUST the 'subdomain' part of https://subdomain.leankit.com
$boardUser = "user@example.com";

# Load LeanKit API Library dependencies
[System.Reflection.Assembly]::LoadFile("$scriptWorkingDirectory\LeanKit.API.Client.Library\LeanKit.API.Client.Library.dll") | out-null
[System.Reflection.Assembly]::LoadFile("$scriptWorkingDirectory\LeanKit.API.Client.Library\Newtonsoft.Json.dll") | out-null
[System.Reflection.Assembly]::LoadFile("$scriptWorkingDirectory\LeanKit.API.Client.Library\RestSharp.dll") | out-null

# Create Authentication object so we can feed it into our factory object's creation
$leanKitAuth = New-Object LeanKit.API.Client.Library.TransferObjects.LeanKitAccountAuth -Property @{
    Hostname = $boardAddress;
    Username = $boardUser;
    Password = $([Runtime.InteropServices.Marshal]::PtrToStringAuto([Runtime.InteropServices.Marshal]::SecureStringToBSTR($(read-host "Pass" -assecurestring))));
}

# Connect and authenticate
Write-Verbose "Connecting to $($leanKitAuth.Hostname)";
$leankitApi = $(New-Object LeanKit.API.Client.Library.LeanKitClientFactory).Create($leanKitAuth);

# Create a new card
$newCard = @{
    Title = "New card!!!";
    Description =  "Oh my yes, it's a new card!";
    TypeID=01234;
}
$newCard = New-Object LeanKit.API.Client.Library.TransferObjects.Card -Property $newCard;
$leankitApi.AddCards($boardID,[LeanKit.API.Client.Library.TransferObjects.Card[]]@($newCard), "Automation!!!")

# Get the board
$leankitBoard = $leankitAPI.GetBoard($boardID);

# Get the card we just added
$card = $leankitBoard.alllanes().cards[0];

# Convert it to a card rather than a view
$card = $card.toCard()

# Change it slightly
$card.Title = "That's no card!!!"

# Update it!
$leankitApi.UpdateCards($boardID,[LeanKit.API.Client.Library.TransferObjects.Card[]]$card,"Automation");

Before John's fix, the above code would result in a very long wait at the final step, where it would (according to Fiddler2) make several calls to a blank HTTPS address; I could only assume that the $leanKitAuth object wasn't being passed properly to the .Create() method.

The above code now works properly! Thanks John!
It also uses the plural AddCards and UpdateCards methods with the appropriate typing, so you can pass an array of card objects when you have multiple cards to add or update.

Approach 2 – Invoke-RestMethod

Ultimately, PowerShell's Invoke-RestMethod is absolutely perfect for the job anyway; I originally turned to it in lieu of getting the framework working, and I'm leaving it here as an example even though the code above now works.

Step 1) Getting your board

I created two very basic functions in order to get a board.

Set-LeanKitAuth

function Set-LeanKitAuth{
    param(
        [parameter(mandatory=$true)]
        [string]$url,

        [parameter(mandatory=$true)]
        [System.Management.Automation.PSCredential]$credentials
    )
    $script:leanKitURL = $url;
    $script:leankitCreds = $credentials
    return $true;
}

Get-LeanKitBoard

function Get-LeanKitBoard{
    param(
        [parameter(mandatory=$true)]
        [int]$BoardID
    )

    if(!($script:leanKitURL -and $script:LeanKitCreds)){
        throw "You must run set-leankitauth first"
    }

    [string]$uri = $script:leanKitURL + "/Kanban/Api/Boards/$boardID/"
    return Invoke-RestMethod -Uri $uri  -Credential $script:leankitCreds
}

The idea here is that you only have to call Set-LeanKitAuth once at the beginning of the script, then your credentials are pervasive throughout the subsequent calls.

So to use the above functions, you would have a snippet like so:

Set-LeanKitAuth -url "https://myteam.leankit.com" -credentials (Get-Credential)

$leankitBoard = Get-LeanKitBoard -BoardID 1234567890

(Obviously replacing the URL and BoardID as appropriate.)
This will prompt you for your username and password (namely your email address and password), and then put the resulting board in $leanKitBoard.

Data to get you started

  • Lanes: $leanKitBoard.ReplyData.Lanes
  • Backlog: $leanKitBoard.ReplyData.BackLog
  • Lane Cards: $leanKitBoard.ReplyData.Lanes.Cards
  • Backlog Cards: $leanKitBoard.ReplyData.BackLog.Cards
  • CardTypes: $leanKitBoard.ReplyData.cardtypes
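
For example, here's how you might dig out the IDs you'll need in the later steps (a sketch: the property names are as they appear in the API's JSON response, and the "Queue" lane is hypothetical):

# List lane titles and IDs
$leanKitBoard.ReplyData.Lanes | Select-Object Title, Id

# List card types and their IDs
$leanKitBoard.ReplyData.CardTypes | Select-Object Name, Id

# Grab the ID of a hypothetical lane called "Queue" for use when adding cards
$queueLaneID = ($leanKitBoard.ReplyData.Lanes | Where-Object {$_.Title -eq "Queue"}).Id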

Step 2) Adding Cards

In order to add cards using PowerShell, I whipped up another function, similar to the first.

Add-LeanKitCards

function Add-LeanKitCards{

    param(
        [parameter(mandatory=$true)]
        [int]$boardID,

        [parameter(mandatory=$true)]
        [ValidateScript({
            if($_.length -gt 100){
                #"You cannot pass greater than 100 cards at a time to add-LeankitCards"
                return $false;
            }
           if(
                ($_ |?{$_.UserWipOverrideComment}).length -lt $_.length
               ){
                # "All cards must have UserWipOverrideComment when passing to Update-LeankitCards";
                return $false;
            }
            return $true;
        })]
        [hashtable[]]$cards
    )

    [string]$uri = $script:leanKitURL + "/Kanban/Api/Board/$boardID/AddCards?wipOverrideComment=Automation"
    return Invoke-RestMethod -Uri $uri  -Credential $script:leankitCreds -Method Post -Body $(ConvertTo-Json $cards ) -ContentType "application/json" 

}

This requires you to pass a hashtable (or an array of hashtables) with the appropriate values to the parameter -cards.

Here’s an example:

$cardArray = @();
$cardArray += @{
    Title = "This is a new card";
    Description =  "Oh my, so it is. A fresh card!";
    TypeID=1234567890;
    laneID=1234567890
    UserWipOverrideComment = "Automation! Yeah!"
}

 Add-LeanKitCards -boardID 1234567890 -cards $cardArray

Again, obviously, replace the numbers with IDs meaningful to your environment (the examples in Data to get you started above will help you find them).

Step 3) Updating Cards

Updating cards is a little trickier, as you must provide more data. But before we get into that, here's the function we'll use (almost identical to the one above).

Update-LeankitCards

function Update-LeankitCards{

    param(
        [parameter(mandatory=$true)]
        [int]$boardID,

        [parameter(mandatory=$true)]
        [ValidateScript({
            if($_.length -gt 100){
                # "You cannot pass greater than 100 cards at a time to Update-LeankitCards"
                return $false;
            }
            if(
                ($_ |?{$_.UserWipOverrideComment}).length -lt $_.length
               ){
                # "All cards must have UserWipOverrideComment when passing to Update-LeankitCards";
                return $false;
            }
             if(
                ($_ |?{$_.ID}).length -lt $_.length
               ){
                # "All cards must have an ID when passing to Update-LeankitCards";
                return $false;
            }
            return $true;
        })]
        [hashtable[]]$cards
    )

    [string]$uri = $script:leanKitURL + "/Kanban/Api/Board/$boardID/UpdateCards?wipOverrideComment=Automation"
    return Invoke-RestMethod -Uri $uri  -Credential $script:leankitCreds -Method Post -Body $(ConvertTo-Json $cards) -ContentType "application/json"
}

Obviously we’ll have to pass the card ID in the array, but while playing around with this, it seemed like you needed more than that. In the end I just created a new hashtable of all the properties of the card I’m updating and then changed the ones I wanted to update. Like so:

# Get a card from a previous board response (I'm getting the first one here with '[0]', you'll probably want to choose your card more carefully)
$card = $leanKitBoard.ReplyData.lanes.cards[0]

# Create the hashtable
$updatedCard = @{UserWipOverrideComment = "No override"};

# Populate the hashtable with all the properties of the card we selected previously.
$card | gm | ?{$_.membertype -eq "NoteProperty"} | %{$updatedCard.add($_.name, $card.$($_.name))}

# Change the parameters you want to change
$updatedCard.LaneID = 01234567890;

# Add the card to an array
$cardArray = @($updatedCard);

# Submit it!
Update-LeankitCards -boardID 1234567890 -cards $cardArray

I shouldn’t need to say it again, but obviously, change the board ID etc to reflect your environment.

And that’s it! Easy really. There’s a lot more that you can do, for which I suggest you see the additional reading section.
Any questions, just leave them in the comments.

Additional Reading

https://support.leankit.com/forums/20153741-LeanKit-API-Application-Programming-Interface-

https://support.leankit.com/entries/20265038-API-Basics

http://technet.microsoft.com/en-us/library/hh849898.aspx

LeanKit integration with ticketing system (using PowerShell)

I’m no expert on Kanban by any means, but ever since reading The Phoenix Project, I’ve been dying to try it out in the workplace.

For me, there are four key things that I think Kanban can help us do that our current tools can’t:

  1. Identify queue time & bottlenecks
  2. Visualise Work In Process (WIP)
  3. Prioritise work (Queued or In Process)
  4. Promote mono-tasking by enforcing WIP limits

History

However, we've recently come out the far side of a meta-project to improve the visibility and predictability of project work by introducing a tool called Clarizen, and it has ended with the tool being scrapped.

While I think there are a lot of things that the Clarizen team could do to improve the usability of the product, it could be the easiest to use, most visual, and most efficiency-promoting tool in the world, and it still wouldn’t have gained widespread adoption in my team.

Why? Because it’s Yet Another Input.

We already have:

  1. Emails (as much as we try to funnel work through the ticket system, this will always be an input).
  2. IM/Lync (as emails)
  3. Ticket system
  4. Monitoring system (alerts don’t raise tickets currently, though I’m toying with proposing the idea. It’s a matter of how to reduce spam).
  5. Personal to-do list (some of the team use Wunderlist, others use Outlook, I’ve got a couple of post-its hanging round in addition).
  6. Meetings (hopefully these feed into 3 or 5, or at least 1)

Adding Clarizen on top of that (which, to be truly effective, must be kept up to date as tasks progress), especially as another location 'to look' for work, turned out to be too much. The team tried their best to keep it up to date, but the usual reaction was "Oh damn, I really should check my Clarizen queue", me being one of the worst offenders in this regard.

Rationale

With this in mind, for Kanban to succeed, it needs to reduce the total amount of work without increasing the amount of meta-work/distraction for the team. This means that we need a single place, a canonical source, to look at for deciding what to work on next.

Ideally, this might be Kanban. As I alluded to at the start, our existing ticketing system is poxy awful for assisting us in prioritisation, visualisation and enforcing work in process. So why not use Kanban as canon and the ticket system as reflective? Three reasons:

  1. Inertia (as much as we hate our current ticket system, it’s wormed its way solidly into our workflow)
  2. Customer facing (customers raise work through our tickets, and we can’t expect them to derive data from our Kanban board as to progress!)
  3. Richness of progress – Kanban lanes are great for a bird's-eye view, but sometimes you just have to scribble down an IP address

So, given that for the above reasons we have to keep the ticket system, (and thereby if we want to run Kanban, do it in tandem), how do we minimise the meta-work of synchronising the state of work between the Kanban board and the ticket system?

PowerShell!

Why PowerShell? Our team is a Windows-centric operations team; the first half of that description shies us away from Python, Ruby, etc. and the second half from C#, VB.net, etc., as we want something as script-like as possible.

I did originally try bringing the LeanKit API Client Library for .NET into PowerShell using [System.Reflection.Assembly]::LoadFile(); I couldn't get it to authenticate at first, but ultimately succeeded with @JohnDMathis' help!

The full post detailing the technical aspects can be found here; this one covers the idea behind it, the basic methodology, and the limitations of the current setup from a Kanban perspective.

Methodology

Tickets to Cards

Our ticket system is not tailored for our purposes, so the only metadata we can use is:

  • Title
  • Assignee
  • Last Updated
  • Ticket number
  • Priority

Still, that gives us reasonable flexibility to do as we will with the card once it’s in our Kanban board.

Naturally, we need to turn on the “Card ID” option on our Leankit board so that we can identify which cards correspond with which tickets. It also gives us a nice header across the top showing the ticket number.

Because we have such limited metadata concerning the actual status of the ticket, we’re very limited as to the complexity of the lanes we can have. Our Kanban board is currently just “Queue”, “Assigned” (broken down into team-members), and “Done”.

This doesn't come anywhere near the full potential of Kanban to achieve our first target from the beginning of this post (identify queue time & bottlenecks), but it's a start for the other objectives. Greater granularity to unleash the full potential of Kanban will have to wait until Phase 2.

Temporarily, we’ve assigned an arbitrary amount of time to add to the “last updated” value from the ticket and set that as the “due date” of the card, to give us an indication of the ‘freshness’ of the task. There may be a better way of visualising this, but we’re still experimenting.
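
As a rough illustration, the calculation is trivial (a sketch: the seven-day window, the date format, and the $ticket/$card objects are all assumptions, not LeanKit requirements):

# Mark the card 'due' a week after the ticket was last updated
$card.DueDate = (Get-Date $ticket.LastUpdated).AddDays(7).ToString("dd/MM/yyyy")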

The workflow of the script goes as follows:

  1. Get tickets and cards
  2. Identify new tickets which don’t have cards
  3. Create cards for new tickets
  4. Refresh list of cards
  5. Loop through cards, identifying those which need updating
  6. Submit updated cards

Simple, but crucially, in step #5 only the lane and priority are updated. This means we can rename cards, change their type, expedite them, etc., and the changes won't get overwritten the next time the script runs.

This allows the people who need the overall view to manipulate the WIP in a way that's meaningful to them, without burdening team members with updating metadata that isn't useful to them, and without requiring the people interested in the enriched data to go into individual tickets and update them by hand.
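
For the curious, here's a minimal sketch of steps 1-3 using the functions from the technical post. Get-OurTickets is a hypothetical stand-in for whatever your ticket system exposes, and ExternalCardID is my assumption for the field behind the "Card ID" option:

# Step 1: get tickets and cards
$tickets = Get-OurTickets
$board = Get-LeanKitBoard -BoardID 1234567890
$cards = @($board.ReplyData.Lanes.Cards) + @($board.ReplyData.BackLog.Cards)

# Step 2: identify new tickets which don't yet have cards
$newTickets = @($tickets | ?{$cards.ExternalCardID -notcontains $_.TicketNumber})

# Step 3: create cards for the new tickets
$newCards = @($newTickets | %{
    @{
        Title = $_.Title;
        ExternalCardID = $_.TicketNumber;
        TypeID = 1234567890;
        LaneID = 1234567890;  # your 'Queue' lane
        UserWipOverrideComment = "Automation";
    }
})
if($newCards.Count -gt 0){ Add-LeanKitCards -boardID 1234567890 -cards $newCards }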

Downsides

Without modification, there are a few downsides to this idea. We can’t achieve any of the following:

  • Bottleneck identification (source data not rich enough to auto-populate more than 3 lanes)
  • Wait-time identification (source data not rich enough)
  • Rich workflow representation (source data not rich enough)

All of which are problems stemming from the fact that we’re updating the lane from the ticket. If we were to simply use the broker script as a way of inputting the cards initially, we would be able to increase the lane complexity and the above problems would go away.

However, they would be replaced with the synchronicity problem, something we’re not ready to tackle yet.

Phase 2

Ultimately, we want to be able to support a more workflow-reflective Kanban board based on the ticket metadata. This could be done either by having meaningful ticket categorisation and statuses, or by achieving full buy-in from every member of the team as to the importance and efficacy of Kanban. But I suspect the latter option would, as observed with Clarizen, come at the price of one system or another.

Hopefully the former option will be realised with future updates to (or replacement of) our ticket system. But if anyone has any other ideas how to achieve lane complexity without increasing the work of the team in maintaining two systems, I'm all ears!

LogGrapher.com

It’s finally here!

I started working on loggrapher.com at the beginning of last year after getting frustrated with vCenter’s tiny, inflexible charts and Excel’s inability to deal with values that were ‘per line’ rather than ‘per column’.
It’s designed to deal with any line-graphable performance data (though I hope to add event-log graphing at a later date) that is formatted in CSV (UTF-8).

Coding loggrapher.com proved something of a challenge, partly because I used it as an opportunity to finally start writing object-oriented JavaScript, and partly because I wanted to make it very easy to use. At least as much time has been spent consulting my girlfriend (an aspiring UX designer) on usability and layout as has been spent figuring out how to code it.

Performance

The most difficult aspect however, was the performance. The logs I wanted to parse were frequently more than 100MB in size, and as I wanted to make this available to the public, I was determined not to incur any server-side manipulation costs. This meant manipulating all the data in client side JavaScript, a problem I’d never tackled before.

Getting the CSV into memory was the first hurdle; even FileReader was new to me (and I'm still annoyed that readAsText was deprecated, as it was a lot less hassle than readAsArrayBuffer). I had to split the file into 100,000-byte chunks, as trying to read it all at once resulted in the browser crashing more often than not.

Then there was the processing of the CSV, converting it from something like

"17","09/03/2013 19:31:00","vmhost3 - vmdatastore04"
"0","09/03/2013 19:31:00","vmhost3 - vmdatastore03"
"1","09/03/2013 19:31:00","vmhost4 - vmdatastore02"

into a JavaScript object that the graphing plugin could read took tens of seconds. Not a long time to wait for your graph, but long enough for the browser to decide that the JavaScript had hung and mark the tab as crashed.

Enter web workers, another new frontier for me. They allow you to send an object off to a script you've defined in a separate .js file, which will then crunch your object in a different thread (thereby preventing UI draw blocking) and spit the result back out. They're not much fun to work with, as you can't use any additional plugins in the separate thread, won't get any errors from the separate thread, and obviously can't manipulate the DOM from the separate thread. However, they were absolutely perfect for what I wanted to do.

So I'd managed to load the CSV into memory and parse it into a format that made sense to the charting plugin. Phew, that's my work done! Over to the plugin to actually do all the clever stuff and draw everything. And there we hit another snag. Originally I was using HighCharts to do the graph rendering. However, HighCharts doesn't do well with more than 30,000 points, and drops to a simpler turbo mode after (by default) 1,000 points. This was a problem: I wanted to render 200,000+ points. After checking all the more common alternatives (jqPlot, Chart.js, jsCharts, gRaphaël, etc.) I was starting to despair, until I happened across jqChart, which specifically boasts high performance. My initial testing proved it could handle up to 2,000,000 points and still be workable (slow, but workable). I can't express how awesome jqChart is. So much faster than its competitors, and still incredibly flexible and easy to use.

Use Case

Using vCenter on a daily basis is great, using it without vcOps? Less great. That’s where our friends PowerShell, PowerCLI and specifically, Get-Stat come in handy!

Say we wanted to get the realtime CPU stats for a given VM, open up the PowerCLI console window and try the following:

Connect-Viserver localhost

Get-Stat -entity $(get-vm vmname) -realtime -stat cpu.usage.average | export-csv C:\vmname-cpustats.csv

Replacing "vmname" with your VM's name, and the filename with something to your liking, of course!
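
If you want something meatier to graph, Get-Stat will happily take several counters and multiple VMs at once. A sketch (check that these counters exist in your environment):

Get-Stat -Entity (Get-VM) -Realtime -Stat cpu.usage.average, mem.usage.average | Export-Csv C:\allvms-stats.csv -NoTypeInformation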

Then you simply navigate to loggrapher.com (in a compatible browser!), click Start Graphing, configure your first log source, select the file, select the columns, make sure the date format is correct, then graph away!

Using It – Feedback Please!

As I mentioned earlier, one of the objectives with this tool was to make it easy to use. However, it's not until it's released into the wild that it becomes obvious how difficult a given piece of software is to use. Any and all feedback would be very much appreciated; there is a brief FAQ that I hope addresses some of the questions people might have. But please leave a comment, or use the UserVoice page.

If you need some data to get started with, you can download an example CSV file here.

Happy graphing!

Scripting Backup Exec 2012 – P3 – Customising and Adding Backup Tasks

Introduction

Part 3 will cover creating and customising backup tasks for our newly created backup definition. So far we’ve only accepted the defaults, so unless you’re happy with the predefined schedule, retention period and secondary backup type, you’re going to have to start customising your tasks!

Customising and Adding Backup Tasks

As we've already created a backup definition from the BackupToDisk defaults, we'll have a new definition containing a full backup and an incremental backup.

It's important to note that a backup definition will always contain a full backup task; you cannot delete it, only modify it. And in order to do so, we must first obtain our definition as a variable. There are two options for doing this: either you can assign your new definition to a variable as you create it, like so:

$backupDefinition = New-BEBackupDefinition -BackupJobDefault BackupToDisk -FileSystemSelection $selection -SystemStateSelection Excluded -name "New Backup Definition" -AgentServer $agentServer

Or, you can retrieve one you made earlier using:

$backupDefinition = Get-BEBackupDefinition -name "New Backup Definition"

Gotcha: In theory these two are interchangeable, but on a number of occasions I found that using the second method (Get-BEBackupDefinition) immediately after saving or creating a new backup definition caused the cmdlet to complain it couldn't find a definition by that name. So on the whole I would recommend the former over the latter.

Cleaning Up and Scheduling

As I’ve already mentioned, you can only ever customise the full backup task, but as we also have an incremental task in our definition, let’s delete that before we get started.

$backupDefinition = $($backupDefinition | Remove-BEBackupTask * -Confirm:$false | Save-BEBackupDefinition -confirm:$false | Get-BEBackupDefinition)

In this section of code, we're piping the $backupDefinition variable to Remove-BEBackupTask, which uses a wildcard ("*") to delete all backup tasks (except the full, as you can't delete that!). This then pipes the modified backup definition to the Save-BEBackupDefinition cmdlet, which commits the change.

Gotcha: Note the $() around the piped commands, without this, you won’t get the modified definition back, just the output of the first pipe.

Double Gotcha: Since writing this post I’ve discovered that occasionally the definition would not be passed correctly to the variable, causing subsequent cmdlets that I passed it to to claim that it was null. This is why I’ve put the “| Get-BEBackupDefinition” at the end, as this, oddly, seems to resolve the problem.

Now that we’ve cleared out the cobwebs, let’s see what we need in order to customise a task:

  • Schedule
  • Storage
  • Retention Period

The schedule we create using:

$fullBackupSchedule = New-BESchedule -WeeklyEvery "Saturday" -startingAt "8am"

We’re going to use this to modify our full backup to run weekly on a Saturday at 8am. There are, naturally, a huge range of options in the New-BESchedule cmdlet, I recommend you consult the help file mentioned in Part 1.

Gotcha: Where the help file says you can specify multiple days by comma separating them, you cannot. You must insert them into an array first like so:

[…] New-BESchedule -WeeklyEvery @("Friday", "Saturday", "Sunday") […]

Modifying and Creating Tasks

The schedule is the only tricky part of modifying or creating a backup task. For this purpose there are six commands you need to know:

  • Set-BEFullBackupTask
  • Set-BEIncrementalBackupTask
  • Set-BEDifferentialBackupTask
  • Add-BEFullBackupTask
  • Add-BEIncrementalBackupTask
  • Add-BEDifferentialBackupTask

You can probably guess: Set-* is for modifying existing tasks, Add-* is for creating new ones.

So, first up is modifying our existing task using the $schedule we created earlier.

$backupDefinition = $(Set-BEFullBackupTask -backupDefinition $backupDefinition -Name * -DiskStorageKeepForHours 672 -schedule $fullBackupSchedule -storage "Full Backup Storage Pool" | Save-BEBackupDefinition -confirm:$false)

This passes the existing backup definition from the $backupDefinition variable, using the asterisk wildcard to set ALL full backup tasks for this definition (there's only one) to keep their storage for 4 weeks (672 hours), run according to our previously defined schedule in $fullBackupSchedule, and back up to the "Full Backup Storage Pool". It then pipes the modified definition to the Save-BEBackupDefinition cmdlet, which saves the change. All this is encapsulated by $(), which ensures the piped commands are executed fully before the output (the newly altered and committed definition) is saved into $backupDefinition.

The Script So Far

Phew! So far we’ve got a script that creates a new backup definition, deletes the incremental default and modifies the full backup task to our specifications. So what does it look like in full?

# Select the correct server
$agentServer = Get-BEAgentServer "server11*"

# Include the following directories
$selection = @(New-BEFileSystemSelection -path "C:\*" -PathIsDirectory -Recurse)
$selection += New-BEFileSystemSelection -path "Z:\*" -PathIsDirectory -Recurse

# Exclude the following directories
$selection += New-BEFileSystemSelection -path "C:\Windows\*" -PathIsDirectory -Exclude

# Create the backup definition based on defaults
$backupDefinition = New-BEBackupDefinition -BackupJobDefault BackupToDisk -FileSystemSelection $selection -SystemStateSelection Excluded -name "New Backup Definition" -AgentServer $agentServer

# Delete the default incremental
$backupDefinition = $($backupDefinition | Remove-BEBackupTask * -Confirm:$false | Save-BEBackupDefinition -confirm:$false | Get-BEBackupDefinition)

# Create a full backup schedule
$fullBackupSchedule = New-BESchedule -WeeklyEvery "Saturday" -startingAt "8am"

# Modify the default full backup to reflect our new schedule and retention period
$backupDefinition = $(Set-BEFullBackupTask -backupDefinition $backupDefinition -Name * -DiskStorageKeepForHours 672 -schedule $fullBackupSchedule -storage "Full Backup Storage Pool" | Save-BEBackupDefinition -confirm:$false)

# Create a differential backup schedule
$differentialBackupSchedule = New-BESchedule -WeeklyEvery @("Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday") -startingAt "9pm"

# Add a differential backup task with our new schedule and retention period
$backupDefinition = $(Add-BEDifferentialBackupTask -backupDefinition $backupDefinition -Name * -DiskStorageKeepForHours 168 -schedule $differentialBackupSchedule -storage "Differential Backup Storage Pool" | Save-BEBackupDefinition -confirm:$false)

As you can see, I added a new differential backup task running at 9pm every day that the full doesn't run, with a retention period of 1 week. Hopefully, from the explanation of the previous cmdlets, you can figure out how to customise it to suit your tastes.

Future Posts

That’s it for now! I will shortly post two new entries on Backup Exec 2012 on the following topics:

  • Integrating Backup Exec with PRTG Monitoring
  • Creating a more robust job creation script

Take care, and don’t forget to leave a comment!

Scripting Backup Exec 2012 – P2 – Creating a New Backup with Powershell

Introduction

In Part 2 of Scripting Backup Exec 2012, I’ll be illustrating how to create a Windows file backup from scratch using Powershell. For a basic rundown of the components, concepts and gotchas for this process, please see Part 1.

Prerequisites:

  1. A fully patched and up to date installation of Backup Exec 2012
  2. Powershell and (advantageous) basic Powershell knowledge
  3. An unrestricted execution policy
  4. A pre-installed and trusted Windows Agent added to the Backup Exec server.

The Script

As mentioned in Part 1, to successfully create a backup job we need to programmatically assemble the following components:

  • Backup Definition (superordinate container)
    • Backup Agent (the server we’re backing up)
    • Filesystem Selection (what files we’re backing up)
    • System State Selection (whether we’re backing up the system state)
    • Backup Job Default (the default template which we will base this on)
    • Backup Definition Name
  • Backup Task
    • Backup Definition
    • Retention Period (in hours)
    • Schedule

Some of these items (the definition itself, the agent server and the schedule) must be created using a cmdlet; some (such as the filesystem selection) can be specified as a simple string or created via a cmdlet for extra flexibility; and the rest are specified as a plain string or integer.

Creating the Backup Definition

The cmdlet used to create a new definition is

New-BEBackupDefinition

But before we do that, we need to specify which server we’re going to be backing up to, which will be done using:

Get-BEAgentServer "server11*"

You'll note the asterisk at the end. This is to allow for the possibility of a fully qualified server name (i.e. a server name with the domain on the end). Unfortunately, Backup Exec doesn't seem to be consistent about whether it adds servers with just the hostname or fully qualified.

So a quick and simple bringing together of these commands would be the following:

$agentServer = Get-BEAgentServer "server11*"

New-BEBackupDefinition -BackupJobDefault BackupToDisk -FileSystemSelection "C:\*" -SystemStateSelection Excluded -name "New Backup Definition" -AgentServer $agentServer

As you can probably guess, this will create a backup-to-disk definition called “New Backup Definition” that backs up the C:\ drive, but not the system state, of a server called “server11”.

Gotcha: If you have a slightly myopic naming convention, this will also match "server111". There are easy ways around this, but I'll cover that in a later part.
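
In the meantime, here's one sketch of a workaround, assuming Get-BEAgentServer with no arguments returns every agent server (and that the objects expose a Name property):

# Match 'server11' or 'server11.<domain>', but not 'server111'
$agentServer = Get-BEAgentServer | Where-Object { $_.Name -match '^server11(\..+)?$' }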

Multiple Directory Selection & Exclusions

The code above is all well and good if you just want to backup the C:\ drive, but what if you want to backup, say, the Z:\ drive as well? Simple! Put them in an array!

$selection = @("C:\*", "Z:\*")
New-BEBackupDefinition […] -FileSystemSelection $selection […]

Easy, right? But what if you want to exclude a directory? Then you’ll need to use our new friend:

New-BEFileSystemSelection

For example, if we want to include C:\ and Z:\, but exclude C:\Windows, we would do something like the following:

# Include the following directories
$selection= @(New-BEFileSystemSelection -path “C:\*”  -PathIsDirectory -Recurse)
$selection += New-BEFileSystemSelection -path “Z:\* -PathIsDirectory -Recurse

#Exclude the following directories
$selection += New-BEFileSystemSelection -path “C:\Windows\* -PathIsDirectory -Exclude

And then pass it to New-BEBackupDefinition.

The Script so Far

So far we haven’t done anything especially complicated, we’ve created a backup definition from defaults, with customised selection and pointed at the client we specified. So far our script looks like this:

# Select the correct server
$agentServer = Get-BEAgentServer "server11*"

# Include the following directories
$selection = @(New-BEFileSystemSelection -path "C:\*" -PathIsDirectory -Recurse)
$selection += New-BEFileSystemSelection -path "Z:\*" -PathIsDirectory -Recurse

# Exclude the following directories
$selection += New-BEFileSystemSelection -path "C:\Windows\*" -PathIsDirectory -Exclude

# Create the backup definition based on defaults
New-BEBackupDefinition -BackupJobDefault BackupToDisk -FileSystemSelection $selection -SystemStateSelection Excluded -name "New Backup Definition" -AgentServer $agentServer

It's important to note that at this point the definition will contain the tasks defined in the default "BackupToDisk" job, namely a Full and an Incremental with fairly short retention periods.

How do we customise the tasks? You'll see in Part 3, Customising and Adding Backup Tasks!

Scripting Backup Exec 2012 – P1 – Introduction, References & Gotchas

Preamble

Introduction

Over the past week, I’ve been trialling and implementing Backup Exec 2012 to replace an existing Backup Exec 2010 setup. Although Googling for Backup Exec 2012 reviews gets you a great deal of negativity and vitriol about the new interface, I have to say that I find it far more intuitive than the previous system. Where previously even something as simple as adding a backup client to the server required some headscratching and occasionally some swearing, Backup Exec 2012 now centres the interface around common tasks, rather than a semi-arbitrary architecture that probably made a great deal of sense to the programmers.

That isn't to say it's all perfect. The decision to have graphically intensive screen animations for a utility that's going to be used primarily over an RDP connection is unfathomable, and the inability to turn them off (as far as I can ascertain) is even more so. The logic behind the locations of some of the actions is a little confusing as well. For example, in order to delete a backup job, you must be in the 'tree view' of the job page; if you are in the list view, the option is greyed out for no apparent reason. My understanding is that this is due to a lack of transparency of the internal workings (which I was just praising), or to put it another way, a disconnect between the internal workings and the way you would expect it to work: namely "Jobs" in the GUI versus "Definitions" and "Tasks" in the back end.

This Series & the Problem at Hand

Over the next few posts I will be introducing the basic concepts and practicalities of scripting job creation in BE2012 as I understand them. I do not claim to be an expert in either Powershell or Backup Exec, so I am more than open to correction in the comments.

The reason we're interested in this is one of the most complained-about changes in BE2012: agents in a job are far more separate than they were in 2010. Where a selection list in 2010 made it fairly trivial to flick through different clients and select files and folders on an individual basis, the process is much more time-consuming in 2012: it requires going into each server individually, hitting an edit button, waiting for the options to load, and so on. While this isn't especially time-consuming for an individual server (call it a 10-20 second overhead), when you get into the hundreds of servers it becomes incredibly tedious, and it's largely a waste of time when you have a server estate with broadly similar selection requirements.

This post will primarily be an introduction to concepts, some gotchas, and a dollop of praise for 2012.

Scripting

Intro

By far the most wonderful thing about Backup Exec 2012 is the fact that its scripting interface is built in Powershell and it actually works. My experience with the command line interface (BEMCMD) of Backup Exec 2010 was nothing short of atrocious. Baffling switches combined with poorly formatted output were enough to put anyone off, but even once you waded through that, it didn’t even work! You would ask it “Restore W:\FTPsite\* for server XXX” and it would go away for half an hour, think about it, and say “no matches found” even though you had asked it a few moments prior, “What backed up on server XXX recently?” and it replied “W:\FTPSite\”.

In this series we will be covering only file backups using the Windows agent, not because DB and Linux backups aren’t important, but because I simply have no experience of scripting them in Backup Exec.

Concepts

This series is going to primarily revolve around the creation of jobs, though I will do a separate article on PRTG monitoring and may expand to restores and other scripted processes later.

There are four major components to a ‘backup job’:

  1. Definition
  2. Tasks
  3. Schedule
  4. Selection

A definition is effectively a container for the tasks; it specifies which client we're backing up and what selection we're backing up.

A task is your actual backup job, it specifies what type of backup you’re doing (full, incremental, differential), what schedule you’re backing up on, your retention period and your storage device.

A schedule is, unsurprisingly, the schedule according to which your tasks happen: which days/weeks/months/hours it will run.

A selection is, again unsurprisingly, the selection of files that you will be backing up. This is applied to the definition.

GOTCHA: A definition must always have a Full Backup task; you can never delete it, only modify it.

References

The canonical reference is, of course, the BEMCLI help file, which can be found on the Symantec Website.

GOTCHA: At the time of writing, the .chm file available for download at this link appears unusable; its pages render blank.

GOTCHA UPDATE: As Roderick Bant points out in the comments (and as is now mentioned on the page linked), the above gotcha is untrue. To get round it: "[…] right-click the file and select Properties. Under the General tab, you will see a message that says 'This file came from another computer and might be blocked to help protect this computer'. Click the Unblock button."

Alternatively, you can simply copy the same file from your Backup Exec installation directory. I would host it here, but I expect I'd get a cease and desist order fairly quickly from Symantec.

GOTCHA: Some of the documentation in the BEMCLI help file is inaccurate. For example, it claims you pass multiple selection directories by simply comma separating them. This is incorrect, you must create them in an array, then pass the array to the cmdlet.

With this in mind, the help file is still an invaluable reference tool, once you learn to take its teachings with a pinch of salt.
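
The same documentation is also available from within the shell itself, which sidesteps the .chm issue entirely:

Import-Module BEMCLI
Get-Command -Module BEMCLI        # list every BEMCLI cmdlet
Get-Help New-BESchedule -Full     # full parameter documentation for a given cmdlet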

Bugs

The base installation of Backup Exec 2012 I found to be somewhat buggy when dealing with scripts. I would often get SQL locking errors that would cause my jobs to fail to be inserted.

Thankfully, after installing the latest BE2012 hotfixes on the server (which, admittedly I should have done originally), most of these problems went away.

Next Time…

Next up is Scripting Backup Exec 2012 – P2 – Creating a New Backup with Powershell, where we'll create a basic backup job using the Powershell cmdlets in BE2012.

Adding internet shortcuts and Steam games to the start screen in Windows 8

With most programs in Windows 8, you can simply right-click and "Pin to Start".

But with Steam games and internet shortcuts, you can’t do this.

Instead, you simply move the shortcut to the following location:

C:\ProgramData\Microsoft\Windows\Start Menu\Programs

Once you've moved it, go into the start screen, right-click, and hit "All apps" in the bottom right.

Then find your newly added shortcut, right-click it, and hit "Pin to Start".

And voila! You have your “internet shortcut” pinned to your start screen.

I hope you’re all enjoying Windows 8!