Archive for the ‘Scripting’ Category

Scripted Microsoft Patch Removal

Posted: September 19, 2011 in Scripting, Security

Many patch management systems have the ability to uninstall previously deployed patches. This functionality is typically used when a conflict with the patch is discovered after deployment.

Unfortunately, many of the automated removal mechanisms depend on the patch developer supporting and including a patch removal mechanism for each patch. When a patch doesn’t support this functionality, the management platform falls short in supporting this function.

As a workaround, here are a couple of scripts that will provide a semi-automated approach that can be used to remove patches remotely.

First, this script will list the patches installed on a system in the past X days where X is a command line parameter:


#Get list of patches installed in the past X days
param($Argument1,$Argument2)
Add-Type -AssemblyName Microsoft.VisualBasic

$ComputerName = $Argument1
$intDays = $Argument2
$bolDaysValid = $true

#Validate second parameter - must be a number between 1 and 1000
If ([Microsoft.VisualBasic.Information]::isnumeric($intDays)) {
    If ($intDays -lt 1 -or $intDays -gt 1000) {$bolDaysvalid = $false }
    else {$bolDaysValid = $true}
}
else {
    $bolDaysValid = $false
}

If ($bolDaysValid -eq $false) {
    write-host "Invalid days parameter. Please use a number between 1 and 1000."
    write-host "Example: .\PatchList.ps1 mycomputer 30"
    write-host "Exiting"
    Exit
}

#Validate first parameter - WMI call to get OS
$OS = Get-WmiObject -Class win32_OperatingSystem -namespace "root\CIMV2" -ComputerName $computerName -ErrorAction silentlycontinue
if ($OS -eq $NULL) {
    write-host "Can’t access computer $ComputerName. Exiting."
Exit
}
#Get list of updates
Get-WmiObject -Computername $ComputerName Win32_QuickFixEngineering | ? {$_.InstalledOn} | where { (Get-date($_.Installedon)) -gt (get-date).adddays(-$intDays) }
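For example, assuming the script is saved as PatchList.ps1 (the name used in its own help text), listing the patches applied to a server in the last month would look like this (SERVER01 is a placeholder for the target computer name):

#List patches installed on SERVER01 in the past 30 days
.\PatchList.ps1 SERVER01 30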


Then, once the research is complete and the offending patch is found, the following script can be used to remotely remove the patch.


param($Argument1,$Argument2)
Add-Type -AssemblyName Microsoft.VisualBasic

$computername=$Argument1
$hotfixid=[string]$Argument2

#Second parameter validation - make sure it starts with 'KB' followed by a number
If (-not (($hotfixid.substring(0,2) -eq "KB") -and ([Microsoft.VisualBasic.Information]::isnumeric($hotfixid.substring(2))))) {
write-host "Invalid hotfix parameter. Please use ‘KB’ and the article number."
write-host "Example: .\PatchRemove.ps1 mycomputer KB976432"
write-host "Exiting"
Exit
}

#First parameter validation - get OS to be used later. If call fails, bad parameter
$OS = Get-WmiObject -Class win32_OperatingSystem -namespace "root\CIMV2" -ComputerName $computerName -ErrorAction silentlycontinue
if ($OS -eq $NULL) {
write-host "Can’t access computer $ComputerName. Exiting."
Exit
}
#Get hotfix list from target computer
$hotfixes = Get-WmiObject -ComputerName $computername -Class Win32_QuickFixEngineering |select hotfixid           

#Search for requested hotfix
if($hotfixes -match $hotfixID) {
    $hotfixNum = $HotfixID.Replace("KB","")
    Write-host "Found the hotfix KB " $HotfixNum
    Write-Host "Uninstalling the hotfix"
    #Windows 2008/R2 use WUSA to uninstall patch
    if ($OS.Version -like "6*") {
        $UninstallString = "cmd.exe /c wusa.exe /uninstall /KB:$hotfixNum /quiet /norestart"
          $strProcess = "wusa"
    }
    #Windows 2003 use spuninst in $NTuninstall folder to uninstall patch
    elseif ($OS.Version -like "5*") {
        $colFiles = Get-WMIObject -ComputerName $computername -Class CIM_DataFile -Filter "Name=`"C:\\Windows\\`$NtUninstall$HotFixID`$\\spuninst\\spuninst.exe`""
        if ($colfiles.FileName -eq $NULL) {
        Write-Host "Could not find removal script, please remove the hotfix manually."
        }
        else {
            $UninstallString = "C:\Windows\`$NtUninstallKB$hotfixNum`$\spuninst\spuninst.exe /quiet /z"
              $strProcess = "spuninst"
        }
    }
    #Send removal command
    ([WMICLASS]"\\$computername\ROOT\CIMV2:win32_process").Create($UninstallString) | out-null           
    #Wait for removal to finish
    while (@(Get-Process $strProcess -computername $computername -ErrorAction SilentlyContinue).Count -ne 0) {
        Start-Sleep 3
        Write-Host "Waiting for update removal to finish …"
    }
    #Test removal by getting hotfix list again
    $afterhotfixes = Get-WmiObject -ComputerName $computername -Class Win32_QuickFixEngineering |select hotfixid           
    if($afterhotfixes -match $hotfixID) {
        write-host "Uninstallation of $hotfixID succeeded"
    }
    else {
        write-host "Uninstallation of $hotfixID failed"
    }
}
else {
    write-host "Hotfix $hotfixID not found"
    Return
}


Note that these scripts were tested on servers running Windows 2003 or later.


The Office 365 Deployment Assistant is a great tool for planning and configuring an Office 365 migration. Several steps require PowerShell work, and you can use PowerShell running locally on an on-premises server to configure Office 365 remotely. This is very convenient but holds a little gotcha.

The Microsoft instructions for connecting remotely to an Office 365 installation include the following commands:

$LiveCred = Get-Credential

$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/powershell/ -Credential $LiveCred -Authentication Basic -AllowRedirection

Import-PSSession $Session

While it is true that these commands will create the remote session, after running the last command you will see a long list of cmdlets that were not imported from the remote session. That is because commands with the same names are already defined in the local session. This list is especially long when using PowerShell on an Exchange server, which is typical since the configuration involves a mix of local and remote steps.

In order to be able to use the commands that are duplicated in the cloud, there are two options.

First, you could use the following addition to the last line: Import-PSSession $Session -AllowClobber. The '-AllowClobber' parameter will let PowerShell overwrite the locally registered commands with the Office 365 ones.

This approach works well but does prevent you from managing the local environment using the same commands. This can be resolved by opening another shell window or by following the second option.

The second option is to use this addition to the last line instead: Import-PSSession $Session -Prefix o365. The '-Prefix' option allows both sets of commands to be available, with the commands that are sent remotely designated by the prefix string. So instead of running Enable-OrganizationCustomization, the command would be Enable-o365OrganizationCustomization.
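Put together, the prefixed session looks like this (a sketch using the same Microsoft commands from above, with only the import line changed; the mailbox queries at the end are just illustrations):

$LiveCred = Get-Credential
$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/powershell/ -Credential $LiveCred -Authentication Basic -AllowRedirection
Import-PSSession $Session -Prefix o365

#Local cmdlet runs against the on-premises organization, prefixed cmdlet runs against Office 365
Get-Mailbox -ResultSize 10
Get-o365Mailbox -ResultSize 10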

Using the prefix option may require changes to script examples and takes a little getting used to, but over the long term, if you need to manage both on-premises and Office 365 environments, it saves a lot of time.

Exchange 2010 supports automatic provisioning for new mailboxes. Unfortunately, this mechanism does not extend to personal archives. As mailboxes are moved to Exchange 2010, they must be enabled for archives manually, with the operator managing the size of each database and dividing the load accordingly.

The script below was created to automate this function and is intended to run automatically using a scheduled task on each CAS server.

Typically, archive databases are flagged so that they do not participate in automatic provisioning for new mailboxes. The script selects the smallest archive database from the databases that are excluded from provisioning using the -IsExcludedFromProvisioning parameter. Users are then enabled for archives using the target database.

The script also assigns one of two custom archive policies, a 180-day policy or a 360-day policy, based on a group that defines the users who get 360 days of retention in their mailbox. The script uses a custom attribute to overcome the issue of identifying mailboxes that are not members of the 360-day retention group.

Let me explain the issue:

PowerShell scripts often handle the 'reverse group membership check' issue by first assigning the common value (in this case 180-day retention) to everyone and then assigning the special value (in this case 360-day retention) to the members of a group.

The main weakness of this approach, especially for something like a retention policy, is that any error in the second part of the script (say, if someone renamed the group) would result in everyone getting the more restrictive retention policy and more archived items, which is potentially disruptive and difficult to reverse.

My solution is to assign everyone in the org the less restrictive setting using a custom attribute field in AD once. Then adjust that custom attribute value based on the group membership and use the custom attribute value to configure the retention policy. This means that if the group is renamed or another error occurs, new members of the group might get the wrong policy but existing members would not be impacted.

Note that this can be accomplished with less effort if you deploy the Quest PowerGUI tools, since the Get-QADUser cmdlet supports a -NotMemberOf parameter. I didn't use this since I was trying to create a solution that didn't require additional software (in other words, come on Microsoft and implement this function!)
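For reference, here is a rough sketch of that Quest-based shortcut, assuming the Quest cmdlets are loaded and using the same group and policy names as the script below:

#One pass for non-members, one for members - no custom attribute needed
Get-QADUser -NotMemberOf "Exchange Archive Users - 360 day" | ForEach-Object { Set-Mailbox -Identity $_.UserPrincipalName -RetentionPolicy "180 Day Default" }
Get-QADUser -MemberOf "Exchange Archive Users - 360 day" | ForEach-Object { Set-Mailbox -Identity $_.UserPrincipalName -RetentionPolicy "360 Day Default" }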

 

In addition, the script uses custom attribute 13 to identify a mailbox that shouldn’t use a personal archive. This is intended for service accounts and special purpose mailboxes.

#
#
# NAME: Maintenance.ps1
#
# AUTHOR: Guy Yardeni
#
# COMMENT: Script to run various maintenance tasks for Exchange 2010
#
#        Enable archives for mailboxes
#        Configure archive policy based on AD group
#

# Script to enable archive for any users who don’t already have one
# using the smallest archive database
#         
# Any text in Custom Attribute 13 will cause the script to skip the mailbox
#
#

# Return archive database with smallest size
$TargetDB = Get-MailboxDatabase -status | where {($_.ExchangeVersion.ExchangeBuild.Major -eq 14) -and ($_.IsExcludedFromProvisioning -eq $true)} | sort-object "DatabaseSize" | select-object -first 1

# Enable archive for relevant mailboxes on the target database
$results = Get-Mailbox | where {($_.ExchangeVersion.ExchangeBuild.Major -eq 14) -and ($_.ArchiveDatabase -eq $null) -and ($_.CustomAttribute13 -eq "")} | enable-mailbox -archive -archivedatabase $TargetDB.Name -retentionpolicy "360 Day Default" | measure-object

#Write output for testing
Write-Host $results.count "mailbox(es) were enabled for archiving on database" $TargetDB.Name

# Script to set correct archiving policy
Get-Mailbox | where {($_.CustomAttribute12 -eq "")} | set-mailbox -CustomAttribute12 "180"
Get-DistributionGroupMember "Exchange Archive Users - 360 day" | Get-Mailbox | set-mailbox -CustomAttribute12 "360"
Get-Mailbox | where {($_.ExchangeVersion.ExchangeBuild.Major -eq 14) -and ($_.CustomAttribute12 -eq "180")} | set-mailbox -retentionpolicy "180 Day Default"
Get-Mailbox | where {($_.ExchangeVersion.ExchangeBuild.Major -eq 14) -and ($_.CustomAttribute12 -eq "360")} | set-mailbox -retentionpolicy "360 Day Default"
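Since the script is meant to run unattended from a scheduled task, the task action has to load the Exchange snap-in before calling the script. A sketch of the action command, assuming the script is saved as C:\Scripts\Maintenance.ps1 (the path is a placeholder):

#Scheduled task action - load the Exchange 2010 snap-in, then run the maintenance script
powershell.exe -NoProfile -Command "Add-PSSnapin Microsoft.Exchange.Management.PowerShell.E2010; & 'C:\Scripts\Maintenance.ps1'"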

As always, comments about the code and approach are welcome!

A quick post to make a common task a little easier.

When managing move mailbox requests, it is often useful to be able to view certain statistics to ensure that progress and migration pace are as expected and that each server is playing its expected roles.

Exchange 2010 SP1 makes that possible using the Get-MoveRequestStatistics PowerShell cmdlet, but some manipulation and formatting make a big difference in monitoring the results.

Try this command for a friendly view of useful information about each open move request (as well as some cool tricks you can use with the format-table command):

Get-MoveRequest | Get-MoveRequestStatistics | Sort-Object CompletionTimeStamp | ft DisplayName, @{Expression={$_.BadItemsEncountered};Label="Errors"}, @{Expression={$_.PercentComplete};Label="Percent"}, @{Expression={$_.TotalMailboxSize.ToString().Split("(")[0]};Label="Size"}, @{Expression={$_.totalinprogressduration};label="Time"}, @{Expression={(($_.BytesTransferred/$_.TotalInProgressDuration.TotalMinutes)*60).ToString().Split("(")[0]};Label="Pace/hr"}, @{Expression={$_.MRSServerName.ToString().Split(".")[0]};Label="CAS"}, @{Expression={$_.SourceDatabase.ToString().Split("\")[0]};Label="SourceServer"}, SourceDatabase, Status, CompletionTimestamp -auto

Redirecting the output to a file on a scheduled basis also makes troubleshooting after hours mailbox moves much easier.
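A minimal sketch of that scheduled capture, writing each run to a time-stamped text file (the path and column selection are just examples):

$report = "D:\Reports\MoveRequests-$(Get-Date -Format yyyyMMdd-HHmm).txt"
Get-MoveRequest | Get-MoveRequestStatistics | Sort-Object CompletionTimeStamp |
    ft DisplayName, PercentComplete, BadItemsEncountered, TotalMailboxSize, Status -Auto |
    Out-File $report -Width 200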

MokaFive Automation

Posted: April 18, 2011 in Scripting, Virtualization

Automating tasks is a common goal for many projects, especially those targeting the implementation of a new system or the operational improvement of an existing system. The goals are typically to create a repeatable sequence of steps that can be executed on a scheduled basis with no human intervention to minimize chances of neglecting to run the sequence or introducing errors into the execution.

With most software packages, automation relies on components created by the vendor in advance (that hopefully fit the needs of the implementer) or a documented API that is sometimes complex and cryptic and must be learned and understood before being used.

MokaFive takes a web-services-based approach to this need that makes the automation easier to create, reduces the need for a highly skilled developer to do the work, and greatly shortens the learning curve for the task.

RESTful Web Services

MokaFive uses RESTful web services not only for customization of its enterprise product but also for the vendor-supplied administration console. The large majority of actions taken within the administration console use REST calls that can be examined and then modified to execute the calls needed by the automation sequence.

REST (Representational State Transfer) is a style of architecture used for web pages and services that is characterized by a lack of state storage on the server. The result is that every client request must contain all the information needed to process the request. This facet makes examining REST calls and recreating them fairly straightforward.

There’s a lot of information about REST on the web and a good Wikipedia entry that can serve as a decent starting point for research: http://en.wikipedia.org/wiki/Representational_State_Transfer

Example

The rest of this post will go through an example of how to automate a task for MokaFive using REST. The example builds a script that creates IP ranges within MokaFive to direct clients to their nearest image store for image files. The script uses Active Directory sites and subnets as the authoritative data for the IP ranges.

The process involves the following high level steps:

  1. Obtain the necessary tools and software to work with REST
  2. Create the necessary REST requests and save them
  3. Create a script to collect sites and services data from AD
  4. Put the data and REST requests together to configure the IP ranges.

Note: as with most scripted solutions, there are multiple ways to solve this problem; this is the method I selected and built.

Required tools

My example uses a client computer running Windows 7 with Firefox as a browser used to access the MokaFive administrator console. The code is created in PowerShell and takes advantage of the PowerShell 2.0 capabilities that are included with Windows 7.

Note: I highly recommend using the PowerShell ISE that’s included with Windows 7 if you go the PowerShell route as it makes the development experience much easier and quicker.

In addition to the core OS and the Firefox browser, the following tools were used:

  1. Firebug – this Firefox extension allows detailed examination of the interaction between the browser and web server, including REST calls. The extension can be downloaded here.
  2. rest-client – a Java based application that can be used to configure REST requests and save them for future use. There are many REST clients out there, this particular one was selected because it has a version with a full interface which is useful for creating and testing the requests, and a command line version which is very easy to include in scripts. You can find this tool here.

Creating REST requests

Once the tools are installed, the next step is to open Firefox and log into the MokaFive console in order to perform the actions that need to be scripted. For the subnet exercise, I created a new IP range using the UI, which performs the three actions that I need to capture for the script: getting the list of IP ranges, getting the list of image stores (performed to populate the image store pull-down) and creating the IP range. I also edited an IP range to be able to update an existing range with the script.

Now press F12 to open the Firebug window (by default below the browsing window), make sure the Console tab is selected, and scroll through all the actions that Firebug observed. Finding the specific actions you need is easier if you have Firebug open when you perform the actions, but it is not too hard to do later since the URL path is a pretty good indicator of the action.

The next step is to reproduce each action with the REST client and save it for automated execution. Firebug allows you to right click an action URL and copy the location. After running the rest-client UI, you can paste the location into the URL field. Next, select the ‘Auth’ tab and enter the login information – in my case, Basic authentication with username and password. This example is done with http since the code will run on the server itself. For production environments, especially when running across a network, you will probably also want to configure SSL using the SSL tab.

That’s all that’s needed for a GET request. You can test the request by running it and making sure you get the expected results and then save the request. You can also use a more advanced authentication scheme by leveraging API calls directly to create an authentication cookie, but since one of my primary goals is simplicity and the script code will all reside on the MokaFive server, I didn’t go that route.

For the POST/PUT requests, which include adding an IP range or editing an existing IP range, modifications to the 'Body' tab are required. The first modification is to set the content-type and charset to the correct setting (which will match the setting viewed in Firebug under the action's Headers\Request Headers section). The required content-type is 'application/xml; charset=UTF-8', which can be configured in rest-client's 'Body' tab by clicking the leftmost icon (the one with the pencil on it) and selecting the correct value.

The second modification needed is the XML containing the data to be sent to the server. This XML can be found in the Firebug action under the ‘Put’ tab in the source section. I typically cut and paste from there into Notepad to remove any formatting and then into the rest-client ‘Body’ tab. You can also type directly into the field to avoid any hidden characters coming from Firefox/Firebug.

When going through the POST request for a new IP range and a PUT request for an update to an existing IP range, I found that the requests are almost identical except for the method and the URL. As a result, I only save and use a single request, in my case the POST version, and modify those fields on the fly as needed.

Once the requests are complete and tested, save the request files to be used by the automation script.

Collecting Active Directory site data

This is a task that PowerShell is able to handle easily. The following code sample returns and processes the required data:

$myForest = [System.DirectoryServices.ActiveDirectory.Forest]::GetCurrentForest()
Foreach ($site in $myForest.Sites)
 {
 Foreach ($subnet in $site.Subnets)
  {
  #Each subnet name is a CIDR string, e.g. 10.15.20.0/24
  Write-Host "$($site.Name): $($subnet.Name)"
  }
 }

As the code demonstrates, the sites are contained in a collection within the forest object and subnets are a collection within each site object. The subnet itself is a string with a CIDR representation of the subnet – for example, 10.15.20.0/24.

One interesting challenge that will be discussed later is created by the fact that while AD adheres to the CIDR standard, MokaFive allows the network address part of the CIDR to be a host address – so (AD) 10.15.20.0/24 and (MokaFive) 10.15.20.1/24 can refer to the same network. Luckily, MokaFive stores the network address in a different field which I will use instead.

Putting it all together

Taking all of the tools and information presented above and using it to construct the required automated script is not very complicated but a little time consuming. While I don’t plan to include my full code here, I will go through each section of the script to demonstrate the structure and highlight potential issues.

1. Collect initial data

In addition to setting global variables and opening the connection to the AD forest, this section collects the subnet and image store data from MokaFive. The subnet and image store data will be used later in the script for quite a few purposes so I decided to collect it at the start.

This task highlights two interesting aspects of the rest-client and XML structure: first, the CLI version of rest-client allows you to specify the target directory but not the response file name. The response file name is the same as the request file name with an RCS extension (the request file uses an RCQ extension). A simple rename solves the problem of needing the file treated as an XML file.

The second issue is that the response file XML is not very useful as all the tags contain data related to the rest transaction rather than the needed data. The relevant data is all contained in a single tag called ‘body’. In order to process the ‘body’ data as XML, I extract it and create a new XML file containing only the contents of the ‘body’ tag. I could have probably done this in memory without the file, but writing the XML out makes debugging, testing and operational validation much easier. This process is done for both the subnets and image stores.
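Here is a sketch of that response handling, assuming the CLI run produced subnets.rcs in the working directory and that the payload really does sit in a single 'body' tag as described (the file names, paths and XML structure are assumptions):

#rest-client names the response after the request, with an .rcs extension - rename it so it is clearly XML
Rename-Item "C:\M5\subnets.rcs" -NewName "subnets.response.xml"
[xml]$response = Get-Content "C:\M5\subnets.response.xml"

#Extract the contents of the 'body' tag and write it out as its own XML file for later processing
$bodyData = $response.SelectSingleNode("//body").InnerText
Set-Content -Path "C:\M5\subnets.xml" -Value $bodyData
[xml]$m5Subnets = Get-Content "C:\M5\subnets.xml"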

2. Primary loop – image store

The top level structure of the script loops through each image store in the MokaFive configuration. Since I am expecting that there will be AD sites and subnets that don’t participate in the MokaFive architecture, I decided to process each image store and make sure that any subnet in the site for the image store is configured as an IP range with the correct image store assigned to it.

The script needs a way to match the image stores to AD sites in order to configure subnets correctly. For this specific script, I assume that the image store server name (and therefore the URL property) will start with the site code. There are many ways to accomplish this goal but using a specific image store naming standard is one of the simplest approaches.

Once the site code has been identified, the site object is retrieved and used for the inner loop.
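In sketch form, assuming the image store XML collected in step 1 was loaded into $imageStores with an 'imagestore' element per store and a 'url' property, and that the first three characters of the server name are the site code (all of these are naming and schema assumptions, not MokaFive documentation):

Foreach ($store in $imageStores.SelectNodes("//imagestore")) {
    #e.g. http://nyc-m5store01/images -> host 'nyc-m5store01' -> site code 'nyc'
    $storeHost = ([System.Uri]$store.url).Host
    $siteCode = $storeHost.Substring(0,3)

    #Find the matching AD site by name prefix; skip the store if no site matches
    $site = $myForest.Sites | Where-Object { $_.Name -like "$siteCode*" }
    if ($site -eq $null) { continue }

    #...inner subnet loop (next section) goes here
}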

3. Inner loop – subnets

Processing each subnet of the identified site is the job of the inner loop. The code searches for the AD subnet name in the MokaFive IP range list. Due to the CIDR issue identified earlier, instead of using the CIDR field in the MokaFive subnet XML, the NetworkID field is used and concatenated with the prefix length from the CIDR.
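A tiny sketch of that comparison, using the example addresses from above (the NetworkID and CIDR field names are assumptions about the exported XML):

$adSubnet = "10.15.20.0/24"              #value taken from $subnet.Name in AD
$m5NetworkID = "10.15.20.0"              #NetworkID field from the MokaFive IP range record
$m5Cidr = "10.15.20.1/24"                #the CIDR field may contain a host address

$prefixLength = $m5Cidr.Split("/")[1]
$comparableCidr = "$m5NetworkID/$prefixLength"
$rangeAlreadyExists = ($comparableCidr -eq $adSubnet)   #True - same network despite the host address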

If the subnet is found, the image store data is compared: if it is correct, no action is taken; if it is incorrect, a record update is initiated. If the subnet is not found, a record addition is initiated.

4. Subnet record manipulation

Both the subnet update and subnet creation use the same request file since the differences can easily be changed with code. Prior to running the request, the 'URL', 'method' and 'body' tags are modified to create either a subnet update or a subnet creation. In the case of a subnet creation, the subnet mask (e.g. 255.255.255.0) must be determined from the prefix length of the CIDR and used in the URL.
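The mask derivation itself is pure arithmetic and can be done with a few lines of PowerShell (a sketch; no product-specific assumptions here):

#Convert a CIDR prefix length (e.g. 24) into a dotted subnet mask (255.255.255.0)
$prefixLength = "10.15.20.0/24".Split("/")[1]
$maskBits = ("1" * [int]$prefixLength).PadRight(32, [char]"0")
$maskOctets = 0..3 | ForEach-Object { [Convert]::ToInt32($maskBits.Substring($_ * 8, 8), 2) }
$subnetMask = $maskOctets -join "."      #255.255.255.0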

5. Clean up

…and that’s all there is. Removing any temporary files and deleting objects is the last section. I do leave a log file and echo some information to the screen throughout the process to simplify auditing and troubleshooting.

Last words

First, if you’re still reading, thanks for putting up with such a long post. If you have any other questions, please post in the comments or contact me directly at therdpfiles@gmail(d0t)com.

XML to HTML conversion for IT

Posted: February 18, 2011 in Scripting

Lately, I’ve been finding myself retrieving back end data from systems on a more frequent basis. The data comes out as XML in most cases which is great for manipulation in scripts and integration with other systems but is not so great for presentation.

A recent task of this nature got me started down the path of converting the XML data to HTML so that the result can be easily published as a report to an intranet or emailed to recipients. Such a mechanism would allow me to create scheduled reports for clients without requiring a reporting platform.

As soon as I started digging into this task, it seemed that I couldn’t avoid learning a lot more about the XML data format and the associated standards that allow the data to be converted and transformed. Specifically, it seemed that I would have to get very familiar with XSLT, the transformation defining cousin of XML.

This prospect was not appealing to me since I’m not and have never been a web programmer. I’m not comfortable with HTML or other web based development technology and really have no need for those in-depth skill sets in my job. So I decided to spend some time looking for an alternative that was easier for me to pick up.

After going through various options, including third-party solutions, my approach, which fits well within my scripting skill set and is easy to research and learn, is to use Microsoft Excel as my transformation engine. Using simple calls to the Excel object model, I can quickly create a basic HTML page with my data. If needed, that page can then be passed on to web gurus to be processed by existing portals, intranets, CSS-based configurations, etc. Or left alone as an unattractive but informative and usable web page.

I used PowerShell to get this done so the sample code below is in PowerShell but VBA and VBScript could be used just as easily (although VBA isn’t as easy to secure and schedule).

The steps to execute are extremely simple: open the XML file in Excel, make any modifications to the data as needed and then save as HTML. Just the right level of simplicity for me; although the solution requires Excel installed on the system, this still seems like a simple approach that can easily be supported by client or server engineering teams down the road (rather than needing web/XML developers that aren't typically part of IT engineering teams).

This code sample processes an XML file into an HTML file after reordering columns and replacing headers:

$excelApp = new-object -comobject "excel.application"
$excelApp.Visible = $true

$excelWorkbook = $excelApp.Workbooks.Open("d:\downloads\excelps\xmldata.xml")

#Restructure fields - move the columns of interest to the front
$excelWorkbook.ActiveSheet.Columns.Item("O").Cut()
$excelWorkbook.ActiveSheet.Columns.Item("A").Insert()

$excelWorkbook.ActiveSheet.Columns.Item("D").Cut()
$excelWorkbook.ActiveSheet.Columns.Item("B").Insert()

$excelWorkbook.ActiveSheet.Columns.Item("G").Cut()
$excelWorkbook.ActiveSheet.Columns.Item("C").Insert()

#Remove the columns that are not needed in the report
$return = $excelWorkbook.ActiveSheet.Columns.Item("G:Q").Delete()

#Replace the raw XML element names with friendly column headers
$excelWorkbook.ActiveSheet.Cells.Item(2,1) = "User Name"
$excelWorkbook.ActiveSheet.Cells.Item(2,2) = "Physical Hostname"

#Save as HTML (44 = xlHtml) and clean up
$excelWorkbook.SaveAs("d:\downloads\excelps\report.html",44)
$excelWorkbook.Close()
$excelApp.Quit()

Note: I set the application to be visible for troubleshooting purposes; the final automated code should not require the application to be visible.

The best part about this for me is that you can perform any manipulation of the data prior to saving it: search and replace, formatting numbers and strings, etc. And for any manipulation, you can use the macro recorder to get the exact Excel code required.
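For example, here are a couple of typical touch-ups recorded in Excel and translated into the same COM calls used above; they would go before the SaveAs line, and the search string and column are just placeholders:

#Strip a domain prefix everywhere on the sheet, then apply a number format to one column
$usedRange = $excelWorkbook.ActiveSheet.UsedRange
$return = $usedRange.Replace('DOMAIN\', '')
$excelWorkbook.ActiveSheet.Columns.Item("E").NumberFormat = "#,##0"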