The SCCM 2012/R2 Application Catalog provides a request/approval capability for distributing applications that require approval due to licensing costs or other reasons. Unfortunately, this function is only exposed within the SCCM console and provides no notification for pending approvals. As a result, widespread use of approvals within an organization can be challenging to implement: the IT team members responsible for SCCM, already overloaded in many cases, must regularly check the console to provide reasonable turnaround on requests, and must do the research themselves to determine whether each request should be approved or denied.

Luckily, there is a solution to this problem that is not only simple to implement and quite flexible, but also free. The solution is provided by Coretech, a respected System Center solution provider based in Denmark. We have successfully used Coretech’s Application Email Approval Tool in multiple environments to provide email notification for pending and completed requests, as well as a flexible framework for managing the approval or denial of each request.

The solution is typically installed directly on a primary site server and has not been observed to conflict with normal primary site server operations. Note that it can also be installed on a dedicated server, though that requires a more complex security configuration.

The article linked above describes the installation and testing process, as well as ways to adapt the tool to a specific environment. While it does not cover every conceivable scenario, it handles several core scenarios and use cases quite well:

  • Notification for pending approvals – the most basic scenario allows IT departments to designate individuals responsible for application approvals and use the Coretech tool to notify those individuals.
  • Manager approval – another common scenario which distributes the approvals throughout the organization to the person most qualified to determine what a user requires and also responsible for the cost of licenses used by their group – the user’s direct manager (as defined in Active Directory).
  • Purchasing agent approval – in many organizations, the only required approval is that of the purchasing group who must ensure that copies of software in use are licensed and licenses are managed as mandated by each vendor.

The solution supports additional granular control including individuals who are automatically approved for all software (useful for IT or QA staff tasked with testing and deploying the applications), a fallback address in case the user’s manager is not defined in Active Directory, as well as combinations of the above scenarios – a common one being manager based approval with purchasing/license manager based notification.

Approvals (or rejections) and request tracking are provided through a separate IIS web site deployed as part of the solution. The solution leverages native SCCM functionality to audit approvals and to maintain all other SCCM security controls. In addition, the tool provides customizable email templates for the notifications sent to approvers, as well as to the requesting user once the application has been approved or denied.

While it is not rare to find specific components within Microsoft products that are missing key functionality needed by most clients, it is extremely rare to find an elegant and low-cost solution to address the problem. This is one of those rare solutions.

This post was created in collaboration with Greg Rhodes and Rand Morimoto.


There are a lot of collaboration solutions available today. Whether users are simply talking, chatting, sharing documents, co-editing documents or participating in a real-time meeting, there is a large spectrum of available solutions. From traditional solutions (like phone and email) sold by well-known vendors and provided by the workplace, to new solutions from younger vendors that seek to disrupt the existing collaboration landscape, IT departments and users have a lot to choose from. Furthermore, since many of the new solutions are offered free of charge (either in full or in a limited version), users can try and even implement solutions such as Dropbox, Trello or Skype on their own with no assistance from IT.

The result is that users in most organizations have many collaboration tools available for any specific purpose or use case, some of which may not be provided by IT or even known to IT. And while empowering users to select tools that are appropriate for their use case is a valuable goal, too many collaboration tools tend to cause mass confusion, especially when users who need to collaborate may be using different tools and platforms. The end user experience, therefore, is a mess: users don’t know which tool to use for which purpose, or which tools are best for a specific user, department or location, and they are often not trained or well versed in the tools they need to use. Beyond the confusion, this situation introduces real risk to the business, as vital business information is stored in locations that IT is not aware of, with no governance controls and inadequate security controls.

The solution to the problem is a clear strategic plan for collaboration tools within the organization. Before I outline the key steps used to arrive at this strategy and to implement the strategy, let’s do a quick review of the categories and use cases for collaboration tools that are relevant to most organizations today:

  • The old school tools – email, file shares and phone
  • Advanced file sharing – cloud-based solutions and document management (workflows, retention, records) – examples include SharePoint, LiveLink or Dropbox
  • Real time communication – chat, voice/video over IP using tools like Jabber, Call Manager or Lync
  • Real time document collaboration – real time editing of documents using solutions such as Google Docs or Office Web Apps
  • Enterprise social network – persistent chat and discussion threads with user controlled membership and participation – examples include Chatter, Jive or Yammer
  • Project oversight tools – project submission, tracking, project collaboration workspaces using platforms such as Project Server

Given the broad nature of the above use cases and of collaboration in general, these are the key components to a successful effort to standardize and simplify enterprise collaboration tools while balancing functionality and user experience:

  • Identify the use cases that are important to your business

Not every organization requires every use case to conduct business. Determining which scenarios are needed for the organization and prioritizing those establishes the organization’s collaboration requirements and serves as a starting point for any related efforts. This is especially vital for scenarios that require advanced tools that can be more costly or complex to implement.

  • Align requirements for each area with business units

Given the user-facing nature of all collaboration solutions, the requirements definition exercise must include representatives of key business areas. A slick and cutting-edge solution that is approved by IT is of minimal value if it doesn’t meet the needs of sales, engineering or customer support. This alignment process must occur throughout the effort, with business users participating in requirements gathering, product demos, POC testing and, finally, training development and execution.

  • Create a collaboration usage policy

Any efforts to address collected requirements must be accompanied by a clear usage policy that identifies to users what is expected of them with regards to management and custody of the organization’s data. Since some users will no doubt prefer solutions other than those provided by the organization, it is important to clarify what the policy is regarding the use of alternative solutions as well as the process for requesting that IT consider changes to existing solution offerings. This policy serves to guide users to acceptable collaboration practices as well as protect the organization from the inherent risk in disseminating enterprise data through unapproved channels, often known as data leakage.

  • Inventory purchased solutions

As the effort shifts to the tactical task of leveraging the requirements to develop and implement suitable solutions, a first step is to review and understand what collaboration solutions have already been purchased by the organization and the degree to which they have been implemented. Since some solutions may have been purchased and deployed without any involvement by IT, something that’s quite simple to do for cloud solutions, the financial impact of this step can be quite significant. Gaining visibility into enterprise assets and aligning those with proposed solutions and initiatives can not only reduce the cost of the overall effort but also greatly speed up the introduction of better functionality for all users.

  • Implement technical changes

The implementation phase includes modifications to existing solutions and deployment of new solutions to meet business requirements. This phase will typically be executed in stages, starting with the highest priority changes and moving down the list. Also, given that almost any organization has departments or workgroups with specialized needs, the implementation phase should start by focusing on solutions that are suitable for the large majority of users (the 80%) and then create a process to review and find suitable solutions for specialized needs throughout the organization (the 20%).

  • Educate users

The best solutions and most innovative tools are of little value if users don’t understand when and how to use them. While many modern solutions tout themselves as user friendly and even self-explanatory, there is tremendous value in user education around selecting the right tool for the right job and using each tool correctly. This is especially true in an enterprise setting, where the organization usually places certain requirements or limitations on how tools can and should be used. It’s also important to note that education is not a one-time effort: it must include initial education, new hire education and ongoing refreshers.

  • Continually evaluate and improve

As with any program based on business requirements and a rapidly changing landscape, the collaboration framework within an organization must be reviewed and evaluated on a regular basis to ensure that the goals of the initial implementation were met and that the framework continues to evolve to meet the changing needs of the business as well as incorporate new and better solutions in the market place.

If you only take away one key point from this post, make it about the prioritization of aligning IT with the business. The IT department of 2014 must ensure that any initiatives, especially those that are user facing, are closely aligned with the business to ensure that business problems are solved, business goals are met, users are engaged and productive on IT platforms and users/managers can provide feedback to allow IT to correct course as needed.

Following the above approach may not make every user happy but it will help strike a balance between user satisfaction, team productivity, cost and business benefit.

Upgrading Windows on laptops/tablets isn’t just about imaging or SCCM/LANDesk. The real success factors are often not clearly understood and prepared for before the project begins, which increases cost during the project and sometimes means missing key benefits or improvements.


Key success factors are:

  • Explore new features of the OS – often a new version of Windows provides new features that can provide key business advantages or cost savings. Technologies such as Direct Access, BitLocker, AppLocker are free for most organizations and can provide substantial benefits when implemented as part of an OS upgrade.
  • Understand the current environment – accurate data about current hardware/peripherals, the applications used throughout the organization (especially non-enterprise apps), and which end users are responsible for each application is often too hard to come by. The assessments and discovery tasks to gather this information are very time consuming and are not ideal tasks for an outside vendor who is not intimately familiar with the organization. Starting the data collection well in advance and/or keeping the data current on an ongoing basis is necessary to reduce costs and meet deadlines.
  • Understand requirements – the business requirements for Windows projects are often defined in parallel with the project, sometimes extending into deployment and changing key project parameters at the last minute. This approach can be very costly so defining the requirements prior to the project and making sure they are aligned with business strategy (e.g. should users be storing data on local devices? How is the data backed up and shared? How does this integrate with cloud offerings? How do we avoid data leakage?) is a great way to ensure that the final product is a good fit for the organization and project costs are contained.
  • Application testing process – as the most important factor of overall project duration, an effective application testing process can have a huge impact not only on project timelines but on end user experience following migration. A well-defined process that is managed by a competent application analyst/process manager is vital to the success of the application testing effort and with it the overall project.
  • Change management – a client OS upgrade is often one of the most disruptive IT projects for end users. While the OS change itself might be minor, the accompanying upgrade of the browser, core productivity software (Office, Acrobat, etc) and introduction of new OS features can be very disruptive to a large majority of end users. Managing this change, setting expectations, communicating clearly and structuring the project to minimize disruption are important activities that must be prioritized and handled by an experienced program manager or process analyst.
  • Process overhaul – the broad footprint of this type of project invariably impacts many internal processes: support processes, application lifecycle management processes, security processes, on-boarding and off-boarding processes, hardware asset management processes and more. While these processes can be updated following the migration (a typical approach), doing so is much more disruptive and takes quite a while to complete, as team members are busy supporting the organization. Reviewing and adjusting processes during the project, in coordination with the project team, results in a more seamless transition and a faster return to full productivity for the organization.
  • Compliance – for organizations subject to regulatory frameworks such as HIPAA, PCI, SOX, GxP, etc, the changes brought by a project such as this can be more impactful. Preparing the compliance teams for the project by including them in the project team from the onset, and integrating their requirements and efforts into the project plan, will help avoid last-minute surprises that can derail execution.


The solution that CCO uses is to deploy a project team with a lead in each area identified and to ensure that the leads are familiar not only with the type of project but with the type of organization. We leverage and support existing mechanisms within the organization to ramp up quickly on project portions that are required early and/or present high risk. Communication with all relevant channels, including executives, business units and application owners, is established at the onset and used for ongoing change management. The complementary roles of logistics/project management and process/change management are filled either by the same resources or by a tightly integrated team.


End result for a recent Windows 8.1 project:

  • Meeting project deadlines and delivering a new platform on brand new hardware for thousands of users in less than 6 months
  • Delivering cutting edge features (encryption, cloud backup, always on VPN) and platforms (convertible touch hardware, enterprise tablets) to users with minimal disruption
  • Replacement of iPads for remote workers with Windows based tablets that provide enterprise management and security
  • Overall user satisfaction due to visible project benefits such as cutting edge hardware, up to date productivity tools, cloud data storage, non-password authentication
  • Flexibility within project team resulted in meeting schedules in spite of challenging external factors (cutting edge technology, release of Windows 8.1 update 1 in middle of project, hardware availability issues, limited internal resources)

Hope that this helps someone else with the same problem.

Problem overview:

SCOM 2012 running on 2 management servers with a backend SQL 2008 R2 cluster. Environment is healthy and working fine overall.

We were seeing a lot of heartbeat flaps (server loses heartbeat and then regains it within 1-10 minutes). Some of the problem servers are in the same data center as the management servers and some are in overseas data centers.

On the agent systems, when the problem occurs, error 20070 appears as follows:

The OpsMgr Connector connected to, but the connection was closed immediately after authentication occurred.  The most likely cause of this error is that the agent is not authorized to communicate with the server, or the server has not received configuration.  Check the event log on the server for the presence of 20000 events, indicating that agents which are not approved are attempting to connect.

This occurred on agents managed by either management server. At times the agents would fail over to the other server successfully; at other times an event ID 21050 immediately followed, indicating that a connection to the other management server could not be made. There were no corresponding event 20000 entries on the SCOM management servers, nor were there any pending agents in the console.

The issues did not come up in batches or in any other discernible pattern. The management servers were reachable via PING and RPC during the ‘outage’.

I tried installing all updates on the management servers, restarting services, rebooting the servers, flushing the cache on the clients and reinstalling the agent on clients. None of those helped.

It turned out that the organization had a previous management group hosted on a management server with the same name as the current management server. Clients were reaching out to the management server with information for a management group that no longer existed, which generated some confusing errors.

The solution is to flush the cache on the management servers and then remove the phantom management group entries from agents that reported the issue.

Flushing the server cache can be done using this process:

  1. Open the Monitoring workspace
  2. Expand Operations Manager and then expand Management Server
  3. Select the Management Servers State view
  4. In Management Server State pane, click a management server
  5. In the Tasks pane, click Flush Health Service State and Cache

IMPORTANT: this task will never report success, since the task also flushes the record that the server is running the task. The task will time out and fail, which is expected.

Then, cleaning up the agents was done with this process:

  1. Stop System Center Management service
  2. Remove the registry keys for the old management group (OLDMP in this example) from HKLM\Software\Microsoft\Microsoft Operations Manager\3.0\Agent Management Groups and HKLM\System\CurrentControlSet\Services\HealthService\Parameters\Management Groups
  3. Rename C:\Program Files\System Center Operations Manager\Agent\Health Service State
  4. Start System Center Management service
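For a handful of agents these steps are quick to do by hand; for more, they can be scripted. Here is a minimal PowerShell sketch of the agent cleanup, assuming the phantom management group is named OLDMP and the agent uses the default install path (adjust both for your environment); run it locally on the agent with administrative rights:

```powershell
# Remove a phantom management group (assumed name: OLDMP) from a SCOM agent.
$mgName = "OLDMP"

# 1. Stop the System Center Management service (service name: HealthService)
Stop-Service HealthService

# 2. Remove the registry keys for the old management group
Remove-Item -Recurse -ErrorAction SilentlyContinue `
    "HKLM:\Software\Microsoft\Microsoft Operations Manager\3.0\Agent Management Groups\$mgName"
Remove-Item -Recurse -ErrorAction SilentlyContinue `
    "HKLM:\System\CurrentControlSet\Services\HealthService\Parameters\Management Groups\$mgName"

# 3. Rename the Health Service State folder so it is rebuilt on restart
Rename-Item "C:\Program Files\System Center Operations Manager\Agent\Health Service State" `
    "Health Service State.old"

# 4. Start the service again
Start-Service HealthService
```

The renamed folder can be deleted once the agent reports healthy again.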

At the end of this process, the heartbeat alerts stopped for servers that are up and have the System Center Management service running.

Scripted Microsoft Patch Removal

Posted: September 19, 2011 in Scripting, Security

Many patch management systems have the ability to uninstall previously deployed patches. This functionality is typically used when a conflict with the patch is discovered after deployment.

Unfortunately, many of the automated removal mechanisms depend on the patch developer supporting and including a patch removal mechanism for each patch. When a patch doesn’t support this functionality, the management platform falls short in supporting this function.

As a workaround, here are a couple of scripts that will provide a semi-automated approach that can be used to remove patches remotely.

First, this script will list the patches installed on a system in the past X days where X is a command line parameter:

#Get list of patches installed in the past X days
#Usage: .\PatchList.ps1 <computer> <days>
param($ComputerName, $intDays)

#IsNumeric below requires the Microsoft.VisualBasic assembly
Add-Type -AssemblyName Microsoft.VisualBasic

#Validate second parameter – must be a number between 1 and 1000
$bolDaysValid = $true
If ([Microsoft.VisualBasic.Information]::IsNumeric($intDays)) {
    If ($intDays -lt 1 -or $intDays -gt 1000) { $bolDaysValid = $false }
} else {
    $bolDaysValid = $false
}

If ($bolDaysValid -eq $false) {
    Write-Host "Invalid days parameter. Please use a number between 1 and 1000."
    Write-Host "Example: .\PatchList.ps1 mycomputer 30"
    Write-Host "Exiting"
    exit
}

#Validate first parameter – WMI call to get OS
$OS = Get-WmiObject -Class Win32_OperatingSystem -Namespace "root\CIMV2" -ComputerName $ComputerName -ErrorAction SilentlyContinue
if ($OS -eq $null) {
    Write-Host "Can't access computer $ComputerName. Exiting."
    exit
}

#Get list of updates installed in the past $intDays days
Get-WmiObject -ComputerName $ComputerName -Class Win32_QuickFixEngineering |
    Where-Object { $_.InstalledOn } |
    Where-Object { (Get-Date($_.InstalledOn)) -gt (Get-Date).AddDays(-$intDays) }

Then, once the research is complete and the offending patch is found, the following script can be used to remotely remove the patch.

#Remove a hotfix from a remote computer
#Usage: .\PatchRemove.ps1 <computer> <KB number>
param($ComputerName, $HotfixID)

#IsNumeric below requires the Microsoft.VisualBasic assembly
Add-Type -AssemblyName Microsoft.VisualBasic

#Second parameter validation – make sure it starts with 'KB' followed by a number
If (-not (($HotfixID.Substring(0,2) -eq "KB") -and ([Microsoft.VisualBasic.Information]::IsNumeric($HotfixID.Substring(2))))) {
    Write-Host "Invalid hotfix parameter. Please use 'KB' and the article number."
    Write-Host "Example: .\PatchRemove.ps1 mycomputer KB976432"
    Write-Host "Exiting"
    exit
}

#First parameter validation – get OS to be used later. If call fails, bad parameter
$OS = Get-WmiObject -Class Win32_OperatingSystem -Namespace "root\CIMV2" -ComputerName $ComputerName -ErrorAction SilentlyContinue
if ($OS -eq $null) {
    Write-Host "Can't access computer $ComputerName. Exiting."
    exit
}

#Get hotfix list from target computer
$hotfixes = Get-WmiObject -ComputerName $ComputerName -Class Win32_QuickFixEngineering | Select-Object HotfixID

#Search for requested hotfix
if ($hotfixes -match $HotfixID) {
    $hotfixNum = $HotfixID.Replace("KB","")
    Write-Host "Found hotfix KB$hotfixNum"
    Write-Host "Uninstalling the hotfix"
    #Windows 2008/R2 – use WUSA to uninstall the patch
    if ($OS.Version -like "6*") {
        $UninstallString = "cmd.exe /c wusa.exe /uninstall /KB:$hotfixNum /quiet /norestart"
        $strProcess = "wusa"
    }
    #Windows 2003 – use spuninst in the $NtUninstall folder to uninstall the patch
    elseif ($OS.Version -like "5*") {
        $colFiles = Get-WmiObject -ComputerName $ComputerName -Class CIM_DataFile -Filter "Name=`"C:\\Windows\\`$NtUninstall$HotfixID`$\\spuninst\\spuninst.exe`""
        if ($colFiles.FileName -eq $null) {
            Write-Host "Could not find removal script, please remove the hotfix manually."
            exit
        }
        else {
            $UninstallString = "C:\Windows\`$NtUninstallKB$hotfixNum`$\spuninst\spuninst.exe /quiet /z"
            $strProcess = "spuninst"
        }
    }
    #Send removal command
    ([WMICLASS]"\\$ComputerName\ROOT\CIMV2:win32_process").Create($UninstallString) | Out-Null
    #Wait for removal to finish
    while (@(Get-Process $strProcess -ComputerName $ComputerName -ErrorAction SilentlyContinue).Count -ne 0) {
        Start-Sleep 3
        Write-Host "Waiting for update removal to finish ..."
    }
    #Test removal by getting hotfix list again
    $afterhotfixes = Get-WmiObject -ComputerName $ComputerName -Class Win32_QuickFixEngineering | Select-Object HotfixID
    if ($afterhotfixes -match $HotfixID) {
        Write-Host "Uninstallation of $HotfixID failed"
    }
    else {
        Write-Host "Uninstallation of $HotfixID succeeded"
    }
}
else {
    Write-Host "Hotfix $HotfixID not found"
}
Note that these scripts were tested on servers running Windows 2003 or later.

The Office 365 deployment assistant is a great tool to assist with deploying and configuring an Office 365 migration. Several steps require PowerShell work, and you can use PowerShell running locally on an on-premises server to configure Office 365 remotely. This is very convenient but comes with a little gotcha.

The Microsoft instructions for connecting remotely to an Office 365 installation include the following commands:

$LiveCred = Get-Credential

$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri -Credential $LiveCred -Authentication Basic -AllowRedirection

Import-PSSession $Session

While it is true that these commands will create the remote session, after running the last command you will see a long list of cmdlets that were not imported from the remote session. That is because commands with those names are already defined in the local session. The list is especially long when using PowerShell on an Exchange server, which is typical since the configuration involves a mix of local and remote steps.

In order to be able to use the commands that are duplicated in the cloud, there are two options.

First, you could use the following addition to the last line: Import-PSSession $Session -AllowClobber. The ‘-AllowClobber’ parameter lets PowerShell overwrite the locally registered commands with the Office 365 ones.

This approach works well but does prevent you from managing the local environment using the same commands. This can be resolved by opening another shell window or by following the second option.

The second option is to use this addition to the last line instead: Import-PSSession $Session -Prefix o365. The ‘-Prefix’ option makes both sets of commands available, with commands that are sent to the remote session designated by the prefix string. So instead of running Enable-OrganizationCustomization, the command would be Enable-o365OrganizationCustomization.

Using the prefix option may require changes to script examples and a little getting used to, but over the long term, if you need to manage both on-premises and Office 365 environments, it saves a lot of time.
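Putting the prefix approach together, a short sketch follows. The connection URI is environment-specific and left as a placeholder variable here, and Get-Mailbox is used purely as an illustration of a cmdlet that exists both locally and in the cloud:

```powershell
# Connect to Office 365 and import the remote cmdlets with an 'o365' prefix.
# $ConnectionUri is a placeholder - use the URI from your deployment instructions.
$LiveCred = Get-Credential
$Session = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri $ConnectionUri -Credential $LiveCred `
    -Authentication Basic -AllowRedirection
Import-PSSession $Session -Prefix o365

# Local and remote versions of the same cmdlet now coexist:
Get-Mailbox someuser          # runs against the local Exchange organization
Get-o365Mailbox someuser      # runs against Office 365

# Clean up when done
Remove-PSSession $Session
```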

Windows VPN Client and local DNS resolution

Posted: August 25, 2011 in Security

Typically when configuring a remote access VPN, the goal is for DNS requests to be resolved by DNS servers on the remote/server side of the VPN connection.

This is either because the connection is from a less trusted network to a more trusted one – i.e. from home to office – and so split tunnels are not allowed, or because, even when split tunnels are allowed, the local client network is typically simple enough to use broadcast name resolution while the remote network is complex and requires DNS.

Since this is the predominant configuration, this is how most VPN clients are configured and many, including the Windows VPN client, do not offer an option to change this configuration.

In some cases, however, it makes sense for DNS resolution to remain local to the client side of the VPN connection. This is useful when the connection is from a more trusted, more complex network to a less trusted, simpler network. For example, a connection from the office network to a lab network or to a home network would benefit from keeping DNS resolution on the client side of the connection.

I ran into this problem trying to configure a connection to my lab that would allow me to keep the connection open while working on the office network.

Unfortunately, this isn’t easy to do with the VPN client included in Windows Vista/7 (the VPN client in Windows XP had an issue that resulted in a side effect with this exact configuration). While Windows does allow configuring the binding order of interfaces using the ‘Advanced Settings’ menu option in the ‘Network Connections’ control panel, changing the binding order for ‘[Remote Access Connections]’ doesn’t seem to have any impact.

The binding order is stored in the registry in the following location: HKLM\System\CurrentControlSet\Services\Tcpip\Linkage\Bind. The list includes all the device GUIDs for network adapters and active connections in the binding priority order.

When working with the registry key, the following facts emerge:

  • Changing the order of the GUIDs in the registry does impact the binding order, including for VPN connections
  • Any changes to the key take effect immediately
  • When a VPN connection is completed, the GUID for the connection is added to the top of the bind order if it does not already exist
  • When a VPN connection is closed, the GUID entry for the connection is removed
  • If there are multiple GUID entries for the connection, only one is removed when the connection is closed

This mechanism creates the possibility of the following workaround:

  1. Examine the Bind registry key
  2. Connect to your VPN connection
  3. Check the Bind key again and copy the GUID that was added to the top of the list
  4. Paste the GUID entry at the bottom of the list 20 times
  5. Export the key and clean up the exported file to only include the bind key

The result is a key that will support the desired behavior. Every time a VPN connection is established, since the GUID is present, it will not be added. Since the GUID is at the bottom, DNS resolution will be done locally to the client. When the connection is disconnected, one GUID entry will be removed. After 20 VPN connections, the exported registry file can be used to reimport the key.

Of course, you can paste the GUID more times to reduce how often you have to reimport the key.
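The manual steps above can also be semi-automated. Here is a hedged PowerShell sketch, assuming the VPN connection is currently up (so its GUID sits at the top of the Bind list) and that the shell is running elevated:

```powershell
# Move the VPN connection's GUID to the bottom of the TCP/IP bind order
# and pad it with 20 copies, so DNS resolution stays local to the client.
$bindKey = 'HKLM:\System\CurrentControlSet\Services\Tcpip\Linkage'
$bind = (Get-ItemProperty -Path $bindKey -Name Bind).Bind

# With the VPN connected, the most recently added GUID is the first entry
$vpnGuid = $bind[0]

# Rebuild the list: all other entries first, then 20 copies of the VPN GUID
$newBind = $bind[1..($bind.Count - 1)] + (@($vpnGuid) * 20)
Set-ItemProperty -Path $bindKey -Name Bind -Value $newBind

# Export the key for reimport after 20 connect/disconnect cycles
reg export "HKLM\System\CurrentControlSet\Services\Tcpip\Linkage" C:\Temp\bind.reg /y
```

As with the manual process, trim the exported file down to just the bind key before reimporting it.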

It’s also important to remember to redo this procedure if there are any changes to network adapters.