Archive for the ‘Security’ Category

Scripted Microsoft Patch Removal

Posted: September 19, 2011 in Scripting, Security

Many patch management systems have the ability to uninstall previously deployed patches. This functionality is typically used when a conflict with the patch is discovered after deployment.

Unfortunately, many of the automated removal mechanisms depend on the patch developer including an uninstall routine with each patch. When a patch doesn’t include one, the management platform can’t remove it.

As a workaround, here are a couple of scripts that provide a semi-automated way to remove patches remotely.

First, this script lists the patches installed on a system in the past X days, where X is a command-line parameter:


#Get list of patches installed in the past X days
param($Argument1,$Argument2)
Add-Type -AssemblyName Microsoft.VisualBasic   #load the assembly used for the IsNumeric check below

$ComputerName = $Argument1
$intDays = $Argument2
$bolDaysValid = $true

#Validate second parameter - must be a number between 1 and 1000
If ([Microsoft.VisualBasic.Information]::IsNumeric($intDays)) {
    If ($intDays -lt 1 -or $intDays -gt 1000) {$bolDaysvalid = $false }
    else {$bolDaysValid = $true}
}
else {
    $bolDaysValid = $false
}

If ($bolDaysValid -eq $false) {
    write-host "Invalid days parameter. Please use a number between 1 and 1000."
    write-host "Example: .\PatchList.ps1 mycomputer 30"
    write-host "Exiting"
    Exit
}

#Validate first parameter - WMI call to get OS
$OS = Get-WmiObject -Class win32_OperatingSystem -namespace "root\CIMV2" -ComputerName $computerName -ErrorAction silentlycontinue
if ($OS -eq $NULL) {
    write-host "Can't access computer $ComputerName. Exiting."
    Exit
}
#Get list of updates
Get-WmiObject -Computername $ComputerName Win32_QuickFixEngineering | ? {$_.InstalledOn} | where { (Get-date($_.Installedon)) -gt (get-date).adddays(-$intDays) }
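
For example, assuming the script is saved as PatchList.ps1 (the name used in its usage message), listing everything installed in the last 30 days on a server named SERVER01 (a placeholder name) would look like this:

.\PatchList.ps1 SERVER01 30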


Then, once the research is complete and the offending patch is found, the following script can be used to remotely remove the patch.


param($Argument1,$Argument2)
Add-Type -AssemblyName Microsoft.VisualBasic

$computername=$Argument1
$hotfixid=[string]$Argument2

#Second parameter validation - make sure it starts with 'KB' followed by a number
If (-not (($hotfixid.substring(0,2) -eq "KB") -and ([Microsoft.VisualBasic.Information]::IsNumeric($hotfixid.substring(2))))) {
    write-host "Invalid hotfix parameter. Please use 'KB' and the article number."
    write-host "Example: .\PatchRemove.ps1 mycomputer KB976432"
    write-host "Exiting"
    Exit
}

#First parameter validation - get OS to be used later. If the call fails, it's a bad parameter
$OS = Get-WmiObject -Class win32_OperatingSystem -namespace "root\CIMV2" -ComputerName $computerName -ErrorAction silentlycontinue
if ($OS -eq $NULL) {
    write-host "Can't access computer $ComputerName. Exiting."
    Exit
}
#Get hotfix list from target computer
$hotfixes = Get-WmiObject -ComputerName $computername -Class Win32_QuickFixEngineering |select hotfixid           

#Search for requested hotfix
if($hotfixes -match $hotfixID) {
    $hotfixNum = $HotfixID.Replace("KB","")
    Write-Host "Found the hotfix KB$HotfixNum"
    Write-Host "Uninstalling the hotfix"
    #Windows 2008/R2 use WUSA to uninstall the patch
    if ($OS.Version -like "6*") {
        $UninstallString = "cmd.exe /c wusa.exe /uninstall /KB:$hotfixNum /quiet /norestart"
        $strProcess = "wusa"
    }
    #Windows 2003 uses spuninst in the $NTuninstall folder to uninstall the patch
    elseif ($OS.Version -like "5*") {
        $colFiles = Get-WMIObject -ComputerName $computername -Class CIM_DataFile -Filter "Name=`"C:\\Windows\\`$NtUninstall$HotFixID`$\\spuninst\\spuninst.exe`""
        if ($colfiles.FileName -eq $NULL) {
            Write-Host "Could not find the removal script, please remove the hotfix manually."
            Exit
        }
        else {
            $UninstallString = "C:\Windows\`$NtUninstallKB$hotfixNum`$\spuninst\spuninst.exe /quiet /z"
            $strProcess = "spuninst"
        }
    }
    #Send removal command
    ([WMICLASS]"\\$computername\ROOT\CIMV2:win32_process").Create($UninstallString) | out-null           
    #Wait for removal to finish
    while (@(Get-Process $strProcess -computername $computername -ErrorAction SilentlyContinue).Count -ne 0) {
        Start-Sleep 3
        Write-Host "Waiting for update removal to finish …"
    }
    #Test removal by getting hotfix list again
    $afterhotfixes = Get-WmiObject -ComputerName $computername -Class Win32_QuickFixEngineering |select hotfixid           
    if($afterhotfixes -match $hotfixID) {
        write-host "Uninstallation of $hotfixID failed"
    }
    else {
        write-host "Uninstallation of $hotfixID succeeded"
    }
}
else {
    write-host "Hotfix $hotfixID not found"
    return
}
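
Usage follows the same pattern. Assuming the script is saved as PatchRemove.ps1 (the name used in its usage message), removing the KB976432 example from a server named SERVER01 (a placeholder name) would look like this:

.\PatchRemove.ps1 SERVER01 KB976432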


Note that these scripts were tested on servers running Windows Server 2003 and later.

Windows VPN Client and local DNS resolution

Posted: August 25, 2011 in Security

Typically when configuring a remote access VPN, the goal is for DNS requests to be resolved by DNS servers on the remote/server side of the VPN connection.

This is usually because the connection is from a less trusted network to a more trusted one – i.e. from home to the office – so split tunnels are not allowed. Even when split tunnels are allowed, the local client network is typically simple enough to rely on broadcast name resolution, while the more complex remote network requires DNS.

Since this is the predominant scenario, it is how most VPN clients are configured, and many of them, including the Windows VPN client, do not offer an option to change this behavior.

In some cases, however, it makes sense for DNS resolution to remain local to the client side of the VPN connection. This is useful when the connection is from a more trusted, more complex network to a less trusted, simpler one. For example, a connection from the office network to a lab network or a home network benefits from keeping DNS resolution on the client side of the connection.

I ran into this problem trying to configure a connection to my lab that would allow me to keep the connection open while working on the office network.

Unfortunately, this isn’t easy to do with the VPN client included in Windows Vista/7 (the Windows XP VPN client had an issue that, as a side effect, produced this exact behavior). While Windows does allow configuring the binding order of different interfaces using the ‘Advanced Settings’ menu option in the ‘Network Connections’ control panel, changing the binding order for ‘[Remote Access Connections]’ doesn’t seem to have any impact.

The binding order is stored in the registry in the following location: HKLM\System\CurrentControlSet\Services\Tcpip\Linkage\Bind. The list includes all the device GUIDs for network adapters and active connections in the binding priority order.

When working with the registry key, the following facts emerge:

  • Changing the order of the GUIDs in the registry does impact the binding order, including for VPN connections
  • Any changes to the key take effect immediately
  • When a VPN connection is completed, the GUID for the connection is added to the top of the bind order if it does not already exist
  • When a VPN connection is closed, the GUID entry for the connection is removed
  • If there are multiple GUID entries for the connection, only one is removed when the connection is closed

This mechanism creates the possibility of the following workaround:

  1. Examine the Bind registry key
  2. Connect to your VPN connection
  3. Check the Bind key again and copy the GUID that was added to the top of the list
  4. Paste the GUID entry at the bottom of the list 20 times
  5. Export the key and clean up the exported file to only include the bind key

The result is a key that will support the desired behavior. Every time a VPN connection is established, since the GUID is present, it will not be added. Since the GUID is at the bottom, DNS resolution will be done locally to the client. When the connection is disconnected, one GUID entry will be removed. After 20 VPN connections, the exported registry file can be used to reimport the key.

Of course, you can paste the GUID more times to reduce how often you have to reimport the key.
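
If you prefer to script steps 3 and 4 rather than editing the key by hand, a rough PowerShell sketch along these lines should work (run it elevated while the VPN connection is up; the GUID shown is a placeholder for the entry your connection adds to the top of the list):

$path = 'HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Linkage'
$vpnGuid = '\Device\{00000000-0000-0000-0000-000000000000}'   #replace with your connection's entry

#Read the current multi-string value and drop the VPN entry from its current position
$bind = (Get-ItemProperty -Path $path -Name Bind).Bind
$bind = @($bind | Where-Object { $_ -ne $vpnGuid })

#Append 20 copies of the VPN entry at the bottom and write the value back
$bind += (1..20 | ForEach-Object { $vpnGuid })
Set-ItemProperty -Path $path -Name Bind -Value $bind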

Also, remember to redo this procedure if there are any changes to network adapters.

Part III: Redirection (virtual directory and SSL)

Typically when users want to access Outlook Web App (OWA), they are unlikely to type the correct URL: https://webmail.domain.com/owa or /exchange. To accommodate the URL combinations users commonly enter – typically HTTP instead of HTTPS and no virtual directory name – we can employ redirection.

To this end, there are two types of redirection:

The first redirects HTTP traffic to HTTPS and is very easy to accomplish in a TMG environment. If you refer to the previous post, you will see a configuration step for the web listener that accomplishes this function. In the ‘Connections’ tab for the listener, HTTP connections are allowed and redirected to HTTPS. Nice and easy.

The second form of redirection helps redirect URLs that reference the root of the web site, i.e. https://webmail.domain.com, to the correct virtual directory. In an Exchange 2007/2010 environment, the destination should be https://webmail.domain.com/owa. If coexistence with legacy versions (Exchange 2003) is needed, the destination will be https://webmail.domain.com/exchange.

In either case, we will use the ‘HTTP redirection’ feature of IIS 7.5 to configure the required setting using the following process on each CAS server:

  1. Open IIS Manager and navigate to the Default Web Site
  2. Open the HTTP Redirect feature and configure the following options:
  • Redirect all requests to this destination: https://webmail.domain.com/owa
  • Redirect all requests to exact destination: unchecked
  • Only redirect requests to content in this directory: checked
  • Status code: Found (302)

While those are the required settings for the web site, setting these configuration options will automatically set the same options on any sub folders and virtual directories that do not currently have redirection configured. Since we only want the redirection on the web site, we need to remove the redirection from all sub folders.

This process results in a problem: there are three directories that do need to be redirected and those are ‘/Exchange’, ‘/Exchweb’ and ‘/Public’. These three virtual directories must be redirected to ‘/owa’ in order for the OWA service to function correctly.

The problem is evident right after turning off redirection on the ‘/owa’ virtual directory using the UI – this change also disables redirection on the three folders listed above. If you re-enable redirection on any of the three, it will also re-enable redirection on ‘/owa’. This confusing and frustrating loop creates a series of unusable configurations that aren’t simple to correct.

The problem occurs because once the settings are configured using the UI, they are stored in the web.config file for the virtual directory and the four directories discussed all share a single web.config file. If the settings are configured using appcmd.exe the information is stored elsewhere (presumably in the metabase) and the problem is resolved.

The process to correct the issue is as follows:

  1. Remove the HttpRedirect section from the web.config file for /owa.
  2. Use the following commands to configure the correct settings for all folders:

cd %windir%\system32\inetsrv
appcmd set config "Default Web Site/Exchange" /section:httpredirect /enabled:true -commit:apphost
appcmd set config "Default Web Site/Exchweb" /section:httpredirect /enabled:true -commit:apphost
appcmd set config "Default Web Site/Public" /section:httpredirect /enabled:true -commit:apphost

appcmd set config "Default Web Site/owa" /section:httpredirect /enabled:false -commit:apphost

Finally, while you are configuring virtual directories on each CAS server, confirm that the authentication settings are set to Basic Authentication for all of the virtual directories – OWA, ECP, ActiveSync, and Outlook Anywhere (configured on the properties of the CAS server). I recommend viewing and making changes to these settings through the Exchange Management Console or Shell.
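
From the Exchange Management Shell, a quick way to review those settings is something like the following (a sketch only; CAS01 is a placeholder for your CAS server name):

Get-OwaVirtualDirectory -Server CAS01 | Format-List Name,BasicAuthentication,FormsAuthentication
Get-EcpVirtualDirectory -Server CAS01 | Format-List Name,BasicAuthentication,FormsAuthentication
Get-ActiveSyncVirtualDirectory -Server CAS01 | Format-List Name,BasicAuthEnabled
Get-OutlookAnywhere -Server CAS01 | Format-List Name,ClientAuthenticationMethod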

That should be all that’s needed. The result of following the instructions in all three posts of this series is an Exchange 2010 system published with TMG using a single public IP address, with a seamless user experience.

A big thanks and credit for various aspects of this post go to a couple of colleagues at Convergent Computing for helping discover, test and document this information. Thank you Yasu SabaLin and Aman Ayaz.

Hope this is helpful and, as always, please post any comments or questions.

Part II: Creating the publishing rules

If you’ve read part I of this series, you’ve hopefully got your TMG standalone array up and ready to publish Exchange 2010 services. Before creating the publishing rules themselves, we need to address authentication.

Since our TMG servers are members of a workgroup in the DMZ (same applies if using an AD domain in the DMZ with no trusts to the internal AD forest), we will need to configure a method of authentication. I typically prefer the use of LDAPS since it is supported by default on AD domain controllers and is very simple to configure.

Setting up LDAPS Infrastructure

LDAPS uses a secure lookup to validate users against the AD domain. SSL is used to secure LDAP (port 636 instead of 389) or global catalog (port 3269 instead of 3268) queries between TMG and selected DCs on the internal network.

The setup steps are as follows:

  1. Configure select DCs with a certificate – any DCs that are participating in LDAPS must have a server authentication certificate. This certificate would be deployed automatically by an AD based enterprise CA.
  2. Export the trusted root certificate from one of the DCs and copy the file to each TMG server.
  3. Import the certificate into the ‘Trusted Root Certificate authorities’ folder on each TMG server.
  4. Ensure that firewalls allow traffic from the TMG servers to the selected DCs over ports 636 and 3269.

     A few tips to assist with this task:

  • If you just deployed your CA, run the command ‘certutil -pulse’ on the DC to trigger enrollment and issue the certificate.
  • Reboot the domain controller after issuing the certificate to activate listeners on ports 636 (secure LDAP) and 3269 (secure global catalog). A quick connectivity check is sketched below.
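
To confirm that the required ports are reachable from a TMG server, a simple PowerShell check like the one below can be used (a sketch only; dc01.domain.com is a placeholder for one of your selected DCs):

foreach ($port in 636,3269) {
    #Attempt a TCP connection to the DC on each secure port
    $client = New-Object System.Net.Sockets.TcpClient
    try {
        $client.Connect("dc01.domain.com", $port)
        Write-Host "Port $port is reachable"
    }
    catch {
        Write-Host "Port $port is NOT reachable"
    }
    finally {
        $client.Close()
    }
}
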
Configure LDAPS authentication

Within the TMG console, follow these steps to configure LDAPS:

  • Right click on the ‘Web Access Policy’ node.
  • Select ‘Configure (Related)’ and then ‘RADIUS Server Settings’
  • Select the ‘LDAP Servers’ tab and click the ‘Add’ button to create an LDAP set
  • Enter a name for the LDAP set (e.g. Internal DCs), the domain name and credentials used for authentication

NOTE: this account needs minimal rights in AD and a non-expiring password

  • Make sure that ‘Use Global Catalog (GC)’ and ‘Connect LDAP servers over secure connection’ are checked.
  • Add the selected internal DCs that were used in the preparation steps.
  • Define the mask for this LDAP set by entering ‘domain\*’ as the login expression, selecting the LDAP set you just created, and clicking OK
  • Define another mask by entering ‘*@domain.com’ as the login expression for the same LDAP set and clicking OK
  • Repeat the above steps for any additional domains as needed
Publish Exchange Services

Publishing the services is fairly simple by following the wizard. Here are the steps:

First, to create the listener that will be used for all of the rules:

  • Select the ‘Firewall Policy’ folder and the Toolbox tab on the right.
  • Navigate to ‘Network Objects’ and right click ‘Web Listener’ to create a new listener with the following info:
  1. Name: Exchange 2010 Listener
  2. Require SSL secured connections with clients
  3. Web Listener IP Addresses: All Networks (and Local Host)
  4. Use a single certificate for this Web Listener: select the certificate for webmail.domain.com
  5. Authentication Settings: HTML Form Authentication and LDAP (Active Directory) client authentication
  6. Enable SSO for Web sites published with this web listener with the name: domain.com
  • Open the properties of the recently created listener, select the ‘Connections’ tab and configure the following:
  1. Enable HTTP connections on port 80
  2. Redirect all traffic from HTTP to HTTPS

Next, to publish OWA:

  • Select the Firewall Policy section and, from the ‘Tasks’ tab on the right, click on ‘Publish Exchange Web Client Access’
  • Name the rule ‘Exchange 2010 OWA’
  • Select the ‘Exchange Server 2010’ version option and check ‘Outlook Web Access’
  • Accept the default selection of ‘Publish a single web site or load balancer’
  • Accept the default selection of ‘Use SSL to connect to the published server…’
  • On the ‘Internal Publishing Details’ page, enter: webmail.domain.com
  • Check ‘Use a computer name or IP address to connect to the published server’ and enter the CAS array FQDN
  • Enter the ‘Public Name’ of ‘webmail.domain.com’
  • On the ‘Select Web Listener’ page, select the ‘Exchange 2010 Listener’
  • Accept the default authentication delegation using ‘Basic authentication’
  • Accept the default of ‘All Authenticated Users’ and complete the wizard

Edit the recently created rule, select the ‘Paths’ tab and add a new path as follows:

  • Folder: ‘/’
  • External path: same as published folder

Next, follow the same steps to create the ActiveSync publishing rule. Change only the name and select the ‘Exchange ActiveSync’ option on the ‘Select Services’ page. Adding the root folder path isn’t necessary for this rule.

Then, following the same process, create the Outlook Anywhere rule with an appropriate name and the ‘Outlook Anywhere (RPC/HTTPS)’ services option. Adding the root folder path isn’t necessary for this rule.

Finally, we need to create a rule for Autodiscover, since that service uses a separate URL and some different options.

  • Right click on the ‘Exchange 2010 Outlook Anywhere’ Rule and select ‘Copy’
  • Right click on the next rule down and select ‘Paste’
  • Edit the new rule and make the following changes:
  1. Name: Exchange 2010 Autodiscover
  2. Public Name: autodiscover.domain.com
  3. Users: All Users (remove Authenticated Users)
  4. Authentication delegation: No delegation, but client may authenticate directly

All the services are now published and the only thing remaining is to improve the user experience by configuring redirection to allow any URL to be entered in the browser. The third and final part of this post addresses redirection.

Part I: Introduction and Creating the array

During a recent Exchange 2010 migration project, I found that while there are many resources online to assist with publishing Exchange 2010 using TMG, none covered my scenario very well and most were missing details that were needed to make the solution work as I desired and intended.

Since I believe that this specific scenario is common, I will outline the specific details of the installation in a series of posts covering the whole process as well as a couple of sticking points that require a few extra tricks to address.

Please note that these guides are not intended as an exhaustive step-by-step manual for this process but rather as a set of tips, tricks and guidance for anyone who is already familiar with Exchange 2010 and TMG and the overall publishing process.

The environment

Exchange 2010 with Service Pack 1 is deployed using several mailbox servers in a single DAG hosting all mailboxes. The HT and CAS roles are hosted on two shared servers. The servers are load balanced across all ports. A CAS array was created (along with a DNS record) and points to the load-balancing VIP.

Note: This post does not include a detailed discussion of load balancing. The information provided should apply equally well to WNLB and a hardware load balancer.

Two TMG 2010 Enterprise servers are deployed in a DMZ with a single interface to be used as reverse proxy servers only. The TMG servers are load balanced across ports 80 and 443.

The servers are all protected using a SAN certificate that includes the intended OWA/OA/EAS URL (webmail.domain.com), the Autodiscover URL (autodiscover.domain.com) and the FQDNs of the CAS/HT servers.

Deploying TMG – Installing the array

Extending the high availability options provided by Exchange 2010 to TMG is a key part of any implementation of the platform. There is no point in eliminating single points of failure in the Exchange system if one of the primary access methods (OWA, ActiveSync, etc.) remains a single point of failure.

TMG provides three options for high availability:

  • Manual – this option includes multiple TMG Standard edition servers that are load balanced but not aware of each other. Rules are synchronized manually across servers.
  • Partially automated – by leveraging the Enterprise edition of TMG, the servers can share an array configuration database that is stored on one server and replicated to the other. This option is known as a standalone array. A manual process is required to failover the configuration database to the other servers.
  • Fully automated – An enterprise array can be created by offloading the configuration database to another system (or ideally, multiple redundant systems). This configuration is a fully automated cluster.

I typically prefer the partially automated solution as it doesn’t require any additional systems but avoids the potential for user error and misconfiguration. Since the configuration information is loaded into memory on each TMG server, access to the database itself is only needed when making configuration changes so a manual failover is a very acceptable risk.

Preparation steps:
  • Confirm that each node can resolve the FQDN of the other node (by using DNS or hosts file)
  • Confirm that the user account you are logged on as is the same on both nodes (same name and password)
  • Confirm that both TMG servers are joined to the same workgroup
Certificate Configuration

If the servers are part of a DMZ domain, you can use an Enterprise CA to configure certificates. Since in my experience a DMZ CA is rare, this post uses self-signed certificates to authenticate the TMG servers to each other.

Generate the certificates using the makecert utility:

makecert -pe -n "CN=TMGArrayRootCA" -ss my -sr LocalMachine -a sha1 -sky signature -r "TMG Array Root CA"

makecert -pe -n "CN=TMG01.dmz.com" -ss my -sr LocalMachine -a sha1 -sky exchange -eku 1.3.6.1.5.5.7.3.1 -in "TMGArrayRootCA" -is MY -ir LocalMachine -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12 "TMG01.cer"

makecert -pe -n "CN=TMG02.dmz.com" -ss my -sr LocalMachine -a sha1 -sky exchange -eku 1.3.6.1.5.5.7.3.1 -in "TMGArrayRootCA" -is MY -ir LocalMachine -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12 "TMG02.cer"

Once the certificates are generated, use the Certificates MMC to export the certificates, copy them to each TMG server and install them as follows:

  • Import the TMGArrayRootCA certificate into the trusted root certificate folder
  • Import TMG01 and TMG02 into the personal certificate folder

Finally, using the TMG console, browse to System and select the Install Server Certificate task in the action bar on the right.

To verify the certificate installation, use the Certificates MMC focused on a service account and select ISASTGCTRL as the service. The personal folder should contain the certificate of the active array manager.

Array Creation
  • On node 2 (TMG02.dmz.com), open the TMG console and select ‘Join Array’ from the tasks on the server node
  • Select the standalone array option, point to the IP of node 1 and enter the administrator credentials.
  • Confirm that the root CA is already installed on the server and complete the installation. This process will create the array and join the second node to it.
  • Restart all servers (the array manager and the managed array members)
  • After each system comes back up, open the TMG console and open the properties of the server node. In the intra-array credentials tab select the workgroup option and enter the required credentials

This completes the TMG installation and prepares the environment for publishing Exchange services, the topic of part 2 of this post.

 

Local group membership is used to manage access for a variety of reasons. Applications leverage local groups for access to system resources. Protective systems and support staff also require specific privileges that are granted using local groups. Managing the membership of these groups therefore becomes an important goal in meeting business objectives in the areas of security, manageability and availability.

The most common need is managing membership of the local Administrators group. This high-privilege group, which in many cases includes the ‘Domain Users’ group, is a potential security problem and needs to be restricted to protect the system. Often the desired membership is limited to the user who ‘owns’ the system along with support personnel, and locked down otherwise to reduce the ability of malicious individuals and code to compromise the system.

My example and discussion will focus on the need to control the local Administrators group but most of the points will apply to other scenarios as well.

Group policy offers several approaches to meeting this goal and of course, they each work well in different scenarios. Let’s dig into the options and when they should be used or avoided.

Restricted Groups

The first mechanism I’m going to cover has been around in Group Policy for many years but is still frequently misunderstood.

The restricted groups configuration node can be found under Computer Configuration\Policies\Windows Settings\Security Settings\Restricted Groups. The component is configured by adding a group (you can either browse or type in a group name) and then configuring the members of the group or the groups this group is a member of.

This mechanism has one very important nuance (important enough to keep someone from getting fired!). If the group membership is controlled (using the top part of the configuration dialog), the existing group membership will be replaced by the configuration. This means that potentially important existing security principals are removed, that maintaining exceptions for specific machines is complex and that using multiple GPOs to configure this mechanism in a cumulative manner isn’t possible.

As a result, controlling group membership directly is rare and typically only used in environments where complete control is required and no further modification to the group’s membership is needed or anticipated.

The lower half of the configuration dialog, or the indirect configuration method, is much more useful in my experience. The behavior of this component is cumulative, so any configuration changes are added to existing group membership.

Leveraging restricted groups to manage the Administrators group will therefore involve the following steps:

  • Create an AD group to contain privileged accounts that will be added to the local Administrators group
  • Create a GPO for local group management
  • Add the AD group created to the restricted groups interface
  • Add the local administrators group to the AD group configuration within restricted groups using the bottom, or ‘Member of’, section
  • Refresh the policy and verify the resulting group membership (a quick check is sketched below)
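
To verify the result on a target system, something like the following can be used (a sketch; PC01 is a placeholder computer name for the remote check):

#Force a policy refresh, then list local Administrators membership on the local machine
gpupdate /force
net localgroup Administrators

#Or check a remote machine over WMI
Get-WmiObject -ComputerName PC01 -Query "ASSOCIATORS OF {Win32_Group.Domain='PC01',Name='Administrators'} WHERE AssocClass=Win32_GroupUser Role=GroupComponent" |
    Select-Object -ExpandProperty Name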

Using this approach, a single GPO can contain multiple restricted groups entries and can manage local group membership for a collection of systems. This allows a decent level of basic local group management, but it does leave the feeling that something easier to use, more powerful and more flexible should be available these days. This is where group policy preferences come in…

Local Users and Groups Extension

The introduction of group policy preferences (GPPs) with Windows Server 2008/Vista brought a whole new mechanism for managing local groups (and users). GPPs include an extension for managing local users and groups that provides a lot of control and flexibility. Let’s take a look at what is possible:

First, the extension exists under both the user and computer configuration nodes under Preferences\Control Panel Settings\Local Users and Groups with some benefits to the user section that will be discussed below. Note that when using the user configuration section, the extension can be configured to be limited by the permissions of the user by selecting ‘Run in logged-on user’s security context’ on the Common tab.

Once the extension is selected and a new group is added, the administrator can use the interface to rename the group, remove existing users and groups from the membership list and add or remove specific security principals to/from the group’s membership.

In addition to these operations, the extension takes advantage of common (and powerful) GPP features like ‘Apply once’, item-level targeting and policy actions such as update/replace/create/delete (which allows removal of a group or user account).

Another great feature is available when using the user configuration version of the extension, which can automatically manage membership for the ‘current user’ through the GPO, making it easy to add just the logged-on user to a local group.

In my opinion, all local user and group membership administration should be performed using GPPs and the Local Users and Groups extension. The improved interface, granular control and benefit of GPP mechanisms makes this the ideal choice for the task.

For more information about GPPs and what they require, check out my previous blog post: https://rdpfiles.com/2009/11/13/group-policy-preferences-aka-gpps-2/.

Many security professionals agree that the signature-based approach used by anti-virus, anti-malware and anti-spyware products, also known as ‘blacklisting’, doesn’t work. Blacklisting products tend to be in a constant state of chasing new types and examples of malware, usually a few steps behind. They are also inherently ill-suited to handle sophisticated malware, polymorphic threats (those that automatically alter their code) and recently exposed or published malware (0-day threats).

To the rescue comes a relatively new category of software product, application whitelisting. Sometimes called application control, this product category introduces a low level driver on client systems which monitors activity on the system and can prevent execution of programs that have not been approved in advance. The approval is a combination of multiple policy components delivered by a management server. Since the approach requires all executed software to be approved, it has no need to predict which software is malicious.

As a result, this type of approach and software also has the potential to be much more disruptive, especially in organizations that support a relatively open client platform.

Once you understand the problem and identify a need, it’s time to take a look at potential products that meet the need. There are several vendors that deliver products in this space and they vary quite a bit in their approach, cost and complexity. As a result it helps to first establish your selection criteria. Key factors to consider should include:

  • Flexibility of policy controls – the primary challenge in implementing application whitelisting is the crafting of an approval configuration that meets constantly changing security objectives while minimizing disruption of constantly changing productivity tasks. In order to allow the organization to strike this delicate balance and maintain it over time, the selected solution must have great flexibility in configuring approvals. Typical mechanisms include approvals based on digital signatures, file metadata, file hash, file path, properties of running process, trusted software installers, trusted software directories and even external ratings of known applications.
  • Management and administration tools – as a key security system, application control solutions must complement protection capabilities with enterprise class tools for management and administration. Provided consoles, tools and APIs must be flexible and easy to use to support operators with different privilege levels, easy access to information and configuration controls and easy to use ‘master switches’ in the event of an emergency.
  • Monitoring and auditing – A vital complement to management tools is a robust auditing and monitoring capability. This capability can be delivered either within the solution or as ready integration interfaces into existing monitoring and auditing frameworks. The system must audit and record any administrative changes as well as key events on protected systems and agents. Monitoring compliance and current issues must be possible and delivered in a format that serves the needs of engineering and management staff.
  • Agent tamper protection – most organizations have many processes, including interactive user sessions, that run with administrative privileges on client systems. To prevent malicious code or curious users from disabling or tampering with the protective agent, the solution architecture must include sophisticated tamper protection to make such changes more difficult.
  • Operational modes – in my experience, all of the products in this space offer a ‘monitor only’ mode that allows the administrator to monitor the environment, determine how applications are used throughout the environment and assess what the impact of policy enforcement would be. In addition, almost all of the products I’ve reviewed also offer an enforcement mode that ensures that the client operates within the policy based framework. Some products differentiate themselves by offering more than one type of enforcement mode that allows the user to interact with the system (for example, prompting the user for action in some cases).
  • Supported platforms – in today’s IT environment, many organizations support multiple client operating systems and while they may not represent the same risk, most information security strategies strive to achieve parity in the security of supported platforms. Identifying a solution that addresses all supported operating systems should be a key objective.
  • Vendor support and viability – since this product category is very young, the amount of available public knowledge on the technology and leading products is minimal and any implementation will rely heavily on the vendor. It is therefore important to make sure that the vendor will continue to be around and provide the required level of support.

When looking for products to include in the selection process, a good place to start is this set of introductory articles from InfoWorld magazine: http://www.infoworld.com/d/security-central/test-center-review-whitelisting-security-offers-salvation-835. The article and associated reviews are somewhat out of date so the details about each product should be validated using additional sources but the article provides a good starting point to identify the key vendors in this space. For a recent effort, I focused on the following three products:

  • Microsoft AppLocker – For organizations that are either on or migrating to current Microsoft operating systems (Windows Server 2008 R2 and Windows 7), this option is attractive primarily because it is free. AppLocker is a component of these operating systems and is easy to configure using group policies. Another strength is a powerful approval configuration mechanism that leverages digital signatures and file metadata to provide granular control for approving applications that are properly signed and contain metadata.
    The product’s primary weakness is the lack of an administration console and poor visibility into compliance and auditing – the data is there in the event logs of each client system but there is no easy native way to collect and present the data (a rough collection sketch follows this list).
  • Bit9 Parity Suite – The solution from a leading specialty vendor in this space provides flexible policy controls and an excellent searchable knowledge base called FileAdvisor that provides sophisticated ratings for millions of applications based on several factors to help determine if the application is malicious or safe.
  • McAfee Application Control – This product is the result of an acquisition of SolidCore Systems in 2009. The product approaches the solution from the perspective of creating an initial trusted configuration of a system and closely managing any changes from that point forward. This approach is a great fit for servers and for environments that allow a restricted number of applications on client systems.
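
As a partial workaround for the AppLocker visibility gap mentioned above, the enforcement and audit events can be pulled from each client’s event log with a bit of PowerShell. A rough sketch (PC01 is a placeholder computer name, and remote event log access must be enabled):

#Collect recent AppLocker EXE/DLL events from a client system
Get-WinEvent -ComputerName PC01 -LogName "Microsoft-Windows-AppLocker/EXE and DLL" -MaxEvents 200 |
    Select-Object TimeCreated, Id, LevelDisplayName, Message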

After selecting the product that best fits your needs, the project can start. I recommend focusing on the following key high level tasks:

  • Platform design – this system will be collecting information about any application that is written and executed on all managed systems. Since the volume of data can be significant, the system must be designed and architected correctly. In most cases only 1 or 2 servers are required but the capacity, placement and component configuration should be discussed with the vendor, designed on paper and then tested in a lab environment.
  • Approval configuration – all of the solutions in this space have several mechanisms to approve applications. Work with the platform in a lab/test configuration in order to understand the behavior of each mechanism and design the configuration of the platform to meet security objectives and business requirements. I recommend focusing on the ability to identify and correctly configure applications and processes that will automatically create or deploy other approved applications – compilers, software distribution systems, updaters from Adobe/Google/Mozilla, etc.
  • Process design – due to the potentially disruptive nature of an application control system, the administration and support processes around the platform must be designed carefully and thoroughly to ensure a successful project. The processes must involve teams from support departments, client engineering, security and IT management. The interaction between users and the systems must be clearly understood and incorporated into proposed processes. Input from various business departments and users should be solicited to make sure that different needs, job responsibilities and user environments are identified and addressed by the solution.
  • Client deployment and monitoring – the first step in testing the proposed design and configuration in the ‘field’ is to deploy the client agent to managed systems in a ‘monitor only’ mode and to take the time to collect and analyze the resulting data while stabilizing the infrastructure. Leverage the information collected in this phase to validate assumptions and adjust plans as needed.
  • Enforcement testing and pilot – planning is vital but it can only take you so far in determining the best way to approach a deployment and the impact on your user community. The rest must come from careful testing and a staged deployment. Carefully select early adopter users to ensure that there is a good distribution of test scenarios. Over-allocate resources for the testing and pilot phases to ensure that the phases are concluded quickly and that participating users have a positive experience with the system and the support processes. And finally, when needed, make changes and corrections to designs, plans, configurations and processes to incorporate lessons learned from early adopters. Assume that this feedback cycle will continue throughout the deployment of the system and its lifecycle.

Hopefully you can use this information to help kick start your whitelisting project (I know it would have been useful to me at the start of some recent projects) and please comment if you have any questions or anything to add.