Archive for April, 2011

There are many reasons to create a custom ADM/ADMX template: managing settings for software that doesn’t include GPO support, modifying an OS setting that isn’t part of the standard templates, disabling or enabling a specific component (e.g. IPv6), or extending the features of existing policy settings (e.g. redirecting user shell folders).

All of these have one thing in common: they accomplish their goal by modifying registry keys, which is the core function of a custom ADM or ADMX template. This commonality results in the following typical high-level process for creating or modifying custom ADM/ADMX files:

  1. Research the registry keys that control the required settings.
  2. Learn and understand the template file format.
  3. Create and test the custom template file.
  4. Repeat step 3 until everything works right (usually the longest step in the process).
  5. Deploy the templates to configure GPOs after training administrators about the differences between managed and unmanaged settings.
  6. Respond to questions and issues when the mechanism malfunctions, the specific requirements change or people forget the operation process for using the custom template.

There’s not much we can do about step 1 since we need to determine how to configure the required settings, but past that point this is a fairly long and sometimes painful road to implement the required change. As a result, many administrators choose to use scripts or .REG files to simplify the process and avoid having to dig into the ADM/ADMX file format.
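For reference, the script-based alternative usually amounts to a few lines like the sketch below (the key path and value name are hypothetical examples, and something like this would typically run as a startup or logon script):

# Minimal sketch of the script-based alternative: set a registry value directly.
# The key path and value name are hypothetical examples.
$keyPath = 'HKLM:\SOFTWARE\Contoso\ExampleApp'

# Create the key if it does not already exist
if (-not (Test-Path $keyPath)) {
    New-Item -Path $keyPath -Force | Out-Null
}

# Create or overwrite a DWORD value
New-ItemProperty -Path $keyPath -Name 'EnableFeatureX' -Value 1 -PropertyType DWord -Force | Out-Null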

With the introduction of group policy preferences in Windows Server 2008, we now have the registry extension, which can accomplish the same task and much, much more. The base functionality allows us to deploy registry keys just as custom templates or scripts would, but the mechanism includes the following additional benefits:

  • The ability to import keys from the local computer’s registry – once you configure the required settings on your admin computer, you can import them directly into the GPO.
  • The ability to organize and manage keys by collection.
  • The ability to manage all of the key types: strings, DWORD, QWORD, multi-string value, expandable-string value and binary values.
  • The ability to update, replace or delete existing values – the update action only updates the value data, whereas the replace action deletes the existing key/value and creates a new one with the desired value data.

In addition to the registry extension specific benefits, we also get the following benefits that are global to all preferences:

  • The ability to run user settings using the system security context.
  • The ability to remove the item when the setting is no longer applied – this is an important option that allows the preference to behave similarly to a managed policy setting (note that this will not reinstate an original value, just remove the setting).
  • The ability to create a true preference by applying the setting only once, allowing the user to change it afterwards.
  • …and most importantly, the ability to configure conditional expressions for each registry key or collection to further define its target. This capability, known as item-level targeting (or ILT), is a very granular and powerful engine that gives an administrator the tools to direct each setting to the computers or users who need it, based on over 25 categories of properties including hardware levels, OS, networking configuration, group membership and any registry/file/LDAP/WMI query.

Given these benefits, the registry extension becomes the ‘Swiss army knife’ of custom registry modifications to Windows systems and user environments.

So while there is still a need for ADMX templates from Microsoft to manage the OS, and a strong need for templates from other software vendors, when those templates are not available I reach for the registry extension and avoid authoring custom ADM/ADMX templates altogether.

So are custom ADM/ADMX templates a thing of the past? Please share in the comments section. I’m interested in how many folks out there are still creating custom template files.


MokaFive Automation

Posted: April 18, 2011 in Scripting, Virtualization

Automating tasks is a common goal for many projects, especially those targeting the implementation of a new system or the operational improvement of an existing one. The goal is typically a repeatable sequence of steps that can be executed on a schedule with no human intervention, minimizing the chances of forgetting to run the sequence or introducing errors into its execution.

With most software packages, automation relies on components created by the vendor in advance (that hopefully fit the needs of the implementer) or a documented API that is sometimes complex and cryptic and must be learned and understood before it can be used.

MokaFive takes a web services based approach to this need that makes the automation easier to create, reduces the need for a highly skilled developer to do the work, and greatly shortens the learning curve for the task.

RESTful Web Services

MokaFive uses RESTful web services not only for customization of its enterprise product but also for the vendor-supplied administration console. The large majority of actions taken within the administration console use REST calls that can be examined and then modified to execute the calls needed by the automation sequence.

REST (Representational State Transfer) is an architectural style used for web pages and services that is characterized by a lack of state storage on the server. As a result, every client request must contain all the information needed to process the request. This facet makes examining REST calls and recreating them fairly straightforward.

There’s a lot of information about REST on the web and a good Wikipedia entry that can serve as a decent starting point for research: http://en.wikipedia.org/wiki/Representational_State_Transfer

Example

The rest of this post will go through an example of how to automate a MokaFive task using REST. The example is a script that creates IP ranges within MokaFive to direct clients to their nearest image store for image files. The script uses Active Directory sites and subnets as the authoritative data for the IP ranges.

The process involves the following high level steps:

  1. Obtain the necessary tools and software to work with REST
  2. Create the necessary REST requests and save them
  3. Create a script to collect sites and services data from AD
  4. Put the data and REST requests together to configure the IP ranges.

Note: as with most scripted solutions, there are multiple ways to solve this problem; this is the method I selected and built.

Required tools

My example uses a client computer running Windows 7 with Firefox as the browser used to access the MokaFive administrator console. The code is written in PowerShell and takes advantage of the PowerShell 2.0 capabilities that are included with Windows 7.

Note: I highly recommend using the PowerShell ISE that’s included with Windows 7 if you go the PowerShell route as it makes the development experience much easier and quicker.

In addition to the core OS and the Firefox browser, the following tools were used:

  1. Firebug – this Firefox extension allows detailed examination of the interaction between the browser and web server, including REST calls. The extension can be downloaded here.
  2. rest-client – a Java-based application that can be used to configure REST requests and save them for future use. There are many REST clients out there; this one was selected because it has a version with a full interface, which is useful for creating and testing requests, and a command-line version, which is very easy to include in scripts. You can find this tool here.

Creating REST requests

Once the tools are installed, the next step is to open Firefox and log into the MokaFive console in order to perform the actions that need to be scripted. For the subnet exercise, I created a new IP range using the UI, which performs the three actions I need to capture for the script: getting the list of IP ranges, getting the list of image stores (performed to populate the image store pull-down) and creating the IP range. I also edited an IP range so that the script could update an existing range as well.

Now press F12 to open the Firebug window (by default below the browsing window), make sure the Console tab is selected, and scroll through all the actions that Firebug observed. Finding the specific actions you need is easier if you have Firebug open when you perform them, but it is not too hard to do later since the URL path is a pretty good indicator of the action.

The next step is to reproduce each action with the REST client and save it for automated execution. Firebug allows you to right-click an action URL and copy the location. After running the rest-client UI, you can paste the location into the URL field. Next, select the ‘Auth’ tab and enter the login information – in my case, Basic authentication with username and password. This example is done over HTTP since the code will run on the server itself. For production environments, especially when running across a network, you will probably also want to configure SSL using the SSL tab.

That’s all that’s needed for a GET request. You can test the request by running it and making sure you get the expected results and then save the request. You can also use a more advanced authentication scheme by leveraging API calls directly to create an authentication cookie, but since one of my primary goals is simplicity and the script code will all reside on the MokaFive server, I didn’t go that route.
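If you want to test the same GET outside of rest-client, a rough PowerShell 2.0 equivalent using System.Net.WebClient looks like the sketch below. The server name and API path are hypothetical placeholders, not documented MokaFive endpoints.

# Rough PowerShell 2.0 sketch of a GET request with Basic authentication.
# The server name and API path are hypothetical placeholders.
$url = "http://m5server/api/ipranges"

$client = New-Object System.Net.WebClient
$client.Credentials = New-Object System.Net.NetworkCredential("admin", "password")

# Every request is self-contained: URL plus credentials, no server-side session state
$responseXml = $client.DownloadString($url)
$responseXml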

For the POST/PUT requests, which include adding an IP range or editing an existing IP range, modifications to the ‘Body’ tab are required. The first modification is to set the content-type and charset to the correct setting (which will match the setting viewed in Firebug under the action’s Headers\Request Headers section). The required content-type is ‘application/xml; charset=UTF-8’, which can be configured in rest-client’s ‘Body’ tab by clicking the leftmost icon (the one with the pencil on it) and selecting the correct value.

The second modification needed is the XML containing the data to be sent to the server. This XML can be found in the Firebug action under the ‘Put’ tab in the source section. I typically cut and paste from there into Notepad to remove any formatting and then into the rest-client ‘Body’ tab. You can also type directly into the field to avoid any hidden characters coming from Firefox/Firebug.

When going through the POST request for a new IP range and a PUT request for an update to an existing IP range, I found that the requests are almost identical except for the method and the URL. As a result, I only save and use a single request, in my case the POST version, and modify those fields on the fly as needed.
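To illustrate what such a request looks like outside of rest-client, here is a rough WebClient sketch of a POST with an XML body and the content-type above. The endpoint and XML payload are hypothetical placeholders, not the real MokaFive schema; in practice the body would be the XML captured from Firebug.

# Rough sketch of a POST/PUT with an XML body and the content-type captured in Firebug.
# The endpoint and payload are hypothetical placeholders.
$url  = "http://m5server/api/ipranges"
$body = "<iprange><name>Example</name></iprange>"

$client = New-Object System.Net.WebClient
$client.Credentials = New-Object System.Net.NetworkCredential("admin", "password")
$client.Headers.Add("Content-Type", "application/xml; charset=UTF-8")

# Switch "POST" to "PUT" when updating an existing range instead of creating one
$result = $client.UploadString($url, "POST", $body)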

Once the requests are complete and tested, save the request files to be used by the automation script.

Collecting Active Directory site data

This is a task that PowerShell is able to handle easily. The following code sample returns and processes the required data:

# Connect to the current AD forest and walk its sites and subnets
$myForest = [System.DirectoryServices.ActiveDirectory.Forest]::GetCurrentForest()
Foreach ($site in $myForest.Sites)
 {
 Foreach ($subnet in $site.Subnets)
  {
  # Each subnet is a CIDR string, e.g. 10.15.20.0/24
  Write-Output ("{0}`t{1}" -f $site.Name, $subnet.Name)
  }
 }

As the code demonstrates, the sites are contained in a collection within the forest object and subnets are a collection within each site object. The subnet itself is a string with a CIDR representation of the subnet – for example, 10.15.20.0/24.

One interesting challenge that will be discussed later is created by the fact that while AD adheres to the CIDR standard, MokaFive allows the network address part of the CIDR to be a host address – so (AD) 10.15.20.0/24 and (MokaFive) 10.15.20.1/24 can refer to the same network. Luckily, MokaFive stores the network address in a different field which I will use instead.

Putting it all together

Taking all of the tools and information presented above and using it to construct the required automated script is not very complicated but a little time consuming. While I don’t plan to include my full code here, I will go through each section of the script to demonstrate the structure and highlight potential issues.

1. Collect initial data

In addition to setting global variables and opening the connection to the AD forest, this section collects the subnet and image store data from MokaFive. The subnet and image store data will be used later in the script for quite a few purposes so I decided to collect it at the start.

This task highlights two interesting aspects of rest-client and the XML structure: first, the CLI version of rest-client allows you to specify the target directory but not the response file name. The response file name is the same as the request file name with an RCS extension (the request file uses an RCQ extension). A simple rename (or copy) is all that is needed to treat the response as an XML file.

The second issue is that the response file XML is not very useful as all the tags contain data related to the rest transaction rather than the needed data. The relevant data is all contained in a single tag called ‘body’. In order to process the ‘body’ data as XML, I extract it and create a new XML file containing only the contents of the ‘body’ tag. I could have probably done this in memory without the file, but writing the XML out makes debugging, testing and operational validation much easier. This process is done for both the subnets and image stores.
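As a rough sketch of that extraction step (the file names are hypothetical examples, and the exact response structure depends on the rest-client version in use), the rename and body extraction can look something like this:

# Rename the rest-client response (.rcs) so it can be treated as an XML file.
# File names are hypothetical examples.
Copy-Item "C:\m5\ipranges.rcs" "C:\m5\ipranges-response.xml" -Force

# Load the response and pull out just the contents of the 'body' tag
# (use InnerXml instead of InnerText if the body holds child elements rather than escaped text)
[xml]$response = Get-Content "C:\m5\ipranges-response.xml"
$bodyXml = $response.SelectSingleNode("//body").InnerText

# Write the extracted payload to its own file so later steps can parse it as XML
Set-Content -Path "C:\m5\ipranges.xml" -Value $bodyXml
[xml]$ipRanges = Get-Content "C:\m5\ipranges.xml"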

2. Primary loop – image store

The top level structure of the script loops through each image store in the MokaFive configuration. Since I am expecting that there will be AD sites and subnets that don’t participate in the MokaFive architecture, I decided to process each image store and make sure that any subnet in the site for the image store is configured as an IP range with the correct image store assigned to it.

The script needs a way to match the image stores to AD sites in order to configure subnets correctly. For this specific script, I assume that the image store server name (and therefore the URL property) will start with the site code. There are many ways to accomplish this goal but using a specific image store naming standard is one of the simplest approaches.

Once the site code has been identified, the site object is retrieved and used for the inner loop.
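As a hypothetical illustration of that matching step, reusing the $myForest object from the earlier sample (the URL and the ‘site code before the first dash’ convention are assumptions of this particular script, not anything MokaFive requires):

# Hypothetical illustration: derive the site code from an image store URL,
# assuming a naming standard like http://nyc-imgstore01.example.com/images
$storeUrl = "http://nyc-imgstore01.example.com/images"

# Take the host name and treat everything before the first '-' as the site code
$hostName = ([System.Uri]$storeUrl).Host
$siteCode = ($hostName -split '-')[0]

# Retrieve the matching AD site object for the inner loop
$site = $myForest.Sites | Where-Object { $_.Name -eq $siteCode }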

3. Inner loop – subnets

Processing each subnet of the identified site is the job of the inner loop. The code searches for the AD subnet name in the MokaFive IP range list. Due to the CIDR issue identified earlier, instead of using the CIDR field in the MokaFive subnet XML, the NetworkID field is used, concatenated with the prefix (mask length) portion of the CIDR.

If the subnet is found, the image store data is compared: if it is correct, no action is taken; if it is incorrect, a record update is initiated. If the subnet is not found, a record addition is initiated.
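Sketching that decision logic in PowerShell (the property names, such as NetworkID and imagestore, are assumptions based on the fields described above, and $ipRanges, $adSubnetCidr and $expectedImageStore come from earlier steps):

# Decision logic sketch; property names are assumptions about the MokaFive schema.
$prefix = ($adSubnetCidr -split '/')[1]

# Find a MokaFive range whose NetworkID plus prefix matches the AD subnet
$match = $ipRanges.SelectNodes("//iprange") | Where-Object {
    ("{0}/{1}" -f $_.NetworkID, $prefix) -eq $adSubnetCidr
}

if ($match -eq $null) {
    # No record: create a new IP range (POST)
}
elseif ($match.imagestore -ne $expectedImageStore) {
    # Record exists but points at the wrong image store: update it (PUT)
}
else {
    # Record exists and is correct: nothing to do
}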

4. Subnet record manipulation

Both the subnet update and the subnet creation use the same request file since the differences can easily be changed in code. Prior to running the request, the ‘URL’, ‘method’ and ‘body’ tags are modified to create either a subnet update or a subnet creation. In the case of a subnet creation, the subnet mask (e.g. 255.255.255.0) must be determined from the prefix length at the end of the CIDR and used in the URL.
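PowerShell 2.0 has no built-in conversion from a prefix length to a dotted mask, so a small helper along these lines (a sketch that does the binary math by hand) can fill the gap:

# Convert a CIDR prefix length (e.g. 24) into a dotted-decimal mask (e.g. 255.255.255.0)
function ConvertTo-SubnetMask([int]$prefixLength) {
    # Build a 32-character string of 1s followed by 0s, then read it as four octets
    $bits = ("1" * $prefixLength).PadRight(32, [char]'0')
    $octets = 0..3 | ForEach-Object {
        [Convert]::ToInt32($bits.Substring($_ * 8, 8), 2)
    }
    return $octets -join "."
}

ConvertTo-SubnetMask 24   # returns 255.255.255.0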

5. Clean up

…and that’s all there is. Removing any temporary files and deleting objects is the last section. I do leave a log file and echo some information to the screen throughout the process to simplify auditing and troubleshooting.

Last words

First, if you’re still reading, thanks for putting up with such a long post. If you have any other questions, please post in the comments or contact me directly at therdpfiles@gmail(d0t)com.

 

Local group membership is used to manage access for a variety of reasons. Applications leverage local groups for access to system resources, and protective systems and support staff also require specific privileges that are granted using local groups. Managing the membership of these groups therefore becomes an important goal for meeting business objectives in the areas of security, manageability and availability.

The most typical need that comes up is managing membership of the local Administrators group. This high-privilege group, which in many cases includes the ‘Domain Users’ group, is a potential security problem and needs to be restricted to protect the system. Often the desired membership is limited to the user who ‘owns’ the system along with support personnel, and locked down otherwise to reduce the ability of malicious individuals and code to compromise the system.

My example and discussion will focus on the need to control the local Administrators group but most of the points will apply to other scenarios as well.

Group policy offers several approaches to meeting this goal and of course, they each work well in different scenarios. Let’s dig into the options and when they should be used or avoided.

Restricted Groups

The first mechanism I’m going to cover has been around in Group Policy for many years but is still frequently misunderstood.

The restricted groups configuration node can be found under Computer Configuration\Policies\Windows Settings\Security Settings\Restricted Groups. The component is configured by adding a group (you can either browse or type in a group name) and then configuring the members of the group or the groups this group is a member of.

This mechanism has one very important nuance (important enough to keep someone from getting fired!). If the group membership is controlled (using the top part of the configuration dialog), the existing group membership will be replaced by the configuration. This means that potentially important existing security principals are removed, that maintaining exceptions for specific machines is complex and that using multiple GPOs to configure this mechanism in a cumulative manner isn’t possible.

As a result, controlling group membership directly is rare and typically only used in environments where complete control is required and no further modification to the group’s membership is needed or anticipated.

The lower half of the configuration dialog, or the indirect configuration method, is much more useful in my experience. The behavior of this component is cumulative, so any configuration changes are added to the existing group membership.

Leveraging restricted groups to manage the Administrators group will therefore involve the following steps:

  • Create an AD group to contain privileged accounts that will be added to the local Administrators group
  • Create a GPO for local group management
  • Add the AD group created to the restricted groups interface
  • Add the local administrators group to the AD group configuration within restricted groups using the bottom, or ‘Member of’, section
  • Refresh the policy and confirm the resulting membership (see the quick check below)
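To confirm the result on a target machine after the refresh, the built-in tools are enough (shown here as a quick manual check rather than part of any script):

# Force a policy refresh on the client, then list the local Administrators membership
gpupdate /force
net localgroup Administrators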

Using this approach, a single GPO can contain multiple restricted groups entries and can manage local group membership for a collection of systems. This allows a decent level of basic local group management, but it does leave the feeling that something easier to use, more powerful and more flexible should be available these days. This is where group policy preferences come in…

Local Users and Groups Extension

The introduction of group policy preferences (GPPs) with Windows Server 2008/Vista brought a whole new mechanism for managing local groups (and users). GPPs provide a Local Users and Groups extension that offers a great deal of control and flexibility. Let’s take a look at what is possible:

First, the extension exists under both the user and computer configuration nodes under Preferences\Control Panel Settings\Local Users and Groups with some benefits to the user section that will be discussed below. Note that when using the user configuration section, the extension can be configured to be limited by the permissions of the user by selecting ‘Run in logged-on user’s security context’ on the Common tab.

Once the extension is selected and a new group is added, the administrator can use the interface to rename the group, remove existing users and groups from the membership list and add or remove specific security principals to/from the group’s membership.

In addition to these operations, the extension takes advantage of common (and powerful) GPP features like ‘Apply once’, item-level targeting and policy actions such as create/replace/update/delete (the delete action allows removal of a group or user account).

Another great feature is available when using the user configuration version of the extension, which can automatically manage membership for the ‘current user’ through the GPO, making it easy to add just the logged-on user to a local group.

In my opinion, all local user and group membership administration should be performed using GPPs and the Local Users and Groups extension. The improved interface, granular control and benefit of GPP mechanisms make this the ideal choice for the task.

For more information about GPPs and what they require, check out my previous blog post: https://rdpfiles.com/2009/11/13/group-policy-preferences-aka-gpps-2/.

Many security professionals agree that the signature-based approach used by anti-virus, anti-malware and anti-spyware products, also known as ‘blacklisting’, doesn’t work. Blacklisting products tend to be in a constant state of chasing new types and examples of malware, usually a few steps behind. They are also inherently ill-suited to handle sophisticated malware, polymorphic threats (those that automatically alter their code) and recently exposed or published malware (0-day threats).

To the rescue comes a relatively new category of software product, application whitelisting. Sometimes called application control, this product category introduces a low level driver on client systems which monitors activity on the system and can prevent execution of programs that have not been approved in advance. The approval is a combination of multiple policy components delivered by a management server. Since the approach requires all executed software to be approved, it has no need to predict which software is malicious.

As a result, however, this type of approach has the potential to be much more disruptive, especially in organizations that support a relatively open client platform.

Once you understand the problem and identify a need, it’s time to take a look at potential products that meet the need. There are several vendors that deliver products in this space and they vary quite a bit in their approach, cost and complexity. As a result it helps to first establish your selection criteria. Key factors to consider should include:

  • Flexibility of policy controls – the primary challenge in implementing application whitelisting is the crafting of an approval configuration that meets constantly changing security objectives while minimizing disruption of constantly changing productivity tasks. In order to allow the organization to strike this delicate balance and maintain it over time, the selected solution must have great flexibility in configuring approvals. Typical mechanisms include approvals based on digital signatures, file metadata, file hash, file path, properties of running process, trusted software installers, trusted software directories and even external ratings of known applications.
  • Management and administration tools – as a key security system, application control solutions must complement protection capabilities with enterprise class tools for management and administration. Provided consoles, tools and APIs must be flexible and easy to use to support operators with different privilege levels, easy access to information and configuration controls and easy to use ‘master switches’ in the event of an emergency.
  • Monitoring and auditing – A vital complement to management tools is a robust auditing and monitoring capability. This capability can be delivered either within the solution or as ready integration interfaces into existing monitoring and auditing frameworks. The system must audit and record any administrative changes as well as key events on protected systems and agents. Monitoring compliance and current issues must be possible and delivered in a format that serves the needs of engineering and management staff.
  • Agent tamper protection – most organizations have many processes, including interactive user sessions, that run with administrative privileges on client systems. To prevent malicious code or curious users from disabling or tampering with the protective agent, the solution architecture must include sophisticated tamper protection that makes such changes difficult.
  • Operational modes – in my experience, all of the products in this space offer a ‘monitor only’ mode that allows the administrator to monitor the environment, determine how applications are used throughout the environment and assess what the impact of policy enforcement would be. In addition, almost all of the products I’ve reviewed also offer an enforcement mode that ensures that the client operates within the policy based framework. Some products differentiate themselves by offering more than one type of enforcement mode that allows the user to interact with the system (for example, prompting the user for action in some cases).
  • Supported platforms – in today’s IT environment, many organizations support multiple client operating systems, and while they may not represent the same risk, most information security strategies strive to achieve parity in the security of supported platforms. Identifying a solution that addresses all supported operating systems should be a key objective.
  • Vendor support and viability – since this product category is very young, the amount of available public knowledge on the technology and leading products is minimal and any implementation will rely heavily on the vendor. It is therefore important to make sure that the vendor will continue to be around and provide the required level of support.

When looking for products to include in the selection process, a good place to start is this set of introductory articles from InfoWorld magazine: http://www.infoworld.com/d/security-central/test-center-review-whitelisting-security-offers-salvation-835. The article and associated reviews are somewhat out of date so the details about each product should be validated using additional sources but the article provides a good starting point to identify the key vendors in this space. For a recent effort, I focused on the following three products:

  • Microsoft AppLocker – For organizations that are either on or migrating to current Microsoft operating systems (Windows Server 2008 R2 and Windows 7), this option is attractive primarily because it is free. AppLocker is a component of these operating systems and is easy to configure using group policies. Another strength is a powerful approval configuration mechanism that leverages digital signatures and file metadata to provide granular control for approving applications that are properly signed and contain metadata.
    The product’s primary weakness is the lack of an administration console and poor visibility into compliance and auditing – the data is there in the event logs of each client system, but there is no easy native way to collect and present it (see the PowerShell sketch after this list for one way to pull those events).
  • Bit9 Parity Suite – The solution from a leading specialty vendor in this space provides flexible policy controls and an excellent searchable knowledge base called FileAdvisor, which offers sophisticated ratings for millions of applications based on several factors to help determine whether an application is malicious or safe.
  • McAfee Application Control – This product is the result of an acquisition of SolidCore Systems in 2009. The product approaches the solution from the perspective of creating an initial trusted configuration of a system and closely managing any changes from that point forward. This approach is a great fit for servers and for environments that allow a restricted number of applications on client systems.
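Following up on the AppLocker auditing note above, here is a rough sketch of pulling AppLocker events from a client with PowerShell. It assumes the standard AppLocker operational log name and the commonly used event IDs (8003 for audited and 8004 for blocked executions); verify both against your build before relying on them.

# Pull recent AppLocker EXE/DLL events from the local operational log.
# Assumed event IDs: 8003 = would have been blocked (audit mode), 8004 = blocked (enforce mode).
Get-WinEvent -LogName "Microsoft-Windows-AppLocker/EXE and DLL" -MaxEvents 200 |
    Where-Object { $_.Id -eq 8003 -or $_.Id -eq 8004 } |
    Select-Object TimeCreated, Id, Message |
    Format-Table -AutoSize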

After selecting the product that best fits your needs, the project can start. I recommend focusing on the following key high level tasks:

  • Platform design – this system will be collecting information about any application that is written and executed on all managed systems. Since the volume of data can be significant, the system must be designed and architected correctly. In most cases only 1 or 2 servers are required but the capacity, placement and component configuration should be discussed with the vendor, designed on paper and then tested in a lab environment.
  • Approval configuration – all of the solutions in this space have several mechanisms to approve applications. Work with the platform in a lab/test configuration in order to understand the behavior of each mechanism and design the configuration of the platform to meet security objectives and business requirements. I recommend focusing on the ability to identify and correctly configure applications and processes that will automatically create or deploy other approved applications – compilers, software distribution systems, updaters from Adobe/Google/Mozilla, etc.
  • Process design – due to the potentially disruptive nature of an application control system, the administration and support processes around the platform must be designed carefully and thoroughly to ensure a successful project. The processes must involve teams from support departments, client engineering, security and IT management. The interaction between users and the systems must be clearly understood and incorporated into proposed processes. Input from various business departments and users should be solicited to make sure that different needs, job responsibilities and user environments are identified and addressed by the solution.
  • Client deployment and monitoring – the first step in testing the proposed design and configuration in the ‘field’ is to deploy the client agent to managed systems in a ‘monitor only’ mode and take the time to collect and analyze the resulting data while stabilizing the infrastructure. Leverage the information collected in this phase to validate assumptions and adjust plans as needed.
  • Enforcement testing and pilot – planning is vital but it can only take you so far in determining the best way to approach a deployment and its impact on your user community. The rest must come from careful testing and a staged deployment. Carefully select early-adopter users to ensure that there is a good distribution of test scenarios. Over-allocate resources for the testing and pilot phases to ensure that the phases are concluded quickly and that participating users have a positive experience with the system and the support processes. And finally, when needed, make changes and corrections to designs, plans, configurations and processes to incorporate lessons learned from early adopters. Assume that this feedback cycle will continue throughout the deployment of the system and its lifecycle.

Hopefully you can use this information to help kick start your whitelisting project (I know it would have been useful to me at the start of some recent projects) and please comment if you have any questions or anything to add.

VDI the easy way

Posted: April 7, 2011 in Virtualization

It seems like most organizations these days are taking a look at Virtual Desktop Infrastructure as a potential technology to implement in order to address several IT challenges: the personal device preference of users, the need to standardize on a client platform and configuration, reducing hardware cost at the desktop and others.

After working with many organizations to explore solutions from current leading vendors such as VMWare, Microsoft and Citrix, my experience has been that most of these explorations and proofs of concept result in organizations shying away from deploying VDI at all. The solutions are complex, expensive (software and implementation resources) and are typically quite a challenge to manage operationally. For most organizations, the core requirement of deploying a server farm that can host a concurrent session for each client is already an insurmountable obstacle.

As a result, I was extremely happy to recently discover a small vendor that provides a much simpler alternative with some very attractive features. The vendor is called MokaFive and their enterprise solution uses existing client-side hypervisors (currently VMWare and VirtualBox are supported) to run a managed virtual image on the client system. The computing resources required are all client-side, so no server farm is required – in fact, the only required servers are a very lightweight policy management server and an infrastructure to provide the image files to clients.

MokaFive includes a client agent that interfaces with the hypervisor and manages the configured policies, which include some key security features such as a timer to control how long an image can be used offline before it must check in with the policy server, preventing an image from being copied to another host, requiring AV scans on the host and many more. The client agent supports hypervisors on multiple host operating systems including Windows, OS X, Linux and bare metal, which makes the solution very attractive in a heterogeneous environment (i.e. everywhere).

The system also includes a pretty nifty client-side architecture that isolates the corporate-delivered components from user-added components and gives the user the controls to revert to the ‘vanilla’ corporate image if their own changes have created problems they can’t resolve.

From an IT Pro perspective, installing the solution only takes a few hours and customizing the image is easy as well. Policy controls are very flexible and the management console is well suited to getting information quickly.

Sorry if the post sounds like a sales pitch, but if a single-vendor solution goes above and beyond in solving a common problem, that’s worth taking note of, and I would recommend that anyone seriously considering VDI in 2011 take a look at solutions that run on the client system, and specifically at MokaFive.