Exchange 2010 supports automatic provisioning for new mailboxes. Unfortunately, this mechanism does not extend to personal archives: as mailboxes are moved to Exchange 2010, they must be enabled for archives manually, with the operator managing the size of each database and dividing the load accordingly.

The script below was created to automate this function and is intended to run automatically using a scheduled task on each CAS server.
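Since a scheduled task runs outside the Exchange Management Shell, the script itself has to load the Exchange 2010 snap-in before calling any Exchange cmdlets. A minimal sketch of what the top of such a script might look like (the snap-in name is the standard Exchange 2010 one; logging and error handling are left out):

# Load the Exchange 2010 management snap-in when the script runs outside EMS
if (-not (Get-PSSnapin -Name Microsoft.Exchange.Management.PowerShell.E2010 -ErrorAction SilentlyContinue)) {
    Add-PSSnapin Microsoft.Exchange.Management.PowerShell.E2010
}

The task itself can then simply call powershell.exe with the -File parameter pointing at the script path.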

Typically archive databases are flagged so that they do not participate in automatic provisioning for new mailboxes. The script selects the smallest archive database from the databases that are excluded from provisioning (identified by the -IsExcludedFromProvisioning property). Users are then enabled for archives using the target database.

The script also assigns one of two custom archive policies, a 180-day policy and a 360-day policy, based on membership in a group that defines the users who get 360 days of retention in their mailbox. The script uses a custom attribute to overcome the issue of identifying mailboxes that are not members of the 360-day retention group.

Let me explain the issue:

PowerShell scripts often handle the ‘reverse group membership check’ issue by first assigning the common value (in this case 180-day retention) to everyone and then assigning the special value (in this case 360-day retention) to the members of a group.

The main weakness of this approach, especially for something like a retention policy, is that any error in the second part of the script (say, if someone renamed the group) would result in everyone getting the more restrictive retention policy and more archived items, which is potentially disruptive and difficult to reverse.

My solution is to assign everyone in the org the less restrictive setting once, using a custom attribute field in AD. I then adjust that custom attribute value based on the group membership and use the custom attribute value to configure the retention policy. This means that if the group is renamed or another error occurs, new members of the group might get the wrong policy, but existing members would not be impacted.

Note that this can be accomplished with less effort if you deploy the Quest PowerGUI tools, since the Get-QADUser cmdlet supports a -NotMemberOf parameter. I didn’t use it because I was trying to create a solution that doesn’t require additional software (in other words, come on Microsoft and implement this function!)
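For completeness, a rough sketch of what the Quest alternative might look like (this assumes the group and policy names used in the script below, and that the Quest ActiveRoles snap-in is loaded):

# Quest-based alternative (sketch only) - set the common policy on everyone outside the 360-day group
Get-QADUser -NotMemberOf 'Exchange Archive Users - 360 day' |
    ForEach-Object { Set-Mailbox -Identity $_.UserPrincipalName -RetentionPolicy "180 Day Default" -ErrorAction SilentlyContinue }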

 

In addition, the script uses custom attribute 13 to identify mailboxes that shouldn’t use a personal archive. This is intended for service accounts and special-purpose mailboxes.
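For example, excluding a service account from archive enablement might look like this (the mailbox name and marker text are placeholders; the script only checks that the attribute is not blank):

# Any non-empty value in CustomAttribute13 tells the maintenance script to skip this mailbox
Set-Mailbox -Identity "svc-backup" -CustomAttribute13 "NoArchive"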

#
#
# NAME: Maintenance.ps1
#
# AUTHOR: Guy Yardeni
#
# COMMENT: Script to run various maintenance tasks for Exchange 2010
#
#        Enable archives for mailboxes
#        Configure archive policy based on AD group
#

# Script to enable archive for any users who don’t already have one
# using the smallest archive database
#         
# Any text in Custom Attribute 13 will cause the script to skip the mailbox
#
#

# Return archive database with smallest size
$TargetDB = Get-MailboxDatabase -status | where {($_.ExchangeVersion.ExchangeBuild.Major -eq 14) -and ($_.IsExcludedFromProvisioning -eq $true)} | sort-object "DatabaseSize" | select-object -first 1

# Enable archives for relevant mailboxes on the target database
$results = Get-Mailbox | where {($_.ExchangeVersion.ExchangeBuild.Major -eq 14) -and ($_.ArchiveDatabase -eq $null) -and ($_.CustomAttribute13 -eq "")} | enable-mailbox -archive -archivedatabase $TargetDB.Name -retentionpolicy "360 Day Default" | measure-object
 
#Write output for testing
Write-Host $results.count "mailbox(es) were enabled for archiving on database" $TargetDB.Name

# Script to set correct archiving policy
Get-Mailbox | where {($_.CustomAttribute12 -eq "")} | set-mailbox -CustomAttribute12 "180"
Get-DistributionGroupMember "Exchange Archive Users - 360 day" | Get-Mailbox | set-mailbox -CustomAttribute12 "360"
Get-Mailbox | where {($_.ExchangeVersion.ExchangeBuild.Major -eq 14) -and ($_.CustomAttribute12 -eq "180")} | set-mailbox -retentionpolicy "180 Day Default"
Get-Mailbox | where {($_.ExchangeVersion.ExchangeBuild.Major -eq 14) -and ($_.CustomAttribute12 -eq "360")} | set-mailbox -retentionpolicy "360 Day Default"
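A quick spot check after a run can confirm that the policies landed where expected (a simple sketch that just groups mailboxes by their current retention policy):

# Summarize how many mailboxes carry each retention policy
Get-Mailbox | Group-Object -Property RetentionPolicy | Select-Object Name, Count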

As always, comments about the code and approach are welcome!


A quick post to make a common task a little easier.

When managing mailbox move requests, it is often useful to view certain statistics to ensure that progress and migration pace are as expected and that each server is playing its expected role.

Exchange 2010 SP1 makes that possible using the Get-MoveRequestStatistics PowerShell cmdlet, but some manipulation and formatting make a big difference in monitoring the results.

Try this command for a friendly view of useful information about each open move request (as well as some cool tricks you can use with the format-table command):

Get-MoveRequest | Get-MoveRequestStatistics | Sort-Object CompletionTimeStamp | ft DisplayName, @{Expression={$_.BadItemsEncountered};Label="Errors"}, @{Expression={$_.PercentComplete};Label="Percent"}, @{Expression={$_.TotalMailboxSize.ToString().Split("(")[0]};Label="Size"}, @{Expression={$_.TotalInProgressDuration};Label="Time"}, @{Expression={(($_.BytesTransferred/$_.TotalInProgressDuration.TotalMinutes)*60).ToString().Split("(")[0]};Label="Pace/hr"}, @{Expression={$_.MRSServerName.ToString().Split(".")[0]};Label="CAS"}, @{Expression={$_.SourceDatabase.ToString().Split("\")[0]};Label="SourceServer"}, SourceDatabase, Status, CompletionTimestamp -auto

Redirecting the output to a file on a scheduled basis also makes troubleshooting after hours mailbox moves much easier.
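For example, something along these lines (the log path and column selection are assumptions) can be dropped into a scheduled task to capture a snapshot of the open moves every hour:

# Capture a time-stamped snapshot of open move requests for later review
Get-MoveRequest | Get-MoveRequestStatistics | Sort-Object CompletionTimeStamp |
    ft DisplayName, PercentComplete, BadItemsEncountered, Status -auto |
    Out-File "C:\Logs\MoveRequests_$(Get-Date -Format 'yyyyMMdd-HHmm').txt"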

Part III: Redirection (virtual directory and SSL)

When users want to access Outlook Web App (OWA), they are unlikely to type the correct URL: https://webmail.domain.com/owa or /exchange. To accommodate the URL combinations users typically enter, usually HTTP instead of HTTPS and without the virtual directory name, we can employ redirection.

To this end, there are two types of redirection:

The first redirects HTTP traffic to HTTPS and is very easy to accomplish in a TMG environment. If you refer to the previous post, you will see a configuration step for the web listener that accomplishes this function. In the ‘Connections’ tab for the listener, HTTP connections are allowed and redirected to HTTPS. Nice and easy.

The second form of redirection helps redirect URLs that reference the root of the web site, i.e. https://webmail.domain.com, to the correct virtual directory. In an Exchange 2007/2010 environment, the destination should be https://webmail.domain.com/owa. If coexistence with legacy versions (Exchange 2003) is needed, the destination will be https://webmail.domain.com/exchange.

In either case, we will use the ‘HTTP redirection’ feature of IIS 7.5 to configure the required setting using the following process on each CAS server:

  1. Open IIS Manager and navigate to the Default Web Site
  2. Open the HTTP Redirect feature and configure the following options:
  • Redirect all requests to this destination: https://webmail.domain.com/owa
  • Redirect all requests to exact destination: unchecked
  • Only redirect requests to content in this directory: checked
  • Status code: Found (302)

While those are the required settings for the web site, setting these configuration options will automatically apply the same options to any subfolders and virtual directories that do not already have redirection configured. Since we only want the redirection on the web site itself, we need to remove it from all subfolders.

This process results in a problem: there are three directories that do need to be redirected and those are ‘/Exchange’, ‘/Exchweb’ and ‘/Public’. These three virtual directories must be redirected to ‘/owa’ in order for the OWA service to function correctly.

The problem is evident right after turning off redirection on the ‘/owa’ virtual directory using the UI – this change also disables redirection on the three folders listed above. If you re-enable redirection on any of the three, it will also re-enable redirection on ‘/owa’. This confusing and frustrating loop creates a series of unusable configurations that aren’t simple to correct.

The problem occurs because once the settings are configured using the UI, they are stored in the web.config file for the virtual directory and the four directories discussed all share a single web.config file. If the settings are configured using appcmd.exe the information is stored elsewhere (presumably in the metabase) and the problem is resolved.

The process to correct the issue is as follows:

  1. Remove the HttpRedirect section from the web.config file for /owa.
  2. Use the following commands to configure the correct settings for all folders:

cd %windir%\system32\inetsrv
appcmd set config "Default Web Site/Exchange" /section:httpredirect /enabled:true -commit:apphost
appcmd set config "Default Web Site/Exchweb" /section:httpredirect /enabled:true -commit:apphost
appcmd set config "Default Web Site/Public" /section:httpredirect /enabled:true -commit:apphost

appcmd set config "Default Web Site/owa" /section:httpredirect /enabled:false -commit:apphost
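To confirm the end state, the same tool can read the effective setting back for each virtual directory (an optional check rather than a required step):

appcmd list config "Default Web Site/owa" /section:httpredirect
appcmd list config "Default Web Site/Exchange" /section:httpredirect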

Finally, while you are configuring virtual directories on each CAS server, confirm that the authentication settings are set to Basic Authentication for all of the virtual directories – OWA, ECP, ActiveSync, and Outlook Anywhere (configured on the properties of the CAS server). I recommend viewing and making changes to these settings through the Exchange Management Console or Shell.

That should be all that’s needed. The result of following the instructions in all three posts of this series is an Exchange 2010 system published with TMG using a single public IP address and a seamless user experience.

A big thanks and credit for various aspects of this post go to a couple of colleagues at Convergent Computing for helping discover, test and document this information. Thank you Yasu SabaLin and Aman Ayaz.

Hope this is helpful and, as always, please post any comments or questions.

Part II: Creating the publishing rules

If you’ve read part I of this series, you’ve hopefully got your TMG standalone array up and ready to publish Exchange 2010 services. Before creating the publishing rules themselves, we need to address authentication.

Since our TMG servers are members of a workgroup in the DMZ (same applies if using an AD domain in the DMZ with no trusts to the internal AD forest), we will need to configure a method of authentication. I typically prefer the use of LDAPS since it is supported by default on AD domain controllers and is very simple to configure.

Setting up LDAPS Infrastructure

LDAPS uses a secure lookup to validate users against the AD domain. SSL is used to secure either LDAP (port 636 instead of 389) or global catalog (port 3269 instead of 3268) queries between TMG and selected DCs on the internal network.

The setup steps are as follows:

  1. Configure the selected DCs with a certificate – any DC participating in LDAPS must have a server authentication certificate. This certificate is deployed automatically by an AD-based enterprise CA.
  2. Export the trusted root certificate from one of the DCs and copy the file to each TMG server.
  3. Import the certificate into the ‘Trusted Root Certification Authorities’ folder on each TMG server.
  4. Ensure that firewalls allow traffic from the TMG servers to the selected DCs over ports 636 and 3269 (a quick connectivity check is sketched after the tips below).

     A few tips to assist with this task:

  • If you just deployed your CA, run the command ‘certutil -pulse’ on a DC to issue the certificate.
  • Reboot the domain controller after the certificate is issued to activate the listeners on ports 636 (secure LDAP) and 3269 (secure global catalog).
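Here is a minimal sketch of the connectivity check mentioned above, runnable from each TMG server (the DC names are placeholders for your selected domain controllers):

# Check that the secure LDAP and secure GC ports are reachable from this TMG server
$dcs = "dc01.internal.local", "dc02.internal.local"   # placeholder DC names
foreach ($dc in $dcs) {
    foreach ($port in 636, 3269) {
        $tcp = New-Object System.Net.Sockets.TcpClient
        try { $tcp.Connect($dc, $port); Write-Host "$dc port $port is reachable" }
        catch { Write-Host "$dc port $port is blocked or unreachable" }
        finally { $tcp.Close() }
    }
}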
Configure LDAPS authentication

Within the TMG console, follow these steps to configure LDAPS:

  • Right click on the ‘Web Access Policy’ node.
  • Select ‘Configure (Related)’ and then ‘RADIUS Server Settings’
  • Select the ‘LDAP Servers’ tab and click the ‘Add’ button to create an LDAP set
  • Enter a name for the LDAP set (e.g. Internal DCs), the domain name and credentials used for authentication

NOTE: this account needs minimal rights in AD and a non-expiring password

  • Make sure that ‘Use Global Catalog (GC)’ and ‘Connect LDAP servers over secure connection’ are checked.
  • Add the selected internal DCs that were used in the preparation steps.
  • Define the mask for this LDAP set using ‘domain\*’ for the expression and the created LDAP set and click OK
  • Define another mask for this LDAP set using ‘*@domain.com’ for the expression and the created LDAP set and click OK
  • Repeat the above steps for any additional domains as needed
Publish Exchange Services

Publishing the services is fairly simple by following the wizard. Here are the steps:

First, to create the listener that will be used for all of the rules:

  • Select the ‘Firewall Policy’ folder and the Toolbox tab on the right.
  • Navigate to ‘Network Objects’ and right click ‘Web Listener’ to create a new listener with the following info:
  1. Name: Exchange 2010 Listener
  2. Require SSL secured connections with clients
  3. Web Listener IP Addresses: All Networks (and Local Host)
  4. Use a single certificate for this Web Listener: select the certificate for webmail.domain.com
  5. Authentication Settings: HTML Form Authentication and LDAP (Active Directory) client authentication
  6. Enable SSO for Web sites published with this web listener with the name: domain.com
  • Open the properties of the recently created listener, select the ‘Connections’ tab and configure the following:
  1. Enable HTTP connections on port 80
  2. Redirect all traffic from HTTP to HTTPS

Next, to publish OWA:

  • Select the Firewall Policy section and, from the ‘Tasks’ tab on the right, click ‘Publish Exchange Web Client Access’
  • Name the rule ‘Exchange 2010 OWA’
  • Select the ‘Exchange Server 2010’ version option and check ‘Outlook Web Access’
  • Accept the default selection of ‘Publish a single web site or load balancer’
  • Accept the default selection of ‘Use SSL to connect to the published server…’
  • On the ‘Internal Publishing Details’ page, enter: webmail.domain.com
  • Check ‘Use a computer name or IP address to connect to the published server’ and enter the CAS array FQDN
  • Enter the ‘Public Name’ of ‘webmail.domain.com’
  • On the ‘Select Web Listener’ page, select the ‘Exchange 2010 Listener’
  • Accept the default authentication delegation using ‘Basic authentication’
  • Accept the default of ‘All Authenticated Users’ and complete the wizard

Edit the recently created rule, select the ‘Paths’ tab and add a new path as follows:

  • Folder: ‘/’
  • External path: same as published folder

Next, follow the same steps to create the ActiveSync publishing rule. Change only the name and select the ‘Exchange ActiveSync’ option on the ‘Select Services’ page. Adding the root folder path isn’t necessary for this rule.

Then, following the same process, create the Outlook Anywhere rule with an appropriate name and the ‘Outlook Anywhere (RPC/HTTPS)’ services option. Adding the root folder path isn’t necessary for this rule.

Finally, we need to create a rule for Autodiscover, since that service uses a separate URL and some different options.

  • Right click on the ‘Exchange 2010 Outlook Anywhere’ Rule and select ‘Copy’
  • Right click on the next rule down and select ‘Paste’
  • Edit the new rule and make the following changes:
  1. Name: Exchange 2010 Autodiscover
  2. Public Name: autodiscover.domain.com
  3. Users: All Users (remove Authenticated Users)
  4. Authentication delegation: No delegation, but client may authenticate directly

All the services are now published and the only thing remaining is to improve the user experience by configuring redirection to allow any URL to be entered in the browser. The third and final part of this post addresses redirection.

Part I: Introduction and Creating the array

During a recent Exchange 2010 migration project, I found that while there are many resources online to assist with publishing Exchange 2010 using TMG, none covered my scenario very well and most were missing details that were needed to make the solution work as I desired and intended.

Since I believe that this specific scenario is common, I will outline the specific details of the installation in a series of posts covering the whole process as well as a couple of sticking points that require a few extra tricks to address.

Please note that these guides are not intended as an exhaustive step-by-step manual for this process but rather as a set of tips, tricks and guidance for anyone who is already familiar with Exchange 2010 and TMG and the overall publishing process.

The environment

Exchange 2010 with Service Pack 1 is deployed using several mailbox servers in a single DAG hosting all mailboxes. The HT and CAS roles are hosted on two shared servers. The servers are load balanced across all ports. A CAS array was created (along with a DNS record) pointing to the load-balancing VIP.
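For reference, creating the CAS array object looks roughly like this (the array FQDN and AD site name are placeholders for your environment):

# Create the CAS array and associate it with the AD site served by the load-balanced CAS servers
New-ClientAccessArray -Name "CAS Array" -Fqdn "casarray.domain.com" -Site "Default-First-Site-Name"

The DNS record for the array FQDN should resolve to the load-balancing VIP.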

Note: This post does not include a detailed discussion of load balancing. The information provided should apply equally well to WNLB and a hardware load balancer.

Two TMG 2010 Enterprise servers are deployed in a DMZ with a single interface to be used as reverse proxy servers only. The TMG servers are load balanced across ports 80 and 443.

The servers are all protected using a SAN certificate that includes the intended OWA/OA/EAS URL (webmail.domain.com), the Autodiscover URL (autodiscover.domain.com) and the FQDNs of the CAS/HT servers.

Deploying TMG – Installing the array

Extending the high availability options provided by Exchange 2010 to TMG is a key part of any implementation of the platform. There is no point in eliminating single points of failure in the Exchange system if one of the primary access methods (OWA, ActiveSync, etc.) remains a single point of failure.

TMG provides three options for high availability:

  • Manual – this option includes multiple TMG Standard edition servers that are load balanced but not aware of each other. Rules are synchronized manually across servers.
  • Partially automated – by leveraging the Enterprise edition of TMG, the servers can share an array configuration database that is stored on one server and replicated to the other. This option is known as a standalone array. A manual process is required to failover the configuration database to the other servers.
  • Fully automated – An enterprise array can be created by offloading the configuration database to another system (or ideally, multiple redundant systems). This configuration is a fully automated cluster.

I typically prefer the partially automated solution as it doesn’t require any additional systems but still avoids the potential for user error and misconfiguration. Since the configuration information is loaded into memory on each TMG server, access to the database itself is only needed when making configuration changes, so a manual failover is an acceptable risk.

Preparation steps:
  • Confirm that each node can resolve the FQDN of the other node (by using DNS or hosts file)
  • Confirm that the user account you are logged on as is the same on both nodes (same name and password)
  • Confirm that both TMG servers are joined to the same workgroup
Certificate Configuration

If the servers are part of a DMZ domain, you can use an enterprise CA to configure certificates. Since in my experience a DMZ CA is rare, this post uses self-signed certificates to authenticate the TMG servers to each other.

Generate the certificates using the makecert utility:

makecert -pe -n "CN=TMGArrayRootCA" -ss my -sr LocalMachine -a sha1 -sky signature -r "TMG Array Root CA"

makecert -pe -n "CN=TMG01.dmz.com" -ss my -sr LocalMachine -a sha1 -sky exchange -eku 1.3.6.1.5.5.7.3.1 -in "TMGArrayRootCA" -is MY -ir LocalMachine -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12 "TMG01.cer"

makecert -pe -n "CN=TMG02.dmz.com" -ss my -sr LocalMachine -a sha1 -sky exchange -eku 1.3.6.1.5.5.7.3.1 -in "TMGArrayRootCA" -is MY -ir LocalMachine -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12 "TMG02.cer"

Once the certificates are generated, use the Certificates MMC to export the certificates, copy them to each TMG server and install them as follows:

  • Import the TMGArrayRootCA certificate into the trusted root certificate folder
  • Import TMG01 and TMG02 into the personal certificate folder

Finally, using the TMG console, browse to System and select the ‘Install Server Certificate’ task in the action bar on the right.

To verify the certificate installation, use the Certificates MMC focused on a service account and select ISASTGCTRL as the service. The personal folder should contain the certificate of the active array manager.

Array Creation
  • On node 2 (TMG02.dmz.com), open the TMG console and select ‘Join Array’ from the tasks on the server node
  • Select the standalone array option, point to the IP of node 1 and enter the administrator credentials.
  • Confirm that the root CA is already installed on the server and complete the installation. This process will create the array and join the second node to it.
  • Restart all servers (the array manager and the managed array members)
  • After each system comes back up, open the TMG console and open the properties of the server node. In the intra-array credentials tab select the workgroup option and enter the required credentials

This completes the TMG installation and prepares the environment for publishing Exchange services, the topic of part 2 of this post.

Many considerations go into designing a scalable robust application infrastructure. Those considerations vary quite a bit from application to application and organization to organization. In fact, agreeing on the goals and constraints of the proposed system is typically the most important task in ensuring an efficient, relevant architecture.

When considering a MokaFive deployment, the following goals are typical and will be used to drive the example design covered in this post:

  • Minimize WAN traffic
  • Minimize user wait times for initial deployment and updates
  • Meet a 4-hour SLA in the event of a server failure
  • Meet a 24-hour SLA in the event of a site failure
  • Eliminate single points of failure within the application
  • Support up to 2000 users

Meeting these goals will be achieved using the following system design components:

  • Dedicated database servers
  • Geographically distributed image store infrastructure
  • High availability configuration
  • Disaster recovery configuration

The resulting design is based on classifying each data source within the MokaFive system by the amount of data it typically carries. Since the policy and reporting data transferred between management servers and clients, as well as between management servers and database servers, is small, those systems will be centralized, with multiple systems provided for redundancy only. The image stores, on the other hand, carry, replicate and deliver larger amounts of data and are therefore designed with a distributed approach to minimize WAN traffic and delivery times, in addition to providing disaster recovery and high availability.

Business continuity planning design

Before digging into the design, let me define the terms as I’m using them (these terms tend to be used to mean different things by different people):

  • Business continuity planning (BCP) – a process that creates a design taking into account a variety of potential risks and identifying approaches to mitigate as many of the risks as possible. The BCP guidelines are typically provided by the business in the form of required uptime and allowed downtime during incidents for different systems and data sources.
  • Disaster recovery (DR) – a configuration created to meet BCP requirements that supports risk mitigation during a significant incident, typically involving the temporary or permanent deactivation of a data center or site.
  • High availability (HA) – a configuration created to meet BCP requirements that provides rapid service resumption in the event of a local outage such as a server or component failure.

In the case of a MokaFive system, the ability for the system to recover from a local server or component failure (HA) or a site failure (DR) relates to the configuration of each of the following components:

Database – MokaFive uses a Microsoft SQL database to store policy, client and configuration data which is used to drive the implementation and management of clients and images.

Application server – all communication with the platform is managed by the application server. It is the primary contact point for clients, administration consoles and automation scripts.

Image stores – delivering the content of virtual images is performed by the image stores. Both primary and replica image stores are supported by MokaFive with the former being a read/write copy that is used for authoring and staging while the latter is a read-only copy typically used as a distribution point for clients.

The design of each of these components to support the hybrid centralized/distributed model will be covered in the following sections:

 

Database design

Database redundancy for both HA and DR leverages capabilities built into the MS SQL product. In order to keep costs down, this configuration is designed with the standard edition of SQL in mind.

High availability is achieved using a two node database cluster. This configuration does increase cost due to the need for shared storage but ensures minimal downtime in the event of a SQL server or component failure.

Disaster recovery to a second data center is achieved using log shipping which allows SQL to replay back copied logs on a stand-by database server. This choice avoids the need for SQL Enterprise edition, which is required to support asynchronous database mirroring, the other alternative for database redundancy across a WAN link.

 

Application server design

The application server component doesn’t store any data and as a result, very little needs to be staged in advance to support failover either locally within a data center or across data centers in a site failure scenario.

The installation media can be used to deploy the software on a warm server, which should be patched regularly and ready for the application deployment. The deployment does require manual intervention but is very simple to execute and should be configured to use the active database server and image store during installation.

Access to the application server by clients is provided using an alias DNS record (a CNAME) which is also used for the SSL certificate and configured within the MokaFive console. This configuration requires a simple additional step of manually modifying the DNS record in order to complete the failover process. This action can also be scripted.
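A minimal sketch of that scripted failover using dnscmd (the DNS server, zone, alias and standby server names are all placeholders; the commands delete the existing CNAME and re-create it pointing at the standby application server):

# Repoint the application server alias (CNAME) to the standby server during failover
dnscmd dns01.domain.com /RecordDelete domain.com m5console CNAME /f
dnscmd dns01.domain.com /RecordAdd domain.com m5console CNAME standby-app.domain.com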

In order to make sure that clients and replicas are deployed using this alias rather than the server FQDN, we simply need to modify the server’s DNS name entry in the iConfig administration console under the General tab in the Network section. The value should match the alias stored in DNS and used in the SSL certificates protecting the system.

Image store design

Configuring redundancy for the image store is primarily an exercise in file replication. The image stores, both primary and replica, are just a set of files that need to be available to clients and to the application server. There are two components that must work together to ensure the redundancy: availability of the primary image store, and the ability of replicas and the Creator application to access the required information from the correct location as needed.

Maintaining availability of the primary image store can be accomplished with any file replication tool. I typically use Microsoft’s Distributed File System Replication (DFSR) because it’s built into the server I use and is efficient, secure and easy to configure. The latest version of MokaFive as of this writing, version 3.5, includes a new primary image store replication option that will likely negate the need for a separate replication tool going forward.

If, for any reason, the application’s built-in replication mechanism isn’t suitable, DFSR or another replication tool should do the trick just fine. Make sure to select a tool that replicates changes only, because the image store tends to contain very large files that change only a little bit at a time.

Replicating the primary store to a second server within the same data center and a third server in the DR data center will create a topology that mirrors the database and application servers (in fact, the application server is often used for the primary image store).

Once the primary image store is redundant, we just need to make sure that the replicas and Creator can find their primary. This is achieved using the same alias-based mechanism that ensures access to the application servers. If the primary image store is stored on the application server (my typical best practice), then no additional configuration is required. If the primary image store is on a dedicated server, it must be registered in the administration console using the alias name (in this case you will need a total of two aliases, one for the application server and one for the primary image store).

One big note: a lot of this configuration can be simplified by using a global, application-level load balancer, but since many organizations do not have one, this approach serves as a more general best practice that can be used anywhere.

There are many reasons to create a custom ADM/ADMX template: managing settings for software that doesn’t include GPO support, modifying an OS setting that isn’t part of the standard templates, disabling or enabling a specific component (e.g. IPv6), or extending the features of existing policy settings (e.g. redirecting user shell folders).

All of these have one thing in common: they complete their function by modifying registry keys, the core function of the custom ADM or ADMX template. This commonality results in the following typical high level process for creating or modifying custom ADM/ADMX files:

  1. Research the registry keys that control the required settings.
  2. Learn and understand the template file format.
  3. Create and test the custom template file.
  4. Repeat step 3 until everything works right (usually the longest step in the process).
  5. Deploy the templates to configure GPOs after training administrators about the differences between managed and unmanaged settings.
  6. Respond to questions and issues when the mechanism malfunctions, the specific requirements change or people forget the operation process for using the custom template.

There’s not much we can do about step 1, since we need to determine how to configure the required settings, but past that point this is a fairly long and sometimes painful road to implement the required change. As a result, many administrators choose to use scripts or .REG files to simplify the process and avoid having to dig into the ADM/ADMX file format.

With the introduction of Group Policy preferences in Windows Server 2008, we now have the Registry extension, which can accomplish the same task and much, much more. The base functionality allows us to deploy registry keys just as custom templates or scripts would, but the mechanism includes the following additional benefits:

  • The ability to import keys from the local computer’s registry – once you configure the required settings on your admin computer, you can import them directly into the GPO.
  • The ability to organize and manage keys by collection.
  • The ability to manage all of the key types: strings, DWORD, QWORD, multi-string value, expandable-string value and binary values.
  • The ability to update, replace or delete existing strings – the update action will only update the value data whereas the replace action will delete the existing key/value and create a new one with the desired value data.

In addition to the registry extension specific benefits, we also get the following benefits that are global to all preferences:

  • The ability to run user settings using the system security context
  • The ability to remove the item when the setting is no longer applied – this is an important option that allows the preference to behave similarly to a managed policy setting (note that this will not reinstate the original value, just remove the setting).
  • The ability to create a true preference by applying the setting only once, allowing the user to change it afterwards.
  • And, most importantly, the ability to configure conditional expressions for each registry key or collection to further define its target. This capability, known as item-level targeting (ILT), is a very granular and powerful engine that gives administrators the tools to direct each setting to the computers or users who need it, based on over 25 categories of properties including hardware levels, OS, networking configuration, group membership and any registry/file/LDAP/WMI query.

Given these benefits, the registry extension becomes the ‘Swiss army knife’ of custom registry modifications to Windows systems and user environments.

So while there is still a need for ADMX templates from Microsoft to manage the OS and there’s a strong need for templates from other software vendors, when those templates are not available, I reach for the registry extension and avoid any authoring of custom ADM/ADMX templates.

So are custom ADM/ADMX templates a thing of the past? Please share in the comments section. I’m interested in how many folks out there are still creating custom template files.