Archive for the ‘Virtualization’ Category

MokaFive Automation

Posted: April 18, 2011 in Scripting, Virtualization

Automating tasks is a common goal for many projects, especially those implementing a new system or improving the operation of an existing one. The typical aim is a repeatable sequence of steps that can run on a schedule with no human intervention, minimizing the chances of forgetting to run the sequence or introducing errors into its execution.

With most software packages, automation relies on components created by the vendor in advance (which hopefully fit the needs of the implementer) or on a documented API that is sometimes complex and cryptic and must be learned and understood before it can be used.

MokaFive takes a web-services-based approach to this need. It makes the automation easier to create, reduces the need for a highly skilled developer to do the work, and greatly shortens the learning curve for the task.

RESTful Web Services

MokaFive uses RESTful web services not only for customization of its enterprise product but also for the vendor-supplied administration console. The large majority of actions taken within the administration console use REST calls that can be examined and then modified to execute the calls needed by the automation sequence.

REST (Representational State Transfer) is an architectural style for web pages and services characterized by a lack of state storage on the server. The result is that every client request must contain all the information needed to process that request. This facet makes examining REST calls and recreating them fairly straightforward.

There’s a lot of information about REST on the web and a good Wikipedia entry that can serve as a decent starting point for research: http://en.wikipedia.org/wiki/Representational_State_Transfer
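As a quick illustration of what "all the information in the request" means in practice, here is a minimal PowerShell 2.0 sketch of a stateless GET call. The endpoint path and credentials are hypothetical placeholders, not the actual MokaFive API:

$web = New-Object System.Net.WebClient
# Basic authentication travels with every request - no server-side session
$web.Credentials = New-Object System.Net.NetworkCredential("admin", "password")
# Hypothetical endpoint; substitute the URL captured from Firebug (see below)
$response = $web.DownloadString("http://m5server/api/ipranges")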

Example

The rest of this post walks through an example of how to automate a task for MokaFive using REST. The example is a script that creates IP ranges within MokaFive to direct clients to their nearest image store for image files. The script uses Active Directory sites and subnets as the authoritative data for the IP ranges.

The process involves the following high level steps:

  1. Obtain the necessary tools and software to work with REST
  2. Create the necessary REST requests and save them
  3. Create a script to collect sites and services data from AD
  4. Put the data and REST requests together to configure the IP ranges

Note: as with most scripted solutions, there are multiple ways to solve this problem; this is the method I selected and built.

Required tools

My example uses a client computer running Windows 7, with Firefox as the browser used to access the MokaFive administrator console. The code is written in PowerShell and takes advantage of the PowerShell 2.0 capabilities included with Windows 7.

Note: I highly recommend using the PowerShell ISE that's included with Windows 7 if you go the PowerShell route, as it makes the development experience much easier and quicker.

In addition to the core OS and the Firefox browser, the following tools were used:

  1. Firebug – this Firefox extension allows detailed examination of the interaction between the browser and the web server, including REST calls. The extension can be downloaded here.
  2. rest-client – a Java-based application that can be used to configure REST requests and save them for future use. There are many REST clients out there; this one was selected because it has both a full GUI version, which is useful for creating and testing the requests, and a command-line version, which is very easy to include in scripts. You can find this tool here.

Creating REST requests

Once the tools are installed, the next step is to open Firefox and log into the MokaFive console in order to perform the actions that need to be scripted. For the subnet exercise, I created a new IP range using the UI, which performs the three actions I need to capture for the script: getting the list of IP ranges, getting the list of image stores (performed to populate the image store pull-down) and creating the IP range. I also edited an IP range so the script could update an existing range.

Now press F12 to open the Firebug window, which appears by default below the browsing window. Make sure the Console tab is selected and you can scroll through all the actions observed by Firebug. Finding the specific actions you need is easier if Firebug is open when you perform them, but it is not too hard to do later since the URL path is a pretty good indicator of the action.

The next step is to reproduce each action with the REST client and save it for automated execution. Firebug allows you to right-click an action URL and copy its location. After running the rest-client UI, you can paste the location into the URL field. Next, select the ‘Auth’ tab and enter the login information – in my case, Basic authentication with username and password. This example uses HTTP since the code will run on the server itself. For production environments, especially when running across a network, you will probably also want to configure SSL using the SSL tab.

That’s all that’s needed for a GET request. You can test the request by running it and making sure you get the expected results, and then save the request. You could also use a more advanced authentication scheme by leveraging API calls directly to create an authentication cookie, but since one of my primary goals is simplicity and the script code will all reside on the MokaFive server, I didn’t go that route.

For the POST/PUT requests, which include adding an IP range or editing an existing one, modifications to the ‘Body’ tab are required. The first modification is to set the content-type and charset to the correct setting (which will match the setting viewed in Firebug under the action’s Headers\Request Headers section). The required content-type is ‘application/xml; charset=UTF-8’, which can be configured in rest-client’s ‘Body’ tab by clicking the leftmost icon (the one with the pencil on it) and selecting the correct value.

The second modification is the XML containing the data to be sent to the server. This XML can be found in the Firebug action under the ‘Put’ tab in the source section. I typically cut and paste from there into Notepad to strip any formatting, and then into the rest-client ‘Body’ tab. You can also type directly into the field to avoid any hidden characters coming from Firefox/Firebug.

When going through the POST request for a new IP range and the PUT request for an update to an existing one, I found that the requests are almost identical except for the method and the URL. As a result, I save and use only a single request (in my case the POST version) and modify those fields on the fly as needed; a sketch appears in section 4 below.

Once the requests are complete and tested, save the request files to be used by the automation script.

Collecting Active Directory site data

This is a task that PowerShell is able to handle easily. The following code sample retrieves and loops through the required data:

# Connect to the current Active Directory forest
$myForest = [System.DirectoryServices.ActiveDirectory.Forest]::GetCurrentForest()

# Sites are a collection on the forest; subnets are a collection on each site
foreach ($site in $myForest.Sites)
 {
 foreach ($subnet in $site.Subnets)
  {
  # $subnet.Name is the CIDR string, e.g. 10.15.20.0/24
  Write-Output ("{0}: {1}" -f $site.Name, $subnet.Name)
  }
 }

As the code demonstrates, the sites are a collection within the forest object and the subnets are a collection within each site object. The subnet’s name is a string with a CIDR representation of the subnet – for example, 10.15.20.0/24.

One interesting challenge, discussed later, is that while AD adheres to the CIDR standard, MokaFive allows the network-address part of the CIDR to be a host address, so (AD) 10.15.20.0/24 and (MokaFive) 10.15.20.1/24 can refer to the same network. Luckily, MokaFive stores the network address in a separate field, which I use instead.

Putting it all together

Taking all of the tools and information presented above and using them to construct the automated script is not very complicated, but it is a little time consuming. While I don’t plan to include my full code here, I will go through each section of the script to show the structure and highlight potential issues.

1. Collect initial data

In addition to setting global variables and opening the connection to the AD forest, this section collects the subnet and image store data from MokaFive. The subnet and image store data will be used later in the script for quite a few purposes, so I decided to collect it at the start.

This task highlights two interesting aspects of rest-client and the XML structure. First, the CLI version of rest-client allows you to specify the target directory but not the response file name; the response file name is the same as the request file name with an RCS extension (the request file uses an RCQ extension). A simple rename solves the problem of getting the file treated as an XML file.
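As a sketch, the CLI call and rename look something like this; the jar name, request file name and the output-directory switch are assumptions, so check your rest-client version's usage for the exact syntax:

# Run the saved request; -o names the output directory (flag assumed,
# verify against your rest-client CLI version)
& java -jar .\restclient-cli.jar -o .\responses .\get-ipranges.rcq
# The response arrives as get-ipranges.rcs; rename it so later steps
# can treat it as XML
Rename-Item .\responses\get-ipranges.rcs get-ipranges.xml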

The second issue is that the response-file XML is not directly useful: most tags contain data about the REST transaction rather than the needed payload, and the relevant data is all contained in a single tag called ‘body’. In order to process the ‘body’ data as XML, I extract it and create a new XML file containing only the contents of the ‘body’ tag. I could probably have done this in memory without the file, but writing the XML out makes debugging, testing and operational validation much easier. This process is done for both the subnets and the image stores.
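A minimal sketch of that extraction, assuming the payload sits in the ‘body’ tag’s text as described above (the exact response schema may differ between rest-client versions):

# Load the renamed response and pull out only the payload
[xml]$response = Get-Content .\responses\get-ipranges.xml
# '//body' grabs the single body tag wherever it sits in the response
$payload = $response.SelectSingleNode('//body').InnerText
# Write the payload to its own file so it can be loaded as plain XML
Set-Content -Path .\subnets.xml -Value $payload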

2. Primary loop – image store

The top-level structure of the script loops through each image store in the MokaFive configuration. Since I expect there will be AD sites and subnets that don’t participate in the MokaFive architecture, I decided to process each image store and make sure that every subnet in that image store’s site is configured as an IP range with the correct image store assigned to it.

The script needs a way to match image stores to AD sites in order to configure subnets correctly. For this specific script, I assume that the image store server name (and therefore the URL property) will start with the site code. There are many ways to accomplish this goal, but using a specific image store naming standard is one of the simplest approaches.

Once the site code has been identified, the site object is retrieved and used for the inner loop.
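A sketch of that lookup, assuming the naming standard above and a ‘url’ field on the image store record (the field name is an assumption, not the documented schema):

# e.g. a store URL of http://nyc-m5store01/... yields site code 'nyc'
$hostName = ([Uri]$imageStore.url).Host           # 'url' field assumed
$siteCode = $hostName.Split('-')[0]
# Retrieve the matching AD site object for the inner loop
$site = $myForest.Sites | Where-Object { $_.Name -eq $siteCode }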

3. Inner loop – subnets

Processing each subnet of the identified site is the job of the inner loop. The code searches for the AD subnet name in the MokaFive IP range list. Due to the CIDR issue identified earlier, instead of using the CIDR field in the MokaFive subnet XML, the NetworkID field is used, concatenated with the prefix-length portion of the CIDR (e.g. ‘/24’).

If the subnet is found, the image store data is compared: if it is correct, no action is taken; if it is incorrect, a record update is initiated. If the subnet is not found, a record addition is initiated.
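In outline, using the NetworkID field from above and hypothetical names for the other fields (the real MokaFive schema may differ):

$prefix = $adSubnet.Split('/')[1]                  # e.g. '24'
# Rebuild each range's CIDR from NetworkID to sidestep the host-address quirk
$match = $ipRanges | Where-Object { ('{0}/{1}' -f $_.NetworkID, $prefix) -eq $adSubnet }
if ($match -eq $null) {
    $action = 'add'        # no range exists yet: create one
} elseif ($match.imagestoreid -ne $imageStore.id) {   # field names assumed
    $action = 'update'     # range exists but points at the wrong store
} else {
    $action = 'none'       # already correct: leave it alone
}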

4. Subnet record manipulation

Both the subnet update and the subnet creation use the same request file, since the differences are easily changed in code. Prior to running the request, the ‘URL’, ‘method’ and ‘body’ tags are modified to create either a subnet update or a subnet creation. In the case of a subnet creation, the subnet mask (e.g. 255.255.255.0) must be derived from the prefix length at the end of the CIDR and used in the URL.
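A sketch of both pieces: deriving a dotted mask from the prefix length, then rewriting the saved request file. The tag names come from the post itself; the exact element paths inside the .rcq file are assumptions:

# Derive 255.255.255.0 from '10.15.20.0/24' (works in PowerShell 2.0)
$prefix = [int]$adSubnet.Split('/')[1]
$bits   = ('1' * $prefix).PadRight(32, '0')
$mask   = (0..3 | ForEach-Object { [Convert]::ToInt32($bits.Substring($_ * 8, 8), 2) }) -join '.'

# Rewrite the saved POST request into an add or an update as needed
[xml]$request = Get-Content .\add-iprange.rcq
$request.SelectSingleNode('//method').InnerText = $method   # 'POST' or 'PUT'
$request.SelectSingleNode('//URL').InnerText    = $url      # includes $mask
$request.SelectSingleNode('//body').InnerText   = $bodyXml  # the range record
$request.Save("$pwd\current-request.rcq")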

5. Clean up

…and that’s all there is. The last section removes any temporary files and deletes objects. I do leave a log file and echo some information to the screen throughout the process to simplify auditing and troubleshooting.

Last words

First, if you’re still reading, thanks for putting up with such a long post. If you have any other questions, please post in the comments or contact me directly at therdpfiles@gmail(d0t)com.

VDI the easy way

Posted: April 7, 2011 in Virtualization

It seems like most organizations these days are taking a look at Virtual Desktop Infrastructure as a potential technology to address several IT challenges: users’ personal device preferences, the need to standardize on a client platform and configuration, reducing hardware cost at the desktop, and others.

After working with many organizations to explore solutions from current leading vendors such as VMware, Microsoft and Citrix, my experience has been that most of these explorations and proofs of concept result in organizations shying away from deploying VDI at all. The solutions are complex, expensive (software and implementation resources) and typically quite a challenge to manage operationally. For most organizations, the core need to deploy a server farm that can host a concurrent session for each client is already an insurmountable problem.

As a result, I was extremely happy to recently discover a small vendor that provides a much simpler alternative with some very attractive features. The vendor is called MokaFive, and their enterprise solution uses existing client-side hypervisors (currently VMware and VirtualBox are supported) to run a managed virtual image on the client system. The computing resources required are all client-side, so no server farm is needed; in fact, the only required servers are a very lightweight policy management server and an infrastructure to provide the image files to clients.

MokaFive includes a client agent that interfaces with the hypervisor and manages the configured policies. These include some key security features, such as a timer controlling how long an image can be used offline before it must check in with the policy server, preventing an image from being copied to another host, requiring AV scans on the host and many more. The client agent supports hypervisors on multiple host operating systems, including Windows, OS X, Linux and bare metal, which makes the solution very attractive in a heterogeneous environment (i.e. everywhere).

The system also includes a pretty nifty client-side architecture that isolates the corporate-delivered components from user-added components and gives the user the controls to revert to the ‘vanilla’ corporate image if their own changes have created problems they can’t resolve.

From an IT Pro perspective, installing the solution only takes a few hours and customizing the image is easy as well. Policy controls are very flexible and the management console is well suited to getting information quickly.

Sorry if the post sounds like a sales pitch, but if a single-vendor solution goes above and beyond in solving a common problem, that’s worth taking note of. I would recommend that anyone seriously considering VDI in 2011 take a look at solutions that run on the client system, and specifically at MokaFive.

This latest cool solution comes from a colleague of mine, Andrew Abbate, and looks at providing access to isolated VM guests.

In my lab environment, I had a major constraint around IP address space: I was given 4 IP addresses, covering my 4 Hyper-V hosts. Thus I needed a way to address and reach the 40+ VMs that are configured in isolated networks.

The solution?  VLAN tagging and NAT.

The first step was to utilize the HP NIC utilities to create a tagged VLAN port (virtual interface). This can be done with any NIC that supports VLAN tagging, including Broadcom and Intel Pro adapters.

This gave me a second interface to which I could bind an additional subnet, without needing any additional network ports to activate additional networks on the Hyper-V servers.

In Hyper-V, the virtual switch is bound to the tagged VLAN interface.

Similarly, the individual VMs are bound to the same VLAN tag.
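At the time this was done through the Hyper-V Manager GUI, but on Hyper-V versions that ship the PowerShell module (Windows Server 2012 and later) the same two bindings can be scripted; the switch and VM names below are placeholders:

# Tag the host-side (management OS) adapter of the virtual switch
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName 'TaggedSwitch' -Access -VlanId 10
# Tag an individual VM's adapter with the same VLAN ID
Set-VMNetworkAdapterVlan -VMName 'lab-vm01' -Access -VlanId 10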

Within the VM, the guest is configured with an IP from the subnet on the tagged VLAN, and it uses the Hyper-V host as its default gateway.
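Inside the guest, that amounts to a static assignment along these lines; the adapter name and addresses are placeholders for whatever your tagged subnet uses:

# Static IP on the tagged subnet, with the Hyper-V host (10.10.10.1) as gateway
netsh interface ip set address "Local Area Connection" static 10.10.10.21 255.255.255.0 10.10.10.1 1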

The Hyper-V host then receives the Network Policy and Access Services role. This gives us Routing and, more importantly, Network Address Translation.

The “public” interface on the Hyper-V host is listed as the “internet” interface, and the tagged interface is used as the “shared” interface. This allows the IP range on the VMs to use the Hyper-V host as a NAT gateway. It is useful to note that if you forget to check “Enable virtual LAN identification” on the virtual switch interface (as shown above), the VMs will be able to talk to each other from host to host but not to the host itself. This can be annoying when getting non-ISO files from the host to the guest, and it will prevent NAT from working.

At this point, NAT allows the VMs to talk to networks on the other side of the Hyper-V host – including the Internet!

Now my needs became slightly more esoteric. I needed to test USB devices against the VMs, and since Hyper-V doesn’t have the ability to pass a USB device from the host to the guest, I needed another way: the ability to RDP directly into a VM on a network that isn’t routable. This is where the NAT configuration provides a solution.

By going into the properties of the public interface in the RRAS console, and then into the Services and Ports tab, I’m able to add a service with a NAT/PAT rule allowing RDP on a custom port.

In this case, I’m saying “if someone hits the public interface on the Hyper-V host on port 3390, pass that to a specific VM on port 3389.” This allows me to publish all my VMs’ RDP services via a single IP address; I simply alter the port in the RDP client (for example, connect to hypervhost:3390).
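As an aside, the same public-port-to-VM mapping can be expressed with the built-in portproxy helper. This is a plain TCP forward rather than the RRAS NAT rule itself, but it achieves an equivalent result for RDP; the addresses here are placeholders:

# Listen on the host's public IP on 3390 and forward to the VM's RDP port
netsh interface portproxy add v4tov4 listenport=3390 listenaddress=192.0.2.10 connectport=3389 connectaddress=10.10.10.21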

The net result: I can reach my 40+ VMs running on an isolated network from a production network without having to burn 40 IP addresses. This can be very useful in a lab environment where you need to bypass the network folks’ processes to get something working. It’s also a fun exercise in VLAN tagging and NAT rules.

A second option, specific to RDP access, is to deploy a TS Gateway on the host that listens on the untagged VLAN and provides connections to systems on the tagged VLAN.

To accomplish this, I added the RPC over HTTP Proxy feature and the Remote Desktop Services (R2) and IIS roles, defined the access rules, and for now just created a self-signed cert.

I then installed the self-signed cert into the Trusted Root container on my workstation, and I’m able to reference the Hyper-V host as my TS Gateway, list the “not really reachable” IP as my target, and RDP works fine.

So while both options can be used to provide RDP access to isolated guests, the incoming NAT translation can be used for many other purposes since it’s protocol-independent; for example, with it I’m able to run Windows Update on my isolated lab systems!

~A

P2V(hd) the easy way

Posted: November 22, 2009 in Virtualization

There are many methods for migrating a physical server to a Hyper-V virtual server, but most of them require a management platform or third-party software. For those in the market for a free and easy method to migrate a physical server onto a virtual Hyper-V platform, life recently got much easier with the introduction of Disk2vhd. Created by Mark Russinovich and Bryce Cogswell (of Sysinternals), the latest version of this tool makes P2V migrations as easy as can be.

Disk2vhd is free and runs on Windows Server 2003 SP1, Windows XP SP2 or later; the utility supports 32- and 64-bit systems. Running the tool is as simple as selecting the disks to be captured and the target location. For performance reasons it is recommended not to save the VHD image to the disk being captured, but capturing across a fast network works very well.

Disk2vhd is available for download here (http://technet.microsoft.com/en-us/sysinternals/ee656415.aspx).

One seemingly common problem with the capture process occurs when capturing a boot disk that does not include the disk controller drivers required by the IDE controller Hyper-V uses. This situation can be identified when the captured image boots on Hyper-V to a blue screen showing a 0x0000007B error code. Luckily, a simple modification can be made to the system before the migration is started to correct this issue: making sure that four IDE driver files are available on the server and registered in the registry. This additional step is typically needed only on Windows Server 2003 servers, and the required steps are explained in this KB article: http://support.microsoft.com/kb/314082. In my experience, on each system only one of the identified files was missing, and once it and the associated registry keys were added, the P2V process worked flawlessly.
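A quick pre-flight sketch for checking those driver files before capturing; the file list below is the set the KB article covers, and the registry entries it describes still need to be verified separately:

# Check for the IDE driver files Hyper-V's emulated controller needs at boot
$drivers = 'atapi.sys', 'intelide.sys', 'pciide.sys', 'pciidex.sys'
foreach ($file in $drivers) {
    $path = Join-Path "$env:SystemRoot\system32\drivers" $file
    if (-not (Test-Path $path)) { Write-Warning "$file is missing - see KB 314082" }
}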

Now go get rid of those aging physical servers!