Archive for the ‘General’ Category

There are a lot of collaboration solutions available today. Whether users are talking, chatting, sharing documents, co-editing documents or participating in a real-time meeting, there is a broad spectrum of options. From traditional solutions (like phone and email) sold by well-known vendors and provided by the workplace, to newer offerings from younger vendors seeking to disrupt the existing collaboration landscape, IT departments and users have a lot to choose from. Furthermore, since many of the new solutions are offered free of charge (either in full or in a limited version), users can choose to try and even adopt solutions such as Dropbox, Trello or Skype on their own with no assistance from IT.

The result is that users in most organizations have many collaboration tools that can be used for a specific purpose or use case, some of which may not be provided by IT or even known to IT. And while empowering users to select tools that are appropriate for their use case is a valuable goal, too many collaboration tools tend to cause confusion, especially when users who need to collaborate may be using different tools and platforms. The end user experience, therefore, is a mess: users don't know which tool to use for which purpose or which tools are best for a specific user, department or location, and they are often not trained or well versed in the tools they need to use. In addition to the confusion that dominates the end user experience, this situation introduces significant risk to the business, as vital business information is stored in locations that IT is not aware of, with no governance controls and inadequate security controls.

The solution to the problem is a clear strategic plan for collaboration tools within the organization. Before I outline the key steps for arriving at this strategy and implementing it, let's do a quick review of the categories and use cases for collaboration tools that are relevant to most organizations today:

  • The old school tools – email, file shares and phone
  • Advanced file sharing – cloud-based solutions and document management (workflows, retention, records) – examples here include SharePoint, LiveLink, Dropbox or Box.com
  • Real time communication – chat, voice/video over IP using tools like Jabber, Call Manager or Lync
  • Real time document collaboration – real time editing of documents using solutions such as Google Docs or Office Web Apps
  • Enterprise social network – persistent chat and discussion threads with user controlled membership and participation – examples include Chatter, Jive or Yammer
  • Project oversight tools – project submission, tracking, project collaboration workspaces using platforms such as Project Server

Given the broad nature of the above use cases and of collaboration in general, these are the key components to a successful effort to standardize and simplify enterprise collaboration tools while balancing functionality and user experience:

  • Identify the use cases that are important to your business

Not every organization requires every use case to conduct business. Determining which scenarios are needed for the organization and prioritizing those establishes the organization’s collaboration requirements and serves as a starting point for any related efforts. This is especially vital for scenarios that require advanced tools that can be more costly or complex to implement.

  • Align requirements for each area with business units

Given the user facing nature of all collaboration solutions, the requirements definition exercise must include representatives of key business areas. A slick and cutting edge solution that is approved by IT is of minimal value if it doesn't meet the needs of sales, engineering or customer support. This alignment process must occur throughout the effort, with business users participating in requirements gathering, product demos, POC testing and finally training development and execution.

  • Create a collaboration usage policy

Any effort to address the collected requirements must be accompanied by a clear usage policy that tells users what is expected of them with regard to the management and custody of the organization's data. Since some users will no doubt prefer solutions other than those provided by the organization, it is important to clarify the policy regarding the use of alternative solutions as well as the process for requesting that IT consider changes to existing solution offerings. This policy serves to guide users toward acceptable collaboration practices as well as to protect the organization from the risk inherent in disseminating enterprise data through unapproved channels, often known as data leakage.

  • Inventory purchased solutions

As the effort shifts to the tactical task of leveraging the requirements to develop and implement suitable solutions, a first step is to review and understand what collaboration solutions have already been purchased by the organization and the degree to which they have been implemented. Since some solutions may have been purchased and deployed without any involvement by IT, something that's quite simple to do for cloud solutions, the financial impact of this step can be quite significant. Gaining visibility into enterprise assets and aligning those with proposed solutions and initiatives can not only reduce the cost of the overall effort but also greatly speed up the introduction of better functionality for all users.

  • Implement technical changes

The implementation phase includes modifications to existing solutions and deployment of new solutions to meet business requirements. This phase will typically be executed in stages, starting with the highest priority changes and moving down the list. Also, given that almost any organization has departments or workgroups with specialized needs, the implementation phase should start by focusing on solutions that are suitable for the large majority of users (the 80%) and then establish a process to review and find suitable solutions for the specialized needs throughout the organization (the 20%).

  • Educate users

The best solutions and most innovative tools are of little value if users don't understand when and how to use them. While many modern solutions tout themselves as user friendly and even self-explanatory, there is tremendous value in user education around selecting the right tool for the right job and using each tool correctly. This is especially true in an enterprise setting, where the organization usually places certain requirements or limitations on how tools can and should be used. It's also important to note that education is not a one-time effort and must include initial education, new hire education and ongoing refreshers.

  • Continually evaluate and improve

As with any program based on business requirements and a rapidly changing landscape, the collaboration framework within an organization must be reviewed and evaluated on a regular basis to ensure that the goals of the initial implementation were met and that the framework continues to evolve to meet the changing needs of the business as well as to incorporate new and better solutions in the marketplace.

If you only take away one key point from this post, make it the priority of aligning IT with the business. The IT department of 2014 must make sure that any initiatives, especially those that are user facing, are closely aligned with the business so that business problems are solved, business goals are met, users are engaged and productive on IT platforms, and users and managers can provide feedback that allows IT to correct course as needed.

Following the above approach may not make every user happy but it will help strike a balance between user satisfaction, team productivity, cost and business benefit.

Upgrading Windows on laptops/tablets isn't about imaging or SCCM/LANDesk. The real success factors are often not clearly understood and prepared for prior to the project, which increases cost during the project and sometimes means missing key benefits or improvements.

 

Key success factors are:

  • Explore new features of the OS – a new version of Windows often includes features that can deliver key business advantages or cost savings. Technologies such as DirectAccess, BitLocker and AppLocker are free for most organizations and can provide substantial benefits when implemented as part of an OS upgrade.
  • Understand the current environment – accurate data about current hardware and peripherals, the applications used throughout the organization (especially non-enterprise apps) and which end users are responsible for each application is often hard to come by. The assessments and discovery tasks needed to gather this information are very time consuming and are not ideal tasks for an outside vendor who is not intimately familiar with the organization. Starting the data collection well in advance and/or keeping the data current on an ongoing basis is necessary to reduce costs and meet deadlines (a minimal application inventory sketch follows this list).
  • Understand requirements – the business requirements for Windows projects are often defined in parallel with the project, sometimes extending into deployment and changing key project parameters at the last minute. This approach can be very costly so defining the requirements prior to the project and making sure they are aligned with business strategy (e.g. should users be storing data on local devices? How is the data backed up and shared? How does this integrate with cloud offerings? How do we avoid data leakage?) is a great way to ensure that the final product is a good fit for the organization and project costs are contained.
  • Application testing process – as the most important factor in overall project duration, an effective application testing process can have a huge impact not only on project timelines but also on the end user experience following migration. A well-defined process that is managed by a competent application analyst/process manager is vital to the success of the application testing effort and, with it, the overall project.
  • Change management – a client OS upgrade is often one of the most disruptive IT projects for end users. While the OS change itself might be minor, the accompanying upgrade of the browser, core productivity software (Office, Acrobat, etc) and introduction of new OS features can be very disruptive to a large majority of end users. Managing this change, setting expectations, communicating clearly and structuring the project to minimize disruption are important activities that must be prioritized and handled by an experienced program manager or process analyst.
  • Process overhaul – the broad footprint of this type of project invariably impacts many internal processes: support processes, application lifecycle management processes, security processes, on-boarding and off-boarding processes, hardware asset management processes and more. While these processes can be updated following the migration (a typical approach), doing so is much more disruptive and takes quite a while to complete as team members are busy supporting the organization. Reviewing and adjusting processes during the project, in coordination with the project team, results in a more seamless transition and a faster return to full productivity for the organization.
  • Compliance – for organizations subject to regulatory frameworks such as HIPAA, PCI, SOX, GxP, etc., the changes brought by a project such as this can have an even greater impact. Preparing the compliance teams for the project by including them in the project team from the onset and integrating their requirements and efforts into the project plan will help avoid last minute surprises that can derail execution.
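
As a quick illustration of the "understand the current environment" point above, here is a minimal sketch of how installed-application data could be collected from a Windows client. It is not a specific tool recommendation; it simply reads the standard Uninstall registry keys (both the 64-bit and 32-bit views) using Python's built-in winreg module, and the output file name and fields are just examples.

```python
# Minimal sketch: enumerate installed applications on a Windows client by
# reading the standard Uninstall registry keys (64-bit and 32-bit views).
# Assumes Python 3 on Windows; the output file and fields are illustrative.
import csv
import winreg

UNINSTALL_KEYS = [
    (winreg.HKEY_LOCAL_MACHINE, r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall"),
    (winreg.HKEY_LOCAL_MACHINE, r"SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall"),
]

def read_value(key, name):
    """Return a registry value, or an empty string if it is missing."""
    try:
        return str(winreg.QueryValueEx(key, name)[0])
    except OSError:
        return ""

def installed_applications():
    apps = []
    for hive, path in UNINSTALL_KEYS:
        try:
            root = winreg.OpenKey(hive, path)
        except OSError:
            continue  # the WOW6432Node key does not exist on 32-bit systems
        with root:
            for i in range(winreg.QueryInfoKey(root)[0]):  # number of subkeys
                with winreg.OpenKey(root, winreg.EnumKey(root, i)) as app_key:
                    name = read_value(app_key, "DisplayName")
                    if name:
                        apps.append({
                            "name": name,
                            "version": read_value(app_key, "DisplayVersion"),
                            "publisher": read_value(app_key, "Publisher"),
                        })
    return apps

if __name__ == "__main__":
    with open("app_inventory.csv", "w", newline="") as handle:
        writer = csv.DictWriter(handle, fieldnames=["name", "version", "publisher"])
        writer.writeheader()
        writer.writerows(installed_applications())
```

Hardware and peripheral data can be gathered along similar lines (for example through WMI or whatever inventory tooling is already in place); the important part is starting the collection early and keeping it current.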

 

The solution that CCO (www.cco.com) uses is to deploy a project team with a lead in each area identified and to ensure that the leads are familiar not only with the type of project but also with the type of organization. We leverage and support existing mechanisms within the organization to ramp up quickly on project portions that are required early and/or present high risk. Communication with all relevant channels, including executives, business units and application owners, is established at the onset and used for ongoing change management. The complementary roles of logistics/project management and process/change management are either filled by the same resources or by a tightly integrated team.

 

End result for a recent Windows 8.1 project:

  • Meeting project deadlines and delivering a new platform on brand new hardware for thousands of users in less than 6 months
  • Delivering cutting edge features (encryption, cloud backup, always on VPN) and platforms (convertible touch hardware, enterprise tablets) to users with minimal disruption
  • Replacement of iPads for remote workers with Windows based tablets that provide enterprise management and security
  • Overall user satisfaction due to visible project benefits such as cutting edge hardware, up to date productivity tools, cloud data storage, non-password authentication
  • Flexibility within project team resulted in meeting schedules in spite of challenging external factors (cutting edge technology, release of Windows 8.1 update 1 in middle of project, hardware availability issues, limited internal resources)

Many considerations go into designing a scalable, robust application infrastructure. Those considerations vary quite a bit from application to application and organization to organization. In fact, agreeing on the goals and constraints of the proposed system is typically the most important task in ensuring an efficient, relevant architecture.

When considering a MokaFive deployment, the following goals are typical and will be used to drive the example design covered in this post:

  • Minimize WAN traffic
  • Minimize user wait times for initial deployment and updates
  • Meet 4 hour SLA in the event of a server failure
  • Meet 24 hour SLA in the event of a site failure
  • Eliminate single points of failure within application
  • Support up to 2000 users

Meeting these goals will be achieved using the following system design components:

  • Dedicated database servers
  • Geographically distributed image store infrastructure
  • High availability configuration
  • Disaster recovery configuration

The resulting design classifies each data source within the MokaFive system based on the amount of data it typically carries. Since the policy and reporting data transferred between management servers and clients, as well as between management servers and database servers, is small, those systems will be centralized, with multiple systems provided for redundancy only. The image stores, on the other hand, carry, replicate and deliver larger amounts of data and are therefore designed with a distributed approach to minimize WAN traffic and delivery times in addition to providing disaster recovery and high availability.

Business continuity planning design

Before digging into the design, let me define the terms as I’m using them (these terms tend to be used to mean different things by different people):

  • Business continuity planning (BCP) – a process that creates a design taking into account a variety of potential risks and identifying approaches to mitigate as many of the risks as possible. The BCP guidelines are typically provided by the business in the form of required uptime and allowed downtime during incidents for different systems and data sources.
  • Disaster recovery (DR) – a configuration created to meet BCP requirements that supports risk mitigation during a significant incident, typically involving the temporary or permanent deactivation of a data center or site.
  • High availability (HA) – a configuration created to meet BCP requirements that provides rapid service resumption in the event of a local outage such as a server or component failure.

In the case of a MokaFive system, the ability of the system to recover from a local server or component failure (HA) or a site failure (DR) depends on the configuration of each of the following components:

Database – MokaFive uses a Microsoft SQL database to store policy, client and configuration data which is used to drive the implementation and management of clients and images.

Application server – all communication with the platform is managed by the application server. It is the primary contact point for clients, administration consoles and automation scripts.

Image stores – delivering the content of virtual images is performed by the image stores. Both primary and replica image stores are supported by MokaFive with the former being a read/write copy that is used for authoring and staging while the latter is a read-only copy typically used as a distribution point for clients.

The design of each of these components to support the hybrid centralized/distributed model will be covered in the following sections:

 

Database design

Database redundancy for both HA and DR leverages capabilities built into the MS SQL product. In order to keep costs down, this configuration is designed with the standard edition of SQL in mind.

High availability is achieved using a two node database cluster. This configuration does increase cost due to the need for shared storage but ensures minimal downtime in the event of a SQL server or component failure.

Disaster recovery to a second data center is achieved using log shipping, which allows SQL to replay copied logs on a stand-by database server. This choice avoids the need for SQL Enterprise edition, which would be required to support asynchronous database mirroring, the other alternative for database redundancy across a WAN link.
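
As a rough illustration of how the standby could be watched once log shipping is in place, the sketch below queries the standard log shipping monitor table in msdb for the age of the last restored log. It assumes the pyodbc package and Windows authentication, the server name is a placeholder, and the exact monitor table columns may vary between SQL Server versions, so treat it as a starting point rather than a finished monitor.

```python
# Rough sketch: check log shipping restore lag on the standby SQL Server.
# Assumes the pyodbc package, Windows authentication and the standard msdb
# monitor table (msdb.dbo.log_shipping_monitor_secondary); names are assumptions.
import datetime
import pyodbc

STANDBY = "standby-sql.example.com"      # hypothetical standby server name
MAX_LAG = datetime.timedelta(hours=1)    # alert threshold; align with BCP targets

conn = pyodbc.connect(
    f"DRIVER={{ODBC Driver 17 for SQL Server}};SERVER={STANDBY};"
    "DATABASE=msdb;Trusted_Connection=yes;"
)
cursor = conn.cursor()
cursor.execute(
    "SELECT secondary_database, last_copied_date, last_restored_date "
    "FROM msdb.dbo.log_shipping_monitor_secondary"
)
now = datetime.datetime.now()
for database, copied, restored in cursor.fetchall():
    lag = now - restored if restored else None
    status = "OK" if lag is not None and lag < MAX_LAG else "CHECK"
    print(f"{database}: last copy {copied}, last restore {restored} ({status})")
```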

 

Application server design

The application server component doesn’t store any data and as a result, very little needs to be staged in advance to support failover either locally within a data center or across data centers in a site failure scenario.

The installation media can be used to deploy the software on a warm server, which should be patched regularly and ready for the application deployment. The deployment does require manual intervention but is very simple to execute and should be configured to use the active database server and image store during installation.

Access to the application server by clients is provided using an alias DNS record (a CNAME) which is also used for the SSL certificate and configured within the MokaFive console. This configuration requires a simple additional step of manually modifying the DNS record in order to complete the failover process. This action can also be scripted.
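
As a rough example of what that scripted step might look like, the sketch below repoints the alias at the standby application server using dnscmd. It assumes the zone is hosted on a Windows DNS server where dnscmd is available, and the DNS server, zone, alias and target names are hypothetical placeholders to be replaced with your own.

```python
# Rough sketch: repoint the service alias (CNAME) at the standby application
# server after a failover. Assumes a Windows DNS server managed with dnscmd;
# the DNS server, zone, alias and target names are hypothetical placeholders.
import subprocess

DNS_SERVER = "dns01.example.com"      # DNS server hosting the zone
ZONE = "example.com"                  # zone that contains the alias
ALIAS = "mokafive"                    # alias that clients and replicas use
NEW_TARGET = "m5-app02.example.com"   # FQDN of the standby application server

def run(args):
    print(">", " ".join(args))
    subprocess.run(args, check=True)

# Delete the existing CNAME at the alias node, then add one for the standby node.
run(["dnscmd", DNS_SERVER, "/RecordDelete", ZONE, ALIAS, "CNAME", "/f"])
run(["dnscmd", DNS_SERVER, "/RecordAdd", ZONE, ALIAS, "CNAME", NEW_TARGET])
```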

In order to make sure that clients and replicas are deployed using this alias rather than the server FQDN, we simply need to modify the server’s DNS name entry in the iConfig administration console under the General tab in the Network section. The value should match the alias stored in DNS and used in the SSL certificates protecting the system.

Image store design

Configuring redundancy for the image store is primarily an exercise in file replication. The image stores – both primary and replica – are just a set of files that need to be available to clients and to the application server. There are two components that must work together to ensure the redundancy: availability of the primary image store and the ability of replicas and the Creator application to access the required information from the correct location as needed.

Maintaining availability of the primary image store can be accomplished with any file replication tool. I typically use Microsoft’s Distributed File System Replication (DFSR) because it’s built into the server I use and is efficient, secure and easy to configure. The latest version of MokaFive as of this writing, version 3.5, includes a new primary image store replication option that will likely negate the need for a separate replication tool going forward.

If, for any reason, that built-in mechanism isn't suitable, DFSR or another replication tool should do the trick just fine. Make sure to select a tool that replicates only changes, because the image store tends to contain very large files that change only a little bit at a time.

Replicating the primary store to a second server within the same data center and a third server in the DR data center will create a topology that mirrors the database and application servers (in fact, the application server is often used for the primary image store).
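
As a simple operational aid (not a MokaFive feature), something like the sketch below can spot-check that a replica copy of the image store matches the primary by comparing relative paths, file sizes and modification times. The UNC paths are hypothetical examples.

```python
# Simple operational aid (not a MokaFive feature): spot-check that a replica
# copy of the image store matches the primary by comparing relative paths,
# file sizes and modification times. The UNC paths below are hypothetical.
import os

PRIMARY = r"\\m5-app01\ImageStore"   # primary image store share (example)
REPLICA = r"\\m5-rep01\ImageStore"   # replica image store share (example)

def snapshot(root):
    """Map each file's path relative to root to its (size, mtime) pair."""
    files = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            stat = os.stat(full)
            files[os.path.relpath(full, root)] = (stat.st_size, int(stat.st_mtime))
    return files

primary, replica = snapshot(PRIMARY), snapshot(REPLICA)
missing = sorted(set(primary) - set(replica))
stale = sorted(p for p in primary if p in replica and primary[p] != replica[p])

print(f"{len(missing)} file(s) missing on replica, {len(stale)} file(s) out of date")
for path in missing + stale:
    print("  ", path)
```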

Once the primary image store is redundant, we just need to make sure that the replicas and Creator can find their primary. This is achieved using the same alias based mechanism that ensures access to the application servers. If the primary image store is stored on the application server (my typical best practice), then no additional configuration is required. If the primary image store is on a dedicated server, it must be registered in the administration console using the alias name (in this case you will need a total of two aliases, one for the application server and one for the primary image store).

One big note: a lot of this configuration can be simplified when using a global application level load balancer but since many organizations do not have those, this approach serves as a better general best practice that can be used anywhere.

Working with 64-bit operating systems, especially on the client side, still presents some challenges. One interesting effect I've run into recently is the redirection of the %windir%\System32 folder. When a 32-bit application attempts to access the folder, it is redirected to %windir%\SysWOW64, which allows those applications to use the correct version of various tools.

The difficulty arises when certain tools, like bcdedit.exe, do not exist in the %windir%\SysWOW64 directory, so attempts to run them from a 32-bit process fail. The solution is to access the tools via the %windir%\Sysnative folder, which bypasses the redirection.
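
For illustration, here is a minimal sketch that resolves bcdedit.exe through Sysnative from a 32-bit process; Python is used here only as a stand-in for any 32-bit program, and a native 64-bit process can simply use System32 directly.

```python
# Minimal sketch: from a 32-bit process on 64-bit Windows, resolve a tool such
# as bcdedit.exe via %windir%\Sysnative to bypass file system redirection.
# A native 64-bit process can simply use System32 directly.
import os
import subprocess

windir = os.environ.get("WINDIR", r"C:\Windows")
# PROCESSOR_ARCHITEW6432 is only set for 32-bit processes running under WOW64.
is_32bit_on_64bit = "PROCESSOR_ARCHITEW6432" in os.environ

# Sysnative is a virtual folder visible only to 32-bit processes on 64-bit Windows.
system_dir = os.path.join(windir, "Sysnative" if is_32bit_on_64bit else "System32")
bcdedit = os.path.join(system_dir, "bcdedit.exe")

print(f"Using {bcdedit} (32-bit under WOW64: {is_32bit_on_64bit})")
subprocess.run([bcdedit, "/enum"], check=False)  # requires an elevated prompt
```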

More information can be found here: http://msdn.microsoft.com/en-us/library/aa384187(VS.85).aspx

Welcome to the RDP Files

Posted: November 13, 2009 in General

It seems only fitting to kick off a new blog with an introduction. An introduction to the author, to the content, and to the reason for writing a blog.

My name is Guy Yardeni.

I'm a 15-year veteran of IT infrastructure work, most of it done as a consultant and/or implementer, assisting various organizations with deploying technologies such as directories, messaging, system management, security systems and content management platforms.

Most of my work these days focuses on the Microsoft products filling the above categories, but my past adventures have included in-depth exploration of Novell, Cisco, Citrix products and many more.

My typical day is spent designing solutions, implementing complex systems, supporting the technologies or providing knowledge transfer to IT staff about each solution and related products.

Which leads me to the question of why we need another blog about IT technology: in the course of my work, I run across many difficult problems, questions and challenges. Most of these eventually do get solved, but many elements of the troubleshooting process or the solution are not available online. Furthermore, the details about the problem and the solution are seldom captured in an easily retrievable manner.

The notion that this hard-to-get, valuable information will not be available to me (yes, I typically forget the details of the problem and solution after several months), my colleagues and other IT professionals seems very wasteful.

Therefore, the goal for this blog is to capture important information that I uncover in the course of my work and that would be valuable to myself and others down the road.

Finally, the name of the blog was selected because when I examine my work for the past several years, it seems that the tool that I use most often and that is most indispensable for my work is a good RDP client interface, which is used to remotely manage servers.

That’s it for the introduction, hope you’re ready for a steady stream of useful technical information.

Guy