Six guiding principles to consolidate your IT

Over the last decade, enterprises have invested heavily in IT systems to support their growing business needs. However, this has increased the complexity of system integration and forced companies to invest ever more in ongoing maintenance. Annually, 65–70% of the IT budget is allocated to cover these costs. Today, companies are revisiting their IT strategy and looking for ways to consolidate their IT in order to reclaim this investment, reduce maintenance and support future growth.

The need for IT consolidation is most evident in two types of organizations. In the first, IT grew organically with the business over the decades, surviving changes in strategy, management, staff and vendor orientation. The second consists of capital groups characterized by rapid growth through acquisitions, followed by attempts to integrate radically different IT environments. In both cases, the IT infrastructure has typically been pieced together over the past 20 or more years.

Yet continual enhancements and the installation of new applications to support business processes place a significant burden on existing IT. The cost of maintaining and operating non-integrated legacy environments is taking its toll and blocking investment in revenue-generating initiatives. Consolidation can reverse this trend, delivering the same service levels with less.

Where to start?

The most basic consolidation programs can start with selecting the same vendor to provide software for various applications, and/or the same software version and release across the entire business portfolio. The simple principle is that the more you buy from a vendor, the better the relationship, the greater the scope for negotiation and, potentially, the more efficient the integration of components. This means savings on both CAPEX and OPEX (through simpler maintenance). The drawbacks of working with a single vendor are, of course, equally well known.

The next obvious step is to reduce server counts by consolidating user workloads onto fewer systems. This is swiftly becoming a top-down priority for many enterprises that are struggling under server sprawl. Virtualization is the key enabler here: it abstracts the hardware so that multiple operating systems and software stacks can share the same physical resources in secure isolation. The barriers to adopting virtualization are minimal, with inexpensive entry-level solutions and the comforting knowledge that you can migrate between the physical and virtual worlds.
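
As a quick sanity check before any consolidation project, the following minimal sketch (assuming a Linux host; Python is used here purely for illustration) reads /proc/cpuinfo and reports whether the CPU advertises the Intel VT-x (vmx) or AMD-V (svm) extensions that modern hypervisors rely on:

```python
# Minimal sketch: check whether a Linux host's CPU advertises the
# hardware virtualization extensions a hypervisor needs.
# "vmx" = Intel VT-x, "svm" = AMD-V; both appear in /proc/cpuinfo flags.

def virtualization_flags(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                return {f for f in ("vmx", "svm") if f in flags}
    return set()

if __name__ == "__main__":
    found = virtualization_flags()
    if found:
        print("Hardware virtualization supported:", ", ".join(sorted(found)))
    else:
        print("No VT-x/AMD-V flags found; check BIOS settings or CPU model.")
```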

Hardware or hypervisor virtualization?

There are two dominant technology paradigms to choose from when virtualizing your IT infrastructure: hardware or hypervisor-level virtualization. Hardware virtualization is not a new concept; its origins can be traced back to IBM's mainframe work of the 1960s, which evolved into logical partitions (LPARs), the first technology that allowed the underlying hardware to be allocated to specific workloads. Today, leading server vendors, including HP, Dell, Fujitsu and Oracle, each have their own variation of hardware virtualization. So what are the scenarios for its usage?

For many enterprises, the appeal of hardware virtualization is a stable, well-supported and efficient environment for their mission-critical and legacy applications. However, the downsides include the high TCO of such high-end servers, complex administration requirements and lock-in to a specific vendor's hardware. Most critically, hardware virtualization does not let you break down data center silos to achieve full flexibility in workload placement.

Today there is more to server virtualization than simply supporting the consolidation of workloads onto a reduced set of servers. This is where hypervisors fit in, underpinning the potential for even greater advances: resource pooling, workload migration and high availability, to name just a few. A hypervisor is a thin layer of software that runs directly on the hardware, allowing multiple operating systems and their software stacks to run on top of it.
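
To make the concept concrete, here is a minimal sketch using the open-source libvirt management API and its Python bindings (an assumption of this example, not something prescribed above) to list the guests on a local KVM host; the same calls work against other hypervisor drivers by changing the connection URI:

```python
# Minimal sketch, assuming the libvirt Python bindings and a local
# QEMU/KVM host ("pip install libvirt-python"); other drivers such as
# Xen or VMware ESX are addressed via a different connection URI.
import libvirt

conn = libvirt.open("qemu:///system")  # hypervisor connection URI
try:
    for dom in conn.listAllDomains():
        # info() returns (state, max_mem_KiB, mem_KiB, vcpus, cpu_time_ns)
        state, max_mem, mem, vcpus, cpu_time = dom.info()
        running = "running" if dom.isActive() else "stopped"
        print(f"{dom.name():20s} {running:8s} {vcpus} vCPU(s) {mem // 1024} MiB")
finally:
    conn.close()
```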

The hypervisor market was pioneered by VMware in 2001 with the launch of its ESX Server. As the market has matured, leading vendors including Citrix, Microsoft and Oracle have entered the fray, alongside the established open-source Xen and KVM offerings. Depending on the business need, any number of them may be suitable. But is choosing a hypervisor as simple as picking one and standardizing on it?

Consider

In the real world, enterprises run mixed architectures from multiple vendors (often a legacy of corporate history). To account for this, a holistic approach is needed to ensure that you do not exchange server sprawl for hypervisor sprawl. You need to consider:

  1. Workloads – what types of workloads are you looking to virtualize? Many enterprises start with application and departmental servers, leaving more mission-critical systems such as BI, ERP and database management further down the list. A clear consolidation strategy should virtualize workloads step by step, building on the knowledge gained from successful migrations. Do not take an all-or-nothing approach.
  2. Governance – enterprises also run many servers in external data centers. In the absence of a standardized virtualization solution, it is important to define governance guidelines and a clear communication strategy to stop virtualization solutions slipping in “by the back door”. Departments will often take a short-sighted view and decide based on familiarity.
  3. Support – or, more succinctly, FUD (“fear, uncertainty and doubt”). This is a hot issue and one that leaves many enterprises ill at ease. Certain applications depend on a defined stack, which may in effect dictate the hypervisor required. For instance, Oracle only fully supports its applications when they run on Oracle VM. This leaves enterprises that run Oracle stacks on another vendor’s hypervisor weighing the risk: Oracle will support them for known problems, but for unrecognized problems, or if Oracle’s recommended solution fails, it is left to the enterprise to prove that the root cause is not at the hypervisor level. Of course, most hypervisor vendors offer tools to migrate workloads onto a supported platform, but with outages proving so costly, the risk of running an unsupported infrastructure can be too great for many to take.
  4. Business continuity – hypervisors can form an important element of an enterprise’s reliability and disaster recovery planning. Features such as dynamic workload balancing ensure that virtual machines under high user load are migrated to underutilized servers before users are adversely affected. Template virtual machines, so-called “golden images”, can simply be deployed over the network, minimizing downtime during routine upgrades and patching. Virtual machine snapshots provide yet another fallback if things go wrong during maintenance: within seconds you can revert to a last-known-good configuration with all processes running (see the snapshot sketch after this list). Not all hypervisors support these features, so evaluate your business needs before investing.
  5. Licensing – one of the main drivers for virtualization is the streamlining of license costs. But first you need to check whether your existing vendor licensing agreements can benefit from this model. This is not a trivial task and will involve lengthy auditing and negotiation with your application vendors to determine the correct path. For instance, if you are paying per named user, you could benefit from moving to a per-CPU licensing model once workloads are consolidated (see the worked comparison after this list). Obviously, this is not in the application vendor’s interest, and they may try various tactics to dissuade you.
  6. Existing admin skills – are your admins more “point and click” focused? If so, the user-friendliness of the hypervisor is a pressing requirement, and this is where VMware and Hyper-V have the upper hand. For administrators schooled in Linux, Unix or Solaris, however, a Xen-based offering such as Oracle VM can be tailored and configured to suit your environment and deliver great value at a fraction of the cost.
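
To illustrate the snapshot fallback from point 4, here is a hedged sketch using the libvirt Python bindings against a QEMU/KVM host; the guest name appserver01 and the snapshot name are invented for illustration:

```python
# Sketch of point 4, assuming libvirt-python and a QEMU/KVM host;
# "appserver01" is a hypothetical guest name used for illustration.
import libvirt

SNAPSHOT_XML = """
<domainsnapshot>
  <name>pre-maintenance</name>
  <description>Last-known-good state before patching</description>
</domainsnapshot>
"""

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("appserver01")

# Take a snapshot before maintenance begins.
dom.snapshotCreateXML(SNAPSHOT_XML)

# ... perform upgrades or patching here ...

# If something goes wrong, revert to the last-known-good configuration.
snap = dom.snapshotLookupByName("pre-maintenance")
dom.revertToSnapshot(snap)
conn.close()
```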
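
And to make the licensing trade-off in point 5 tangible, here is a back-of-the-envelope comparison; every figure below is invented for illustration and bears no relation to any real vendor's price list:

```python
# Hypothetical comparison for point 5: per-user vs per-CPU licensing
# after server consolidation. All figures are invented for illustration.
NAMED_USERS = 800
PRICE_PER_USER = 120        # annual cost per named user

CONSOLIDATED_HOSTS = 2      # virtualized hosts after consolidation
CPUS_PER_HOST = 4
PRICE_PER_CPU = 8_000       # annual cost per licensed CPU

per_user_cost = NAMED_USERS * PRICE_PER_USER
per_cpu_cost = CONSOLIDATED_HOSTS * CPUS_PER_HOST * PRICE_PER_CPU

print(f"Per-user licensing: {per_user_cost:>8,}")   # 96,000
print(f"Per-CPU licensing:  {per_cpu_cost:>8,}")    # 64,000
print("Cheaper model:", "per-CPU" if per_cpu_cost < per_user_cost else "per-user")
```

With these assumed numbers, consolidating onto two per-CPU-licensed hosts is cheaper, but flip the user count or CPU price and the conclusion reverses, which is exactly why the auditing step matters.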

Final thoughts

CIOs will need to consider their options carefully when consolidating. Although many enterprises would simply like to standardize on a single hypervisor, migrating all workloads onto it may not make sense for any number of the reasons outlined above. Today, we are seeing more and more enterprises address these needs through multi-vendor virtualization.

When taking this approach, it is important to look to the future. You chose virtualization to reduce costs, provide flexible server utilization and streamline management; you do not want your data center to become siloed at the hypervisor level instead. The challenge is to create an environment where hypervisors from multiple vendors coexist seamlessly to form a private cloud. Eventually, you will want to take advantage of the public cloud and its SaaS business model. These are not trivial issues, but the boldest data center virtualization efforts typically yield the greatest benefits.
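
One pragmatic way to keep such a multi-vendor environment manageable is a common management layer. As a sketch of the idea (an illustration, not a recommendation from the text above): libvirt can address KVM, Xen and VMware ESX hosts through the same API, with only the connection URI changing; the hostnames and credentials below are placeholders:

```python
# Sketch: one management layer across mixed hypervisors via libvirt.
# The URIs/hostnames are placeholders; each driver must be installed
# and reachable for libvirt.open() to succeed.
import libvirt

HYPERVISOR_URIS = [
    "qemu:///system",                        # local KVM host
    "xen+ssh://admin@xen-host-01/",          # remote Xen host over SSH
    "esx://admin@esx-host-01/?no_verify=1",  # VMware ESX host
]

for uri in HYPERVISOR_URIS:
    try:
        conn = libvirt.open(uri)
    except libvirt.libvirtError as err:
        print(f"{uri}: connection failed ({err})")
        continue
    names = sorted(dom.name() for dom in conn.listAllDomains())
    print(f"{uri}: {len(names)} guest(s): {', '.join(names) or 'none'}")
    conn.close()
```

Whether you standardize on an abstraction layer like this or on a commercial multi-hypervisor management suite, the design goal is the same: no workload should be welded to the hypervisor it happens to run on today.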