Software is also eating the data center

Marc Andreessen’s famous August 2011 WSJ article, Why Software Is Eating the World, discusses how software companies, especially Silicon Valley firms, are disrupting industries across the planet. Most big data center players still cling to the hardware-based models of yore. But the growing ubiquity of the hypervisor as the new data center OS means that software-defined technologies will increasingly challenge the status quo.

Software-Defined Data Center Attributes

Proliferating data center virtualization has revealed the necessity for simplified infrastructure design, implementation and support. Major data center players including EMC, Cisco, NetApp, HP, IBM, Oracle and Dell have responded with converged infrastructure (CI) solutions that combine compute, storage and network resources either as products or as reference architectures.

These solutions have found a very receptive market – VCE alone is exceeding a billion-dollar run rate just three years after launch. But while they solve many of the efficiency challenges of a virtualized infrastructure, the CI dependency on storage hardware limits their ability to enable a next-generation software-defined data center (SDDC), which is defined by the following attributes:

Convergence: A SDDC should embody true convergence across different tiers of data center applications, consolidating the infrastructure in the process. Converging storage and compute onto the same rack or even the same chassis still leaves two distinct tiers requiring an intermediate network to move data continuously between them. Storage controllers that operate only a single box are meaningless in a SDDC. They need to be aggregated over multiple nodes so that management and resiliency happen as part of a single global system, as the sketch below illustrates.
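
As a rough Python sketch of this idea (all class names are hypothetical; this is not Nutanix’s actual implementation), per-node controllers can be pooled so that capacity and write placement are decided cluster-wide rather than box-by-box:

    # Hypothetical sketch: aggregate per-node storage controllers into one
    # global pool instead of managing each box as its own island.
    class NodeController:
        """A virtual storage controller running on a single node."""
        def __init__(self, node_id, capacity_gb):
            self.node_id = node_id
            self.capacity_gb = capacity_gb
            self.used_gb = 0

    class ClusterPool:
        """Presents every node's controller as a single global system."""
        def __init__(self, controllers, replication_factor=2):
            self.controllers = list(controllers)
            self.rf = replication_factor

        def usable_capacity_gb(self):
            # Raw capacity is divided by the replication factor because each
            # write is stored on multiple nodes for resiliency.
            return sum(c.capacity_gb for c in self.controllers) // self.rf

        def place_write(self, size_gb):
            # Spread replicas across the least-loaded nodes; if one node
            # fails, a surviving copy exists elsewhere in the cluster.
            targets = sorted(self.controllers, key=lambda c: c.used_gb)[:self.rf]
            for c in targets:
                c.used_gb += size_gb
            return [c.node_id for c in targets]

    pool = ClusterPool([NodeController(i, 2000) for i in range(4)])
    print(pool.usable_capacity_gb())   # 4000 GB usable at replication factor 2
    print(pool.place_write(100))       # replicas land on two different nodes

Losing any one node leaves the pool intact, which is the resiliency property a single-box controller cannot offer.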

Elastic Consumption: The software-defined data center is VMware’s version of private cloud. As such, it should mimic the public cloud in terms of elastic resource consumption. But separate storage and compute tiers require that either excess capacity be purchased up-front, or that forklift upgrades be undertaken as demand increases.
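
A back-of-the-envelope illustration of the difference (all capacities and prices below are invented for the example): with separate tiers you pay for year-three capacity on day one, while elastic scale-out defers the spend until demand actually materializes:

    # Invented numbers, for illustration only.
    demand_tb = [40, 80, 120]        # demand at the end of years 1-3
    cost_per_tb = 3000               # hypothetical $/TB

    # Separate storage tier: the array is sized for year-3 demand up-front.
    upfront_spend = [120 * cost_per_tb, 0, 0]

    # Elastic scale-out: nodes are added each year as demand grows.
    elastic_spend = [40 * cost_per_tb] * 3

    for year, (up, el, d) in enumerate(zip(upfront_spend, elastic_spend, demand_tb), 1):
        idle_tb = 120 - d            # capacity already bought but still unused
        print(f"year {year}: up-front ${up}, elastic ${el}, idle TB (up-front case): {idle_tb}")

The totals match only if the forecast is perfect; if year-three demand never arrives, the up-front buyer has paid for shelfware.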

Hybrid Agility: The three hybrid components of a SDDC are flash + disk, multiple hypervisors and private/public cloud interoperability:

  • Flash + Disk: Tying flash to the array makes it difficult to address certain workloads such as big data, to manage data on a lifecycle basis (a toy tiering sketch follows this list) and to incorporate technology innovations.
  • Multi-Hypervisors: Despite the many benefits accruing from using a single hypervisor, organizations are increasingly deploying multiple options.
  • Private/Public Cloud Interoperability: Hybrid agility requires seamless exchange of workloads between private clouds and public providers.
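
To make the flash + disk point concrete, here is a toy tiering sketch (not any vendor’s actual algorithm): data stays on flash while hot and migrates to disk once it goes cold, with promotion back to flash on access:

    import time

    HOT_WINDOW_SECONDS = 3600        # hypothetical threshold: "hot" = touched within the hour

    class TieredStore:
        """Toy lifecycle tiering between a flash tier and a disk tier."""
        def __init__(self):
            self.flash = {}          # key -> (value, last_access_time)
            self.disk = {}           # key -> value

        def write(self, key, value):
            self.flash[key] = (value, time.time())

        def read(self, key):
            if key in self.flash:
                value, _ = self.flash[key]
            else:
                # Promote on access: cold data that turns hot moves back to flash.
                value = self.disk.pop(key)
            self.flash[key] = (value, time.time())
            return value

        def demote_cold(self):
            # Periodic sweep: anything idle past the window migrates to disk.
            now = time.time()
            for key, (value, last_access) in list(self.flash.items()):
                if now - last_access > HOT_WINDOW_SECONDS:
                    self.disk[key] = value
                    del self.flash[key]

When flash is welded to an array controller, a policy like this has to live inside the array; decoupling it in software is what lets the lifecycle span flash, disk and, eventually, public cloud tiers.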

Legacy storage solutions will find it hard to retrofit flash and public cloud storage into their offerings. Legacy system management services will find it hard to subsume management of multiple hypervisors within a single pane of glass. Design of a consumer-grade console to manage these hybrid environments requires fresh thinking.

Hyper-Convergence

Cloud providers such as Google, Facebook, Amazon, Microsoft Azure and Twitter all utilize custom-built servers with aggregated local storage rather than SANs. This environment, also known as hyper-convergence, is efficient, reliable, extremely scalable and low-cost.

But unlike the Internet juggernauts, it is impractical for enterprises to run their myriad applications on a custom-built distributed server environment. The Nutanix concept originated with a couple of the Google File System architects who realized that they could leverage the hypervisor to achieve the same hyper-convergence benefits for the masses. Over time, the engineering team gathered additional top talent from VMware, Oracle, Microsoft and most recently, Facebook.

Nutanix utilizes the hypervisor as a substrate where everything now runs as a service. The storage controllers themselves are virtualized onto the hypervisor, right next to the workloads and data. This eliminates the constant traffic between servers and a shared storage device. And the virtualized storage pools enable capabilities such as VMware Fault Tolerance, High Availability and DRS to all work “out of the box”.
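
A simplified sketch of the data-locality consequence (hypothetical names, not the Nutanix read path): with a controller on every node, reads are served from a local replica whenever one exists, so most I/O never crosses the network:

    class DistributedStore:
        """Toy model: each data block is replicated on a set of nodes."""
        def __init__(self, replica_map):
            self.replica_map = replica_map   # block_id -> set of node_ids

        def read(self, block_id, requesting_node):
            replicas = self.replica_map[block_id]
            if requesting_node in replicas:
                return f"local read on node {requesting_node}"   # no network hop
            # Remote only when no local copy exists (e.g. right after a VM
            # migration, before the data is copied back to the local node).
            return f"remote read from node {min(replicas)}"

    store = DistributedStore({"blk1": {0, 2}, "blk2": {1, 3}})
    print(store.read("blk1", 0))   # served locally
    print(store.read("blk2", 0))   # crosses the network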

The Nutanix Virtual Computing Platform consolidates the compute and storage tiers onto one unified appliance that takes up only 2U of rack space. It accommodates four x86 servers, server-attached PCIe flash and high-capacity SATA drives. The result is reduced cabling, power and cooling requirements as well as reduced network traffic.

The best hardware-based CI solutions incorporate a GUI enabling effective collaboration between separate compute and storage teams. Hyper-convergence, along with consolidating multiple technologies, also abstracts the low-level intricacies within each functional silo. Policy and resource management are elevated to a level where they can be handled by a single data center team, enabling organizations to move away from a stovepipe IT staffing model.

SDDC Performance

Data center manufacturers like to argue that purpose-built hardware with custom ASICs enables performance at scale that software cannot match. While often true in the early stages of a software innovation, history shows that superior ease-of-use matters more to consumers than a small performance advantage.

As an example, Java was initially much slower than C. But its versatility and ease-of-use eventually earned it a much greater market share. This phenomenon is amplified by Moore’s Law, which renders any initial performance advantage short-lived. We saw this take place with virtual servers, which for some time now have run as fast as or faster than their physical counterparts, and we’re seeing it take place again today with virtual desktops. VDI is now much more dominant than server-based computing, and it’s increasingly eating into the market for physical PCs.

But Nutanix is far from religious about the software-defined everything mantra. Storage is virtualized without any intermediation from the VMware hypervisor, and includes PCIe pass-through. Accessing storage hardware directly, rather than going through the hypervisor, significantly enhances performance for services that require special-purpose hardware.

Marketing Speak?

The SDDC terminology is not just marketing speak. As an analogy, think about what Apple did to phones, calculators, cameras, Rolodexes, Sony Walkmans, eReaders, etc. The iPhone converged all of these individual technologies using a software-defined platform that changes the keyboard on the fly to match whatever functionality is accessed.

Hyper-convergence is necessary to provide iPhone-like consolidation benefits to a software-defined data center. And in the process, it reduces both cost and complexity. Most importantly, fractional and elastic resource consumption facilitates a private cloud environment.

In this model, technology management is much more aligned with data center-level objectives. And rather than spending the majority of their time on infrastructure issues, the IT staff can work more closely with the business. This allows them to leverage the SDDC capabilities of speed and agility to achieve not just IT, but business objectives.

See Also:
The Nutanix Solution. Nutanix Web site.
VCE Vblock Demand Hits Billion Dollar Run Rate Three Years After Launch. 02/20/2013. EMC Press Release.
Converged Infrastructure Takes the Market by Storm. 08/22/2012. David Vellante. Wikibon.
HyperConvergence phase added to the Infrastructure Continuum. 08/20/2012. Steve Chambers. ViewYonder.
Why Software Is Eating the World. 08/20/2011. Marc Andreessen. The Wall Street Journal.
Apache Hadoop. Wikipedia.
