Calling Nutanix a Storage Company Doesn’t Compute

Disruptive technologies are sometimes positioned within the context of existing solutions – the classic example being the horseless carriage as the initial categorization of automobiles. In our industry, we’ve seen Salesforce evangelize “No Software” long before cloud computing became popular. Citrix emphasized application delivery rather than hosted desktop sessions. VMware initially promoted “Mainframe-class reliability, security and management on Intel computers” until the virtualization concept took hold.

Nutanix faces a similar challenge in messaging our Virtual Computing Platform. I had breakfast last week with a savvy financial analyst who quickly grasped the disruptive nature of the company and how it is restructuring the architecture of virtualized datacenters. And yet, as we were parting, he still lumped Nutanix in with the myriad fast-growing new storage companies.

There is nothing wrong with storage, of course, at least not when we're talking about partially used paint cans, boxes of old records or even archived data. But when it comes to frequently accessed information and high IOPS, "storage" shouldn't even be used in the same sentence.

As an analogy, consider car keys. Most folks use them on a daily basis; it wouldn’t make sense to keep them in the basement storage room. Adding a fire pole to the basement or a moving walkway to speed progress through the storage room are rather silly solutions. It makes vastly more sense to keep the car keys conveniently close by on the kitchen counter or dresser. Similarly, active information should be maintained on SSDs and disk close to the CPUs, not on a centralized storage array accessed over a network.

History of Enterprise Storage

Enterprise storage for x86-based networks started with the introduction of the EMC Symmetrix in 1990. Despite the significant expense and complexity it entailed, larger organizations began consolidating their data from local server drives to the EMC arrays in order to benefit from shared access and other advantages. The Internet boom during the second half of the 1990s fueled a rapid increase in sales for both EMC and newcomer NetApp. But the dot-com bust early in the next decade reversed the trend, and both organizations saw declining revenues. That is, until VMware came on the scene.

[Chart: EMC Sales in Billions]

In 2003, VMware added a truly remarkable enhancement to ESX: the ability to VMotion virtual machines between hosts. IT staffs were eager to deploy this capability to eliminate the requirement for maintenance windows, among other use cases. But on page 37 of the VMware VirtualCenter 1.0 User Manual, under VirtualCenter VMotion Requirements, was the bullet that would change the datacenter: "The hosts must share a storage area network (SAN) infrastructure."

The tide immediately changed for the storage manufacturers as customers began purchasing shared storage to utilize VMotion and DRS. EMC recognized the potential goldmine and, later that same year, announced its intention to acquire VMware for $625M.

[Image: VirtualCenter 1.0 User Manual: The Bullet that Changed the Enterprise Datacenter]

The rest, of course, is history. The storage industry, driven by VMware virtualization, took off. In addition to the rapid growth of both EMC and NetApp, the other large datacenter incumbents, IBM, HP, Hitachi and Dell, all dramatically increased their storage businesses organically, through acquisitions, or both.

Today, in addition to the traditional storage players, we have new start-ups focusing on flash, server compute, virtualization, or scale-out in order to offer enhanced performance or lower costs. One of the flash start-ups, XtremIO, was acquired by EMC 18 months ago and is being formally launched as an EMC product tomorrow. Another storage start-up focused on flash, Nimble, has filed for an IPO.

[Chart: EMC Stock Price Increased 129% and NetApp 96% the Year VMware Announced "SAN Required"]

Storage Might Be Faster, but It's Still Storage

A storage array might consist of all flash and be blindingly fast in terms of IOPS, but it is still accessed by servers across a physical network, largely negating the speed advantage. And with some exceptions, including EMC's XtremIO, all of the storage traffic is funneled through two physical storage controllers, a bottleneck that can, and does, lead to boot storms and write storms, particularly in environments demanding high IOPS such as VDI.

This hub-and-spoke model of storage and servers also doesn't scale. As more servers are added, the array's IOPS are divided among ever more hosts, reducing the performance available to each. And when an array fills up with data, a complex and expensive forklift upgrade is required in order to continue expanding.
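To make the arithmetic concrete, here is a minimal sketch with assumed, purely hypothetical IOPS figures, contrasting a fixed-capacity shared array divided among a growing number of hosts with a scale-out cluster that adds performance with every node:

```python
# Hypothetical numbers: a fixed-capacity array shared by more and more hosts
# versus a cluster that contributes additional IOPS with every node added.

ARRAY_IOPS = 100_000      # total IOPS the dual-controller array can deliver (assumed)
IOPS_PER_NODE = 25_000    # IOPS each scale-out node brings with it (assumed)

for hosts in (4, 8, 16, 32):
    shared_per_host = ARRAY_IOPS / hosts   # the array's fixed total is split across hosts
    scale_out_per_host = IOPS_PER_NODE     # per-host performance stays constant as nodes are added
    print(f"{hosts:>2} hosts: {shared_per_host:>8,.0f} IOPS/host (shared array) "
          f"vs {scale_out_per_host:>8,} IOPS/host (scale-out)")
```

The exact numbers don't matter; the point is the trend. The shared array's per-host figure keeps shrinking, while the scale-out figure holds steady no matter how many nodes join.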

[Image: A Networking "Lazy River" Traverses the Distance between Servers & Storage]

The proprietary storage arrays are also difficult to manage. LUNs must be carved up and datastores configured. Each brand of array requires specialists who are skilled in administering the storage environment.

Bringing a Knife to a Gun Fight

While VMware-induced SAN deployments were propagating throughout enterprise datacenters, a very different scenario was unfolding in the Internet space. The story goes that prior to the launch of Google, Sergey Brin took a tour of the Yahoo datacenter which consisted of well over 1,000 NetApp Filers. He was astounded that so many of the arrays sat mostly idle due to lack of activity by users in different global time zones. Brin refused to accept his team’s explanation that storage arrays could not accommodate varying sets of user data. He told them to find a way to make Google’s storage agile, scalable and efficient.

The new company hired a team of scientists who developed the Google File System (GFS) and MapReduce to enable a massively scalable environment built from commodity servers with local drives, with no need for managing and optimizing a separate storage environment. The impact was quickly felt throughout the Internet provider space, whose leaders eventually all adopted a Google-like architecture. Robin Harris of StorageMojo estimated that Google had a 5-8X cost advantage over former search leader Yahoo. For Yahoo, it was like bringing a knife to a gun fight.

A couple of the developers of GFS, including the lead scientist, saw an opportunity to bring the advantages of true convergence to commercial and government enterprises by leveraging the hypervisor itself. Nutanix uses a similar distributed file system along with variants of MapReduce and NoSQL to enable predictive data placement. Nutanix, like Google, eschews RAID and instead utilizes commodity hardware with data replicated across multiple nodes.
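As a rough illustration of what replicating data across nodes (rather than relying on RAID inside an array) looks like, here is a minimal sketch; the node names, replication factor and hashing scheme are assumptions invented for the example, not Nutanix code:

```python
import hashlib

# Node names, replication factor and the hashing scheme below are assumptions
# made up for this example; they are not Nutanix code.
NODES = ["node-a", "node-b", "node-c", "node-d"]
REPLICATION_FACTOR = 2

def replica_nodes(extent_id: str) -> list[str]:
    # Hash the extent id to pick a starting node, then place each additional
    # copy on the next node around the ring so no two replicas share a node.
    start = int(hashlib.md5(extent_id.encode()).hexdigest(), 16) % len(NODES)
    return [NODES[(start + i) % len(NODES)] for i in range(REPLICATION_FACTOR)]

print(replica_nodes("extent-0001"))   # two distinct nodes chosen from NODES
```

Because every copy lives on a different node, losing a drive, or an entire node, leaves the data intact elsewhere in the cluster, which is the same resiliency property RAID provides, but without the proprietary array.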

The Nutanix Virtual Computing Platform includes both flash and disk as part of a commodity server rather than as components in a proprietary array. Data automatically migrates between flash and disk depending upon how actively it is accessed, optimizing performance while minimizing cost.
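A toy version of this kind of access-frequency tiering might look like the following; the threshold and bookkeeping are invented for illustration and are not the actual Nutanix algorithm:

```python
from collections import Counter

# Invented threshold and bookkeeping, purely to illustrate access-frequency
# tiering; this is not the actual Nutanix algorithm.
access_counts = Counter()    # block id -> number of recent reads
HOT_THRESHOLD = 100          # assumed cutoff for "frequently accessed"

def record_read(block_id: str) -> None:
    access_counts[block_id] += 1

def tier_for(block_id: str) -> str:
    # Frequently read blocks live on flash; everything else stays on disk.
    return "flash" if access_counts[block_id] >= HOT_THRESHOLD else "disk"

for _ in range(150):
    record_read("block-hot")
record_read("block-cold")
print(tier_for("block-hot"), tier_for("block-cold"))   # flash disk
```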

The Nutanix Virtual Computing Platform doesn't require specialized storage administrators. It is up and online in under an hour and is then managed entirely from the virtualization console by the same team administering the virtual machines. And Nutanix, which utilizes virtualized storage controllers, scales just like Google. A Nutanix Virtual Computing Platform can start with a single 3-node cluster and then grow one node at a time to accommodate thousands of nodes, without any change in configuration.
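Because every node contributes its own controller, cache and capacity, cluster resources grow linearly with node count. The per-node figures in this sketch are assumptions chosen only to illustrate the shape of that scaling:

```python
# Assumed per-node resources, chosen only to show the linear-scaling shape:
# every node adds a controller VM, cache and capacity to the cluster totals.
PER_NODE = {"controllers": 1, "cache_gb": 400, "capacity_tb": 5}

def cluster_totals(nodes: int) -> dict:
    return {resource: amount * nodes for resource, amount in PER_NODE.items()}

for n in (3, 4, 8, 32):
    print(n, cluster_totals(n))
```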

[Diagram: Nutanix Scales: Increasing Storage Controllers, Read/Write Cache & Compute/Storage Capacity]

A Radically Simpler, and More Efficient, Way of Building Enterprise Datacenters

When I corrected the financial analyst about his misclassification of Nutanix as a storage company, he sheepishly acknowledged his mistake. Nutanix is certainly not a storage company, just as it is not a server company. Nutanix has brought the same type of SAN-less scale-out architecture pioneered by Google to virtualized datacenters. Government and commercial enterprises across the globe are increasingly enjoying the cloud provider advantages of low cost, scalability, simplicity and resiliency.

And while the analyst might be struggling with how to categorize the Nutanix Virtual Computing Platform, he knows that it is proving to be wildly popular. Nutanix continues its pace as the fastest-growing infrastructure company of the past decade.

Thanks to Dave Gwyn (@dhgwyn) and Sudheesh Nair (@sudheenair) from whom I appropriated pretty much every good idea and analogy relayed in this article.

 

See Also:

Scale Out Shared Nothing Architecture Resiliency by Nutanix. 10/26/2013. Josh Odgers. CloudXC.

How Yahoo Can Beat Google. 07/05/2007. Robin Harris. StorageMojo.

Killing With Kindness: Death by Big Iron. 05/23/2006. Robin Harris. StorageMojo.
