In The Register last week, a VCE spokesperson said that comparing Nutanix to Vblock is like comparing a glove box in one vehicle to an entire car. A more applicable, albeit fictional, automobile analogy would be to contrast an inexpensive Tesla with a Cadillac Escalade for commuting back and forth to work.
VCE certainly hasn’t achieved a $1.8B run rate by selling a bad solution. On the contrary, I think it represents the pinnacle of the three-tier datacenter model. I was an early public proponent of both Vblock and UCS. A couple of my early UCS blog posts still show up in the top two results when Google searching “UCS vs. Matrix”. And I helped facilitate several publicized Vblock sales while at Presidio.
But back then I didn’t know that Nutanix was already bringing web-scale architecture to the enterprise. The same reasons that Google and the other leading cloud providers eliminated SAN-based infrastructure for their primary hosting businesses are the reasons that enterprises across the globe are gravitating to web-scale converged infrastructure today. Just the administration required for SANs alone (not even counting UCS) is tremendously complex and expensive. The lack of scalability, the vulnerability to downtime, the complexity and the high cost for the equipment, rack space, power and cooling are all drivers for enterprise migration to web-scale IT.
Here are the 10 reasons that web-scale converged infrastructure is going to trounce Vblock’s converged infrastructure 1.0:
1) Frankenblocks are Unnatural Solutions
Frankenblock was the affectionate nickname bestowed upon Vblocks by some early customers, but it’s applicable to all of the legacy storage manufacturers’ converged infrastructure solutions. These technological anomalies have only been able to flourish due to the immaturity of the virtualization industry.
A virtualized datacenter necessitates extensive collaboration among the storage, server and network teams – something that many IT shops, with their stovepipe functional organizational models, are not designed to accommodate. Troubleshooting complex virtualization issues generally requires calls to multiple manufacturers – and finger-pointing commonly ensues. And it can easily take months for organizations to procure the storage, server and network components and get everything working well together in a virtualized environment.
The converged infrastructure 1.0 approach helps mitigate these virtualization challenges to a certain degree, but it is an unwieldy solution. Vblocks, for example, include EMC storage and UCS already pre-racked and cabled, yet the lead time for delivery is still extensive. Although there is one number to call for support, difficult problems still end up in conference calls with the individual manufacturers. Even the ordering process can be complex for channel partners who need to contact three different manufacturers in order to obtain the lowest-cost VCE quote (which still costs significantly more than just purchasing the individual product components).
2) VCE Claims it “…delivers the industry’s only true converged infrastructure systems”, Yet it Doesn’t Actually Converge Any Infrastructure
VCE coined the term “converged infrastructure,” but every leading storage manufacturer now uses it to describe a prepackaged combination of servers and storage arrays – promising application optimization and a single point of management as part of an integrated stack. The irony is the complete lack of convergence – at least from an infrastructure standpoint.
Wiktionary defines convergence as the merging of distinct devices, technologies or industries into a unified whole. It follows that converged infrastructure should eliminate redundant hardware and consolidate disparate management tiers. The convergence of voice and data networks, for example, eliminated both the hardware and management requirements for separate voice networks (PBXs).
Vblock, FlexPod, PureFlex and all of the other “converged infrastructure” solutions utilize proprietary arrays with Intel-based storage controllers that are very similar to the Intel servers they use for compute. There is no elimination of hardware, rack space, power or cooling requirements. And there is no consolidation of management; the arrays still require specialized storage administration.
3) Hardware-Defined Convergence Doesn’t Work so Well
Let’s use “hardware-defined convergence” as a more applicable moniker for simply packaging physical products together. When I was a kid, it was a foregone conclusion that the car-plane would be ubiquitous by now. But hardware-defined convergence has never proven to be very successful. Converged infrastructure 1.0, whether Vblock or not, will certainly not be the exception to the rule.
VCE architects have long recognized this deficiency. A couple of years ago, when he worked at the Office of the CTO for VCE, Steve Chambers wrote a blog post about hyper-converged infrastructure to describe solutions that truly did converge compute and storage.
Although Nutanix is the only “hyper-converged infrastructure” manufacturer providing a distributed file system to bring web-scale IT to the datacenter, many other manufacturers have entered the market. Most significantly, VMware recently introduced VSAN. As the dominant virtualization leader, VMware’s endorsement of software-defined converged infrastructure (also commonly referred to as Server SAN as well as hyper-converged infrastructure) is a tremendous validation of the architecture. EMC is also jumping into the space with ScaleIO and with its upcoming Project Mystic.
4) Software-Defined Convergence Whips Hardware-Defined Convergence Every Time
For a newer technology to displace an incumbent, it typically needs to be notably easier to use, significantly less expensive, or both. Blackberry initially dominated the market it created by using software to converge cell phones with PC email functionality. But although Blackberry had a tremendous advantage as first mover, it also had a huge vulnerability – a physical keyboard.
When the iPhone was first announced, Steve Ballmer famously scoffed at both its high price and its lack of a physical keyboard, saying that businesses would never accept it. But both business users and consumers loved the convenience of the software-defined iPhone keyboard. While perhaps not quite as easy to type on as Blackberry’s physical version, its ease of use more than compensated, because the iPhone adjusts to reflect the functionality being used – whether phone, calculator, MP3 player, camera or email device. The iPhone and other software-defined smartphones quickly made Blackberry largely irrelevant.
Vblock’s packaging of UCS servers and arrays has captured a lot of sales to organizations hurting from the pain of datacenter virtualization. But software-defined alternatives will inevitably win the day.
5) Centralized Storage is Anachronistic in the Modern, Virtualized Datacenter
When VMware announced vMotion in 2003, the vCenter 1.0 Users Manual included a bullet point on page 37 stating, “The hosts must share a Storage Area Network (SAN).” This requirement changed the face of the enterprise datacenter for the next ten years as organizations around the globe purchased arrays in order to run ESX and vMotion.
But even with today’s modernized SAN architecture, centralized storage is still not a good fit for a virtualized datacenter. Traditional architecture separates flash and disk from the compute (where they belong) and sticks them in proprietary arrays at the far end of the network, where they’re subject to performance degradation from network hops and latency. Even all-flash arrays still suffer from network bottlenecks.
Furthermore, utilizing physical storage controllers makes it challenging to ensure adequate IOPS for the many different types of virtual workloads, which further contributes to I/O-related issues such as the “blender effect.”
Additionally, physical LUNs – which require complex infrastructure tasks such as creation, masking and zoning – have no awareness of the virtual machines running on them, making it impossible to define granular per-VM policies such as compression, deduplication, data protection and replication.
6) Vblock Compounds the Problem of Poor Storage Array Scalability
The inability of storage arrays to scale easily and inexpensively is one of the biggest detriments to a virtualized datacenter. This is particularly true when uncertainty exists about ultimate resource requirements, as with virtual desktops and private cloud. The large initial investment required for a storage array capable of handling projected future requirements is often enough to discourage organizations from moving forward with a VDI or private cloud initiative.
Vblocks can compound this problem by requiring staircase purchases of not only storage, but also UCS chassis and Nexus switches. This tends to be particularly problematic for private cloud initiatives in organizations relying upon project-based budgeting for funding their IT infrastructures. It also makes chargeback more difficult to implement due to the complexity of allocating the costs of large blocks of storage, compute and networking in a meaningful manner.
7) Even Cisco UCS Has its Limits
UCS was the largest investment Cisco had ever made in a product when it undertook development 10 years ago. UCS was built under the direction of Ed Bugnion, a co-founder of VMware, and it was the only product developed by any of the datacenter leaders specifically for hosting a virtualized datacenter. When UCS debuted a little over five years ago, it incorporated some remarkable innovations such as FCoE (Fibre Channel over Ethernet), hypervisor bypass, extended memory and a GUI that helps the server and storage teams collaborate more effectively.
Despite widespread sentiment that Cisco didn’t know anything about the server business and would fail miserably, UCS went on to become the top-selling blade server in the Americas. But UCS has an enormous disadvantage: it addresses only a small part of the virtualized datacenter’s issues – the compute. The majority of issues by far have to do with storage. Not surprisingly, along with VCE/EMC, three other datacenter storage manufacturers – NetApp, Hitachi and Nimble – incorporate UCS for the compute portion of their converged infrastructure solutions.
8) Web-Scale has Already Won in the Cloud Provider Space
Years before VMware transformed the enterprise space, the Internet giants were already consuming large quantities of SANs and NAS. When Google came on the scene, the co-founders knew that they would need to handle billions of users and trillions of objects. They wanted a solution that would be much more economical, efficient, resilient and scalable than shared storage.
Google consequently took a scientific approach to rethinking datacenter infrastructure. Indeed, the company hired a team of scientists who developed the Google File System along with MapReduce and NoSQL. Instead of using storage arrays, Google runs hundreds of thousands of commodity servers utilizing GFS to aggregate the local storage.
Google published papers on its GFS-based architecture in 2003, and today no leading cloud provider uses arrays for its primary hosting business. Web-scale IT – commodity servers, local storage and some variation of a distributed file system – has become the de facto standard in this very demanding space.
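The core idea behind this architecture – pooling the local disks of many commodity nodes into one resilient logical store – can be sketched in a few lines. The following is a deliberately simplified toy illustration (the class name, rendezvous-hash placement and three-way replication are assumptions for this sketch, not GFS’s actual design):

```python
import hashlib

class ToyDistributedStore:
    """Toy sketch of web-scale storage: aggregate the local disks of
    commodity nodes into one logical pool. Illustrative only."""

    def __init__(self, nodes, replicas=3):
        self.nodes = nodes          # e.g. ["node-00", "node-01", ...]
        self.replicas = replicas    # keep several copies of every chunk
        self.placement = {}         # chunk_id -> nodes holding a copy

    def _rank(self, chunk_id, node):
        # Highest-random-weight (rendezvous) hashing: a deterministic
        # score per (chunk, node) pair spreads chunks evenly and keeps
        # most placements stable when nodes join or leave.
        return hashlib.sha256(f"{chunk_id}:{node}".encode()).hexdigest()

    def place(self, chunk_id):
        # Store each chunk on the N best-ranked nodes.
        ranked = sorted(self.nodes,
                        key=lambda n: self._rank(chunk_id, n),
                        reverse=True)
        self.placement[chunk_id] = ranked[: self.replicas]
        return self.placement[chunk_id]

    def locate(self, chunk_id):
        # Any replica can serve a read; losing one node loses no data.
        return self.placement[chunk_id]
```

Because placement is a pure function of the chunk ID and node list, there is no central array controller to bottleneck or to buy in staircase increments – adding a node simply adds capacity and I/O in parallel.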
The low cost, resiliency, simplicity and scalability of the web-scale architecture ensure that it will also become the standard of the modern (virtualized) datacenter.
9) Web-Scale Converged Infrastructure is Proving to be Even More Effective in the Enterprise
“50 per cent of global enterprises will be taking an architectural approach to web-scale IT by 2017”
– Gartner 03/10/2014
Although most enterprise, and even government, datacenters are not as large as those of the leading cloud providers, they often receive even greater benefit from Web-Scale converged infrastructure. This is due to their much greater number of applications, the majority of which tend to be off-the-shelf.
Conventional three-tier architecture is infrastructure-centric rather than VM-centric. The storage subsystem has no virtual machine awareness and no insight into the number or configuration of the disparate workloads residing on it. Administrators are forced to use complex data analysis techniques in an attempt to mitigate the “bully/victim” and “noisy neighbor” effects among virtual machines, along with mysterious application sluggishness and potential service disruption.
Nutanix Web-Scale Converged Infrastructure, on the other hand, is VM-aware. The architecture uses virtual machines, rather than LUNs, as the primary building block of the datacenter. Policies traditionally defined at the storage-pool or LUN level can now be applied to individual virtual machines – where they make sense – resolving the application and service-disruption issues and enabling both analytics and replication at a very granular, per-VM level.
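The difference between the two policy models can be made concrete with a purely illustrative contrast (the names, fields and `effective_policy` helper below are hypothetical, not any vendor’s actual API):

```python
# LUN-centric: one policy covers every VM sharing the LUN, so the
# settings are a compromise across unrelated workloads.
lun_policy = {"lun-07": {"compression": False, "replication": "none"}}
vms_on_lun = {"lun-07": ["sql-prod", "web-01", "dev-scratch"]}

# VM-centric: each virtual machine carries its own policy, so the
# database can replicate while the scratch VM stays cheap.
vm_policies = {
    "sql-prod":    {"compression": False, "replication": "sync",  "dedup": False},
    "web-01":      {"compression": True,  "replication": "async", "dedup": True},
    "dev-scratch": {"compression": True,  "replication": "none",  "dedup": True},
}

def effective_policy(vm):
    # With VM awareness the lookup is direct; no reverse mapping
    # from VM to LUN is needed.
    return vm_policies[vm]
```

In the LUN-centric model, turning on replication for `sql-prod` means replicating `dev-scratch` too; in the VM-centric model each workload gets exactly the protection it needs.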
10) Web-Scale Converged Infrastructure Delivers what VCE Promises
Simplicity is just one of the claims commonly made by VCE that Nutanix actually fulfills. Another is time-to-deployment. VCE boasts that it takes only 40 days to procure and stand up a Vblock; Nutanix installs in hours and can be procured in less than a week. And not only is Nutanix much less expensive to procure and scale, it is also much less expensive and less complex to operate.
Web-Scale also enables a vastly less complex and more elegant approach to hosting virtualized infrastructure than packaging servers, even UCS servers, with storage arrays. For example, since the virtualization administrators manage the entire environment, there is no need for collaboration between server and storage teams.
Enterprises and governments around the world are embracing Web-Scale IT. Nutanix is now the fastest-growing infrastructure company of the past three decades.
To learn more about web-scale, check out Web-Scale Wednesday, the industry-wide online event that Nutanix is hosting live on June 25, featuring speakers from Facebook, Twitter, Nutanix, Citrix, Dell, Wikibon, The Register and more.
Thanks to Michael Berthiaume, VCDX, and Jerome Cheng for their contributions to this article.