TCO: Pure Storage vs. Nutanix Enterprise Cloud

Disclaimer: This blog is personal and reflects the opinions of the author, not necessarily those of Nutanix. It has not been reviewed or approved by Nutanix.

A lot of back-and-forth tweets have taken place between a few Nutanix folks (including yours truly) and @deepStorageNet, which I regret. The first shots were fired via a DeepStorage Technology report (commissioned by Pure Storage), Exploring the True Cost of Converged vs. Hyperconverged Infrastructure: FlashStack delivers all flash performance at a cost below that of Nutanix Hybrid HCI. I feel I should have responded privately rather than with an emotional tweetstorm.

I first met Howard Marks of DeepStorage several years ago when I stopped to see him in his lab in Santa Fe, though I knew of him by reputation long before that. He is a smart guy with a clever wit, which is why I was disappointed in his report. The document draws conclusions that I feel are unrealistic, though the lack of detailed information in the analysis makes it very difficult to evaluate the results. A coworker and I tried to get more detailed information, but by that time tensions were high and the atmosphere uncooperative.

A Legacy Mindset

I generated a TCO analysis of the two platforms (which you can find here) that, among other assumptions made in FlashStack's favor, models Nutanix with RF3 (rather than the more economical but still extremely resilient RF2) and takes no credit for data reduction (compression and deduplication) or erasure coding efficiency. Even with these handicaps (and without considering other Nutanix capital and operating cost advantages), Nutanix Enterprise Cloud enables significant savings vs. FlashStack. So why do Howard and I come to such differing conclusions?

I believe that a big reason is the same one I’ve run into countless times since I began doing financial modeling for Nutanix well over four years ago: a legacy storage mindset. Storage has evolved from a proprietary hardware array (all flash or not) to become an application running on commodity servers (HCI). When looking at the world through the lens of proprietary hardware, a superficial comparison of acquisition cost can often provide enough information to make a reasonable decision. That is not the case with technologies as different as converged infrastructure and HCI.

An interesting parallel to the impact of this legacy mindset on cost evaluation is the similar way it impacts performance evaluation. Traditional workload generators such as Iometer, vdbench, and fio are designed for brute-force speed tests against traditional shared storage arrays. They fail to measure the far more relevant challenge of maintaining consistent and reliable performance when storage becomes software. Nutanix created X-Ray to evaluate workload characterizations in the real (HCI) world of noisy neighbors, added workloads, rolling upgrades, and node failures.

Howard said that his requirement for N+2 (RF3 in Nutanix terms) comes from the risk of a subsequent failure during a RAID rebuild. He is right to be concerned, given the additional stress put on a RAID set from both front-end I/O and back-end rebuilds. But this problem is not applicable to Nutanix, whose distributed platform avoids hotspots during rebuilds and utilizes all drives within the cluster to perform the rebuild, rather than the small subset a traditional RAID-based system relies on.
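To make the concern concrete, here is a back-of-the-envelope sketch of the rebuild-time difference. The drive size, per-drive rebuild throughput, and cluster size are hypothetical numbers of my own, chosen only to show the shape of the math, not measurements of either platform.

```python
# Hypothetical rebuild-time comparison. All figures are illustrative placeholders.

def raid_rebuild_hours(drive_tb, spare_write_mbps):
    """Traditional RAID: the rebuild is bottlenecked by writes to a single spare drive."""
    seconds = (drive_tb * 1_000_000) / spare_write_mbps
    return seconds / 3600

def distributed_rebuild_hours(drive_tb, per_drive_rebuild_mbps, drives_in_cluster):
    """Distributed rebuild: every surviving drive contributes a slice of the re-protection work."""
    seconds = (drive_tb * 1_000_000) / (per_drive_rebuild_mbps * (drives_in_cluster - 1))
    return seconds / 3600

# A 2 TB device, 100 MB/s of sustained rebuild throughput per drive,
# and a 24-drive cluster participating in re-protection (hypothetical values).
print(raid_rebuild_hours(2, 100))              # roughly 5.6 hours against one spare
print(distributed_rebuild_hours(2, 100, 24))   # roughly 0.24 hours spread across the cluster
```

The longer a rebuild takes, the longer the window in which a second failure can cause data loss, which is why spreading the work across the whole cluster matters.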

The Nutanix (Acropolis) Distributed Storage Fabric (ADSF) proactively monitors drive health and re-protects data at the first sign of a drive potentially having issues. RF2 is also much more resilient than, say, a RAID 5 (N+1) platform and can in fact lose up to 24 drives concurrently without data loss.

Nevertheless, to keep the TCO comparison even more conservative in favor of the status quo, I modeled Nutanix using RF3.
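To show how much these conservative assumptions weigh on the arithmetic, here is a minimal sketch of cost per usable TB under different replication and data reduction assumptions. The raw capacity, cluster cost, and reduction ratio below are illustrative placeholders of my own, not figures from the DeepStorage report or from my TCO analysis.

```python
# Illustrative cost-per-usable-TB math. All figures are hypothetical placeholders.

def usable_tb(raw_tb, replication_factor, data_reduction_ratio=1.0):
    """Usable capacity after replication overhead and any data reduction savings."""
    return (raw_tb / replication_factor) * data_reduction_ratio

def cost_per_usable_tb(cluster_cost, raw_tb, replication_factor, data_reduction_ratio=1.0):
    return cluster_cost / usable_tb(raw_tb, replication_factor, data_reduction_ratio)

raw_tb = 200            # hypothetical raw capacity across the cluster
cluster_cost = 500_000  # hypothetical acquisition cost (USD)

# Conservative case used in the analysis: RF3 and no compression/dedup credit
print(cost_per_usable_tb(cluster_cost, raw_tb, 3, 1.0))  # 7,500 per usable TB

# A more typical case: RF2 with a modest 1.5:1 data reduction ratio
print(cost_per_usable_tb(cluster_cost, raw_tb, 2, 1.5))  # about 3,333 per usable TB
```

Even on the handicapped (RF3, no data reduction) basis, the analysis still shows significant savings for Nutanix; the more typical case only widens the gap.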

Doing Your Own Math

As I wrote almost a year ago in the blog post Financial Modeling in the Era of Hyperconvergence and Cloud, a financial modeling framework is the best approach to complex technology comparisons. Why rely on any generic cost or TCO comparison report when you can generate one specific to your organization, your environment, and your decision criteria?
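Such a framework does not have to start out complicated. The sketch below assumes a simple capex-plus-opex structure with a discount rate; the cost categories and every number in it are hypothetical, and a real model would add items such as power and cooling, licensing, staffing, migration, and hardware refresh cycles specific to your environment.

```python
# Minimal TCO framework sketch. All inputs are hypothetical placeholders.

def simple_tco(capex, annual_opex, years, discount_rate=0.0):
    """Acquisition cost plus (optionally discounted) operating costs over the term."""
    total = capex
    for year in range(1, years + 1):
        total += annual_opex / ((1 + discount_rate) ** year)
    return total

# Two hypothetical alternatives evaluated over a five-year term
option_a = simple_tco(capex=600_000, annual_opex=90_000, years=5, discount_rate=0.05)
option_b = simple_tco(capex=750_000, annual_opex=140_000, years=5, discount_rate=0.05)
print(round(option_a), round(option_b))
```

The value of owning the model is that you can change any assumption, such as replication factor, data reduction ratio, or support costs, and immediately see how the comparison moves.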

Thanks to Tim McCallum (@TimMcCallum97) for extensive contributions to the TCO analysis and to Josh Odgers (@Josh_Odgers) for sharing his technical expertise.
