April 15, 2016

Enterprise Cloud: For business success and IT relevance


The easier you make it for your business to use IT, the more your company will rely upon IT. We know that thanks to an economic principle known as the Jevons paradox.

The Jevons paradox states that as technology evolves and processes become enhanced to the point where a resource can be used more efficiently, the rate of consumption associated with that particular resource increases as well. That increase is due to a simultaneous increase in demand.
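To make the dynamic concrete, here is a small illustrative sketch of the rebound effect using a constant-elasticity demand model; the 50% efficiency gain and the elasticity value of 1.4 are hypothetical figures chosen only for illustration, not anything Jevons measured:

```python
# Illustrative rebound-effect model behind the Jevons paradox.
# Hypothetical assumptions: demand for the service follows a constant-elasticity
# curve, and efficiency improves by 50%.

def resource_consumed(efficiency, elasticity, base_demand=100.0, base_price=1.0):
    """Resource units consumed when the effective price per unit of service
    falls as efficiency rises (constant-elasticity demand)."""
    price = base_price / efficiency                      # cost per unit of service delivered
    demand = base_demand * (price / base_price) ** (-elasticity)
    return demand / efficiency                           # resource needed to serve that demand

before = resource_consumed(efficiency=1.0, elasticity=1.4)
after = resource_consumed(efficiency=1.5, elasticity=1.4)
print(f"resource use before: {before:.1f}, after a 50% efficiency gain: {after:.1f}")
# With elasticity above 1, the efficiency gain increases total resource use;
# below 1, total use would fall.
```

If demand is inelastic, the efficiency gain simply saves resources; the paradox only appears when easier, cheaper consumption unlocks substantially more demand – exactly the IT situation described below.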

The paradox, not surprisingly, is named for William Stanley Jevons. He observed that the consumption of coal drastically increased in England following the introduction of the Watt steam engine, which burned coal more efficiently than previous technologies.

That paradox is relevant to the cloud computing conversation. In traditional 3-tier (centralized storage + storage networks + compute) data centers, the mishmash of SANs, NAS devices, switch fabrics, servers and management consoles leads to considerable inefficiencies in technology consumption. The result is complexity, poor scalability, and vulnerability to downtime, among other challenges.

Businesses, though, require increasingly fast IT responsiveness in order to succeed. When they can’t get the agility they need from internal IT, they look to public cloud.

Public Cloud is No Panacea

While public cloud can be a fantastic fit for elastic workloads, it is typically not the optimal solution for the majority of applications. For one thing, it tends to be quite expensive – particularly when adding in components such as load-balancing, backup, DR, bandwidth, monitoring, etc. Cost often increases further when performance proves inadequate for certain workloads. I was at a large company recently that was paying exorbitant public cloud fees because so many employees ordered VMs without any centralized way of managing them or of discontinuing usage when no longer required.

It can also be very costly to migrate to public cloud. Wikibon estimates that recouping the cost of converting to public cloud takes over four times as long as recouping the cost of migrating to a private cloud. The first year is particularly painful, with public cloud conversion having a large negative value to the organization.

It can additionally be expensive and time-consuming to get back out of the cloud. Facebook required 20 engineers and a year to migrate Instagram from AWS to its own datacenters following the acquisition. And public cloud often still lacks capabilities required for compliance and certain standards of security.

The Enterprise Cloud Alternative

Virtualization, in the early days, held out great promise for simplifying the data center and for bringing a new level of efficiency to IT operations. Instead, things became even more complex as organizations added SANs, NAS devices and switch fabrics in order to utilize enterprise virtualization capabilities such as vMotion/Live Migration and DRS.

Hypervisors were built to sustain these hodgepodge environments, adding still more silos of management consoles, analytics databases and disaster recovery tools. Enabling acceptable scalability and resiliency often required extensive design and implementation costs.

“One of the main value-adds Nutanix brings is simplifying the way people interact with, purchase and manage IT hardware.”

       – Paul Pindell, Senior Solution Architect, F5 Networks

Nutanix eliminates the inefficiencies of 3-tier infrastructure, allowing IT to benefit from the same accelerated time-to-market, simplicity and efficiency of public cloud while also enabling lower costs, more control and better long-term predictability. This simplification translates into business benefits.

The IT Manager at Protective Trust explained in an IDC study of 13 Nutanix customers, “Nutanix impacts the performance of our services to the end user, which means they are happier … That results in more sales for us. There are customers that we probably would not have gotten if it weren’t for Nutanix. The impact to revenue is probably [in the] millions — $1-2 million per year currently.”

In the same IDC study, an IT manager at an insurance company in Europe described how Nutanix solutions help make business more effective: “We’re now more proactive with Nutanix because I can predict more easily how I want to change my infrastructure …. This means that 80% of my time is proactive; before, it was like 40%. We’re doing projects and other work with this saved time and creating business value.”

An IT manager at Langs Building Supplies said, “With Nutanix, we are doing far more projects for our business units …. Instead of spending 65% of our time running things, we’re spending 65% working on the business …. So we’ve become an enabler rather than a hindrance.”  

Converging Public and Private Clouds

Nutanix is taking hyperconvergence to a new level by starting to converge the private and public clouds, delivering hybrid capabilities with common data and control fabrics. Deploying an enterprise cloud enables organizations to reclaim a huge chunk of their IT budgets, falling right in line with what Mr. Jevons observed long ago.

 

See Also

What Your In-House IT Department Can Learn from Cloud-Computing. 01/08/2016. Steve Kaplan. bythebell.com.

7 Reasons Why Acropolis is the Next Generation Hypervisor. 12/01/2015. Steve Kaplan. Nutanix.com

April 10, 2016

The ROI of Nutanix Platform Expert (NPX) certification


Image from Google Images

The Nutanix Platform Expert (NPX) certification program is the most rigorous in the IT industry, requiring extensive architectural capabilities validated through written exams, a very detailed web-scale solution design, hands-on lab and design exercises, and a live peer review. And, an NPX needs to be an expert in not just one, but in (at least) two hypervisor platforms.

An NPX certification requires more than multi-hypervisor expertise. There is an intense focus on skills required to migrate businesses from 3-tier to HCI and the ability to incorporate new technologies and operational models, such as containers and DevOps, into web-scale solutions. This is a very specialized set of skills and the NPX program is first to market with a methodology that includes them in an enterprise architect certification.

The NPX is also a very exclusive club. Currently there are only nine NPXs in the world – all but one of whom are also VMware VCDXs or double VCDXs. For context, there are 1,810 billionaires on the planet.

The Need for NPX Certification

The virtualization world is rapidly changing. Both Gartner and IDC indicate that over half of enterprise customers now run two or more hypervisors. Customers increasingly demand multiple hypervisors for business/mission critical applications, development, stage and test environments, big data, VDI and OpenStack among other use cases. This trend will accelerate as hyper-converged infrastructure (HCI) becomes more widely adopted as the primary vehicle for hosting virtual machines.

Nutanix’s next-generation hypervisor, AHV, further shifts the landscape by making virtualization, as is already the case in AWS, a feature of the infrastructure. And the ease of moving VMs back and forth between legacy hypervisors and AHV will further accelerate the view of the hypervisor as datacenter sheet metal.

NPX architects increasingly play a major role in helping organizations transition from three-tier infrastructures (centralized storage + storage network + compute) to Web-scale. They help them determine what hypervisors and venues (i.e. public cloud vs. on-prem.) are the most appropriate for different applications or use cases, and then design and deploy a successful multi-hypervisor / hybrid cloud environment.

The point of NPX is to serve the customer by ensuring a consistently excellent, customer-focused solution design and delivery. The NPX will enter the room at the X-level with the weight of the entire NPX community behind her. Holding NPX gives architects the privilege to address decision-makers directly and to be listened to as a peer.

Is An NPX Certification Worth the Investment?

The NPX program was developed by a team of fourteen Nutanix experts with many decades of solution design and delivery expertise between them. The goal of the program is to produce and certify the best Enterprise Architects in the world, so the bar is set very, very high in terms of skills required to achieve the credential. And that’s as it should be.


The NPX certification is free for qualified candidates, but of course the commitment in time and energy is Herculean. Eight of the nine NPXs are Nutanix employees, but customer Rene Van Den Bedem (also a double VCDX) obtained his certification late last year. Several other partners and customers are currently undergoing NPX certification.

 

See Also:

Nutanix Platform Expert program. Nutanix Web Site.

NPX Certifications (via @bsuhr By The Numbers)

NPX tab. Magnus Andersson. vcdx56.com.

Nutanix Platform Expert (NPX): Why We Built It and Why it Matters. 03/20/2015. Mark Brunstad. Nutanix.com

March 17, 2016

Why Cisco won’t HyperFlex Nutanix

 


Image by Google Images

 

“Obviously Nutanix is very influential among the first movers in this space, but it is almost in the same way that Netscape was influential in the browser and web server space.”

– Todd Brannon, Dir Product Marketing for Cisco UCS (TheNextPlatform 03/01/16)

 

You can’t say that Cisco lacks for confidence. Despite being the last server manufacturer to embrace hyperconvergence, the company didn’t hesitate to take shots at industry leader, Nutanix, while also making brazen claims about its brand new HyperFlex offering:

“Cisco HyperFlex represents true hyperconvergence with a leap in technology that delivers the industry’s first complete end-to-end hyperconverged solution.”

“HyperFlex surpasses first generation hyperconverged solutions, which were severely limited in terms of the performance, flexibility, and operational simplicity.”

“Cisco HyperFlex™ Systems let you unlock the full potential of hyperconvergence and adapt IT to the needs of your workloads.”

“Cisco HyperFlex™ Systems, powered by Intel® Xeon® processors, deliver a new generation of more flexible, scalable, enterprise-class hyperconverged solutions.”

Networking Myopia

According to Cisco, only the UCS fabric interconnect architecture enables all traffic, even from the UCS blade servers, to reach any node in the cluster with a single hop. Perhaps this is why Cisco caps its HyperFlex clusters at eight nodes with only 4 clusters per UCS domain (or 32 nodes as opposed to a non-HX UCS domain of up to 160 nodes).

But Cisco, despite the success of UCS in the datacenter (which itself is driven by networking synergies), remains a networking company. Cisco must continue to sell a whole lot of switching and routing hardware. Even its SDN solution is designed to sell more Nexus switch ports.

The fallacy in Cisco’s argument is plain: if the network is such a barrier to HCI adoption, why has hyperconverged momentum been so extraordinary? Nutanix started shipping in late 2011 and since then, the industry has embraced HCI running on all kinds of hardware and network technologies.

More importantly, Nutanix’s version of HCI does not increase complexity but instead reduces it by adding intelligence to how and when the data touches the wire. Reliability is improved while bandwidth bottlenecks are slashed.

Data locality is one of the unique attributes of Nutanix web-scale. It keeps data close to the VM, containing virtually all network traffic within the server via the local virtual switches. Data locality eliminates the additional HCI stress on the network during steady state – which means most of the time.

The Nutanix software sits inside a VM and is a consumer of the hypervisor networking. It uses the existing virtual switch setup and configuration – all of which can be modified through Nutanix Prism. This can be a VMware Standard Switch, Distributed Switch or even a Cisco Nexus 1000V – and Nutanix does the same on Hyper-V.

The only time data leaves the node is if it is replicated to another node. Since Nutanix only uses one Controller VM (CVM) per host, only a basic 10G network is required. Any modern 10Gbps network already offers more than enough networking resources for a Nutanix cluster.
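As a rough back-of-the-envelope check on that claim, the sketch below estimates how much of a 10GbE link replication traffic would consume; the 200 MB/s sustained write rate and replication factor of 2 are hypothetical assumptions for illustration, not measured Nutanix figures:

```python
# Rough estimate of how much of a 10GbE link replica traffic consumes.
# Hypothetical assumptions: a sustained write rate of 200 MB/s per node and
# replication factor 2 (one extra copy of each write crosses the network).

link_gbps = 10.0
link_mbytes_per_s = link_gbps * 1000 / 8          # ~1250 MB/s ceiling, ignoring protocol overhead

sustained_writes_mbytes_per_s = 200.0             # hypothetical per-node write workload
replica_copies_on_wire = 2 - 1                    # local copy stays on the node (data locality)

network_load = sustained_writes_mbytes_per_s * replica_copies_on_wire
print(f"replication traffic: {network_load:.0f} MB/s "
      f"(~{network_load / link_mbytes_per_s:.0%} of a single 10GbE link)")
```

Even with generous write assumptions, steady-state replication traffic leaves most of a 10GbE link idle.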

In terms of inter-VM communication, Nutanix nodes are no different than a UCS server running a hypervisor. If the hypervisor supports SDN, so does Nutanix. An SDN solution orchestrating networking functions will continue working irrespective of the HCI platform.

The Real Networking Challenge

Cisco says that, “Networking in most hyperconverged environments is an afterthought. With Cisco HyperFlex Systems, it is an integral and essential part of the system.” Central to this assertion is that enterprises still operate largely in silos. When compute or storage additions are required, only Cisco has the capability to seamlessly cross those silos.

Despite Cisco’s rant, that’s just not reality. There is no difficulty in having four, eight or fifteen CVMs communicate with each other. But there is one networking challenge that is very real.

New networking products eventually cause redesign. As an example, the Nexus switch product line displaced the Catalyst switch product line via wholesale network redesign and forklift upgrades. For the most part, however, networking hardware lasts a long time. Organizations are often reluctant to throw away their investments.

Nutanix HCI significantly simplifies the networking design. The virtual switch takes on the role of the traditional top-of-rack switch, and top-of-rack takes on the role of the core. The networks are simple, clean and have predictable traffic flow as they grow.

The networking challenge for organizations considering HCI is whether or not to continue utilizing a complex network architecture along with the IT functional silos supporting it. While Nutanix web-scale certainly doesn’t prohibit this approach, it does inevitably spur discussions about migrating to a much simpler and more efficient operation.

It’s All about Simplicity

Cisco markets HyperFlex as a single hyperconverged offering, but it’s really a collection of three existing solutions: Springpath software, networking, and Cisco UCS. UCS was a very innovative product, as I wrote about in 2009, when it first shipped just over seven years ago. But UCS was designed for a hardware-based world, and that day has passed as customers have made it clear they want a software-defined datacenter.

SDN adoption

A key tenet of the software-defined datacenter is simplicity. UCS is complex to install and time-consuming to upgrade. It includes custom ASICs, FCoE, Service Profiles and Templates, and Nexus switch integration – attributes that were tremendous seven years ago, but which merely add unnecessary complexity today. UCSM is a separate management console that provides more opportunities for bugs and outages.

Is HyperFlex “The Next Generation of HCI”?

Despite the incredible mindshare and momentum of HCI, Cisco appears to consider it only a niche offering. Fortunately for datacenter customers across the globe, its HyperFlex is going to change all that: HyperFlex is “the next generation of hyperconverged infrastructure… Cisco intends to take hyperconvergence mainstream in the Enterprise and accelerate it.”

If this grandiose declaration sounds familiar, it’s because Cisco made similar claims for prior introductions of new products:

ACE:     “Next-generation Application Delivery Solution”

VXI:      “Cisco® Virtualization Experience Infrastructure (VXI) accelerates Mainstream virtual desktop adoption…[and] delivers the next generation virtual workspace.”

Invicta:  “Deploy next-generation integrated infrastructure”

I know very little about Springpath. It might well be fine HCI software, but clearly it is still very immature and untested. Even the Whiptail underpinning of Invicta was far more established.

HCI is much more critical to an organization than either virtualization hosts or storage. If the hyperconverged environment goes down, IT comes to a standstill. As independently validated both by achieving a Net Promoter Score (NPS) of 92 and by winning the Northface Scoreboard Award for World Class Excellence in Customer Service the last three years in a row, Nutanix long ago solved the reliability challenges of HCI.

Cisco’s comparison of Nutanix with Netscape implies that, like Netscape’s fate vis-à-vis Microsoft, the much larger Cisco will trounce Nutanix now that it has entered the HCI space. The analogy falls apart, however, in corporate vision. Netscape remained focused on its browser origins even as the Internet landscape dramatically changed whereas Nutanix has embraced enterprise cloud.

Maybe HyperFlex will end up being, as Cisco claims, the next generation of HCI. But Nutanix has already evolved well beyond hyperconvergence. HCI has become table stakes in the game of enterprise cloud. HCI, hypervisor and network are not end goals but simply checklist items in the requirement for efficiency and simplicity.

Nutanix is not burdened with the complexity of legacy network designs. In conjunction with its rapidly growing ecosystem, Nutanix focuses on enhancing the application lifecycle management experience from deployment to performance to management. This is how Nutanix is increasingly making infrastructure invisible.

Thanks to @evolvingneurons, @NutanixChris, Matt Northam, Thenu Kittappa and @vpai for edits and suggestions.

See Also:

Fight the Fud – Cisco “My VSA is Better than your VSA”. 03/15/2016. Josh Odgers. CloudXC.

And for perspective…

Why Nutanix isn’t Singing the VSPEX Blues. 02/03/2015. Steve Kaplan. ChannelDisrupt.

March 3, 2016

Hyperconvergence as table stakes

Image by Flickr

It’s been an exciting past couple of weeks. First VMware and EMC announced their joint VxRail appliance which is, amazingly, already “industry leading.” And then yesterday Cisco, the last server vendor to fully embrace hyperconverged infrastructure (HCI), announced “HyperFlex: The next generation in complete hyperconvergence.”

But CIOs don’t care about infrastructure – hyperconverged or not. They’re looking for greater ease-of-use and reliability. They want to reduce the operational burden of the traditional datacenter so that their staff can devote their time to advancing IT support of the business rather than fighting fires.

 

The Broken Datacenter

The typical 3-tier (centralized storage + storage network + compute) datacenter is a mishmash of servers, SANs, NAS devices, switch fabrics and management consoles – all administered by silos of expensive IT specialists.

CIOs are no longer willing to tolerate this type of inefficiency. They are demanding the agility, responsiveness and ease of use of cloud – and if they can’t get it internally, they will absorb the higher prices and move their infrastructures to public providers.

Nutanix realized years ago that while its software-defined, shared-nothing HCI architecture was a great start, the real future was in providing an enterprise cloud platform that optimizes the benefits of Cloud whether public or on-premises. Nutanix focuses on making infrastructure invisible: enabling seamless workload operation on whatever hypervisor and in whatever location organizations choose.

1.  Simplicity

Eliminating complexity is the most desired attribute of what Cisco would call a “next generation” HCI platform, and yet it is difficult to deliver. It is hardly simple, for example, when the server firmware upgrade manual runs 15 pages or when half a dozen management tools are required for an enterprise solution.

Operational simplicity including extensive automation is at the foundation of Nutanix technology. A couple of mouse clicks, for example, upgrade all of the nodes including not just the OS, but also the underlying firmware and even the hypervisor – whether vSphere, Hyper-V or Nutanix’s Acropolis Hypervisor. And these upgrades occur non-disruptively in the background on a rolling basis.

All Nutanix nodes, regardless of age or location, are managed (and upgraded) from the single Prism management pane of glass. Built-in analytics provide visibility across the entire stack.

2.  Security

Security is one of the most important criteria to consider when deploying datacenter infrastructure. Is security baked in or bolted on?  Does the manufacturer supply STIGs (Security Technical Implementation Guides) and, if so, are they updated regularly?

Nutanix makes security core to everything we do. Nutanix’s full stack analytics capabilities keep the entire hyperconverged infrastructure platform baseline compliant. Pre-STIGs and automated self-healing take security to a whole new level.

Eric Hammersley, Security Architect at Nutanix, recently wrote:

You should demand more from your providers. Processes that are supportable, quality assured, and not massive time sinks detracting from the actual business of the day.  You pay good money for your products and shouldn’t have to spend a month tweaking and modifying configurations to get it on the network. However, that outcry from the industry is still little more than a whimper, and ownership still falls to the customer. We at Nutanix do not stand for that approach, and neither should you.

At Nutanix our Security Development Lifecycle (SecDL) ensures the product you purchase is intrinsically hardened, derived from a set of Security controls that spans as many processes and certifications as we can find, the NIST 800-53. That’s right, out of the box, already done.

Fully documented and completely transparent in its implementation, so you can verify the derived controls for yourself. That’s the responsible vendor approach, the one you deserve.  Manual hardening, gone. Cyclic and monotonous application of security controls, no more. You can have your day back to do other things.

3.  Virtualization Choice

Virtualization is the primary foundation of modern datacenters. But both IDC and Gartner say that over half of organizations are now running two or more hypervisors. HCI should not lock customers into a single virtualization choice.

Just as Nutanix made storage invisible, it is also making virtualization invisible. Customers can choose vSphere, Hyper-V or Acropolis Hypervisor to optimize the performance and cost of individual workloads. Nutanix even allows automated migration back and forth between ESX and Acropolis Hypervisor, and Hyper-V is on the roadmap.

Compression, deduplication and WAN acceleration have evolved from products to features. The hypervisor is already a non-factor in the public cloud, and Acropolis Hypervisor, which is included with every Nutanix node, is accelerating the end of the standalone hypervisor in the datacenter. Superior economics, simplicity, scalability, security, resiliency and analytics are resulting in tremendous momentum for this next-generation hypervisor.

4.  Cloud Choice

Nutanix includes the Acropolis App Mobility Fabric which enables customers to back up seamlessly to either AWS or to Azure. Spinning up the VMs for disaster recovery purposes is next. Eventual full convergence with private and public clouds will minimize customer costs by seamlessly running elastic workloads in public clouds and predictable workloads on-premises.

5.  Support

Nutanix wields support as a competitive advantage. We boast an industry-leading 92 Net Promoter Score. Nutanix support covers the entire infrastructure stack: Compute, Storage and Virtualization. Nutanix even provides support for business critical applications such as SQL & Exchange.
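For readers unfamiliar with the metric, here is a minimal sketch of how a Net Promoter Score is calculated; the survey responses below are fabricated purely to illustrate why a score in the 90s is so unusual:

```python
# Net Promoter Score: percentage of promoters (9-10) minus percentage of
# detractors (0-6) across 0-10 "how likely are you to recommend us?" responses.
# The response list below is made up for illustration only.

def net_promoter_score(responses):
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return round(100 * (promoters - detractors) / len(responses))

sample = [10, 9, 10, 9, 10, 10, 9, 8, 10, 9, 10, 9, 10, 10, 9, 10, 10, 9, 10, 10]
print(net_promoter_score(sample))  # 95: nearly every respondent must be a promoter
```

A score of 92 therefore means that almost every customer surveyed is an active promoter and virtually none are detractors.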

Nutanix owns the problem regardless of whether the issue is believed to be a 3rd party responsibility or not. Nutanix SREs will arrange and actively participate in multi-way calls between vendors until the customer’s problem is resolved. While Nutanix SREs are highly skilled and experienced, they can also escalate to multiple CCIEs, VCDXs, NPXs and CTPs.

 

A Strategic Approach to Purchasing IT Infrastructure

Due to the scalability and other limitations of 3-tier infrastructure, IT organizations have been used to thinking tactically about purchase decisions – buying compute and storage components separately as required. We see this frequently during our TCO/ROI analyses when customers ask us questions such as, “What is your cost per GB?”, or “What does it cost to buy a Nutanix pod – because that’s how we buy infrastructure?” While perfectly legitimate questions in a 3-tier environment, they make no sense in a web-scale world.

The whole Converged Infrastructure genre, despite lacking any real innovation or, for that matter, any actual infrastructure that is converged, has done well because it helps mitigate the challenges of standing up and supporting/troubleshooting 3-tier infrastructure. But converged infrastructure solutions still require separate tiers of servers and proprietary (hardware-defined) storage, with each tier managed separately.

HCI is barreling toward the chasm crossing. In addition to the new products by Cisco, EMC and VMware, HPE announced an integrated solution with Azure a few months ago. Dell and Lenovo both sell Nutanix-powered HCI appliances.

It behooves all organizations considering their IT futures to now take an honest look at replacing their 3-tier environments with a combination of hyperconverged and public cloud.

 

Thanks to @joshodgers, @cakeis_not_alie, @sudheenair and @JonKohler for edits and suggestions.

February 3, 2016

How to Conquer Jet Lag


Since taking on a new global role last June, I’ve done a lot of international travel.

I enjoy meeting clients all over the world. But I’ve found that it can be difficult to remain composed and alert while I’m hopping time zones.

It’s virtually impossible to be effective at your position when jet lagged. After speaking with other world travelers (especially Venugopal Pai of Nutanix) and sharing some tips, I’ve put together a list that should help you reduce jet lag—if not allow you to sidestep it altogether.

Orient Yourself

Whether you’re flying to Tokyo or London, change your watch to your destination’s time zone the moment you board your plane. Your mind will have additional time to orient to your destination. Don’t wait until you land.

The real pros adjust to the time zone they’re traveling to a day or two before they take off. Obviously, not everyone can afford that luxury. But if you can, try it out.

Bring Your Own Food on the Plane

It’s important to sync up your meals with your destination time zone.

Bring along your own food so you don’t have to rely on airplane serving times. Opt for lighter meals that contain protein, complex carbohydrates, and plant-based foods.

Arrive in Time for Dinner

You’ve been sitting in a metal tube for hours, so you’ll be tired when you land. While often not possible, if you can pick a flight that lands right before dinner, you can get situated, eat a meal, and get to sleep at the same time as the locals.

Exercise Upon Arrival

John Donnelly has been traveling across the globe for more than three decades. He always goes for a run first thing in the morning after he lands.

According to Donnelly, running is his “antidote to jet lag.” While it takes his coworkers a few days to adjust to the local time zone, he does it quickly. Personally, I try to find a local Bikram (hot) yoga class.

Stay Up

Let’s say you’ve flown around the world. If you fall asleep in the afternoon at your hotel and wake up at midnight, how can you expect to feel refreshed the next day?

Instead, force yourself to stay up as long as you can. Sure, you might be borderline delusional before you finally go to bed, but your body will thank you when you wake up the next day—on local schedule.

Take Melatonin

Melatonin is a hormone that helps control your sleep and wake cycles. Supplements of the hormone can help you adjust to your new time zone when you travel. In fact, a 2002 study revealed that melatonin can help travelers who are hopping five or more time zones beat jet lag.

Be careful. You shouldn’t take melatonin on shorter flights. You’ll wake up groggy, which probably won’t help you close your next deal.

Hydrate Early and Often

Dehydration makes us tired. Also, does anything sound worse than flying halfway around the world with a parched mouth?

Make sure you drink a healthy amount of water when you’re traveling internationally.

Wear Eye Shades

Light—and its absence—plays a crucial role in jet lag. Wearing eye shades will help you block out light when you’re trying to catch up on sleep before you land.

If you follow these tips, you should see an increase in productivity while you’re traveling. You’ll also be that much more fun for your friends and family when you land back home.

January 25, 2016

Harsha Hosur, VCDX #135, is the 18th VCDX to join Nutanix

[Author note: This post is part of an ongoing series about VCDXs at Nutanix]


Harsha Hosur VCDX #135 (@Harsha_Hosur) has joined Nutanix as a Senior Consultant in Global Professional Services (GSO). Harsha is based in Melbourne. (Double VCDX Josh Odgers is also based in Melbourne; Nutanix now has over 20% of the VCDXs in Australia).

Harsha previously worked as a Senior Cloud Architect in EMC Cloud Services where he helped develop and implement cloud strategy for large multinational customers. Prior to that, he worked for VCE as part of Cloud Services. At Nutanix, Harsha primarily focuses on Cloud and on virtualization in general, though he is also trying to learn the desktop side of things.

“I’ve been interested in joining Nutanix for over a couple of years,” said Harsha. “The first time I saw the product, I was frankly blown away. It took a while for me to find the right role. But I knew I wanted to be part of the team that would change the world in terms of computing.”

Harsha is going to go for his NPX certification later this year.  “I want to make sure I learn a lot about Acropolis HV (AHV) first,” Harsha said. “I want to have all the tools in my arsenal.”

“While my exposure to AHV has been limited so far,” Harsha said, “I consider it the next-gen hypervisor. I particularly like the lack of built-in points of failure. I feel that Nutanix still has some work to do on the networking features, but engineering is making rapid progress.”

Harsha reports to Paul Harb. Paul says, “Harsha exemplifies the type of consultant we strive to bring into the Global Services Organization (GSO). He has incredible technology expertise and industry experience, he is highly sought after by clients across the globe, and he has a real passion for using Web-scale technology to help organizations transform their capabilities to manage IT.”

“Nutanix excels with its cloud strategy in terms of the simplicity it brings to the table,” Harsha says. “Having worked with competitive technologies, I am all too familiar with the long time it takes to set up a base infrastructure; typically two to three weeks vs. a couple days for Nutanix.”

“Nutanix enables us to spend our time on strategic planning with customers rather than spending time on infrastructure basics. We have a multitude of options including VCAC, OpenStack, Azure Private Cloud, etc. This versatility frees customers from being tied to a particular vendor’s solution and enables them to adopt the best strategy for their environments.

“Providing an array of options for customers is what will change how Cloud is adopted in enterprise environments. This versatility also enables us to provide a hybrid cloud strategy (AWS, Azure, etc.). In contrast, running something like OpenStack on legacy infrastructures requires specific versions that are vendor supported. Since Nutanix Acropolis is based on a KVM standard, customers don’t have to change anything to run OpenStack.”

Harsha says that Nutanix GSO is developing delivery kits, including assessments, that assist customers in their journey to Cloud. This journey can be challenging in that many organizations are not accustomed to accomplishing technology initiatives in a rapid and smooth manner.

“Customers can use public cloud for applications that need to be available to 3rd party users,” said Harsha. “Put these types of applications in the public cloud and provide an API – eliminating the requirement to provide security.”

“Nutanix customers can run all legacy applications on-prem. and then move them to public cloud when desired,” said Harsha. “This provides an advantage of better performance, scalability and overall management. Additionally, customers can lock down the on-prem. environment to limit available resources thereby ensuring the apps will run at least as well on an expanded scale in the public cloud.”

Harsha says that Nutanix Web-scale makes more sense than public cloud for applications that are not customer facing, “With a robust and resilient infrastructure like Nutanix, why would anyone want to put internal-facing apps in public cloud?”

 

January 23, 2016

Hyperconverged players index

The following is a list of all the known (to me) hyperconvergence players (updated 07/04/2016):

1. Atlantis Computing – Atlantis HyperScale, Atlantis USX
2. Breqwatr – All-flash appliance
3. Cisco – Cisco HyperFlex (Springpath OEM). Investment in Stratoscale. Selling arrangements with Maxta & SimpliVity
4. Citrix – Sanbolic
5. Datacore – Datacore Hyper-Converged Virtual SAN
6. DataDirect Networks – DDN SFA14KE
7. Diamanti
8. Dell – Dell XC (Nutanix OEM) & EVO:RAIL
9. EMC – VxRail, VSPEX Blue, ScaleIO & VxRack
10. Fujitsu – EVO:RAIL, Fujitsu Primeflex for VMware VSAN
11. Gridstore – Private cloud in a box
12. Hedvig – Hedvig Distributed Storage Platform
13. HPE – HPE Hyper-Converged Infrastructure based on ProLiant DL380 servers, StoreVirtual & EVO:RAIL
14. Hitachi Data Systems – HDS Unified Compute Platform HC V240, Unified Compute Platform 1000 for VMware EVO:RAIL
15. HTBase – HTVCenter
16. Huawei – FusionCube
17. Idealstor – Idealstor IHS
18. Infrascale – Infrascale Cloud Failover Appliance
19. Lenovo – Lenovo HX (Nutanix OEM) & EVO:RAIL. Selling arrangements with Maxta, SimpliVity and StorMagic
20. Maxta – Hyper-Convergence for OpenStack
21. NetApp – NetApp Integrated VMware EVO:RAIL Solution
22. NIMBOXX – Hyperconverged Infrastructure Solutions
23. NodeWeaver – NodeWeaver Appliance Series
24. Nutanix – Xtreme Computing Platform
25. Oracle – Oracle SuperCluster M7
26. Pivot3 – Enterprise HCI All-Flash Appliance
27. Promise Technology – Promise vSkyCube
28. QNAP – TDS-16489U
29. Riverbed – SteelFusion
30. Rugged Cloud – HCI
31. Scale Computing – HC3
32. SimpliVity – OmniCube (hardware-assisted SDS)
33. Sphere3D – V3 VDI
34. Springpath – Independent IT Infrastructure [Word is Springpath can now only be purchased via Cisco]
35. Starwind – Starwind Hyper-Converged Platform
36. Stratoscale – Symphony
37. StorMagic – SvSAN
38. Supermicro – Supermicro Hyper-Converged Infrastructure Solutions, EVO:RAIL
39. VMware – VxRail, EVO:RAIL, VSAN, EVO:RACK, VSAN Ready Nodes
40. Yottabyte – yStor
41. ZeroStack – ZeroStack Cloud Platform

 

January 8, 2016

What Your In-House IT Department Can Learn From Cloud-Computing

Image by Flickr user stuant63

Although it’s undergone transformations in recent years, the datacenter remains inelegant and unwieldy. For an industry that advocates for agility and efficiency, the traditional datacenter seems archaic.

IT staffs operating on-premises datacenters need to take some lessons from the datacenter’s cutting edge cousin, the cloud. With the exception of non-predictable workloads, utilizing the cloud isn’t about cost savings. It can help you make operations run quickly and smoothly, letting you focus on the most strategic parts of your business.

Here are the best practices that datacenters can pull from their cloud counterparts (while maintaining the lower costs and perks of an on-premises environment).

Technologies

Typical datacenters are a mishmash of equipment and technologies driven by individual departmental requirements. As datacenters evolved, they needed new technologies that made sense at the time, but according to Kevin Lawton, resulted in an “unsustainably complex and expensive proposition.” In addition to creating islands of technology, IT functional teams also became increasingly siloed.

The tangle of technologies makes it difficult to collect and analyze meaningful performance metrics and makes it more challenging to keep firmware up-to-date. It’s arduous to keep product versions upgraded and proper training and certification in place.

On the other hand, cloud computing enables “ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources…that can be rapidly provisioned and released with minimal management effort.” Making datacenters readily available encourages agility and adaptability.

Key to accessibility is utilizing a truly software-defined architecture that eliminates proprietary hardware and instead puts intelligence into the software. Whether local or in datacenters across the globe, administration should be simple and intuitive. Any effort spent on infrastructure is effort wasted.

Agility

Two significant advantages offered by the cloud are speed and agility. Traditional infrastructure creates massive hurdles to achieving such capabilities in on-premises environments. Provisioning new compute instances or storage can take weeks or months.

Public cloud utilizes commodity servers, distributed file systems, and aggregated local storage rather than monolithic SANs. As a result, administrators can provision new applications and thousands of VMs in minutes.

Organizations should seek similar architectural solutions that offer faster and easier deployment along with process streamlining. By encouraging and promoting agility, an in-house datacenter that imitates the cloud can provide a rapid response to its customers.

Security

Public cloud can make compliance more difficult to maintain due to shared pools of infrastructure. A healthcare provider, for example, has patient information coexisting on the same infrastructure as other firms. And a security breach is particularly challenging for public cloud users to mitigate.

A private datacenter allows optimal control of security and data access. The key is to implement infrastructure and virtualization solutions that have security designed into the solutions from the beginning rather than ones that are bolted on. Limiting product boundaries and access points helps ensure an extremely secure environment.

Cost

Public Cloud will often make economic sense for workloads that are non-predictable by nature. If, for example, an organization setting up a new website does not know if it will receive 1,000 hits or 1,000,000 hits a day, then it would be expensive to purchase the infrastructure for the best-case scenario if it fails to materialize. Even Nutanix cannot make infrastructure sized for 1,000,000 hits cost effective when only 1,000 hits materialize.

When workloads require measurable, predictable performance, 3-tier on-premises environments are less expensive than public cloud. Nutanix web-scale infrastructure is a proven alternative that enables even larger cost savings.

The number one benefit enabled by Nutanix web-scale, however, isn’t a reduction in cost. It’s increased business agility. Start delivering IT-as-a-Service to bring a competitive advantage to your business through the simplicity, scalability, and resiliency of public cloud while maintaining the control and security of on-premises IT.

Thanks to @vmmike130 for edits and suggestions.

December 31, 2015

The ROI of VDI – with Citrix and Nutanix

With all of the buzz around VDI for so many years, it’s reasonable to assume VDI owns a good chunk of the corporate desktop market. But Gartner says it only has 4.9% share, with storage as the primary bottleneck.

A Gartner study showed that, on average, storage costs accounted for 40% – 60% of the entire VDI budget. Even worse, 91% of the study participants spent more on storage than they planned.

Nutanix changed the economics of VDI with its introduction of hyperconverged infrastructure (HCI) four years ago. Organizations across the globe have realized both significant savings and a fast payback period by virtualizing desktops. Citrix’s support of Nutanix’s Acropolis Hypervisor makes the numbers more compelling than ever.

The Challenges of Legacy Storage

Monolithic storage arrays don’t lend themselves well to the evolving and dynamic environments typical of VDI deployments:

Storage Arrays Require Monolithic Purchases: SANs must be sized and purchased up-front to have enough capacity to accommodate future expectations. Additional storage shelves can be added up to a point. But once the storage controllers exceed 50% utilization, customers are forced to undergo an expensive upgrade that is risky, complex and time-consuming. Wikibon estimates that the cost to migrate to a new array is 54% of the cost of the array.

VDI Migration Patterns: A migration to VDI typically takes place over a number of years – often in conjunction with an organization’s refresh rate. It commonly starts with one department or a subset of the total user organization and is then expanded across the organization if it’s successful. To meet projected future demand without a forklift upgrade, organizations need to buy large SANs with storage controller and storage shelf capacity that sit unused for years.

VDI Workload Uncertainty: Many different scenarios increase VDI demand more than anticipated, such as implementing a new resource-intensive application, hiring more employees than expected, acquiring another company, etc. If an organization guesses incorrectly and under-buys SAN capacity, it still faces a forklift upgrade. And as the Gartner study referenced earlier shows, most organizations under-buy storage.

SANs Mean a Large “I” in ROI: Even when an organization purchases a large enough SAN to meet future resource requirements, it struggles to show a decent (if any) return on investment because of the up-front cost. As Table 1 shows, the excess capacity begins depreciating on the first day the SAN is installed. Besides the excess capacity expense, there are also rack space, power, and cooling expenses.


 Table 1: Depreciation of the Up-Front Excess Capacity of a SAN
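The dynamic behind Table 1 can be approximated with a simple straight-line depreciation model; the SAN price, utilization ramp, and five-year schedule below are hypothetical illustrations, not figures from the Gartner or Wikibon studies cited above:

```python
# Hypothetical illustration of Table 1: capacity purchased up front but not yet
# used still depreciates from day one.
# Made-up assumptions: a $500,000 SAN on 5-year straight-line depreciation,
# 40% of capacity used in year 1, utilization growing 15 points per year.

san_cost = 500_000
years = 5
annual_depreciation = san_cost / years

utilization = 0.40
for year in range(1, years + 1):
    idle_fraction = 1 - min(utilization, 1.0)
    wasted = annual_depreciation * idle_fraction   # depreciation attributable to idle capacity
    print(f"year {year}: utilization {utilization:.0%}, "
          f"depreciation on idle capacity ${wasted:,.0f}")
    utilization += 0.15
```

Under these assumptions, roughly $150,000 of depreciation is charged against capacity nobody is using, before rack space, power, and cooling are counted.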

Performance Challenges: While SANs, particularly all-flash arrays, have gotten faster, performance is still limited by the physical storage controllers. As the throughput capacity of these controllers becomes saturated, users experience intermittent read storms and write storms. It translates to an inconsistent user experience – and users (and administrators) hate inconsistency. It’s better to give users a 100% mediocre experience than a 95% excellent and 5% mediocre experience.

Datacenter Politics: Traditional SANs create a political challenge – particularly for persistent virtual desktops. Desktop IT staff get frustrated by constantly going to central IT storage administrators to secure the LUN they need or for more IOPS/lower latency. The central storage staff, in turn, don’t understand write requirements for virtual desktops and don’t want to be bothered by VDI requests.

VDI Failure: The SAN issues around a lack of scalability, high initial cost, diminishing/intermittent performance, risk of forklift upgrades and politics make it difficult, as the VDI market share figures indicate, to implement a successful virtual desktop project. While I can’t find corroborating citations, it’s commonly accepted knowledge that most VDI deployments utilizing traditional storage stall out not long after the pilot phase or fail completely.

The Game-Changing Economics of Hyperconvergence

Nutanix HCI provides VDI customers with the exact opposite experience of a SAN. Customers can start small and expand as needed – one node at a time if they’d like. As they add nodes, they’re bringing the latest in CPU, memory, disk, and flash technologies to their environment. The cost per VM and total project cost plummet. The risk of over-buying or under-buying is eliminated.

Additionally, HCI customers benefit from Moore’s Law. Table 2 shows a 5,000 user organization migrating to VDI in conjunction with a 5-year refresh rate. Every year for five years, rather than buying new PCs for 1,000 users, IT either locks down their devices or gives them zero clients.

This organization purchases Nutanix or Dell XC nodes instead of a SAN. Assuming it starts with a pilot, it only needs to purchase three nodes to kick things off. As it expands to the first year’s targeted 1,000 users, eight nodes are purchased in total.


Table 2: The Declining Number of Nutanix Nodes Required Due to Moore’s Law Hardware Enhancements

Moore’s Law, which states that the number of transistors on a processor doubles every eighteen months, has long powered the IT industry. Laptops, the web, iPhone, and cloud computing have been enabled by ever-faster CPUs.

There’s no end in sight for the continued performance benefits of Moore’s Law, even though the performance is achieved in ways Moore wouldn’t have guessed. The Intel Haswell E5-2600 CPUs, for example, show performance gains of 18% – 30% over the Ivy Bridge predecessor.

Moore’s Law means that hardware continues to get faster. If we assume an annual density increase in VMs per node of 25%, the example organization can handle the next 1,000 users with only six more nodes in year 2. By year 5, it only needs three more nodes to handle the last 1,000 users.
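Here is a rough sketch of the arithmetic behind that claim; the starting density of 125 desktops per node is a hypothetical figure chosen so that 1,000 users fit on the eight year-one nodes, and the 25% annual improvement is the assumption stated above:

```python
# Incremental nodes needed each year for 1,000 additional VDI users, assuming
# per-node density improves 25% per year (a Moore's Law proxy).
# The year-1 density of 125 desktops per node is a hypothetical starting point.

users_per_year = 1000
density = 125.0                  # hypothetical desktops per node in year 1

for year in range(1, 6):
    print(f"year {year}: ~{users_per_year / density:.1f} new nodes "
          f"at ~{density:.0f} desktops per node")
    density *= 1.25              # newer nodes host more desktops per node
```

The exact counts in Table 2 depend on the starting density and on how fractional nodes are rounded, but the trend is the same: each yearly tranche of users requires fewer nodes than the last.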

Software-Defined Storage Enabled 1-Click Upgrades

Because SANs use proprietary hardware, the firmware tends to be tightly coupled with the equipment. A new OS version can’t simply be applied to an older SAN, thereby giving it the attributes and capabilities of a new array.

But Nutanix is truly software-defined. Customers can non-disruptively apply the latest Nutanix OS to their existing nodes with a single click, giving their nodes all the capabilities and features of newer models. All nodes, old and new, run the same OS and are managed from a central single-pane-of-glass Prism management console even if the nodes are globally distributed. Because of our software, the older nodes realize increased performance as well.

Nutanix has seen a 5X increase in performance resulting from software improvements alone from 2012 to today. A Nutanix and Citrix partner, Tim Buckholz of Choice Solutions, wrote a blog post describing how he saw around a 50% increase in performance on the same Nutanix nodes when he upgraded from OS 3.1 to 4.1. As another example, Nutanix’s latest OS release included its patented erasure coding which enables customers to realize around a 30% increase in capacity (even on their older Nutanix and Dell XC appliances).

Combining the hardware performance benefits resulting from Moore’s Law with the software-defined one-click upgrade benefits accentuates the advantages of Nutanix’s pay-as-you-grow architecture. Customers require far fewer nodes when expanding their environment. And original nodes, which are refreshed with the latest OS, can be repurposed for tertiary use cases that don’t consume primary datacenter real estate.

Linear Performance

With Nutanix, each node is a virtualized storage controller, reducing or eliminating the read and write storms common in a SAN environment. Combined with bringing the workload next to the CPU (rather than accessing it across a network), the Nutanix HCI architecture translates to a high-performing and consistent user experience. A better user experience leads to enthusiastic and widespread VDI acceptance.

Citrix XenDesktop and Acropolis Hypervisor

The pay-as-you-grow capabilities of Nutanix HCI along with the advantages enabled by Moore’s Law and Software Defined Storage 1-click upgrades mean that organizations can realize significant CapEx and OpEx savings from VDI (and a payback period typically less than one year).

Acropolis Hypervisor (AHV) is a next-generation hypervisor that leverages the software intelligence of the hyperconverged architecture. AHV changes the core building block of the virtualized datacenter from hypervisor to application. It liberates virtualization from specialists – making it simple and manageable by anyone from DevOps teams to Citrix administrators.

Running Citrix XD with Acropolis Hypervisor provides customers with benefits such as simplicity, scalability, security, resiliency, analytics, and support of the entire stack. Table 3 shows an example of XenDesktop and Acropolis ROI results based on an analysis we provided earlier in 2015 for a university. After seeing how much money it could save and all the other benefits from using virtualized desktops, the university expanded the initiative from a small project to encompass 6,000 staff members (3,000 PCs and 3,000 laptops).

 


Table 3: Projected 5-Year Cash Flow Savings from Migrating to Citrix XD on Nutanix Acropolis

CapEx

  • Refreshed Existing PCs: Cost of the PCs + the installation multiplied by the average number of PC upgrades each year by 5 years. For this example:
    • PC Cost (w/o monitor) $750
    • Installation Cost 2 Hr. @$30/hr. loaded cost = $60
    • Ave # Upgrades per Year 48 month refresh rate = 3,000 PCs/4 = 750 PC upgrades on ave. per year
    • ($750 + $60) X 750 X 5 years = $3,037,500
  • Refreshed Existing Laptops: Cost of the Laptops + the installation multiplied by the average number of Laptop upgrades each year by 5 years.
    • Laptop Cost $1,200 (w/o including docking station or monitor)
    • Installation Cost 2 Hr. @$30/hr. loaded cost = $60
    • Ave # Upgrades per Year 36 month refresh rate = 3,000 Laptops/3 = 1,000 laptop upgrades on ave. per year
    • ($1,200 + $60) X 1,000 X 5 years = $6,300,000
  • PC & Laptop Disposal Cost: $25 (university contract) for recycling or otherwise disposing of the old laptops and degaussing the drives.
  • VPN Equipment Eliminated with VDI: Using XD SSL VPN eliminates the need for $40K of VPN equipment per year at a 5 year refresh rate.
  • Nutanix Cost: Five year cost of Nutanix and support
  • XD & 1st Year SA Cost: Five year cost for XD licensing & 1st Year SA
  • Profile Manager: The University is not utilizing a profile manager, although many organizations do use products from manufacturers such as LiquidWare Labs, RES Software, AppSense and others (Unidesk is especially popular among universities).
  • Wyse Zero-Client Cost: PCs and laptops are locked down for the 1st three years and then are replaced with zero-clients (costing $300 each in the example).
  • Cost to Design & Deploy VDI: $80,000 including the Nutanix implementation (a small component)

OpEx

  • Encryption Cost for Physical Laptops: $20 per laptop per year
  • PC & Laptop Power Cost: 80 watts and 50 watts respectively. $.098 per kWH Commercial rate
  • Wyse Power Cost: 8 watts
  • Nutanix Power & Cooling: 1,100 watts per block
  • Eliminated VPN Maintenance: $4,000 per year
  • Staffing Expense: IDC says the cost of staffing virtual desktops, for a typical organization, is 5.2 hours per year – 40% of that for physical desktops. We used this metric along with a loaded IT hourly salary of $30.
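The sketch referenced in the power-cost item above converts the wattage figures into rough annual power cost. The 8-hours-a-day, 260-days-a-year endpoint duty cycle is an illustrative assumption and may differ from the operating hours used in the actual analysis:

```python
# Rough annual power cost from the wattage figures listed above.
# Illustrative assumptions: endpoints run 8 hours/day, 260 days/year;
# Nutanix blocks run 24x7. Rate: $0.098 per kWh, as stated above.

RATE = 0.098                      # dollars per kWh
ENDPOINT_HOURS = 8 * 260          # assumed annual endpoint operating hours
BLOCK_HOURS = 24 * 365            # infrastructure runs continuously

def annual_cost(watts, hours):
    return watts / 1000 * hours * RATE

for name, watts in [("PC", 80), ("Laptop", 50), ("Wyse zero client", 8)]:
    print(f"{name:16s}: ${annual_cost(watts, ENDPOINT_HOURS):6.2f} per device per year")

print(f"Nutanix block   : ${annual_cost(1100, BLOCK_HOURS):,.2f} per block per year")
```

Replacing a locked-down PC with a zero client saves only a modest amount of power per device, but across thousands of endpoints and five years it adds up alongside the larger staffing and refresh savings.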

What’s Not Included

Hypervisor Cost: Because the university is running XenDesktop on Nutanix Acropolis Hypervisor, there is no additional cost for a hypervisor. This integrated stack also makes management of the environment even easier.

Inventory:  The benefits of eliminating the existing PC & laptop inventories.

User Downtime: Savings from reducing user downtime. If IDC-reported downtime savings numbers are utilized, the analysis would reflect an additional $8M of savings.

User productivity: Savings from making users more efficient are difficult to quantify – especially for a university. In other use cases, it can be easier. For example, an attorney with access to her desktop billing program from anywhere may be able to increase cash flow. Nurses that reduce their log-in times through VDI technology can see more patients.

Disaster Recovery: We do not quantify the benefits of enabling anytime/anywhere access for faculty to their virtual desktops.

BYOD: Citrix XenDesktop makes it easy to provide users with their organizational desktop running on any device.

Student VDI: Further savings can be accrued from expanding the environment to encompass learning labs, BYOD, etc.

Table 4 shows another perspective of the ROI results, on a year-by-year cash flow basis – which is typically the way that finance people like to see the results. The $7,876,367 in five-year savings foots to the same number in Table 3. The projections show an ROI of 452% with a 9.4-month payback and an IRR of 106%.

Note that even if staff cost reductions are removed from the calculations, 5-year savings are still projected at over $3M with a payback of under a year.


Table 4: Cash Flow Savings by Year
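For readers who want to reproduce metrics like these from their own numbers, the sketch below shows one common way to derive ROI, simple payback, and IRR from yearly cash flows; the cash-flow figures are hypothetical placeholders, not the values behind Table 4:

```python
# How ROI, simple payback, and IRR can be derived from yearly net cash flows.
# All figures below are hypothetical placeholders, not the Table 4 values.

def simple_payback_months(initial_outlay, yearly_savings):
    """Months until cumulative savings recover the outlay (savings accrue evenly)."""
    remaining = initial_outlay
    for year, saving in enumerate(yearly_savings, start=1):
        if saving >= remaining:
            return 12 * ((year - 1) + remaining / saving)
        remaining -= saving
    return None  # never pays back within the horizon

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-6):
    """Internal rate of return found by bisection on net present value."""
    npv = lambda r: sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if npv(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

outlay = 900_000                                   # hypothetical up-front net cost
savings = [1_100_000, 1_300_000, 1_500_000, 1_700_000, 1_900_000]  # years 1-5

roi = (sum(savings) - outlay) / outlay
print(f"ROI: {roi:.0%}")
print(f"Payback: {simple_payback_months(outlay, savings):.1f} months")
print(f"IRR: {irr([-outlay] + savings):.0%}")
```

The definitions are the standard ones: ROI is net gain divided by the investment, payback is the time until cumulative savings cover the outlay, and IRR is the discount rate at which the net present value of the cash flows is zero.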

Tips for VDI Success with an HCI Architecture

One of VDI’s challenges has been that IT staff assume virtualizing desktops is as easy as virtualizing servers. VDI involves hundreds or thousands of users – each with their own experiences and expectations. Just one disgruntled user, if it’s the right user, can kill a large VDI project.

When a VDI deployment has problems, it tends to affect a large number of users. No matter how well the “mission-critical” application environment is constructed – if the users can’t access the applications, then the organization suffers. A VDI initiative should be treated with the same importance as an enterprise ERP application deployment.

Start with an ROI Analysis

Generating a big picture ROI analysis that includes pain point identification, objectives, and costs is a great start to a successful virtual desktop project. The economic savings enabled by Citrix and Nutanix virtual infrastructure provide incentive for the organization to make the VDI migration a high priority.

Consider Including a User Experience Management Solution

As the ROI analysis almost always shows, Nutanix architecture enables enough savings to warrant an organization “doing VDI right”. Consider using a leading UEM vendor such as Unidesk, LiquidWare Labs, AppSense, or RES Software. Nutanix has partnerships with all of these organizations, and their products are all Nutanix Ready.

Take Advantage of Hyperconverged Infrastructure to Provide a Stellar Pilot

A VDI pilot is one of the most important elements of a successful initiative. If pilot users have a poor experience, they will quickly spread the word. The project stalls or fails. With this in mind, you can understand why VDI has such a low market share.

Conventional storage makes it difficult for organizations to establish a successful pilot and then grow the environment. Organizations want to start off with a small SAN sized for the initial pilot and not invest in a larger unit until they’re sure of the project’s success.

But this approach leads to poor performance for the all-important pilot users. Even if the initial users are happy, the environment grows as peers want their own virtual desktops. Performance degrades and the project stalls.

Nutanix lets organizations start a pilot with as few as three nodes, growing one node at a time as needed. It provides users with a great virtual desktop experience from the start and remains consistent as the environment grows.

Take Advantage of Nutanix Shadow Clones

Nutanix shadow clone technology works well with Citrix MCS and can deliver both persistent and non-persistent desktops. This article by Kees Baggerman and Martijn Bosschaart explains how to effectively use Citrix MCS and PVS on Nutanix.

Remember VDI is Primarily About User Productivity

As impressive as the savings might be from utilizing Citrix XD on Nutanix, increased user productivity is the most important benefit for most organizations. Focus on delivering the best user experience along with improved security, disaster recovery, BYOD, support, and administration. And deploy the best architecture and products possible in order to ensure project success.

 

See Also

The Single Biggest Benefit of Nutanix Hyperconverged Infrastructure (It’s Not What You Think). 12/10/2015. Steve Kaplan. Nutanix web site.

The 7 Reasons Why Acropolis is the Next Generation Hypervisor. 12/01/2015. Steve Kaplan. Nutanix web site.

Nutanix and Citrix: Citrix App Delivery on Nutanix Acropolis. 08/13/2015. Kees Baggerman. MyVirtualVision.

Citrix Mounts Acropolis, Offers Support for Nutanix Citrizenry. 08/13/2015. Chris Mellor. The Register.

Citrix XenDesktop & XenApp Supported on Nutanix Acropolis. 08/12/2014. Andre Leibovici. MyVirtualCloud.net.

Lord of the Token Rings: Nutanix Forms New Alliance with Citrix. 08/12/2015. Adrian Bridgwater. Forbes.

The Secret Bottleneck of VDI. 10/10/2014. Mark Lockwood. Gartner Blog Network.

 

Thanks to Kees Baggerman, CTP (@kbaggerman), Martijn Bosschaart (@mbosschaart), Andrew Mills (@drutanix) and Tim McCallum (@TimMcCallum97) for contributions and editing.

December 4, 2015

How Nutanix Acropolis Hypervisor is Causing Organizations to Rethink Virtualization

I wrote a post a few days ago titled, The 7 Reasons Why Acropolis Hypervisor is the Next Generation Hypervisor. You can read it on Nutanix’s web site here.

ComputerWeekly came out with its take on the subject the next day in a humorous but, I feel, mostly accurate article by Adrian Bridgwater titled, Nutanix: we’re so over legacy hypervisors. I like his description of my blog as a “listicle”.

The article concludes with the advice, “So now you know, don’t get caught wearing last season’s legacy hypervisor out in public.”

 

December 3, 2015

Shridhar Deuskar, VCDX #12, is the 17th VCDX to join Nutanix


Shridhar Deuskar (@IOguru) VCDX #12 joined Nutanix as Senior Manager, Advisory Services. He previously worked at VMware and then at EMC as part of XtremIO (Shridhar was one of the first 5 employees of the U.S. office of XtremIO prior to its acquisition by EMC).

“I see a tremendous opportunity for services at Nutanix,” Shridhar said. “One of the reasons I joined the firm was that I could hit the ground running and make a significant contribution quickly. I wanted to be part of something exciting. I like working in fast-paced environments, and Nutanix is a rocket ship.”

Given Shridhar’s expertise with all-flash storage arrays (AFA), I asked him how the hot technology fits in with Nutanix’s offerings (Nutanix currently offers only one all flash hyperconverged node). While a big proponent of all-flash, Shridhar thinks that customers considering AFA owe it to themselves to evaluate a hyperconverged solution.

“I feel AFA will eventually become mainstream,” Shridhar said. ”But Nutanix’s mixed disk and flash nodes deliver performance that is more than adequate. This performance is on par with AFAs, but much simpler to manage since it avoids multiple components. And Nutanix hyperconverged allows customers to enjoy the same level of capability and user experience as they’d receive in the cloud, but with more control and at a much lower cost”.

Shridhar is a member of the newly created Advisory Services group within Nutanix Global Services (GSO). He reports to Bill Hussain and is responsible for growing services revenues (all of which go through channel partners). Shridhar is working with fellow VCDX Jon Kohler as well as with Dave Elefante.

“Shridhar is one of the first group of VCDXs and has a great reputation within the industry,” said Hussain. “We’re really glad to have his assistance in driving the advisory services component of the GSO. We’re seeing organizations across the globe increasingly migrating from 3-tier infrastructure to web-scale, and the very experienced GSO consultants can help advise them in deploying optimal technology, process and organizational strategies.”

Shridhar is already working on a couple of specific delivery services, including Vblock migration and a Cloud Assessment practice.

“NoSQL databases and some of the modern cloud-based monitoring apps such as Splunk, the Hadoop ecosystem, and other new generations of apps have been purpose-built for a shared-nothing architecture,” Shridhar said. “The Nutanix platform lends itself naturally as the platform of choice for such applications. I am also thinking of coming up with specific services for this new class of applications.”

November 10, 2015

Brian Suhr, VCDX #118, is the 16th VCDX to join Nutanix


Brian Suhr VCDX #118 (@bsuhr) joined Nutanix as a Senior Technical Marketing Engineer. Brian has worked for 20 years in the industry and runs the blog sites, Data Center Zombie and VirtualizeTIPs. Prior to joining Nutanix, he was a Senior Solutions Architect/Technologist at the Office of the CTO at AHEAD.

“After spending 4 ½ years on the channel side,” Brian said, “I was really interested in learning the vendor side of things. I’ve been approached through the years by a lot of vendors, but an important checkbox for me was the requirement to totally believe in the product and to get up every day with a smile on my face believing I was making a positive contribution to the industry. Nutanix definitely checked that box.”

At AHEAD, Brian helped evaluate new products and vendors so that the company could make good bets on which technologies it should focus on.

“If someone hasn’t already bet on Nutanix,” said Brian, “they’re already late to the party. The simplicity is a home run. Everyone talks about not wanting to spend 70% – 80% of their time keeping the lights on. As I looked at legacy solutions and other HCI players, Nutanix is the only company really delivering on the complete simplicity story.”

Brian reports to Kevin Fernandez. He splits his time between enablement, reference architectures, best practices guides, tech notes and product marketing.

“We try to keep tabs on the really talented folks throughout the industry, and we’ve known about Brian for years,” said Kevin. “We’re excited to have him on the team as another ambassador of Nutanix technology and engineering innovation. Brian will help us continue to build and deliver solutions that transform our customers’ IT management experiences.”

Since Brian comes from the channel, I asked him what channel partners should be doing in order to thrive in the new era of cloud.

“If you’re a partner focused on selling widgets and boxes, you’re already far behind,” Brian said. “It’s OK to sell widgets as part of a solution, but the only way to thrive and lead the pack is to sell services. Partners that sell VDI, build orchestration systems and help customers automate their environments will distinguish themselves and thrive.”

When I pointed out that Nutanix eliminates the back-end rack and stack and configuration services that many VARs depend upon, Brian remarked, “Nutanix eliminates the unintelligent back-end services. Nutanix takes only hours rather than weeks to implement, but it allows partners to move the conversation up the stack to the automation/orchestration level. Nutanix hyperconverged provides a great foundation for partners to help customers tie into Dev/Ops and private/public cloud.”

Brian says he is interested in obtaining an NPX certification, but for the next year he wants to focus on learning about Acropolis and all the Nutanix architectural nuances. He also is enjoying the opportunity to work with the rapidly growing Nutanix organization.

September 14, 2015

Ryan Grendhal, VCDX #65, is the 15th VCDX to join Nutanix


Ryan Grendhal VCDX #65 (@rgrendhal) joined Nutanix as a Senior Sales Engineer, North Central. Ryan designs solutions around Nutanix technology that drastically simplify data center operations, allowing Minnesota businesses to focus on innovation. Ryan previously spent 17 years in various roles in IT, the last six as a senior architect at Datalink.

“Nutanix’s vision and execution around simplicity is what attracted me to the company,” said Ryan. “While VMware initially brought a whole new level of simplicity to the datacenter, the architecture has become increasingly complex over the years. I’ve been looking for the next great innovation that would drive simplicity again, and am convinced that Nutanix web-scale is it.”

Prior to joining Nutanix, Ryan specialized in vRealize Automation where he got a lot of first-hand exposure to the difficulty in creating processes to simplify operations. The challenge is amplified when vendors and IT staff forget that the whole purpose for IT is to enable business. This means solving issues as quickly as possible so that the business can move onto other things.

“Products built by infrastructure companies to solve modern application needs won’t cut it,” Ryan said. “These will only work with legacy client/server applications, but cloud apps define the future. Nutanix’s Acropolis hypervisor provides that bridge to the cloud.”

Ryan explains that Nutanix provides invisible infrastructure which addresses modern cloud apps. The simplicity emanates from the Prism interface, 1-click upgrades, VM-centric statistics and analytics-based proactive assumptions. The combination dramatically simplifies IT administration.

“We look for consultants who can bring exceptional value to our customers by thinking outside the box,” said Ajay Aggarwal, Vice President Sales Engineering Americas for Nutanix. “Ryan exemplifies the type of expertise, passion and customer focus that have allowed us to generate such phenomenal customer advocacy.”

Ryan is definitely passionate about technology and the industry in general. Since he comes from the channel, I asked him what channel partners should be doing in order to thrive in the new era of hyperconvergence and cloud.

“Channel partners need to redefine their value by moving up the IT stack,” Ryan said. “Designing RAID groups for IO workloads will no longer cut it. Partners need to be able to architect solutions that provide agility and faster time-to-market for the business.”

Ryan said that enabling business success all ties back to simplicity and to execution. But this goes well beyond technology.

“When I listen to [Nutanix CEO] Dheeraj Pandey talk and I see what our product teams are doing, I realize that simplicity permeates all levels of the company,” Ryan said. “It’s a very exciting place to work.”

July 6, 2015

Zi Seng Yeo (@VCDX180) Joins Nutanix


Zi Seng Yeo (“Jason”. @jasonyzs88) is the 14th VCDX to join Nutanix. Jason is part of the Global Services Organization in Singapore. Prior to starting at Nutanix, Jason spent the last 5 years at VMware where he served as a delivery consultant and, more recently, as an End User Computing Specialist in the VMware APJ Centre of Excellence.

“Jason is one of seven VCDX-DT (Desktop Technology) worldwide,” said Bill Hussain, Sr. Director Consulting Services for Nutanix. “Hiring him was a huge coup as he is a true cornerstone-type resource for consulting in APJ. While he will be focused primarily on consulting delivery and partner enablement in region, we would be remiss to not leverage his thought leadership more broadly worldwide.”

Jason says that one of the biggest challenges to a successful VDI deployment is that organizations simply try to convert convoluted traditional physical desktop deployments to virtual. He puts an emphasis on helping customers get their desktop environments cleaned up and streamlined for VDI.

“Another issue with VDI for larger organizations,” Jason says, “is that they often have teams such as virtualization, storage and desktop who are used to working in silos. For VDI to be successful, it is essential that these teams come together and work collaboratively to enable an efficient deployment.”

This is one of the reasons why Jason likes hyperconverged technologies – because the platform itself helps eliminate the challenges of IT silos by eliminating the requirement for separate storage administration. Segregating VDI traffic to the top-of-rack-switch also minimizes network traffic and contention.

“Hyperconvergence, and especially Nutanix, helps by taking away the complexity of designing the virtualization and storage layers,” Jason said. “There is less to worry about: fewer people, fewer change requests, etc.”

Jason maintains that hyperconvergence also helps eliminate politics of the datacenter. “The desktop team no longer has to pester the central storage IT administrators for more LUNs or IOPs. VDI is a new technology for many storage administrators, and they often don’t understand virtual desktop variables such as high write requirements.”

Prior to joining Nutanix, Jason set up a couple of parallel VDI deployments: One using VMware VSAN and one using Nutanix. “It was interesting to see how things happened at both sites in parallel,” Jason said. “I was amazed at how much easier it was to deploy VDI on Nutanix. The metrics were also great.”

The Nutanix technology was one of the compelling reasons Jason decided to join the company. “Separating the CVM from the kernel helps with upgrades and changes because they are decoupled from the hypervisor,” Jason explained.  “There are simply too many dependencies to worry about if everything is in the kernel.”

While Jason will continue to assist with VDI deployments at Nutanix, he will be involved with all aspects of Acropolis and Prism implementations.  He also says that he is definitely going for his NPX certification. He is not sure which two hypervisors he will focus on yet, but will likely choose vSphere and Acropolis.

I asked Jason if he ever utilized ROI analyses as part of his VDI implementations. He said that he didn’t utilize financial modeling specifically, but that he always does try to understand the business requirements that customers are trying to achieve.  He keeps these objectives in mind while architecting the design and during the deployment.

June 17, 2015

Andrew Nelson (VCDX #33) joins Nutanix


Andrew Nelson (@vmnelson) is the 13th VCDX to join Nutanix. Andrew previously was a Staff Systems Engineer II at the Office of the CTO at VMware. During his years at VMware, Andrew developed a reputation as being one of the foremost experts at virtualizing challenging workloads such as high-performance computing (HPC).

“I’ve never accepted the industry truism that bare metal is preferable for workloads such as Hadoop or HPC,” Andrew said. “It’s a question of figuring out the true value that virtualization adds, and then ensuring that variables such as performance are equivalent.”

Like many of the VCDXs, particularly the ones who have come to Nutanix, Andrew is a bit of an iconoclast. Even while at VMware, he spent a lot of time working with KVM.

“It’s all about perspective,” Andrew said. “KVM includes certain capabilities that vSphere lacked, which in turn helped me understand the big picture. If you lack perspective, then you’re just an evangelist – someone who can only discuss one side of a technological offering.”

Andrew is a Distributed Systems Specialist reporting to Sandeep Randhawa.  Andrew’s responsibilities include assisting customers in understanding both the benefits and best practices of virtualizing challenging workloads.

“We strive to hire, what is well known in the valley as, ‘smart creatives,’” said Sandeep Randhawa. “We are building a team of Specialist (Pre Sales) System Engineers that not only have deep domain knowledge around different hypervisors and the application ecosystems around them, but who also have an exceptional curiosity and the grit to push that expertise to build out enterprise solutions not economically possible with traditional architecture. With talent like Andrew onboard, the possibilities are endless as we go from Act I to Act II and on toward Act III.”

I asked Andrew why he joined Nutanix, and he responded, “I really like Nutanix’s approach to hyperconvergence – a focus on management. I also like that Nutanix is a smaller company where I can have more influence on the work stream and provide feedback that our engineers will utilize as part of their product development.”

Andrew said he is particularly excited to be working with Acropolis. He explained, “Acropolis includes the scale-out and automation capabilities required to ensure that enterprises can now successfully virtualize and manage Big Data, HPC and containers.”

Andrew picked up his VCDX before even joining VMware, when he was the lead virtualization architect for the US Marine Corps. He is planning now to go for his NPX certification, “It will be hard, but will be an opportunity to expand and sharpen my perspective.”

May 5, 2015

Alexander Thoma – @VCDX026 (“The Iron Panelist”) Joins Nutanix


Alexander Thoma is the 12th VCDX to join Nutanix. Alexander was known as the “iron panelist” during his nine years at VMware because he sat through so many VCDX design panels. He oversaw half of all the world’s VCDXs’ design reviews.

Thoma said that not only is pursuing an NPX certification high on his priority list, but that he hopes to also replicate his “iron panelist” achievement as part of an NPX panel.

“It’s fantastic that a second company in the industry understands the value of a high-level architectural certification,” Thoma said.  “A deep technical understanding, requirements understanding and architectural analysis is required for customer success.”

I questioned Thoma about why an NPX certification is important in light of Nutanix’s “uncompromisingly simple” mantra. He responded, “Well, you certainly don’t need an NPX to install a Nutanix block which is very easy.  But while Nutanix slashes complexity, someone still has to design and implement the overall architecture.”

“Application requirements still have to be analyzed, gathered and put into the right context.  To do that requires NPX-level skills for the more complex enterprise solutions.  An NPX is someone who can drive data collection while keeping the big picture in place – making sure all the components work together.

“It is also increasingly important to have expertise in more than one hypervisor. While the features of hypervisors grow ever more comparable, the management deltas are still far apart. Nutanix gives customers the ability to efficiently utilize whatever hypervisor is best for the use case, for example – perhaps ESX for enterprise applications, Hyper-V for Windows Server and KVM for everything else.”

Thoma is a Sr. Solution & Performance Engineer for Nutanix, reporting to Michael Webster (@VCDXNZ001). Thoma’s responsibilities include developing solutions for business applications such as SAP, Oracle, Exchange, and SQL Server. He will also help evaluate new releases and will help with alliances tasks, especially with SAP. Thoma will also be available to help both customers and partners with strategic initiatives.

“The best way to build a winning team,” said Michael Webster, “is to hire the most capable and talented people, and then empower them to be successful. Alexander is recognized around the world as a leading enterprise architect for virtualized and cloud environments, especially with regard to business-critical applications including SAP. At Nutanix, we understand the value that expert-level architecture skill brings to making our platform simple for our customers to use. We are incredibly fortunate to have Alexander on our team as we continue to expand our R&D and Solutions and Performance expertise in enterprise applications around the globe.”

Mark Brunstad (@MarkBrunstad), Manager Curriculum Development of Nutanix nu.school – Educational Services, was formerly the VCDX Program Manager at VMware. He commented, “I’m really happy to be working with Alexander again because he is such an impact player and his skills will be amplified by this exceptional team. The talent we have is really setting Nutanix apart when it comes to the customer experience we can provide. People are talking –  the sky’s the limit.“

“I joined Nutanix,” Thoma said, “because I wanted to be able to make an impact. That requires being in a company with disruptive technology, and one that is not too large.  At a very large company, it’s hard to quantify your impact.  And, of course, the technology Nutanix provides is very interesting and has a lot of potential.

“With well over 1,000 employees, Nutanix certainly isn’t a start-up any longer, but it still has the dynamics of a smaller company. I can speak with anyone from resolution engineers to the Sr. VP of Marketing. People are open-minded and were soliciting my opinions from the first day on the job.

“I’m really, really excited.  It’s only been 4 days and I’m already totally sure it was the right move.”

 

April 26, 2015

My brief advice for applying for a job at Nutanix

A friend of mine once told me, “If I can’t be good, I’ll at least be responsive.”  I too try to be very responsive to customers, prospects, partners and fellow employees.

There was a time at Nutanix when I was also very responsive to job seekers; I certainly respect those who would like to work here. But when I joined Nutanix a little over 2 years ago, we had around 150 employees. Today we have over 1,000.

Nutanix is one of the hottest young companies in the technology sector. We have very high reviews on Glassdoor and in other forums. And because Nutanix is a very selective employer, you can imagine how many job inquiries we must receive in order to grow so quickly.

I personally probably receive an average of 1 – 3 requests each business day – though I just received one this morning (Sunday) which was the inspiration for this post. At the bottom of my LinkedIn profile, I include advice for applying on-line at Nutanix.

But in reality, if I wanted a job in Nutanix sales or channels, I’d come to the table with an opportunity. That’s the way to grab attention. In fact, that’s exactly what I did. I sold my first Nutanix implementation before even accepting a job with the company – and had others in the queue.

Additionally, Nutanix is a very savvy practitioner of social media. I suggest strategic Twitter following and tweets, LinkedIn connections and updates, and – of course, blogging. The industry knowledge, relationships and recognition you accrue will certainly help you achieve your objectives.

March 15, 2015

How to upgrade Nutanix firmware – a very short blog post

Reading Derrek Hennessy’s thorough blog post last week about how to perform a UCS firmware upgrade prompted me to see what Cisco had to say on the subject. It turns out that Cisco has published even more detailed instructions. And here’s an informative post that Kevin Houston wrote last year about lessons learned regarding UCS updates.

I was curious as to what the equivalent process is for Nutanix firmware upgrades — keeping in mind that the Nutanix virtual compute platform includes not only the compute, but also the storage components. Jonathan Kohler (@JonKohler) and Jerome Joseph (@getafix_nutanix) provided the steps and screen shots.

Steps for Upgrading Nutanix Firmware

1 – Log into Nutanix Prism

2 – Select “Upgrade Software”

3 – Download the target code (downloaded automatically from the Internet)

4 – Hit Install

5 – Play 2048 while you wait

6 – Upgrade Complete!

An upgrade of 4 nodes requires 30 minutes, is rolling (one node at a time), and is non-disruptive (no compute or storage disruption during the NOS upgrade). And because Nutanix is truly software-defined, the existing hardware is enhanced with new features, bug fixes and significant performance improvements.
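To make the “rolling and non-disruptive” part concrete, here is a conceptual sketch of the pattern. This is my own illustration of the general approach, not Nutanix’s actual upgrade code or API:

    # Conceptual sketch of a rolling, one-node-at-a-time upgrade (illustrative only;
    # not the actual Nutanix implementation or API).
    def rolling_upgrade(nodes, upgrade_node, cluster_is_healthy):
        """Upgrade each node in turn, proceeding only while the cluster stays healthy."""
        for node in nodes:
            upgrade_node(node)               # apply the new software to this node only
            if not cluster_is_healthy():     # VMs and storage must remain available throughout
                raise RuntimeError(f"Cluster unhealthy after upgrading {node}; halting rollout")

    # With 4 nodes at roughly 7-8 minutes each, a full rollout fits the ~30 minutes cited above.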

Software-Defined Bonus: Hypervisor Patches

In addition to Nutanix Operating System upgrades, organizations can also upgrade their hypervisor, NCC and disk firmware with the “Upgrade Software” link. This process is also extremely simple, utilizing pretty much the same non-disruptive workflow as listed above. Additionally, BIOS, BMC and Ethernet firmware upgrades are tentatively planned for NOS 4.2.

[Screenshots: the Prism “Upgrade Software” workflow]

March 11, 2015

Why Nutanix isn’t singing the VSPEX Blues

[Author note: This article was published originally on Channel Disrupt on 2/3/2015]

Does EMC’s announcement of VSPEX BLUE pose a roadblock to Nutanix’s record-setting momentum?  It’s actually the opposite. Nutanix is not going to revolutionize the $73B server and storage market without a lot of good competitors. And there is no hardware manufacturer more important than EMC to validating hyper-converged infrastructure (HCI) as the future of the virtualized datacenter.

The Clout of EMC

EMC started the whole storage array industry in 1990 with its introduction of Symmetrix. The company continues to dominate with a 30% share of the $23.5B storage market. And it has augmented its storage business with many other very successful acquisitions over the years including VMware, Data Domain, Avamar, RSA and Isilon.

The Hopkinton giant has also done an admirable job in developing channel partner loyalty despite selling directly to certain customers. Partners appreciate both the leads EMC brings them and the help it extends in closing deals. They also like the distinction they earn by acquiring EMC certifications. These certs translate into back-end services revenues for integrating EMC’s complex stable of storage products.

But all is not roses. “The Federation” has stumbled a bit the past few years as its revenue growth rate has declined. EMC recently had to absorb the highly unprofitable VCE partnership, and the company was known to have shopped itself to HP, and possibly others, late last year.

Despite these setbacks, EMC continues to be one of the most influential companies in the datacenter. Customers and partners across the globe take note of its vision and purchase its products. As a recent example, even all of the pain of the disruptive XtremIO upgrade didn’t squelch its title as the fastest-growing EMC product ever (albeit a lot of this growth is likely coming at the expense of declining VMAX sales).

Positioning of VSPEX BLUE

EMC is going to market with an EVO:Rail solution as part of its VSPEX group which now also includes VCE. VSPEX, of course, is a converged infrastructure reference architecture including servers, storage and network while Vblock is a manufacturer-integrated solution. In neither case is there any actual convergence of infrastructure. Customers still face the same extensive rack space requirements, management challenges and scalability issues as when purchasing the products individually.

VMware’s EVO:Rail, on the other hand, is genuinely hyper-converged infrastructure. It includes consolidation of redundant hardware and elimination of multiple management tiers. (As an aside, “hyper” in hyper-convergence stands for “hypervisor”, not for “excessive”. Hyper-converged products only work, at least today, with virtualized workloads).

VSPEX BLUE’s product name and category grouping affirm that EMC considers hyper-convergence to be just another offering in its vast array of storage-oriented products. EMC Chairman, Joe Tucci, reinforced this perspective in his 01/30/2015 earnings call: “Let me add a little color. When our sales force goes in they don’t think about [deciding] what’s declining, what’s growing, what they think about is, what are the customers’ needs and then we have a whole portfolio of products and as you can see, that’s our strength and as we are doing that, you can also note that our gross margins are doing well.”

Nutanix: One Mission

While Nutanix describes its offering as “Web-scale” in reference to the Google-like infrastructure it introduced to the enterprise, the overall industry increasingly recognizes the broad category as “hyper-converged infrastructure”. Nutanix, with a 52% market share, is the clear leader in the hyper-converged space.

[Chart: IDC worldwide hyperconverged systems market share]

Unlike EMC, Nutanix does not consider hyper-converged infrastructure to be a storage line-item. We live, eat and breathe Web-scale as not only a vastly superior platform for hosting a virtualized datacenter, but as the inevitable future.

If you go to Nutanix’s engineering department, you don’t find a lot of ex-storage folks. Instead, engineers from companies such as Google, Facebook and Twitter work to enable massively scalable, very simple and low-cost infrastructures for government and enterprise customers. It’s a completely different mindset.

This same scale-out mindset is pervasive in marketing, finance, channels, operations, HR, professional services, alliances and sales. Sr. VP of Sales, Sudheesh Nair, recently commented in a blog post, “EMC is a $60B company with one of the fiercest and meanest enterprise sales engines ever assembled on the face of the earth (I say this as a compliment with full admiration).”

But as good as EMC’s sales force may be, messaging hyper-convergence as just another approach to a virtualized data center is going to be difficult to convey with the same conviction as Nutanix’s sales folks. Nutanix is focused on revolutionizing the data center – or as our federal team likes to say, #OneMission.

So Who Will Win, Nutanix or EMC?

A big answer to this question, of course, is dependent upon the channel. Channel partners hold a lot of sway over their customers and are instrumental in helping them select the best technology for their requirements.

Fortunately, we’re seeing a rapidly increasing number of channel partners adopt the same type of Web-scale passion as our own sales teams. Partners are realizing that while they may not be able to charge their customers for the same back-end integration services that EMC products enable, they develop a deeper trust and many more higher margin services opportunities in areas such as hybrid and private cloud enablement, big data, Splunk, metro cluster, VDI and so on.

The VSPEX BLUE launch, paradoxically, is going to help Nutanix partners make a huge leap forward. Marketing gurus Al Ries and Jack Trout describe “Law #1: The Law of Leadership” in their book, The 22 Immutable Laws of Marketing. This law states, “The leading brand in any category is almost always the first brand into the prospect’s mind.”

In other words, by promoting VSPEX BLUE, EMC sets the stage for hyper-convergence to win against the real competition – the $73B of servers and storage sold every year. Both Nutanix partners and their customers will win as a result.

See Also

EMC’s VSPEX BLUE Joins the VMware EVO:RAIL Family of Systems. 02/03/2015. Mornay Van Der Walt. VMware Blogs.

EMC’s Joe Tucci on Q4 2014 Results – Earnings Call Transcript. 01/30/2015. Seeking Alpha.

EMC Combines VCE, VSPEX into New $1B-plus Converged Infrastructure Business. 01/28/2015. Joe Kovar. CRN.

IDC MarketScape: Worldwide Hyperconverged Systems. 01/26/2015. Storage Newsletter.

On Classless Winners and Classy Losers. 01/26/2015. Sudheesh Nair. LinkedIn.

EMC said to Explore Options Ahead of CEO’s Retirement. 09/22/2014. Beth Jinks. Bloomberg.

XtremIO Craps on EMC Badge. 09/18/2014. Nigel Poulton. Nigelpoulton.com

December 7, 2014

Meet Nutanix’s 11th VCDX, Richard Arsenian

According to Forbes, there are 1,645 billionaires on the planet. This is 9 times the 184 VCDXs. In other words the VCDX community is a very exclusive group. Obtaining a VCDX certification is an arduous process and only the most exceptional engineers are successful. These folks can pretty much work wherever they’d like.


Last July, I wrote a blog post about our (then) ten VCDXs who all explained why they came to Nutanix. The common theme was that Nutanix Web-scale is changing the industry similar to the way in which VMware changed it last decade – and they want to play a part.

Richard Arsenian (@richardarsenian) is VCDX #126. He joined Nutanix last month after previous roles at VMware and Cisco. Richard works as a Senior Systems Engineer in OEM Sales. Here is Richard’s response when I asked him why he chose to follow his fellow VCDXs to Nutanix:

“As technology experts, we look at new ways to innovate and often look up to those who pioneer the latest technology trends in the industry.

“The decision to join Nutanix was quite simple for me; I was present when VMware introduced the concept of vMotion in 2003, leading to some exciting changes in the datacenter. Fast forward through the exciting technology innovations introduced by virtualization and the effects of virtual machine sprawl, and the traditional 3-tier storage architecture has proven to be inflexible, to introduce inherent complexity and, more importantly, to be responsible for the majority of performance-related problems.

“Various technology vendors have attempted to address the above challenges by architecting “converged” systems or releasing all-flash storage arrays in their product portfolios; however, these ‘band-aid’-like solutions aren’t the answer to the challenges and problems we are seeing in the datacenter today. This is where I noticed Nutanix and began to pay close attention to the company.

“Nutanix’s value proposition was simple – bring the Web-scale technology goodness and operational model employed and pioneered by Google, Facebook and Amazon down to the enterprise in a simple pay-as-you-grow model. I began to investigate further with their customer case studies, and after applying the VCDX scrutiny (Availability, Manageability, Performance, Recoverability, Security) against the Nutanix solution, my eyes began to light up.

“What’s more impressive is how @Josh_Odgers (VCDX #90) and I took my existing VCDX architecture design, comprising the traditional 3-tier server, network and storage architecture, and further simplified it by transitioning to Nutanix’s Web-scale approach. Not to mention the cost savings associated with the operational aspects, eliminating technology/storage silos (goodbye RAID and SAN!) and power/cooling.

“People ask me, ‘what’s the next thing to revolutionize the datacenter’, and  I believe this is it! We’ve been consolidating applications quite well thanks to virtualization – now it’s all about datacenter consolidation and Web-scale. That is the next technology wave!

“I believe Nutanix has the ‘X-Factor’ and it’s fantastic to be part of something that is changing the world.”

Mark Brunstad, who formerly headed up the VMware VCDX program and now manages curriculum development within Nutanix Global Services, had the following comment:

“Nutanix’s technology is unquestionably revolutionary. This is evidenced not only by Nutanix’s exceptional success, but by how industry leaders including Dell, EMC, HP, VMware, Cisco, and NetApp have all recently come out with, or announced, their own hyper-converged solutions.

But what truly makes Nutanix special are the employees. Whether in marketing, engineering, finance, support, sales, alliances, etc., Nutanix strives to hire industry veterans with proven innovative and leadership capabilities. I’ve known Richard since he began his VCDX Journey. He’s an extremely talented Sales Engineer and Solution Architect and someone who’s committed to sharing knowledge with his team. He’s going to be a huge asset as we build our sales and services organizations. We’re certainly thrilled to have him as a member of the Nutanix family.”

Thanks to VCDX by the Numbers compiled by @bsuhr (Brian Suhr. VCDX #118)

November 15, 2014

Dell VDI-bundled XC Series appliances vs. white box + VSAN

Wikibon’s Dave Floyer published an excellent article today featuring an in-depth economic comparison of two approaches to VDI. In the left corner are the new Dell XC Series Web-scale Converged Appliance models targeted at VDI use cases. Floyer refers to these models as Nutanix-powered converged VDI application appliances. And in the right corner are white box servers (such as Supermicro) plus VMware VSAN.

Are SANs for Virtual Desktops Officially Anachronistic?

It is amazing how quickly the datacenter is evolving. In August of 2013, I wrote a Wikibon piece describing why hyper-converged infrastructure enables a positive VDI ROI whereas conventional compute + storage solutions are almost always a bad idea. Back then, Nutanix was really the only hyper-converged game in town.

Fast forward 15 months, and the hyper-converged space is suddenly crowded. Dell, of course, has the XC Series while VMware has VSAN. But VMware also makes VSAN available as EVO:Rail to several hardware manufacturers including EMC, HP, Supermicro, Dell, Inspur, Fujitsu and Net One Systems. HP, meanwhile, has given its Left Hand VSA a makeover – it’s now StoreVirtual. There are new software-only hyper-converged startups such as Maxta and SANbolic. Pure Storage has announced an upcoming hyper-converged solution. Even Cisco is getting into the game with an investment in Stratoscale.

The Wikibon article doesn’t address traditional infrastructure or SANs at all, simply stating that, “delivering software-led infrastructure as a converged appliance significantly reduces IT costs.”  The implication is (at least the way I read it) that SANs are obsolete technology when it comes to virtual desktops.

Three Year TCO Analysis

Dell has not yet published any information about its upcoming XC Series bundling of the compute, storage, hypervisor and VDI broker (what Floyer calls a “single managed entity”). Floyer says that the Dell solution will include either VMware Horizon View or Citrix XenDesktop.

An article in Tom’s IT Pro, Dell XC Series Powered by Nutanix Ready for Web-Scale, says that three of the Dell XC Series models (XC720xd-A5, XC720xd-B5 and XC720xd-B7) are “geared specifically for VDI environments”. The article goes on to say that, “The models…are pre-configured for different types of users and virtualization platforms (Citrix, Microsoft, VMware)”.

Floyer’s Wikibon article digs deep into the economics of the Dell vs. VSAN approach – and I suggest reading it in order to get the complete picture. To summarize, although it costs less money up-front to purchase white boxes + VSAN than the Dell XCs, that delta is quickly offset over a three-year period by operating costs and ongoing software maintenance expense. The Dell “Single Managed Entity” approach is significantly less expensive than white box + VSAN.

Assuming an organization starts with 400 VDI users and increases its user base by 120 desktops a year for two years (rarely do organizations virtualize all of their desktop users up-front), the 3-year TCO for the white box approach is $533,000 vs. only $389,000 for the Dell XC Series solution. This equates to roughly $28 per VM per month for the white box approach vs. $21 per VM per month for the XC Series. Additionally, the Supermicro solution requires 36.8 days to deploy vs. just 6.3 days for the XC Series.
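A quick back-of-the-envelope check of those per-VM numbers (my own arithmetic, assuming the figures are monthly costs averaged over the growing desktop count):

    # Rough check of the per-VM-per-month figures (my arithmetic, not Wikibon's)
    avg_vms = (400 + 520 + 640) / 3      # desktop count averaged over the three years, about 520
    months = 36
    print(round(533_000 / (avg_vms * months), 2))   # ~28.5 -> ~$28/VM-month for white box + VSAN
    print(round(389_000 / (avg_vms * months), 2))   # ~20.8 -> ~$21/VM-month for the Dell XC Series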

The article does not specifically address, let alone attempt to quantify, differences in the two solutions in areas such as resiliency, performance or enterprise capabilities. It does, however, discuss the reduced complexity and lower risk of both deployment and support of the Dell XC Series approach.

Floyer concludes, “Wikibon research demonstrates that converged application appliances, correctly implemented, offer significantly lower costs and faster time-to-value for applications in general. Dell and Nutanix have introduced a specific converged application appliance for VDI desktops, provided by a single source, with single sets of updates, a single hand to shake, and a single throat to choke. Wikibon strongly recommends the Dell & Nutanix converged VDI application appliance be included in any RFP for VDI implementation.”

See Also

Tips for a Successful Virtual SAN (VSAN) Proof of Concept (POC). 11/14/2014. Cormac Hogan. cormachogan.com.

Is VMware Virtual SAN Production Ready Yet? 11/12/2014. Eiad Al-Aqqad. Virtualization Team.

How I Learned to Stop Worrying and Love Commoditization. 11/08/2014. Peter Wagner. Gigaom. [My comments debate the author]

October 23, 2014

Let’s port Oracle and SAP to run inside the hypervisor

Dheeraj Pandey, CEO of Nutanix, just wrote a comment to Nigel Poulton’s blog, VSAN is No Better than a HW Array. The post is good, but I particularly like the comments. Dheeraj does a technical deep dive into why it makes more sense to run storage as a VM above the kernel. He makes the point that, “storage is just a freakin’ app on the server. If we can run databases and ERP systems in a VM, there’s no reason why storage shouldn’t. And if we’re arguing for running storage inside the kernel, let’s port Oracle and SAP to run inside the hypervisor!”

 

Dheeraj was also interviewed yesterday for an article in the Triangle Business Journal about Nutanix’s new Durham office. If you are interested in learning about the origin and philosophy of Nutanix, this is worth a read. It also provides some insight into why Nutanix is playing a big part in shaking up the datacenter status quo.

October 20, 2014

Calculating Infrastructure TCO per VM

Most organizations today are largely or completely virtualized. As their existing 3-tier infrastructure ages, they face a dilemma of whether to refresh their existing servers and SANs or migrate to a web-scale or hyper-converged platform.

The IT leadership often wants to evaluate a TCO (Total Cost of Ownership) analysis comparing the costs of both platforms over the lifecycle of the equipment. In these cases, it is important to factor in variables such as administrative costs, rack space requirements, scalability expectations, etc.

In order to assist IT staffs with their analyses, I have developed a formula that calculates the monthly TCO per virtual machine over an N-year lifecycle:

Read the rest of this article on Wikibon.
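For readers who don’t click through, here is a simplified sketch of the general shape such a calculation takes. This is my own illustration with hypothetical inputs, not the actual formula published in the Wikibon article:

    # Simplified sketch of a monthly per-VM TCO calculation (illustrative only;
    # the actual formula is in the Wikibon article). All inputs are hypothetical.
    def monthly_tco_per_vm(capex, annual_opex, years, avg_vm_count):
        """Average monthly infrastructure cost per virtual machine over the lifecycle."""
        total_cost = capex + annual_opex * years   # hardware/software plus admin, support, power, rack space
        return total_cost / (avg_vm_count * years * 12)

    # Example: $500K up-front, $60K/year to operate, 3-year lifecycle, 520 VMs on average
    print(round(monthly_tco_per_vm(500_000, 60_000, 3, 520), 2))   # ~36.32 per VM per month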

Channel Disrupt – my new blog site

I have started a new blog, Channel Disrupt, focused on the massive changes going on inside and outside of the datacenter, and how they’re impacting channel infrastructure partners. If you are part of, or interact with, the IT channel – I invite you to take a look. All feedback is very appreciated.

Thanks,
Steve

October 8, 2014

How VARs can thrive in the face of cloud

Most channel partners with whom I speak seem to be having banner years – and 2015 looks promising as well. And yet, there is a widespread concern that the traditional infrastructure VAR model is in decline. In order to remain relevant in the wake of increased cloud adoption, solutions providers know they’re going to have to reshape their product-focused businesses.

The Threat from Cloud

Jeff Bezos of Amazon famously told IT hardware manufacturers, “Your margin is my opportunity.” AWS already does billions of dollars in business, and it’s just getting started. Then there’s Azure, Google Cloud, vCloud Air and many others. As organizations increasingly move workloads to the public cloud, it means less infrastructure sales for VARs.

One channel partner sent me the following email:

“This hit me earlier this year when we were competing heavily against EMC in an account with Dell. We did a great job coming in and dealing a knockout blow to them as the incumbent. The IT Director proclaimed that “EMC has been disqualified from the opportunity”, and I was feeling pretty good about closing another six figure storage deal. What I didn’t realize is that the IT Director was thinking much more big picture and while I had won the asset battle, I lost the war on strategy. The organization ended up throwing everything to a major cloud provider in another region and [our firm] & Dell lost all of our footprint in the back office side of the account with only desktops remaining. This customer to me represented a pretty early adopter in this kind of model, but I do believe that traditional refresh cycles are going to be increasingly taken out by this kind of consumption model, and partners lacking a strategy in how to play this game will find themselves losing more and more fights.”

And another partner wrote,

“…we are already seeing storage refreshes become smaller in size, and I have explained to our management that one of the reasons for that is the cloud and the loss of certain workloads, and the second reason is applications are no longer as reliant on a centralized storage as they were before. For example, Exchange used to be a huge workload that enterprises buy centralized storage for, but what we are seeing is this workload is all but lost to the cloud, either Office 365 or a hosted exchange service if Office 365 does not meet all requirements. In the long run, I see datacenter footprint shrinking to about 10% of what it is today for each organization where only the most valuable assets are stored on-premises with everything else in the cloud. Of course this is not going to happen overnight but I definitely think that is where things are headed.”

The Traditional VAR Model

While there are thousands of solutions provider models, the traditional infrastructure VAR model centers on selling and installing hardware and software. These VARs push the server, storage, virtualization and networking products they know and are certified in, and charge customers to implement the products and make them work together.

This “manufacturer rep” approach to datacenter build-out has worked very well in the past as organizations across the globe have transitioned from physical to virtual datacenters. Many VARs have built huge businesses driven largely by SAN and switch fabric implementation. But as increasingly more workloads move to the cloud, the traditional VAR is going to find it hard to survive, let alone thrive. VARs need to transition to consultancies that build their businesses around cloud – with cloud being a verb rather than a noun.

The Cloud Integrator Model

I like the term “cloud integrator” as it is a natural evolution of the “systems integrator” moniker that was widely utilized at the dawn of the reseller business. Cloud integrators will assist their customers in developing, implementing and managing optimal cloud strategies.

A cloud integrator, for example, can help its customers with the following:

• Deciding which cloud providers should handle which workloads.
• Monitoring the cloud providers for price, security, business viability, regulatory compliance, etc.
• Implementing methodologies to move workloads between cloud providers, or the firm’s datacenters, if necessary.
• Determining which workloads should remain in the datacenter.
• Providing cloud-like capabilities for on-premises virtual machines, including instant provisioning, universal access, a services catalog and chargeback/showback.
• Enabling bursting to the cloud when necessary.
• Enabling effective backup and DR/BC strategies.
• Managing everything from a single portal.
• Helping customers manage part or all of their environments.

The good news is that cloud integration has the potential to be a very lucrative recurring business model. The bad news is that it takes a different skill set from what most VARs possess today. Successfully transitioning to a cloud integrator is going to require a massive retooling of the organization. Along with mastering traditional IT requirements such as security and disaster recovery, cloud integrators will be adept at truly understanding the customer’s business challenges in areas such as revenue generation, customer retention, agility and growth.

Cloud integrators need the knowledge to advise their customers on optimal cloud strategies to enable achievement of their business objectives. And they need the skill sets to be able to successfully implement the solutions including measures to ensure broad user adoption.

Transitioning to a Cloud Integrator

Customers are not moving to the cloud overnight, and VARs have some breathing room as well. It’s important, though, that they start encouraging their customers to begin adopting cloud mindsets. For larger organizations in particular, that means adopting the same type of infrastructure benefits that have enabled the leading public cloud providers to operate so efficiently.

VMware’s launch of EVO:Rail and Dell’s OEM deal with Nutanix are indications that three-tier datacenter infrastructure is under siege. The modern virtualized datacenter is going to increasingly embrace the same type of web-scale (i.e. hyper-converged) architecture utilized by all of the leading cloud providers. The much greater scalability, simplicity, resiliency and lower cost make this transformation inevitable.

VARs can start the transition to a cloud integrator model by helping their customers understand the pros and cons of web-scale infrastructure within their own environments. They can then work with the organizations amenable to the concept to design and implement migration strategies.

While the simplicity of web-scale converged infrastructure precludes the days or weeks of back end integration services, it opens up many more opportunities in designing and implementing cloud capabilities such as service catalogs, elasticity, big data, cloud-based DR services, and so on. These higher-margin services can be further augmented with recurring revenue opportunities for cloud management services.

Acknowledgements

I spoke with several channel partners who were very helpful, but I particularly wanted to thank Eli Khnaser and Jenson Isham.

September 2, 2014

Mark Brunstad, former VMware VCDX Program Manager, joins Nutanix

I am very pleased to add my welcome to @MarkBrunstad, who joined Nutanix today as Manager, Curriculum Development for the nu.school Educational Services function within Nutanix Global Services. Mark is well-known in the virtualization industry for building VMware’s vaunted VCDX program. I first met Mark at VMworld 2013 in San Francisco, and I spoke with him over the weekend about his new role at Nutanix:

What is it that drew you to Nutanix?

The VCDXs going to Nutanix really caught my attention. I know just about everyone in the program, and the extraordinary skills they have. When they started flocking to the company, I started digging into it. The technology is amazing and there’s world-class engineering talent driving the products. There’s also a lot of transparency, which is refreshing. The leadership really values experience, innovation, and agility. They will listen to your ideas, and they make it clear that people and their passion are the biggest assets.

Nutanix didn’t recruit me. I applied for a job blind via the Web so there were no futures being discussed before the interviews. When I met Mike Fodor [Nutanix Sr. Director, Education Services] and saw what his team was doing with nu.school, I was totally blown away. The creativity in their approach is going to shake up education and certification in our industry – it’s really going to be as disruptive as the technology. Meeting Dheeraj Pandey [Nutanix CEO] and seeing his commitment to education and certification as success factors for Nutanix and their partners; well that sealed the deal for me.

You said that the VCDXs going to Nutanix, who now number 10, caught your attention. When VCDXs can pretty much work wherever they’d like, why are so many coming here?

To begin with, Nutanix is the biggest sponsor of the VCDX Program outside of VMware. After that, I think the biggest attractions are the work and the technology. The best solution architects in the world want to do the web-scale projects because they’re challenging. Nutanix is a company that recognizes VCDX skills, provides challenging projects, and gives its people a really unique toolbox with which to work. The company has a culture of innovation that’s attractive as well – there’s a willingness to let the experts innovate and that’s a huge draw.

Someone queried me via Twitter wondering why, if Nutanix is so simple, do we need VCDXs?  What’s your opinion?

Nutanix is simple – but the solutions you deploy on top to meet a customer’s business needs are still complex and disruptive. There is also the challenge of designing an effective migration from a legacy 3-tier architecture to a web-scale converged infrastructure. You need a real consulting architect with top-notch design and delivery skills to ensure the solutions will be implemented to perform as advertised. This is exactly the alignment between Education Services and Consulting Services within Nutanix Global Services (GSO). Everybody wants the benefits of cloud, but they’re going to want to touch the VCDX before they cut the check.

What’s in store for the Nutanix Platform Professional certification program?

Nutanix will expand its certification program from the platform professional level (NPP) to an expert level. We want to dive deep into the convergence layer where Nutanix is the leader – we want mastery of the platform that supports real web-scale deployments first. You’ll need to really understand availability, management, application performance, BC/DR, automation, and how that’s supported. From there, we’ll build design and delivery skills for a wide range of solutions. That’s going to mean Microsoft, Citrix, Oracle, OpenStack and whatever else our customers need. I’m hoping to be involved in every aspect of this expanded certification program at Nutanix, as I was with the VCDX Program at VMware.

Nutanix partners with VMware on all but one of their many products and technologies, but the overlap with EVO:RAIL is grabbing a lot of media attention. Do you see the new competition spilling over into expert Nutanix certifications and VCDX?

A lot of VMware partners have high-end certifications that are specific to their platforms and stacks, and they sell solutions from VMware’s competitors too. That’s the nature of the business we’re all in right now. I think an awesome (and growing) VCDX bench is going to continue to be a huge asset to Nutanix. The reality is that it’s a huge asset to VMware as well – but we can do more. I think we’ll focus on delivering something really exceptional and giving our customers excellent choices – then we’ll let them decide where they find the best value.

Any other comments?

I’m excited to go beyond VCDX with this team. The VCDX bar is extremely high, but the achievers are already looking to what’s next; they’re looking for that bigger challenge and an opportunity to grow even more. We’ll give them something to shoot for, and we’ll give the industry a new benchmark. With this company it’s going to be really, really fun!

 

See Also

Farewell VMware! 08/27/2014. Mark Brunstad. TwitLonger.

Why are VCDXs Flocking to Nutanix? 07/28/2014. Steve Kaplan. Nutanix web site.

What is the Value of a VCDX to a VMware Ecosystem Partner?  11/16/2013.  Steve Kaplan.  By The Bell.

August 28, 2014

VMware EVO: RAIL – Impact on Nutanix Partners

VMware announced one of the industry’s worst kept secrets, EVO: RAIL (formerly Project Marvin), a few days ago at VMworld.  Many partners have been asking my opinion of the solution – and are sometimes surprised to hear my enthusiasm. Here’s why I think EVO: RAIL is good for the industry, and good for Nutanix partners:

Validation: With its announcement of EVO: RAIL, VMware has eliminated any doubt that software-defined datacenters will be built with hyper-converged appliances that integrate storage and compute resources. The unveiling of EVO: RAIL, supported by multiple server hardware vendors, resoundingly validates Nutanix’s vision that datacenters will be based on distributed software running on clusters of x86 servers – eliminating the need for traditional storage solutions, which are expensive, complex and slow the business down.

 

“Nutanix is a valuable partner. Their technology has enabled Toyota to further simplify its datacenter infrastructure, improve performance and lower costs. We are discovering transformative benefits through web-scale infrastructure as our Nutanix footprint continues to grow. We look forward to expanding the platform to new workloads in the future.”

                            – Ned Curic, CTO, Toyota Motor Sales, USA

 

The multi-billion dollar infrastructure incumbents are firmly entrenched in the datacenter. They have a massive base of customers and installed products. Datacenters already utilize management and governance processes that make it difficult for customers to migrate to a new platform. The incumbents also work through a very established channel partner network that further cements customer loyalty. EVO: RAIL is going to make it easier for Nutanix partners to disrupt the status quo. And by bringing a superior architecture to their customers’ attention, they will build increased loyalty and trust.

Competition: Nutanix recently announced a $200M run rate. While impressive for a company selling less than three years, it is a long way from Nutanix’s goal of becoming a leader in the $50B server and storage market. Meeting this lofty objective requires that hyper-converged architectures become the standard for virtualized datacenters. This will require competition to grow the market and generate broad customer demand. And, there is no better competitor than the industry’s dominant virtualization company – VMware. The Baird Equity Research report from last week confirms Nutanix as the leader in the hyper-converged category with a 50% market share. While it predated the EVO: RAIL announcement by a few days, it named VMware VSAN as, “the most notable competitor given VMware’s market reach.”  Nutanix partners can now offer customers two viable alternatives to the traditional 3-tier datacenter infrastructure.

Maturity: While both Nutanix and EVO: RAIL provide a software-defined storage solution based on x86 servers, Nutanix has a five-year head start, yielding a solution that is more mature and feature rich. Nutanix now supports core enterprise applications (e.g. Microsoft Exchange, SQL Server, etc.) up to a massive scale. EVO: RAIL, on the other hand, is powered by VMware Virtual SAN, which is currently qualified for tier 2 and tier 3 workloads, along with smaller-scale VDI installations. The Nutanix Virtual Computing Platform provides the following unique functionality:

  • Complete enterprise feature set, including native snapshots, clones, compression, de-duplication, data locality and more
  • Distributed Web-scale architecture providing predictability, scalability and business agility
  • A range of proven appliances that offer different ratios of compute and storage resources to support the widest range of virtual workloads

Support: It takes tremendous effort to make complex solutions simple. This is also the case with hyper-converged appliances. Nutanix takes extraordinary measures to ensure not only a beautifully simple management interface, but also stability and reliability – backed by award-winning, industry-leading support. As an example, Nutanix intentionally announced the strategic relationship with Dell months in advance of shipping the XC Web-Scale Converged Appliance. Nutanix is investing the engineering resources required to ensure that Dell server hardware is optimally configured to drive the best performance and reliability when running Nutanix software. The quality of the joint solution will meet the same high standards as the Nutanix-branded appliances. VMware, unfortunately, does not have this same luxury with EVO: RAIL. It will be dependent upon the quality testing and support of the various hardware manufacturers. VMware may also be challenged to ensure version compatibility, as each EVO: RAIL partner will have its own virtual machine. The most important questions that EVO: RAIL customers and partners must ask are: Will my hardware partner really be able to support the VMware software stack on its own? Why am I calling a server vendor to get help with my hypervisor and storage software?

Flexibility: Lastly, appliances are good – but they shouldn’t restrict choice. VMware vSphere has earned its dominant position in the virtualization industry. Nevertheless, Hyper-V continues to gain significant ground as enterprise customers demand flexibility. Nutanix understands this, which is why we’ve engineered our solution to be hypervisor agnostic. We deliver feature parity across vSphere, Hyper-V and KVM – protecting customer freedom to choose the right technology for their environment.

Co-opetition: While there will be some competitive overlap in the hyper-converged market, Nutanix and VMware will continue to partner in other strategic use cases, including end user computing (EUC), vSphere support, vCAC on Nutanix, and other areas. Channel partners can further grow their businesses by helping their customers navigate the complexities of migrating existing environments to a Web-scale infrastructure. In the process, they can assist their customers in strategizing, designing and implementing both best practices and expanded use cases.

See Also:

Nutanix Raises $140M Series E from Public Market Investors at Over $2B Valuation. 08/27/2014. Nutanix Press Release.

EVO RAIL: Status Quo for Nutanix. 08/26/2014. Dwayne Lessner. IT Blood Pressure.

Nutanix Blows Away Industry Net Promoter Scores. 04/28/2014. Steve Kaplan. Nutanix web site.

August 7, 2014

Nutanix arrives as a mainstream datacenter player

[Embedded tweet from Gunnar Berger]

Gunnar Berger, CTO of Citrix’s Desktop & Applications Group, tweeted yesterday in response to Nutanix’s announcement that it hit a $200M annual run rate. In his typically witty way, he lamented how quickly companies grow up.

While becoming a grown-up company is important, achieving a $200M annual run rate combined with several other events of the past 6 weeks signals something vastly more significant – that Nutanix has arrived as a mainstream datacenter player. Consider:

  • Gartner positioned Nutanix as the furthest in the Integrated Systems Visionary Quadrant for completeness of vision.
  • Dell signed an OEM agreement with Nutanix.
  • TechTarget wrote, “…Nutanix’s entry into the converged infrastructure game brought this technology to the masses, and its products have become a very sore spot for VCE.”
  • Symantec, one of the world’s largest software companies, blogged about its NetBackup solution for Nutanix.
  • Nutanix achieved Epic “Target Platform Status for VDI” (in exceptionally short time).
  • Nutanix signed distribution agreements with industry leaders Avnet and Arrow.
  • Nutanix’s 10th VCDX joined the company (as one of over 600 employees).

Just one of these points of recognition implies tremendous faith that a company’s solution is the real deal.  Achieving all of them indicates that Nutanix web-scale converged infrastructure is beginning to make waves in the $50B server and storage datacenter market.

And as the OEM relationship with Dell makes clear, Nutanix is truly a software-defined manufacturer. This provides Nutanix with the capability to continue innovating in ways that legacy proprietary hardware manufacturers simply cannot touch.

It’s obviously a very exciting time for Nutanix and for our rapidly growing number of customers.  But it’s also an exciting time for our channel partners who have the opportunity to disrupt the datacenter in a manner not seen since VMware first unveiled vMotion 11 years ago.

 

See Also:

Avnet Technology Solutions Selected as New Converged Infrastructure Distributor for Nutanix in U.S. and Canada.  08/06/2014. Press Release.  WSJ.

Who Needs Dell? Nutanix Tripled Revenue Last Quarter. 08/06/2014. Dave Raffo. Storage Soup.

Expectations High for a VMware Converged Infrastructure Product. 08/05/2014. Brian Kirsch. SearchServerVirtualization.

NetBackup: The True Scale-out Backup Solution for VMware vSphere Workloads on Nutanix. 08/01/2014. Abdul Rasheed. Symantec Web Site.

The 10 Most Controversial Companies of 2014 (so far). 07/30/2014. Kevin McLaughlin. CRN.

Why are VCDX’s Flocking to Nutanix?  07/28/2014. Steve Kaplan. Nutanix Web Site.

Arrow Signs Distribution Agreement with Nutanix: Bringing Web-Scale Converged Infrastructure to Resellers. 07/10/2014. News Release.  Arrow Investor Relations.

Nutanix is Standout Visionary in New Gartner Magic Quadrant. 07/10/2014. Exclusive Networks Blog

Dell inks OEM deal with Nutanix to build mutant server, storage, networking beasts. 06/24/2014.  Chris Mellor. The Register.

July 30, 2014

Why are so many VCDXs converging on Nutanix?

I wrote a blog post last November titled, What is the Value of a VCDX to a VMware Ecosystem Partner? Eight months later, Nutanix has gone from four VCDXs to ten. That’s 7% of the 143 VCDXs worldwide. Nutanix has 25% more VCDXs than the next two highest-count companies (EMC & Cisco) combined, and is second in count only to VMware.

The VMware Certified Design Expert (VCDX) certification is by far and away the most elite of all certifications in the IT industry. Written exams, preparation of a solution design that can run hundreds of pages, and having to defend all aspects of this design are part of the grueling certification process.

For those of you who haven’t had the privilege of interacting with a VCDX, I have to tell you – these folks are ridiculously smart! They tend to not only be exceptionally technically astute, but also really plugged into the virtualization industry pulse as well. They are in the enviable position of pretty much being able to work wherever they want. So the obvious question is, why are so many coming to Nutanix? I thought I’d ask:

Jason Langone – VCDX #54 @langoneJ. [Nutanix’s first VCDX, it was Jason who recruited me to Nutanix. I still haven’t forgiven him. :)]

I signed with Nutanix in fall of 2011. I viewed it as a way to take my technical capabilities and apply them to a more valuable part of the stack: the mission-supporting applications. Having been in the field for years, I saw how I and others would get bogged down in the minutiae of vSphere design. Nutanix allowed me to elevate my discussion.

Lane Leverett – VCDX #53.  @wolfbrthr [Nutanix’s 2nd VCDX. I introduced Lane to Nutanix]

After taking a look at Nutanix, my interest was piqued as I could see it as the next major disruptive technology to really hit the datacenter. The technology was sound and very complete in the vision, but it was the culture of the company that really sealed the deal for me in coming to work at Nutanix. There is a spirit of collaboration and a level of talent across all segments of the company that I have not seen anywhere else I’ve been. Just as we are breaking down the silos and barriers in traditional IT (Storage/Servers/Network/Virtualization), our own company breaks down the silos within the company to allow for true cross company collaboration and teamwork.

Josh Odgers – VCDX #90.  @josh_odgers

Having designed and implemented countless solutions over the years, my experience tells me that as long as the customer’s requirements are met, the simpler the solution the better! When taking a deep look into Nutanix, I found a solution that would clearly and simply solve numerous challenges (both Commercial and Technical) for customers. I wanted to be a part of bringing this new style of architecture (Webscale / Scale-out shared nothing) to the market as I strongly believe it will be a major part of datacenters of the future.

Josh’s blog post announcing his joining Nutanix

Michael Webster – VCDX #66   @vcdxnz001

What attracted me to Nutanix was a feeling that enterprise IT infrastructure was broken, but it didn’t have to be so complex to deliver the availability, manageability, security and performance that was required. I saw a great business opportunity to take the simplicity and ease of use of the hyper-converged infrastructure that Nutanix had developed and apply it to business critical apps and help develop use cases around them. This is an area I’d been focusing on for the better part of 10 years.

I didn’t join Nutanix for the money, but rather to change the world for the better, and to do it with a great team and a company that cares about its customers and wants to deliver great products. We do things differently here. Nutanix is a company that understands how enterprise IT really should be, and we’re delivering it every day.

Michael’s post on #webscale

Derek Seaman – VCDX #125  @vDerekS

I fully believe to become better you have to work with people smarter than yourself, and Nutanix is brimming with talent. I was also attracted to the Nutanix solution from both a technical and customer satisfaction point of view. It is elegantly simple and is really disrupting the traditional datacenter. No more Fibre Channel Zoning, or configuring complex shared storage, or performing individual server configuration. Simplicity at its best.

Magnus Andersson – VCDX #56. @magander3 [Magnus was the second double VCDX – one of only 6 in the world].

The reason I joined Nutanix comes down to the amazing software we have developed. I truly believe that our software and strategy focusing on scalability, simplicity and performance will make life easier for the IT industry. I have a split role between the solutions & performance in addition to consulting. This makes it possible for me to contribute to the development of the software while at the same time getting to it.

Magnus’s blog site

Michael Berthiaume – VCDX #130.  @VMmike130 [I’m very fortunate to be able to interact frequently with Mike in his role as SE for the Channels team]

I decided to come to Nutanix primarily because I am a passionate technologist with a pure focus on simple and innovative data center solutions. Our industry is undergoing a major transformation, and IT departments are constantly asked to do more under intense pressure from business leaders looking at low cost cloud-based solutions.

As a modern datacenter technologist, discovering Nutanix was a “Eureka!” moment for me – finally finding a solution that provides the best qualities of both on-premises and cloud solutions. Nutanix ultimately allows IT to refocus on improving agility and rapid application delivery rather than on configuring complex infrastructure and struggling just to keep the ship afloat.

Nutanix represents an overdue “revolution” in IT of which I want to be a part.

Samir Roshan – VCDX #124. @Kooltechies

Nutanix has an incredible market opportunity, a great pool of talent and an amazing culture. For me, this opportunity opens a whole new world as I get to work with the best people in the industry while working with the fastest growing enterprise infrastructure company of the past decade.

I have never previously worked for an organization where each and every person is self-driven and motivated to do something new every day. And yet, the culture within Nutanix is both very humbling and customer centric.

Artur Krzywdzinski – VCDX #77.  @artur_ka

I like challenges and strongly believe that Nutanix is setting the standard for datacenters in the cloud era.

Jonathan Kohler – VCDX #116  @JonKohler

I first became aware of Nutanix in January 2014 through various blog postings and social media from both Nutanix employees and other respected people within the industry. I wanted to see what all the buzz was about, and the more I read, the more intrigued and excited I became about the technology. The phrases “No Way”, “How did they do that”, and “Oh my, that is cool” kept coming up over and over again.

As I went through the interview process, I was also impressed by Nutanix’s ability to attract and retain very talented people at all levels in the organization, not just VCDXs.

Making History

While the VCDXs all have their own reasons for why they joined Nutanix, I think it’s safe to say that they all feel something incredible is taking place at the company.

It’s very rare for a new technology to be so disruptive that it changes the shape of an industry. VMware did it with virtualization in the early 2000s, and now Nutanix is doing it again with web-scale converged infrastructure. It’s exciting and gratifying to contribute to making history.

See Also:

VMware VCDX by the Numbers. 09/27/2013.  Brian Suhr (VCDX #118). www.virtualizetips.com

Meet the VMware Certified Design Experts. VMware Web Site.

 

Meet the VCDXs at VMworld!

Looking for an opportunity to meet (and question) these experts in person? Several of our VCDXs and vExperts will be at the Nutanix booth at VMworld San Francisco (August 25-27) to answer questions, provide guidance, and talk tech. Find out more about our VMworld activities here.


June 28, 2014

Implications of the Dell OEM Relationship for Nutanix Channel Partners

In the technology industry, OEM means that a manufacturer purchases a product at a deep discount with carte blanche to package and sell it as desired. Nutanix, consistent with our corporate DNA for breaking the rules, is also changing the OEM game. We announced last week an OEM relationship whereby Dell is going to manufacture appliances built with Nutanix software. But Dell’s pricing is structured in such a way that the company will not have a price advantage over other partners selling Nutanix appliances.

The Dell OEM Agreement

Most younger manufacturers would likely, if approached by Dell, jump through any hoops necessary to  partner with the Austin goliath. And while Nutanix is both excited about this agreement’s potential and impressed with the professionalism and responsiveness of the Dell teams we’ve met, we also are very appreciative of – and loyal to – our existing partners. These are the folks who have quickly made Nutanix an established brand across the globe.

This is why we structured the Dell OEM agreement in a manner that gives everyone a level playing field. Dell’s pricing is in parity with that of the rest of our channel. Dell will also be subject to the same stringent rules as all Nutanix partners in terms of forecasting and registering opportunities.

The Real Competition?  Datacenter Status Quo

Dell has a broad global reach, which some partners consider to be a competitive threat. The reality, though, is that the web-scale industry in which we’re competing is still at a nascent stage. The real competition for Nutanix partners is not Dell or any other Nutanix reseller – it’s the $50B+ server and storage status quo.

I’ve written some blog posts about partners who have utilized Nutanix to transform their businesses. They proactively describe to their customers how web-scale converged infrastructure enables a more scalable, predictable, resilient and less complex platform for successfully implementing virtualization initiatives. These partners have enjoyed huge increases in sales, margin and in customer satisfaction.

But the entrenched three-tier infrastructure mentality is not easy to overcome. Partners often have to work hard to help customers understand the benefits of web-scale and why it is well worth the pain of transitioning to a new architecture.

When I ran a consultancy in the early days of VMware, we faced very similar challenges with promoting virtualization. Customers were often surprisingly resistant to virtualizing their production environments no matter how compelling a case we made around the economics and other benefits. VMware’s OEM agreement with Dell helped accelerate the acceptance of virtualization which in turn expanded the market for all of VMware’s partners.

Although Nutanix is growing at an unprecedented rate for an infrastructure company, most organizations still do not understand web-scale IT. Our relationship with Dell is an important step toward accelerating enterprise adoption of web-scale converged infrastructure. As was the case with virtualization, this will benefit all Nutanix partners.

Web-Scale Opportunities

The Challenger Sale by Matthew Dixon stresses that the most successful salespeople are the ones who challenge customers to consider different and more beneficial solutions. Web-scale, of course, is an ideal solution with which challenger salespeople can excel. By bringing their customers a superior platform for hosting a virtualized datacenter, they gain increased trust, respect and business.

As web-scale converged infrastructure becomes more widely adopted, the benefits for Nutanix partners extend beyond even the $50B server and storage business. Web-scale lends itself perfectly to use cases such as private and hybrid cloud, big data, superior disaster recovery and remote branch infrastructure simplification.

Nutanix partners are already seeing increased opportunities in all of these areas. And as Nutanix’s industry-leading features continue to expand, partners will be able to wrap more customized management solutions around our web-scale converged infrastructure.

June 14, 2014

10 reasons why web-scale infrastructure will trounce converged infrastructure 1.0

In The Register last week, a VCE spokesperson said that comparing Nutanix to Vblock is like comparing a glove box in one vehicle to an entire car. A more applicable, albeit fictional, automobile analogy would be to contrast an inexpensive Tesla with a Cadillac Escalade for commuting back and forth to work.

VCE certainly hasn’t achieved a $1.8B run rate by selling a bad solution. On the contrary, I think it represents the pinnacle of the three-tier datacenter model. I was an early public proponent of both Vblock and UCS. A couple of my early UCS blog posts still show up in the top two results when searching Google for “UCS vs. Matrix”. And I helped facilitate several publicized Vblock sales while at Presidio.

But back then I didn’t know that Nutanix was already bringing web-scale architecture to the enterprise.  The same reasons that Google and the other leading cloud providers eliminated SAN-based infrastructure for their primary hosting businesses are the reasons that enterprises across the globe are gravitating to web-scale converged infrastructure today.  Just the administration required for SANs alone (not even counting UCS) is tremendously complex and expensive. The lack of scalability, the vulnerability to downtime, the complexity and the high cost for the equipment, rack space, power and cooling are all drivers for enterprise migration to web-scale IT.

Here are the 10 reasons that web-scale converged infrastructure is going to trounce Vblock’s converged infrastructure 1.0:

1)      Frankenblocks are Unnatural Solutions

Frankenblock was the affectionate nickname bestowed upon Vblocks by some early customers, but it’s applicable to all of the legacy storage manufacturers’ converged infrastructure solutions. These technological anomalies have only been able to flourish due to the immaturity of the virtualization industry.

A virtualized datacenter necessitates extensive collaboration among the storage, server and network teams – something that many IT shops, with their stovepipe functional organizational models, are not designed to accommodate. Troubleshooting complex virtualization issues generally requires calls to multiple manufacturers – and finger-pointing commonly ensues. And it can easily take months for organizations to procure the storage, server and network components and get everything working well together in a virtualized environment.

The converged infrastructure 1.0 approach helps mitigate these virtualization challenges to a certain degree, but it is an unwieldy solution. Vblocks, for example, include EMC storage and UCS already pre-racked and cabled, yet the lead time for delivery is still extensive. Although there is one number to call for support, difficult problems still end up in conference calls with the individual manufacturers. Even the ordering process can be complex for channel partners who need to contact three different manufacturers in order to obtain the lowest-cost VCE quote (which still costs significantly more than just purchasing the individual product components).

2)      VCE Claims it “…delivers the industry’s only true converged infrastructure systems”, Yet it Doesn’t Actually Converge Any Infrastructure

VCE coined the term “converged infrastructure”, but every leading storage manufacturer now uses it to describe a prepackaged combination of servers and storage arrays – promising application optimization and a single point of management as part of an integrated stack. The irony is the complete lack of convergence – at least from an infrastructure standpoint.

Wiktionary defines convergence as the merging of distinct devices, technologies or industries into a unified whole. It follows that converged infrastructure should eliminate redundant hardware and consolidate disparate management tiers. The convergence of voice and data networks, for example, eliminated both the hardware and management requirements for separate voice networks (PBXs).

Vblock, FlexPod, PureFlex and all of the other “converged infrastructure” solutions utilize proprietary arrays with Intel based storage controllers that are very similar to the Intel servers they use for compute. There is no elimination of hardware, rack space, power or cooling requirements. And there is no consolidation of management; the arrays still require specialized storage administration.

3)      Hardware-Defined Convergence Doesn’t Work so Well

Let’s use “hardware-defined convergence” as a more applicable moniker for simply packaging physical products together. When I was a kid, it was a foregone conclusion that the car-plane would be ubiquitous by now. But hardware-defined convergence has never proven to be very successful. Converged infrastructure 1.0, whether Vblock or not, will certainly not be the exception to the rule.

[Image: a car-plane]

VCE architects have long recognized this deficiency. A couple of years ago, when he worked at the Office of the CTO for VCE, Steve Chambers wrote a blog post about hyper-converged infrastructure to describe solutions that truly did converge compute and storage.

[Image: Steve Chambers’ blog post on hyper-converged infrastructure]

Although Nutanix is the only “hyper-converged infrastructure” manufacturer providing a distributed file system to bring web-scale IT to the datacenter, many other manufacturers have entered the market. Most significantly, VMware recently introduced VSAN. As the dominant virtualization leader, VMware’s endorsement of software-defined converged infrastructure (also commonly referred to as Server SAN  as well as hyper-converged infrastructure) is a tremendous validation of the architecture. EMC is also jumping into the space with ScaleIO and with its upcoming Project Mystic.

4)      Software-Defined Convergence Whips Hardware-Defined Convergence Every Time

For a newer technology to displace an incumbent, it typically needs to be notably easier to use, significantly less expensive, or both. Blackberry initially dominated the market it created by using software to converge cell phones with PC email functionality. But although Blackberry had a tremendous advantage as first mover, it also had a huge vulnerability – a physical keyboard.

When the iPhone was first announced, Steve Ballmer famously scoffed at both its high price and its lack of a physical keyboard – saying that businesses would never accept it. But both business users and consumers loved the convenience of the software-defined iPhone keyboard. While perhaps not quite as easy to type on as Blackberry’s physical version, the ease of use more than compensates: the iPhone adjusts to reflect the functionality being utilized, whether phone, calculator, MP3 player, camera, email device, etc. The iPhone and other software-defined smartphones quickly made Blackberry largely irrelevant.

Vblock’s packaging of UCS servers and arrays has captured a lot of sales to organizations hurting from the pain of datacenter virtualization. But software-defined alternatives will inevitably win the day.

5)      Centralized Storage is Anachronistic in the Modern, Virtualized Datacenter

When VMware announced vMotion in 2003, the vCenter 1.0 Users Manual included a bullet point on page 37 stating, “The hosts must share a Storage Area Network (SAN).” This requirement changed the face of the enterprise datacenter for the next ten years as organizations around the globe purchased arrays in order to run ESX and vMotion.

But even with today’s modernized SAN architecture, centralized storage is still not a good fit for a virtualized datacenter. Traditional architecture separates the flash and disk from the compute (where they belong) and sticks them in proprietary arrays at the end of the network, where they’re subject to performance degradation from network hops and latency. Even all-flash arrays still suffer from network bottlenecks.

Utilizing physical storage controllers furthermore makes it challenging to ensure adequate IOPS for the many different types of virtual workloads.  This further contributes to I/O related issues such as the “blender effect”.

[Image: the I/O blender effect]

Additionally, physical LUN devices – created, masked and zoned through complex infrastructure tasks – have no awareness of the virtual machines running on them, making it impossible to define granular, per-VM policies such as compression, deduplication, data protection and replication.

6)      Vblock Compounds the Problem of Poor Storage Array Scalability

The inability for storage arrays to easily and inexpensively scale is one of the biggest detriments to a virtualized datacenter. This is particularly true when uncertainty exists as to the ultimate resource requirements such as with virtual desktops and private cloud. The large initial investment required for a storage array capable of handling projected future requirements is often enough to discourage organizations from moving forward with a VDI or private cloud initiative.

Vblocks can compound this problem by requiring staircase purchases of not only storage, but also UCS chassis and Nexus switches. This tends to be particularly problematic for private cloud initiatives in organizations relying upon project-based budgeting for funding their IT infrastructures. It also makes chargeback more difficult to implement due to the complexity of allocating costs of large blocks of storage, compute and networking in a meaningful manner.

7)      Even Cisco UCS Has its Limits

UCS was the largest investment Cisco had ever made when it undertook development of the product 10 years ago. UCS was built under the direction of Ed Bugnion, VMware’s co-founder and CTO, and it was the only product developed by any of the datacenter leaders specifically for hosting a virtualized datacenter. When UCS debuted a little over five years ago, it incorporated some remarkable innovations such as  FCoE (Fibre Channel Over Ethernet), hypervisor bypass, extended memory and a GUI that can help the server and storage teams collaborate more effectively.

Despite widespread sentiment that Cisco didn’t know anything about the server business and would fail miserably, UCS went on to become the top-selling blade server in the Americas. But UCS has an enormous disadvantage. It only addresses a small part of the virtualized datacenter issues – the compute. By far the majority of issues have to do with storage. Not surprisingly, along with VCE/EMC, three other datacenter storage manufacturers – NetApp, Hitachi and Nimble – all incorporate UCS for the compute portion of their converged infrastructure solutions.

8)      Web-Scale has Already Won in the Cloud Provider Space

Years before VMware transformed the enterprise space, the Internet giants were already consuming large quantities of SANs and NAS. When Google came on the scene, the co-founders knew that they would need to handle billions of users and trillions of objects. They wanted a solution that would be much more economical, efficient, resilient and scalable than shared storage.

Google consequently took a scientific approach to rethinking datacenter infrastructure. Indeed, the company hired a team of scientists who developed the Google File System along with MapReduce and NoSQL. Instead of using storage arrays, Google runs hundreds of thousands of commodity servers utilizing GFS to aggregate the local storage.
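To make the MapReduce part of that story concrete, here is a minimal, single-machine sketch of the map/shuffle/reduce pattern in Python. It is purely illustrative (a toy word count under my own assumptions, not Google’s implementation), but it shows why the model spreads so naturally across large clusters of commodity servers: the map and reduce steps are independent of one another and can run anywhere.

    from collections import defaultdict

    def map_phase(text):
        # Emit (key, value) pairs; here, one (word, 1) pair per word.
        for word in text.lower().split():
            yield word, 1

    def shuffle(pairs):
        # Group values by key, as the framework does between map and reduce.
        grouped = defaultdict(list)
        for key, value in pairs:
            grouped[key].append(value)
        return grouped

    def reduce_phase(key, values):
        # Combine the values for one key; here, sum the counts.
        return key, sum(values)

    docs = ["web scale storage", "web scale compute"]
    pairs = (pair for text in docs for pair in map_phase(text))
    counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
    print(counts)  # {'web': 2, 'scale': 2, 'storage': 1, 'compute': 1}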

Google published papers on its GFS-based architecture in 2003, and today no leading cloud provider uses arrays for its primary hosting business. Web-scale IT – commodity servers, local storage and some variation of a distributed file system – has become the de facto standard in this very demanding space.

The low cost, resiliency, simplicity and scalability of the web-scale architecture ensure that it will also become the standard of the modern (virtualized) datacenter.

9)      Web-Scale Converged Infrastructure is Proving to be Even More Effective in the Enterprise

“50 per cent of global enterprises will be taking an architectural approach to web-scale IT by 2017”

–  Gartner 03/10/2014

Although most enterprise, and even government, datacenters are not as large as those of the leading cloud providers, they often receive even greater benefit from Web-Scale converged infrastructure.  This is due to their much greater number of applications, the majority of which tend to be off-the-shelf.

Conventional three-tier architecture is infrastructure-centric rather than VM-centric. The storage subsystem has no virtual machine awareness and no insight into the number or configuration of the disparate workloads residing on it. Administrators are forced to use complex data analysis techniques in an attempt to mitigate the “bully/victim” and “noisy neighbor” effects among virtual machines, along with mysterious application sluggishness and potential service disruption.

Nutanix Web-Scale Converged Infrastructure, on the other hand, is VM-aware. The architecture utilizes Virtual Machines, rather than LUNs, as the primary building block of the datacenter. Policies traditionally defined at the storage pool or LUN level can now be applied at the individual virtual machine level – where they make sense, resolving the application and service disruption issues and enabling both analytics and replication capabilities at a very granular virtual machine level.
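To illustrate the difference in plain terms, using invented data structures rather than Nutanix’s actual interface or API, contrast a policy attached to a shared LUN with policies attached to individual virtual machines:

    # Hypothetical structures for illustration only; not a real product API.

    # LUN-centric: one policy covers every VM that happens to live on the LUN,
    # whether or not the setting makes sense for that workload.
    lun_policy = {
        "lun": "LUN_07",
        "vms_on_lun": ["exchange01", "sql01", "vdi-pool"],  # mixed workloads
        "compression": True,
        "replication": "async-4h",
    }

    # VM-centric: each virtual machine carries the policy suited to its workload.
    vm_policies = {
        "exchange01": {"compression": False, "replication": "sync"},
        "sql01":      {"compression": True,  "replication": "async-1h"},
        "vdi-pool":   {"compression": True,  "replication": None},
    }

    def policy_for(vm_name):
        # Lookup is by virtual machine, not by the LUN it happens to sit on.
        return vm_policies[vm_name]

    print(policy_for("sql01"))  # {'compression': True, 'replication': 'async-1h'}

The point of the sketch is simply that the unit of policy moves from the storage construct to the workload itself.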

10)   Web-Scale Converged Infrastructure Delivers what VCE Promises

[Image: VCE Vblock advertisement]

Simplicity is just one of the claims commonly made by VCE that Nutanix actually fulfills. Another is time-to-deployment. VCE boasts that it takes only 40 days to procure and stand up a Vblock. Nutanix installs in hours, and can be procured in less than a week. And not only is Nutanix much less expensive to procure and scale, but it is also much less expensive and complex to operate.

Web-Scale also enables a vastly less complex and more elegant approach to hosting virtualized infrastructure than packaging servers, even UCS servers, with storage arrays. For example, since the virtualization administrators manage the entire environment, there is no need for collaboration between server and storage teams.

Enterprises and governments around the world are embracing Web-Scale IT. Nutanix is now the fastest-growing infrastructure company of the past three decades.

To learn more about web-scale, check out Web-Scale Wednesday, the industry-wide online event that Nutanix is hosting live on June 25, featuring speakers from Facebook, Twitter, Nutanix, Citrix, Dell, Wikibon, The Register and more.

Thanks to Michael Berthiaume, VCDX, and Jerome Cheng for their contributions to this article. 

 

April 28, 2014

Nutanix blows away industry Net Promoter Scores

No matter how disruptive a new technology may be, if the manufacturer doesn’t provide adequate support, the company isn’t going anywhere. Nutanix strives to back our innovative web-scale IT infrastructure with world-class customer support. The efforts have paid off as Nutanix recently received both a NorthFace ScoreBoard Award and a Net Promoter Score that is off the charts for the IT industry.

World-Class Excellence in Customer Service

Omega Management Group recognized 35 companies for “Delivering World-Class Customer Service” in 2013. Nutanix was honored to be included in this elite group of companies. Nutanix is the only datacenter infrastructure company on the list. The award is based solely upon customer ratings.

[Image: Omega NorthFace ScoreBoard Award]

Net Promoter Score

A high Net Promoter Score is notoriously difficult to obtain. Apple, for example, despite its reputation for sterling customer service, received a +42 last year. The dominant virtualization player, VMware, received a +48. Dell achieved a very respectable +37. EMC garnered a +28 and NetApp a +25. And Nutanix? A whopping +73 (and it’s now over 80). The next highest on this list of 60 tech vendors was Intel with +53.
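For readers unfamiliar with the metric: NPS is calculated from a 0-10 “how likely are you to recommend us?” survey as the percentage of promoters (scores of 9 or 10) minus the percentage of detractors (scores of 0 through 6). A quick sketch with made-up responses:

    def net_promoter_score(ratings):
        # ratings: 0-10 answers to "How likely are you to recommend us?"
        promoters = sum(1 for r in ratings if r >= 9)
        detractors = sum(1 for r in ratings if r <= 6)
        return round(100 * (promoters - detractors) / len(ratings))

    sample = [10, 9, 9, 10, 8, 9, 10, 7, 10, 9, 6, 10]  # hypothetical survey data
    print(net_promoter_score(sample))  # 9 promoters, 1 detractor, 12 total -> 67

A score above +70 therefore means that nearly every customer surveyed is an active promoter.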

“We have experienced extremely responsive support and quick problem resolution from Nutanix. As far as the IT industry is concerned, this level of support is extremely remarkable.”

                            – Gregory Padak, Lead Software Configuration Manager, InComm

Social Media

Nutanix SVP of Operations David Sangster points out that 80% of customer-service-related tweets are negative or critical. This contrasts with a survey showing that 80% of big companies claim that they deliver “superior” customer service. But as Sangster also points out, a New York Times survey reported that customers rated only 8% of companies as delivering a superior level of customer service. Nutanix consistently receives strong social media validation of its technology and support.

[Embedded tweet from Brian Suhr]

Nutanix Next

On March 11, 2014, Nutanix launched a new online community, Nutanix Next. Nutanix Next provides a forum for partners and customers to interact directly with members of the growing Nutanix community – not only engineers and developers, but also other customers, partners and industry experts.

April 14, 2014

Nutanix Now – Recap of our First Partner Conference

“The most exciting aspect about Nutanix has to be their passion and enthusiasm. Their approach is refreshing, compelling and as an organization, they’re willing to listen.”            Kevin Kaiser – CDW

After representing six different channel partner businesses at countless partner conferences during my 25 years in the channel, it was a bit surreal being on the other side of the fence at our first partner conference last week. But it was great fun to see the ubiquitous excitement and enthusiasm among our partners for the web-scale revolution Nutanix is bringing to the enterprise datacenter.

Collaboration

“The energy from the Nutanix team was contagious and exciting as they grow a company with a truly game changing technology.”    Kent Christensen – Datalink

Nutanix Now was held at the Denver Grand Hyatt. Attendees came from across North America, representing partner organizations ranging in size from small boutique firms to the largest players on the planet. The format consisted of a cocktail party the first night followed by a day of presentations and panels including both business and technical break-out sessions. An Awards Dinner was held that night, while our inaugural Partner Advisory Council took place the next morning.

While the conference was quite elaborate for a company as young as Nutanix, it was still small enough to engender a sense of intimacy. Both partner and Nutanix attendees networked extensively and freely shared ideas. This was particularly evident at the Partner Advisory Council where the 12 PAC members requested the membership list be distributed in order for them to stay in contact with each other.

Transparency

“The openness and humility of the entire Nutanix organization was very apparent throughout the conference, especially in their willingness to accept feedback on their partner programs and to incorporate that feedback into shaping the programs going forward.”      Tim Lewis – Itex

We were extremely pleased with the feedback we received from the attendees – even before we surprised them with a GoPro as a parting gift.

One of Nutanix’s guiding principles is transparency, and the content and dialogue of the conference supported this philosophy. We heard many partners comment on how refreshing and different this approach is from that of the traditional datacenter incumbents. They also gave positive feedback regarding the approachability of Nutanix executives.

The attendees appreciated the one day format that packed in lots of information, but which was kept interesting by passionate presenters. Partners told us that Nutanix Now was “raw and real” vs. “staged and canned” which is how they described most other similar partner events. Several attendees told us it was the best partner conference they’d ever attended.

[Image: Dr. Robert Gates delivering the keynote]

 

Going All In

“Attending Nutanix Now, I got to see a company with visionary leadership, a mature approach to the channel, and strong credibility in an area of cutting-edge technology.”    Matt Wagner – Logicalis

The theme of the event was “Now”. Now is the time to seize the moment, to make the decision, to go all in. A keynote by former Secretary of Defense Dr. Robert Gates (see the CRN article coverage) underscored the importance of having conviction in one’s decisions even when they differ from multiple expert opinions. Whether launching down a huge ski slope, jumping out of a plane or going all in with Nutanix – there is a point at which a decision and commitment must be made. The GoPro gift symbolized the spirit of capturing the moment.

Other event highlights included a product roadmap of upcoming Nutanix technologies, the launch of Nutanix services for partners, the unveiling of a very simple ROI app for partner reps (CRN coverage), a presentation on how Nutanix partners can make money in the cloud, and case studies and videos of Nutanix partners who have gone “all in”.

A panel composed of Nutanix’s sales leaders showcased how integrated our Go-to-Market plans are between our sales reps and channel partners. And because Nutanix sells 100% through the channel, the typical concerns about direct sales were absent from both the Conference and the subsequent PAC meeting.

Partner Awards

“I found Nutanix Now to be an extremely relevant, informative, and productive conference. Additionally, the speakers that presented the material were incredibly talented SMEs and provided ample time at the end of each session for audience participation and open discussion. Nutanix is a positively disruptive solution!!”                Deborah Bannworth – Sirius Computer Solutions

At the Awards dinner following the Wednesday meetings, five partner organizations and two individuals received special recognition:

  • Outstanding Momentum – Sirius Computer Solutions
  • Outstanding Innovation – Logicalis
  • Partner of the Year (Commercial) – CDW
  • Partner of the Year (Federal) – DH Technologies
  • Partner of the Year (Canada) – Zycom Technology
  • “NVP” (Sales) – Larry Gross of TIG
  • “NVP” (Technical) – Steve Dowling of Sirius Computer Solutions

Several other partners including TIG, Meridian IT, CDWG, Progressive Communications and NovaTech, to name just a few, told us that they intended to be up on stage at next year’s conference.

Uncertainty Means Opportunity

“We severely underestimated the value of Nutanix Now. I wish more of our sales team were at the event. The value of the Nutanix roadmap over the next 6-8 months will be as game-changing as the last 18 months. I’m looking forward to the public announcement on the roadmap so we can see how the traditional datacenter infrastructure vendors react.”   Devin Henderson – DH Technologies

I used to attend Novell Platinum Partner conferences years ago where CEO Ray Noorda, besides coining the term ‘coopetition’, used to say, “In mystery, there’s margin.” I think that today a more appropriate message might be, “Uncertainty means opportunity.” Many CIOs are confused about what type of cloud strategy to embrace, where their various virtual workloads should reside, and how best to manage their increasingly hybrid environments. Nutanix provides partners with a platform for addressing these concerns, and in so doing enhancing their own long-term success.

While naturally I’m quite biased, I consider our first channel conference a resounding success. Beyond the glowing feedback, there was a level of energy that seemed almost palpable. Of course, in typical Nutanix fashion, our Senior Director of Channel Marketing, Michele Taylor-Smith, rounded up all of the Nutanix Now organizers and participants for a post mortem the day after the event to see how we can improve. Our next conference is in Marbella, Spain on May 20 and 21, followed by another in Phuket, Thailand on June 24 – 26.

The Resonating “Now” Theme

“I’ve had the privilege of serving on the initial PACs for disruptive IT manufacturers such as Citrix, and VMware. I believe Nutanix is in that same class. We see Nutanix as a true game-changer in the datacenter and a company that is two plus years ahead of the market on execution with no plans to slow up on delivering their future vision.”                Jim Steinlage – Choice Solutions, LLC

I’ve previously written a couple of blog posts in a series on Nutanix transformation. Nutanix is not only increasingly transforming datacenters across the globe, but also partner businesses. More posts in the series are coming.

Our conference was a great opportunity for me to hear in person about the impact of Nutanix web-scale on partner businesses. Nutanix provides partners with the phenomenal and rare opportunity to disrupt the $20B – $40B server and storage market. Nutanix enables them to stand far apart from the crowd while bringing their customers a clearly superior solution.

The partners that are capitalizing on this opportunity are seeing higher sales, more margin and more opportunities – and are tending to have a lot of fun in the process. An increasing number of our partners are catching the same passion that is so pervasive among Nutanix employees. Their faith in, and commitment to, Nutanix is an essential part of our success in the market and of our status as the fastest-growing infrastructure company of the past three decades. Now is the time to kick our growth into the next gear, and our partners recognize that the time to go all in with Nutanix is Now.

“Nutanix Summit was a fantastic event. It is hard to believe that was Nutanix’s first summit – everything was done extremely well.  It really showcased Nutanix’s position today, and how the company plans to maintain that lead on the competition.  We also gained a solid understanding of your culture.”     Jennifer Keating – CDWG

“The Nutanix Now conference was a great collaborative experience; I left feeling valued. Having access and building one-on-one relationships with the senior executives at Nutanix helped to strengthen our partnership. The ideas shared were great and will help my organization thrive in 3rd Platform with Nutanix as a strategic partner in VDI and Private Cloud infrastructure.”      Terry Buchanan – Zycom Technology

“Seeing other partners also successfully transforming their businesses with Nutanix helped get the competitive juices flowing.  And we’re elevating our overall SDDC/Web-scale messaging.”      Linda Brock –Progressive Communications

April 4, 2014

My new whitepaper: Virtual Desktop Infrastructure & Healthcare

My latest whitepaper (written with the assistance of Nutanix’s healthcare expert, George Fiffick) is now available on Nutanix’s web site.

The whitepaper discusses the difference between VDI & SBC, specific VDI benefits for healthcare, and enabling ROI from VDI with Nutanix. It also expounds upon why it’s essential to take a strategic approach to VDI.


February 24, 2014

All in with Nutanix: DH Technologies

Devin Henderson started DH Technologies with 12 employees on May 1, 2013, after breaking off from a 3-year-old solutions provider in which he was a partner. His new firm began selling Nutanix on the first day, and received a PO within the first week. DH Technologies went on to sell 22 blocks within the first 5 months and capture the first Nutanix Federal Partner of the Year Award.

“We always lead with Nutanix – it’s a game-changing technology,” said Devin. “It addresses so many different issues of a virtualized datacenter – essentially every use case we come across: Technology refreshes, big data, VDI, etc.  From a sales standpoint, it’s very simple to quote. From an engineering standpoint, it’s very easy to create an architecture with it.”

Although DH Technologies sells hardware and software, it is really a services firm. Devin said he is not concerned in the slightest that his company doesn’t get to charge customers for the weeks or months of up-front configuration services commonly required for traditional servers, UCS systems and SANs.

“The lack of back-end configuration services is obviously better for our customers,” said Devin. “But it’s also better for us. The reduction in services means that our customers can afford to implement bigger projects, while the velocity of Nutanix deployments allows us to take on more projects. The types of services we’re now able to provide are much more valuable to the customer than rack-and-stack, and are certainly more challenging and stimulating for our engineers. And we’re able to charge more because of the specific skill set involved.”

Devin says that slashing implementation cost and time builds additional trust among customers. Their clients tend to quickly come back for many more blocks – not just for the initial use case, but for additional use cases as well once they’ve realized the exceptional value of Nutanix.

“The speed at which Nutanix deploys is very impressive to our customers,” said Devin. “They tend to be very open to speaking with us about what other opportunities they may have. We had one customer that went from a single NX-2400 for VDI to 8 blocks of NX-3460 as they expanded to server consolidation. They also utilized Nutanix to provide replication between their production data center and disaster recovery site.”

DH Technologies has developed a fully automated Cloud Computing solution showcase environment built upon Nutanix called Virtual Dojo. Virtual Dojo enables prospective customers to experience running a VMware View desktop in order to test and learn about the technology. It also enables them to log onto the Nutanix Prism interface as well as to look at other products such as Arista and Solarwinds.

Devin sees endless possibilities for DH Technologies and Nutanix. “The new capabilities Nutanix keeps adding still further expands our opportunities,” said Devin. “For every dollar we make on Nutanix, we make many more on ancillary products and services. And right or wrong, many customers perceive the hypervisor as a commodity. Nutanix gives them the choice to work with whatever hypervisor they want.”

“When it comes to Nutanix, we’re ‘all in’.”

February 22, 2014

Reached a milestone: 1,000 Bikram Yoga classes

“I hate working out. Hate it. But I like the results.”

    Jack LaLanne (pioneering and world-famous fitness guru)


Last week, I took my 1,000th Bikram Yoga class at our home studio in Lake Tahoe. During the past 5 years, in addition to our home studio, I’ve taken classes at 68 different Bikram studios in 19 states and across 6 countries. It has been a great joy to be able to participate in the Bikram Yoga community around the globe.

For those of you unfamiliar with Bikram Yoga, it’s a 90-minute session of two sets of 26 yoga poses done in rapid succession in 105°F heat and up to 40% humidity. It’s a grueling class that founder Bikram Choudhury describes as a “torture chamber”.

My friend, Art Javar, invited me to attend my first Bikram Yoga class on October 24, 2008 (my 52nd birthday). Since my wife had been bugging me for years to go to a yoga class with her, I thought I’d take the opportunity to win some husband points and we joined Art and his wife, Lisa, at the Bikram Yoga studio in Napa, CA.

I absolutely hated the class. I’ve never been very flexible and have little in the way of coordination or balance. I was uncomfortable with the deep stretching of the yoga postures, and the high heat made me simply miserable. During the class I was thinking that you’d have to be an idiot to do this stuff. I was literally swearing at Art under my breath for inviting me to attend this madness. I knew with a certainty that I’d never return.

But then a funny thing happened the next day – I felt really good! I felt so good, in fact, that I told my wife, “I think I need to do this again.” Wendy and I became so enamored with the yoga that we held off moving to Lake Tahoe for two years because of the lack of a studio there. We finally gave in and built our own mini-studio in our former storage area.

I’ve heard some first-hand stories of phenomenal Bikram results. One lady had been told by doctors she wouldn’t walk again following a severe auto accident. Literally dragging herself through the postures, she eventually was cured and is now a teacher. Another student eliminated years of chronic and pervasive joint pain. Bishnu Ghosh Cup 2011 World Champion, Joseph Encinia, stopped all medications after years of inactivity due to severe rheumatoid arthritis. Others find they gain mental clarity, sleep better and, of course, tone up and lose weight.

With around 600 studios across the globe and more opening all the time, Bikram may be the world’s most popular yoga – and almost certainly is the fastest-growing. While the 26 postures were culled from thousands of yoga “asanas” dating back 4,000 years, each posture in the Bikram Yoga sequence is designed to safely stretch and open the body, in preparation for the next posture.

The teacher “dialogue” is the same wherever you go around the world, earning Bikram snide nicknames from conventional yoga practitioners such as “McYoga”. But the familiarity of the dialogue engenders rapid improvement and even helps the time go by more quickly. I also found that staring at a half-clothed reflection in a mirror for 90 minutes makes it difficult to engage in self-delusion, even for those of us masterful at rationalizing away opposing evidence. Prior to starting Bikram I thought that I was in good shape, though my friend, Craig Sim, now assures me that I was actually on the “doughy” side.

I no longer hate Bikram Yoga classes (as much). I sometimes even look forward to them. Despite, or rather because of, their arduousness, I find them meditative. I’ve never been able to silence my mind enough to meditate in the conventional fashion, but in Bikram I focus on my breathing in order to get through the postures more easily. I leave the 90-minute classes feeling calm and content.

To me, a Bikram Yoga class seems a bit nostalgic in that it is somewhat like wrestling (which I did in high school and college), but without the pain. While for the first year or so I was prone to sluff off during the postures, I now strive to achieve the wrestling maxim of “leaving it all on the mat”. Bikram has been a life-changer for me. As I’ve told Wendy, I’d give up both of my sports-related passions of snowboarding and mountain biking before quitting Bikram. Here’s why:

  • My chronic back pain, dating back to cracking my spine in wrestling at 14 years old, is gone.
  • My chronic neck pain, dating back to an injury sustained while coaching wrestling at 30, is significantly reduced.
  • It’s a nice way for my wife and me to bond through the shared ordeal. While I’m no longer the worst in a class, she is always one of the best. She’s a former gymnast so I don’t resent instructors always asking her when she’s going to become a teacher even though I practice far more than she does. I plan to pass her up by my 102nd birthday.
  • I’ve lost 15 pounds and toned up.
  • I do better in the heat. This sounds funny, but it’s really true. Even when we went trekking in the Sapa Mountains of Vietnam, the pounding heat didn’t bother me much.
  • I’m a better public speaker. Bikram Yoga taught me to calm myself through breathing. It also helps me relax in other stressful situations.
  • My spring-time allergies disappeared, perhaps through sweating out the allergens.
  • My stamina has increased.
  • I’ve gained surprising flexibility in certain areas. I could barely reach below my knees before, now I’m getting close to putting my palms on the floor.
  • My appetite has shifted and I eat more healthily than I used to. Consistently torturing my body gives me more willpower to put less junk in it.

I’ll conclude with a brief story. When Wendy and I first started Bikram, we tried out a few different studios. I was inevitably the worst student in every class. On our 7th class, we went to the Walnut Creek studio where the owner, Virginia, was teaching. She kept correcting a student named Paul, so I’m thinking to myself, “Wow. There is actually someone here who is worse than I am.” Then she came over to me and said, “Your name is Paul, right?”

Namaste

February 20, 2014

Innovation and the Channel

In 2005, my partner and I started a VMware consultancy. Our business plan was simple: Gain a national reputation for virtualization expertise, and then sell to a big solution provider wanting to acquire a VMware practice. We cashed out just over three years later for $6.2M. Some might feel that we were foolish to sell so quickly, and I would agree – but that’s not the point of this article. The reason we were able to execute as planned was due to two blatantly obvious industry phenomena: 1) Virtualization would inevitably become the datacenter standard; and 2) Most large channel partners would nevertheless be slow to invest in virtualization.

The S**t’s about to Hit the SAN

When the leading channel magazine, CRN, hands out awards for innovation – they always go to manufacturers. Solution providers instead are recognized for the size of their revenues or for how many certifications they hold from the leading datacenter manufacturers. The resellers’ allegiance to the giant incumbents pays off in terms of leads and back-end marketing dollars. And the model works quite effectively even when incorporating smaller technology disruptions such as disk-based backup or flash. But it breaks down in the face of a tectonic shift.

We are facing a tectonic shift.

From outside the datacenter, AWS is committed to decimating infrastructure sales by moving workloads to the cloud. Jeff Bezos famously warned equipment manufacturers, “Your margin is my opportunity”. AWS does billions of dollars in business – and it’s just getting started. Storage manufacturers, server manufacturers and VARs will increasingly feel the pain.

And from inside the datacenter, Nutanix and VMware are revolutionizing the infrastructure for virtualized workloads by eliminating the need to purchase and manage SANs entirely. Nutanix also eliminates the need to buy standalone servers, while VMware VSAN commoditizes them. VARs will struggle to differentiate, for example, HP servers that share the same Ready Node certification as low-cost Supermicro servers. Without the supplementary margin from servers and storage, many VMware partners will struggle to make a living off just software.

Strapping on Wings is not the Way to Fly

Clayton Christensen, author of The Innovator’s Dilemma, talks about how mankind unsuccessfully tried to fly for hundreds of years by strapping on wings. Various configurations of feathers are, after all, what nature’s most successful fliers use. It was only once fluid mechanics and lift were understood that the basis for modern flight developed.

Early last decade, VMware changed IT by bringing the same type of virtualization that IBM pioneered in mainframes to the x86 world. Yet with the partial exception of Cisco UCS, all of the leading hardware manufacturers continue to approach virtualized infrastructure with the same old disparate tiers of compute and storage that Google long ago recognized as highly inefficient.

As organizations increasingly virtualize, they encounter the problems that come from deploying separate compute and storage tiers: complexity, degraded performance, difficult troubleshooting and poor cross-team collaboration. The datacenter incumbents have all responded with their “converged infrastructure” offerings that provide faster deployment and one number to call for support.

UCS, which was the first purpose-built server for hosting virtual infrastructure, features in the “converged storage” solutions of EMC, NetApp, Hitachi and Nimble. But juxtaposing storage arrays with UCS, or with any other server for that matter, is the equivalent of trying to fly by strapping on wings.

Web-Scale

When Google debuted in 1998, the founders did not want to simply adopt the same type of SAN infrastructures utilized by Yahoo and the other Internet leaders of the day. The company instead hired a team of scientists from prestigious universities to create an entirely new approach to infrastructure.

The scientists developed the Google File System (GFS), along with MapReduce and NoSQL, running on hundreds of thousands of commodity servers with local storage – and no SANs. The result was massively parallel computing, exceptional fault tolerance and linear scalability. Google rocketed past Yahoo to become the dominant leader in search. The Google infrastructure model was eventually adopted by Yahoo and all the other leading cloud providers.

A couple of the developers of GFS, including the lead scientist, saw an opportunity to bring the advantages of this web-scale architecture to commercial and government enterprises by leveraging the hypervisor. They, along with engineers from companies such as Oracle, VMware, Microsoft and Facebook, spent four years developing the Nutanix Distributed File System.

Nutanix refers to its architecture as web-scale; it incorporates the same scalability, simplicity, resiliency, and lower cost that have already won in the very demanding cloud provider environment. It will inevitably win in the enterprise as well.

Web-scale additionally provides a foundation for still more exceptional things to come including advanced analytics, automated workflows and hybrid cloud enablement among other capabilities. @AndreLeibovici discusses some of these advantages in his recent blog post.

While it is far too early to say that Nutanix specifically is going to be the winner in this space – the company is certainly off to a good start. With $100M in total revenues after only two years of selling, Nutanix is far and away the fastest-growing infrastructure company of the past decade.

The disruptive nature of web-scale puts channel partners in a dilemma that they didn’t face when VMware ESX first came on the scene. ESX, after all, vastly increased storage sales along with fiber channel switches and powerful servers. Nutanix’s web-scale architecture, on the other hand, competes with every leading server and storage manufacturer.

Time to Innovate

It is misleading to say that channel partners don’t innovate. Savvy players identify which new technologies are important and help their clients justify and implement them. Large and even multi-billion dollar channel organizations have been built by partners early to market with manufacturers such as Cisco, NetApp, Palo Alto Networks, Splunk, Riverbed, VMware and Citrix.

For some time now many VARs, particularly the larger ones, have grown complacent. Though they may have been able to slowly embrace virtualization without negative consequences, the datacenter infrastructure revolution is not likely to be so forgiving. VMware’s introduction of VSAN validates software-defined storage and web-scale architecture. It forces a choice between loyalty to vendors peddling outdated architecture and doing what’s right for customers.

Increasingly, channel partners are reshaping divisions, or even their entire businesses, around Nutanix. These partners radiate the excitement that comes from changing the dynamics of the datacenter. Their commitment to web-scale is paying off in terms of recognition as thought leaders, increased customer loyalty and escalating sales.

Thanks to @sudheenair for his many contributions to this article.

See Also:

Nutanix Joins the $1 Billion Valuation Club as It Takes On Tech Giants. 01/14/2014. Deborah Gage and Shira Ovide. The Wall Street Journal

CRN Tech Elite 250. 01/14/2014. Rick Whiting. CRN.

Not Just Datacenter Transformation, Nutanix is also Transforming Partner Businesses. 01/13/2014. Steve Kaplan. Nutanix Web site.

2013 Solution Provider 500. 10/18/2013. Kristin Bent. CRN.

The 20 Smartest Things Jeff Bezos Has Ever Said. 09/09/2013. Morgan Housel. The Motley Fool.

February 15, 2014

Not just datacenter transformation; Nutanix is also transforming partner businesses

How do you know when a datacenter technology manufacturer has something truly disruptive? One sign is when channel partners start to focus their businesses around selling and implementing the manufacturer’s solutions.

This type of channel dedication helped make Citrix and VMware into datacenter behemoths, but it has been notably lacking when it comes to conventional server and storage manufacturers. And the incumbents’ “converged infrastructure” spin-offs are not doing much better.

The story is now changing as partners across the globe increasingly embrace the web-scale architecture Nutanix brings to the datacenter.

Sirius

“Nutanix,” argues Steve Dowling, Director of Converged Infrastructure for Sirius Computer Solutions, “has bypassed the ‘low-end disruption’ stage of the Christensen Failure Framework.” Dowling says that the Google lineage of the Nutanix Distributed File System enabled Nutanix to take the same type of web-scale IT infrastructure that dominates the cloud provider space and bring it directly to the enterprise datacenter.

The Christensen Failure Framework is described in The Innovator’s Dilemma by Clayton Christensen. Sirius, a national solutions provider with $1.4 billion in annual revenue, uses Christensen’s model to help ensure they continue to bring the best technology options to their clients. It was recognition of the maturity of the Nutanix offering in terms of market disruption that led Sirius to quickly make Nutanix a strategic partner. Nutanix is currently on track to be the fastest-growing new partner in Sirius’ 34 years of business.

“We created the converged infrastructure division because we recognized early that this is the type of solution that companies need,” said Dowling. “But younger people coming into the workforce, in particular, are used to the iPhone type of software-defined convergence; they’re looking for a similar elegance and simplicity in their datacenter environment. Nutanix provides a way for Sirius to get in front of the demand with a converged solution that mirrors what the cloud providers have adopted during the past decade.”

Dowling says that one of the big attractions of infrastructure-based cloud computing is that it provides a way for CFOs to mitigate uncertainty by paying for resources only as consumed. But, of course, this flexibility comes at a cost as well as with a certain loss of control. Dowling says that Sirius commonly receives requests to help organizations turn around and “in-source” their public cloud or otherwise outsourced environments.

“The huge advantage Nutanix brings is that companies can start their IT initiatives with a very small cost commitment,” said Dowling. “The scalability of Nutanix lets them expand their environments only as needed. This means that they, rather than the public cloud providers, benefit from continued enhancements resulting from Moore’s Law. Our customers retain control and pay only a fraction of the price they would if utilizing a public cloud provider.”

BEarena

It’s not just giant U.S. partners that are having great results with Nutanix. BEarena is a solutions integrator in Sydney, Australia, that focuses on all things virtual. They have always worked to keep their customers abreast of emerging technology and to give them an unbiased view of the latest solutions in the industry.

“Nutanix,” said Darren Ashley, president of BEarena, “has transformed our business 100%. In the last 12 months, we’ve gotten feedback from customers, analysts and even Nutanix competitors (who have been courting us), evidencing that BEarena is increasingly seen as being very innovative, forward-thinking and leading the charge in converged infrastructure deployments”.

Ashley says that BEarena staff is very technical. They don’t focus on products, but rather on solutions. Once they discovered and understood the Nutanix solution – they refocused their business around it. Half of their technical staff now hold NPP (Nutanix Platform Professional) certifications, while the other half are NPSE (Nutanix Platform Sales Engineer) certified.

“We make the pitch to our clients about the attributes of a modern datacenter”, said Ashley. “These include software-defined performance, flash designed from the ground up, elastic consumption, multi-hypervisor support and private/public cloud interoperability. We let them know that only Nutanix has all of these capabilities. Then the lights go on.”

Zycom Technology

Zycom Technology is a 16-year-old solutions integrator operating throughout Canada. Like many solutions providers, Zycom is in transition from a boutique integrator to a private cloud integration and services company. Its vision is to become one of Canada’s leading integrators of private cloud technology. The firm has a particular emphasis on both VDI and DaaS. In fact, they’ve built their entire DaaS platform around Nutanix.

“As good as our VDI practice was, relying on traditional servers and storage was an uphill battle,” said Terry Buchanan, Zycom CTO. “Nutanix has really enabled VDI to take off as well as accelerated our sales funnel. We have never before seen this type of uptick. We have clients fighting over who is going to get to try out our Nutanix demo gear next.”

Buchanan says that one of the challenges that anyone in the channel who deals with servers and storage faces is that organizations tend to be very set in their ways, and are often resistant to disruptive products. But he says that RAID is an antiquated technology, and that IT professionals are surprisingly receptive to the Nutanix web-scale architecture. Even technology laggards perk right up when introduced to Nutanix.

“And they’re buying Nutanix not just for VDI,” said Buchanan. “Our clients are looking to use Nutanix for production environments, mission-critical apps and even for their entire infrastructures. We’re also getting traction in private cloud as well as with ISPs. Our clients are captivated by the up-front and long-term savings, operational performance, scalability and simplicity.”

Buchanan says that Nutanix recently enabled them to break into a new account which they had been targeting unsuccessfully for 5 or 6 years. Just presenting Nutanix opened the doors wide.

“Our internal nickname for Nutanix is ‘the Sales Enablement Department’,” said Buchanan. “We made Nutanix a strategic partner after just 3 months – we’ve never done that before with any new manufacturer. We hope to be Nutanix’s largest partner in Canada.”

Nutanix Now

Nutanix is hosting three partner summits across the globe in the next few months, and the theme is “Nutanix Now”. Unlike the leading datacenter incumbents, each of which has tens of thousands of partners, Nutanix is looking to have a smaller, but more focused and effective partner channel.

It is rare for an opportunity to come along that lets forward-looking channel partners build a business around a new datacenter technology. The time for them to transform their operations with Nutanix is clearly now.

January 5, 2014

Five tips for standing on the shoulders of the channel

Selling through the channel is sometimes tough for manufacturers to swallow – especially in situations where they do most of the work yet the partners still make significant margins. But the channel, when effectively managed, can dramatically scale a manufacturer’s business. This requires an investment not only in foregone margin, but in building a channel organization focused on partner success.

I worked for six different channel partner businesses over the past 25 years, was a columnist for three different channel magazines and sat on partner advisory boards for several manufacturers. Now that I’ve been running Channels for Nutanix for the past nine months, I’ve had some time to reflect upon the five key tips for manufacturers wishing to develop a world class channel organization.

1)      Give the Partners a Reason for Selling Your Product

I remember sitting at a Citrix Partner Summit in the early 2000s when CEO Mark Templeton was going on and on about Citrix’ goal to be a $1B company. While Mark is one of the people I admire most in the industry, I was disappointed that his messaging was all about Citrix rather than how achieving his objective would benefit the channel. I know through subsequent conversations I had with other Citrix partners that I was not alone in feeling this way.

Channel partners are out there selling, and hopefully evangelizing, your products as a massive unpaid sales force. Ensure that your messaging is geared toward helping them understand how selling your product is going to build their businesses. And this goes well beyond your product margin or even the associated product and services drag. Your messaging should incorporate how your product will give them a competitive advantage, win more business and lead to new customer relationships.

2)      Stratify Your Channel

One of the many smart things that Ray Noorda did at Novell was to stratify the channel into three tiers: Silver, Gold and Platinum. This is tremendously helpful to manufacturers because it allows us to identify the partners willing to invest in selling our products by obtaining the required certifications, engaging in the expected marketing efforts, and so on. We can return the favor by investing back in the top partners with more leads, more MDF, more training and so on.

VMware learned years ago that channel partners that make the effort to achieve certain competencies through obtaining certifications tend to see an increase of two to three times in related sales.  A stratified channel is a way to encourage partners to obtain their certifications in order to move up to a higher tier.

3)      Collaborate with Your Partners

One of the things that Microsoft did really well was establish a Partner Advisory Board and then act upon the input they received. I was on the Microsoft Partner Advisory Board in the early 2000s when we had a couple of meetings where we railed against the competition we were seeing from Microsoft Consulting Services. Microsoft made some significant changes as a result, and won a lot of partner respect.

Collaboration comes into play when you truly work together with your partner community to build a business. Partners have a tremendous amount of knowledge. Capturing and embracing this knowledge not only helps your partner community become more effective, but improves your own organization as well. Creating an interactive partner advisory board establishes a conduit for this transfer of knowledge. Allow the partners to be honest about what they get and what they need from you. Allow them to tear apart what you offer them, but also consider what they are requesting. Then embrace it and deliver it. All parties win with this type of collaboration, but it should be conducted in a formal setting such as regular advisory sessions.

4)      Enable Your Partners

One of Ray Noorda’s sayings was, “In mystery, there’s margin”. Those days are gone. More than ever, VARs can survive, let alone thrive, only by clearly delivering “Value” to their customers. This entails helping them choose the optimal products and cloud-based services for their businesses.

Provide your partners with the ability to build a business around your technology. This includes helping them make sense of the new technical elements, companies and partnerships in the marketplace that make up these solutions. As an example, Nutanix is helping our partners capitalize upon our alliances with OpenStack, with various big data manufacturers, with data protection specialists, etc.

Effective training is key. Partners should be able to simply convey the value of your product to their customers. Any content or material that helps educate your partner base should be consumable in many formats and available at any given time.  Text, video, audio, etc. delivered on-demand is critical.

5)      Build Partner Loyalty

In my early days as a VMware channel partner, my firm brought a VMware rep into a meeting with a new client we were targeting in Reno. I learned later that this same rep went back to the same client for a follow-up meeting with Dell. I raised Cain, and the rep was subsequently moved to another part of the organization. I was impressed with how quickly VMware acted to rectify the unfortunate situation.

The easiest way to build partner loyalty is to establish trust. Channel conflict is inevitable, especially if you sell direct as well. It is essential to clearly lay out the rules of engagement, and then always do what you say you’re going to do.

December 4, 2013

VirtualMan Converges

If you remember the VirtualMan series of comic books, you may be interested to know that VirtualMan is keeping up with the times. His sidekick, formerly known as 1LUN, now has a new name more appropriate to the optimal infrastructure for virtualized datacenters.

[VirtualMan cartoon]

November 16, 2013

What is the Value of a VCDX to a VMware Ecosystem Partner?

The VMware Certified Design Expert (VCDX) certification is the most elite of all certifications in the IT industry, and by far the most difficult to achieve. Written exams, preparation of a solution design that can run hundreds of pages, and having to defend all aspects of this design are all part of the certification process. Only 124 engineers across the globe have managed to obtain it thus far, and around half of those work at VMware.

The VCDX certification is not just about VMware, it’s also about servers, storage, networking and applications. The VCDX must also be able to understand business requirements and effectively translate those requirements into technical solutions. He/she must also understand the limitations of virtualization and be able to adapt technology to enable successful implementations. The certification is realistically impossible to achieve without a great deal of real-world experience.

The value of a VCDX certification to a Solution Architect is obvious. The distinction automatically opens doors in terms of job/consulting opportunities, speaking engagements and access to hard-to-reach people. It also makes them part of the tight-knit community of VCDXs around the globe who share ideas and technology.

Many VMware partners, both manufacturer and channel, do not really understand the value of a VCDX certification. Nutanix is not one of these partners.  With the arrival of Michael Webster, we now have four VCDXs, ranking second only to VMware.   And we continue to actively recruit still more VCDXs as well as to encourage existing employees to obtain their certification.

The primary areas of VCDX value to Nutanix include credibility, product improvement, hiring, and VMware partnership.

Credibility

The large incumbent datacenter manufacturers have a long-time established business with an entrenched partner channel and huge customer base. When they speak, the media and the industry listen. Cisco, for example, was able to quickly become the world’s second leading blade manufacturer with UCS even though the company had no prior server experience.

When an early stage company such as Nutanix seeks to disrupt the entire datacenter with a new technology, it is much more difficult to grab mindshare from customers, channel partners, analysts and the industry at large. VCDXs can work where they choose. The eagerness of so many VCDXs to work with Nutanix helps validate the new approach to datacenter architecture. And because VCDXs tend to be passionate as well as prolific speakers, writers and social media practitioners, their Nutanix advocacy has far-reaching effects throughout the industry.

Product Improvement

The Nutanix Distributed File System originated with a couple of the developers of GFS, including the lead scientist, who saw an opportunity to bring the advantages of true convergence to commercial and government enterprises by leveraging the hypervisor itself. Nutanix engineers who came from companies such as Google, Facebook and Oracle leverage their experience with massively scalable architectures to continually enhance Nutanix’ products and capabilities.  But they don’t necessarily have the touch points of helping clients build out Nutanix-enabled enterprises.

Our VCDXs help bridge this gap because of both their broad-based knowledge and their ongoing extensive interactions with clients, partners and Nutanix field personnel. They assist our product engineers by letting them know where Nutanix is working really well and where we have opportunities for improvement or enhancement.

Hiring

The hiring process at Nutanix, while standardized, is also particularly rigorous in order to identify candidates with a broad base of skills. We’ve found that VCDXs are exceptional in that regard. This isn’t to say that we see the four magic letters and immediately say, “They’re really good”; VCDXs go through the same evaluation process as everyone else. But VMware’s training, testing and certification is a remarkable screen for finding top architects. In addition, VCDXs tend to be exceptional communicators. The candidates have to defend their solution against a barrage of questions from a panel of other VCDXs. How they handle themselves in defending their solution under pressure is all part of the certification process.

Hiring VCDXs also sends a clear message to the virtualization community that Nutanix is willing to make the investment to bring top talent to the organization.   Great technologists tend to attract other outstanding architects and engineers who want to work with similarly able peers.

VMware Partnership

VMware is an important Nutanix partner.  A lot of our clients like the VMware stack including vCD, vCloud Suite, Horizon Suite, etc.  Having VCDXs on board helps us provide a very high level of pre-sales and post-sales support and strengthens our partnership with VMware.

Measuring the VCDX Value

While Nutanix is very big on identifying metrics and holding people accountable for reaching them, we have not attempted to quantify the worth of a VCDX. But everyone, from the top executives down, understands and recognizes the benefits we receive from having VCDXs on our team. And we’re working to give back to the community. We recently ran a contest to sponsor an architect in obtaining VCDX certification. We were so impressed with the responses that we ended up sponsoring two VCDX candidates.

Thanks to ChrisFendya (@ChrisFendya) of Nutanix partner Trace3 and to Lukas Lundell (@LukasLundell), Josh Odgers (@josh_jodgers), Jason Langone (@langonej) and Lane Leverett (@wolfbrthr) all of Nutanix for their suggestions.

November 13, 2013

Calling Nutanix a Storage Company Doesn’t Compute

Disruptive technologies are sometimes positioned within the context of existing solutions – the classic example being the horseless carriage as the initial categorization of automobiles. In our industry, we’ve seen Salesforce evangelize “No Software” long before cloud computing became popular. Citrix emphasized application delivery rather than hosted desktop sessions. VMware initially promoted “Mainframe-class reliability, security and management on Intel computers” until the virtualization concept took hold.

Nutanix faces a similar challenge in messaging our Virtual Computing Platform. I had breakfast last week with a savvy financial analyst who quickly grasped the disruptive nature of the company and how it’s restructuring the architecture of virtualized datacenters. And yet, as we were parting he still lumped Nutanix in with the myriad fast-growing new storage companies.

There is nothing wrong with storage, of course, at least not if talking about partially used paint cans, boxes of old records or even archived data. But when it comes to frequently accessed information and high IOPS, “storage” shouldn’t even be used in the same sentence.

As an analogy, consider car keys. Most folks use them on a daily basis; it wouldn’t make sense to keep them in the basement storage room. Adding a fire pole to the basement or a moving walkway to speed progress through the storage room are rather silly solutions. It makes vastly more sense to keep the car keys conveniently close by on the kitchen counter or dresser. Similarly, active information should be maintained on SSDs and disk close to the CPUs, not on a centralized storage array accessed over a network.

History of Enterprise Storage

Enterprise storage for x86-based networks started with the introduction of the EMC Symmetrix in 1990. Despite the significant expense and complexity it entailed, larger organizations began consolidating their data from local server drives to the EMC arrays in order to benefit from shared access and other advantages. The Internet boom during the second half of the 1990s fueled a rapid increase in sales for both EMC and newcomer NetApp. But the dot-com bust early the next decade reversed the trend, and both organizations saw declining revenues. That is, until VMware came on the scene.

EMC Sales in Billions

In 2003, VMware added a truly remarkable enhancement to ESX, the ability to vMotion virtual machines between hosts. IT staffs were eager to deploy this capability to eliminate the requirement for maintenance windows among other use cases. But on page 37 of the VMware VirtualCenter 1.0 User Manual under VirtualCenter VMotion Requirements was the bullet that would change the datacenter: “The hosts must share a storage area network (SAN) infrastructure”.

The tide immediately changed for the storage manufacturers as customers began purchasing shared storage to utilize VMotion and DRS. EMC recognized the potential goldmine and announced its intentions later that same year to acquire VMware for $635M.

                      The Bullet that Changed the Enterprise Datacenter

The rest, of course, is history. The storage industry, driven by VMware virtualization, took off. In addition to rapid growth of both EMC and NetApp, other large datacenter incumbents IBM, HP, Hitachi and Dell all dramatically increased their storage businesses organically, through acquisitions, or both.

Today, in addition to the traditional storage players, we have new start-ups focusing on flash, server compute, virtualization, or scale-out in order to offer enhanced performance or lower costs. One of the flash start-ups, XtremIO, was acquired by EMC 18 months ago and is being formally launched as an EMC product tomorrow. Another storage start-up focused on flash, Nimble, has filed for an IPO.

EMC Stock Price Increased 129% and NetApp 96% the Year VMware Announced “SAN Required”

Storage Might be Faster, but it’s Still Storage

A storage array might consist of all flash and be blindingly fast in terms of IOPS, but it still is accessed by servers across a physical network – largely negating the speed advantage. And with some exceptions, including EMC’s XtremIO, all of the storage traffic is funneled through two physical storage controllers, which can, and does, lead to boot storms and write storms – particularly in an environment demanding high IOPS such as VDI.

This hub-and-spoke model of storage and servers also doesn’t scale. As more servers are added, the array’s IOPS are continually divided among a larger number of servers, reducing the performance available to each. And when an array fills up with data, a complex and expensive forklift upgrade is required in order to continue expanding.
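
To make the contrast concrete, here is a back-of-the-envelope sketch in Python. The IOPS figures are purely illustrative assumptions, not benchmarks of any particular array or node; the point is simply that a fixed dual-controller budget gets divided among more and more servers, while a scale-out cluster adds controller capacity with every node.

    # Illustrative numbers only; not benchmarks of any specific array or node.
    ARRAY_IOPS = 100_000   # total IOPS a dual-controller array can serve (hypothetical)
    NODE_IOPS = 25_000     # IOPS each scale-out node contributes (hypothetical)

    def per_server_iops_shared_array(servers: int) -> float:
        """Hub and spoke: a fixed controller budget is split across all servers."""
        return ARRAY_IOPS / servers

    def per_server_iops_scale_out(servers: int) -> float:
        """Scale-out: every added node brings its own controller, so per-node IOPS stay flat."""
        return (NODE_IOPS * servers) / servers

    for n in (4, 8, 16, 32):
        print(f"{n:>2} servers: shared array {per_server_iops_shared_array(n):8,.0f} IOPS each, "
              f"scale-out {per_server_iops_scale_out(n):7,.0f} IOPS each")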

 A Networking “Lazy River” Traverses the Distance between Servers & Storage

The proprietary storage arrays are also difficult to manage. LUNs must be carved up and datastores configured. Each brand of array requires specialists who are skilled in administering the storage environment.

Bringing a Knife to a Gun Fight

While VMware-induced SAN deployments were propagating throughout enterprise datacenters, a very different scenario was unfolding in the Internet space. The story goes that prior to the launch of Google, Sergey Brin took a tour of the Yahoo datacenter which consisted of well over 1,000 NetApp Filers. He was astounded that so many of the arrays sat mostly idle due to lack of activity by users in different global time zones. Brin refused to accept his team’s explanation that storage arrays could not accommodate varying sets of user data. He told them to find a way to make Google’s storage agile, scalable and efficient.

The new company hired a team of scientists who developed the Google File System (GFS) and MapReduce to enable a massively scalable environment utilizing only commodity servers with local drives and with no need for managing and optimizing the storage environment. The impact was quickly felt throughout the Internet provider space whose leaders eventually all adopted a Google-like architecture. Robin Harris of StorageMojo estimated that Google had a 5-8X cost advantage over former search leader Yahoo. For Yahoo, it was like bringing a knife to a gun fight.

A couple of the developers of GFS, including the lead scientist, saw an opportunity to bring the advantages of true convergence to commercial and government enterprises by leveraging the hypervisor itself. Nutanix uses a similar distributed file system along with variants of MapReduce and NoSQL to enable predictive data placement. Nutanix, like Google, eschews RAID and instead utilizes commodity hardware with data replicated across multiple nodes.

The Nutanix Virtual Compute Platform includes both flash and disk as part of a commodity server rather than as components in a proprietary array. Workloads automatically migrate between the flash and disk depending upon how actively they are accessed, optimizing performance while minimizing cost.
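
The following Python sketch illustrates the general idea of access-based tiering; the extent names, the two-extent flash capacity and the policy itself are hypothetical, and this is not a description of Nutanix’s actual placement algorithm.

    from collections import Counter

    # Toy hot/cold tiering policy: the most frequently read extents stay on flash
    # (up to its capacity); everything else lives on disk.
    FLASH_CAPACITY_EXTENTS = 2          # hypothetical flash capacity, in extents

    access_counts = Counter()

    def record_read(extent_id):
        access_counts[extent_id] += 1

    def plan_placement():
        """Return a tier ('flash' or 'disk') for every extent seen so far."""
        hottest = {e for e, _ in access_counts.most_common(FLASH_CAPACITY_EXTENTS)}
        return {e: ("flash" if e in hottest else "disk") for e in access_counts}

    # Simulate a workload in which extents A and B are hot, C and D are cold.
    for extent, reads in (("A", 50), ("B", 30), ("C", 2), ("D", 1)):
        for _ in range(reads):
            record_read(extent)

    print(plan_placement())   # {'A': 'flash', 'B': 'flash', 'C': 'disk', 'D': 'disk'}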

The Nutanix Virtual Compute Platform doesn’t require specialized storage administrators. It is up and online in under an hour and then managed entirely from the virtualization console by the same team administering the virtual machines. And Nutanix, which utilizes virtualized storage controllers, scales just like Google. A Nutanix Virtual Computing Platform can start with a single 3-node cluster, and then grow one node at a time to accommodate thousands of nodes – without any change in configuration.

Nutanix Scales:  Increasing Storage Controllers, Read/Write Cache & Compute/Storage Capacity

A Radically Simpler, and More Efficient, Way of Building Enterprise Datacenters

When I corrected the financial analyst about his misclassification of Nutanix as a storage company, he sheepishly acknowledged his mistake. Nutanix is certainly not a storage company, just as it is not a server company. Nutanix has brought the same type of SAN-less scale-out architecture pioneered by Google to virtualized datacenters. Government and commercial enterprises across the globe are increasingly enjoying the cloud provider advantages of low cost, scalability, simplicity and resiliency.

And while the analyst might be struggling with how to categorize the Nutanix Virtual Computing Platform, he knows that it is proving to be wildly popular. Nutanix continues its pace as the fastest growing infrastructure company of the past decade.

Thanks to Dave Gwyn (@dhgwyn) and Sudheesh Nair (@sudheenair) from whom I appropriated pretty much every good idea and analogy relayed in this article.

See Also:

Scale Out Shared Nothing Architecture Resiliency by Nutanix.  10/26/2013.  Josh Odgers. CloudXC.

How Yahoo Can Beat Google. 07/05/2007. Robin Harris. StorageMojo.

Killing With Kindness: Death by Big Iron. 05/23/2006. Robin Harris. StorageMojo.

October 23, 2013

Life at Nutanix

I was working furiously an hour ago at 10:30 pm trying to get caught up on a multitude of tasks when I received a Twitter Direct Message from a long-time acquaintance asking me, “So how is life at Nutanix?”

I paused for a moment to consider how to answer that question. Of course, one answer is that life is hectic. Years ago, I used to work for Tom Flink when he was president of Vector ESP which had purchased the Citrix consultancy I ran. After joining Nutanix seven months ago, my very first email was to Tom who is now VP, WW Channels & Market Development Sales at Citrix. Tom told me that the pace at a manufacturer is crazy; it’s at a whole different level than that of a channel partner. This turned out to be an understatement.

Life is also challenging. I came to Nutanix with 25 years of experience on the channel side of the business, but without any experience on the manufacturing side. I figured that if Tom could make the transition so brilliantly, I could hopefully be successful as well. But I was unprepared for the expansive strategic thinking the position entails – especially at an extraordinarily fast-growing early stage company such as Nutanix. I’ve taken some lumps and bruises, but with the invaluable contributions of several very experienced folks within Nutanix, I think we’re on track to build a world-class partner program.

Most importantly, life is exhilarating. Every day I get to interact with extraordinary engineers, salespeople, marketing professionals, operations experts and executives across the globe. In addition to the mass of talent and experience, there is a constant buzz of excitement throughout the company as we work to compete against the giant incumbents dominating datacenter servers and storage.

It is great fun to be part of an organization that is changing an industry.

September 13, 2013

Why infrastructure is converging

Converged infrastructure is hot! In September alone, Cisco purchased Whiptail to “accelerate Cisco …momentum in converged infrastructure,” VMware announced VSAN, VCE unveiled “true” converged infrastructure, and Nutanix won Best of VMworld – Gold for the 3rd year in a row. But what exactly is converged infrastructure, and why is its importance escalating?

Convergence Defined

The definition of convergence depends upon whether the context is economics, mathematics, music, literature, etc. Even within the category of ‘computing and technology’, convergence can mean different things. Fortunately, Wiktionary provides an overarching definition: “The merging of distinct technologies, industries, or devices into a unified whole.”

Examples of successful technological convergence are not easy to find. Products, like biological species, become increasingly specialized over time in order to best exploit specific niches. Even the most popular convergence gadget ever introduced, the iPhone, was predicted by respected marketing gurus to eventually fail because single-purpose devices are nearly always superior to the converged variety.

Convergence flourishes when both cost and complexity are reduced. In telephony, for example, convergence of voice and data networks set the stage for VoIP. Duplicate hardware was eliminated along with the requirement to manage a separate PBX.

Why Converged Infrastructure?

Data center virtualization often finds IT staffs unprepared for the barrage of additional objects requiring management. The number of VMs typically far exceeds the number of former physical servers. IT must also deal with new virtualization hosts, vSwitches, vAdapters, and additional management tools. Troubleshooting complexity escalates as multiple vendors play an integral part in the infrastructure.

Virtualized data centers additionally play havoc with the traditional stovepipe IT organizational model. Server, storage and network teams can no longer work effectively in silos. But traditional infrastructure tools, processes and policies don’t lend well to efficient collaboration between the functional groups.

As a result of these challenges, virtualized customers are demanding solutions that enhance collaboration, reduce management complexity, and help eliminate the inevitable finger-pointing among different vendors. The data center manufacturers have responded. EMC joined with Cisco, VMware and Intel to create VCE. VCE’s Vblocks are selling at a one billion dollar run rate, leading all the major storage manufacturers, who now also offer Converged Infrastructure (CI) solutions that combine compute, storage and network resources either as a product or as a reference architecture.

Converged or Preassembled

The bundling of compute and storage tiers together in one rack or chassis, however, does not constitute convergence – at least not as Wiktionary defines it. The underlying compute and storage tiers require duplicate hardware and separate management along with an intermediate network to move data continuously between them. A more applicable moniker would be “Adjacent Infrastructure.”

Enabling Convergence with Virtualization

VMware may be a partner in VCE, but it knows that the hypervisor is key for converging multiple services. VMware NSX and VSAN converge network and storage elements respectively as part of a software-defined data center. The underlying hardware elements become commoditized as the intelligence moves up into the software.

Nutanix’ Anjan Srinivas recently wrote a blog post welcoming VMware to the software-defined storage (SDS) club. Nutanix leverages the hypervisor to virtualize the storage controllers found in traditional SANs. Transforming storage into a software-defined service enables convergence with software-defined compute (i.e. virtual machines) into a single system. SANs are eliminated while both storage and compute are administered holistically through the virtualization management console.

True Converged Infrastructure

VCE is making a big show – including a countdown clock, for the unveiling of “true converged infrastructure.” Being snarky in nature, my first thought was to wonder if existing Vblocks represent “False Converged Infrastructure.”

But, to be fair, VCE uses the convergence term to connote integration of a complex application stack with the underlying infrastructure. I assume “true converged infrastructure” will be along these lines and may also include continued improvements in consolidated management of the different tiers. I doubt that it heralds a move away from proprietary arrays.

The Inevitability of Converged Infrastructure

Converged infrastructure, while new in the enterprise space, certainly is not new. Google rolled out commodity servers with local storage over ten years ago – utilizing the Google File System (GFS) and other technologies to enable a very scalable, resilient and low cost converged infrastructure.

This architecture gave Google a huge advantage over the other Internet leaders of the day who were still using SANs. Robin Harris of StorageMojo estimated that Yahoo, which was a very large NetApp shop, spent 3 to 10 times more on storage than Google. He said that for Yahoo, it was like bringing a knife to a gun fight.

In keeping with its philosophy of an open systems approach, Google published a paper on the GFS in 2003. This eventually led to adoption of a similar architecture by Amazon with DynamoDB, by Facebook with Haystack and by Yahoo with Hadoop. Twitter, Salesforce, eBay and even Microsoft Azure all now also utilize scale-out local storage infrastructures for their primary businesses rather than SANs.

A distributed file system works well for cloud providers because they run a limited number of in-house applications that can be written to its customized APIs. The multitude of off-the-shelf applications used by commercial and government organizations has long made similar adoption unrealistic.

This is no longer the case.  Highly virtualized data centers now enable enterprises to incorporate SDS as an integral part of a scale-out converged infrastructure. They can achieve the same benefits as the cloud providers including reduced costs, consolidated and simplified management, less vulnerability to downtime, etc.

The superiority of scale-out converged infrastructure is validated by its widespread adoption in the demanding cloud-provider space. While legacy storage manufacturers today are successfully incorporating various convergence permutations, the SDS variety of converged infrastructure increasingly will replace proprietary arrays as the data center standard.

Disclaimer: I work for Nutanix, but the opinions expressed are my own. Nutanix’ Virtual Computing Platform is variously referred to as converged infrastructure or hyper-convergence.

Thanks to Bas Raayman (@BasRaayman), Anjan Srinivas (@anjans) and Sudheesh Nair (@sudheenair) for their reviews and suggestions.

References:

The Business Value of Converged Infrastructure Technologies. 08/28/2013. David Floyer. Wikibon

Nutanix Receives Best of VMworld Recognition for the Past Three Consecutive Years. 08/28/2013. Nutanix Press Release

VMware VSAN Validates an Increasing Shift to Software-Defined Storage. 06/16/2013. Anjan Srinivas. Nutanix Web site

VCE Vblock Demand Hits Billion Dollar Run Rate Three Years After Launch. 02/20/2013. Cisco Web Site

Google Throws Open Doors to Its Top-Secret Datacenter. 10/17/2012. Steven Levy. Wired.

Converged Infrastructure Takes the Market by Storm. 08/22/2012. David Vellante.  Wikibon.

Ex-Google Man Sells Search Genius to Rest of World.  12/21/2011. Cade Metz. Wired

The Battle for Convergence. 12/12/2012. Stuart Miniman. Wikibon Blog.

The Efficient Cloud: All of Salesforce runs on only 1,000 servers. 03/23/2009. Erick Schonfeld. TechCrunch.

How Yahoo Can Beat Google. 07/05/2007. Robin Harris. StorageMojo

iPhone Challenge: Marketing Pundits Unite. 04/30/2007. Seth Godin

The Google File System.  Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung. Google Research Publications

August 16, 2013

A new VMworld perspective

There he was, up on the big stage at the VMworld 2006 general session: my creation, VirtualMan, animated by @JerryChen speaking live with VMware COO Carl Eschenbach. As a bit of icing on the cake, VMware put up a VirtualMan cut-out where you could get your picture taken as VirtualMan. And the cut-out happened to be close to the booth of our primary competitor. Sweet!

The 2006 VMworld was my second, and probably my favorite, but hardly my last. I went to the next six VMworlds in a row, representing three different channel organizations. I had the great privilege of being able to present at most of them, though honestly, it used to be a lot easier to get a speaking slot.

VMware virtualization reshaped our industry and created the foundation for cloud computing. From the first VMworld I attended in 2005 with about 3,700 attendees and 700 partners, to last year’s 21,000 participants, this IT conference is unique among the very many I’ve been to over my 25 years in IT. Nothing I’ve seen matches VMworld for the pervasive energy and enthusiasm from folks excited to be part of the IT virtualization revolution.

Ironically, despite all the VMworld conferences, I’ve probably attended fewer than half a dozen sessions. Like many VMware partners, I ended up visiting the plethora of manufacturer booths or scurrying from meeting to meeting, whether with manufacturers or with peers. Then there are the dinners, parties, customer get-togethers and, more recently, tweet-ups. While the sessions were great (and I would try to catch up with at least some of them by reviewing the on-line versions), it was the networking I found to be the most valuable. I’m sure that remains true for many of our partners today.

This will be my first VMworld working for a manufacturer rather than a VMware partner. But if possible, I expect the conference to be even busier. I have no presentations this year, but still have plenty of great alliance partners with whom to meet such as Unidesk’s Ron Oglesby.

Most importantly, VMworld will give me the opportunity to meet face-to-face with many of our partners – some of whom I’ve known for decades, and some of whom I’ve yet to meet in person.

Being on the channel side for so many years, I recognize the huge investment attending VMworld requires in both cost and time. It is great fun to be able to offer our partners a solution enabling them to bring to their clients the same type of SAN-less scale-out architecture Google pioneered in the cloud provider space.

And while VirtualMan won’t be making an appearance at 2013 VMworld (undoubtedly to the relief of my former partner, Gary Lamb, in whose likeness, VirtualMan was drawn – though Gary does admit it helped with his dating), it’s awesome that Nutanix has a comic-based theme. We’re giving away a 1st edition X-Men comic book autographed by Stan Lee. The comic book is valued at $35,000.

And I love the tie-in of the “X” not only with Nutanix, but also with the whole ‘No SAN’ motif.

Whenever I do have a spare moment, I’ll be at our booth. So come by to say hi, talk to some of our many Nutanix experts including 3 VCDXs and 6 vExperts, and possibly win an X-Men 1st edition comic book.

June 23, 2013

RIP Ed Iacobucci, a business hero

“Every human being has his own vision of what’s happening in the future. I was lucky in that what I thought would happen did happen.”

– Ed Iacobucci, Founder Of Citrix

It was October of 1999, and our business, driven in part by the Y2K buying binge, was booming. I had refused an offer to sell my solutions provider company to FutureLink for what was a ridiculous sum of cash and stock. Just a few months later, the phones stopped ringing – and by the middle of January, I believed I’d made a big mistake.

Iacobucci & me at Tilden Park, late January 2000

For the first time in my life, I got depressed. I went on anti-anxiety pills for three days even though I detested all medications. I was reluctant to go into work, despite the passion I had maintained for my business for 13 years. At the end of the month, we held our first big seminar at Tilden Golf Course in Berkeley.  Ed Iacobucci, Chairman of the Board of Citrix, was our keynote speaker. We drew a great crowd of clients and prospects. It was a terrific event and helped me snap out of my funk.

How Citrix Reshaped my Career

In the early 1990s, my firm, RYNO Technology, was a Novell Platinum Partner tucked away in the industrial park of Benicia – a small town in northern California. While we were doing okay, we didn’t have much of an identity and certainly didn’t stand out from the crowd of around 340,000 VARs in those days.  In addition to Novell networking, we did everything from Lotus Notes to selling white-box PCs to offering printer repair.

We started selling Citrix WinView for remote access in 1994 and were immediately impressed with the product’s quality and performance.  When Citrix WinFrame debuted in 1995, I caught Ed’s vision for centralized hosting of Windows desktops, and repositioned our business to focus exclusively on Citrix server-based computing.

Although RYNO’s exposure was somewhat limited by our geographic location, we gained a strong reputation in the U.S. and even did some high-profile work overseas. I may have started out the new century with an uncharacteristically pessimistic attitude, but in 2001, RYNO was named the first “Citrix Partner of the Year for the United States.”

The Citrix Phenomenon

Ed was a true visionary. He led the original IBM OS/2 development team, but when he presented his idea for a multi-user version of the operating system to IBM, the company turned him down.  Undeterred, Ed met with Bill Gates and Steve Ballmer; they wanted to hire him. Ed told them that he wanted to do what they did and make a lot of money. So, he founded Citrix in 1989. Gates and Ballmer told Ed they wanted to support him, and Microsoft licensed the OS/2 source code to Citrix as well as the source code to Windows NT 3.51. Microsoft later invested significant monies in Citrix.

Mark Templeton joined Citrix, and under his and Ed’s leadership, the company grew rapidly. Citrix popularized the concept of anytime, anywhere computing by abstracting desktops from personal computers and moving them to centralized Citrix server farms. Similar to the VDI concept today, users saw screen images of their applications, which executed on the central servers.

Citrix was named to both the NASDAQ 100 and S&P 500 in record time. In 1998, Iacobucci received the Ernst & Young “International Entrepreneur of the Year” award.

Ed Iacobucci’s Legacy

Just five months after our Tilden Park seminar, Ed left Citrix.  He went on to co-found DayJet Corporation and later VirtualWorks Group. But Ed’s legacy lives on – not only with Citrix, which has grown to become one of the largest software companies in the world, but with the entire End-User Computing industry. All of us who work in any way with virtual desktops owe a debt to Ed for his initial vision.

I’ve had a 30-year hobby of reading about great entrepreneurs, and in my opinion, Ed Iacobucci belongs in this category. He was a brilliant visionary and a really nice guy. I am honored I had the opportunity to spend a morning with him, and I am extremely appreciative of the impact he had on my life and career.

June 3, 2013

In spring, thoughts turn to Software-defined Storage

This past Friday, May 31, marked the date when three datacenter players all simultaneously and coincidentally drew separate lines in the sand in defining the future of enterprise storage.

Hitachi Data Systems (HDS), Nutanix and NetApp each published a very different perspective on Software-defined Storage (SDS). Hitachi CTO, Hu Yoshida, makes the case in Software Defined Storage is not about Commodity Storage, that intelligence belongs in proprietary arrays, rather than in software. Nutanix CEO, Dheeraj Pandey, takes the exact opposite position in Software-defined Storage: Our Take. And NetApp via Virtualization Solutions Architect, Nick Howell, claims that it has been selling SDS for years in a personal post titled OK, Sure, We’ll Call it ‘Software-Defined Storage’.

The debate on SDS is not just one of semantics. As Pandey points out in his article, SDS is an essential component of a software-defined datacenter (SDDC) which in turn is at the heart of private cloud. Customers purchasing a manufacturer’s vision of SDS are setting the course for their own future SDDC and private cloud initiatives.

Datacenter Storage Evolution

EMC introduced the Symmetrix in 1990, and larger organizations increasingly started moving their data from local drives to central arrays in order to benefit from shared access, enterprise management and upgradability. By the time Google debuted in 1998, Yahoo was the incumbent search market leader, utilizing proprietary storage arrays for the bulk of its business, as did AltaVista, eBay and the other large Internet firms of the period. Yahoo was even featured in NetApp co-founder Dave Hitz’s book, How to Castrate a Bull.

Google was confident in the superiority of its search algorithm, and anticipated billions of users searching trillions of objects. It knew that Yahoo’s shared storage model simply could not scale to handle this type of volume let alone the expense and complexity it entailed.

Google recognized that a SAN utilizes the same basic Intel components as a server. Rather than placing storage into a proprietary and expensive SAN or NAS, the company aggregated the local drives of custom-built simple servers. The company hired a handful of scientists from prestigious universities to build the Google File System (GFS) in order to achieve massively parallel computing with exceptional fault tolerance. Google also invented the MapReduce and NoSQL technologies to enable linear scalability without any performance degradation. This model eliminated network traffic between the compute and storage tiers and was much simpler to manage.
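
For readers unfamiliar with the programming model, here is a toy word count written in the MapReduce style in Python. It only illustrates the map-then-reduce pattern that lets work spread across many commodity machines in parallel; it is not a sketch of GFS or of Google’s actual MapReduce implementation, and the shard contents are made up.

    from collections import Counter
    from functools import reduce
    from multiprocessing import Pool

    def map_shard(shard):
        # Map step: count words within a single shard, independently of all others.
        return Counter(shard.split())

    def reduce_counts(a, b):
        # Reduce step: merge two partial counts (Counter addition sums per-word totals).
        return a + b

    if __name__ == "__main__":
        shards = ["the web is big", "the web scales", "storage scales too"]
        with Pool() as pool:                       # each shard is mapped in parallel
            partials = pool.map(map_shard, shards)
        totals = reduce(reduce_counts, partials, Counter())
        print(totals.most_common())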

Google’s converged infrastructure helped it rocket to quickly become the dominant search engine player. Robin Harris of StorageMojo estimated that Yahoo spent 3 to 10 times more on storage than Google. He said that for Yahoo, it was like bringing a knife to a gun fight.

In keeping with its philosophy of an open systems approach, Google published a paper on the GFS in 2003. This eventually led to adoption of a similar architecture by Amazon with DynamoDB, by Facebook with Haystack and by Yahoo with Hadoop. Twitter, Salesforce, eBay and even Microsoft Azure all now also utilize scale-out local storage infrastructures rather than SANs for their primary businesses.

But even as the Internet leaders embraced the Google scale-out datacenter model, commercial and government enterprises of all sizes were going in the opposite direction by purchasing arrays in order to take advantage of the VMware capabilities of vMotion, High Availability and Fault Tolerance. Ironically, SANs were never built with virtualization in mind. They were meant for a one-to-one relationship between LUN and physical server rather than for many different workloads on a single LUN.

A Different Approach to the Datacenter

 A couple of the developers of GFS saw an opportunity to bring the advantages of true convergence to commercial and government enterprises by leveraging the hypervisor itself. They, along with engineers from companies such as Oracle, VMware, Microsoft and Facebook, spent three years developing the Nutanix Distributed File System (NDFS) which is at the heart of the Nutanix Virtual Computing Platform (VCP).

Nutanix, like Google, believes that a datacenter architecture utilizing separate tiers of servers and storage arrays is fundamentally flawed. A data center should be virtualized with the infrastructure intelligence residing in the software rather than in proprietary equipment. Commodity hardware is an essential SDS component because it can quickly be upgraded as the CPU, flash and HDD industry technology advances. This model is significantly less expensive to implement, is more resilient and vastly more scalable. It is also much simpler to manage.

SDS, as defined by Pandey, also serves as the underpinning of a SDDC. Abstracting all datacenter components from the underlying physical resources provides more flexibility and versatility. Specific networking, storage and compute equipment are no longer required.

NetApp and HDS, with the majority of their revenues stemming from proprietary arrays, have no choice but to defend the datacenter status quo. This is why Hitachi’s Yoshida insists that intelligent hardware is necessary to enable software-defined storage.

While Pandey readily admits that Nutanix is just scratching the surface of SDS, he presents a compelling case that there are seven principles that constitute the “true north” of SDS – all of which Nutanix embraces. This approach to SDS enables actual convergence of the compute and storage tiers. Anything else is just adjacency.

Contribution: Thanks to Lane Leverett (VCDX #53) for his edits to this article.

See Also:

Ex-Google Man Sells Search Genius to Rest of World.  12/21/2011. Cade Metz. Wired.

The Battle for Convergence. 12/12/2012. Stuart Miniman. Wikibon Blog.

The Efficient Cloud: All of Salesforce runs on only 1,000 servers. 03/23/2009. Erick Schonfeld. TechCrunch.

How Yahoo Can Beat Google. 07/05/2007. Robin Harris. StorageMojo.

The Google File System.  Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung. Google Research Publications.

How to Castrate a Bull. 2009. Dave Hitz. Book by NetApp Co-Founder.

 

 

May 27, 2013

Nutanix and Veeam help firms avoid disaster

While the major storage array manufacturers typically have the ability to replicate array to array, they do not support replication at the VM level. This precludes replication from many datacenters or offices to one central location. It also leads to challenges in terms of supporting VMware Storage DRS as well as in backing up VMware vCloud Director.

VMware and Veeam share a similar focus on the virtualized data center and on the private cloud, which utilizes virtualization as its underpinning technology. Together, Nutanix and Veeam provide a powerful combination for protecting data in both environments.

Protecting the Modern Data Center

With today’s robust hardware (i.e. Nutanix) and virtualization technology, there is no reason why a virtualized workload shouldn’t run at least as fast as its physical counterpart. And virtualization brings capabilities such as High Availability and Fault Tolerance that completely mitigate server failure.

The new generation of data center is not only virtualized, it is software-defined. Intelligence, rather than residing in proprietary hardware, all occurs in the software. This model is significantly less expensive to implement, is more resilient, vastly more scalable and much simpler to manage. It is also the foundation for private cloud.

Both Nutanix and Veeam focus only on the software-defined data center (SDDC). This is a huge advantage in that neither company has to deal with legacy challenges of backing up architecture designed for a physical environment. Combining Veeam and Nutanix takes advantage of the abstraction of the underlying compute, memory and I/O resources to enable enhanced backup and replication capabilities, along with significantly simplified implementation and management. 

As an example, Veeam Instant VM Recovery enables immediate restore of a VM back into the production environment by running it directly from the backup file. This improves RTO while minimizing disruption and downtime of the production VMs. Nutanix includes the ability to set higher IOPS levels, which enhances Instant VM Recovery both for restoring the production VM and for performing DR testing.

Superior Disaster Recovery

Nutanix sets up within an hour, including Veeam replication.  Organizations can replicate from arrays in one or several production data centers to the Nutanix cluster at the DR facility. This is particularly useful for organizations with remote offices running legacy storage arrays. They can utilize Veeam replication to continuously replicate to a Nutanix cluster in the corporate data center.

Veeam utilizes a dual proxy architecture that significantly enhances performance by providing an optimal route for data transfer. But in order to be effective, the source proxy needs a server with direct access to the storage on which the VMs reside or where the VM data is written. This enables data retrieval directly from the datastore, bypassing the Local Area Network. Nutanix’ converged compute/storage clusters provide the ideal architecture for Veeam Backup Proxies, which can distribute the replication jobs across the Nutanix clusters.

Veeam’s upcoming Virtual Labs for Replicas enhances DR efficiency by putting replica resources to use rather than having them sit idle waiting for a disaster to occur. Nutanix also enhances DR efficiency with a converged storage/compute appliance that minimizes space, power and cost – yet scales linearly to accommodate additional resource requirements either for DR or for the increased capabilities Veeam enables.

Synergistic Replication

Unlike the storage manufacturers which replicate from array to array, Veeam replication takes place at the VM level. This ties in perfectly with Nutanix’ VM-aware replication. Customers do not have to worry about cobbling together VM replication on top of array-level replication. The combination of Veeam and Nutanix gives them true converged DR at the application level for both storage and compute.

Both products also enable support for VMware Storage DRS (SDRS). Traditional array-based replication products tend to stumble with SDRS as VMs that should not be replicated still get moved to the ‘Replication LUN’ and are sent down the wire. Even worse, VMs that should be replicated fail to move over to the ‘Replication LUN’ and remain unprotected.

Backing up VMware vCloud Director

VMware vCloud Director is one of the leading platforms for private clouds. Using the vCloud Director API, Veeam displays the vCloud Director infrastructure directly in Veeam Backup & Replication, enabling backup of all vApp metadata and attributes, restore of vApps and VMs directly to vCloud Director, and support for restoring fast-provisioned VMs.

In addition to enhancing the Veeam proxies with respect to vCD, Nutanix also simplifies the vCD environment. The absence of LUN limits (Nutanix doesn’t need LUNs) means that as many VMs as needed can run on a single volume. Nodes can be added as desired without any changes in the configuration of vCloud Director.

Nutanix and Veeam Partnership

Nutanix and Veeam have been working together for some time to help bring their combined messaging to both customers and channel partners. Look for increased collaboration in the future.

 

Contribution: Much of the content of this article was derived from the 04/30/2013 post, Why Veeam and Nutanix? Fast, Low Impact, Results, by Dwayne Lessner.

 

See Also:

VM-centric Disaster Recovery Done Right. 05/17/2013. Partha Ramachandran. Nutanix Web Site.

Veeam Backup & Replication

 

 

 

April 7, 2013

Software is also eating the data center

Marc Andreessen’s famous August 2011 WSJ article, Why Software Is Eating the World, discusses how software companies, especially Silicon Valley firms, are disrupting industries across the planet. Most big data center players still cling to the hardware-based models of yore. But the growing ubiquity of the hypervisor as the new data center O/S means that software-defined technologies will increasingly challenge the status quo.

Software-Defined Data Center Attributes

Proliferating data center virtualization has revealed the necessity for simplified infrastructure design, implementation and support. Major data center players including EMC, Cisco, NetApp, HP, IBM, Oracle and Dell have responded with converged infrastructure (CI) solutions that combine compute, storage and network resources either as products or as reference architectures.

These solutions have found a very receptive market – VCE alone is exceeding a billion dollar run rate just three years after launch. But while they solve many of the efficiency challenges of a virtualized infrastructure, the CI dependency on storage hardware limits their ability to enable a next generation software-defined data center (SDDC) which is defined by the following attributes:

Convergence: An SDDC should embody true convergence across different tiers of data center applications, consolidating the infrastructure in the process. Converging storage and compute onto the same rack or even the same chassis still leaves two distinct tiers requiring an intermediate network to move data continuously between them. Storage controllers operating in a single box are meaningless in an SDDC. They need to be aggregated over multiple nodes to enable management and resiliency as part of a single global system.

Elastic Consumption: The software-defined data center is VMware’s version of private cloud. As such, it should mimic the public cloud in terms of elastic resource consumption. But separate storage and compute tiers require that either excess capacity be purchased up-front, or that forklift upgrades be incurred as demand increases.

Hybrid Agility: The three hybrid components of a SDDC include flash + disk, multiple hypervisors and private/public cloud interoperability:

  • Flash + Disk: Tying flash to the array makes it difficult to address certain workloads such as big data, to manage data on a lifecycle basis and to incorporate technology innovations.
  • Multi-Hypervisors: Despite the many benefits accruing from using a single hypervisor, organizations increasingly are deploying multiple options.
  • Private/Public Cloud Interoperability: Hybrid agility requires seamless exchange of workloads between private clouds and public providers.

Legacy storage solutions will find it hard to retrofit flash and public cloud storage into their offerings. Legacy system management services will find it hard to subsume management of multiple hypervisors within a single pane of glass. Design of a consumer-grade console to manage these hybrid environments requires fresh thinking.

Hyper-Convergence

Cloud providers such as Google, Facebook, Amazon, Microsoft Azure and Twitter all utilize custom-built servers with aggregated local storage rather than SANs. This environment, also known as hyper-convergence, is efficient, reliable, extremely scalable and low-cost.

But unlike the Internet juggernauts, it is impractical for enterprises to run their myriad applications on a custom-built distributed server environment. The Nutanix concept originated with a couple of the Google File System architects who realized that they could leverage the hypervisor to achieve the same hyper-convergence benefits for the masses. Over time, the engineering team gathered additional top talent from VMware, Oracle, Microsoft and most recently, Facebook.

Nutanix utilizes the hypervisor as a substrate where everything now runs as a service. The storage controllers themselves are virtualized onto the hypervisor right next to the workloads and data. This eliminates the traffic from server to shared storage device. And the virtualized storage pools enable capabilities such as VMware Fault Tolerance, High Availability and DRS to all work “out of the box”.

The Nutanix Virtual Computing Platform consolidates the compute and storage tiers onto one unified appliance that takes up only 2U of rack space. It accommodates four x86 servers, server-attached PCIe flash and high-capacity SATA drives. The result is reduced cabling, power and cooling requirements as well as reduced network traffic.

The best hardware-based CI solutions incorporate a GUI enabling effective collaboration between separate compute and storage teams. Hyper-convergence, along with consolidating multiple technologies, also abstracts the low-level intricacies within each functional silo. Policy and resource management are elevated to a level where they can be managed by a single data center team, enabling organizations to move away from a stovepipe IT staffing model.

SDDC Performance

Data center manufacturers like to argue that specifically designed hardware with custom ASICs enables performance at scale that is superior to software. While often true in the early stages of software innovations, history shows that superior ease-of-use is more important to consumers than a small performance advantage.

As an example, Java was initially much slower than C. But its versatility and ease-of-use eventually led to much greater market share. This phenomenon is amplified by Moore’s Law, which renders any initial performance advantages irrelevant. We saw this take place with virtual servers, which for some time now have run as fast as or faster than their physical counterparts, and we’re seeing it take place again today with virtual desktops. VDI is now much more dominant than server-based computing, and it’s increasingly eating into the market for physical PCs.

But Nutanix is far from religious about the software-defined everything mantra. Storage is virtualized without any intermediation from the VMware hypervisor, and includes PCIe pass-thru. Accessing storage hardware directly without going through the hypervisor significantly enhances performance for services requiring specific-purpose hardware.

Marketing Speak?

The SDDC terminology is not just marketing speak. As an analogy, think about what Apple did to phones, calculators, cameras, Rolodexes, Sony Walkmans, eReaders, etc. The iPhone converged all of these individual technologies using a software-defined platform that changes the keyboard on the fly to match whatever functionality is accessed.

Hyper-convergence is necessary to provide iPhone-like consolidation benefits to a software-defined data center. And in the process, it reduces both cost and complexity. Most importantly, fractional and elastic resource consumption facilitates a private cloud environment.

In this model, technology management is much more aligned with data center-level objectives. And rather than spending the majority of their time on infrastructure issues, the IT staff can work more closely with the business. This allows them to leverage the SDDC capabilities of speed and agility to achieve not just IT, but business objectives.

See Also:
The Nutanix Solution. Nutanix Web site.
VCE Vblock Demand Hits Billion Dollar Run Rate Three Years After Launch. 02/20/2013. EMC Press Release.
Converged Infrastructure Takes the Market by Storm. 08/22/2012. David Vellante. Wikibon.
HyperConvergence phase added to the Infrastructure Continuum. 08/20/2012. Steve Chambers. ViewYonder.
Why Software is Eating the World. 08/20/2011. Marc Andreessen. The Wall Street Journal.
Apache Hadoop. Wikipedia.

March 20, 2013

Moving across the channel

After 25 years in the IT channel, including positions at six different solutions providers, writing columns for three different channel magazines, and seats on partner advisory councils for several manufacturers – I’ve switched sides. I recently joined Nutanix to help build a world-class worldwide partner network.

Disruptive Technologies

This was not an easy decision. I was at Presidio almost five years and had the opportunity to work with many outstanding people. I’m proud of all the great work the organization has done for clients across the country, and am particularly pleased that Presidio was recently named the VMware 2012 Global Partner of the Year. But every so often a disruptive technology emerges that compels me to bet my career on it. Nutanix has developed such a technology.

The first time a disruptive product captured my attention was nearly two decades ago when Citrix introduced WinFrame. I abandoned all our other work and refocused my Novell Platinum business around Citrix thin-client computing (as it was called then). Six years later my brother and I sold the company after being named the first U.S. Citrix Partner of the Year.

In 2005 my friend, Gary Lamb, persuaded me to overcome my reluctance to give up a cushy ROI consulting gig by showing me VMware vMotion. I knew that virtualization would forever alter the computing landscape, and we formed a company focused exclusively on enterprise virtualization.

The first disruptive hardware platform for virtualization that I saw was Cisco’s UCS which, when it debuted in 2009, was panned by competitors and industry media alike, who scoffed that Cisco would never be able to compete in servers. Undeterred, I promoted UCS in my presentations, writings and other forums. And despite the widespread skepticism, UCS is now the world’s number three selling blade server and plays a pivotal role in the integrated infrastructure platforms of EMC/VCE, NetApp and Hitachi.

Nutanix

As virtualized data centers have continued to evolve, traditional SAN- and NAS-based architectures have become increasingly strained by the explosion of VMs and the resulting I/O needs of today’s enterprise data centers. Moving data between compute and storage tiers introduces unnecessary latency and performance degradation while leading to laborious administration, inflexibility and forklift upgrades.

Internet juggernauts such as Google, Facebook, Amazon, Twitter, etc. avoid this problem in their environments by not utilizing SANs. They instead use custom-built servers with aggregated local storage. Four years ago, a couple of the architects of the Google File System decided they could make this same type of technology available for the masses – virtualizing the storage controllers themselves inside the hypervisor.

The culmination of their efforts is Nutanix, a company that has built a virtual computing platform. It is simple to manage and scales with perfect linearity. When combined with minimal space requirements, low power and cooling needs, a very affordable price point, and integration into VMware vCenter for management – Nutanix is an ideal platform for the next-generation data center.

Virtual Desktops

While Nutanix is being deployed for high-performance computing requirements such as Hadoop clusters, its SAN-free architecture is driving a particularly quick adoption in virtual desktop environments.

Gartner recently studied 19 organizations that implemented VDI with either VMware View or with Citrix XenDesktop. Storage turned out to consume 40% – 60% of the entire VDI budgets, and every organization spent more on storage than expected.

Nutanix eliminates the challenge of properly sizing SANs for the demanding and variable storage requirements of VDI. And rather than facing a forklift upgrade to accommodate expanding virtual desktop environments, organizations can start small and then add nodes as PCs and laptops come up for refresh. These attributes make it possible to achieve both a significant ROI and short payback period for VDI initiatives while also improving the user experience.

The Nutanix Organization

Disruptive technology alone, even when augmented by the $70M of venture capital Nutanix has raised, is not enough to ensure success in our tumultuous industry. The company backs its products with a stellar team of employees including well-known VMware VCDXs such as Jason Langone and Lane Leverett. The corporate culture is one of passion, technology and commitment to both partner and customer success. The result is the fastest-growing infrastructure start-up of the last decade.

I’m thrilled with the opportunity to work the other side of the channel, particularly when fueled by the Nutanix rocket. I look forward to implementing the best of breed channel structure, communication and support that I’ve encountered from leading manufacturers over the decades, and to helping make our partners among the most successful in the industry.

See also:

Converged Infrastructure Vendor Nutanix Hires Former VAR To Develop Channel. 03/18/2012. Joe Kovar. CRN.

March 15, 2013

Ten mistakes that can kill a private cloud

Gartner’s Thomas Bittman wrote a blog post in August 2009 titled, If You Build a Private Cloud, Will Anyone Come? Unfortunately, the Field of Dreams all too often becomes a Field of Nightmares as organizations spend millions of dollars building private clouds – only to have them sit mostly idle. Here is a list of ten mistakes to avoid.

1) Failure to Understand the Business Requirements
Private cloud is business-focused, not IT-centric. IT staff cannot design the cloud in isolation. In order both to understand the business requirements and to identify opportunities, they must develop relationships with the business customers. Proactively questioning users enables them to discern what IT services will be of value. Only then can they design a cloud that will be utilized.

2) Business Unit Confusion about Private Cloud Value
Business users typically don’t have a good understanding of private cloud or how it can help them be more successful. They may not even value the top cloud attributes of speed and agility. IT must help them grasp how important these attributes are in achieving both efficiency and innovation. A private cloud not only enables them to respond more quickly to their customers, it allows them to experiment with new technologies without requiring large capital investments or months of time.

3) IT Staff Skepticism about Private Cloud Viability
CIOs tend to be adept at putting together wonderfully compelling presentations for senior management about how private cloud will transform the IT organization into a service model. But they often neglect to sell their staffs who are the ones tasked with building the environment.

A healthy skepticism is baked into most IT professionals who have become jaded from years of magic bullet promises. Private cloud’s complexity tends to set off alarms. Not only are the myriad technological pieces challenging in their own right, but the requirement for new configurations, processes and behavioral changes can quickly make a private cloud initiative overwhelming.

Staff resistance can translate into a more drawn-out implementation which in turn can spell cloud stall or outright project failure.

4) IT Staff Concerns that Private Cloud Will Automate Away Their Jobs
While they might not be the most glamorous jobs in IT, infrastructure tasks such as hardware management and software patching provide a visible and reassuring sense of purpose. Perceiving private cloud as a means of automating these functions is bound to generate at least some IT staff resistance.

Effectively addressing new business-driven needs requires that the IT staff focus on what is inside the VM rather than the technical details of disks, servers and networks. They must let go of the daily infrastructure maintenance and firefighting. A private cloud, rather than eliminating their employment, opens up many more opportunities to focus on higher business value tasks. They will be able to grow IT services in a much more effective and fulfilling manner.

5) Relying on Technology Manufacturers for Private Cloud Design
While CIOs get the importance of transforming IT to a service provider model, it’s easier said than done. Many look to technology manufacturers to enable the leap from virtualized environment to private cloud.

These vendors offer products with wonderful capabilities utilizing converged infrastructure, orchestration software, self-service portals and chargeback models. But they fall far short of making a private cloud.

Private cloud requires focus on higher-level service offerings, workflow and features that enable a consumption-based environment. For example, if a workload’s cost exceeds the requester’s approval authority, the request should automatically be routed to the appropriate parties for approval. This type of capability is missing from most cloud-based solutions.
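
A minimal sketch of what such routing logic might look like, assuming purely hypothetical roles and dollar limits (it is not tied to any particular cloud product’s API):

```python
# Hypothetical approval-routing rule for a private cloud request: if the
# monthly cost of a requested workload exceeds the requester's approval
# authority, the request is routed up the chain rather than rejected.
APPROVAL_LIMITS = {          # illustrative limits, in dollars per month
    "requester": 500,
    "manager": 5_000,
    "director": 50_000,
}

def route_request(requester_role, monthly_cost):
    """Return who must approve a workload request of the given cost."""
    if monthly_cost <= APPROVAL_LIMITS.get(requester_role, 0):
        return "auto-approved"
    for role, limit in APPROVAL_LIMITS.items():
        if monthly_cost <= limit:
            return f"route to {role} for approval"
    return "route to CIO for approval"

print(route_request("requester", 300))     # auto-approved
print(route_request("requester", 12_000))  # route to director for approval
```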

The private cloud architecture and products cannot be meaningfully evaluated in isolation or even against alternative solutions on a features or price basis. A private cloud initiative must start with identifying the business objectives along with the requirements to meet those objectives. Only then can the products be effectively evaluated – within the context of achieving the identified objectives.

6) Trying to Bite off too much at Once
Attempting to deploy a private cloud in one fell swoop is highly unlikely to address the myriad business needs. It can also result in compromises early in the implementation stages, such as abandoning SLAs and chargeback. These functions instead are handled manually with assurances that they will be automated down the road. Inefficiency and user dissatisfaction ensue while growth is inhibited.

A private cloud implementation should start small – targeted to specific business units with pressing needs that are likely to ensure utilization. But, the big picture should always be kept in mind including the projected customers (both internal and external) as the system and capabilities grow. And enterprise components such as services catalog, SLAs, security, chargeback, etc. should be incorporated from the start. Unanticipated variables may warrant modifications, but wholesale abandonment must be avoided.

7) Offering Too Much
Without a good understanding of the business requirements and constraints around commonly provisioned workloads, IT may try to include non-standard items on the “menu” between the users and the services. This leads to lots of customization and daily changes – not a good environment for private cloud.

Not every workload and IT service is a fit for private cloud. Starting with low-hanging fruit enables many small wins which builds credibility and enthusiasm while simultaneously reducing the cost of managing the traditional IT infrastructure.

8) Not Utilizing the Best Management Tools
Organizations frequently venture into private cloud with the same management tools they utilized in a virtual environment or, even worse, in their former physical data centers.

Virtualization tools such as VMware vCenter Operations Suite are essential to a private cloud in order to tell which virtualized components are acting abnormally. Private cloud additionally requires metrics to ensure appropriate speed and agility, including SLA performance. These tools also need to provide tenant transparency and Line of Business access. Accurately measuring services allows pricing that enables IT-as-a-Service without time-consuming negotiations and interdepartmental budgeting meetings.

In many cases, the new cloud tools can substitute for older tools no longer required. This frees up recurring maintenance expenses.

9) Failure to Embrace an IT-as-a-Service Mentality
Continuing to perceive its role as a static cost center will increasingly render IT irrelevant. Business units will instead utilize services external to Corporate IT which may be less effective, lack required security and compliance parameters, and can even be more expensive. They can also soak up corporate resources when IT is called in to resolve issues.

Transitioning to ITaaS requires that IT function in many respects like a public provider. This entails organizational change along with new processes and retooling of traditional roles. These changes, in fact, are far more important than the technology. IT leadership must drive both the ITaaS vision and the changes required to make the vision a reality.

10) Failure to Embrace the Public Cloud
Increasingly, public cloud services offer users attractive options for effectively doing their jobs. IT mandates to shut down these options only increase business unit dissatisfaction and resistance.

IT needs to embrace the public cloud – incorporating SaaS, PaaS and even IaaS where appropriate, but ensuring that security, compliance and recoverability standards are met. IT becomes the intermediary – helping with contracts, relationships, problem management and integration. As a trusted advisor to the business, IT should provide the best and most cost-effective services whether internal or external to the organization.

See Also:
Cloud Services can save you Money – if you’re Careful. 03/13/2013. Nancy Gohring. Computerworld.
Getting Private Cloud? Better Change Your Funding Model. 09/25/2012. Steve Kaplan. By The Bell.
Cloud: If You Can’t Beat It… 07/22/2012. Steve Kaplan. By The Bell.
If you Build a Private Cloud, Will Anyone Come? 08/09/2009. Thomas Bittman. Gartner.com.

Presidio’s Vishal Nangrani, Jeremy Oakey and Ryan Hughes all contributed to this article.

March 13, 2013

Is VMware really committed to end-user computing?

“…as far as the industry is concerned, EUC is VMware’s redheaded stepchild.”
– 03/11/13 Tweet by Tal Klein (@VirtualTal)

While I have much respect for Bromium’s Tal Klein, we don’t always see eye to eye. His recent Tweet prompted me to write a bit about VMware’s commitment to EUC.

Filling the Field Gap
VMware capitalized on a trend it was seeing of customers virtualizing their desktops on ESX, and coined the term VDI in 2006. Since then, the company has both grown and evolved the business to become much more comprehensive. Today, VMware End-User Computing (EUC) consists of a product family encompassing physical, virtual Windows, mobile and Web-based desktops.

The analyst and media consensus is that VMware and Citrix combine to dominate the VDI market, though reports differ about which company has the highest market share. And while I disagree with Tal’s contention that VMware EUC is widely perceived as a red-headed stepchild, I do agree that Citrix has been more successful in capturing EUC mindshare.

Unlike Citrix whose DNA is all desktop, VMware made its name in the data center and now also leads the industry in private cloud. This lack of EUC focus has been evident in the field where VMware reps typically fail to match the desktop acumen and evangelism of their Citrix counterparts.

VMware is now, with the biggest investment in its history, making an enormous effort to resolve this deficiency. The company is hiring hundreds of EUC-focused sales reps and SEs (many of them from Citrix) across the globe. And while its existing reps will continue to also push EUC, this new dedicated sales force is bound to give a lot more visibility to Horizon View and the other EUC products.

Commitment
At the recent VMware Partner Exchange (PEX) in Las Vegas, VMware prominently emphasized EUC as one of the company’s three primary initiatives with desktop-oriented keynotes, boot camps, solutions partner sessions, exhibits, eco-system partner presentations and executive summits.


VMware also has been investing in EUC technologies both internally and externally with recent acquisitions such as Wanova. At PEX, it announced an expanded EUC competency program that rewards VMware partners who devote the resources required to making their EUC practices successful.

The company is vigorously encouraging VDI partnerships with storage manufacturers by validating joint solutions as part of its recently announced Horizon View vFast Track Reference Architectures. And VMware continues to increase EUC collaboration with other leading industry manufacturers such as Cisco with their joint Office-in-a-Box initiative.

Organizational Challenges
The lack of a singular EUC focus does create some challenges for VMware that its competitor avoids. For example, VMware dominates the data center with an 85% virtualization market share. It should be leveraging this advantage by messaging an ability to utilize the same platform and management
tools from the server down to the desktop.

VMware vCenter Operations Suite (vCOps) is the fastest-growing VMware product of all time next to ESX. It would make sense for VMware to provide, at a minimum, a scaled-down version of vCOps for View with every copy of Horizon View. The company could then offer upgrades at an additional cost.

But the vCOps business unit has its own P&L to manage. From what I’ve been able to gather, that unit has been unwilling to take a hit to revenues by providing a free version of its product as part of Horizon View.

These types of organizational issues aside, VMware clearly is dedicated to the desktop market. Plummeting costs of VDI infrastructure along with new capabilities from products such as VMware Mirage ensure some exciting times ahead in EUC.

See Also:

  • The History of VDI. 06/27/2011. Vittorio Viarengo. Virtualization Journey.
  • Cisco Office in a Box Solution. Cisco White Paper.
  • Horizon Branch Office Desktop. VMware brochure.

March 7, 2013

Cloud Wars: VMware vCloud Suite vs. Cisco IAC + Cloupia

VMware vCloud Suite and Cisco IAC + Cloupia continue to emerge as the two dominant commercial cloud stacks. Organizations adhering to Gartner’s advice not to mix and match when building a private cloud will increasingly face a choice between VMware’s “top-down” and Cisco’s “bottom-up” approaches.

Best Friends
A Cisco SE told me not long ago, “VMware may be our best friend – but they’re not our only friend.” VMware’s July 2012 acquisition of the Software-Defined Networking company, Nicira, resulted in widespread media speculation that the two organizations would now find themselves at odds.

But long before the Nicira purchase, Cisco and VMware were already engaged in a networking skirmish. When introduced in 2009, the Cisco Nexus 1000V virtual switch was widely promoted to clients by VMware sales reps. Things quietly changed and for some time now, VMware reps have emphasized their own vSphere Distributed Switch (VDS) instead of the Cisco product.

VMware hasn’t said much about how Nicira will impact VDS or whether it will be incorporated into its vCloud Suite. Meanwhile, Cisco has evolved the Nexus 1000V to become the foundation for its cloud networking stack. Increasing integration is now anticipated with Cloupia.

And speaking of Cloupia, its November 2012 acquisition by Cisco added a subtext to the brewing cloud stack battle, since it competes directly against DynamicOps, which VMware acquired in July 2012.

Coincidentally, both of these products overlap with the preexisting suite capabilities of their new owners which can lead to confusion as to when to utilize them. DynamicOps (now called vCAC), for example, provides both self-service catalog and chargeback – capabilities already available in vCloud Suite.

Though Cloupia (now called CUIC) is not part of IAC, it is frequently sold in conjunction with the product. It provides overlapping capabilities with IAC such as a services catalog, orchestration and an automation framework.

Not unexpectedly, both acquisitions still require significant integration within their new product families, which further makes for difficult choices. As an example, vCAC does not yet extract all of the objects managed by vCloud Director. And IAC has not yet integrated with the pre-built automation of FlexPod provisioning in CUIC.

Differing Private Cloud Philosophies
Cisco maintains that everything starts with converged infrastructure. Its bottom-up private cloud approach is designed to provide more flexibility in working with multiple hypervisors, APIs and management tools.

Cisco also says that specifically designed hardware and custom-built ASICs provide superior performance – especially at larger scale. This is why switches replaced software bridging and why Cisco UCS does so well on VMware VMmark benchmark scores.

VMware’s messaging, on the other hand, focuses on the software-defined data center (SDDC). From VMware’s vantage point, abstracting all of the data center components from the underlying physical resources provides more flexibility and versatility. Specific networking, storage and compute equipment are no longer required. This top-down approach also allows for easier application of policies, such as security, across all hardware platforms.

In reality, the VMware and Cisco cloud stack approaches are probably much closer than the organizations’ marketing would indicate. Both manufacturers are well aware of the requirement to support diverse software and hardware platforms.

Choosing the Right Stack
An organization committed to VMware that utilizes network and compute products other than Cisco may be more inclined to implement vCloud Suite than IAC in order to maintain a consistent architecture. A committed Cisco networking shop, or one considering Cisco UCS, may find the Cisco story more compelling.

Most organizations considering private cloud probably utilize both VMware and Cisco products. If they have heavy automation requirements, they may gravitate toward CUIC which has an advantage in terms of providing out-of-the-box automation. This is particularly true when utilized in conjunction with the NetApp FlexPod. Cloupia was one of the earliest FlexPod Validated Management Partners.

Organizations with primary requirements for an easy-to-implement services catalog that centers around virtualization and virtual containers may be more inclined to go the VMware route. IAC is highly customizable, but can require more time to accommodate individual needs.

These use cases aside, an organization considering private cloud should not get mired in comparing cloud stack features. A private cloud is pointless, after all, if the business units refuse to utilize it. Building a private cloud based upon products, technologies or architectures tends to lead to low adoption rates.

Designing an optimal private cloud starts with identifying the business objectives and associated requirements. Only then should organizations seriously investigate the appropriate architecture and equipment, evaluating them within the context of the business objectives they want to achieve.

Thanks to Presidio’s Vishal Nangrani who contributed to this article.
See Also:

Martin Casado on Changing Networking. 02/14/2013. Stu Miniman. Wikibon.

VMware’s SDN Dilemma: VXLAN or Nicira? 01/13/2013. Greg Ferro. Network Computing.

Cisco’s Nexus 1000V Evolves to a Networking Stack Foundation. 02/02/2012. Steve Kaplan. By The Bell.

Cisco UCS Sets World-Record Cloud Computing Performance. 09/08/2012. Cisco Brochure.

March 5, 2013

The time I called it really wrong

CRN’s Joe Kovar published an article today about Seagate’s EVault cloud storage business joining OpenStack. That reminded me of a story…

1998
In 1998, I ran a solutions provider business out of Benicia, California that was just beginning to focus exclusively on Citrix technologies. One day, a white-haired gentleman in his 50’s named Phil Gilmour walked into our shop armed with an extensive list of products he wanted to purchase for his new venture.

Phil had recently acquired some backup software out of Canada. He told me about his vision of going around to the local banks and backing them up through the Internet. Phil’s previous experience consisted of running his CPA practice for the past 25 years. It didn’t seem to me that he knew much about banks – and he certainly knew very little about technology.

So I’m thinking to myself that this is a nice older guy who has the best of intentions, but that he wouldn’t stand a chance in our cutthroat industry. I gave him maybe four months until he gave up either from lack of sales or from overwhelming technical frustrations or both.

“Look,” I said. “There’s no reason to buy so much equipment up front. Why don’t you start small and then you can always acquire more products as the business warrants.”

Phil followed my advice. He scaled back his purchase and went on his way.

Today
The company Phil started was called, of course, EVault. Within six years it had become one of the fastest-growing technology companies in North America. Phil sold the business in 2007 to Seagate for $185M.

As Joe’s article reveals, Phil’s venture is still going strong though he has long since moved onto other things. And while I’ve frequently overestimated the potential of various firms during my many years as a student of business, EVault is certainly my biggest underestimate ever.

January 15, 2013

My 5 cloud and virtualization predictions for 2013

I’m jumping on the bandwagon and, for the first time, posting my virtualization/cloud predictions for the new year:

Increasing gravitation toward two primary Cloud stacks:
While Microsoft, OpenStack and CloudStack will all continue to gain customers, it will be VMware's vCloud Suite vs. Cisco's IAC + Cloupia (now CUIC: Cisco Unified Infrastructure Controller) as the two dominant platform choices for private clouds. VMware will leverage its virtualization dominance to remain the leader in private cloud, though lack of clear messaging around the delineation between use cases for vCD and vCAC (formerly DynamicOps) combined with Cisco’s strong converged infrastructure story will help Cisco’s solution grow more quickly.

VDI deployments will escalate: The continued increasing density advantages resulting from Moore’s Law combined with storage innovations such as converged infrastructure, flash and software-based accelerators will reduce the CapEx cost of virtual desktops to a tipping point where 2013 finally becomes the Year of VDI. A harbinger: Presidio just received word today that we won the Atlanta Public Schools VDI RFP which includes 24,000 zero clients and 8,000 concurrent virtual desktop users.

Virtual desktops will be integrated into private clouds: Today, like their physical counterparts, virtual desktops are typically treated as organizational silos. But expect one or more solutions that enable self-service provisioning and chargeback capabilities for virtual desktops.

The Multi-Hypervisor fad will lose luster: The use of multiple hypervisors itself will continue to slowly increase, but the hype as a cost-savings strategy will die off. Organizations will increasingly realize that other than in certain technology silos, the management and disaster recovery cost of maintaining multiple hypervisors far outweighs any perceived licensing cost advantages.

ROI/TCO cloud calculators will emerge: Financial tools and standardized metrics will emerge to help organizations make economic comparisons between virtualized environments and private clouds, and between private clouds and various public cloud alternatives.
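
As a rough illustration of what such a calculator does, the sketch below compares cumulative costs under purely hypothetical capital, operating and per-VM-hour figures; a real comparison would of course use actual quotes and utilization data:

```python
# Toy TCO comparison between a private cloud (up-front capital plus modest
# operating cost) and a public cloud (no capital, higher per-VM-hour rate).
# All figures are hypothetical placeholders, not vendor pricing.
def private_cloud_cost(months, capex=150_000, opex_per_month=4_000):
    return capex + opex_per_month * months

def public_cloud_cost(months, vms=100, hours_per_month=730, rate_per_vm_hour=0.12):
    return vms * hours_per_month * rate_per_vm_hour * months

for months in (12, 24, 36, 48):
    print(f"{months:>2} months: private ${private_cloud_cost(months):,.0f} "
          f"vs public ${public_cloud_cost(months):,.0f}")
# With these placeholder numbers the cumulative curves cross near month 32.
```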

November 11, 2012

My foreword to Desktops-as-a-Service: Building the Model

Desktops-as-a-Service: Building the Model, by Jason Langone, Kanuj Behl, Phil Ditzel and Dwayne Lessner, is now available for order on both iTunes and on Amazon. I was honored to be asked to write the Foreword for this excellent book, and with Jason’s permission, am posting it here. 

 

Is next year finally going to be the Year of VDI? Probably not, but it will reflect the continued momentum of desktops-as-a-service (DaaS).

I’ve been involved in the desktop virtualization space since the debut of Citrix WinFrame in 1995. The Novell networking reseller business I ran shifted its emphasis to desktops, and we began to encourage our clients to replace their PCs with centrally hosted server-based computing solutions. We developed ROI modeling to show the savings resulting from eliminating PC upgrades along with remote office servers and supporting infrastructures. Although we were huge advocates of the technology and were named the Citrix Partner of the Year, we sold the business without ever seeing SBC go mainstream.

In 2005, I co-founded another consulting business focused on deploying VMware ESX. I thought I was done with desktops, but then VDI showed up and I’ve been back advocating the virtual versions again. “The year of VDI” is now a phrase smirked at annually by industry media. But while I agree with the popular consensus that VDI itself has limited market potential, I am very bullish on the prospects for DaaS.

Why the Time is Right for DaaS

Public cloud providers increasingly offer DaaS though, as the authors point out, they are somewhat handicapped by Microsoft licensing policies around multi-tenancy. The biggest DaaS deployments today are taking place within organizations.

One of the appeals of DaaS is that it does not require much of a conceptual leap to make the jump from virtual desktops. When you think about it, the virtual desktop already exhibits most attributes of cloud computing: it can be provisioned on-demand from shared resource pools, accessed over the Internet, and scaled up or down instantly as required.  

Enabling self-service provisioning along with metering to facilitate chargeback transforms virtual desktops to DaaS. Multi-tenancy is added to the mix for most public cloud DaaS providers as well as for some internal IT organizations. 

DaaS, whether on-premise or publicly hosted, has many compelling benefits. For one thing, it addresses the reality that a “desktop” is no longer just a Windows-based machine. Desktops now include Web-based applications along with storage services such as DropBox for sharing corporate information. Computing devices run the gamut from smart phones to zero-clients to iPads, and are often owned by users as part of BYOD.

DaaS provides the framework for IT to ensure corporate standards are maintained around security, compliance and recoverability. 

Whether on-premise or via cloud providers, DaaS utilizes a chargeback system whereby users pay for the desktop resources they consume. The public cloud model enables organizations to eliminate capital expenditures entirely, while internal DaaS can potentially slash on-going operating expenses.

DaaS chargeback drives efficiency in two ways. Access to accurate desktop cost information helps business units more effectively plan and budget. And receiving a monthly bill makes users much more cognizant about optimizing resource consumption.

Both public and on-premise versions of DaaS benefit from the unrelenting consolidation efficiencies of Moore’s Law which states that the number of transistors per chip doubles roughly every two years. On the edge, this added power doesn’t buy us much – PCs already have more capabilities than most users will ever utilize. In the data center, though, we still deal with very expensive CPU, memory, storage and space/power.  

Doubling the number of VMs per server host every couple of years slashes the costs of moving virtual desktops to the data center. In fact, it’s really better than this. Continued industry innovations augment Moore’s Law with accelerating increases in virtual machine density, making DaaS still more economically attractive.
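
A back-of-the-envelope sketch of that effect, using hypothetical host costs and densities, shows how the infrastructure cost per desktop falls with each doubling:

```python
# Hypothetical illustration: if a host costs roughly the same at each refresh
# cycle but supports twice as many virtual desktops, the infrastructure cost
# per desktop roughly halves every cycle. Figures are placeholders only.
host_cost = 20_000          # dollars per server host (assumed constant)
desktops_per_host = 75      # starting density (hypothetical)

for cycle in range(4):      # four ~2-year refresh cycles
    cost_per_desktop = host_cost / desktops_per_host
    print(f"cycle {cycle}: {desktops_per_host} desktops/host, "
          f"${cost_per_desktop:.0f} per desktop")
    desktops_per_host *= 2  # density doubles roughly every two years
```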

Building-the-Model

As anyone involved in the SBC or VDI space knows, implementing a successful enterprise environment is not easy. The challenge is that, unlike the data center, we now have thousands of users, each with their own experiences, expectations and perceptions.

When it comes to users, perception is reality. One of our early SBC implementations was for a small school district in San Jose. It failed because during the pilot, a teacher’s keyboard happened to break. Although we gave her a replacement and showed her that her old keyboard had just suffered a natural death, she went around to all the other teachers telling them, “Don’t let them put Citrix in your classroom. It breaks keyboards”.

When rolling out DaaS, you only have one chance to get it right. Just one disgruntled user can potentially kill a project. When a bunch of users become upset because of poor performance, dropped sessions, or an inability to access their old information – they quickly generate a negative
vibe that is extremely difficult to overcome. While I don’t have the hard data to support it, I suspect that the majority of VDI projects (which are simpler than DaaS) probably slow or stall completely at the pilot phase.

A successful DaaS environment mandates that every element be well designed and tested from the context of its role in supporting the overall architecture. Langone, Behl, Ditzel and Lessner bring a wealth of invaluable field experience that enables both exceptional planning and implementation.

The authors cover basics such as infrastructure, connection brokers, multi-tenancy, user profiles, and applications. They also dive into chargeback, identity management, and appliances. An entire section on the cost model facilitates the all-important financial understanding and justification of a DaaS initiative. Another section on the operational model shows how to monitor, manage and administer the DaaS environment.

In Closing

This book describes the architectural, financial and organizational elements necessary for a successful DaaS initiative. It is written from the perspective of engineers and focuses on enabling readers to bridge the gap between VDI and DaaS.  I hope you enjoy reading the book and wish you success in building a robust and profitable DaaS offering.

Steve Kaplan

November 1, 2012

Pano Logic ceases operations


This has not been a resounding year for pioneering virtual desktop device manufacturers. First Wyse, the inventor of the Windows terminal, was purchased by Dell, and now the originator of the zero-client, Pano Logic, appears to have suddenly gone out of business.

I first came across Pano Logic when the VMware partner I ran, AccessFlow, was presenting at a small trade show on the San Francisco Peninsula in 2006. I was immediately intrigued with their shiny silver zero-client boxes and the illuminated blue on/off button.

We signed up as a Pano partner and once the units started shipping in 2007, I insisted that our salespeople carry them on all of their sales calls. In those days, non-IT people tended to have a hard enough time grasping the concept of a virtual server let alone a virtual desktop. The Panos were an easy, albeit inaccurate, way to convey the VDI concept.

When AccessFlow was purchased by INX the following year, we really started getting traction with the Pano devices. We were the number one Pano Logic reseller four quarters in a row – and a commemorative, but functional, gold-plated Pano was shipped to me each time. The Pano Logic company thrived and picked up a $12M investment from Goldman Sachs in 2008 and another $20M from Mayfield in 2010.

Pano Logic was a slick and very reasonably-priced solution, and the devices did indeed help facilitate virtual desktop sales. While there were, as with many new technologies, sometimes support issues, the company tended to be extremely responsive in resolving them. The Pano co-founder personally worked with our team to fix a problem in one particularly difficult situation. 

Pano Logic utilized its own connection broker which was fine in the early days, but which met huge resistance once VMware View started utilizing PCoIP. VMware reps began to perceive Panos as competition even though the devices required VMware ESX on the back end. But the loss of VMware field support negatively impacted the company's momentum.

Pano Logic's sales were further impacted when Wyse and other specialty thin-client manufacturers began to make PCoIP-based zero-clients. These units enabled the minimal-maintenance benefits of Panos, but tended to provide better performance.

In retrospect, the death knell for Pano was probably rung once the manufacturing giants such as Cisco, Samsung and, most recently, LG, came on the scene with their own zero-client devices.

While I passed on the gold-plated Panos to our various offices that sold the most units each quarter, I do still have a commemorative golf club with a Pano device as a putter. When an artist dies, his works often escalate in price. I wonder if my Pano putter will be worth something now?

See Also:

The Strange Case Of A $38 Million Enterprise Company That's Gone Missing. 11/01/2012. Julie Bort. Business Insider.

UCS Central facilitates global data center management

Since the Cisco Unified Computing System (UCS) first debuted in early 2009, I’ve written several articles on this site and for other venues extolling its unique value as a purposefully designed platform for hosting virtual infrastructure. Despite a previous lack of any server experience, Cisco is now tied with IBM as the world’s #2 provider of x86 blade servers. Cisco upped the ante this morning with three new announcements enabling consolidated management of data center operations across the globe.

Cisco UCS Manager 2.1

The Cisco UCS B (blade) Series, despite initial widespread skepticism, now has a 15% market share for all blade servers by revenues. Designed under the direction of VMware co-founder Ed Bugnion, the UCS was built over a period of 5 years as a superior platform for hosting virtualized data centers. But the continuing evolution of virtualization and big data technology demands ever more memory, multiple adapters, specialized adapters and high disk spindle counts in local storage. These needs are better served by rack mount form factors.

UCS Manager 2.1 extends the unified fabric capabilities of UCS to provide “single-wire” connectivity to Cisco UCS C-Series rack servers. This enables a significant reduction of switch infrastructure and cabling requirements along with physical NICs and HBAs. Cisco says that its solution offers around a 50% per-node savings over a deployment of 100 typical rack servers.

UCS Manager 2.1 brings the operational benefits of blades such as rapid deployment, cable reduction, and common access to rack-mount servers. Virtualization architects now can focus on server resource requirements such as the ratio of cores to memory, spindle quantities, type of IO, etc. without concern for the shape of the sheet metal.

Another element of UCS Manager 2.1 is integration with a new meta-management product, Cisco UCS Central.

Cisco UCS Central


The many unique attributes of Cisco UCS have contributed to the propagation of virtualization as the data center standard at even the largest enterprises. But this success has driven a demand for new capabilities. 

Large organizations may require multiple UCS units due to geographical or departmental considerations, or because of the scalability limitation of 160 servers per UCS. Each unit includes UCS Manager which enables policy-based control, but which inevitably overlaps with other UCS Manager instances. 

Cisco UCS Central acts as a manager of managers. It provides a policy repository that sits above the UCS Manager instances, creating global policies that can then be managed locally. UCS Central provides a single pane of glass with visibility across all global systems, allowing centralized management of inventory, faults, logs and server consoles.

Administrative settings are configured as global policies to which individual UCS Managers can subscribe. This guarantees consistent configurations for all domains.
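
The sketch below illustrates this “manager of managers” idea with a simplified, hypothetical data model; it is not Cisco’s actual UCS Central API or object schema:

```python
# Simplified data-model sketch of a policy repository that domain managers
# subscribe to. This is NOT the UCS Central API; it only illustrates the
# idea that subscribed settings come from the global store (guaranteeing
# consistency), while unsubscribed settings remain locally managed.
GLOBAL_POLICIES = {
    "firmware_version": "2.1(1a)",           # hypothetical values
    "ntp_servers": ["10.0.0.1", "10.0.0.2"],
}

class DomainManager:
    def __init__(self, name, subscribed, local_overrides=None):
        self.name = name
        self.subscribed = set(subscribed)     # globally managed settings
        self.local = local_overrides or {}    # locally managed settings

    def effective_setting(self, key):
        """Resolve a setting: global if subscribed, otherwise local."""
        if key in self.subscribed:
            return GLOBAL_POLICIES[key]
        return self.local.get(key)

emea = DomainManager("emea-dc", subscribed=["firmware_version", "ntp_servers"])
lab = DomainManager("lab", subscribed=[], local_overrides={"firmware_version": "2.0(4b)"})
print(emea.effective_setting("firmware_version"))  # 2.1(1a) from global policy
print(lab.effective_setting("firmware_version"))   # 2.0(4b) local setting
```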

The next release of UCS Central will introduce global service profiles. Applying service profiles from one geographic domain to another provides the foundation for a global disaster recovery solution.

UCS Central is free for the first five domains, and is then licensed on a per-domain basis from the sixth domain onward.

Storage Innovations and Ecosystem Support

Cisco UCS now supports three new storage capabilities: Multi-Hop FCoE from the UCS environment to the array, FC Zoning as part of UCS Manager 2.1, and Unified Connect to support multiple protocols (FCoE, iSCSI, NFS, CIFS) on a single port.

Accompanying system management enhancements and integrations are provided by UCS ecosystem partners such as VMware, Microsoft, Citrix, Oracle, EMC, IBM, Cloupia and Splunk, among many others.

UCS Central, UCS Manager 2.1 and the growing ecosystem support enable unified management of UCS domains and of thousands of servers across disparate data centers. This positions UCS as an optimized platform not just for hosting virtual infrastructure, but for hosting virtualized data centers and cloud computing on a global scale.

 

See Also:

Yes, Cisco UCS Servers are that Good. 09/27/2012. Bill Shields. Cisco Blog.

Cisco UCS Blades Outpace HP (and other facts). Cisco.com

UCS spurs shared virtualized data center vision for VMware, Cisco. 05/21/2009. Steve Kaplan. SearchVMware.com

Cisco UCS – a Disruptive Platform. 05/05/2009. Steve Kaplan. DABCC.

October 12, 2012

Would you like some cloud with that?

 


 

VMware’s vCloud Suite promotion is a brilliant move in the chess game of market share. It puts the foundational products for VMware’s cloud strategy into hundreds of thousands of customer environments while offering both a vision and tangible path for transformation to software-defined datacenters.

The Complexity of Selling Cloud

It is not uncommon for industry manufacturers to position themselves as cloud players simply through rebranding. The technical term is “cloudwashing”. My #3 favorite cloudwashing example is Oracle’s Exalogic Elastic Cloud. My #2 favorite is the renaming of HP’s BladeSystem Matrix to CloudSystem Matrix. My all-time favorite is the rebranding of Wyse’s thin clients as “cloud clients”. 

The plethora of marketing misinformation contributes to a widespread lack of understanding of the transformational capabilities of cloud computing. VMware needed to differentiate its offering while providing both a complete and tangible product-based solution. 

VMware’s answer was the vCloud Suite, which bundles products representing the entire set of cloud infrastructure capabilities, but at a lower cost than the assembled piece parts – and priced strictly per processor rather than per VM. The suite comes in three versions: Standard, Advanced and Enterprise.
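
To see why per-processor pricing matters, consider a rough comparison under assumed, hypothetical prices and consolidation ratios: per-processor cost stays flat as VM density grows, whereas per-VM cost grows with every new workload.

    # Hypothetical prices, for illustration only.
    def per_processor_cost(hosts, sockets_per_host, price_per_socket):
        return hosts * sockets_per_host * price_per_socket

    def per_vm_cost(vms, price_per_vm):
        return vms * price_per_vm

    hosts, sockets, vms = 10, 2, 300  # 15 VMs per socket
    print(per_processor_cost(hosts, sockets, 7500))  # 150000, flat as density grows
    print(per_vm_cost(vms, 600))                     # 180000, grows with every VM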

Until December 15th, every customer with vSphere Enterprise Plus receives a free upgrade to vCloud Suite Standard, while every vSphere Enterprise customer can purchase the upgrade at a 71% discount. vCloud Suite Standard includes vSphere Enterprise Plus, vCloud Director and vCloud Networking and Security. 

The vCloud Suite promotion may not necessarily imbue customers with a vision of the business benefits private cloud enables, but it will prompt them to learn about its IT-specific efficiencies. For example, administrators can manage far more virtual machines, and downtime resulting from server failure decreases from five minutes to around a minute. Automating approval and other processes slashes VM provisioning times from an average of five days to minutes.

The Software Defined Datacenter

VMware differentiates its approach to cloud computing by emphasizing the software-defined datacenter (SDDC). A Virtual Data Center (VDC) is a construct of vCloud Director that represents an entire data center. And just as a physical server can host multiple virtual machines, a SDDC can host multiple VDCs.

The SDDC defines an application along with all of the resources it needs, enabling the data center to be controlled entirely by software. In practical terms, the SDDC automatically maps a virtual machine to the appropriate resources such as storage, network, firewall, intrusion detection, load balancers, availability, backup, DR, compliance, etc.
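
One way to picture this mapping is as a VM definition that carries policy references which software resolves into concrete resources at deployment time. The sketch below is conceptual only; the policy names and catalog entries are invented for illustration and do not represent any VMware API.

    # Conceptual only: a VM definition referencing policies, and a catalog
    # that software uses to resolve those references into real resources.
    vm_definition = {
        "name": "web01",
        "compute": {"vcpus": 2, "memory_gb": 8},
        "storage_policy": "tier-1",           # resolved to a datastore/array
        "network_policy": "dmz",              # resolved to VLAN and load balancing
        "security_policy": "pci-web",         # firewall and intrusion detection
        "availability_policy": "ha-plus-dr",  # HA restart plus replication
    }

    policy_catalog = {
        "tier-1": {"datastore": "gold-ds01"},
        "dmz": {"vlan": 210, "load_balanced": True},
        "pci-web": {"firewall": "web-only", "ids": True},
        "ha-plus-dr": {"ha": True, "replicate_to": "dr-site"},
    }

    def render_placement(vm, catalog):
        """Expand each *_policy reference into concrete settings."""
        return {key: catalog[vm[key]] for key in vm if key.endswith("_policy")}

    print(render_placement(vm_definition, policy_catalog))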

The vCloud Suite enables the SDDC. VMware vSphere provides software-defined compute and memory. It combines with Site Recovery Manager to enable software-defined storage and availability. VMware vCloud Networking and Security (vCNS) provides, of course, software-defined networking and security.

VMware vCloud Director enables secure multi-tenancy as well as placement and load balancing of software-defined datacenter services. VMware vCenter Operations provides automated cloud operations management. VMware vFabric Application Director (now AppD) enables automated applications provisioning and vCenter Chargeback provides metered chargeback reporting and accountability. 

VMware vCloud Connector is a free download enabling application migration between clouds while vCenter Orchestrator, included with VMware vCenter Server, enables orchestration with third-party systems. 

Although vCloud Suite only debuted at VMworld San Francisco in late August, VMware has already announced significant management enhancements at VMworld Europe this week, including “multi-cloud infrastructure provisioning” and “IT benchmarking”. Especially noteworthy is the inclusion of vCloud Automation Center (vCAC), which is based on DynamicOps. It will be interesting to see how VMware handles the overlap of vCAC capabilities such as catalog and chargeback with those of vCloud Director.

While I haven’t heard anything regarding End User Computing and vCloud Suite, I would like to see integration there as well. Organizations should be able to automate and meter the provisioning of virtual desktops utilizing the same tool sets as they do for virtual servers. 

Upgrade Path

Those of us who sold ESX in the early days of virtualization spent a lot of time educating clients about the concept. But as the overwhelming economic advantages of virtualization quickly became well-known, the sale regressed into one of fulfillment rather than evangelism for many partners. Cloud computing is at a stage where the sale again is very conceptual in nature.  

Many cloud vendors emphasize automation and provisioning, but they really mean scripting. The rapid proliferation of vCloud Suite Standard in data centers across the globe will spur conversations about the policy-driven architecture and advantages of a SDDC. As customers increasingly understand the benefits, they will take advantage of the easy upgrade path to the more complete Advanced and Enterprise versions. This should help VMware leverage its dominance in virtualization to establish a similar position in cloud computing.

 

See Also:

VMware Fills Gap in Cloud Management with vCloud Suite. 10/10/2012. Chris Preimesberger. eWeek.

The Cloud Backlash Could be Deep. 10/06/2012. Mark Thiele. GigaOM.

VMware Showcasing vCloud Director in New Cloud Bundle. 09/11/2012. Kevin McLaughlin. CRN.

Managing the Software-Defined Datacenter. 08/27/2012. Kit Colbert. VMware Office of the CTO. 

 

September 25, 2012

Getting private cloud? Better change your funding model


The traditional data center is a mishmash of equipment, tools and operating systems – and project-based funding is largely to blame. This budgeting anachronism was sustainable when servers were physical, but causes problems in a virtualized environment. It’s anathema to private cloud.

Transitioning to private cloud requires a chargeback funding model. Chargeback aligns with the dynamics of shared resource pools and facilitates both improved budgeting and planning along with more efficient resource consumption. 

Funding Physical Infrastructure

Organizations often start the virtualization journey with “low-hanging fruit” servers and gradually work up to Tier 1 machines. IT tends to approach these mixed physical and virtual environments from a mostly physical perspective. Much of the architecture, tools, processes and equipment used to run the physical data center continues to be applied to the virtual environment.

Some of these physical data center vestiges, such as traditional servers, backup products and security processes, may not be optimal for a virtual infrastructure – but they at least work. The outcome is more painful when organizations continue to utilize project-based funding.

Project-based funding generally includes an annual budget enabling current IT services along with projected growth. Business units requesting new applications or technology projects provide additional monies, but frequently insist upon the specific equipment they feel best meets their individual needs. Little importance is placed upon how the products interoperate with the data center environment as a whole.  

The result is a data center full of silos containing overlapping or redundant equipment that is both expensive and difficult to manage efficiently. Seventy percent of the traditional IT budget goes just to “keep the lights on”. It is rather humorous to recall that Gartner’s number one energy-saving recommendation at its 2007 Data Center Conference was to turn off servers that appear idle and see if anyone complains (searchcio.com: Top 10 ways to save energy in the datacenter).

Funding Virtual Infrastructure

Project-based budgeting works in a physical data center despite the drawbacks because each business unit “owns” its servers and generally utilizes local or otherwise dedicated storage. But this model quickly becomes problematic as organizations virtualize.  

Virtualization eliminates the need to purchase departmental-specific resources. Business units can no longer even identify their equipment: Virtual machines migrate across hosts; storage moves between shared arrays; virtual switches direct and monitor traffic; and virtual load-balancers and firewall appliances replace their physical counterparts. 

IT fulfills new project requests by simply increasing resource pool capacity. At least, this is often possible in the initial stages of virtualization. But virtualized data centers become subject to a phenomenon known as Jevons Paradox, whereby reduced technology costs lead to increased demand. Users quickly figure out that IT can now “spin up a VM” rather than going through an extensive and expensive procurement cycle, and their server requests escalate.

As resources reach capacity, IT has no option but to ask the next service requestor to bear the burden of required expansion. Pity the business unit with a VM request just barely exceeding existing capacity. IT may ask it to fund a whole new blade chassis, SAN or Nexus 7000 switch.

This does not bode well for cloud. Rather than gaining instant and automatic access to the required infrastructure, the business unit either has to cough up the money for far more capacity than it requires, or wait until the next business cycle or until other departments fund the purchase.

Private Cloud Advantages

Organizations implement private clouds for two primary reasons. The first, and most important, is the ability to align more flexible and cost-effective computing capabilities with business objectives such as increasing top-line revenues.

The second major driver for private cloud is to remedy virtualization inefficiencies such as lengthy provisioning times. Sure, a virtual machine can be spun up in minutes, but putting it into production is a whole other matter. A 2011 study sponsored by CA Technologies, The State of IT Automation, showed that 47% of the virtualized organizations queried reported taking a week or longer to provision a virtual machine. In some extreme cases, departments frustrated by delays have been known to revert to purchasing cheap pizza box servers.

Provisioning a production VM first requires that the requester obtain approvals. Then the server team needs to acquire the necessary LUNs from the storage group, the VLANs from the network team, and the firewall configurations from the security folks. Management, load-balancing and regulatory compliance requirements can cause further delays. 

A private cloud takes all of this process and standardizes, automates, and optimizes it in a repeatable manner. The time to provision a virtual machine, along with the associated storage, network and security components, decreases from days to minutes.
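
A minimal sketch of that transformation, using invented step names rather than any particular vendor's API: the manual hand-offs above become ordered, repeatable steps in a single pipeline.

    # Each step enriches the request; the bodies are placeholders.
    def approve(req):           return {**req, "approved": True}
    def allocate_storage(req):  return {**req, "lun": "lun-042"}
    def allocate_network(req):  return {**req, "vlan": 310}
    def apply_security(req):    return {**req, "firewall": "standard-web"}
    def deploy_vm(req):         return {**req, "vm_id": "vm-12345"}

    PIPELINE = [approve, allocate_storage, allocate_network, apply_security, deploy_vm]

    def provision(request):
        """Run every step in order, in minutes instead of days."""
        for step in PIPELINE:
            request = step(request)
        return request

    print(provision({"owner": "marketing", "template": "web-server"}))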

Funding Private Cloud

Virtualized organizations often utilize capacity management in conjunction with modified budgeting processes to ensure adequate resources for upcoming projects. While this model can work well for a virtual data center, it is insufficient for private cloud.

The very definition of cloud, as stated by the National Institute of Standards and Technology (NIST), includes “measured service” as one of the five primary attributes. NIST emphasizes, “Typically, this is done on a pay-per-use or charge-per-use basis”. Yet a survey conducted late last year of 257 IT managers showed that only 40% of those with or planning private clouds had or were “developing some kind of chargeback method” (@joemckendrick 04/12/2012 ZDNet).

Another term for private cloud is IT-as-a-Service (ITaaS). IT must mirror public cloud providers by charging users for resource consumption. An effective chargeback environment reduces time-consuming negotiations and interdepartmental budgetary meetings. A BU purchases computing resources as needed, and when the project completes, billing stops. Knowing resource costs in advance is advantageous for BUs in terms of budgeting and planning as well as in pricing products dependent upon IT capabilities.

Without the natural consequences resulting from a pay-as-you-go model, users tend to over-consume. A chargeback model drives efficiency because users naturally want to minimize their costs. When a BU manager sees, for example, that her department is being charged each month for 20 VMs it no longer uses, she takes the initiative to have them decommissioned.
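
A simple chargeback calculation makes the incentive obvious; the rates below are made up purely for illustration.

    # Hypothetical monthly rates per resource unit.
    RATES = {"vcpu": 15.0, "memory_gb": 5.0, "storage_gb": 0.10}

    def monthly_charge(vm):
        return (vm["vcpus"] * RATES["vcpu"]
                + vm["memory_gb"] * RATES["memory_gb"]
                + vm["storage_gb"] * RATES["storage_gb"])

    # Twenty forgotten 2 vCPU / 8 GB / 100 GB VMs keep billing every month.
    idle_vm = {"vcpus": 2, "memory_gb": 8, "storage_gb": 100}
    print(20 * monthly_charge(idle_vm))  # 1600.0 per month until decommissioned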

Embracing Cloud Competition

Public cloud ensures the end of the competition-free environment that IT has enjoyed for decades. Business units are increasingly considering cloud-based alternatives such as SaaS and IaaS. IT, rather than fearing or resisting the public cloud, should embrace it by implementing an efficient and effective hybrid cloud strategy.

Providing an accurate chargeback model makes it easier for business units to compare internal costs with off-premises options. IT can lead the way by helping them evaluate which venues make the most sense for hosting various workloads while still ensuring corporate standards of performance, security, compliance and disaster recovery.

 

Link Alander, CIO of Lone Star College System; Rob Bergin @rbergin, a systems administrator at a Fortune 100 company; and Thomas Gamull @MagicalYak and Jeremy Oakey, both at Presidio, all contributed to this article.

 

See Also:

Podcast with VMware CTO Steve Herrod. 08/22/2012. Dana Gardner. BriefingsDirect.

The Software-Defined Datacenter. Video on www.vmware.com

The New Challenges of Capacity Management In Virtualized Cloudy IT. 06/11/2012. Taneja Group.

Virtual Machines put the ‘Fun’ in Dysfunctional. 05/22/2012. Enterprise Networking Planet.

Cloud Computing: Why You Can't Ignore Chargeback. 11/05/2010. Bernard Golden. CIO.