Cisco UCS vs. HP Matrix: strategic vs. tactical approach to virtualization

"Revolutionary. Cutting edge. State of the art. These words and phrases are bandied around for so very many products in the IT field that they become useless, bland, expected. The truth is that truly revolutionary products are few and far between. That said, Cisco's Unified Computing System fits the bill."

Paul Venezia, InfoWorld. November 10, 2009.

 

Following months of rumors about its "Project California," Cisco made a big production last March of unveiling the Unified Computing System as transformative to the data center and "…as important to the industry as the personal computer was in the early '80s." Top executives from industry leaders such as Intel, EMC, VMware, Microsoft, Red Hat and others participated in the fanfare along with customers, partners and analysts. Cisco CEO John Chambers said, "This new Unified Computing System brings together the concepts of compute, network, virtualization and storage in a way that we think isn't just a product announcement, but we think is the future on which others will build".

Thirty-five days later, HP issued a press release announcing the HP Matrix with an obvious jab at Cisco: "…the industry's first all-in-one software, server, storage and networking platform that allows customers to get the benefits of a converged system without a 'rip and replace' strategy for all their existing data center investments." HP's Web site and partner communications continue, albeit without much substance, to aggressively position Cisco UCS as inferior to Matrix.

The several months that have now passed since both products began shipping enable an updated perspective. The overwhelming popularity, elegance, reliability and ease of deployment of UCS evidence the three years of investment Cisco made in developing an optimized virtualization platform. The complexity, limitations and lack of much excitement around Matrix, on the other hand, bear out a rushed repackaging of existing HP products in response to the Cisco announcement.

A Difference in Business Philosophy

Cisco approaches markets in terms of long-term architectural strategies rather than from the perspective of individual products or even product categories. Cisco foresaw the coming pervasiveness of data center virtualization and made partnership overtures to both HP and IBM years ago to develop a more comprehensive computing architecture (Light Reading 12/9/2009). After being turned down by the server manufacturers, Cisco decided to undertake the effort on its own.

According to the Cisco book published early this year, Project California: A Data Center Virtualization Server, UCS is "one of the largest endeavors ever attempted by Cisco". The Cisco-funded startup, Nuova, developed UCS under the leadership of VMware co-founder and former CTO Ed Bugnion to engineer a virtualization hosting platform unifying the traditional data center functional silos of servers, storage and networking. Cisco UCS incorporates myriad innovations in architecture, performance, unified fabric and management.

The initial HP Matrix press release appears to be the first public mention of the product; it is hard to imagine that it resulted from a long-term data center strategy. The HP-sponsored April 2009 IDC white paper, HP BladeSystem Matrix: Enabling Adaptive Infrastructure, says "HP is not introducing any brand-new technologies". Matrix not only lacks innovation, it feels like a work in progress. Even the "adaptive infrastructure" messaging used to introduce Matrix last April appears to have been replaced by "dynamic infrastructure".

HP Matrix – Time to Swallow the Red Pill

The HP BladeSystem Matrix Starter Kit includes the HP BladeSystem c7000 enclosure, HP Virtual Connect Flex-10 Ethernet modules, HP Virtual 8Gb 24-Port Fibre Channel modules, an HP ProLiant BL460c (commonly referred to as the CMS, or Central Management Server) and 16 HP Insight Software packages. It also includes optional HP storage.

HP BladeSystem Matrix Starter Kit

  • HP 10000 G2 Series rack
  • HP BladeSystem c7000 enclosure
  • HP ProLiant BL460c, commonly referred to as the CMS (Central Management Server)
  • HP Virtual Connect Flex-10 Ethernet modules
  • HP Virtual 8Gb 24-Port Fibre Channel modules
  • HP StorageWorks 4400 Enterprise Virtual Array Starter Kit (optional)
  • Infrastructure Operating Environment licenses (Insight software)
  • Onsite BladeSystem Matrix Starter Kit Implementation service
  • 14 existing HP Insight software packages and 2 new ones
    • HP Insight Software (IS) DVD – Integrated Installer 3.10
    • New HP Insight Recovery 1.00
    • New HP Insight Orchestration 1.0.2
    • HP Systems Insight Manager (HP SIM) 5.3.1
    • HP Insight Dynamics – VSE (ID-VSE) A4.1.2
    • HP Insight Power Manager (IPM) 2.0
    • Management Information Base (MIBs) 8.20
    • Performance Management Pack (PMP) 5.2.2
    • HP Insight Rapid Deployment software (RDP) 3.83.16 and 3.83.17
    • Remote Support Software Manager (RSSWM) 5.21
    • HP Insight Server Migration software for ProLiant (SMP) 3.70
    • System Management Homepage (SMH) 3.00
    • HP Version Control Repository Manager (VCRM) 2.2.0
    • HP Virtual Connect Enterprise Manager (VCEM) 1.32
    • HP Virtual Machine Management (VMM) 3.7.1
    • WMI Mapper 2.60

HP BladeSystem c7000 enclosure

The version of the HP BladeSystem c7000 enclosure tested with, and certified for, Matrix runs only on single-phase power, which may force modifications in some older data centers. Because the first bay in the enclosure is reserved for the required CMS, either seven full-height bays plus a single half-height bay, or 15 half-height bays, remain available for server blades.

Central Management Server (CMS)

The Central Management Server is the brain of all operations and runs the Insight software. With no fault tolerance or clustering and little to no redundancy, it keeps all the eggs in one basket. The CMS requires either Windows Server 2003 R2/SP2 or Windows Server 2008 R1. Restoring a failed CMS requires significant effort, and an unrecoverable CMS requires an on-site visit from an HP Engineering Services engineer along with a minimum one-week engagement to get the OS back online.

Virtual Connect

A primary component of HP BladeSystem Matrix, the Virtual Connect Flex-10 module, provides some of the stateless capabilities that UCS enables, along with a reduced cabling requirement. It has some negatives as well. For example, to move profiles between all the Matrix-managed enclosures (chassis), all enclosures must be identical: the same number of VC modules in each enclosure, the same number of uplink cables all plugged into the same port numbers, and the same VLAN configuration on all enclosures. No enclosure can have greater bandwidth requirements than the others. Only four enclosures can be added to a single VC domain. Two chassis, each its own VC domain (the default), cannot be merged into one domain; one must be selected as the master and the other wiped out and then added to the existing domain.

Most importantly, the nature of Virtual Connect is to set up trunked ports between the core switch and VC modules for passing all VLAN traffic. This allows server teams to create additional LAN and SAN networks inside a VC domain and gives the server administrators control of the edge network. From a virtualized data center perspective, however, this scenario is disadvantageous in that it detracts from the networking team's responsibility for applying consistent network operations, policies and troubleshooting procedures. It is contrary to the joint efforts of Cisco and VMware in developing the VN-Link technology that enables the network team to effectively take back control of the vSwitch environment.

Insight Software

The HP Insight Software that comes packaged with Matrix is a collection of 14 pre-existing software packages and only 2 new ones. Page 8 of the HP BladeSystem Matrix Compatibility Chart shows the currently supported managed-node operating systems. Customers are locked into the small number that can be deployed, managed and controlled.

As the 33-page Insight Software Installation checklist guide shows, supported operating systems and databases are very limited. For example, Footnote 1 of Table 2.2 on page 11 warns that "ID-VSE, VCEM, IO, and HP IR do not support CMS installation to 64-bit Windows Server 2008." Insight Orchestration requires Internet Explorer 6.0 SP3, and the CMS requires Microsoft SQL Server.

Overall Matrix options are quite limited as well. VMware vSphere, for instance, is supported in technology preview only. A Matrix CMS supports only up to 250 logical servers, whether physical or virtual (search for "250" in the HP BladeSystem Matrix Overview). While it is possible to combine multiple CMS units to reach an upper limit of 1,000 logical/physical servers, they are not clustered and do not share information, and server profiles cannot be moved from one CMS to another. This limits Matrix to smaller organizations not wishing to utilize VDI; VDI is impractical in any case, since Matrix lacks automated provisioning support for desktop operating systems.

HP Matrix Implementation Service

Setting up all of the HP Matrix hardware and software is, not unexpectedly, very complex. A June 17, 2009 InfoWorld review says, "The setup and initial configuration of the Matrix product is not for the faint of heart." Matrix includes a mandatory two-week Onsite BladeSystem Matrix Starter Kit Implementation Service performed by an HP-certified Matrix professional from HP Engineering Services. Yet two weeks is still a short window for many organizations attempting to bring in all of the storage, network and server team players who need to provide input for the setup – making implementations particularly challenging.

A customer cannot perform the Matrix install without a certified HP BladeSystem Matrix engineer. HP also highly recommends HP Education Services for customer training and education, along with additional technical services. Matrix is QA/QC-certified to support only specific firmware, driver and server BIOS levels. HP recommends that customers not update these components without first contacting the HP Matrix support line to ensure the updates will not negatively affect the overall Matrix infrastructure.

 

Cisco UCS vs. HP Matrix Matrix

 

Enterprise scalability
  • Cisco UCS: 40 chassis, 320 blades – tens of thousands of VMs
  • HP Matrix: 250 total logical servers. Up to 4 CMS units can be combined to reach 1,000 logical servers, but with no clustering or information sharing; server profiles cannot be moved from one CMS to another

Redundancy
  • Cisco UCS: All components redundant
  • HP Matrix: The Central Management Server has no fault tolerance or clustering and little or no redundancy

Memory
  • Cisco UCS: 96 GB half-width blade and 384 GB full-width blade (8 GB DIMMs)
  • HP Matrix: 144 GB with 8 GB DIMMs or 192 GB with 16 GB DIMMs on BL490c half-height blades¹; 256 GB on BL685c (AMD) blades

"Closed" architecture limitations
  • Cisco UCS: Requires Cisco servers, CNAs and Fabric Interconnects for optimal performance
  • HP Matrix: Requires one of the following HP ProLiant blades: BL260c, BL280c, BL460c, BL465c, BL490c, BL495c, BL680c or BL685c²

vNIC & vHBA support
  • Cisco UCS: Up to 128 each with the Palo adapter (56 vNICs per half-slot server today)
  • HP Matrix: LAN (Ethernet): 16 x 10 Gb downlinks to server ports; SAN (Fibre Channel): 16 x 8 Gb server ports, 2/4/8 Gb auto-negotiating

OS support for management software
  • Cisco UCS: None required
  • HP Matrix: Windows Server 2008 Enterprise Edition, 32-bit; Windows Server 2003 Enterprise Edition R2/SP2, 32-bit

Database support for management software
  • Cisco UCS: None required
  • HP Matrix: Microsoft SQL Server 2008; Microsoft SQL Server 2005 SP2; Microsoft SQL Server 2005 Express Edition SP2

Hypervisor support
  • Cisco UCS: Any x86-based hypervisor; particular advantages from tight integration with vSphere
  • HP Matrix: VMware ESX Server 3.5.0 Update 4; VMware ESX Server 4.0 (pilot and test environments only); Windows Server 2008 Hyper-V (though not yet supported by Insight Recovery)

Guest OS support (server)
  • Cisco UCS: Any
  • HP Matrix: Windows Server 2008 Datacenter Edition, 32-bit and x64; Windows Server 2008 Hyper-V Datacenter, x64; Windows Server 2003 Enterprise Edition R2/SP2, 32-bit and x64; Red Hat Enterprise Linux 4 Update 7, 32-bit, AMD64 and Intel EM64T; Red Hat Enterprise Linux 5 Update 3, 32-bit, AMD64 and Intel EM64T; SUSE Linux Enterprise Server 10 SP2, 32-bit, AMD64 and Intel EM64T

Guest OS support (VDI)
  • Cisco UCS: Any
  • HP Matrix: None (no Matrix automated provisioning support)

3rd-party development
  • Cisco UCS: XML-based API (see the sketch after the footnotes below)
  • HP Matrix: None

QoS
  • Cisco UCS: Yes
  • HP Matrix: None

Minimum cables required per chassis (incl. FC & redundancy)
  • Cisco UCS: 2
  • HP Matrix: 6

Maximum cables potentially needed per chassis (incl. FC & redundancy)
  • Cisco UCS: 8
  • HP Matrix: 34

FCoE
  • Cisco UCS: Yes
  • HP Matrix: No

Ability to deliver native network and storage performance to VMs via hypervisor bypass
  • Cisco UCS: Yes
  • HP Matrix: No

Network traffic monitoring & application of live-migration-aware network and security policies
  • Cisco UCS: Cisco VN-Link / Nexus 1000V
  • HP Matrix: None

Mfg. support
  • Cisco UCS: 1 year
  • HP Matrix: 3 years

search.twitter.com hits, 12/15/09 – 12/22/09, in English [-ROIdude (my Matrix/UCS inquiries)]
  • Cisco UCS: 54
  • HP Matrix: 12

¹ The HP BL490c half-height blades support up to 144 GB with 8 GB DIMMs, or 192 GB with 16 GB DIMMs. They utilize 3 DIMMs per channel, meaning that above 96 GB the entire memory bus speed drops to 800 MHz. Additionally, the BL490s have no RAID controllers and the SSD drives are not hot-pluggable. Cisco UCS, on the other hand, uses patented Cisco Extended Memory Technology which enables up to 384 GB on a full-width Intel Nehalem-based blade without sacrificing performance or requiring very expensive 16 GB DIMMs.

² While the c7000 will work with any HP ProLiant blade, Matrix only works with the blade models listed.
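
The XML-based API row in the matrix above deserves a concrete illustration. Below is a minimal sketch of a session against the UCS Manager XML API, which exposes methods such as aaaLogin, configResolveClass and aaaLogout over a single HTTP endpoint; the hostname and credentials are placeholders, and production code would of course verify TLS certificates and handle errors.

```python
# Minimal sketch: log in to UCS Manager's XML API and list blade inventory.
# Hostname and credentials are placeholders; error handling is omitted.
import xml.etree.ElementTree as ET
import requests

UCSM_URL = "https://ucsm.example.com/nuova"  # UCS Manager XML API endpoint

# aaaLogin returns a session cookie that authenticates subsequent requests.
login = requests.post(
    UCSM_URL, data='<aaaLogin inName="admin" inPassword="password" />', verify=False
)
cookie = ET.fromstring(login.text).attrib["outCookie"]

# configResolveClass fetches every managed object of a given class,
# here the physical blades.
query = f'<configResolveClass cookie="{cookie}" classId="computeBlade" inHierarchical="false" />'
resp = requests.post(UCSM_URL, data=query, verify=False)
for blade in ET.fromstring(resp.text).iter("computeBlade"):
    print(blade.get("dn"), blade.get("model"), blade.get("totalMemory"))

# Release the session when done.
requests.post(UCSM_URL, data=f'<aaaLogout inCookie="{cookie}" />', verify=False)
```

The same API underlies everything the UCS Manager GUI does – service profiles, pools and policies are all managed objects reachable the same way – which is what makes third-party integration practical.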

 

Cisco UCS – Revolutionizing Data Center Virtualization

Much has been written describing Cisco UCS both in this blog and in many other publications – obviating the need to go into details. At a high level, UCS is the culmination of years of development within Cisco/Nuova while also reflecting its close partnership with VMware. UCS was designed from the ground up as an optimized hosting platform for a virtualized data center. It integrates tightly with vSphere 4 to deliver an enterprise hosting platform enabling even the largest organizations to feel comfortable about virtualizing their data centers.

Despite its significant technology advances, Cisco UCS is surprisingly simple to install (HealthITGuy's Blog 11/25/2009, How Long Does it Take to Add UCS?). It utilizes only two (redundant) switches per 40 chassis and includes a common GUI, making it easy for the server, network and storage teams to coordinate their efforts – yet be guided by role-based and resource-based management policies. A back-of-the-envelope cabling comparison appears below.
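
To make the scaling difference concrete, here is a rough sketch in Python. The per-chassis minimums (2 cables for UCS, 6 for Matrix) come from the matrix above; treating the fabric-interconnect uplinks as a fixed extra four cables for UCS is my own simplifying assumption (the comment thread below digs into when more links are warranted).

```python
# Back-of-the-envelope cabling comparison based on the minimum figures in the
# matrix above. The fixed 4 northbound uplinks for UCS are an assumption.
def ucs_min_cables(chassis: int, fi_uplinks: int = 4) -> int:
    # 2 cables per chassis to the redundant fabric interconnects,
    # plus a shared set of uplinks from the interconnects to the core.
    return 2 * chassis + fi_uplinks

def matrix_min_cables(enclosures: int) -> int:
    # 6 cables per enclosure; each Virtual Connect enclosure uplinks on its own.
    return 6 * enclosures

for n in (1, 4, 10, 40):
    print(f"{n:>2} chassis: UCS {ucs_min_cables(n):>3} cables, Matrix {matrix_min_cables(n):>3} cables")
```

Because the UCS uplinks are shared at the fabric interconnects, the gap widens as chassis are added, which is the crux of the scaling argument.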

While Cisco UCS had already been running at IT infrastructure provider Savvis and other beta customers at the time of its announcement this past March, it appears that the first implementation of HP Matrix was at Stein Mart in the late summer. A December 9, 2009 BusinessWeek article reports that over 100 companies now are using UCS, and I suspect this is a quite conservative number. HP has not published Matrix sales figures, but supposedly a lack of qualified HP implementation engineering resources limits deployments to only a couple a month. If true, this means that Matrix production systems are likely somewhere in the 12 – 15 unit range.

Cisco UCS is a truly enterprise-class virtualization platform that, unlike traditional servers built for the physical world, is inspiring IT organizations with the confidence to embrace a completely virtualized data center strategy. This is not by accident. At Cisco's December 8, 2009 Financial Analyst Conference, Chambers said, "If you're reacting to what a competitor does, you're looking out the rearview mirror. You're three to five years behind." HP Matrix has a long way to go.

 

Author Disclosure: I work for a professional services company which is also a leading Cisco partner. I researched this article carefully, but welcome any corrective feedback.

  Author Note: 07/27/2010:  Part II now on-line at:  http://www.bythebell.com/2010/07/cisco-ucs-vs-hp-bladesystem-matrix-an-update.html

   


54 Responses to Cisco UCS vs. HP Matrix: strategic vs. tactical approach to virtualization

  1. Steve Kaplan says:

    Thanks Trey, I appreciate that. Even if I didn’t work with UCS, it’s easy to find a ton of information on the product (including a published book and customer/partner reviews). It took a lot of research to figure out just what Matrix is.

  2. Jase says:

    The table was quite informative, but the article itself was obviously biased towards Cisco which was disappointing.
    If you want to do an article, maybe explain the Cisco offering in the same level of detail as the HP offering and maybe address what HP has said about Cisco, instead of saying it just has tight integration with vSphere, it's revolutionary, etc.
    However thank you for disclosing your relationship with Cisco 🙂

  3. Steve Kaplan says:

    Jase, Thanks for your comments. As I mentioned in the article, so much has already been written about UCS that it just didn’t seem necessary to dive into details yet again. ViewYonder already answered the HP Web site criticism of UCS so I didn’t really see the need to rehash that. Some HP partners with whom I spoke mentioned HP’s arguments against UCS, but I don’t have anything in writing I can reference. The Matrix itself is such a strange “product” in that it’s a repackaging of a large number of former products that I thought I would primarily focus on unraveling its contents and how the major aspects compare with UCS.

  4. mac says:

    Hi,
    a couple of things before I make my points in reference to the article.
    first I’d like to say that I actually like UCS and HP Bladesystem (also dell particularly egenera mgmt).
    secondly I don’t believe in zealot approach to vendor offerings. Which is best for the client is best for me.
    thirdly no vendor has the ultimate solution (mileage will vary).
    fourth I have no affiliation with HP and would be quite happy with Cisco UCS or Dell Poweredge blades
    That said fair discussion is the key to all things. So here are a few things I think don’t quite fit.
    1. You are comparing apples with oranges. UCS is a (very) good blade design, you should be comparing it with HP comparable bladesystem design (not matrix). If you want to compare matrix to something then the (also well designed) VCE Vblocks would make more sense. The target market for vblocks / Matrix is the private cloud with integrated storage etc.. ready to deploy put it on the floor solution <-- this is my main point
    2. As you pointed out matrix is a combination of already available products (but integrated). These products are being used by clients already (individually) although seemingly more complicated to setup (at first), people adapt and learn. So we see that clients seem to like them (I only have the HP market share to judge this by and my conversation with clients). HP are simply wrapping them up around converged Infrastructure (further info below).
    3. I believe (someone else will confirm) that with the use of Flex-10 you can get 64 vNics (16 ports with 4 on each) for ethernet not the 16 you put in the table.
    4. Nexus 1000v works with HP bladesystem (to my knowledge - i could be wrong), so it can be used on either solution.
    5. FCoE support will be in Q1 2010 - I have heard (can't confirm that). There is no rush for that anyway.
    6. The HP power mgmt (thermal logic) is also very good and covers the components in the chassis. I think that would be a useful comparison in the article as power is a big issue in Data centers. I believe that UCS has power mgmt but i'm not sure whether it goes to the degrees of Bladesystem (UCS guys will tell you - i honestly don't know).
    7. The new stuff around flexfabric, resource pools and smartgrid seems to provide a very holistic approach to data center. I'm not sure what offerings other vendors have. Although Panduit (who work with cisco) have a very compelling story around racks/power in data center. That is a wider discussion than blade designs/deployments though.
    I'm not interested in starting a war, just thought i'd make these few points. Thanks for the article, lots of interesting points. i was actually thinking of writing one myself as i think UCS and Bladesystem are the leading chassis/blades. I believe however they are tackling it from different angles - different clients will align with different angles 😉

  5. Bob Olwig says:

    Steve, thanks for the article and the research. I also have learned a lot from the comments–you have some knowledgeable followers! HP & Cisco (very much strategic partners not that long ago!) each offer some compelling technology with their strengths and weaknesses. I’m sure you will agree that 2010 will be an interesting year helping customers sort out the differences and what’s best for them. Happy Holidays!

  6. dan says:

    you really don’t help cisco’s story by making this so obviously biased towards cisco…and as the above comment points out you are wrong on a couple of the points. i would really like to see a non-biased write-up. cisco’s offering is very strong, you don’t need to twist the truth quite so much to make a point.

  7. Steve Kaplan says:

    Mac,
    1) As far as your main comment goes in comparing apples with oranges, it was HP’s continued comparison of UCS to Matrix that inspired this piece. Clearly HP sees UCS as the competition, as the Matrix was conceived in response to UCS long before the debut of Vblocks. While Matrix includes optional HP storage, it works with any brand. Likewise, UCS works with any storage brand – VCE Vblock happens to be a version tested and validated with EMC.
    2) I agree that the individual Matrix products are popular and widely in use. Matrix itself, however, appears to have very limited adoption due to its complexity. There is also a huge customer excitement and industry buzz around UCS that appears to be entirely lacking around Matrix (try search.twitter.com for the latest comments on both).
    3) I believe that 16 is the accurate number, but I welcome correction if not.
    4) VMware vSphere is currently supported in technology preview (experimental) mode only, meaning Nexus 1000V is not currently supported. Also, if you hit the HP Web site link included in the beginning of my article, you’ll notice that HP prominently slams the Nexus 1000V – certainly implying that it is not needed with Matrix.
    5) I have heard that FCoE support will be included. I’ve also heard more capabilities are coming for UCS. I wanted to write an article about present technology, not future.
    6) I don’t know about the comparison between power management, but would be interested to learn – perhaps another reader will respond. I do know that the UCS was designed to be very energy efficient, and the minimal cabling and switching requirements alone would make it difficult for any other blade system to compare.
    7) Again, I haven’t researched flexfabric, resource pools and smartgrid, although I will research them if no other reader responds with clarification.
    I appreciate your comments. If you write an article with a different perspective, I’ll certainly look forward to reading it.

  8. Steve Kaplan says:

    Bob, I have long been an HP fan myself and agree the company certainly has some compelling technologies. Unfortunately, I don’t believe the Matrix is anywhere near the list. It appears to have been a kneejerk reaction to Cisco’s UCS, but enterprise virtualization success requires a very strategic approach.

  9. Steve Kaplan says:

    Dan, I replied to the comment suggesting I was wrong on some points correcting his misconceptions. While as I stated in my disclosure, I work for a large Cisco partner, I did try to be accurate and compare the good and bad of both products in the Matrix matrix (I just couldn’t find much good about Matrix). During my research, I spoke with partners from both HP and Cisco to learn their perspectives. I’d appreciate knowing which part of the article “twists the truth”.

  10. mac says:

    Steve,
    I appreciate the thoughtful discussion we are having.
    1) HP have criticized UCS (unfortunately that is what vendors do). They then go on to talk about Matrix as being the step forward in the evolution of the data center (whether that is right is open for discussion). HP Bladesystem also work with a multitude of storage systems and therefore is worth a comparison with UCS in your table.
    Personally if i was talking to a client i wouldn’t position UCS Vs Matrix as (to me) it is 2 different markets/configs. It would be Cisco UCS Vs HP bladesystem and VCE vblock Vs HP Matrix. I think that makes more sense to clients/customers (or at least to me)
    2. I agree with you that UCS has the buzz and matrix is more complex. I’ve followed twitter/google wave discussions on UCS. Matrix discussion are few and far between. However I still think they are different target markets/products. I think that Matrix fits into HP’s (very wide) overall converged infrastructure product portfolio (including flexfabric, resource pools and smartgrid)and not as focussed as cisco UCS strategy
    3. As i understand it flex10 allows you to present 4 nics per port on a 16 port flex10 module (64 ports therefore). You can also have multiple Flex10 modules in the chassis
    4. i wouldn’t compare it with Matrix, so here i would say that HP bladesystem does work with VMware. As for the 1000v with matrix (again i wouldn’t compare it with UCS, but) it is a design consideration as to whether you actually do need it, some would say yes others no and HP slamming it doesn’t mean it won’t work with Matrix 😉 (i don’t know if it does or not). However 1000v does work with HP bladesystem with 1 exception around VNTag (i think that is right)
    5. The question is what does UCS FCoE give you today? reduction in cables, reduced server power requirements, converged infrastructure and 10GigE – at the server edge. Virtual Connect gives you that as well without FCoE – at the server edge. There is still work on CEE (DCB whatever you call it) for full edge to core configuration. Again this is only my opinion (for what that is worth). Might be worth listing flex10 as a different approach in your table for convergence.
    6. from what i understand there is a lot of power mgmt across components in the HP bladesystem chassis.
    7. The new offerings/features do seem to attack the issues of the data center holistically. however they are part of a wider strategy for the data center.
    Like i said i wouldn’t compare UCS with Matrix, however if i did produce a table it would be UCS Vs HP bladesystem and vblock Vs Matrix. In fact i think you’ve given me an idea 😉

  11. mac says:

    Steve,
    Having said all that i’m not a big fan of tables. I think sometimes we are forced into producing these things because of FUD by vendors (on all sides) that confuse Consultants never mind clients. I did enjoy your article as there are very few decent discussions on the subject.

  12. Steve Kaplan says:

    I couldn’t resist creating a Matrix matrix. 🙂

  13. Steve Kaplan says:

    Mac, thanks again for your detailed comments. In the interest of saving time, I don’t want to respond to them all. The primary difference of opinion we have is that I completely and totally disagree that UCS should be compared to BladeSystem. BladeSystem is a server designed for a physical world. UCS was developed over a 3-year period as an optimized hosting platform for virtual infrastructure that includes breakthrough innovations in performance, fabric unification and management. While UCS certainly includes server components, its capabilities and purpose go far, far beyond the BladeSystem world. Trying to compare the two would be silly (which is why HP doesn’t do it).

  14. mac says:

    Steve,
    that’s cool. i think we do disagree on Blade chassis, FEX (IOM), Fabric Interconnect. I see these components as being built to simplify management and increase scalability rapidly (320 physical blades in a single domain – better than Bladesystem or the Matrix) over a large blade deployment.
    This design has as much benefit for bare metal installs (as someone from cisco mentioned to me, which i agree with) as virtualized installs.
    The increase in memory via the ASIC in the full height blade is definitely built for virtualization as it allows greater memory in the blade. The integration with 1000v (that works with Bladesystem anyway) is also a virtualization benefit as it allows the host-based VM network management to be managed/used by vCenter (across a massive number of blades).
    UCSM allows service profiles to move across that domain, bladesystem with VC can do the same (over fewer blades as you pointed out).
    I’m not sure as to where the “capabilities go far beyond the bladesystem world”, different – yes. That goes back to what i was saying about the different angles from which HP and Cisco are approaching the Data Center, clients mileage will vary, no solution is the best fit for all. It has been an interesting discussion.
    anyway ciao.

  15. Aubrey says:

    UCS blade systems are an immature, unproven platform. HP and Dell Blades are quite the opposite.
    Cisco needs to prove the UCS platform to be Enterprise ready and reliable. That is not the case as of today.

  16. Steve Kaplan says:

    Aubrey,
    I appreciate you taking the time to comment, though I have to disagree completely. Remember that UCS was developed over a period of 3 years which doubtlessly included an abundance of testing. UCS B-Series was announced last March after already being utilized in beta by 10 organizations. One of those organizations, Savvis, is a global leader in providing IaaS with over 4,000 customers including 40% of the Fortune 100. Savvis has been so impressed with UCS technology that it is utilizing UCS as the cornerstone of its private cloud platform. UCS has been deployed at over 100 organizations overall, including 6 with which I’ve had some personal involvement. Every customer of whom I’m aware is very happy with the robustness, versatility and reliability of UCS as an optimized hosting platform for virtual infrastructure. In summary, Cisco has done an outstanding job of proving UCS to be enterprise-ready today.
    For more information about this, I suggest reading the November 7 post on this blog site, “Addressing concerns about the ‘newness’ of Cisco UCS”.

  17. John says:

    Totally agree with Aubrey, how can you compare a first generation bladeserver with the 20 years market experience of HP/Compaq’s ProLiant Servers, are you serious? Your article misses lots of details about the HP offering that you totally fail at making a comparison.
    The well respected theregister.co.uk did a better job:
    http://www.theregister.co.uk/2009/09/02/ucs_needs_a_bra
    or
    http://regmedia.co.uk/2009/09/02/cisco-ucs-sags.jpg
    That doesn’t look like 3 years preparation, that looks like a first generation model….

  18. Steve Kaplan says:

    John,
    You say that my article “misses lots of details about the HP offering that [I] totally fail at making a comparison”, and that the Register did a better job by showing a photo of a sliver of 16 UCS chassis at VMworld last September. First, I would like you to make good on your assertion and supply those details that I missed. Second, how does one grainy and unexplained photo do a better job of comparison than my quite detailed article? Remember that the UCS for VMworld was set up before the product began publicly shipping, in a ridiculously short time, and in a makeshift data center in the hall of Moscone Center. It was one of the largest virtual data centers ever set up in one place, hosting 37,000 VMs (which were all provisioned, I believe, in less than 2 days). This is 148 times more VMs than Matrix could have hosted without setting up multiple CMS units, which would have not been clustered and unable to share information. Even if willing to suffer these limitations, Matrix still could have only handled 1/37th the number of VMs needed for the show (assuming of course that it could have been set up in time for them to be provisioned – a not very likely assumption). The Register doesn’t explain the details behind the grainy photo. Maybe someone put in the rails incorrectly in the chassis in the rush to set it up. Maybe it was a bad camera or just a bad picture. As mentioned in the post I referenced for Aubrey, though, the UCS display was so impressive that many attendees flocked to it just to take pictures of it. If you do a Google search on VMworld UCS you’ll see scores of much clearer pictures and videos showing a very elegant implementation, but here is just one: http://www.flickr.com/photos/cisco_pics/3887435777/. Also keep in mind that UCS won the Best of VMworld 2009 Gold Award – apparently the judges did not share your concerns.
    Again, my article compares Cisco UCS with HP Matrix, not with ProLiant Servers. ProLiant Servers are, well, just servers designed for a physical environment. Your premise is that the 20-year history of the blade component of Matrix somehow makes it superior to UCS, which uses vastly superior new technology as part of an optimized platform for virtual infrastructure. I don’t buy it, and customers apparently aren’t buying it either.

  19. Pete says:

    Steve, thanks for an informative article. I’ve hoped to see a side-by-side of UCS and Matrix for some time now. It’s pretty tiresome wading through all the FUD coming from competitors rather than letting their products speak for themselves. If all the competition can do is bad-mouth you, then they have already lost the market.
    Let’s hope HP and Dell can step up their game rather than act as petulant name callers more often than not.

  20. Steve Kaplan says:

    Pete, thanks for the compliment and comments, which I think are right on. I do have to say, though, other than The Register article, “Dell bitchslaps Cisco over UCS…”, I’m not aware of any UCS criticisms coming from Dell. I’m actually somewhat intrigued by the Dell PAN System which utilizes Dell R710 servers & Egenera PAN Manager software & professional services, but just don’t know a lot about it.

  21. mac says:

    I have to agree with steve regarding the register picture – I don’t get what the story is in that article – except that someone didn’t put the blades in right.
    I do disagree with steve’s implication that other designs are years behind - i can’t see the argument here. As I’ve said before they are different and will be used by different clients. Let them stand on their own 2 feet.
    The proliant blades (which are described as 20 years old) are based on the nehalem processors as UCS is (without memory extension for the full height blade). if you look at the BL2x220c as an example that is also an innovative design built for virtual environment - that is not a 20-year-old design. In fact some would say you have greater choice with both Intel and AMD chips.
    If you look at HP (adaptive infrastructure - now converged infrastructure) and IBM (dynamic infrastructure) as two examples they have been talking about flexible/adaptive infrastructure in the Data Center (and deploying increased orchestration within) for the last 10 years. Both have been pioneers in blade design/deployments for years.
    Now the idea that these designs are purely for physical world is completely incorrect, you only have to look at Virtual connect (it does what it says on the tin), and IBM openfabric manager to see that they have been building solutions for the data center, that include virtualization. In fact if you look at Egenera some might say that they were the first to talk about unified computing.
    We must also remember that the DC is not just about virtualization, it is also about convergence, power (i.e. thermal logic), flexibility and scalability – some of which comes with virtualization, but can also come without.
    You want a blade infrastructure that gives you physical, virtual benefits, but also fits into a great DC strategy. It is not enough to say that blade ‘X’ takes more memory than any other blade.
    Like i said before UCS will gain market because it is a good blade design with good simplified management. Is it the best? there is no such thing. Do I think it better than HP, Dell, IBM blades? i think it depends on the context/requirements. Do i think that HP/Dell should stop the FUD? yes, but i expect that of all vendors (including Cisco).
    I’ve said before i’m not a big fan of tables as they tell a black and white story, which clients rarely are and in this case i don’t agree with the comparison anyway. I just wish vendors sold IT on the basis of what it will deliver for THAT client (not the marketing) – i suppose world peace is more likely.

  22. Steve Kaplan says:

    Mac,
    I did not mean to imply that ProLiant blades reflect a 20-year old design. That would be rather bizarre, eh? My statement was in response to John’s comment which said that UCS can’t compare with 20 years of HP/Compaq experience with Proliant Servers. Certainly today’s blades from all manufacturers use current technology. UCS, however, incorporates myriad technological innovations that set it far apart from blade servers. These innovations, along with attributes such as exclusive use of Intel Nehalem processors, are intentional elements of a very elegant, efficient and cost-effective architecture designed as part of a long-term strategy to revolutionize the virtualized data center.
    You discuss a lot of points outside the scope of the article having to do with Egenera, IBM Open Fabric Manager, Dell blades, etc. While potentially interesting, this isn’t the forum in which to properly discuss and evaluate so many different technologies. My article, inspired by HP’s continued claims that Matrix is superior to UCS, focuses on comparing the two solutions – particularly within the context of a virtualized data center. As I state in the article, if you find anything incorrect in the comparison, I welcome the feedback.

  23. mac says:

    Steve,
    Aside from my view of the comparison which i’ve previously mentioned, it is these “myriad technological innovations that set it far apart from blade servers” that I’m struggling with and want to hear more about.
    There are innovations/design differences i.e. Memory ASIC in the full height blades, the Fabric Extenders (IOM), Fabric Interconnects, the increased Airflow in the chassis etc..
    However i could draw up a list of innovations for HP, Dell and IBM blades, and also go on (like cisco) to claim to revolutionize the virtualized data center.
    I just don’t see where they put UCS ‘far’ apart from other blade solutions (different yes). It is this assertion (of the degree of superiority) that doesn’t sit comfortably with me, even though i think it is an excellent design.
    Any vendor or non-vendor started discussion that indicates that they are better always worries me. Only Time will tell if it is the real deal and even then each client’s mileage will vary.

  24. Steve Kaplan says:

    Mac,
    That makes perfect sense, thanks for clarifying. I didn’t want to dive too deep into those differentiators in the article since so much has already been written about Cisco UCS (A “cisco ucs” search gives you 281,000 hits to look through). You might want to start with Paul Venezia’s InfoWorld article referenced at the very beginning of my post. I also have some other articles I’ve written on UCS both on this blog and on DABCC. Some other good sources include Musings of Rodos (http://rodos.haywood.org) and of course the Cisco Web site itself. If you’re up for a really in-depth dive into UCS, I recommend the “Project California” book I reference in my article.
    UCS innovations range from the mundane (such as simplifying cabling) to the way Cisco utilized its network prowess to reengineer the way Nehalem accesses memory. I consider embedded management, unified fabric and true stateless computing to be exceptional as well. Most importantly, though, in my opinion is the foresight Cisco had years ago to make such a large investment in a Unified Computing System purposefully designed and engineered to optimize hosting a virtualized data center. This is the concept that both HP and IBM rejected when Cisco approached them for a partnership in the effort. I doubt that HP still embraces the vision of completely virtualized data centers. It continues to emphasize how Matrix “unites the tools, processes and architecture of your physical and virtual worlds.” A June 5, 2009 post on the HP Communities by HP Product Marketing Manager for Servers, Daniel Bowers, says, “So, that’s why I’m scratching my head. Cisco has lots of smart people – but I can’t understand these new UCS announcements.”

  25. Ken says:

    Interesting and very in-depth. What most comparisons seem to miss, though, are the numerous 3rd-party products needed to make both CSCO and HP products fully-functional with both HA and DR. I’ve also done a few analyses at http://bit.ly/jE9Yp and http://bit.ly/KrjsH The Egenera product, while not widely compared with the others, actually out-performs both from a simplicity perspective.

  26. Roger Randolph says:

    Thanks for this article – very informative. I’ve used HP, IBM, and Dell blades and have been looking at UCS. Some thoughts/questions come to mind:
    * Clearly UCS has some impressive maximum specs (384GB RAM, for example) – but in my experience, those maximums are rarely used, and certainly not cost effective. I would be interested in your thoughts around IBM/HP/Dell blades and their current max specifications – i.e. are they “good enough”?
    * Getting real world pricing for UCS has proven challenging. Do you have anything that shows a reasonable pricing comparison (or as close as possible)?
    Thanks!

  27. Steve Kaplan says:

    Ken, Thanks for your comments. Despite being on several panels with Egenera CTO Pete Manca last August at the Pacific Crest Conference in Vail, I still don’t know very much about how Egenera compares with UCS’s offering. This is on my list to research.

  28. Steve Kaplan says:

    Roger,
    Thank you for your feedback and questions. While I agree that 384GB is probably not used much today, I suspect that increasing levels of data center virtualization combined with high memory use cases such as VDI will make having all the RAM possible desirable. The patented Cisco memory capabilities mean that UCS utilizes smaller DIMMs which can exponentially reduce memory cost compared with other platforms.
    Regarding standard blades – well, I’ve long been against using them in virtualized data centers because their purpose, to reduce space, is almost never a concern in a virtualized data center. I published an article about this very topic on dabcc over a year ago http://www.dabcc.com/article.aspx?id=9114 where I speculated that Cisco might come out with a new type of blade developed for the Nexus 7000. I didn’t know, of course, that Cisco had already been working for years on UCS which eliminates the drawbacks of blade servers in a virtualized data center.
    I want to talk with our sales management team before publishing UCS costs based upon street pricing rather than list. I will say that I’ve been very impressed with just how competitively priced UCS is against standard blade servers. The break-even point seems to be somewhere around 6 – 8 blades (including 2 chassis and 2 6120 fabric interconnects for redundancy). As the environment begins to scale up, UCS pricing becomes quite advantageous since the same two 6120s can handle up to 40 chassis. Cisco claims additional large ongoing operating savings from reduced cabling and power consumption requirements.

  29. Dan Busby says:

    Certainly Cisco biased….
    Some HP Matrix insight that is new….This really hits CISCO on “restrictions and limitations” which is kind of an exaggeration…..my comments follow….
    1.) ONLY the listed HP Blades are supported as if this is a limited list. Actually, this list includes a wide range of processing capability and price points that Cisco does not provide. For instance – 4-Socket Blades, AMD Blades, High Density SMP blades, Intel 74XX 6-Core processors. Cisco supports ONLY ONE model of Intel processor – Intel Nehalem 55XX Series Quad-core and ONLY 2-Socket. This sounds like a win for HP! And, looking to the future, Intel will release a 4-Socket, Eight-Core processor (that’s 32 cores, 64 threads per Blade) in Q1 2010…HP will support this technology for sure…….unclear what Cisco’s plans are……Cisco would need a “double wide” for this meaning only 4 Blades per chassis.
    2.) The number of cables “per chassis” is an incorrect comparison. The HP system cables are ONLY for the “uplinks” to the datacenter infrastructure. The Cisco cable situation MUST include the cables from the Cisco Blade chassis to the Cisco 6100 Switches and THEN include the “Uplink” cables from the Cisco 6100 Switches to the datacenter infrastructure to be an “apples to apples” comparison. Also, for the Cisco system, the comparison says a minimum of two cables per chassis – in this configuration the chassis is only provisioned for 2x 10G shared across 8 Blades – this is NOT a 10G redundant connection. The 8 cables are required for full 10G redundant fabric connections. So, the “REAL” comparison is 6 Cables for HP versus 8 Cables plus at a minimum 4 “UPLINK” cables – for a total of 12 Cables for Cisco, a total of 6 for HP.
    3.) The comparison fails to mention that the CISCO UCS system only supports 4Gb SAN uplinks. HP supports 8Gb all the way from the Blade to the end point. CISCO needs to spin a new chip to change this. That will be expensive…and it’s not clear whether that will be compatible with the current Cisco 6100 switches being shipped now.
    4.) The comparison also doesn’t mention the fact that the total IO bandwidth for the CISCO UCS is limited to 160Gb/s for each CISCO UCS 6100 switch – so 320Gb/s total – even if it has 320 Blades. The HP system on the other hand increases “uplink” IO for both Network and SAN when an additional Blade chassis is added to the Domain. The HP Virtual Connect system does allow for the use of only the switches in ONE chassis for ALL the included chassis – OR – the use of the IO uplinks in ALL the chassis – so this is certainly a more flexible and higher capacity IO Uplink situation resulting in the ability to service EVERY Blade with 10Gb Network and 8Gb SAN. The CISCO UCS can only claim full 10Gb redundant bandwidth from Blade to Infrastructure for up to about 40 Blades. The HP System can claim this capability for at least up to 64 Blades. Advantage HP I think.
    5.) The comparison says that the CISCO UCS supports “any” OS and that the HP system is more limited. Well, that is really an exaggeration. The CISCO compatibility list doesn’t include the Citrix XenServer hypervisor for instance. HP does support XenServer. Also, since Cisco just started their product with the Intel Nehalem 5500 series processors, there are minimum operating system revisions for all the Linux distributions. REDHAT 3 is not supported, for instance. This will require customers to use the latest revisions of the Linux distributions. HP, on the other hand, supports all Linux distributions. This would seem to be an HP advantage.
    6.) The comparison mentions that there is no clustering in the HP Matrix….but doesn’t mention how the UCS offers clustering. I don’t believe the UCS offers any clustering per se……any more or less than the HP system.

  30. Casey says:

    It seems to me that Dell is the one really not keeping up with UCS and Matrix. The Dell / Egenera deal, although it seems to be getting some positive motion, is not getting the support you would think (see Scalent / Dell).
    As Ken said, there seem to be a lot of 3rd party apps involved in using Cisco and HP’s products. That can be a bad thing or a good thing; in the case of HP most of the tools are in-house, but different products, though not too many consoles. Cisco I am not sure about.
    From the videos and demos I have seen of the Egenera product, it seems to be a single console, but very dated and slow-responding. Probably using java; it needs some re-work to clean it up.
    Will be interested to see if Scalent is able to make anything of their deal with Dell, if not Dell is going to go its normal path and get left behind farther and farther.
    It

  31. Steve Kaplan says:

    Dan,
    I checked out your bio on-line and saw that you are VP Engineering for Egenera. I appreciate the time you took to provide such an in-depth comment. I am going to think about my Matrix matrix, and make some minor modifications as a result of your comments. I wanted to respond, however, to your individual points:
    Point #1: UCS was designed for data center virtualization. Cisco focused its first offering on best-of-breed equipment for optimizing this vision. Currently the Nehalem processor has the highest performance for server virtualization, even out-performing 74xx processors with denser core ratios. The reason UCS supports only 2 sockets is that the Nehalem processor only supports 2 sockets. Also, while HP has more blades in its portfolio, only the listed ProLiant blades are fully supported in Matrix.
    I wrote this article only about existing product capabilities, as diving into futures is just too uncertain for either manufacturer. That being said, Cisco is aligned with the Intel road-map – I don’t think there will be any issues of support. One thing I’d like to point out is that while UCS supports fewer blades per chassis than HP, a Matrix configuration of 2 VC-E and 2 VC-FC yields a list price of over $55K (according to http://www.hp.com). This includes no servers, just the chassis, power supplies, fans and VC modules. Therefore, every time a customer goes beyond 16 blades, she has to re-invest $55K just for the ante. UCS, on the other hand, yields a per-chassis cost of under $10K.
    Point #2: I’m not sure what you are meaning to say, or why the concerns about full 10 Gb redundancy – I was emphasizing “minimum” cabling requirements. The UCS chassis is connected and redundant at 2 cables, although you are correct that uplink cables are needed as well, so perhaps I should increase the “minimum” by 4 (2 from the chassis to the 6100 and 2 from the 6100 upstream). In this case, though, it would only be fair to say that all of the Virtual Connect modules plug into something, and I should probably count those cables too. Additionally, with UCS, as we add the 2nd, 3rd, 4th …40th chassis, only the 2 uplink cables are required. This is of course not true with Matrix.
    Point #3: Cisco supports 8 Gb FC with the N10-E0060 option.
    Point #4: These numbers are not correct – the fabric interconnect has either ~0.5 Tb/s or ~1 Tb/s of throughput, and each chassis gets 40 Gb (redundant) or 80 Gb (non-redundant) of bandwidth with two fabric extenders in place. Additionally, Matrix only stacks Ethernet – every chassis still requires FC cables.
    As mentioned in the article, Virtual Connect requires the same VC domain configuration and same cable layout to all enclosures in a rack to accomplish increased IO bandwidth. This increases the amount and cost of rack wiring along with the cost for HP Services to perform the VC domain reconfiguration and overall data center operations / management cost. This additional cost is incurred every time additional enclosures are added to a rack. Exceeding 4 enclosures requires a new CMS and a new HP Services Matrix CMS install.
    Point #5: Cisco’s new compatibility list actually does include XenServer. Matrix’s does not. You are correct, however, about UCS not supporting RH3 (though Matrix does not support all versions of RH either). Here is the list of UCS supported operating systems:
    • Windows Server 2003 R2, 32-bit and 64-bit; Windows 7 with Hyper-V, 64-bit; Windows Server 2008 with Hyper-V, Standard and Enterprise Edition, 64-bit
    • VMware ESX 3.5 U4; VMware vSphere 4, 4 U1, 4i, 4i U1
    • Red Hat RHEL 5.3, 64-bit; RHEL 5.4 KVM, 64-bit; RHEL 6 KVM, 64-bit; RHEL 4.8, 64-bit; and Fedora
    • Novell SLES 10 SP3, 64-bit; SLES 11, 64-bit; SLES 11 SP1 XEN and SLES 11 XEN, 64-bit
    • Solaris x86 10.x, 64-bit
    • Oracle OVM 2.1.2, 2.2
    • Oracle Enterprise Linux
    • Citrix XenServer
    Point #6: UCS offers Active/Passive clustering of management systems. Technically, however, I should have excluded UCS blades from the UCS redundancy. While the stateless blades enabled by service profiles, combined with blade resource pools, enable very fast recovery in the event of blade failure, they are not truly redundant.
    Please note that in going over these responses with one of our UCS consultants, he informed me that he believes Cisco does offer a 3-year support contract (I will confirm) – my matrix listed only 1-year.

  32. mac says:

    Steve,
    This is good stuff,and is the really interesting bits for me.
    point1 – the prices i’ve seen between the two have been comparable (as no one pays list price). Is there a cisco version of hp’s online pricing? i have a spreadsheet but was after one where i could alter configurations and see price changes.
    Point2 – i think you are right here, in terms of a minimum cable configuration, the advantage for UCS is once you start to add additional chassis. So if you have 1 chassis you might have 4 (2 –> 6100 and then 2 –> uplink minimum) for UCS and 6 for Matrix. I think the point Dan was making was that in order to have a fully redundant (10G per blade) configuration you would require 8 cables (4 per FEX/IOM)for each chassis which works out slightly more than Matrix.
    Comes back to configuration again – client mileage will vary 😉
    Point 3 – this is good to hear, is that a module for the 6100?
    Point 4 – can you explain a little more how the 6100 has ~0.5 Tb or ~1 Tb of throughput? i couldn’t quite work that out. Regarding the chassis getting 40 Gb redundant, that is only true if you have 8 (redundant) cables from the FEX/IOM on each chassis to the 6100, right? if you have 2 cables (redundant) that is 10 Gb (redundant) for however many blades you have in the chassis, right?
    Point 5 – that is good stuff
    Point 6 – I was aware of the active/passive clustering of the 6100, that is a definite positive. Although the CMS on Matrix can also be given HA via MSCS and i think (someone else might confirm) you can run the CMS in a VM (VMware only). Clearly that is simpler on UCS

  33. mac says:

    Actually thinking about it my point 2 above should be 5 Gb per blade, as you have 8 cables from the FEX/IOM, 40 Gb (redundant) shared between 8 blades so 5 Gb each.

  34. Mike Sabo says:

    For point 3: that is an expansion card for the 6120/40 that is orderable but not shipping until Q1 2010.
    For point 4: the internal through-puts for the 6120 and 6140 are 520 Gb/s and 1.04 Tb/s respectively. The actual through-put is determined by the number of interconnects, so each blade would need an interconnect from the fabric extender to the 6100 and an interconnect from the 6100 to the network infrastructure to get lossless 10Gb to the network. So a 6120 can host one 8-slot chassis using 16 ports, and a 6140 can host 3 chassis using 48 ports (with the 6-port 10G exp-mod). The idea behind the external cables vs internal network bus is that the external fabric interconnect makes the fabric infinitely scalable while providing 10Gb to each server blade.

  35. mac says:

    Mike,
    thanks for that i’m still slightly confused. If the chassis has 8 ports/cables (4 per FEX/IOM) how can it use 16 ports on the 6120? sorry if that is a daft question

  36. pete says:

    I am enjoying the hearty discussion on this blog. I am puzzled though why Egenera would choose to fight HP’s battles rather than point out the merits of their products as an alternative to both UCS and Matrix…Can’t quite connect the dots there.
    Brad Hedlund rebutted another Egenera critique defending HP a while back.
    http://www.internetworkexpert.org/2009/04/28/cisco-ucs-pricing-response-to-egenera/

  37. Mike Sabo says:

    The 6100 acts as a passthrough switch, so to get maximum throughput from one chassis you would need 16 cables coming out of the 6100: 8 going to the 2100 fabric extenders (4 per FEX) on the 5100 chassis, and 8 going to the upstream switch or network core. Because of the bandwidth-sharing capabilities of both UCS and VMware, this topology is rarely needed, but to get a 1:1 ratio of 10Gbps per blade server to the network infrastructure you would need this layout.

  38. Mike Sabo says:

    Your topology is really going to depend on the throughput needs of your data center and the servers and applications within it. My point is really that it is possible to provide 10Gbps to every blade in every UCS chassis in your network.

  39. Steve Kaplan says:

    Pete, thanks for including the link to Brad Hedlund’s article. I try to read all of his stuff but missed that one – it’s very good, and it also addresses some of the pricing inquiries various readers have raised. As far as Egenera’s defense of HP goes, I’m kind of curious about that myself. A couple of Egenera folks have commented on this article; perhaps they’ll chime in (assuming they see this).

  40. mac says:

    Thanks Mike, I realised what you meant after I shut down my laptop. Final set of questions (honest):
    If you have 2 6100s connected to 1 chassis, are both used for throughput? Are the blades split over the 2 connections, and how is this balanced? Or do all blades use 1 6100 with the other one being failover?
    I think it uses both paths and splits the blades over the 2 connections – is that correct?

  41. David says:

    Steve,
    Thank you for the informative comparison and also for the exchanges below that it has generated. Is the table meant for industry introspective discussion, or for customer use in comparing enterprise IT building blocks?
    To improve the table’s value for customer comparison, I suggest two points: firstly, adding a power management row showing how the products differ in managing power across the components (compute, storage, network). Secondly, on the first row (Enterprise Scalability), you should define the workload or logical server – 250 or 1,000 is too vague… you could use a benchmark here and make the values dependent on hypervisor / benchmark availability.

  42. Steve Kaplan says:

    David,
    Thanks for your comment & suggestions. I actually included the table because I couldn’t resist the idea of a Matrix matrix. I’m going to beg off on the power comparison because I don’t want to spend the cycles researching it. Regarding the 2nd row, I don’t understand what you mean by the logical server limit being too vague. UCS has no limit on the number of logical (virtual) & physical servers it can support (other than maxing out the UCS resources, of course, but those can scale up to tens of thousands of VMs). HP is capped at 250 per CMS, or up to 1,000 by combining 4 CMS units. This is described in a bit more detail under the Insight Software section of the article.

  43. Brad, Steve –
    All good comments, and a good set of threads here.
    First, I’m not so sure anybody’s ‘defending’ HP; rather, Dan’s giving a fair-and-balanced analysis of what we believe the options are when comparing with HP.
    I _do_ think that this entire conversation is a little myopic, though. Lots of conversation about cabling, switches, sockets and memory; not much conversation re: management and higher-level services, which, quite frankly, is where much of the user value is.
    By higher-level services, I mean *orchestrating* all of the converged infrastructure to give it automatic scaling, failover, DR, etc. These higher-level services are not out-of-the-box services for UCS, nor 100% provided by the HP bundle.
    IMHO, the _whole_point_ of using fabric and converged infrastructure is to simplify management and to reduce point-product count… not to participate in a speeds-and-feeds contest. http://fountnhead.blogspot.com/2009/12/emergence-of-fabric-as-it-management.html
    As Casey points out, the Dell/Egenera solution is really the only one using a single unified console (“Dated?” Really?) where HA, DR, VMs, and all other config items are managed. That’s Dell’s play in the converged game. And, Mac, if you were to write up a comparison of your own (and include Dell’s m1000e chassis w/Egenera) you’d discover some really interesting facts 🙂
    Finally (to Pete’s point), at this point in the market I’d LOVE to see more threads regarding the virtues of using converged technologies to simplify the datacenter – and less vendor-bashing and fewer speeds-and-feeds comparisons. These new approaches are amazingly complementary to OS virtualization (since they virtualize the infrastructure), and the industry needs to embrace this in the same way it finally ‘got’ hypervisors.

    Well-written article – looks like you are getting some good comments. I appreciate uncovering that the CMS can’t run on 64-bit Windows and that HP Matrix currently only supports vSphere in testing/development roles. It is important to note that HP Matrix is really nothing new: although it’s being labeled as a product, it’s just a re-branding of HP’s best-of-breed products. I look at it as more of a solution.
    That being said, Matrix does allow customers to see what HP’s messaging is around a Converged Infrastructure. Customers who have already invested in HP can expand their existing infrastructure in a fashion that is modeled by the HP Matrix solution. In my opinion, HP’s saving grace with the Matrix offering is the Insight software. HP Insight Orchestration (demo at http://h20324.www2.hp.com/SDP/Content/ContentPlay.aspx?id=1610&booth=49) simplifies deployment of infrastructure in a manner that could appeal to IT users.
    Cisco’s UCS offering is designed as a complete solution, best offered in net new datacenters. In fact, as I talk with customers about UCS, I liken the hardware to lego blocks – you simply add another block of servers as your needs increase. I really like Cisco’s design, but the product is still in its infancy. It’ll be interesting to see if IBM and Cisco come to an agreement (notice how IBM has been quiet for some time?).
    As previous comments have shown, HP vs. Cisco all depends on the customer. Each solution has its pros and cons. The key thing is that we all take the time to learn the products, be honest with customers, and guide them based on their needs.
    Keep up the good work – I’ve got you linked on my blade server website, http://BladesMadeSimple.com

  45. Steve Kaplan says:

    Kevin,
    Thank you for the compliment and excellent comment. While I appreciate your point of view and agree with much of it, I have to take issue with, “Cisco’s UCS offering…is best offered in net new datacenters”. This is certainly a good use case for UCS, but a far broader application is to utilize the product to help revolutionize existing data centers. As I’m sure you’ve seen countless times in your consulting endeavors, traditional physical data centers consist of a mishmash of technology islands and IT functional silos. Organizations have the opportunity to utilize an enterprise virtualization architecture to unify the environment and eliminate huge costs and inefficiencies. Cisco’s UCS is an optimized enabling hardware platform. HP Matrix may well, as you suggest, have significant advantages in certain environments – but I have trouble seeing how a completely virtualized data center could be one of them.

  46. Andy Cadwell says:

    Steve, great article. Thanks for taking the time to write this up.
    I find it ironic that so many people have commented that one shouldn’t compare HP Matrix to UCS. Well, the simple reality is that HP themselves were the first to compare Matrix to UCS in their November 2009 ESS Intelligence Brief (sent to partners and then customers), which never mentions BladeSystem in that comparison, and the HP field is making the comparison as well.
    Having been involved myself in multiple UCS opportunities and subsequent installations, I can say it is by far the easiest and most effective platform for virtualized server environments, and customers large and small are extremely satisfied. I am typically very risk-averse in recommending new technologies to my clients, but this platform is a home run for Cisco, and a game changer for VMware, as it is simply the easiest way to manage a 100% virtualized environment. As you know, we already have clients adding “lego blocks” in tier 1 production application environments. That is proof enough for me that the technology is ready – customers are demanding more of it!

  47. Gary Thome says:

    Steve,
    Thanks for taking the time to chat last week. I appreciate the opportunity to share our points of view. I’ve posted some thoughts on our “Eye on Blades” blog at HP. The link is here: http://www.communities.hp.com/online/blogs/eyeonblades/archive/2010/02/05/first-post-this-year.aspx
    I hope we can keep the dialogue going.

  48. Steve Kaplan says:

    Gary,
    I appreciate your call and information. I’ll review your blog shortly – thanks.

  49. Steve Kaplan says:

    Gary,
    I very much appreciate both your phone call and your response to my blog post.
    I wanted to start by addressing your comment about my omitting the “general descriptions of the very capabilities of vBladeSystem and BladeSystem Matrix that have made BladeSystem the most popular blades platform on the planet – with over 1.6 million blades sold.” My article was specifically about the BladeSystem Matrix, which I assume now has unit sales somewhere in the vicinity of 20, based upon the limited qualified HP Implementation Engineering resources (I can find no published information on Matrix sales), as opposed to the over 400 organizations that have purchased one or more Cisco UCS units. I do agree that HP Matrix includes systems management software that differentiates it from UCS, but it also forces a closed system of HP hardware and software. The Cisco approach is to utilize the XML API, to which anyone can write in order to orchestrate the entire compute and network environment; EMC Ionix is an example (a minimal sketch of such an API call is below).
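    As a minimal sketch of what writing to that API looks like – this assumes the documented aaaLogin/configResolveClass methods as I understand them, with a placeholder host and credentials:
    ```python
    # Minimal sketch of a UCS Manager XML API session; the host,
    # credentials and queried class are placeholders for illustration.
    import urllib.request
    import xml.etree.ElementTree as ET

    UCSM = "https://ucsm.example.com/nuova"   # hypothetical UCS Manager address

    def post_xml(body: str) -> ET.Element:
        req = urllib.request.Request(UCSM, data=body.encode(),
                                     headers={"Content-Type": "text/xml"})
        with urllib.request.urlopen(req) as resp:
            return ET.fromstring(resp.read())

    # Log in and grab the session cookie.
    login = post_xml('<aaaLogin inName="admin" inPassword="password"/>')
    cookie = login.get("outCookie")

    # List the physical blades - any orchestration tool can do the same.
    blades = post_xml(f'<configResolveClass cookie="{cookie}" '
                      'classId="computeBlade" inHierarchical="false"/>')
    for blade in blades.iter("computeBlade"):
        print(blade.get("dn"), blade.get("operState"))

    post_xml(f'<aaaLogout inCookie="{cookie}"/>')
    ```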
    Your data center power and cooling comment puzzles me. You say, for instance, that “UCS requires up to double the amount of data center power allocated per server compared to BladeSystem.” Whether from HP or Cisco, Intel servers draw the same amount of power under the same load. It is true that UCS has 2500W-capable power supplies, but certainly a 2500W circuit is not required. Cisco clearly built in power headroom not just for today’s Intel chips, but to support the Intel roadmap, which will continue to require more power to support greater memory and CPU core density.
    Although you commented that HP does not see UCS as comparable in functionality to BladeSystem Matrix, I think that the comparison between BladeSystem Matrix and UCS is the right one. Even HP’s Web site article, “The Real Story about Cisco’s One Giant Switch view of the data center”, repeatedly compares UCS to HP BladeSystem Matrix, including in its conclusion: “HP BladeSystem Matrix is the most advanced, best integrated and easiest way for businesses to achieve an AI and can be integrated into existing infrastructure.”
    Publishing the introductory Matrix press release just 35 days after the UCS announcement also left an inevitable impression that the Matrix was a reactive response to the Cisco product. SearchDataCenter.com, for example, ran an April 20, 2009 article titled, “HP’s Bladesystem Matrix to challenge Cisco’s Unified Computing System”, that posed this exact question. Author Bridget Botelho, however, received a rather evasive response, “When asked if the Matrix is purely a reaction to Cisco’s UCS, HP said, ‘We certainly see this as doing everything Cisco is doing and much more.’”
    Other media comparisons of the two products included The Register “HP pits Matrix against Cisco’s California”, CIO “HP followed that up with a direct slap at Cisco, when it announced BladeSystem Matrix…designed to compete with Cisco’s Unified Computing System” and InfoWorld “That sounds an awful lot like Cisco’s Unified Computing System”, among many others.

  50. Mseanmcgee says:

    Steve,
    Just to back you up on the number of FlexNICs allowed in Matrix: it only supports up to 16 FlexNICs on half-height blades.
    (Remember, half height blades have 2 onboard FlexNICs mapping to I/O bays 1 & 2 and 2 Mezzanine slots mapping to I/O bays 3-8)
    Matrix requires both Virtual Connect Flex-10 and Virtual Connect Fibre Channel. That consumes 4 of the 8 I/O bays and 1 of the server mezzanine slots (HBA). The two Flex-10 modules in bays 1 and 2 provide 8 FlexNICs (4 FlexNICs x 2 Flex-10 ports).
    Now, one would assume that bays 5-8 could be used to provide another four Flex-10 ports (via a 2nd mezzanine card) for 4 x 4 FlexNICs, but that isn’t the case: HP only makes 2-port mezzanine cards that support Flex-10 (http://u.nu/8qb37). As such, they only allow bays 5 & 6 for additional Flex-10 ports.
    So: 4 FlexNICs per Flex-10 port, with ports in bays 1, 2, 5 & 6. 4 x 4 = 16 FlexNICs (spelled out below).
    You were right… 😉
    P.S. 16 seems like such a small number after you hear about Cisco’s Palo adapter with 58 “FlexNICs” on 2 ports. Cisco should rename Palo to “Flex-58”.
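    Spelling out that counting (the bay layout and per-port FlexNIC count are from the description above):
    ```python
    # FlexNIC count on a half-height Matrix blade: Flex-10 ports sit in
    # I/O bays 1, 2, 5 and 6, and each port carves out 4 FlexNICs.
    FLEX10_BAYS = [1, 2, 5, 6]    # bays 3 & 4 go to Virtual Connect Fibre Channel
    FLEXNICS_PER_PORT = 4

    matrix_flexnics = len(FLEX10_BAYS) * FLEXNICS_PER_PORT
    palo_interfaces = 58          # Cisco Palo: 58 "FlexNICs" on 2 ports

    print(matrix_flexnics, palo_interfaces)  # 16 vs 58
    ```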

    Hi Steve,
    This is the best post I have read so far that reconciles a strategic vision with technical detail.
    End users and the B2B ICT channel in general will certainly base their decision to adopt Cisco UCS or HP Matrix on detailed ROI calculations, but contributions like yours help everybody avoid getting lost in too much detail.
    Please keep it up.

  52. G says:

    OMG,
    I’ve got to admit, this is a fan-damn-tastic post!!! I greatly appreciate the information shared and exchanged. Just to introduce myself: I am an independent consultant being asked to present to the “C&T” level at a Fortune 10 seeking a comparison between Cisco UCS and anybody/everybody on the planet (not just technical, but business-wise). I am really grateful for this post and the professional debates. It would be very kind if anyone could compare the “Business/Value Proposition” and not just the technical side, as I am sure you will agree that at the end of the day it’s not only about technology… there is a lot more on the plate for the “C” level.

  53. Steve Kaplan says:

    G, Thanks for the compliment and suggestion. I’ll go ahead and do that.
    Steve
