
July 16, 2010

Cisco UCS vs. HP BladeSystem Matrix: an Update

HP's Chief Architect for Infrastructure Software and Blades, Gary Thome, responded to my December 2009 post writing that HP "does not see UCS as comparable in functionality to BladeSystem Matrix, which we believe is in a category by itself." This argument is not without merit; either the EMC/vSphere/UCS Vblock or the NetApp/vSphere/UCS Secure Multi-Tenancy might better compare with the Matrix, particularly when purchased in conjunction with optional EVA storage. I nonetheless decided to maintain the Matrix vs. UCS face-off in this updated comparison for the following reasons:

  1. Both customers and other industry players perceive Matrix as competition to UCS. Egenera VP of Marketing, Ken Oestreich, emphasized this in his blog post last month.
  2. HP's Web site positions Matrix as competition to UCS. [see author update note below]
  3. The Matrix press release last year closely followed the UCS announcement and included a jab at Cisco's data center strategy. Publications such as InfoWorld, CIO.com, searchdatacenter.com and The Register all ran articles spotlighting Matrix and UCS as competing products.

Technical Advances

One of my colleagues received the following unsolicited email a couple of months ago from an HP rep:

From: Citte, Chad (ESS Mid-Market Partner Specialist)

Sent: Monday, May 17, 2010 1:29 PM
Subject: IT Critics Declare HP Dominance Over Cisco

In recent IT competitive news, HP is capturing wins across the board. While Cisco is busy fighting for credibility with their “Unified Computing Strategy” (UCS), HP continues to advance without missing a beat. 

As can be inferred from the email, the Matrix has advanced since its debut, including a tripling of the number of supported servers to 1,500 and support for VMware vSphere 4.0 (though not the VMware Virtual Distributed Switch). The full list of Matrix enhancements can be found in the new HP BladeSystem Matrix 6.0 Update 1 Release Notes, and an updated UCS vs. Matrix matrix follows at the end of this post.

Despite its enhancements, the Matrix remains a daunting amalgamation of existing HP products, including enclosures, blades, Virtual Connect switches and 16 HP Insight Software packages. The Central Management Server (or servers) is similarly composed of HP SIM, StorageWorks XP CommandView, etc., as can be found on page 13 of the Compatibility Chart. Conceivably, the Matrix functionality could be built outside of the "Matrix". Unlike the Cisco UCS, however, which only automates the provisioning of virtual servers, the Matrix automates both the virtual and physical environments.

Cisco Fighting for Credibility with UCS?

The HP email claims that Cisco is fighting for credibility with its UCS, although evidence indicates otherwise:

  • Sales: UCS sales continue to soar. A May 12, 2010 article in The Register said that Cisco's UCS sequential revenue growth last quarter was up 168%, with the unique customer base doubling to over 900. A Cisco June blog post put the number of UCS customers at around 1,000. HP does not disclose BladeSystem Matrix sales, but discussions with both current and former HP employees indicate total implementations number around 60 – 75. Matrix customers, moreover, are unable to upgrade any software or firmware in the solution and remain in a supported configuration. Even a bug fix in one of the myriad Matrix components cannot be installed until the entire Matrix solution is tested and certified with the fix, which can take months. Many Matrix customers have apparently given up on dealing with the complexities and now run their blades as just standard servers.
  • Partnerships: Cisco UCS is generating outstanding momentum with key industry players. Two of the top three leading storage manufacturers have strategic virtualization offerings that build upon UCS as the compute platform.
  • Customer References: Customers rave about UCS. The hosting provider Savvis, for example, is basing its private cloud hosting strategy upon the UCS. Vince Stephens, TASER International VP of Network Operations, said, "We realized we couldn't build the data center of the future with yesterday's technology." Joe Onisick's post shows how UCS converted him from skeptic to advocate, stating, "UCS changes the game for server architecture." Michael Heil says, "Terms that come to mind that describe UCS are simplified management, elegant design, paradigm shift, future of computing, time and cost saver, etc."
  • Buzz: A Google Blogs search shows 8,553 results for Cisco UCS compared to 228 for the HP Matrix, and I could not find a single customer touting a Matrix experience. A Twitter search invariably shows abundant UCS-related Tweets; HP Matrix is generally nowhere to be found.
  • Awards: Cisco UCS has garnered both awards and trade journal accolades, including the VMworld 2009 Gold Award for Hardware in Virtualization, the Best of Interop 2009 Award and the Best Data Center Innovation Award at the BladeSystems Insight 2009 event. HP Matrix has won no awards.

Price Comparison

Some media stories have speculated that HP's low server margins will force Cisco to reduce UCS prices. The relative ability of UCS vs. Matrix to enable a successfully virtualized data center (vDC), with its huge associated savings and other benefits, matters far more than any cost differential. But as a reality check, I compared pricing for both UCS and Matrix assuming 32 blades (UCS B200 M2 and HP BL490c respectively) with 96 GB of DDR3-1066 dual-rank RAM. I used the lower of the prices from the HP Web site or the HP BladeSystem Matrix TCO calculator, and then validated that they were equivalent to, or lower than, those on the CDW Web site. I used Cisco's MSRP UCS pricing, which customers should easily be able to obtain from their Cisco partner.

[Image: UCS vs. HP Matrix price comparison table]
 
The table shows that UCS is less expensive for this configuration. Additionally, fewer switches and cables give UCS lower ongoing operating costs than the Matrix, while the Fabric Interconnects' 10 Gb Ethernet switching capability further reduces comparative costs as the units scale.
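
As a rough back-of-the-envelope illustration, using the per-chassis cable counts from the comparison matrix at the end of this post and the same 32-blade build, the cabling difference works out to approximately:

    UCS:    4 chassis (8 blades each)       x 2–8 cables each  =  8–32 cables, all terminating on one redundant pair of Fabric Interconnects
    Matrix: 2 c7000 enclosures (16 each)    x 6–34 cables each = 12–68 cables, plus a set of top-of-rack switches in each rack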

Implementation

The Matrix requires a 2-week Implementation Service by an HP-Certified Matrix Professional from HP Engineering Services. The company is considering a partner implementation program, although one of the two partners with whom I spoke told me he was not enthusiastic about going through the arduous certification requirements. The other liked the idea, but said his company will probably continue approaching converged infrastructure from a best-of-breed perspective rather than specifically promoting HP Matrix.

Cisco has certified its partners to provide UCS implementations since the product's debut, and customers can also take a 2-day UCS boot camp. Mark Domel of DrillingInfo recently sent me an email describing his experience installing UCS:

"In our case the time it took to un-box the solution, rack, cable, bring online, and install VMware was about six hours with two guys. We were building VMs and remarking at how fast it all went. Planning and managing cabling from a traditional blade solution to the storage and Ethernet networks is a usually a major task. However, with UCS it was all just as simple as choosing how much redundancy/performance we wanted between the 6120s and the chassis and then connecting the uplinks to the network and storage."

Virtualization Philosophy

Cisco UCS was developed from a clean slate over a period of three years, under the leadership of VMware's co-founder and former CTO, as an optimized hosting platform for virtual infrastructure. It supplements the hypervisor in managing the virtualization environment and provides an XML API against which anyone can write to orchestrate the entire compute and network environment. This will enable a particularly symbiotic relationship with VMware's upcoming Redwood (vCloud Services Director).
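
To give a sense of what programming against that API looks like, here is a minimal sketch in Python. It assumes the UCS Manager XML API conventions of the era (XML method documents POSTed to the /nuova endpoint, with aaaLogin, configResolveClass and aaaLogout calls); the address, credentials and printed attributes are placeholders, and the snippet is meant as an illustration rather than a production integration.

    import requests                      # third-party HTTP library
    import xml.etree.ElementTree as ET

    UCSM = "https://ucs-manager.example.com/nuova"   # placeholder UCS Manager address

    def call(xml_body):
        # UCS Manager accepts XML method documents POSTed to the /nuova endpoint
        resp = requests.post(UCSM, data=xml_body, verify=False)
        return ET.fromstring(resp.text)

    # 1. Authenticate and capture the session cookie
    login = call('<aaaLogin inName="admin" inPassword="password" />')
    cookie = login.get("outCookie")

    # 2. Resolve every object of class computeBlade (all blades the system knows about)
    blades = call('<configResolveClass cookie="%s" classId="computeBlade" '
                  'inHierarchical="false" />' % cookie)
    for blade in blades.iter("computeBlade"):
        print(blade.get("dn"), blade.get("totalMemory"), blade.get("numOfCpus"))

    # 3. Close the session
    call('<aaaLogout inCookie="%s" />' % cookie)

The same request/response pattern extends to creating and associating service profiles, which is what orchestration tools such as VMware's Ionix and BMC BladeLogic build upon.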

HP Matrix is designed as a self-service provisioning portal handling tasks from migrating virtual machines to managing VM lifecycles. These capabilities, while ambitious, can put it at odds with data center architects striving to run a vDC as the standard, with (if necessary) a limited number of physical servers as exceptions. For example, the Matrix's Virtual Connect component is pitched to server teams as a way to manage the switches without the inconvenience of network group oversight. And while Matrix includes role-based access control, once an HP enclosure is incorporated into the Matrix in production mode, the network team cannot make even routine changes such as VLAN configuration.

Wrap Up

HP makes great servers and recently passed IBM as the world leader in server sales. But unlike UCS, which is on its way to becoming a major player in the vDC space, the Matrix's tepid reception does not bode well. To avoid a fate as the "New Coke" of the virtualization era, future versions of the Matrix will likely need to incorporate the type of resiliency and management ease that is contributing to the success of the UCS.

 

Cisco UCS vs. HP Matrix Matrix – Updated

Enterprise scalability
  • Cisco UCS: 14 chassis (eventually 40), 112 blades – potentially thousands of VMs. Up to 5 UCS chassis in a rack.
  • HP Matrix: 1,500 total logical servers (or up to 70 VM hosts – whichever is less). Up to 4 CMSs can be combined to reach 6,000 logical servers, but with no clustering or information sharing. Server profiles cannot be moved from one CMS to another unless using EVA with HP IR and like logical servers on both CMS servers. Up to two c7000 chassis in a rack (due to high power requirements).

Redundancy
  • Cisco UCS: All components redundant.
  • HP Matrix: The Central Management Server has no fault tolerance or clustering and little or no redundancy.

System Management Software Packages Required
  • Cisco UCS: UCSM.
  • HP Matrix: Onboard Administrator, Systems Insight Manager, Virtual Connect, Virtual Connect Enterprise Manager, Insight Dynamics VSE. (Note: ID-VSE has capabilities that UCSM lacks, including trending/baselining and physical and virtual resource monitoring.)

"Closed" Architecture Limitations
  • Cisco UCS: Requires Cisco servers, CNAs and Fabric Interconnects for optimal performance.
  • HP Matrix: Requires one of the following HP ProLiant blades: BL260c, BL460c, BL465c, BL490c, BL495c, BL680c or BL685c.[1]

vNIC & vHBA Support
  • Cisco UCS: 56 vNICs per server for each 2-port Palo adapter.
  • HP Matrix: LAN – 32 x 10 Gb Ethernet downlinks to server ports with 2 Flex-10 modules; each server can have 8 FlexNICs. SAN – 16 x 8 Gb Fibre Channel (2/4/8 Gb auto-negotiating) server ports.

Automated Server Provisioning
  • Cisco UCS: Virtual only. Automated physical server provisioning requires 3rd-party tools.
  • HP Matrix: Both virtual and physical.

Storage Support
  • Cisco UCS: Works with most leading industry storage manufacturers to enable automated provisioning, though this requires 3rd-party management applications. Particularly tight integration with both EMC and NetApp.
  • HP Matrix: Automated storage provisioning is only supported at this time for HP EVA – and only in experimental mode. Otherwise, storage must be manually provisioned.

Unified Fabric/Converged Network
  • Cisco UCS: Both Ethernet and Fibre Channel enabled without purchasing separate infrastructure components.
  • HP Matrix: HP does not currently support the convergence of Ethernet and Fibre Channel in any BladeSystem products. FlexFabric has been announced, which will converge Ethernet and FC within an HP enclosure but will not reduce cabling to or from the enclosure. Each enclosure requires 2 Ethernet and FC interconnect devices, and these must be Virtual Connect Flex-10 modules.

Systems Management Software
  • Cisco UCS: None. Cisco's approach is to utilize the XML API, against which anyone can write to orchestrate the entire compute and network environment. VMware's Ionix is one example, BMC BladeLogic another.
  • HP Matrix: Yes. Requires HP hardware and software.

Stateless Computing
  • Cisco UCS: Yes. UCS Service Profiles can capture the entire personality of the server and its hardware configuration.
  • HP Matrix: Limited capabilities using Virtual Connect, but the hardware configurations must be identical. While VC does have role-based access control enabling the network team to configure VC, once it is in production as part of Matrix the network administrators can no longer make changes.

Ability to deliver native network performance to VMs via hypervisor bypass
  • Cisco UCS: Yes.
  • HP Matrix: No.

Network traffic monitoring & application of live-migration-aware network and security policies
  • Cisco UCS: Cisco VN-Link / Nexus 1000V.
  • HP Matrix: None.

Memory
  • Cisco UCS: 96 GB half-width blade and 384 GB full-width blade (8 GB DIMMs).
  • HP Matrix: With HP BL490c half-height blades: 144 GB with 8 GB DIMMs, 192 GB with 16 GB DIMMs. With HP BL685c (AMD) blades: 256 GB. (Note: New HP BL620 AMD-based blades have been announced with larger memory capabilities but are not yet part of Matrix.)

OS Support for Management SW
  • Cisco UCS: No separate management server required.
  • HP Matrix: Windows Server 2008 SP2/R2,[2] Windows Server 2003 SP2/R2.

Database Support for Management SW
  • Cisco UCS: None required.
  • HP Matrix: Microsoft SQL Server 2008 SP1, Microsoft SQL Server 2005 SP3, or Microsoft SQL Server Express Edition – though the latter only up to 500 systems and 5,000 events, with no remote database support.

Browser Support for Management SW
  • Cisco UCS: Internet Explorer 5.0 or higher; Mozilla Firefox 3.0 or higher.
  • HP Matrix: Internet Explorer 7 or 8, or Firefox 3.x (some limitations).

Runtime Environment for Management SW
  • Cisco UCS: Sun JRE 1.6 or later.
  • HP Matrix: None required.

Added Prerequisite SW for Management SW
  • Cisco UCS: None.
  • HP Matrix: .NET 1.1 Framework, .NET 2.0 SP1 Framework, .NET 3.0 Framework, .NET 3.5 SP1 Framework, AP .NET service, Adobe Acrobat Reader, Adobe Flash Player version 9 or 10, MS iSCSI Software Initiator, SNMP, TCP/IP with DNS installed, Windows Automated Installation Kit (WAIK) version 1.1, Windows Server 2003/2008.

Hypervisor Support
  • Cisco UCS: Supports any x86-based hypervisor. Particular advantages from tight integration with vSphere.
  • HP Matrix: VMware ESX Server 3.5.0 Update 4 or 5; VMware ESX Server 4.0 & Update 1; VMware ESXi; Citrix XenServer 5.5; Windows Server 2008 Hyper-V SP2/R2; Xen on RHEL; Xen on SLES.

Guest OS Support (server)
  • Cisco UCS: Windows Server 2003 R2 (32-bit, 64-bit); Windows 7 with Hyper-V (64-bit); Windows Server 2008 with Hyper-V, Standard and Enterprise Edition (64-bit); VMware ESX 3.5 U4; VMware vSphere 4, 4 U1, 4i, 4i U1; Red Hat RHEL 5.3 (64-bit), RHEL 5.4 KVM (64-bit), RHEL 6 KVM (64-bit), RHEL 4.8 (64-bit) and Fedora; Novell SLES 10 SP3 (64-bit), SLES 11 (64-bit), SLES 11 SP1 XEN, SLES 11 XEN (64-bit); Solaris x86 10.x (64-bit); Oracle OVM 2.1.2, 2.2; Oracle Enterprise Linux; Citrix XenServer.
  • HP Matrix: Windows Server 2008/2003; Microsoft Windows Vista; Red Hat Enterprise Linux 4.8 Update 7 (32-bit; Update 7: AMD64 and Intel EM64T); Red Hat Enterprise Linux 5.4 Update 3 (32-bit; Update 3: AMD64 and Intel EM64T); SUSE Linux Enterprise Server 10 SP3 & SLES 11. (Note: RHEL & SLES VM guests on Hyper-V are not supported by Insight Orchestration or Insight Recovery. Insight Recovery supports non-clustered Hyper-V Windows guests as a technology preview.)

Distributed Virtual Switch Support
  • Cisco UCS: VMware vSphere vDS & Cisco Nexus 1000V.
  • HP Matrix: None – just the standard VMware vSwitch.

Guest OS Support (VDI)
  • Cisco UCS: All.
  • HP Matrix: None (no Matrix automated provisioning support).

VMware vCenter Integration
  • Cisco UCS: Yes.
  • HP Matrix: Limited.

3rd party development
  • Cisco UCS: XML-based API.
  • HP Matrix: None.

QoS
  • Cisco UCS: Yes.
  • HP Matrix: None.

V2P Capability
  • Cisco UCS: No, unless in conjunction with certain storage partners.
  • HP Matrix: Yes.

Switch Efficiency
  • Cisco UCS: One set of top-of-rack switches manages up to 14 chassis (eventually 40).
  • HP Matrix: One set of top-of-rack switches required for each rack.

Minimum cables required per chassis (inc. FC & redundancy)
  • Cisco UCS: 2
  • HP Matrix: 6

Maximum cables potentially needed per chassis (inc. FC & redundancy)
  • Cisco UCS: 8
  • HP Matrix: 34

FCoE
  • Cisco UCS: Yes.
  • HP Matrix: Limited – only within the chassis, with FlexFabric.

Complexity and ease of implementation
  • Cisco UCS: Very fast set-up, though designing and fine-tuning service profiles and templates for optimizing virtual infrastructure provisioning/management can take time. Many Cisco channel partners are certified in implementation, and customers can also take UCS classes.
  • HP Matrix: 60-hour on-site engagement required by the HP Implementation Service – no partner-certified implementers. Customers are also unable to upgrade any software or firmware in the solution and remain in a supported configuration.

Ease of Support
  • Cisco UCS: Customers can apply their own patches and updates to individual components as appropriate.
  • HP Matrix: If a bug is found in any one of the Matrix components, the customer is prohibited from installing an update until the entire Matrix solution is tested and certified.

Mfg. Support
  • Cisco UCS: 3-year.
  • HP Matrix: 3-year.

[1] While the c7000 will work with any HP ProLiant blade, Matrix only works with the blade models listed.

[2] HP strongly recommends the use of Windows Server 2008 SP2, Enterprise Edition (64-bit version) on a ProLiant server with at least 32 GB of memory.

 

Author Disclosure: I work for a professional services company which is also a leading Cisco partner. I researched this article carefully, but welcome any corrective feedback.

07/22/2010: Author Follow-up Note: HP just changed its Web site page titled "The Real Story about the Cisco UCS" that I linked to in this article. Here is the original page as a PDF on ViewYonder.

07/26/2010: Author Follow-up Note: HP’s Director of Business Strategy blogged in response to this post. I in turn commented back on his post, but my comment was taken offline (I assume it is going through some sort of standard review process). While I did not keep an exact copy, it is close to the following:

I am the blogger mentioned in your post. I addressed the comparison issue at the beginning of my article and still stand by it 100%. In fact, unbeknownst to me, searchdatacenter.com published an article the day before the publication of my post leading off with a grouping of Cisco UCS and HP Matrix.

In terms of access to HP experts, Jason Treu was my only point of contact. While Gary Thome had personally and graciously taken the time to speak with me following my first post, I know how busy he is and did not feel it appropriate to reach out to him directly. Instead, I sent the following email to Jason. He never responded.

From: Steve Kaplan
Sent: Friday, June 11, 2010 6:41 AM
To: jason.treu@hp.com
Subject: Questions for HP

Hi Jason,

Gary asked that I bring any questions to HP.  I am planning to write an updated post on UCS vs Matrix, and have the following questions:

1)   Ballpark # of Matrix customers

2)   Reference list of 3 customers to call

3)   Any information on the upcoming channel program for authorized channel implementation of Matrix

4)   Any other relevant updates/capabilities about Blade Server Matrix.

Thanks,

Steve

[my contact information including both office and cellular phone numbers]


Comments



  1. Hello,
    Can’t figure out why you chose to compare 24 blades when 32 would be a better comparison as it is divisible by both HP and Cisco blades per chassis; 4 Cisco UCS chassis vs 2 HP Chassis.
    Stopped reading there as it seems like you are trying to dupe the reader. Very excited about UCS though. Currently involved in an eval ourselves. Matrix seems like a hurried, ugly response to UCS.

  2. Andrew,
    I used 24 blades because I recently went through an ROI calculation for a customer with 26 blades, and 24 was convenient. But, I redid the calculations using 32 blades as you suggest. As you can see, it resulted in a still more advantageous price delta for UCS which generally scales more efficiently. Thank you for catching this, I appreciate it.

  3. Nice article
    I recently started using UCS after being a long time user of 15x c7000 chassis
    The one item missing in your price comparison that made a huge deal for our TCO was aggregation layer switches
    CoreSW – distributionSW – accessSW – servers
    UCS basically collapses the distribution and access layer with the 6040/6020
    UCS 14chassis = 112 servers
    c7000 7chassis = 112 servers
    Uplink out of the environment
    UCS needs minimum 4 ports (1 port channel to each core)
    c7000 needs minimum 28 ports for 1 core port channel to each chassis ethernet switch

  4. It has been said in the previous post of the series, but again, you shouldn’t be comparing Cisco UCS with Matrix. It has nothing to do, you should be comparing UCS to BladeSystem OR vBlocks with Matrix.
    I am also missing a detailed comparison on bandwidths, I have the feeling leaving just 8 ports (when maxing out the chassis) for all the servers is like a little to nothing bandwidth for the VMs.
    On the other hand, having the mgmt solution inside of hardware elements leaves you with a hard limit for the max number chassis to be managed. In the best case you said it can handle up to 14 chassis (eventually 40) … this doesn’t seem to be like “enterprise scalability”. AND in this case, you would leave just one port for each of the chassis, therefore no usable BW for the servers or furthermore the VMs.
    And, if you would be comparing BladeSystem with UCS you would realize there are far more issues to have in mind when buying servers, I find the UCS offering a bit limited in server choice / connectivity, probably enough for a number of customers but definitely not a complete portfolio.
    Quite an interesting read but you should be delivering a less biased article. I would be really interested in knowing your point of view in a detailed comparison of Matrix & vBlocks

  5. Great post. I am just wondering about the vDS support in HP. I am not aware of any limitations with vDS not being supported with VC/Flex-10. I have been trained and am certified in both UCS and HP C-Class Blades. I still (personally) slant to preferring HP, but UCS is also a great fit for many customers. Cisco is certainly giving HP a run for the money.
    (Disclaimer I also work for a Cisco partner, but we are also an HP partner).

  6. Patrick,
    Thanks for your comments. I would welcome the opportunity to learn more about the TCO calculations you derived if you are willing. Please email me: steve.kaplan at inxi.com. thanks.

  7. Jose,
    I think I explained pretty well why I decided to go with the UCS vs. Matrix comparison. In fact, an article on Thursday in searchdatacenter.com leads off with a UCS vs. Matrix comparison. Unless the Matrix picks up momentum, I do not know if I’ll be doing any further comparisons whether with UCS or other solutions.

  8. David,
    Thanks for the compliment. As both a Cisco & HP partner, I particularly value your perspective. When researching the article, I spoke with a couple of other partner friends in the same category. I confess I do not know about vDS port group support in either the VC/Flex-10 or Matrix configurations, though you may be able to find the latter in the compatibility guide I linked to in the article.

  9. what about Power consumption, foot print in datacenter, and IO throughput per chassis? Maybe you can consider to put this into your comparison as well

  10. Craig,
    I did mention in the article that the Matrix, due to increased power requirements, can have only 2 enclosures per rack vs. 5 chassis for UCS (though it can accommodate twice as many servers per enclosure). As far as actual power consumption goes, it would be interesting to compare the two, although I honestly am not quite sure how to go about that. I will see if I can come up with something. In any case, I don’t expect the difference to be material one way or another.

  11. Craig/Steve,
    Here’s a recent 3rd party power comparison between Cisco UCS and HP BladeSystem:
    http://www.principledtechnologies.com/clients/reports/Cisco/UCSPower0310.pdf
    It was sponsored by Cisco but it also provides all the details of the power comparison. HP is free to point out the mistakes, if any. So far, hearing just crickets from HP on the report is telling…
    Regards,
    -sean

  12. Steve, I believe the momentum depends on the country, here in Spain Matrix is doing well, but you are quite right, the hype on Cisco’s products is always higher.
    Of course I am not demanding that you write an apples-to-apples report, but as I mentioned I feel you left out several technical points that should be in this write-up:
    · Total bandwidth when maxing out the management switches (this is 40 chassis I believe with just 2x 10Gb FCoE links for 8 half height servers), there you need 4Gbps storage traffic, leaving you 6Gb Eth for 8 servers. How much BW is there for the VMs then?
    · If you would like to increase this BW, then you can use up to 8x 10Gb FCoE links thus limiting the number of chassis scalability. (I know you can use an external manager to manage several UCS 6140 switches from a single pane of glass but this was mentioned as a weakness in HP’s solution)
    · Full-height servers with the Catalina memory adapter are not balanced servers, taking into account you have very limited I/O Bandwidth, you shouldn’t be delivering so much memory for VMs … you are just increasing the I/O bottleneck while adding components to the server that weren’t on Intel’s mind when they designed the memory controller
    It is true HP can / may use a higher number of links for each chassis, but this is for the greater good. There is a good point with the Palo adapter where you can “partition” (I know this is not the right word, please excuse my limited english) up to 58 times, this is really good to assign ports to VMs directly but, what about when you need more BW (again!), with HP you can use another pair of Virtual Connects and a 2 port Flex-10 mezzanine and there you go, 20 more Gb Eth available). This leaves HP’s half-height servers with up to 40Gb Eth while maintaining 2x 8Gbps FC ports.
    Maybe I am wrong about these numbers, but I really feel you should be taking care of them when talking about enterprise solutions, especially when you talked about scalability. Please correct me if I am wrong in any of those points.
    Thanks!
    Jose.

  13. Jose,
    Thanks for your comment and question.
    UCS is designed to deliver a robust computing environment that is simple to construct and engineer for application requirements. It provides a simple range of choices to each server – 2.5 Gb, 5 Gb and 10 Gb within each chassis. Applications within a UCS cluster can be vMotioned to the appropriate chassis with the bandwidth required for the application. With Web servers, add a chassis with just two links. With DB applications that might require 8 Gb of sustained bandwidth, add a chassis with 4 links.
    As I mentioned, UCS will eventually scale to a maximum of 40 chassis with up to 1 Tb of traffic with the currently shipping Interconnect. Part of the UCS sizing is to engineer planning around how many applications require what type of bandwidth and map accordingly. As the fabric becomes more capable, the investment in UCS hardware is maintained as software updates accommodate these changes. Just add the appropriate links and vMotion VMs to the chassis with the appropriate bandwidth.
    It appears that what you’re really asking is how to scale to the maximum number of servers while accommodating rapid changes in bandwidth demand. As you note, the UCS Palo adapter enables dynamic distribution of bandwidth across the UCS environment as application demands change, enhanced with policy-driven tools such as the Nexus 1000V and UCSM service profiles. Few applications require more than 1 Gb of bandwidth – those that do, especially at a sustained level, are quite rare. With the Palo, UCS can segment and allocate traffic to those apps needing bandwidth at a much more granular and efficient level than with Virtual Connect.
    You criticize the Catalina ASIC because for this first generation Cisco did surrender a bandwidth step (going from 1333 MHz memory to 1066 MHz), but if using all 18 memory slots available on a Matrix system, you have to go from 1333 MHz all the way down to 800 MHz. And on the new UCS M2 full-width blades, all memory can now run at a maximum speed of 1333 MHz. Here is a good article from last April by Pete Karelis on the benefits of Cisco’s Catalina chip: http://www.goarticles.com/cgi-bin/showa.cgi?C=2753596
    The discussion of memory bandwidth is a bit of a red herring in any case since it only touches a very few high-end HPC types of applications – those requiring sustained high memory transfers for hours of compute cycles at a time. Average applications using Nehalem CPUs on the desktop show little impact from lower memory bandwidth on application performance. This review provides a good explanation: http://www.tomshardware.com/reviews/memory-scaling-i7,2325-11.html I also like the explanatory comment by Joe Onisick in response to this Blades Made Simple post: http://bladesmadesimple.com/2010/05/dell-flexmem-bridge-helps-save-50-on-virtualization-licensing/

  14. Jose,
    It really seems as though you’re missing the point of the architecture. Your comments are arguing against the plausibility of the maximum architectural limits within UCS. For the sake of argument, let’s say you are totally correct:
    1) 40 chassis would not provide enough bandwidth at 20Gbps shared across 8 blades.
    2) 40Gbps of onboard I/O for a B250 blade would not be enough I/O to support utilizing 384GB of memory.
    So with that let’s envision a UCS system using 10 chassis each max attached with 4 links for 80Gbps of bandwidth. Let’s place 4 B250 blades in each chassis, 192GB of memory, and 2x VIC/Palo cards.
    In this configuration I will have 40Gbps to each blade that I can granularly tune per application or VM. This provides I/O flexibility not found on any other architecture. Rather than having 20-30Mbps utilization (industry average) on my redundant 4Gbps dedicated FC HBAs, I now have that bandwidth shared across LAN and SAN on the same pipe. During the day when LAN traffic is heavy it utilizes the necessary bandwidth, and at night the FCoE-based backup kicks in and has access to enough bandwidth to shrink my backup window.
    Additionally, I have saved typically about 50% on memory costs by using less costly 4GB DIMMs to reach 192GB; this is true over most major manufacturers except the one that lowered their memory pricing below their own costs (losing money on memory) in order to combat this Cisco advantage.
    Additionally without a single additional license or mandatory service hour this system described above has 1 single point of management for the compute environment all the way to the VMware networking. That means 40 blades managed under one system without anything else involved. If I’m running VMware and using a very reasonable 25:1 virtualization ratio that means I have 2-3 enterprise racks, 1 point of management running 1000 VMs. All well within very reasonable CPU memory and I/O constraints.
    The maximums of any system will be questionable for many workloads but have benefits in corner cases. The real value of an architecture is its flexibility: how much can I tailor it to the specific application requirements?
    384GB has its use case in high memory requirements and possibly databases, whereas 40 chassis at 20Gbps may be more than enough for web clusters or hosted environments.
    Joe

  15. Steve -
    Great analysis. You should also know that Forrester Research (James Staten) did somewhat of a similar comparison, but also included offerings from Dell, Egenera, and IBM.
    While you pointed it out in your analysis, I must emphasize that when doing a total-cost calculation, the Cisco UCS doesn’t come complete with the management SW you’d typically need for an enterprise data center. So there’s additional cost and integration there that’s already included in the Matrix (Jose alluded to this).
    With full disclosure, I work for Egenera — and our architecture is very similar to that of UCS, except we use standard Ethernet and I/O. Plus, our SW already includes SW provisioning, HA and DR services, so there is no additional mgmt SW to purchase :)
    The more “real world” analyses we do, the better. It doesn’t serve any pragmatic purposes simply to compare speeds-n-feeds.

  16. Ken,
    Thanks for your comment. I will be contacting you soon and hope to learn more about your views on Egenera, UCS and converged infrastructure in general.

  17. (Disclosure NetApp Employee)
    I noticed that the V2P portion says “only with certain storage vendors”, I think it would be most accurate to say “Only with NetApp or IBM N-Series” – see
    http://blogs.netapp.com/virtualstorageguy/2009/10/vce-101-oracle-on-vmware-without-limits.html#more
    For more details.
    Regards
    John Martin
    Principal Technologist
    NetApp ANZ

  18. John,
    Thanks for your email. You might have noted that the NetApp post to which you link starts off referencing me; I am well aware of the NetApp capabilities. The reason I wrote “with specific storage vendors” is that I fairly recently read a post by EMC saying that their SANs could now do this as well (I believe it was by Chuck Hollis). Please let me know if you think I misinterpreted this claim.

  19. It is worth noting that the power consumption comparisons performed by “principled technologies” were done with 4 PDUs in the Cisco UCS and 6 PDUs in the HP gear. One of the ways you can tell someone doesn’t know what they are doing is when they have 6 PDUs in an HP chassis.

  20. Greetings-
    I enjoyed, and learned a lot by reading these blogs. There was a number of comments regarding which HP solution should be compared with UCS. For me, what matters is feature functionality and effectiveness. I would really appreciate if you could point me to a comparison of HP’s other solutions with UCS.

  21. I would like to share my personal experience with the Matrix. I used to work for a company in the UK which had the Matrix deployed in their Data Center. Because the Matrix has to be deployed by authorized personnel, HP sent a couple of engineers to install it for us. We had to go through the mentioned 2-week deployment just to get the thing up and running. We ended up with some generic deployment and configuration. The solution is not fit for purpose at all – a public cloud, as it was marketed and sold to us. It is almost over a year now and the Matrix is just sitting there collecting dust while the company is struggling to find clients for it. The Matrix in our case consists of 2 racks, including storage, and the cabling is just appalling – a bunch of copper and fiber cable running inside and between the racks. Again, this is first-hand experience and has nothing to do with all the technical mambo-jumbo. I have seen several production deployments of UCS now and they don’t look anything like the Matrix – installed in a single rack with just a few neatly bundled cables, installed and managed by the clients themselves.

  22. HP Mambo Jumbo,
    I have heard of some success stories, but also (albeit 2nd hand other than you) of situations such as the one you’ve encountered. I am still very curious as to how HP counts thousands of Matrix customers, as I wrote about on March 17, 2011: http://www.bythebell.com/2011/03/whats-behind-the-surge-in-hp-matrix-customers.html

