At the Interop Las Vegas show in May, I got an up-close look at the new HP 12910 switch. I thought I would post some pictures I took and give my take on this new platform. First, I should point out that this is the smaller of two new switches from HP. There is a larger 16-slot switch that was not on display in HP’s booth at Interop. Second, these new 12900s were brought over with the 3Com acquisition. They are not brand new HP designs, not that it really matters.
At first glance, one might look at the 12910 or 12916 and think they are Cisco Nexus 7000 clones. Looking at the 12910, you can see the physical resemblance to the Nexus 7010. Upon closer inspection, though, the platform itself is a bit different. There are actually 10 slots for line cards, and the supervisors are located in the rear of the chassis, so in actuality it is a 12-slot chassis. There are also 6 fabric modules instead of the 5 on the Nexus 7010. I could go on, but let me just show you the up-close pictures and comment on each one. I should also point out that I may be completely wrong in some of my comments. This is a new chassis, and other than a spec sheet, not much information is available. I suspect that will change in the near future.
Notice the cable management at the top of the chassis. Also, the bottom portion below the line cards appears to be for air intake. This is a front-to-back airflow chassis. I was able to remove the bottom cover, and it looks like this:
You can see that only the top portion of this is for air intake. The bottom portion is where the 4 power supplies are housed. These are hot swappable, of course, but the difference is that there are no spots to plug in a traditional power cord. That happens in the back, where there is a PDU (Power Distribution Unit). It looks like this:
A different way to break out power compared to most chassis I see. I’m not saying it is a bad design as I am not a power expert by any means. Just another way to do it. I do like the fact that you plug the cables into the rear of the chassis. A bit cleaner than having to run the power cords through the rack to get to the back where the outlets probably reside.
Those red arrows are pointing to little metal loops that I believe are meant for securing fiber and copper cables to the chassis to keep them neat and orderly. The problem as I see it is that they are just big enough for plastic tie wraps, but too small to use velcro strips. I absolutely hate using plastic tie wraps on cabling in data centers unless they are used on the back of patch panels to bundle fixed drops going to wall outlets or another patch panel. I’ve just seen too many fiber and even copper cables get ruined when you have to add an additional cable to the bundle or remove one. If there is enough slack in the tie wrap to cut it with a pair of snips or scissors, then it isn’t too bad. Unfortunately, people tend to tighten them up to the point where you can’t easily cut one without damaging the cables it is wrapped around. Perhaps there are tiny velcro straps I am not aware of, or these loops have a different purpose.
Here is a shot of what I believe are 10Gig and 40Gig line cards. The metal levers that secure the line cards into place are offset enough from the card that you can effectively remove a cable from the ports closest to the levers without wanting to scream obscenities at the line card.
I thought this rear chassis map was a nice touch. The fan trays, fabric modules, and supervisor slots in red are marked so the odds of someone putting a supervisor in a fabric slot or vice versa are minimized. Yes, that could happen.
Some Technical Details
You can look at the spec sheet here. A few things worth noting:
1. This switch is OpenFlow 1.3 capable.
2. It has 23 Tbps of switching capacity.
3. It will support Multitenant Device Context (MDC) in 2014, which I would compare to Cisco’s Virtual Device Context. This allows you to segment the physical chassis into 4 distinct logical switches for multi-tenancy, or to separate functions like WAN aggregation from LAN aggregation.
4. Plenty of 10Gbps ports (480) and 40Gbps ports (160), and it will eventually (Q1 2014) support 32 100Gbps ports. I suspect the 40Gbps and 100Gbps density will increase with newer fabric modules in the future.
5. It will support Ethernet Virtual Interconnect (EVI) in 2014, which allows you to extend layer 2 across a total of 8 different data centers. This would be similar to what Cisco does with Overlay Transport Virtualization on the Nexus and ASR platforms. It runs over any layer 3 connection, so as long as you can route using IP, it will work. This is great for things like vMotion.
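To put those port counts in perspective, here is a quick back-of-the-envelope sketch. This is my own arithmetic, not an HP figure beyond the numbers quoted above, and it treats each configuration as a fully loaded chassis of a single port type:

```python
# Aggregate line rate per hypothetical full-chassis configuration,
# compared against the quoted 23 Tbps switching capacity.
# Port counts and speeds come from the spec list above.
configs = {
    "10GbE": (480, 10),    # 480 ports at 10 Gbps
    "40GbE": (160, 40),    # 160 ports at 40 Gbps
    "100GbE": (32, 100),   # 32 ports at 100 Gbps (planned for Q1 2014)
}

FABRIC_TBPS = 23  # stated switching capacity

for name, (ports, gbps) in configs.items():
    demand_tbps = ports * gbps / 1000  # Gbps -> Tbps
    headroom = FABRIC_TBPS - demand_tbps
    print(f"{name}: {demand_tbps:.1f} Tbps aggregate, "
          f"{headroom:.1f} Tbps of fabric headroom")
```

By this simple count, even the densest option today (4.8 Tbps of 10GbE or 6.4 Tbps of 40GbE) sits well under the 23 Tbps fabric figure, which is why I suspect there is room for denser 40Gbps and 100Gbps line cards down the road.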
Much like the Cisco Nexus 7000 family, Brocade VDX 8770, and other large switches, the HP 12910 isn’t for everyone. It’s meant to move large amounts of traffic across data center networks. Most customers out there don’t need this kind of power. However, for those that do, this switch gives you a ton of throughput with some interesting features. More information on this platform should be coming as it is rolled out to production networks around the world. I just wanted to share some pictures and thoughts around this new platform.
Disclaimer: HP paid for my travel and accommodations at Interop Las Vegas 2013. I was not asked to write anything about them in return and received no compensation for my time spent with them at this or any other event they sent me to.