Blades – Cisco Nexus – and Converged Networking

April 7, 2013

Converged networking enables storage and data network traffic to share adapters, network links and switches at least part of the way through your data center, before splitting up and going their separate ways. What’s the point? Cost savings and the ability to simplify your design. When you look at the cost of connecting your servers to the network and to the SAN, per-port costs can be pretty significant, especially when you’re talking about 8G SAN ports and 10G Ethernet ports. So if you can reduce your port count, you can save thousands. On top of that, you can save more by eliminating separate SAN and Ethernet adapters in your servers and separate interconnect modules in your blade enclosures.
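To put rough numbers on that, here is a back-of-the-envelope sketch in Python. The port prices and dual-port adapter counts are placeholder assumptions, not real quotes, so plug in your own figures.

```python
# Back-of-the-envelope port-count and cost comparison for one enclosure's
# worth of servers. All prices are made-up placeholders -- substitute your
# own quotes.
servers = 16

# Traditional: one dual-port 10G Ethernet NIC and one dual-port 8G FC HBA
# per server, each port needing a matching switch port.
eth_port_cost = 500        # assumed cost per 10G Ethernet switch port
fc_port_cost = 800         # assumed cost per 8G FC switch port
traditional_ports = servers * (2 + 2)
traditional_cost = servers * (2 * eth_port_cost + 2 * fc_port_cost)

# Converged: one dual-port CNA per server, both traffic types sharing
# 10G converged switch ports.
converged_port_cost = 700  # assumed cost per 10G converged (FCoE) port
converged_ports = servers * 2
converged_cost = converged_ports * converged_port_cost

print(f"traditional: {traditional_ports} switch ports, ${traditional_cost}")
print(f"converged:   {converged_ports} switch ports, ${converged_cost}")
print(f"savings:     ${traditional_cost - converged_cost}")
```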

Back to basics for a minute: the traditional data center used to contain (and for many perhaps it still does) stacks of individual rack-mounted servers. To provide redundant connectivity to the Ethernet network and the SAN, each server would contain at least one dual-port Ethernet adapter and one dual-port Fibre Channel HBA. These adapters were then cabled to their respective switches, for a total of four cables per server. If you stacked, say, ten servers per rack, that's forty cables per rack, not counting power cables, keyboard and monitor cables, out-of-band management connections, and so on. How about twenty servers per rack? That's eighty cables.
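As a quick sanity check on that arithmetic, here is a trivial Python helper; the two-plus-two port count simply reflects the dual-port NIC and dual-port HBA described above.

```python
def data_cables_per_rack(servers, eth_ports=2, fc_ports=2):
    """Data cables per rack for traditional rack servers: one dual-port
    Ethernet NIC plus one dual-port FC HBA per server."""
    return servers * (eth_ports + fc_ports)

for n in (10, 20):
    print(f"{n} servers per rack -> {data_cables_per_rack(n)} data cables")
# 10 servers per rack -> 40 data cables
# 20 servers per rack -> 80 data cables
```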

Where do all these cables go? If you have a row of server racks in your data center, and each rack has forty to eighty data cables coming out the top, not only will the backs of your racks be a mess of cables, potentially blocking airflow, but you'll also need either Ethernet and SAN switches at the top of each rack, or expensive structured cabling in each rack to bring all those connections to a common point, where an even bigger mess of cabling comes together for connection to your switches.

This, I think, is where blades really help. Blade enclosures enable you to pack up to sixteen two-processor servers into 10U of rack space, which is great in itself, but they also enable those sixteen servers to share connections to your networks, using far fewer cables. Of course, in order to provide enough throughput for those sixteen servers you'll need 10G Ethernet connectivity. Two or perhaps four 10G uplinks can provide enough throughput for the entire blade enclosure, including both IP networking and storage I/O.
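Here is a rough sketch of the bandwidth math behind that claim. It assumes traffic from all sixteen blades spreads evenly across the uplinks, which is a simplification, since real workloads are bursty and rarely all busy at once.

```python
# Rough per-blade bandwidth from a shared set of 10G uplinks, assuming an
# even spread of traffic across all blades (a simplification).
blades = 16
uplink_gbps = 10

for uplinks in (2, 4):
    total = uplinks * uplink_gbps
    print(f"{uplinks} x 10G uplinks: {total}G total, "
          f"~{total / blades:.2f}G per blade at full load")
```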

To combine data and storage traffic, the blades require converged network adapters, or CNAs. The CNA replaces the separate Ethernet and Fibre Channel adapters. Today's HP blade servers and Cisco UCS blade servers come equipped with CNAs. The CNA transports Fibre Channel storage traffic as FCoE (Fibre Channel over Ethernet) alongside the server's IP network traffic.
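The sketch below shows, very roughly, why FCoE links are run with a "baby jumbo" MTU: an entire FC frame rides inside a single Ethernet frame. The field sizes are approximate and listed only for illustration.

```python
# Approximate size of the largest FCoE frame on the wire: an entire FC frame
# (up to a 2112-byte data field plus FC header and CRC) is encapsulated in
# one Ethernet frame, which is why FCoE links need an MTU well above the
# standard 1500 bytes. Field sizes below are approximate.
FCOE_ETHERTYPE = 0x8906    # EtherType assigned to FCoE

eth_header   = 14          # destination MAC, source MAC, EtherType
dot1q_tag    = 4           # VLAN tag carrying the FCoE VLAN
fcoe_header  = 14          # version, reserved bytes, encapsulated SOF
fc_header    = 24
fc_payload   = 2112        # maximum FC data field
fc_crc       = 4
fcoe_trailer = 4           # encapsulated EOF plus padding
eth_fcs      = 4

frame_bytes = sum([eth_header, dot1q_tag, fcoe_header, fc_header,
                   fc_payload, fc_crc, fcoe_trailer, eth_fcs])
print(f"largest FCoE frame: ~{frame_bytes} bytes")   # roughly 2180 bytes
```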

To support this converged traffic, FCoE-capable interconnect modules are required in the blade enclosures. In Cisco UCS enclosures, these are called fabric extenders, or FEX modules. There are also FEX modules for HP blade enclosures, known as B22 modules. Alternatively, HP FlexFabric modules can be used, but these provide a different cabling model, which I'll explain later.

FEX modules must be uplinked to Cisco Nexus 5000 series network switches. The FEX acts like a remote line card of its parent Nexus switch. Ethernet and FCoE traffic is carried over the uplinks to the Nexus switch, where it is split apart and sent over Ethernet or Fibre Channel cables to the larger network and the SAN, respectively. Up to 24 FEXs can be connected to a Nexus switch, and they all share the network and SAN connections headed northbound. The result is that cabling is reduced both at the blade enclosures and toward the network core and the SAN switches.

In the HP blade enclosures, HP FlexFabric modules can be used as an alternative to the FEX modules. These don't depend on Cisco Nexus switches, because Fibre Channel SAN connections can be cabled directly to the FlexFabric modules. This requires more cabling overall, since every blade enclosure needs its own SAN cables, whereas the FEXs share the SAN cables plugged into the Nexus switches. Still, FlexFabric modules enable the use of CNAs in the servers, eliminating the need for separate SAN HBAs and interconnects.
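Here is a small sketch comparing the cabling of the two approaches as enclosures are added. The per-enclosure link counts are assumptions modeled on the design described in this post, not vendor sizing rules.

```python
# Cable counts as blade enclosures are added. Link counts per enclosure are
# assumptions for illustration, not vendor sizing rules.
def fex_cables(enclosures, converged_per_enclosure=4, shared_san_links=4):
    # FEX model: converged uplinks per enclosure, plus SAN cables that hang
    # off the Nexus pair and are shared by every enclosure.
    return enclosures * converged_per_enclosure + shared_san_links

def flexfabric_cables(enclosures, uplinks_per_enclosure=4, san_per_enclosure=4):
    # FlexFabric model: each enclosure carries its own FC cables to the SAN
    # in addition to its Ethernet uplinks.
    return enclosures * (uplinks_per_enclosure + san_per_enclosure)

for n in (1, 2, 4, 8):
    print(f"{n} enclosure(s): FEX/Nexus = {fex_cables(n)} cables, "
          f"FlexFabric = {flexfabric_cables(n)} cables")
```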

The blade architecture that I recently deployed where I work is shown below. As you can see, the blade enclosure is connected to the Nexus 5k switches using four converged 10G links. That's all the cabling it needs. There are also four SAN links from the Nexus 5ks to the SAN switches, so with only one blade enclosure there's been a net-zero cabling reduction, but as additional enclosures are added, the number of SAN cables stays low (subject to whatever your maximum oversubscription ratio might be). Additional northbound network and SAN connections can be added as the existing links start getting highly utilized, which, after all, is how you save money: by running your equipment as hard as it will go.
[Diagram: blade enclosure uplinked to a pair of Nexus 5000 switches over four converged 10G links, with four Fibre Channel links from the Nexus switches to the SAN switches]
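On the oversubscription point, here is a rough calculation of how the ratio grows as more enclosures share those four SAN links. The blade count and the 8G-equivalent per-blade storage bandwidth are assumptions for illustration only.

```python
# SAN oversubscription as enclosures are added behind four shared 8G links.
# Blade count and per-blade storage bandwidth are illustrative assumptions.
blades_per_enclosure = 16
fc_gbps_per_blade = 8          # treat each blade as one 8G-equivalent FC path
san_uplinks = 4
san_uplink_gbps = 8

for enclosures in (1, 2, 4):
    downstream = enclosures * blades_per_enclosure * fc_gbps_per_blade
    upstream = san_uplinks * san_uplink_gbps
    print(f"{enclosures} enclosure(s): {downstream}G behind {upstream}G "
          f"= {downstream / upstream:.0f}:1 oversubscription")
```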
