Nexus


So now that we know a little bit about Fibre Channel SANs and the goal of the Nexus 5k/2k, we can talk about some more advanced topics.  One pair of topics you'll come across when working with the 5ks is NPV and NPIV.  NPV stands for N Port Virtualization.  NPIV stands for N Port ID Virtualization.  Both of these technologies are similar in what they accomplish, but they are applied to different pieces of the SAN fabric.

NPIV
In a standard SAN configuration, you create zones that specify which devices are allowed to talk to which.  Zones (in most cases) use WWNs to give different devices different access on the SAN fabric.  Take the example of a file server: it needs to see the disk array that holds its local disk.  The disk array's WWNs and the file server's HBA would be members of a zone allowing them to talk to each other.  Now, what happens if that file server is actually a VMware ESX box hosting multiple servers?  In the example just given, the server has only one way to identify itself on the SAN: the HBA's WWN.  But what if we want to map different LUNs to different VMs on the ESX server?  Enter NPIV…
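As a rough illustration, a zone like that might look as follows on an MDS/NX-OS switch.  This is just a sketch; the zone and zoneset names, the pWWNs, and the VSAN number are all made up:

```
! All names, pWWNs, and the VSAN number here are hypothetical
zone name FILESERVER_TO_ARRAY vsan 10
  ! the file server's HBA
  member pwwn 21:00:00:e0:8b:05:05:04
  ! the storage array's target port
  member pwwn 50:06:01:60:10:60:14:f5

! The zone only takes effect once its zoneset is activated
zoneset name FABRIC_A vsan 10
  member FILESERVER_TO_ARRAY
zoneset activate name FABRIC_A vsan 10
```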

NPIV allows the ESX server to log into the fabric multiple times.  The server can then register multiple FCIDs, which lets the administrator give VMs their own WWNs and FCIDs for zoning.
[Diagram: a VM host with a single HBA performing two fabric logins through an NPIV-enabled switch]

So if we look at the diagram above, you can see that we have a VM host with a single HBA in it.  The HBA talks to the NPIV-enabled switch and runs the FLOGI process twice to register two FCIDs, so that Host 1 can be in a zone with LUN 1 and Host 2 can be in a separate zone with LUN 2.
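On MDS/NX-OS platforms, NPIV is simply a feature you turn on at the switch — a minimal sketch:

```
feature npiv
! Confirm the feature is enabled
show feature | include npiv
```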

NPV
In a traditional SAN fabric, each switch gets assigned a Domain ID.  The Domain ID is an 8-bit field in the FCID.  Using our basic network math we can discern that there are at most 256 possible Domain IDs.  In reality, some of those IDs are reserved and can't be assigned to switches, leaving us with 239 usable IDs — and therefore a hard limit of 239 switches per fabric.  While that may seem like a lot to some of you, in large environments that hard limit can become a serious issue as you try to scale the fabric.  The solution to this problem is NPV.  NPV allows SAN switches to essentially become N port proxies.
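For reference, you can see (and statically pin) the Domain ID each fabric switch consumes on an MDS/NX-OS switch.  A hedged sketch — the Domain ID and VSAN values below are made up:

```
! Statically assign Domain ID 0x65 to this switch in VSAN 10 (hypothetical values)
fcdomain domain 0x65 static vsan 10
! List the Domain IDs currently in use in the fabric for that VSAN
show fcdomain domain-list vsan 10
```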
[Diagram: an MDS core switch whose F ports connect to a Nexus 5k running in NPV mode via NP ports; host HBAs attach to the 5k's F ports below]

When talking about an NPV switch, I think it's easiest to think of that switch as an HBA.  Take a look at the diagram above.  In this case the Nexus 5k is configured as an NPV switch.  The ports on the MDS appear as F type ports, confirming that the MDS logically sees the 5k as an N port device.  In turn, the ports on the 5k are NP, or N port Proxy, ports.  The 5k then uses NPIV and proxies all of the connections for its attached hosts.  The HBAs at the bottom of the diagram see the ports on the 5k as F type ports, just like an N port device would on a normal fabric switch.  You'll note that I mentioned that NPV actually uses NPIV to some degree, so one of the first steps in configuring NPV is to ensure that both switches support NPIV and have it enabled.
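To make that concrete, here's a minimal sketch of what the two sides might look like.  The interface numbers are hypothetical, and be warned that switching a 5k into NPV mode erases its configuration and reboots the switch:

```
! On the MDS core switch: NPIV on, fabric-facing port in F mode
feature npiv
interface fc1/1
  switchport mode F
  no shutdown

! On the Nexus 5k: NPV mode (triggers a write erase and reload),
! uplink to the MDS in NP (N port Proxy) mode
feature npv
interface fc2/1
  switchport mode NP
  no shutdown
```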

The benefits of NPV are many.  Take, for example, a mixed-vendor fabric.  In the past, you had to worry about interop modes, whether the two switches would talk correctly, and whether vendor-dependent features would still work.  Since you aren't actually managing the NPV-enabled switch as part of the fabric and it's just proxying connections, you can easily add switches without worrying about interoperability.  Additionally, NPV helps solve the Domain ID limitation: NPV-enabled switches don't get assigned a Domain ID since they aren't technically manageable as switches within the fabric.  Finally, NPV reduces switch management/administration — NPV-enabled switches don't receive any additional SAN configuration once they are up and running in NPV mode.

Coming up next we’ll discuss the actual Nexus configuration.  Stay tuned….


So if you're working with Cisco at all these days, you've probably heard of the Nexus platform.  I've had some exposure to the 7ks, but more recently we've been doing some testing with the 5k/2k platforms.  The 5k is a layer 2 switch that allows FEX (Fabric Extender) modules to connect to it.  The FEX in this case would be the Nexus 2k.  I like to think of the 2k as a line card in a 6500 chassis: it has ASICs and is very much a layer 2 switch, but the brains of the operation are in the 5k to which the 2ks connect.  To be clear, there is NO configuration done on the 2ks.  When the 2ks are connected to the 5ks, and the 5ks are configured for the FEXs, the ports from the FEX show up in the 5k configuration.  The idea is that the 5k could be a 'middle of row' data center solution with multiple 2ks hung off of it in adjoining racks.  This reduces the complexity of access layer switches since the only real configuration is on the 5k.

The other benefit of the 5k/2k solution is that they support 10 Gig FCoE.  That is, with a fiber module in the 5k, I can connect the 5k to a SAN switch (an MDS in most cases) and then extend a VSAN to the 5k.  The 5k can then associate the VSAN with a VLAN.  The access ports on the 2k are then configured as trunks and allow traffic from both the SAN VLAN and the data VLAN (which is also connected to the 5k) to traverse a single link to the server.  The server then uses what are called CNAs (Converged Network Adapters) to split the FC and IP traffic and deliver it to the host.  So rather than having dual NICs (or more) and dual HBAs, we now have dual CNAs that deliver redundant paths to both the IP network and the SAN fabric.
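A minimal sketch of that VSAN-to-VLAN mapping on the 5k.  The VSAN, VLAN, and interface numbers here are all hypothetical:

```
feature fcoe

! Create the VSAN and map it onto an FCoE VLAN
vsan database
  vsan 10
vlan 100
  fcoe vsan 10

! Virtual FC interface bound to the converged Ethernet port
interface vfc1
  bind interface Ethernet100/1/1
  no shutdown
vsan database
  vsan 10 interface vfc1

! The FEX access port trunks both the data VLAN and the FCoE VLAN
interface Ethernet100/1/1
  switchport mode trunk
  switchport trunk allowed vlan 1,100
```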

It had been a while since I worked with Fibre Channel, so to wrap up this post I'm going to define some of the basic terms/acronyms you should know when working with this sort of technology.  Some will be obvious to many of you, but I'm going to cover all of the basics.  In following posts I plan on discussing NPV and NPIV, 5k/2k configuration, and troubleshooting steps for FCoE on the Nexus platform.

SAN/FC Terms and Acronyms
SAN – Storage Area Network
FC – Fibre Channel
N(ode) Port – An end node port, either an HBA on a server or a target port on a storage array
F(abric) Port – A port on an FC switch that is connected to an N port
E(xpansion) Port – A port on an FC switch that is connected to another FC switch
ISL – Inter-Switch Link.  The connection between two E ports
WWN – World Wide Name.  A globally unique 64-bit number used to identify nodes on a SAN fabric
FLOGI – The fabric login process
N Port address – Also called the N Port ID.  A 24-bit address that is automatically assigned to a WWN during the fabric login process.  The N port address is also commonly referred to as the Fibre Channel ID (FCID).

