
So now that we have our base config running, we can take a look at configuring the native Fibre Channel interfaces, building the connection to the CNAs, and connecting the 5Ks to the IP network (I'll refer to this as the data network going forward). Before we dive into the config, let's talk about design considerations.

Design Considerations
Most deployments will (should) have redundant 5Ks as part of the setup.  So let's take a look at a pretty common setup that you might see out in the wild from a logical perspective.

[Diagram: logical view of redundant 5Ks, with native FC storage links in green and data network links in orange]

So this is how I like to conceptualize the basic setup (yes, I know I'm missing pieces; this is the 'big picture').  Both the native FC storage and the data network terminate into the 5Ks.  At that point, the FC VSAN gets mapped to a VLAN.  The VLAN is what's passed to the 2K and on to the server.  So in my illustration, the green links are storage and the orange are data.  Pretty straightforward, right?  I'm hoping you caught the fact that the link between the 5Ks only shows the data network.  Have you figured out why?  Even though we map the VSAN to a VLAN, it's still for the most part 'its own thing'.  That being said, standard FC design concepts come into play, one of those being dual fabrics to each server.  If we allowed that VLAN across the trunk between the two 5Ks, we'd be breaking that rule.  You'll also need to take things like MPIO into account, since you'll have redundant active paths to the same storage.  The data network is still trunked between the 5Ks to facilitate a VPC configuration.  Now, on to the config…
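To make the dual-fabric point concrete, here's a hedged sketch of how the VSAN-to-VLAN mapping might differ between the two 5Ks. The switch names, VSAN numbers, and VLAN numbers are all examples I've made up for illustration, not values from the actual deployment.

Nexus5k-A(config)# vlan 100
Nexus5k-A(config-vlan)# fcoe vsan 10
(Note: Fabric A maps its own VSAN to its own VLAN)

Nexus5k-B(config)# vlan 200
Nexus5k-B(config-vlan)# fcoe vsan 20
(Note: Fabric B uses a different VSAN/VLAN pair, and neither storage VLAN is ever allowed on the trunk between the two 5Ks)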

Native Fibre Channel
At this point I'm assuming you have a native Fibre Channel card in your 5K and you've got some trunks going back to your MDS.  I'll also assume you have more than one pair of fiber and you'd like to bundle the two pairs you do have into some type of port-channel.  Let's get to it…

(Note: I’m only showing ‘one side’ of the config.  You’d obviously have to duplicate this on the other 5K as well.  Keep in mind that things like the Storage VLAN will be different between the two 5Ks)

Configure the ports to the MDS
Nexus5k(config)# interface san-port-channel <SAN Port Channel Number>
Nexus5k(config-if)# channel mode active
Nexus5k(config-if)# switchport trunk mode on
Nexus5k(config-if)# switchport mode <NP | E>
(Note: Pick either NP or E depending on your deployment.  NP is for NPV mode and E is for Fabric Switch mode (or whatever they are calling it now))
Nexus5k(config-if)# switchport trunk allowed vsan 1
Nexus5k(config-if)# switchport trunk allowed vsan add <Assigned VSAN>
Nexus5k(config-if)# exit
Nexus5k(config)# interface <Member 1>, <Member 2>, etc…
Nexus5k(config-if)# channel-group <San Port Channel Number> force
(Note: We are using force; make sure that your port-channel config is the same on both ends.  A lot of it depends on the exact setup and what features (LACP, etc.) the gear supports)
Nexus5k(config-if)# no shutdown

As an obvious side note here, make sure you configure the MDS side to match the settings on this end.
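As a hedged sketch of what "matching" means here (the port-channel number is a placeholder and the exact syntax can vary by MDS software version), the MDS side might look something like this:

MDS(config)# feature npiv
(Note: Required on the MDS if the 5K is running in NPV mode)
MDS(config)# interface port-channel <SAN Port Channel Number>
MDS(config-if)# channel mode active
MDS(config-if)# switchport mode F
(Note: F toward an NPV-mode 5K; use E if the 5K is running as a full fabric switch)
MDS(config-if)# switchport trunk allowed vsan <Assigned VSAN>
MDS(config-if)# no shutdown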

Map a VSAN to a VLAN
Nexus5k(config)# vlan <VSAN VLAN Number>
Nexus5k(config-vlan)# fcoe vsan <Assigned VSAN>
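A quick way to sanity-check that the mapping took effect:

Nexus5k# show vlan fcoe
(Note: This should list the VLAN along with its associated VSAN)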

Building the CNA Connections
There are two components to building the connectivity to the CNAs: the data component and the FCoE component.  The data component is easy (for now; we'll add VPC later), so let's tackle that first, followed by the VFC interfaces for the FCoE piece.

Configure the Port and Port-channel going to the CNA Card 
Nexus5k(config)# interface Ethernet <Number>
Nexus5k(config-if)# switchport mode trunk
Nexus5k(config-if)# switchport trunk allowed vlan <Native VLAN>, <Data VLAN>, <VSAN VLAN>
Nexus5k(config-if)# spanning-tree port type edge trunk
Nexus5k(config-if)# speed 10000
Nexus5k(config-if)# channel-group <CNA Port Channel Number> mode active
Nexus5k(config-if)# exit
Nexus5k(config)# interface port-channel <CNA Port Channel Number>
Nexus5k(config-if)# spanning-tree port type edge trunk

Configure the VFC Interface
Nexus5k(config)# interface vfc <VFC Number>
Nexus5k(config-if)# bind interface port-channel<CNA Port Channel Number>
Nexus5k(config-if)# no shutdown

Create the VSAN and map it to the appropriate interfaces
(Note: In this step we are just telling the VSAN database what ports should be part of what VSAN.  This would include the VFC interface facing the server and the port-channel facing the SAN.)
Nexus5k(config)# vsan database
Nexus5k(config-vsan-db)# vsan <Assigned VSAN>
Nexus5k(config-vsan-db)# vsan <Assigned VSAN> interface vfc<VFC Number>
Nexus5k(config-vsan-db)# vsan <Assigned VSAN> interface san-port-channel <San Port Channel Number>
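Once the VFC is up and the server has logged in, a couple of verification commands worth knowing (use the NPV variant if your 5K is running in NPV mode):

Nexus5k# show vsan membership
(Note: Confirms the VFC and the SAN port-channel landed in the right VSAN)
Nexus5k# show flogi database
(Note: In fabric switch mode, the CNA's WWPN should appear here)
Nexus5k# show npv flogi-table
(Note: The NPV-mode equivalent of the FLOGI database)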

Configure the Data Network pieces
In this section we'll cover the trunk back to the data network (a 6500 or Nexus 7K in most cases), the trunk between the 5Ks, and the VPC config.

Configure the connection back to the data network
I'm just going to list this as a config item.  There isn't anything special about configuring a standard data trunk: you can use a single port on each 5K, build a port-channel, or handle it however you want.  You just need to get your data network VLANs onto the 5K.

Configure the trunk between the 5k’s
Nexus5k(config)# interface <Member 1>, <Member 2>, etc…
Nexus5k(config-if)# switchport mode trunk
Nexus5k(config-if)# switchport trunk allowed vlan <Native VLAN>, <Data VLAN>
(Note: We are only allowing the Data VLANs across the trunk, not the storage VLAN)
Nexus5k(config-if)# spanning-tree port type network
Nexus5k(config-if)# channel-group <5K Port Channel Number> mode active

Configure VPC
(Note: This is a very basic VPC config.  I don’t have a dedicated keep-alive link in this example so the management VRF is being used.  Probably not a good idea for production)
Nexus5k(config)# vpc domain <VPC Domain Number>
Nexus5k(config-vpc-domain)# role priority 5000
Nexus5k(config-vpc-domain)# system-priority 4000
Nexus5k(config-vpc-domain)# peer-keepalive destination <Management IP of other Nexus>
Nexus5k(config-vpc-domain)# exit
Nexus5k(config)# interface port-channel <CNA Port Channel Number>
Nexus5k(config-if)# vpc <VPC Number>
(Note: The VPC number is not the VPC domain number; it's common practice to match it to the port-channel number)
Nexus5k(config-if)# exit
Nexus5k(config)# interface port-channel<5K Port Channel Number>
Nexus5k(config-if)# vpc peer-link
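Once both 5Ks are configured, a quick check that the VPC actually came up:

Nexus5k# show vpc brief
(Note: The peer status should show the adjacency formed, the peer-link should be up, and the CNA port-channel should be listed as a VPC)
Nexus5k# show vpc consistency-parameters global
(Note: Handy for chasing down consistency mismatches that keep a VPC from forming)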

I hate to say it, but there really isn't much to it when you look at it as a whole.  The tricky part is troubleshooting.  There are so many different pieces of new technology (VPC, FEX, FCoE, etc.) that it can be hard at times to see the issues.  At any rate, I'm anxious to hear whether others have had any luck with their setups or what they have running currently.


So by this point I'm going to assume that you know the role that Nexus 5K and 2K series switches play in the data center.  The core of it all is that the 5Ks are the brains of the operation, with the 2Ks acting as individual line cards in a virtual chassis.  The combination of the two gives us a distributed switching environment that supports FCoE and simplifies switch management.

So you’ve just unboxed the equipment and you want to get it all configured.  Let’s start from scratch….

Configure the Management0 interface
The Nexus 5Ks come with a single out-of-band management interface that can be used to remotely administer the system.  I say out-of-band because the interface is in its very own VRF, segregating it from the default VRF.  If you aren't comfortable with the concept of VRFs, I would look into them; they play a large role in the Nexus platform.

-Insert your relevant information between <>
-Console prompts are shown as Nexus5k#

Nexus5k# config t
Nexus5k(config)# interface mgmt0
Nexus5k(config-if)# ip address <Your IP Address>/<Prefix Length>

Now that the IP is configured, we should have connectivity to the Nexus from its local subnet.  But we won't be able to reach it from other subnets until we give it a default route.

Nexus5k# config t
Nexus5k(config)# vrf context management
Nexus5k(config-vrf)# ip route 0.0.0.0/0 <Default Gateway>
Nexus5k(config-vrf)# exit

Notice that the default route was defined under the VRF for the management interface.  So when I want to test connectivity from the Nexus to other subnets using ping, I need to specify the VRF.  If I don’t, it will use the default VRF and my pings won’t go anywhere since there isn’t a route.

Nexus5k# ping <Your Workstation IP> vrf management

Enable the required features
As you probably know, none of the features on a Nexus are turned on until you tell the switch to enable them.  This is done using the feature command.  For my testing, I enabled the following features…

Nexus5k# config t
Nexus5k(config)# feature fcoe
(Note: This enables FCOE on the switch)
Nexus5k(config)# feature telnet
(Note: This enables remote administration of the switch through telnet)
Nexus5k(config)# feature udld
(Note: This enables unidirectional link detection)
Nexus5k(config)# feature interface-vlan
(Note: This enables VLAN interface configuration.  The Nexus is a layer 2 switch, but it's handy when testing CNA connectivity to be able to ping from the Nexus using the VLAN interfaces)
Nexus5k(config)# feature lacp
(Note: This enables 802.3ad on the switch for port-channel negotiation)
Nexus5k(config)# feature vpc
(Note: This enables the Virtual Port Channel configuration on the switch)
Nexus5k(config)# feature fex
(Note: This enables the fabric extender protocol which allows you to connect 2Ks to the 5K)
Nexus5k(config)# feature npv
(Note: This enables N Port Virtualization on the switch, making it an N Port proxy.  Be aware that changing the NPV mode reloads the switch and wipes the configuration, so do this before you build the rest of your config)
Nexus5k(config)# exit
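A quick way to confirm everything took:

Nexus5k# show feature
(Note: Lists each feature along with whether it's enabled or disabled)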

Configure the FEX (2k) modules
As discussed, the Nexus 5Ks use the FEX service to connect and manage the 2K modules as fabric extenders.  The configuration is pretty straightforward.

Nexus5k# config t
Nexus5k(config)# fex 100
(Note: This creates a FEX association group 100)
Nexus5k(config-fex)# pinning max-links 1
(Note: This sets the pinning to 1 since we are going to be configuring a port-channel for the connection between the 2K and the 5K)
Nexus5k(config-fex)# exit
Nexus5k(config)# interface port-channel 100
(Note: Create the port-channel interface)
Nexus5k(config-if)# switchport mode fex-fabric
(Note: You need to specify that the connection is a fex-fabric connection rather than a standard switchport)
Nexus5k(config-if)# fex associate 100 
(Note: Assign the FEX group 100 to the port-channel)
Nexus5k(config-if)# no shut
Nexus5k(config-if)# exit
Nexus5k(config)# interface <the first physical interface going to the FEX>, <the second>
(Note: In this scenario, I’m using two 10 gig links to connect the FEX to the 5k)
Nexus5k(config-if)# switchport mode fex-fabric
(Note: You need to specify that the connection is a fex-fabric connection rather than a standard switchport)
Nexus5k(config-if)# fex associate 100 
(Note: Assign the FEX group 100 to the physical interfaces)
Nexus5k(config-if)# channel-group 100 mode on
(Note: Bind the interfaces to the port-channel)
Nexus5k(config-if)# exit
Nexus5k(config)# exit

Once this is completed, the link between the 5K and the 2K should come up.  The 2K will then sync with the 5K, get software upgrades if need be, and reboot.  After the reboot, the FEX ports will show up in the running config and in the interface commands on the 5K.  You can check the FEX status at any time by using the ‘show fex’ command on the 5K.
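As a rough illustration of what you're looking for (the model and serial are placeholders, and the exact column layout varies by NX-OS version), healthy output looks something like:

Nexus5k# show fex
FEX      FEX            FEX       FEX
Number   Description    State     Model        Serial
------------------------------------------------------
100      FEX0100        Online    <2K model>   <serial>

(Note: States like 'Discovered' or 'Image Download' are normal while the 2K is still syncing; 'Online' means it's ready)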

Wrap up
So at this point the 5Ks and the 2Ks should be online.  Depending on your topology, there will need to be some extra port, trunk, and port-channel configuration.  I won't walk through that in this post.  Up next we'll talk about connecting the 5K to the IP network, the SAN fabric, and the host CNA adapters.


So now that we know a little bit about Fibre Channel SANs and the goal of the Nexus 5K/2K, we can talk about some more advanced topics.  One topic that you'll come across when working with the 5Ks is NPV and NPIV.  NPV stands for N Port Virtualization; NPIV stands for N Port ID Virtualization.  Both of these technologies are similar in what they accomplish, but they are applied to different pieces of the SAN fabric.

In a standard SAN configuration, you create zones that specify which devices are allowed to talk to each other.  Zones (in most cases) use WWNs to give different devices different access on the SAN fabric.  If we take the example of a file server, it needs to see the disk array for its local disk.  The disk array's WWNs and the file server's HBAs would be members of a zone allowing them to talk to each other.  Now, what happens if that file server is actually a VMware ESX box hosting multiple servers?  In the example just given, that server really only has one way to identify itself on the SAN, which is the HBA's WWN.  But what if we want to map different LUNs to different VMs on the ESX server?  Enter NPIV…

NPIV allows the ESX server to log into the fabric multiple times.  This lets the server register multiple FCIDs, which allows the administrator to give VMs their own WWNs and FCIDs for zoning.

[Diagram: a VM host with a single HBA logging into an NPIV-enabled switch twice, registering two FCIDs]

So if we look at the diagram above, you can see that we have a VM host with a single HBA in it.  The HBA talks to the NPIV-enabled switch and logs into the fabric twice (an initial FLOGI followed by an FDISC for the additional login) to register two FCIDs, so that Host 1 can be in a zone with LUN 1 and Host 2 can be in a separate zone with LUN 2.

In a traditional SAN fabric, each switch gets assigned a Domain ID.  The Domain ID is an 8-bit field in the FCID.  Using our basic network math we can discern that there are 256 possible Domain IDs.  In reality, some of these IDs are reserved and can't be assigned to switches, leaving us with 239 usable switches.  While that may seem like a lot to some of you, in large environments that hard limit can become a serious issue as you try to scale the fabric.  The solution to this problem is NPV.  NPV allows SAN switches to essentially become N Port proxies.

[Diagram: a Nexus 5K in NPV mode sitting between an NPIV-enabled MDS (F ports) and host HBAs, proxying logins through its NP ports]
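For reference, the Domain ID math comes from the structure of the 24-bit FCID itself (this is standard Fibre Channel addressing, not anything Nexus-specific):

Domain ID (8 bits) | Area ID (8 bits) | Port ID (8 bits)

Since only the Domain ID identifies the switch, that single 8-bit field is what caps the number of switches in a fabric once the reserved values are excluded.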

When talking about an NPV switch, I think it's easiest to think of that switch as an HBA.  Take a look at the diagram above.  In this case the Nexus 5K is configured as an NPV switch.  The ports on the MDS appear as F type ports, confirming that the MDS logically sees the 5K as an N Port device.  In turn, the ports on the 5K are NP, or N Port Proxy, ports.  The 5K then uses NPIV and proxies all of the connections for its connected hosts.  The HBAs at the bottom of the diagram see the ports on the 5K as F type ports, just like an N Port device would on a normal fabric switch.  You'll note that I mentioned that NPV actually uses NPIV to some degree, so one of the first steps in configuring NPV is to ensure that both switches support NPIV and have it enabled.

The benefits of NPV are many.  Take, for example, a mixed-vendor fabric.  In the past, you had to worry about interop modes, whether or not the two switches would talk correctly, and whether vendor-dependent features would still work.  Since you aren't actually managing the NPV-enabled switch and it's just proxying connections, you can easily add switches to the fabric without worrying about interoperation.  Additionally, NPV helps solve the Domain ID limitation: NPV-enabled switches don't get assigned a Domain ID since they aren't technically manageable as switches within the fabric.  Finally, NPV allows for reduced switch management/administration.  The NPV-enabled switches don't receive any additional SAN configuration once they are up and running in NPV mode.
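Once NPV is up and running (we'll get to the actual configuration next), a couple of commands worth knowing for checking it from the 5K side:

Nexus5k# show npv status
(Note: Shows the state of the NP uplinks toward the MDS)
Nexus5k# show npv flogi-table
(Note: Lists the downstream host logins the 5K is proxying, along with the NP uplink each one is pinned to)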

Coming up next we’ll discuss the actual Nexus configuration.  Stay tuned….

