
So by this point I’m going to assume that you know the role that the Nexus 5k and 2k series switches play in the data center.  The short version: the 5ks are the brains of the operation, with the 2ks acting as individual line cards in a virtual chassis.  The combination of the two gives us a distributed switching environment that supports FCoE and simplifies switch management. 

So you’ve just unboxed the equipment and you want to get it all configured.  Let’s start from scratch….

Configure the Management0 interface
The Nexus 5ks come with a single out-of-band management interface that can be used to remotely administer the system.  I say out-of-band because the interface lives in its very own VRF, segregating it from the default VRF.  If you aren’t comfortable with the concept of VRFs, I would look into them; they play a large role in the Nexus platform. 

-Insert your relevant information between <>
-Console prompts are shown in green

Nexus5k# config t
Nexus5k(config)# interface mgmt0
Nexus5k(config-if)# ip address <Enter your IP>/<prefix length>

Now that the IP is configured, we should have layer 2 connectivity to the Nexus.  But, we won’t have layer 3 until we give it a default route. 

Nexus5k# config t
Nexus5k(config)# vrf context management
Nexus5k(config-vrf)# ip route 0.0.0.0/0 <Default Gateway>
Nexus5k(config-vrf)# exit

Notice that the default route was defined under the management interface’s VRF.  So when I want to test connectivity from the Nexus to other subnets using ping, I need to specify the VRF.  If I don’t, the ping will use the default VRF and won’t go anywhere, since there isn’t a route there.

Nexus5k# ping <Your Workstation IP> vrf management
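If you want to double-check where that route actually landed, a couple of quick show commands will confirm it (output trimmed; this is just a sketch of the verification):

Nexus5k# show vrf
(Note: Lists the VRFs on the switch; you should see both default and management)
Nexus5k# show ip route vrf management
(Note: The static default route should appear here, not under the default VRF)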

Enable the required features
As you probably know, none of the features on a Nexus are turned on until you tell the switch to enable them.  This is done using the feature command.  For my testing, I enabled the following features…

Nexus5k# config t
Nexus5k(config)# feature fcoe
(Note: This enables FCoE on the switch)
Nexus5k(config)# feature telnet
(Note: This enables remote administration of the switch through telnet)
Nexus5k(config)# feature udld
(Note: This enables unidirectional link detection)
Nexus5k(config)# feature interface-vlan
(Note: This enables VLAN interface configuration.  The Nexus is a layer 2 switch, but it’s handy when testing CNA connectivity to be able to ping from the Nexus using the VLAN interfaces)
Nexus5k(config)# feature lacp
(Note: This enables 802.3ad on the switch for port-channel negotiation)
Nexus5k(config)# feature vpc
(Note: This enables the Virtual Port Channel configuration on the switch)
Nexus5k(config)# feature fex
(Note: This enables the fabric extender (FEX) feature, which allows you to connect 2ks to the 5k)
Nexus5k(config)# feature npv
(Note: This enables N Port Virtualization on the switch, making it an N Port proxy.  Be aware that enabling NPV wipes the running configuration and reloads the switch, so do it early)
Nexus5k(config)# exit
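If you want to confirm what ended up enabled, the ‘show feature’ command lists each feature and its state (shown here just as a sketch of the verification step):

Nexus5k# show feature
(Note: Displays every available feature with an enabled/disabled state next to it)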

Configure the FEX (2k) modules
As discussed, the Nexus 5ks use the FEX feature to connect and manage the 2k modules as fabric extenders.  The configuration is pretty straightforward. 

Nexus5k# config t
Nexus5k(config)# fex 100
(Note: This creates a FEX association group 100)
Nexus5k(config-fex)# pinning max-links 1
(Note: This sets the pinning to 1 since we are going to be configuring a port-channel for the connection between the 2k and the 5k.)
Nexus5k(config-fex)# exit
Nexus5k(config)# interface port-channel 100
(Note: Create the port-channel interface)
Nexus5k(config-if)# switchport mode fex-fabric
(Note: You need to specify that the connection is a fex-fabric connection rather than a standard switchport)
Nexus5k(config-if)# fex associate 100 
(Note: Assign the FEX group 100 to the port-channel)
Nexus5k(config-if)# no shut
Nexus5k(config-if)# exit
Nexus5k(config)# interface <the first physical interface going to the FEX>, <the second>
(Note: In this scenario, I’m using two 10 gig links to connect the FEX to the 5k)
Nexus5k(config-if)# switchport mode fex-fabric
(Note: You need to specify that the connection is a fex-fabric connection rather than a standard switchport)
Nexus5k(config-if)# fex associate 100 
(Note: Assign the FEX group 100 to the physical interfaces)
Nexus5k(config-if)# channel-group 100 mode on
(Note: Bind the interfaces to the port-channel)
Nexus5k(config-if)# exit
Nexus5k(config)# exit

Once this is completed, the link between the 5k and the 2k should come up.  Once the link is established, the 2k will sync with the 5k, pull down software upgrades if need be, and then reboot.  When the reboot is done, the FEX ports will show up in the running config and in the interface commands on the 5k.  You can check the FEX status at any time by using the ‘show fex’ command on the 5k.
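The FEX host interfaces are named after the FEX number, so with FEX 100 the first host port shows up as Ethernet100/1/1.  As a quick sketch (the VLAN is just a placeholder), configuring a host-facing port on the 2k looks like any other switchport:

Nexus5k# config t
Nexus5k(config)# interface ethernet 100/1/1
(Note: Format is Ethernet<fex-id>/1/<port>)
Nexus5k(config-if)# switchport access vlan <VLAN>
Nexus5k(config-if)# spanning-tree port type edge
(Note: Host-facing ports should be edge ports)
Nexus5k(config-if)# no shut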

Wrap up
So at this point the 5ks and the 2ks should be online.  Depending on your topology there will need to be some extra port, trunk, and port-channel configuration.  I won’t walk through that in this post.  Up next we’ll talk about connecting the 5k to the IP network, the SAN fabric, and the host CNA adapters. 


So now that we know a little bit about Fibre Channel SANs and the goal of the Nexus 5k/2k, we can talk about some more advanced topics.  One topic that you’ll come across when working with the 5ks is NPV and NPIV.  NPV stands for N Port Virtualization.  NPIV stands for N Port ID Virtualization.  The two technologies are similar in what they accomplish, but are applied to different pieces of the SAN fabric. 

In a standard SAN configuration, you create zones that specify which devices are allowed to talk to which.  Zones (in most cases) use WWNs to grant different devices different access on the SAN fabric.  Take the example of a file server: it needs to see the disk array for its local disk, so the disk array’s WWNs and the file server’s HBA would be members of a zone allowing them to talk to each other.  Now, what happens if that file server is actually a VMware ESX box hosting multiple servers?  In the example just given, that server really only has one way to identify itself on the SAN, which is the HBA’s WWN.  But what if we want to map different LUNs to different VMs on the ESX server?  Enter NPIV…

NPIV allows the ESX server to log into the fabric multiple times.  This lets the server register multiple FCIDs, which allows the administrator to give VMs their own WWNs and FCIDs for zoning.
[Diagram: an NPIV-enabled host with a single HBA registering two FCIDs on the fabric]

So if we look at the diagram above, you can see that we have a VM host with a single HBA in it.  The HBA talks to the NPIV-enabled switch and runs the FLOGI process twice to register two FCIDs, so that Host 1 can be in a zone with LUN 1 and Host 2 can be in a separate zone with LUN 2. 

In a traditional SAN fabric, each switch gets assigned a Domain ID.  The Domain ID is an 8-bit field in the FCID, so basic network math gives us 256 possible Domain IDs.  In reality, some of those IDs are reserved and can’t be assigned to switches, leaving us with 239 usable Domain IDs, and since each switch needs its own, a hard limit of 239 switches per fabric.  While that may seem like a lot to some of you, in large environments that hard limit can become a serious issue as you try to scale the fabric.  The solution to this problem is NPV.  NPV allows SAN switches to essentially become N Port proxies. 
[Diagram: a Nexus 5k in NPV mode proxying host logins to an NPIV-enabled MDS]

When talking about an NPV switch, I think it’s easiest to think of that switch as an HBA.  Take a look at the diagram above.  In this case the Nexus 5k is configured as an NPV switch.  The ports on the MDS appear as F type ports, confirming that the MDS logically sees the 5k as an N port device.  In turn, the ports on the 5k are NP, or N Port Proxy, ports.  The 5k then uses NPIV and proxies all of the connections for its attached hosts.  The HBAs at the bottom of the diagram see the ports on the 5k as F type ports, just like an N port device would on a normal fabric switch.  You’ll note that I mentioned that NPV actually uses NPIV to some degree.  So one of the first steps in configuring NPV is to ensure that both switches support, and have enabled, NPIV.
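As a rough sketch of that enablement (switch names are just placeholders), the upstream fabric switch needs NPIV turned on and the 5k needs NPV:

MDS(config)# feature npiv
(Note: Lets the upstream switch accept multiple fabric logins on a single F port)
Nexus5k(config)# feature npv
(Note: Puts the 5k into NPV mode; expect a config wipe and a reload when you enable this)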

The benefits of NPV are many.  Take, for example, a mixed-vendor fabric.  In the past, you had to worry about interop modes, whether or not the two switches would talk correctly, and whether vendor-dependent features would still work.  Since you aren’t actually managing the NPV-enabled switch and it’s just proxying connections, you can easily add switches to the fabric without worrying about interoperability.  NPV also helps solve the Domain ID limitation: NPV-enabled switches don’t get assigned a Domain ID since they aren’t technically manageable as switches within the fabric.  Finally, NPV reduces switch management and administration, since NPV-enabled switches don’t receive any additional SAN configuration once they are up and running in NPV mode.

Coming up next we’ll discuss the actual Nexus configuration.  Stay tuned….


As I laid out in my last post, these are the basic steps for the SAN configuration of a Cisco MDS series switch.  They are….

1. Create the VSAN number you wish to use (1 is the default, not recommended to use the default for production SAN traffic)
2. Add interfaces to your VSAN (just like you do with a VLAN)
3. Do any interface configuration needed on the FC interfaces (just turn them on in most cases)
4. Verify the cabling and ensure that you have SAN connectivity
5. Create aliases for WWNs (makes life easier)
6. Create required zones
7. Add members to your zones (I recommend using PWWNs)
8. Create a zoneset (I think Brocade calls this a ‘config’)
9. Add your zones to your zoneset
10. Activate the zoneset on the fabric

So let’s walk through them one at a time and show the associated configuration.  I’ll use my same old color-coding conventions.
-Insert your relevant information between <>
-Console prompts are shown in green

-Create the VSAN number you wish to use (1 is the default, not recommended to use the default for production SAN traffic)
MDS(config)# vsan database
MDS(config-vsan-db)# vsan <new VSAN number>
MDS(config-vsan-db)# vsan <new VSAN number> name <VSAN name>
MDS(config-vsan-db)# exit

-Add interfaces to your VSAN (just like you do with a VLAN)
MDS(config)# vsan database
MDS(config-vsan-db)# vsan <VSAN number>
MDS(config-vsan-db)# vsan <VSAN number> interface <FC Interface>
MDS(config-vsan-db)# exit

-Do any interface configuration needed on the FC interfaces
MDS(config)# interface <FC Interface>
MDS(config-if)# no shutdown
MDS(config-if)# switchport mode <Either Auto, or a set type (E,F, etc.)>
MDS(config-if)# exit

-Verify the cabling and ensure that you have SAN connectivity
You’re on your own here.  As with anything fiber, if you don’t have link, make sure you have the pair flipped correctly from SFP to SFP.
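A couple of show commands help here (shown as a sketch; exact output varies by platform and code version):

MDS# show interface brief
(Note: Confirms the FC interfaces are up and shows their mode and VSAN assignment)
MDS# show flogi database
(Note: Lists the devices that have logged into the fabric, along with their PWWNs and FCIDs; if your HBAs show up here, the cabling is good)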

-Create Aliases for WWNs (makes life easier)
MDS(config)# fcalias name <name of the Alias> vsan <VSAN number>
MDS(config-fcalias)# member pwwn <WWN>
MDS(config-fcalias)# exit

-Create required zones and add members
MDS(config)# zone name <name of zone> vsan <VSAN number>
MDS(config-zone)# member fcalias <alias name 1>
MDS(config-zone)# member fcalias <alias name 2>
MDS(config-zone)# exit
(note: if you didn’t want to make aliases, you could use the keyword ‘pwwn’ rather than ‘fcalias’ and directly input the WWN)

-Create a zoneset (I think Brocade calls this a ‘config’) and add the zones to the zoneset.
MDS(config)# zoneset name <name of zoneset> vsan <VSAN number>
MDS(config-zoneset)# member <zone 1>
MDS(config-zoneset)# member <zone 2>
MDS(config-zoneset)# member <zone 3>
MDS(config-zoneset)# exit

-Activate the zoneset on the fabric
MDS(config)# zoneset activate name <zoneset name> vsan <VSAN number>
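To verify the activation took, ‘show zoneset active’ displays the zoneset currently enforced on the fabric (a sketch of the verification step):

MDS# show zoneset active vsan <VSAN number>
(Note: Zone members that are logged into the fabric typically show up marked with an asterisk along with their FCIDs)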

And that’s it.  It goes without saying, but save your config when you’re done.
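For completeness, saving the config is just:

MDS# copy running-config startup-config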

