Working with VMware NSX – The setup

I’ve spent some time over the last few weeks playing around with VMware’s NSX product.  In this post, I’d like to talk about getting the base NSX configuration done, which we’ll build on in later posts.  However, when I say ‘base’, I don’t mean from scratch.  I’m going to start with a VMware environment that already has the NSX manager and NSX controllers deployed.  Since there isn’t a lot of ‘networking’ in getting the manager and controllers deployed, I’m not going to cover that piece.  But if you do want to start from total scratch with NSX, see these great walk-throughs from Chris Wahl and Anthony Burke…

Chris Wahl
http://wahlnetwork.com/2014/04/28/working-nsx-deploying-nsx-manager/
http://wahlnetwork.com/2014/05/06/working-nsx-assigning-user-permissions/
http://wahlnetwork.com/2014/06/02/working-nsx-deploying-nsx-controllers-via-gui-api/
http://wahlnetwork.com/2014/06/12/working-nsx-preparing-cluster-hosts/

Anthony Burke
http://networkinferno.net/installing-vmware-nsx-part-1
http://networkinferno.net/installing-vmware-nsx-part-2
http://networkinferno.net/installing-vmware-nsx-part-3

Both of those guys are certainly worth keeping an eye on for future NSX posts (they have other posts around NSX but I only included the ones above to get you to where I’m going to pick up).

So let’s talk about where I’m going to start from.  My starting topology looks like this…

image

Note: For reference, I’m going to try to use green to symbolize NSX components, blue for tenant-1 components, and orange for tenant-2 components.

So here’s what I did to get this far…

Hosts – I have three hosts: Thumper, Thumper2, and Thumper3 (you have to have interesting naming schemes, right?).  Each host is running ESXi 5.5 and is in its own cluster.  Thumper is in cluster_mgmt and will be used for the VMs required for management of vSphere and NSX.  Thumper2 and Thumper3 will be used for the user space guests and are in their own clusters as well (A and B).

Subnets – See my IP spreadsheet below.  I took a /24 and started carving it up into different subnets.  For the sake of simplicity, I decided to use the same VLAN for all of the management-level interfaces (NSX manager, controllers, management vmKernel interfaces, etc.).  The big hitter here is having the NSX VTEP interfaces on different subnets, which I do…

image
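If you want to sanity check a carve-up like this before committing to it, Python’s ipaddress module makes quick work of the math.  This is just a sketch; the 10.20.30.0/24 parent block, the /27 sizing, and the role names are placeholders I made up for illustration, not my actual addressing plan…

```python
# Quick sanity check of a /24 carve-up using Python's ipaddress module.
# The parent /24, the /27 sizing, and the role labels are placeholders,
# not the actual addressing from my lab spreadsheet.
import ipaddress

parent = ipaddress.ip_network("10.20.30.0/24")

# Split the /24 into /27s and hand them out to the roles that need them.
subnets = list(parent.subnets(new_prefix=27))

plan = {
    "management (manager, controllers, mgmt vmk)": subnets[0],
    "VTEP - VLAN 118 (nsx_sr1 hosts)": subnets[1],
    "VTEP - VLAN 119 (nsx_sr2 hosts)": subnets[2],
    "spare / future use": subnets[3],
}

for role, net in plan.items():
    hosts = list(net.hosts())
    print(f"{role}: {net} ({len(hosts)} usable, {hosts[0]} - {hosts[-1]})")
```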

Virtual Networking – I have two distributed switches (DVS) in the lab.  The first is called ‘nsx_sr1’ and has the uplinks for both Thumper and Thumper2 on it.  In addition, it has all of the distributed port-groups we’ll need for the guests.  The second DVS is called ‘nsx_sr2’ and just has Thumper3 on it.  I’m trying to replicate a multiple-server-room situation here with only three hosts, so this is the best I could do.

image image

Physical Networking – It’s a gig switch, with all of the servers’ interfaces configured as dot1q trunks allowing VLANs 115-121.

NSX Components – I won’t dive into details here since this piece is well covered in other posts (see Chris and Anthony’s posts referenced above), but I will offer some tips.  First, make sure that DNS and NTP are IDENTICAL on each device within NSX.  This theme continues as you deploy any additional components.  Second, despite the installer letting you deploy the manager and controllers against a standard vSwitch, don’t do it.  I’m not sure if this was my exact issue, but I’ve been told that all of this stuff needs to be on a DVS.  I had repeated issues with the controller nodes being deployed to the standard vSwitch even when I told them to use the DVS.  I had to delete the standard vSwitch entirely before deploying the controllers to get them onto the DVS.
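To make the DNS/NTP tip concrete, here’s a trivial sketch of the kind of consistency check I mean.  The component names and addresses are made up; in real life you’d pull these values from the NSX manager, each controller, and each host rather than typing them into a dict…

```python
# Toy consistency check for DNS/NTP settings across NSX components.
# The values below are hand-entered placeholders; in practice you'd gather
# them from the NSX manager, the controller CLIs, and each ESXi host.
settings = {
    "nsx-manager":  {"dns": ["10.20.30.5"], "ntp": ["10.20.30.6"]},
    "controller-1": {"dns": ["10.20.30.5"], "ntp": ["10.20.30.6"]},
    "controller-2": {"dns": ["10.20.30.5"], "ntp": ["10.20.30.6"]},
    "controller-3": {"dns": ["10.20.30.5"], "ntp": ["10.20.30.7"]},  # drift!
    "thumper":      {"dns": ["10.20.30.5"], "ntp": ["10.20.30.6"]},
}

# Treat the manager's settings as the reference and flag anything that differs.
reference = settings["nsx-manager"]
for name, cfg in settings.items():
    for key in ("dns", "ntp"):
        if cfg[key] != reference[key]:
            print(f"MISMATCH: {name} {key.upper()} {cfg[key]} != {reference[key]}")
```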

Prepped the hosts and clusters – Again, I had more issues here, but once I switched to the DVS and re-verified the DNS and NTP settings, things worked for the most part.  This piece was mostly just clicking and waiting.

Guests – I created a web and app guest VM for each tenant to start with.  They’ll just be sitting there for now until we get further along on the configuration.

So now that we have our base, let’s talk a little bit about the rest of the configuration required for NSX to work.  Now that the manager and the controllers are in place, we need to deploy the VXLAN configuration.  Since I already completed the host prep, the ‘Installation Status’ and ‘Firewall’ columns both show green checks.  Once you see those, you can configure VXLAN.  This is done by clicking the ‘Configure’ link in the VXLAN column of each cluster you wish to prepare…

image

When you click configure, you’ll be prompted to enter some information about how you want VXLAN configured…

image

I didn’t have a pool created for this VLAN yet, so I had to create one through the dropdown menu as well…

image

If you have multiple DVS in a particular cluster, you’ll be able to pick which one you want to use.  Next, you’ll need to pick the VLAN you want to use.  Keep in mind, what we’re really doing here is creating the VTEP interface that the host will use to send and receive VXLAN encapsulated traffic.  Much like any other vmKernel interface, it needs to get off the host to the routed network.  In this case, I’ve configured VLAN 118 and its associated interface on the upstream MLS.  Frames from the VTEP will be dot1q tagged out of the host and out to the physical network.  The MTU defaults to 1600 to accommodate the encapsulation overhead.
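The 1600-byte default makes more sense once you add up the VXLAN encapsulation overhead.  A rough back-of-the-napkin calculation, assuming IPv4 outer headers and no dot1q tag on the inner frame…

```python
# Back-of-the-napkin VXLAN overhead math, to show why the VTEP MTU defaults to 1600.
# Assumes IPv4 outer headers and an untagged inner frame.
INNER_PAYLOAD  = 1500   # standard guest MTU
INNER_ETHERNET = 14     # the guest's own Ethernet header rides inside the tunnel
VXLAN_HEADER   = 8
OUTER_UDP      = 8
OUTER_IPV4     = 20

required_underlay_mtu = INNER_PAYLOAD + INNER_ETHERNET + VXLAN_HEADER + OUTER_UDP + OUTER_IPV4
print(f"Minimum underlay MTU for a 1500-byte guest frame: {required_underlay_mtu}")  # 1550
print(f"Headroom with the 1600-byte default: {1600 - required_underlay_mtu} bytes")  # 50
```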

This process was done on each cluster (A, B, MGMT) in my vSphere DC.  I had initially thought I could use a separate network for each cluster.  However, when I attempted to do that, I received this message…

image

Looks like there’s a limitation that you can only have one VTEP VLAN per DVS.  We’ll talk about this more later when we cover how NSX handles broadcast, unknown unicast, and multicast (BUM) frames, since I believe this limitation is related.  Since the management cluster and cluster A are on the same DVS, I used the same VLAN, and hence the same IP pool, for their VTEP interfaces.  Cluster B got its own IP pool out of VLAN 119…

image

image
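To spell that limitation out, here’s a toy check of my cluster-to-DVS-to-VTEP mapping.  The VLANs match what I used above; the cluster labels and pool names are just approximations of my lab objects for illustration…

```python
# Toy check of the 'one VTEP VLAN per DVS' limitation described above.
# Cluster and pool names are approximate labels for my lab objects;
# the DVS names and VLANs mirror what I actually used.
from collections import defaultdict

vtep_config = [
    {"cluster": "cluster_mgmt", "dvs": "nsx_sr1", "vtep_vlan": 118, "ip_pool": "vtep-pool-118"},
    {"cluster": "cluster_a",    "dvs": "nsx_sr1", "vtep_vlan": 118, "ip_pool": "vtep-pool-118"},
    {"cluster": "cluster_b",    "dvs": "nsx_sr2", "vtep_vlan": 119, "ip_pool": "vtep-pool-119"},
]

vlans_per_dvs = defaultdict(set)
for entry in vtep_config:
    vlans_per_dvs[entry["dvs"]].add(entry["vtep_vlan"])

for dvs, vlans in vlans_per_dvs.items():
    status = "OK" if len(vlans) == 1 else "NOT SUPPORTED (one VTEP VLAN per DVS)"
    print(f"{dvs}: VTEP VLAN(s) {sorted(vlans)} -> {status}")
```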

Once all of the clusters are prepped for VXLAN, we should see green checks across the board on the Host Preparation tab…

image
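As an aside, you can also pull the prep status from the NSX manager’s REST API instead of eyeballing the Web Client.  Treat this as a rough sketch: the hostname, credentials, and cluster MoRef below are placeholders, and the /api/2.0/nwfabric/status path is from memory, so verify it against the API guide for your NSX version…

```python
# Rough sketch of checking cluster prep status via the NSX manager REST API.
# The endpoint path is from memory and may differ by NSX version -- check the
# API guide before relying on it.  Hostname, credentials, and the cluster
# MoRef ID are placeholders.
import requests

NSX_MANAGER = "https://nsx-manager.lab.local"   # placeholder
AUTH = ("admin", "password")                    # placeholder credentials
CLUSTER_MOID = "domain-c7"                      # placeholder cluster MoRef

resp = requests.get(
    f"{NSX_MANAGER}/api/2.0/nwfabric/status",
    params={"resource": CLUSTER_MOID},
    auth=AUTH,
    verify=False,   # lab-only: self-signed cert on the manager
)
resp.raise_for_status()
print(resp.text)    # XML listing the host prep / VXLAN / firewall feature status
```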

To get a better idea of what we just did, let’s take a look at each individual host.  As you can see, each host has a new vmKernel interface…

image

image

image

Note that the vmKernel interface we just created shows a TCP/IP stack of ‘vxlan’, which indicates it’s the VTEP.

So now we have the hosts truly ready to start sending VXLAN encapsulated traffic.  The next couple of steps wrap up the prep work required before we start provisioning actual logical networks on NSX.  Head back to the NSX installation menu, click on the ‘Logical Network Preparation’ tab, and select the ‘Segment ID’ sub-tab.  Click the Edit button to add a segment ID range for use with NSX…

image

Segment IDs are to VXLAN what dot1q numbers are to VLANs.  If you’re quick, you’ll notice that the range of possible segment IDs is over 16 million.  In case you don’t know, the number of VLANs is limited by the 12 bits available in the dot1q header.  12 bits gives you…

1
2
4
8
16
32
64
128
256
512
1024
2048
——-
= 4095

The 24 bits that VXLAN allocates for the segment ID give you quite a bit more to play with, which turns out to be…

1
2
4
8
16
32
64
128
256
512
1024
2048
4096
8192
16384
32768
65536
131072
262144
524288
1048576
2097152
4194304
8388608
—————
= 16777215

So that’s where the segment ID range comes from.  I’m not planning on needing anywhere near that many, so I’m just picking 5000-6000 for my segment ID range.
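If you’d rather let Python do the binary math than stack up powers of two by hand, here’s a quick sanity check of the numbers above…

```python
# Sanity-checking the VLAN vs. VXLAN ID math above.
vlan_ids   = 2 ** 12 - 1   # 12-bit dot1q VLAN ID field -> 4095
vxlan_vnis = 2 ** 24 - 1   # 24-bit VXLAN segment ID (VNI) field -> 16,777,215

print(f"dot1q (12 bits): {vlan_ids:,} IDs")
print(f"VXLAN (24 bits): {vxlan_vnis:,} IDs")

# My lab only needs a tiny slice of that, hence the 5000-6000 segment ID pool.
print(f"Lab segment pool: {6000 - 5000 + 1} IDs (5000-6000)")
```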

The next task is to define transport zones.  To be real honest, I don’t totally see the use case for these.  I had some discussions about this over on the VMware message boards and after I thought I understood them, it turned out that they really don’t work the way I had thought (see discussion here –> http://bit.ly/1oD5s6s).  Either way, they appear to be another means to isolate objects in NSX.  So I’ll define one for each tenant just to keep things separate…

image

image

Note that I added the transport zone to both cluster A and cluster B.  These are the two clusters where I’ll have NSX logical networks provisioned to start with, so I’ll only select those two on each transport zone for now.  Also note that I selected unicast mode for the control plane.  I’m not interested in making the physical network support multicast just for NSX, and since my hosts are running 5.5, I can use unicast mode for now…

image

So at this point, I’m fully prepped.  Our lab topology now looks like this…

image

Not much has changed at this point, just a couple more networks being trunked down to the servers.  The next step will be to start creating the logical networks and attaching the hosts to them.  I’ll tackle that in the next post along with the distributed logical router configuration.

Stay tuned!
