I’m not going to cover the basics in this post so I’m going to assume you know about port-groups, vmnics (why they are called vmnics is something that bothers me to this day. vmnics = physical NICs = uplinks), and dot1q trunking.
Ok, I lied, let’s talk a little bit about the basics. The important thing to remember about VMware networking is that an ESXi box is an end host. What does that mean? It means that it’s not a router. While an ESXi box can have multiple vmnics (or even VMs on it that may perform some routing) the box itself can’t route. It’s just like any other PC or server out in the world. The ESXi box’s IP identity is based on its VMkernel IP address. An ESXi box can have multiple VMkernel ports, each with a different IP address. However, only one of the VMkernel ports can have a default gateway. The VMkernel port with the default gateway is generally the management VMkernel port. Take a look at my management VMkernel port below…
Note how the VMkernel default gateway is an available IP on that subnet. Now, take a look at my storage VMkernel port…
See how it’s the same gateway as the management interface? This is (in my network engineering opinion) an important fact to remember. The ESXi box itself doesn’t do layer 3 routing. So I can create a ton of VMkernel ports on different VLANs, with different IPs, and trunk them all together. But I can’t route to any of the other VLANs or VMkernel ports by going through the management interface. For instance, I needed a storage VMkernel port since my IP storage was on a different VLAN (15 in my case). The iSCSI host was 10.10.10.6/24. If I want my ESXi host to talk to 10.10.10.6, I need a VMkernel port with an IP in that subnet.
Bottom line – If you want the ESXi host to talk to something, it needs a VMkernel interface on that subnet. One of those VMkernel interfaces can have a default gateway.
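If you like seeing the rule as code, here’s a quick Python sketch of that reachability logic. The vmk names and IPs below are made up to mirror my lab (the storage subnet matches the 10.10.10.0/24 example above; the management addressing is illustrative):

```python
import ipaddress

# Hypothetical VMkernel ports (names and IPs are illustrative).
vmk_ports = {
    "vmk0": ipaddress.ip_interface("192.168.1.21/24"),  # management, owns the default gateway
    "vmk1": ipaddress.ip_interface("10.10.10.21/24"),   # storage VLAN 15
}

def reachable_via(dest, ports):
    """Return the vmk whose subnet contains dest, or None
    (meaning the traffic has to go to the single default gateway)."""
    dest = ipaddress.ip_address(dest)
    for name, iface in ports.items():
        if dest in iface.network:
            return name
    return None

print(reachable_via("10.10.10.6", vmk_ports))  # iSCSI target -> vmk1
print(reachable_via("8.8.8.8", vmk_ports))     # off-subnet -> None
```

Anything that falls through to None gets handed to the one default gateway, which is exactly why my storage traffic needs its own VMkernel port on that subnet.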
There are 4 different services that can be enabled on a VMkernel port. They are…
Management
vMotion
FT Logging
IP Storage (iSCSI port binding)
It’s generally recommended that you use a different VMkernel port for each of these functions. Ideally, you have enough physical NICs to do so. If not, you can always trunk them over the same physical links.
In my lab configuration, I have one storage VMkernel port (note that it doesn’t have the ‘IP Storage’ function enabled on it; that’s only needed for iSCSI multipathing, which I’ll cover in another post soon) and another VMkernel interface that has both the management and vMotion features enabled on it. Not best practice for production, but it works. The last feature, FT logging, will be covered in an upcoming post.
Ok, so that sort of beat the snot out of VMkernel ports. I’m not going to talk about the basics of VM port groups though since those are really just like ‘switchport access’ virtual ports. Let’s talk about some of the other configuration features on VM port groups and VMkernel ports on the standard vSwitch.
So here’s the configuration I have on one of the hosts…
So let’s pop into the properties of vSwitch0 to see what we are dealing with…
The first item in the list is the vSwitch itself. There aren’t a ton of ‘interesting’ pieces to the switch, but there are a couple of things to note. First off, even though it’s a virtual switch, it has a hard limit on the number of ports it can have. You can increase this all the way up to 4088, but any change requires a host reboot in order to take effect. The MTU is also a configuration item of the vSwitch itself. You’ll notice that the vSwitch has some duplicate configuration items in relation to VMkernel ports and port groups. You can set ‘global’ configuration items on the vSwitch that the VMkernel ports and port groups then inherit. These global settings can be overridden by simply configuring them on the port group or VMkernel interface.
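That inherit-and-override behavior works like a layered dictionary. Here’s a tiny Python sketch of the idea; the setting names and values are illustrative, not ESXi’s actual property names:

```python
from collections import ChainMap

# vSwitch-wide defaults, and a port group that overrides one of them
# (e.g. a dedicated 'Promiscuous' port group).
vswitch_defaults = {
    "promiscuous": "reject",
    "mac_changes": "accept",
    "forged_transmits": "accept",
}
portgroup_overrides = {"promiscuous": "accept"}

# Lookups hit the port group first, then fall back to the vSwitch.
effective = ChainMap(portgroup_overrides, vswitch_defaults)
print(effective["promiscuous"])   # accept  (overridden at the port group)
print(effective["mac_changes"])   # accept  (inherited from the vSwitch)
```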
The Network Adapters tab shows the physical NICs that are associated with that vSwitch. You can add and remove them from the vSwitch as well as change their speed and duplex in this view…
The Security Tab
In the security tab, we can configure promiscuous mode, MAC address changes, and forged transmits.
Promiscuous Mode
If you want to see all of the traffic traversing the vSwitch, you can configure a port to be fully promiscuous. I would recommend doing this at the port-group level rather than on the vSwitch itself. Basically, you’d create a new port-group called ‘Promiscuous’ (or whatever you want to call it) and set the VLAN tag to 4095 (All VLANs). Then edit the port group and change the promiscuous mode to accept, overriding the default from the vSwitch itself. Default is to reject.
MAC Address Changes
This one’s pretty straightforward. If the vSwitch sees a guest trying to change its MAC address, it will stop sending it frames. The only real situation I can think of where you’d want to allow MAC spoofing is if you were doing some type of clustering or had to change your MAC to comply with some type of licensing. Default is to accept.
Forged Transmits
Another straightforward configuration item. If the virtual host attempts to send frames with a source MAC other than that of its own vNIC, the frames get dropped. The default is to accept.
In the traffic shaping tab, we have a few settings we can tweak to shape a VM’s bandwidth. It’s important to note that on standard switches, traffic shaping applies only to outbound traffic. Distributed switches can shape both inbound and outbound.
Status
Not going to explain that setting….
Average Bandwidth
This is the amount of traffic you want to allow over the link, averaged over time.
Peak Bandwidth
The maximum amount of traffic you want to allow in a burst. This parameter can never be smaller than the average bandwidth.
Burst Size
Specifies how large the burst can be, in kilobytes. Unlike the other two settings, which are rates measured in kilobits per second, this one is a size.
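A quick way to reason about these three numbers: the burst size divided by the peak rate tells you roughly how long a burst can run at peak before shaping clamps you back to the average. A small Python sketch (the example numbers are just illustrative, not recommendations):

```python
def burst_duration_s(burst_size_kb, peak_kbps):
    # Burst size is in kilobytes; peak bandwidth is in kilobits per second,
    # so convert kilobytes to kilobits (x8) before dividing.
    return burst_size_kb * 8 / peak_kbps

# e.g. a 102400 KB burst allowance with a 100000 Kbps peak:
print(burst_duration_s(102400, 100000))  # 8.192 seconds at peak rate
```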
In the NIC teaming tab, we have quite a few options. We can determine the NIC load balancing method, the failover/failback behavior, and which NICs are active, standby, and unused.
There are 4 different load balancing options.
Route based on the originating virtual port ID – This is the simplest mechanism. In this method, a virtual machine’s outbound traffic is mapped to a particular physical NIC. The VMkernel chooses the NIC based off the ID of the virtual port that the VM is connected to. In this method, one VM can never use more bandwidth than a single physical adapter provides.
Route based on IP hash – This is the source/destination IP hashing mechanism we are used to seeing on port-channels. When you use this method, the switch ports on the MLS side need to be configured as a static EtherChannel (the standard vSwitch doesn’t speak LACP).
Route based on Source MAC hash – Same idea as the virtual port ID method, but the NIC is chosen by hashing the virtual NIC’s MAC address.
Explicit Failover – The use of active and standby adapters.
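To make the difference between the first two methods concrete, here’s a rough Python sketch of the selection logic. The modulo “hash” is a stand-in; ESXi’s actual hash functions aren’t reproduced here:

```python
import ipaddress

def uplink_by_port_id(virtual_port_id, uplinks):
    # Originating virtual port ID: a VM's port maps to exactly one NIC,
    # so a single VM is pinned to one physical adapter's worth of bandwidth.
    return uplinks[virtual_port_id % len(uplinks)]

def uplink_by_ip_hash(src_ip, dst_ip, uplinks):
    # IP hash: the src/dst pair picks the NIC, so one VM talking to
    # different destinations can spread flows across EtherChannel members.
    h = int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
    return uplinks[h % len(uplinks)]

uplinks = ["vmnic0", "vmnic1"]
print(uplink_by_port_id(7, uplinks))                       # vmnic1
print(uplink_by_ip_hash("10.0.0.5", "10.0.0.9", uplinks))  # vmnic0
print(uplink_by_ip_hash("10.0.0.5", "10.0.0.8", uplinks))  # vmnic1
```

Note how the same source VM lands on different uplinks with IP hash depending on the destination, which is exactly what port-ID mapping can’t do.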
Network Failover detection
This can be set to link status only or to beacon probing. Beacon probing sends out broadcast beacons that should be forwarded back to the other uplinks on that vSwitch. This is really only useful if there are three or more physical links: with three, the uplink that stops hearing the others can be singled out. With only two links, if one has issues, neither port receives the beacons from the other, so the host can’t tell which of the two actually failed.
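The three-link logic is easier to see in code. Here’s a toy Python model (not ESXi’s actual algorithm) where each uplink reports which peers’ beacons it heard:

```python
def diagnose(beacons_heard):
    """beacons_heard: dict of uplink -> set of peers whose beacons arrived.
    Returns the uplinks that heard nobody (the suspects)."""
    return sorted(u for u, heard in beacons_heard.items() if not heard)

# Three uplinks: vmnic2's path is dead, the other two still hear each other.
three = {"vmnic0": {"vmnic1"}, "vmnic1": {"vmnic0"}, "vmnic2": set()}
print(diagnose(three))  # ['vmnic2'] -> unambiguous, fail that uplink

# Two uplinks: one failure means NEITHER hears the other.
two = {"vmnic0": set(), "vmnic1": set()}
print(diagnose(two))    # ['vmnic0', 'vmnic1'] -> can't tell which one failed
```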
Notify Switches
This is sort of an interesting one. Notify switches is the mechanism by which the ESXi host tells the physical switch that a MAC address has moved. The host sends a broadcast sourced from the VM’s MAC (technically a RARP frame rather than a true gratuitous ARP) which causes the upstream switches to update their MAC tables. For instance, when a vMotion occurs, we need some mechanism to tell the switches that the MAC address has moved to a new interface. If you disable this setting, the ESXi hosts won’t send the notification and you have to rely on MAC aging on the switch or the normal MAC learning process to learn the host’s new location. I’d recommend you leave this one on.
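For the curious, the notification frame itself is trivially simple. Here’s a rough Python sketch of a RARP-style notify frame (the MAC is made up, and this is only meant to show that the source MAC is the part that updates the switches’ MAC tables, not a wire-exact reproduction of what ESXi sends):

```python
import struct

def notify_frame(vm_mac: bytes) -> bytes:
    """Build a RARP broadcast like the one sent after a vMotion.
    The key detail is the Ethernet SOURCE MAC: switches learn it
    on whatever port the frame arrives on."""
    # Ethernet header: broadcast dst, VM's MAC as src, EtherType 0x8035 (RARP)
    eth = b"\xff" * 6 + vm_mac + struct.pack("!H", 0x8035)
    # RARP payload: Ethernet/IPv4, 6-byte hw addr, 4-byte proto addr, op 3
    rarp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 3)
    rarp += vm_mac + b"\x00" * 4 + vm_mac + b"\x00" * 4  # sender/target = VM MAC
    return eth + rarp

frame = notify_frame(bytes.fromhex("005056abcdef"))
print(len(frame))  # 14-byte Ethernet header + 28-byte RARP payload = 42
```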
Failback
Pretty straightforward. If the primary NIC goes down, the secondary NIC will take over. If the primary comes back, this setting determines whether the host should fail back to the primary NIC.
Failover Order
In this box you can set the failover order as well as mark some NICs as unused. This setting comes into play in an upcoming post about iSCSI multipathing.
So that’s about it for the standard vSwitch. In the next post we’ll talk about the distributed switch as well as the migration process from standard to distributed. I’m hoping to also squeeze in the iSCSI multipathing post in the next day or two.