While you’ve probably noticed from the last post that I switched to ESXi 4, the rest of the configuration we’ve done up to this point works the same on both 3.5 and 4, so I’m not going to repeat it. Needless to say, it’s much easier to just deploy the .OVA template in ESXi 4 rather than converting it and modifying the controller. At any rate…
Let’s talk about the network configuration we are going to use for the lab setup. I’ve decided to have a single LTM with 3 Apache web servers sitting behind it. So this is sort of a logical depiction of it…
So our clients will hit a VIP (Virtual IP) on the outside of the LTM, and the LTM will load balance the traffic to any of the three web servers. Pretty easy, right? The tricky part is the virtual network configuration for this. The VE LTM has three virtual NICs.
Adapter 1 is the management interface, adapter 2 is the inside interface, and adapter 3 is the outside. The default network on the ESXi box was called ‘VM Network’, so I just left that as is and set my normal network VLAN on it. The ‘LTM-Servers’ network is the inside LTM interface that can talk to the servers, and the ‘LTM-Presentation’ network is the outside interface on the same network where the VIPs will live. So my configuration looks like…
VLAN 10 – Management (Normal LAN VLAN where the clients live as well)
VLAN 20 – Internal (LTM-Servers)
VLAN 30 – External (LTM-Presentation)
So, we’ve done our base configuration and are able to access the LTM through the management IP. Now what? Let’s look at a less logical depiction of what my lab looks like…
So my physical switch is a Cisco 3750. On it, I’ve defined the three VLANs and assigned them each a VLAN interface.
VLAN 10 – 10.20.30.1 /24 (The Color Black)
VLAN 20 – 192.168.1.1 /24 (The Color Green)
VLAN 30 – 10.20.20.1 /24 (The Color Blue)
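For reference, the SVIs on the 3750 look roughly like this (a sketch mirroring the list above; your addressing will obviously vary):

```
interface Vlan10
 ip address 10.20.30.1 255.255.255.0
!
interface Vlan20
 ip address 192.168.1.1 255.255.255.0
!
interface Vlan30
 ip address 10.20.20.1 255.255.255.0
```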
The ESXi box has a single physical NIC which attaches to a trunk port on the 3750. The trunk configuration looks like this…
description Connection ESXi4
switchport trunk allowed vlan 10,20,30
switchport mode trunk
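One thing worth noting: on a 3750 you may also need to set the trunk encapsulation explicitly before the port will accept trunk mode, so the full stanza looks roughly like this (the interface number here is just an assumption for illustration):

```
interface GigabitEthernet1/0/1
 description Connection ESXi4
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 10,20,30
 switchport mode trunk
```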
So basically we are allowing the three VLANs to traverse the trunk to the ESXi host. Once that’s done, we need to define all of the VLANs on the ESXi host. These are created on the Configuration/Networking screen. Select the properties of the physical NIC, add a virtual machine network for each network, and set their VLAN tags appropriately. Once this is done, we can modify the LTM and the three web servers’ settings and select which network we want them on. An example of me changing one of the web servers is shown below…
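If you’d rather script this than click through the vSphere client, the same port groups can be created from the ESXi console; this is a sketch assuming the default vSwitch0:

```
esxcfg-vswitch -A "LTM-Servers" vSwitch0
esxcfg-vswitch -v 20 -p "LTM-Servers" vSwitch0
esxcfg-vswitch -A "LTM-Presentation" vSwitch0
esxcfg-vswitch -v 30 -p "LTM-Presentation" vSwitch0
```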
So at this point, we should have the VLANs defined on the ESXi host, the NICs of the LTM set, and the NICs of the web servers set. Let’s talk briefly about the network configuration of the web servers. I wanted to be able to SSH to each one of them directly, so I set their default gateway to the VLAN 20 SVI on the 3750 (192.168.1.1). Anyone see an issue with this? Hopefully you realized that by doing this, we’ve broken default load balancing. Here’s what will happen…
-A request will come from a client (10.20.30.50) heading towards a VIP (10.20.20.1) on the LTM
-The LTM will examine the request, do its load balancing magic, modify the destination of the packet to be the web server it selected (192.168.1.41), and send it on its way.
-The web server will get the packet, examine it, note that it was in fact destined for itself, and attempt to reply to the host that was listed as the source of the packet (10.20.30.50)
-The return route for that packet will send it to the web server’s default gateway, which happens to be the SVI interface (192.168.1.1) on the 3750. When it does this, it has effectively gone around the LTM and broken the connection.
See the problem? The issue is very easily resolved by using a SNAT on the LTM, which changes the source to an IP on the LTM, meaning all return traffic will always go back through the LTM. More on SNAT in a later post.
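The asymmetry above can be sketched in a few lines of Python. The addresses match this lab, and the routing logic is deliberately simplified to “same subnet → send direct, otherwise → send to the default gateway”:

```python
import ipaddress

SERVER_SUBNET = ipaddress.ip_network("192.168.1.0/24")
SERVER_GATEWAY = "192.168.1.1"  # VLAN 20 SVI on the 3750, NOT the LTM

def reply_next_hop(dest_ip: str) -> str:
    """Where the web server sends its reply: directly on VLAN 20 if the
    destination is local, otherwise to its default gateway (the SVI)."""
    if ipaddress.ip_address(dest_ip) in SERVER_SUBNET:
        return dest_ip
    return SERVER_GATEWAY

# Without SNAT, the reply targets the real client IP, so it goes to the
# SVI and routes around the LTM, breaking the connection:
print(reply_next_hop("10.20.30.50"))   # → 192.168.1.1 (around the LTM)

# With SNAT, the LTM rewrites the source to its internal self IP, so the
# reply lands back on the LTM and the connection works:
print(reply_next_hop("192.168.1.40"))  # → 192.168.1.40 (back to the LTM)
```

This is only a model of the routing decision, not anything you’d run in the lab, but it shows exactly why SNAT fixes the return path.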
So let’s break it down…
VLAN 10 – 10.20.30.1 /24
VLAN 20 – 192.168.1.1 /24
VLAN 30 – 10.20.20.1 /24
Web Server IPs
Web Server 1 – 192.168.1.41 /24
Web Server 2 – 192.168.1.42 /24
Web Server 3 – 192.168.1.43 /24
Default gateways on all three point to the VLAN 20 SVI
LTM IPs
Management – 10.20.30.20 /24
Internal – 192.168.1.40 /24
External – 10.20.20.40 /24
Configuring the IPs on the LTM is pretty easy; here are some screenshots from my setup…
Interfaces (I didn’t do any config here, just a screenshot)
A final glance at what our vSwitch looks like on the ESXi host…
So at this point, you should be able to…
-SSH to all of your web servers for management
-Connect to the ESXi instance for management
-Connect to the LTM for management on both the management VLAN and the External VLAN
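If you want a quick, repeatable way to sanity-check that list, here’s a small Python sketch that probes the management ports from a client machine. The addresses are the ones used in this post; adjust them for your environment:

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Lab addresses from this post; adjust for your environment.
checks = [
    ("10.20.30.20", 443),   # LTM management GUI
    ("192.168.1.41", 22),   # Web Server 1 SSH
    ("192.168.1.42", 22),   # Web Server 2 SSH
    ("192.168.1.43", 22),   # Web Server 3 SSH
]
for host, port in checks:
    print(host, port, "open" if port_open(host, port) else "closed")
```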
I think that’s where I’ll leave it for today. Tomorrow we’ll talk about setting up some actual load balancing and configuring the nodes and members.