In our last post, we did a very simple setup in which we defined a pool, our three web servers, and a virtual server. In the end, we were able to see simple load balancing in action. As I had said before, some of the configuration was done for us behind the scenes, and some items weren’t configured at all. Let’s take a look at what we missed…
Nodes and Monitors
A node is the actual physical server, so in this case a node would be WebHost1, 2, or 3. Members are a specific service on a node. As we saw when we added the pool resources, we specified an IP address (the node) and a port; node + port = member (in most cases). That being said, we never actually defined any nodes. Let’s take a look at what the LTM did for us. Click on ‘Nodes’ under the ‘Local Traffic’ menu on the left. You should see something like…
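As a side note, if you want to see the same objects outside the GUI, something like the sketch below against the iControl REST API should do it. This is just a rough illustration, not part of the original walkthrough; the management address and credentials are placeholders for my lab values.

```python
import requests

BIGIP_MGMT = "https://192.168.1.245"   # hypothetical management address
AUTH = ("admin", "admin")              # replace with real credentials

# Node objects (IP level) live under /mgmt/tm/ltm/node; pool members
# (node + port) live under /mgmt/tm/ltm/pool/<pool>/members.
resp = requests.get(f"{BIGIP_MGMT}/mgmt/tm/ltm/node", auth=AUTH, verify=False)
resp.raise_for_status()

for node in resp.json().get("items", []):
    print(node["name"], node["address"])
```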
So the LTM created nodes for us. These could be left as is, but I’m going to open each one and change the name so it’s more descriptive. You’ll notice, while you are in the node settings changing the name, that there is an option for a health monitor. The default should be set to ‘Node Default’ as shown below…
Back on the main ‘Node’ screen you’ll see an option at the top for ‘Default Monitor’. You should have also noticed that the nodes show their availability as ‘unknown’. This is because we are telling each node to check its availability using the ‘Node Default’ monitor, but we haven’t defined that yet. My hosts are available via ICMP, so I’ll configure that as my node default monitor.
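Before leaning on ICMP as the default monitor, it’s worth confirming that the hosts really do answer pings. Here’s a minimal sketch of that check from a workstation; the host addresses are placeholders for my lab.

```python
import subprocess

# Hypothetical addresses for the three web hosts
WEB_HOSTS = {
    "WebHost1": "192.168.1.101",
    "WebHost2": "192.168.1.102",
    "WebHost3": "192.168.1.103",
}

for name, ip in WEB_HOSTS.items():
    # One echo request with a two-second timeout (Linux ping syntax)
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", ip],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    status = "reachable" if result.returncode == 0 else "NOT reachable"
    print(f"{name} ({ip}): {status}")
```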
Click ‘update’ and then click ‘Node List’ to go back to the main ‘Node’ window. It should now look like this…
Note the updated names, and that all of the nodes show green, indicating they are available. So now we have health checking enabled on both the member and the node itself. If a node goes offline, all of its members will fail the health check and be removed from the pool. We’ll wrap up this section with a quick test of that: I’m going to reboot WebHost1, and we’ll verify that traffic headed for the VIP doesn’t get balanced to the down server.
The instant the server stops responding to pings, I can see that the pool member shows as down.
Additionally, we can see that the node itself is marked as down…
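For anyone who prefers watching this from the command line, here’s a rough sketch that pulls the same member and node status over the iControl REST API. The management address, credentials, and pool name are placeholders, and the ‘state’ field is how availability is exposed in the REST output as far as I recall.

```python
import requests

BIGIP_MGMT = "https://192.168.1.245"   # hypothetical management address
AUTH = ("admin", "admin")              # replace with real credentials
POOL = "web_pool"                      # hypothetical name for the pool from part one

# Pool member availability (node + port)
members = requests.get(
    f"{BIGIP_MGMT}/mgmt/tm/ltm/pool/~Common~{POOL}/members",
    auth=AUTH, verify=False,
).json().get("items", [])
for member in members:
    print("member:", member["name"], member.get("state"))

# Node-level availability
nodes = requests.get(
    f"{BIGIP_MGMT}/mgmt/tm/ltm/node", auth=AUTH, verify=False
).json().get("items", [])
for node in nodes:
    print("node:", node["name"], node.get("state"))
```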
A quick check of the VIP shows a pattern of…
This shows, of course, that WebHost1 is no longer in the pool. The instant it comes back online, both monitors will show green and it will be back in the pool.
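If you’d rather script that quick check than keep refreshing a browser, here’s a rough Python sketch that polls the VIP and tallies which server answers. It assumes, as in the earlier setup, that each server’s test page includes its own hostname; the VIP address is a placeholder.

```python
from collections import Counter
import requests

VIP = "http://192.168.1.200/"          # hypothetical virtual server address
HOSTS = ["WebHost1", "WebHost2", "WebHost3"]

hits = Counter()
for _ in range(30):
    body = requests.get(VIP, timeout=2).text
    for host in HOSTS:
        if host in body:
            hits[host] += 1
            break

# With WebHost1 rebooting, its count should stay at zero.
for host in HOSTS:
    print(f"{host}: {hits[host]} responses")
```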
In the next few posts I’m going to jump into some of the more interesting pieces of the LTM config, including load balancing methods and persistence, and hopefully I can get to iRules soon.