Linux


In my last post, I talked about some of the more physical aspects of my virtual home lab.  We talked about the need for nested virtualization as well as what the physical and virtual network would look like.  In this post, we’re going to look at building the VMs as well as getting the operating systems ready for the OpenStack install.  As a quick reminder, let’s take a look at what the logical lab looks like…

image
The lab will consist of 3 VMs (to start with): a controller and two compute nodes.  While OpenStack can be installed on a variety of Linux operating systems, this series will focus on Ubuntu 14.04.  The first thing we need to do is create a base image.  Without one, we’d be forced to install Ubuntu individually on each server, which is not ideal.  So the first thing you’ll want to do is download the correct ISO and upload it to your ProxMox server.

Note: Getting around in ProxMox is out of scope for this series.  HOWEVER – ProxMox has a rather large following on the internet which means that Google is your friend here.

Next thing we need to do is create a VM.  We’ll create a VM called ‘ubuntu1404base’ and afterwards turn it into a template that we can clone over and over again.  I used the defaults for the most part making these notable changes…

OS Type – Linux 4.x/3.x Kernel
Disk Drive Bus/Type – VirtIO
Disk Size – 30GB
Disk Format – QEMU image format (qcow2) (this is key to being able to take snapshots!)
CPU Type – host
Memory – 2048
Network Model – VirtIO

Once the VM is created, go to its Hardware tab and create 2 additional NICs.  Accept the defaults with two exceptions.  First, make sure that all of the NICs are of type ‘VirtIO’.  Second, tag the 2nd NIC (net1) on VLAN 20…

image 
The next step is to start the VM and install Ubuntu.  I accepted all of the defaults during the operating system install with these two exceptions…

Hostname – template (remember this is just the base template we’re building to start with)
Package Install – Select OpenSSH Server

Once the install completes, reboot the VM and log in.  If you’re new to Ubuntu, you likely noticed during the install that it didn’t ask you to set a root password.  Log in with the user account you created during setup and then switch to root (sudo su).  Let’s start by updating the system and packages…

Note: This will be the first time we need VM network connectivity.  If that’s not working, we need to fix that first.  In this case I’m assuming you have DHCP enabled on VLAN 10, that VLAN 10 has a means to get to the internet, and that the VMs are using that for initial connectivity.
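On a stock 14.04 install, the update step is just the usual apt routine (run as root)…

apt-get update
apt-get -y dist-upgrade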

Next, install the following packages…
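The exact list isn’t critical at this stage; at a minimum you want the pieces that let you work with the Ubuntu cloud archive.  As a rough sketch (the package names here are my assumption, not a definitive list)…

apt-get install -y software-properties-common ubuntu-cloud-keyring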

Then we need to tell the Ubuntu cloud archive which version of OpenStack packages we want…
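Assuming you’re targeting the Kilo release (swap in whichever release you’re actually deploying), that looks something like this…

add-apt-repository cloud-archive:kilo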

After changing the cloud-archive, run the update once more…
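That’s just a repeat of the earlier step so the new repository gets picked up…

apt-get update
apt-get -y dist-upgrade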

Lastly, it’s necessary to change a couple of the default kernel networking parameters for all of this to work.  There are 3 settings we need to modify, and they all live in the file ‘/etc/sysctl.conf’…
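In a typical OpenStack deployment these are the IP forwarding and reverse path filtering knobs; treat the values below as my assumption of what belongs here and check them against your deployment guide…

net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0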

Modify those 3 settings in ‘/etc/sysctl.conf’ so they have the new values, then apply them with ‘sysctl -p’ (or just reboot).

Now shut down the VM and head back to ProxMox.  Right-click on the VM and click ‘Convert to Template’.  Once this is done, you should see the template appear on the left-hand side of the screen…

image
Now right-click the template and click ‘Clone’.  In the menu, give the VM a name, change the mode to ‘Full Clone’, select the target storage (local in my case), and make sure the format is set to QEMU image format (qcow2)…

image
After this completes, create VMs for the next two nodes, compute1 and compute2…

image

 image 
Alright, so our VMs are now built.  The next step is to complete the VMs’ base operating system configuration.  I’m making the following assumptions about the environment…

-The upstream network is configured for VLANs 10, 20, and 30
-You have a DNS server which has records to resolve the hostnames controller, compute1, and compute2 to the correct primary (net0) IP address

If that’s in place, let’s log into the VMs and start the configuration…

Note: Each of the below bolded items needs to be completed on each VM.  In some cases I list the entire config needed for each VM, in other cases I list what needs to change and assume you’ll change it to the correct value.

Change the hostname
vi /etc/hostname

Edit the network configuration
vi /etc/network/interfaces

The configuration for the network interfaces looks like this for each host…
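As a sketch, the controller might look something like this (all addresses are placeholders, so substitute your own VLAN 10 and VLAN 20 addressing; compute1 and compute2 get the same layout with their own addresses)…

# /etc/network/interfaces
auto lo
iface lo inet loopback

# net0 - management, lands in VLAN 10 via the switch native VLAN
auto eth0
iface eth0 inet static
    address 10.10.10.100
    netmask 255.255.255.0
    gateway 10.10.10.1
    dns-nameservers 10.10.10.1

# net1 - overlay, tagged VLAN 20 at the ProxMox NIC
auto eth1
iface eth1 inet static
    address 10.10.20.100
    netmask 255.255.255.0

# net2 - trunk interface, no IP; OpenStack will tag on this later
auto eth2
iface eth2 inet manual
    up ip link set dev $IFACE up
    down ip link set dev $IFACE down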

Now reboot the VMs and make sure you can ping each VM’s IP addresses on VLAN 10 and VLAN 20.  Also make sure you can do basic name resolution of each VM.

That’s it!  We are now ready to begin the OpenStack installation in our next post!

Note: Now would be a good time to take snapshots of the VMs in case you want to revert to an earlier point.  I’ll make a couple of other suggestions as to when I would recommend taking them but I found them hugely helpful in this lab work!


I’ve recently started to play around with OpenStack and decided the best way to do so would be in my home lab.  During my first attempt, I ran into quite a few hiccups that I thought were worth documenting.  In this post, I want to talk about the prep work I needed to do before I began the OpenStack install.

For the initial build, I wanted something simple so I opted for a 3 node build.  The logical topology looks like this…

image

The physical topology looks like this…

image
It’s one of my home lab boxes: a 1U Supermicro with 8 GB of RAM and a 4-core Intel Xeon (X3210) processor.  The hard drive is relatively tiny as well, coming in at 200 GB.  To run all of the OpenStack nodes on one server, I needed a virtualization layer, so I chose ProxMox (KVM) for this.

However, running a virtualized OpenStack environment presented some interesting challenges that I didn’t fully appreciate until I was almost done with the first build…

Nested Virtualization
You’re running a virtualization platform on a virtualized platform.  While this doesn’t seem like a huge deal in a home lab, your hardware has to support nested virtualization on the processor (at least it did in my setup).  To be more specific, your VM needs to be able to load two kernel modules, kvm and kvm_intel (or kvm_amd if that’s your processor type).  In all of the VM builds I did up until this point, I found that I wasn’t able to load the proper modules…

image 
ProxMox has a great article out there on this, but I’ll walk you through the steps I took to enable my hardware for nested virtualization.

The first thing to do is to SSH into the ProxMox host and check whether nested virtualization is enabled for the KVM kernel module.  To do that, run this command…

Note: You should first check the system’s BIOS to see if Intel VT or AMD-V is disabled there.
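On an Intel host the check amounts to reading the kvm_intel module’s ‘nested’ parameter (kvm_amd on AMD)…

cat /sys/module/kvm_intel/parameters/nested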

In my case, that yielded this output…

image
You guessed it, ‘N’ means not enabled.  To change this, we need to run this command…

Note: Most of these commands are the same for Intel and AMD.  Just replace any instance of ‘intel’ below with ‘amd’.
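A minimal sketch of that change is to set the module option persistently…

# kvm-amd uses nested=1 instead of nested=Y
echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf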

Then we need to reboot the ProxMox host for the setting to take effect.  Once it’s back up, you should be able to run the above command again and get the following output…

image 
It’s also important that we make sure to set the CPU ‘type’ of the VM to ‘host’ rather than the default of ‘Default (kvm64)’…

image 
If we reboot our VM and check the kernel modules, we should see that both kvm and kvm_intel are now loaded.
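A quick way to check from inside the VM (a sketch, assuming an Intel CPU)…

lsmod | grep kvm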

image
Once the correct modules are loaded, you’ll be all set to run nested KVM.

The Network
From a network perspective, we want our hosts to logically look something like this…

image 
Nothing too crazy here, just a VM with 3 NICs.  While I’m used to running all sorts of crazy network topologies virtually, this one gave me slight pause.  One of the modes that OpenStack uses for getting traffic out to the physical network is dot1q (VLAN) trunking.  In most virtual deployments, the hypervisor host gets a trunk port from the physical switch containing multiple VLANs.  Those VLANs are then mapped to ports or port-groups which can be assigned to VMs.  The net effect of this is that the VMs appear on the physical network in whatever VLAN you map them into without having to do any config on the VM OS itself.  This is very much like plugging a physical server into a switch and tagging it as an access port for a particular VLAN.  That model looks something like this…

image 
This is the model I planned on using for the management and overlay NICs on each VM.  However, this same model does not apply when we start talking about our third NIC.  This NIC needs to be able to send traffic that’s tagged on the VM itself.  That looks more like this…

image

So while the first two interfaces are easy, the third interface is entirely different since what we’re really building is a trunk within a trunk.  So the physical diagram would look more like this…

image 
At first I thought that as long as the VM NIC for the third interface (the trunk) was untagged, things should just work.  The VM would tag the traffic, the bridge on the ProxMox host wouldn’t modify the tag, and the physical switch would receive a tagged frame.  Unfortunately, I didn’t have any luck getting that to work.  Captures seemed to show that the ProxMox host was stripping the tags before forwarding them on its trunk to the physical switch.  Out of desperation I upgraded the ProxMox host from 3.4 to 4 and the problem magically went away.  I wish I had more info on that, but that’s what fixed my issue.

So here’s what the NIC configuration for one of the VMs looks like…

image
I have 3 NICs defined for the VM.  Net0 will be in VLAN 10 but notice that I don’t specify a VLAN tag for that interface.  This is intentional in my configuration.  For better or worse, I don’t have a separate management network for the ProxMox server itself.  In addition, I manage the ProxMox server from the IP interface associated with the single bridge I have defined on the host (vmbr0)…

image 
Normally, I’d tag the vmbr interface in VLAN 10, but that would imply that all VMs connected to that bridge would inherently also be in VLAN 10.  Since I don’t want that, I leave the bridge untagged and tag at the VM NIC level instead.  So back to the original question: how are these things on VLAN 10 if I’m not tagging VLAN 10?  On the physical switch, I configure the trunk port with a native VLAN of 10…

image
What this does is tell the switch that any frames that arrive untagged should be treated as members of VLAN 10.  This solves my problem and frees me up to either tag at the VM NIC level (as I do with net1) or tag on the VM itself (as I’ll do with net2) while keeping all VM interfaces on a single bridge.

Summary
I can’t stress enough the importance of starting off on the right foot when building a lab like this.  Mapping all of this out before you start will save you TONS of time in the long run.  In the next post we’re going to start building the VMs and installing the operating systems and prerequisites.  Stay tuned!


Network namespaces allow you to provide unique views of the network to different processes running on a Linux host.  If you’re coming from a traditional networking background, the closest relative to network namespaces would be VRF (Virtual Routing and Forwarding) instances.  In both cases the constructs allow us to provide a different network experience to different processes or interfaces.  For the sake of starting the conversation, let’s quickly look at an example of both VRFs and network namespaces so you get an idea of how they work.

The easiest scenario to illustrate either of these technologies is out of band management.  Take for instance this very simple network diagram…

image     
Note: I’m being purposefully vague here about the network layout and addressing.  Bear with me for a moment while I get to the point. 

As you can see, we have two users that live on the same segment (forgive me for not drawing an Ethernet segment connecting the two).  Let’s assume that the user on the left has to traverse northbound to get to resources that hang off the top network cloud.  Let’s also assume the user on the right has to manage the router through the management interface.  Standard practice these days dictates that most network infrastructure be managed through a separate ‘out of band’ network.  So in the diagram above, we have two ‘data’ interfaces and one ‘mgmt’ interface.  We also have a default route pointing north and a 192.168.1.0/24 route pointing south.  Do you see the problem with the two different traffic flows we have?

image
The user on the left can route northbound following the default route to get to resources in the top network cloud.  When the resources send return traffic, the router routes it south as expected.  The user on the right routes directly to the router’s management interface, but the return traffic heads out the router’s southbound data interface following the 192.168.1.0/24 route.  This is what we refer to as asymmetric routing.  While this isn’t always a bad thing in networking, there could be stateful network devices in the path that won’t approve of the asymmetry.  Additionally, if we’re relying on the ‘data’ interface to return management traffic, it sort of defeats the purpose of out of band management.  The problem is we can’t have a single routing table with the 192.168.1.0/24 route pointing out two different interfaces (you actually can, but that would load balance the traffic and not fix the issue).

So how is this solved?  We solve it by implementing a management VRF.  Each VRF is given its own routing table instance, and IP interfaces can be joined to any available VRF.  So in this case, we can leave the two ‘data’ interfaces in the default VRF along with the routes shown on the diagram.  In addition, we can create a management VRF, add the ‘mgmt’ interface to it, and add a duplicate 192.168.1.0/24 route pointing out of the management interface…

image 
This solves our asymmetric routing problem and allows the router to be partitioned from a layer 3 (routing) point of view.  The VRF construct has been around in networking for a long time and is widely used.

Note: I didn’t show an example config of VRFs because they’re so widely used.  If you’d like to see one take a look at this walk through on Jeremy Stretch’s blog.

So how do VRFs compare to network namespaces? Let’s use a similar example…

image
Here we have a server that has a data interface and a management interface (not ILO; that’s different hardware with a different routing table).  Let’s say that the user on the left wants to access a web page hosted off of the data interface while the user on the right wants to manage the server through the ‘mgmt’ interface.  So we have the same problem as we did with the router.  If we only have one routing table instance, we can’t solve this without asymmetrically routing the traffic.  Enter network namespaces.  Since we want to get our feet wet with network namespaces, let’s take a look at a more concrete example…

image  
Much like the router example from above, we’re going to simulate a server that has a data and a management interface.  This time however, we’re going to take things a step further to fully illustrate the power of network namespaces.  You’ll notice in this example that both the left and the right side of the diagram implement the exact same IP addressing.  We’ll assume that the server on the left wishes to access a web site hosted on the netnstest server and that the server on the right wishes to manage the netnstest server through SSH.  With the duplicate IP addressing on both sides of the diagram (and server) the only way we can make this work is with network namespaces. 

So let’s assume that the data connection is built in the default namespace.  Nothing special, just a standard interface configuration.  In my case, the configuration is stored in ‘/etc/sysconfig/network-scripts/ifcfg-ens18’ and looks like this…

image
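For reference, it’s roughly this shape (the address is a placeholder based on the 10.20.30.0/24 addressing in the diagram)…

DEVICE=ens18
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.20.30.100
PREFIX=24
GATEWAY=10.20.30.1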

Again – nothing special, a standard Linux network config file like you’re used to seeing.  At this point, the left server should be able to ping the netnstest server’s data interface as well as access the super cool webpage it is hosting…

image

Alright – so the normal network stuff is working.  So now let’s add the mgmt namespace.  To do that, let’s think about how we’d build a normal IP interface in Linux.  We’d probably do something like this…
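Something along these lines, using placeholder addressing from the diagram…

ip addr add 10.20.30.100/24 dev ens19
ip link set dev ens19 up
ip route add default via 10.20.30.1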

Pretty straightforward right?  So how do we accomplish the same thing with network namespaces?  We do this…
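A minimal sketch of the namespace version (same placeholder addressing)…

ip netns add mgmt
ip link set dev ens19 netns mgmt
ip netns exec mgmt ip addr add 10.20.30.100/24 dev ens19
ip netns exec mgmt ip link set dev ens19 up
ip netns exec mgmt ip route add default via 10.20.30.1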

The difference really boils down to the first two steps.  First, we have to create the mgmt network namespace.  Second, we have to add the physical interface that we want to use (ens19) for management to the network namespace.  After that, the only configuration difference is that we’re executing the Linux network commands within the namespace using the ‘ip netns exec mgmt’ syntax.

As you might have guessed, the ‘ip netns’ command is what we use to interact with Linux network namespaces from the shell.  If we look at the command help, we can see we have a few basic options…

image
The big hitters are add, delete, list, and exec.  The first three should speak for themselves.  Exec is the more interesting option and allows us to execute commands from within a specified network namespace.  We’ll see a few more examples of using the exec syntax in just a moment.  So let’s enter the above commands to build our management interface and then make sure it’s working.  First let’s make sure that our data interface still looks good…

image

Awesome, so now let’s try some tests from the right server to see what we get…

image
Looking good!  Now let’s try to manage the server by SSHing into it…

image 
Huh.  So that didn’t work.  If we think about this for a second, the SSH daemon wasn’t told to run in the management namespace; it was already running in the default namespace.  So for this to work, we have to run a copy of the SSH daemon in the ‘mgmt’ namespace.  We can do this by using the following command…
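A sketch of that command (the PidFile override is just there so the second daemon doesn’t step on the default instance’s pid file)…

ip netns exec mgmt /usr/sbin/sshd -o PidFile=/run/sshd-mgmt.pid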

If we run that on the netnstest host, we should now be able to SSH into the server from the server on the right…

image 
As one last piece of evidence that this is working let’s telnet to the left and right router from the netnstest server itself…

image 
As you can see, when I telnet to 10.20.30.1 I get connected to the left router.  When I telnet to 10.20.30.1 using the ‘netns exec’ command from the ‘mgmt’ network namespace, I get connected to the right router.

We should make note at this point that network namespaces are not persistent.  That is, if I reboot the netnstest server I’ll lose all of this configuration.  The fix for that would be to create a startup script which builds the management interface each time the server boots.  As part of that script, you’ll need to start a copy of any services (SSH) that you wish to use in that network namespace. 

Interestingly enough, using network namespaces for management interfaces isn’t the reason I wrote this blog.  The examples we went through above were designed to give you a base-level understanding of how a network namespace can function.  While I’m not saying that the management use case is invalid, the really interesting use cases for network namespaces lie in the virtual and container spaces.  In the next post on this topic we’ll be talking about how Docker uses network namespaces for container network isolation and connectivity.

In the meantime, take a look at Matt Oswalt’s recent blog where he describes how namespaces are the new network access layer.

