If you’ve made it this far, hopefully you’ve already completed steps similar to those outlined in my previous two posts…

The Lab
Prepping the VMs

If you have, we’re now ready to start installing OpenStack itself.  To do this, I’ve built a set of installation scripts.  All of the files are out on Github…

https://github.com/jonlangemak/openstackbuild

I suggest you pull them from there into a local directory you can work out of.  There is a folder for each VM that needs to be built, and each folder has a file called ‘install’.  This file contains all of the steps required for the build on each of the three nodes.  The remaining files are the configuration files that need to change in order for OpenStack to work in our build.  We’ll be copying these files over to the VMs as part of the install.

A couple of notes before we start…

-The beginning of each install file lists all of the packages that need to be installed for this to work.  I suggest you start the package install on each VM at the same time since it can take a while to complete.

The controller install has an additional step before the package install that disables a service from starting.  Ubuntu’s package manager automatically starts services as part of the installation, which is different from how it’s handled on RHEL-based systems.

-The install and configuration files assume that you used the same IPs, VLANs, and hostnames.  While most of the configuration relies on DNS, there are some hard-coded static IPs.  If you are not using the same layout, you can search the config files for flags that look like ‘**CHANGE THIS IF NEEDED**’.  Lines following that flag are specific to this configuration and will need to be changed if you used different IPs, VLANs, or hostnames.  I’m 99% sure I flagged all of the areas, but if I missed something let me know.

-The configuration relies on the upstream network being configured to support all of the defined networks and VLANs as described in earlier posts.  Later posts will rely on these subnets having reachability to the internet as well.

You can take two approaches to install OpenStack using these files…

Completely Manual
As mentioned, each folder has an ‘install’ file that walks you through the build process.  It tells you where to place the config files with the expectation that you’ll delete the existing config file and replace it with the one from the working directory.  This works well, but is also a little more time consuming than I had hoped.

Manual with CURL to drop config files
In each component’s directory I’ve placed modified ‘install’ files for each node named ‘curlinstall’.  These install files are identical to the local install files, but replace all of the config file placement with curl commands that download the files from a local HTTP server.  In my case, that server is ‘http://tools’.  If you want to take this approach you can easily modify (find and replace) the curl commands to suit your needs.

Note: Yes – I know.  This is screaming for automation.  I’m hoping to get this changed over to Salt or Ansible once I find time, but for now the focus is getting this built so we can examine the network constructs.

Regardless of which approach you take, the installation and configuration is pretty straightforward.  Follow the install scripts from top to bottom, starting with the controller and then completing the install on compute1 and compute2.  Once you’re done, go ahead and try to access the portal at this URL…

image 
You should be able to log in with any of these credentials…

Admin User – admin/openstack
Test tenant 1 – demo/demo
Test tenant 2 – demo2/demo2
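
If you’d rather sanity-check the credentials from the CLI first, something along these lines should work from the controller.  This is only a sketch; it assumes the OpenStack client ended up on the controller as part of the install and that Keystone is using its default v2.0 admin endpoint, so adjust the auth URL (or swap in the v3 variables) to match your build…

export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=openstack
export OS_AUTH_URL=http://controller:35357/v2.0
openstack project list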

Make sure that all of the credentials work as you expect and you can reach the tenant dashboards.  Next, we need to run a test to make sure that everything is working as expected.  The first thing we have to do is create an external network for the tenants to consume.  To do this, log into the dashboard as the admin user and head over to the network tab and create a new network…

Note: Just follow along for now, the next posts will walk through what we’re actually doing and all of the terminology.

image 
Here I’m creating a new network called ‘external’, mapping it to the provider interface ‘public’, setting the type to ‘VLAN’, and using VLAN tag 30.  I also declare it as ‘shared’ and ‘external’.  Create the network and make sure the creation succeeds.  You’ll get a message in the upper right corner telling you either way…
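
For reference, the same external network could be created from the CLI with something like this.  It’s a sketch; it assumes the neutron client is available and that ‘public’ is the provider label defined in the plugin config from the install files…

neutron net-create external --shared --router:external \
  --provider:network_type vlan \
  --provider:physical_network public \
  --provider:segmentation_id 30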

image
If it succeeds, go ahead and edit the network by clicking on the network name…

image
Click on the ‘Create Subnet’ button to define an IP subnet in the network.  If you’re using the same subnets that I am, define the subnet as shown below…

image 
image 
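
The CLI version of the subnet definition would look roughly like this.  The CIDR, gateway, and allocation pool below are placeholders, so substitute whatever addressing you’ve assigned to the external VLAN 30 network…

neutron subnet-create external 192.168.30.0/24 --name external-subnet \
  --gateway 192.168.30.1 --disable-dhcp \
  --allocation-pool start=192.168.30.100,end=192.168.30.200
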
Again, make sure it completes successfully…

image
Now go ahead and log out of the admin user and log in as the ‘demo’ user.  Let’s again head over to the network tab and create a new network using the following settings…

image

image

image 
Make sure the creation is successful…

image
Once the network is created, head over and create a new router…

image 
Name the router as you wish and then select the ‘external’ network we created under the admin tenant as the router’s external network.  Again – make sure this goes through successfully…

image 
Click the router name to edit it and add a new interface…

image 
Make sure this succeeds as well…
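
For reference, all three router steps could also be done from the CLI while logged in as the demo user.  This is just a sketch; ‘demo-router’ and the tenant subnet name are placeholders for whatever you named yours…

neutron router-create demo-router
neutron router-gateway-set demo-router external
neutron router-interface-add demo-router <tenant-subnet-name>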

image
Now we can try launching an instance, so head over to the instances tab and launch one…

image

image 
Hit launch and watch the status of the instance to see if it launches successfully…
 image 
If the instance launches successfully, the status should change to ‘Active’…
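
For the record, the CLI equivalent of this launch would be something like the commands below.  The flavor, image, and instance names are placeholders based on the stock Cirros image, and the net-id would come from ‘neutron net-list’…

nova boot --flavor m1.tiny --image cirros --nic net-id=<tenant-net-id> test-instance
nova list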

image
Once this happens, select the ‘Associate Floating IP’ action under the instance’s context menu…

image
Click the plus sign…

image
Hit ‘Allocate IP’…

image
Finally, click ‘Associate’ to bind the floating IP to the instance.  If successful, you should see it show up under the IP address column of the instance…
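
The floating IP steps can also be done from the CLI; a rough sketch (the instance name is the placeholder used above, and ‘external’ is the pool we created as the admin user)…

nova floating-ip-create external
nova floating-ip-associate test-instance <allocated-floating-ip>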

image 
Now head over to the access and security tab and manage the rules in the default security group.  We’re going to add two rules…

image

image 
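
The two rules are presumably ICMP and SSH, since those are exactly what we’ll test with next.  From the CLI, adding them to the default security group would look roughly like this…

nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
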
Once these are added, we should be able to access the instance from the external network via the floating IP address…

image

Ping works, now let’s try SSH…

Note: the default Cirros image credentials are cirros/cubswin:)

image
Nice!  So it’s all working.  In the next post, we’re going to talk about the basic Linux networking constructs that OpenStack uses to accomplish this.  Stay tuned!

In my last post, I talked about some of the more physical aspects of my virtual home lab.  We talked about the need for nested virtualization as well as what the physical and virtual network would look like.  In this post, we’re going to look at building the VMs as well as getting the operating systems ready for the OpenStack install.  As a quick reminder, let’s take a look at what the logical lab looks like…

image
The lab will consist of 3 VMs (to start with): a controller and two compute nodes.  While OpenStack can be installed on a variety of Linux operating systems, this series will be focusing on Ubuntu version 14.04.  The first thing we need to do is create a base image.  Without a base image, we’d be forced to install Ubuntu individually on each server, which is not ideal.  So the first thing you’ll want to do is download the correct ISO and upload it to your ProxMox server.

Note: Getting around in ProxMox is out of scope for this series.  HOWEVER – ProxMox has a rather large following on the internet which means that Google is your friend here.

Next thing we need to do is create a VM.  We’ll create a VM called ‘ubuntu1404base’ and afterwards turn it into a template that we can clone over and over again.  I used the defaults for the most part, making these notable changes…

OS Type – Linux 4.x/3.x Kernel
Disk Drive Bus/Type – VirtIO
Disk Size – 30GB
Disk Format – QEMU (This is key to being able to do snapshots!)
CPU Type – host
Memory – 2048
Network Model – VirtIO

Once the VM is created, go under its hardware tab and create 2 additional NICs.  Accept the defaults with two exceptions.  First, make sure that they are all of type ‘VirtIO’.  Second, tag the 2nd NIC (net1) on VLAN 20…

image 
The next step is to start the VM and install Ubuntu.  I accepted all of the defaults during the operating system install with these two exceptions…

Hostname – template (remember this is just the base template we’re building to start with)
Package Install – Select OpenSSH Server

Once the install completes, reboot the VM and log in.  If you’re new to Ubuntu, you likely noticed during the install that it didn’t ask you to set a root password. You need to use the user account you created during setup to log in and then switch over to root privileges (sudo su) to get root access. Let’s start by updating the system and packages…

Note: This will be the first time we need VM network connectivity.  If that’s not working, we need to fix that first.  In this case I’m assuming you have DHCP enabled on VLAN 10, VLAN 10 has a means to get to the internet, and that the VMs are using that for initial connectivity.
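
The update itself is just the usual apt routine; something like this as root…

apt-get update
apt-get -y dist-upgrade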

Next, install the following packages…

Then we need to tell the Ubuntu cloud archive which version of OpenStack packages we want…
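
The exact release is called out in the install files on GitHub.  As an example of the mechanics, selecting a release from the Ubuntu cloud archive looks roughly like this (‘kilo’ here is just a placeholder, use whichever release the install files specify)…

apt-get install -y software-properties-common
add-apt-repository -y cloud-archive:kilo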

After changing the cloud-archive, run the update once more…

Lastly, it’s necessary to change a couple of the default kernel networking parameters for all of this to work. There are 3 settings we need to modify and they all exist in the file ‘/etc/sysctl.conf’…

Modify those 3 settings so they have the new values and make sure they get applied.
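
In a typical OpenStack build these are the IP forwarding and reverse path filter knobs, so the modified lines would end up looking something like the sketch below.  If the install files list different values, use those instead…

net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0

Then apply them without a reboot…

sysctl -p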

Now shut down the VM and head back to ProxMox.  Right click on the VM and click on ‘Convert to Template’.  Once this is done, you should see the template appear on the left-hand side of the screen…

image
Now right click the template, and click on ‘Clone’.  On the menu give the VM a name, change the mode to ‘Full Clone’, select the target storage (local in my case), and make sure you change the format to ‘QEMU’…

image
After this completes, create VMs for the next two nodes, compute1 and compute2…

image

 image 
Alright, so our VMs are now built; the next step is to get the VMs’ base operating system configuration completed.  I’m making the following assumptions about the environment…

-The upstream network is configured for VLANs 10, 20, and 30
-You have a DNS server with records that resolve the hostnames controller, compute1, and compute2 to the correct primary (net0) IP addresses

If that’s in place, let’s log into the VMs and start the configuration…

Note: Each of the bolded items below needs to be completed on each VM.  In some cases I list the entire config needed for each VM; in other cases I list what needs to change and assume you’ll change it to the correct value.

Change the hostname
vi /etc/hostname

Edit the network configuration
vi /etc/network/interfaces

The configuration for the network interfaces looks like this for each host…
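
Since the interface config itself isn’t reproduced here, below is a sketch of what /etc/network/interfaces might look like on one of the nodes, assuming eth0 is net0 (VLAN 10, untagged inside the VM), eth1 is net1 (VLAN 20, tagged at the ProxMox NIC), and eth2 is net2 (the trunk, left unnumbered so the VM can tag traffic itself).  The addresses are placeholders, so substitute the ones you assigned…

# Loopback
auto lo
iface lo inet loopback

# net0 - management on VLAN 10 (untagged in the VM, native VLAN upstream)
auto eth0
iface eth0 inet static
    address 192.168.10.101
    netmask 255.255.255.0
    gateway 192.168.10.1
    dns-nameservers 192.168.10.5

# net1 - overlay on VLAN 20 (tagged at the ProxMox NIC level)
auto eth1
iface eth1 inet static
    address 192.168.20.101
    netmask 255.255.255.0

# net2 - provider trunk, no IP; OpenStack will tag VLANs on this interface
auto eth2
iface eth2 inet manual
    up ip link set dev $IFACE up
    down ip link set dev $IFACE down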

Now reboot the VMs and make sure you can ping each of the VMs’ IP addresses on VLAN 10 and VLAN 20.  Also make sure you can do basic name resolution of each VM.

That’s it!  We are now ready to begin the OpenStack installation in our next post!

Note: Now would be a good time to take snapshots of the VMs in case you want to revert to an earlier point.  I’ll make a couple of other suggestions as to when I would recommend taking them but I found them hugely helpful in this lab work!

I’ve recently started to play around with OpenStack and decided the best way to do so would be in my home lab.  During my first attempt, I ran into quite a few hiccups that I thought were worth documenting.  In this post, I want to talk about the prep work I needed to do before I began the OpenStack install.

For the initial build, I wanted something simple so I opted for a 3 node build.  The logical topology looks like this…

image

The physical topology looks like this…

image
It’s one of my home lab boxes: a 1U Supermicro with 8 gigs of RAM and a 4 core Intel Xeon (X3210) processor.  The hard drive is relatively tiny as well, coming in at 200 gig.  To run all of the OpenStack nodes on 1 server, I needed a virtualization layer, so I chose ProxMox (KVM) for this.

However, running a virtualized OpenStack environment presented some interesting challenges that I didn’t fully appreciate until I was almost done with the first build…

Nested Virtualization
You’re running a virtualization platform on a virtualized platform.  While this doesn’t seem like a huge deal in a home lab, your hardware (at least in my setup) has to support nested virtualization on the processor.  To be more specific, your VM needs to be able to load two kernel modules, kvm and kvm_intel (or kvm_amd if that’s your processor type).  In all of the VM builds I did up until this point, I found that I wasn’t able to load the proper modules…

image 
ProxMox has a great article out there on this, but I’ll walk you through the steps I took to enable my hardware for nested virtualization.

The first thing to do is to SSH into the ProxMox host, and check to see if hardware assisted virtualization is enabled.  To do that, run this command…

Note: You should first check the system’s BIOS to see if Intel VT or AMD-V is disabled there.
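
Assuming the standard KVM approach, the check just reads the ‘nested’ parameter of the kvm_intel module (use kvm_amd on an AMD box)…

cat /sys/module/kvm_intel/parameters/nested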

In my case, that yielded this output…

image
You guessed it, ‘N’ means not enabled.  To change this, we need to run this command…

Note: Most of these commands are the same for Intel and AMD.  Just replace any instance of ‘intel’ below with ‘amd’.
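
Following the approach in the ProxMox wiki, enabling nesting persistently means setting a module option so it gets picked up when the module loads; something like this…

echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf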

Then we need to reload the ProxMox host for the setting to take effect.  Once it’s reloaded, you should be able to run the above command again and now get the following output…

image 
It’s also important that we make sure to set the CPU ‘type’ of the VM to ‘host’ rather than the default of ‘Default (kvm64)’…

image 
If we reboot our VM and check the kernel modules, we should see that both kvm and kvm_intel are now loaded.
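
Checking is just a matter of listing the loaded modules from inside the VM…

lsmod | grep kvm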

image
Once the correct modules are loaded, you’ll be all set to run nested KVM.

The Network
From a network perspective, we want our hosts to logically look something like this…

image 
Nothing too crazy here, just a VM with 3 NICs.  While I’m used to running all sorts of crazy network topologies virtually, this one gave me slight pause.  One of the modes that OpenStack uses for getting traffic out to the physical network is dot1q (VLAN) trunking.  In most virtual deployments, the hypervisor host gets a trunk port from the physical switch containing multiple VLANs.  Those VLANs are then mapped to ports or port-groups which can be assigned to VMs.  The net effect of this is that the VMs appear on the physical network in whatever VLAN you map them into without having to do any config on the VM OS itself.  This is very much like plugging a physical server into a switch port that’s configured as an access port for a particular VLAN.  That model looks something like this…

image 
This is the model I planned on using for the management and the overlay NIC on each VM.  However, this same model does not apply when we start talking about our third NIC.  This NIC needs to be able to send traffic that’s tagged on the VM itself.  That looks more like this…

image

So while the first two interfaces are easy, the third interface is entirely different since what we’re really building is a trunk within a trunk.  The physical diagram would look more like this…

image 
At first I thought that as long as the VM NIC for the third interface (the trunk) was untagged, things would just work.  The VM would tag the traffic, the bridge on the ProxMox host wouldn’t modify the tag, and the physical switch would receive a tagged frame.  Unfortunately, I didn’t have any luck getting that to work.  Captures seemed to show that the ProxMox host was stripping the tags before forwarding them on its trunk to the physical switch.  Out of desperation I upgraded the ProxMox host from 3.4 to 4 and the problem magically went away.  Wish I had more info on that, but that’s what fixed my issue.

So here’s what the NIC configuration for one of the VMs looks like…

image
I have 3 NICs defined for the VM.  Net0 will be in VLAN 10 but notice that I don’t specify a VLAN tag for that interface.  This is intentional in my configuration.  For better or worse, I don’t have a separate management network for the ProxMox server itself.  In addition, I manage the ProxMox server from the IP interface associated with the single bridge I have defined on the host (vmbr0)…

image 
Normally, I’d tag the vmbr interface in VLAN 10, but that would imply that all VMs connected to that bridge would also inherently be in VLAN 10.  Since I don’t want that, I need to skip tagging at the bridge level and tag at the VM NIC level instead.  So back to the original question: how are these things on VLAN 10 if I’m not tagging VLAN 10?  On the physical switch, I configure the trunk port to have a native VLAN of 10…

image
What this does is tell the switch that any frames that arrive untagged should be a member of VLAN 10.  So this solves my problem and frees me up to either tag on the VM NIC (as I do with net0) or tag on the VM itself (as I’ll do with net2) while having all VM interfaces be members of a single bridge.
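
On a Cisco-style switch (an assumption, so adjust the syntax for your gear), the trunk port facing the ProxMox host would look something like this; the interface name is a placeholder…

interface GigabitEthernet0/1
 switchport mode trunk
 switchport trunk native vlan 10
 switchport trunk allowed vlan 10,20,30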

Summary
I can’t stress enough the importance of starting off on the right foot when building a lab like this.  Mapping all of this out before you start will save you TONS of time in the long run.  In the next post we’re going to start building the VMs and installing the operating systems and prerequisites.  Stay tuned!
