In my last post, I talked about some of the more physical aspects of my virtual home lab. We talked about the need for nested virtualization as well as what the physical and virtual network would look like. In this post, we’re going to look at building the VMs as well as getting the operating systems ready for the OpenStack install. As a quick reminder, let’s take a look at what the logical lab looks like…
The lab will consist of three VMs (to start with): a controller and two compute nodes. While OpenStack can be installed on a variety of Linux operating systems, this series will be focusing on Ubuntu 14.04. The first thing we need to do is create a base image; without one, we'd be forced to install Ubuntu individually on each server, which is not ideal. So the first thing you'll want to do is download the correct ISO and upload it to your ProxMox server.
Note: Getting around in ProxMox is out of scope for this series. HOWEVER – ProxMox has a rather large following on the internet which means that Google is your friend here.
The next thing we need to do is create a VM. We'll create a VM called 'ubuntu1404base' and afterwards turn it into a template that we can clone over and over again. I used the defaults for the most part, making these notable changes…
OS Type – Linux 4.x/3.x Kernel
Disk Drive Bus/Type – VirtIO
Disk Size – 30GB
Disk Format – QEMU image format (qcow2) (This is key to being able to do snapshots!)
CPU Type – host
Memory – 2048
Network Model – VirtIO
Once the VM is created, go under its Hardware tab and create two additional NICs. Accept the defaults with two exceptions. First, make sure that all of the NICs are of type 'VirtIO'. Second, tag the second NIC (net1) on VLAN 20…
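If you'd rather script the VM build, ProxMox exposes the same operations through its qm CLI on the host. A rough sketch under a few assumptions of mine (VM ID 9000, a storage pool named 'local', and a bridge named 'vmbr0'; adjust all of these for your setup):

```shell
# Create the base VM with the settings above (ID 9000 is arbitrary)
qm create 9000 --name ubuntu1404base --ostype l26 \
  --memory 2048 --cpu host \
  --virtio0 local:30,format=qcow2 \
  --net0 virtio,bridge=vmbr0

# Add the two extra NICs; net1 gets tagged on VLAN 20
qm set 9000 --net1 virtio,bridge=vmbr0,tag=20
qm set 9000 --net2 virtio,bridge=vmbr0
```

The GUI steps above accomplish exactly the same thing, so use whichever you're more comfortable with.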
Now boot the VM off the Ubuntu ISO and run through the installer. I used the defaults here as well, with two notable exceptions…
Hostname – template (remember, this is just the base template we're building to start with)
Package Install – Select OpenSSH Server
Once the install completes, reboot the VM and log in. If you're new to Ubuntu, you likely noticed during the install that it never asked you to set a root password. Instead, log in with the user account you created during setup and then switch over to root privileges (sudo su) to get root access. Let's start by updating the system and packages…
Note: This will be the first time we need VM network connectivity. If that's not working, we need to fix that first. In this case I'm assuming you have DHCP enabled on VLAN 10, that VLAN 10 has a means to get to the internet, and that the VMs are using it for initial connectivity.
apt-get update && apt-get dist-upgrade
Next, install the following packages…
apt-get install ntp crudini curl ubuntu-cloud-keyring
Then we need to tell the Ubuntu cloud archive which version of OpenStack packages we want…
add-apt-repository -y cloud-archive:liberty
After changing the cloud-archive, run the update once more…
apt-get update && apt-get dist-upgrade
Lastly, it's necessary to change a few of the default kernel networking parameters for all of this to work. There are three settings we need to modify, and they all live in the file /etc/sysctl.conf…
net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.rp_filter = 0
Edit the file so that the three settings have the values shown above.
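If you'd rather script the change (handy since it would otherwise be a manual edit on this template), here's a minimal sketch; the helper name enable_forwarding is my own, not a standard tool:

```shell
# enable_forwarding FILE - set the three kernel parameters in a sysctl config file
enable_forwarding() {
  conf="$1"
  for setting in "net.ipv4.ip_forward = 1" \
                 "net.ipv4.conf.default.rp_filter = 0" \
                 "net.ipv4.conf.all.rp_filter = 0"; do
    key="${setting%% =*}"
    # Replace an existing (possibly commented-out) entry, or append a new one
    if grep -q "^[#[:space:]]*${key}" "$conf"; then
      sed -i "s|^[#[:space:]]*${key}.*|${setting}|" "$conf"
    else
      echo "$setting" >> "$conf"
    fi
  done
}

# On the VM (as root): enable_forwarding /etc/sysctl.conf && sysctl -p
```

Running sysctl -p afterwards loads the new values into the running kernel without a reboot.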
Now shut down the VM and head back to ProxMox. Right-click the VM and click 'Convert to Template'. Once this is done, you should see the template appear on the left-hand side of the screen…
Now right-click the template and click 'Clone'. In the menu, give the VM a name, change the mode to 'Full Clone', select the target storage (local in my case), and make sure you change the format to 'QEMU image format (qcow2)'…
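The clone can also be done from the ProxMox shell. A sketch, assuming the template ended up as VM ID 9000 and the new VM will be ID 101 (both IDs are examples of mine):

```shell
# Full clone (not a linked clone) of the template onto local storage in qcow2 format
qm clone 9000 101 --name controller --full \
  --storage local --format qcow2

# Boot the new VM
qm start 101
```

A full clone copies the disk so the VM no longer depends on the template, which is what we want for lab nodes that will diverge from each other.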
Do that three times to create the controller, compute1, and compute2 VMs. Before we go any further, I'm assuming that…
-The upstream network is configured for VLANs 10, 20, and 30
-You have a DNS server which has records to resolve the hostnames controller, compute1, and compute2 to the correct primary (net0) IP address
If that’s in place, let’s log into the VMs and start the configuration…
Note: Each of the below bolded items needs to be completed on each VM. In some cases I list the entire config needed for each VM; in other cases I list what needs to change and assume you'll change it to the correct value.
Change the hostname
Edit the network configuration
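For the hostname change on Ubuntu 14.04, you need to touch /etc/hostname and the matching /etc/hosts entry. A sketch using the controller as the example (run as root; substitute compute1/compute2 on the other nodes):

```shell
# Set the new hostname ('controller' is the example here)
echo "controller" > /etc/hostname

# Point the installer-created 127.0.1.1 entry at the new name
sed -i 's/^127\.0\.1\.1.*/127.0.1.1\tcontroller/' /etc/hosts

# Apply immediately without waiting for a reboot
hostname controller
```

Updating /etc/hosts matters because sudo and some services complain if the local hostname doesn't resolve.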
On Ubuntu 14.04 the interface configuration lives in /etc/network/interfaces, and it looks like this for each host…
!Controller
# The primary network interface
auto eth0
iface eth0 inet static
        address 10.20.30.30
        netmask 255.255.255.0
        gateway 10.20.30.1
        dns-nameserver 10.20.30.13
        dns-search interubernet.local

auto eth1
iface eth1 inet static
        address 192.168.30.30
        netmask 255.255.255.0

auto eth2
iface eth2 inet manual
!Compute1
# The primary network interface
auto eth0
iface eth0 inet static
        address 10.20.30.31
        netmask 255.255.255.0
        gateway 10.20.30.1
        dns-nameserver 10.20.30.13
        dns-search interubernet.local

auto eth1
iface eth1 inet static
        address 192.168.30.31
        netmask 255.255.255.0

auto eth2
iface eth2 inet manual
!Compute2
# The primary network interface
auto eth0
iface eth0 inet static
        address 10.20.30.32
        netmask 255.255.255.0
        gateway 10.20.30.1
        dns-nameserver 10.20.30.13
        dns-search interubernet.local

auto eth1
iface eth1 inet static
        address 192.168.30.32
        netmask 255.255.255.0

auto eth2
iface eth2 inet manual
Now reboot the VMs and make sure you can ping each VM's IP address on both VLAN 10 and VLAN 20. Also make sure you can do basic name resolution of each VM.
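A quick way to check all of that from any one of the nodes; a sketch that assumes the addressing above and DNS records for the three hostnames:

```shell
# Ping every node on its VLAN 10 and VLAN 20 address, then by name
for target in 10.20.30.30 10.20.30.31 10.20.30.32 \
              192.168.30.30 192.168.30.31 192.168.30.32 \
              controller compute1 compute2; do
  if ping -c 1 -W 2 "$target" > /dev/null 2>&1; then
    echo "OK   $target"
  else
    echo "FAIL $target"
  fi
done
```

Any FAIL line points you at either a VLAN/interface problem (the IP targets) or a DNS problem (the hostname targets) before you start the OpenStack install.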
That’s it! We are now ready to begin the OpenStack installation in our next post!
Note: Now would be a good time to take snapshots of the VMs in case you want to revert to an earlier point. I'll make a couple of other suggestions as to when I'd recommend taking them, but I found them hugely helpful in this lab work!
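Snapshots can be taken from the GUI or from the ProxMox shell with qm; a sketch, assuming a VM ID of 101 and a snapshot name I made up:

```shell
# Snapshot the VM before the OpenStack install begins (name is arbitrary)
qm snapshot 101 pre-openstack

# Roll back to that point later if something goes sideways
qm rollback 101 pre-openstack
```

This is also why the qcow2 disk format mattered back when we built the template: snapshots depend on it.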