Linux


As many of you have noticed, I’ve been neglecting the blog for the past few months.  The main reason for this is that the majority of my free time was being spent generating content for a new book.  I’m pleased to announce that the book, Docker Networking Cookbook, has now been released!

Here’s a brief description of the book…

“Networking functionality in Docker has changed considerably since its first release, evolving to offer a rich set of built-in networking features, as well as an extensible plugin model allowing for a wide variety of networking functionality. This book explores Docker networking capabilities from end to end. Begin by examining the building blocks used by Docker to implement fundamental container networking before learning how to consume built-in networking constructs as well as custom networks you create on your own. Next, explore common third-party networking plugins, including detailed information on how these plugins interoperate with the Docker engine. Consider available options for securing container networks, as well as a process for troubleshooting container connectivity. Finally, examine advanced Docker networking functions and their relevant use cases, tying together everything you need to succeed with your own projects.”

The book is available from Packt, and I believe Amazon has it as well.  If you happen to buy a copy, I would greatly appreciate any feedback you have.  This is my first attempt at writing a book, so any critiques you can share would be really helpful.

A big thank you to all of the folks at Packt who made this possible and worked with me through the editing and publishing process.  I’d also like to thank the technical reviewer, Francisco Souza, for his review.

Now that the book is published I look forward to spending my free time blogging again.  Thanks for hanging in there!


Ansible up and running

After much delay, I’ve finally found time to take a look at Ansible.  I’ve spent some time looking at possible platforms to automate network deployment, and Ansible seems to be a favorite in this arena.  One of the primary reasons for this is that Ansible is ‘clientless’ (I’m putting that in quotes for a reason; more on that in a later post).  So unlike Chef, Puppet, and Salt (yes, there are proxy modes available in some of those products), Ansible does not require an installed client on the remote endpoints.  So let’s get right into a basic lab setup.

While the end goal will be to use Ansible to automate network appliances, we’re going to start with a more standard use case: Linux servers.  The base lab we will start with is two servers, one acting as the Ansible server and the second acting as an Ansible client (a remote server).  Both hosts are CentOS 7 based.  So our base lab looks like this…

[Diagram: ansibleserver and ansibleclient1, two CentOS 7 hosts on the same network]
Pretty exciting right?  I know, it’s not, but I want to start with the basics and build from there…

Note: I’ll refer to ansibleserver as ‘server’ and ansibleclient1 as ‘client’ throughout the remainder of the article.

If you’re following along, my assumption is that you have 2 servers that are similarly IP’d, have resolvable DNS names, and are fully updated CentOS 7 hosts.  That being said, let’s start out by building the Ansible server node.  The install process looks like this…
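Assuming a stock CentOS 7 host pulling Ansible from the EPEL repository, it’s just a couple of yum commands…

```
# Ansible ships in the EPEL repository on CentOS 7
yum install -y epel-release
yum install -y ansible
```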

That’s it.  You’re done.  Do a quick check and see if Ansible is working as expected…

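A quick version check confirms the install…

```
ansible --version
```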
So it looks like everything is installed.  The next thing we want to do is configure a means to communicate with the clients.  Ansible’s default means of doing this is with SSH.  So let’s configure an SSH key on the server and send it to the client…
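Assuming you’re working as root on both hosts, something along these lines does the trick…

```
# generate a key pair on the server (accept the defaults)
ssh-keygen -t rsa
# push the public key into the client's authorized_keys file
ssh-copy-id root@ansibleclient1
```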

After the key has been installed, test it out by SSHing to the client from the server…

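A quick test from the server…

```
ssh root@ansibleclient1
```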

The login should work and you shouldn’t need to specify a password for the root user.

Note: In this example I’m using the root account on both the server and the client.  By default, Ansible attempts to use the currently logged in user to connect to the clients.  If you don’t plan on using the root user on the server, you’ll need to tell Ansible to still use root for connectivity. I plan on covering this functionality in a later post.

The next thing we need to do is define the clients we want the server to work with.  This is done by defining them in Ansible’s ‘hosts’ file, which is located at ‘/etc/ansible/hosts’.  Out of the box, this file is full of examples.  For the sake of clarity, I’ve removed the examples, leaving the file looking like this…

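The trimmed-down file contains just one group with a single host…

```
[linuxservers]
ansibleclient1
```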
Here you can see I’ve added a group named ‘linuxservers’ and in that group I’ve defined a client ‘ansibleclient1’.  Any host that you wish to manage must be specified in this file.  Once defined, it can be referenced either directly by name or as part of a group.  Review some of the default examples in this file to give you an idea of how you can match on different hosts.

So now that we have a client defined, how do we talk to it with Ansible?  The best test is to try and run some basic modules against the clients.  For instance, there’s a ‘ping’ module that lets you see which hosts are online…

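All three of the invocations described below look something like this…

```
ansible ansibleclient1 -m ping
ansible linuxservers -m ping
ansible all -m ping
```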
Notice that you can run the ping module against the client in a few different ways: by name, by group, or by using the ‘all’ flag, which matches all clients defined in the hosts file.  As you’ll see, modules are the key component of Ansible that allows it to perform a wide variety of tasks.  For instance, there’s a module for SELinux…

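Run against the group we defined earlier, that looks something like this…

```
ansible linuxservers -m selinux -a "state=disabled"
```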
Above I call the ‘selinux’ module and pass it an argument with the ‘-a’ flag to tell it to disable SELinux.  There’s also a shell module that allows you to run shell commands on clients like I do below with the ‘ip addr’ command…

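For example…

```
ansible linuxservers -m shell -a "ip addr"
```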

The list of all of the modules can be found on the Ansible website.  So while the modules themselves are powerful, running them ad hoc like this isn’t much better than logging into each host and running the commands yourself.  The system becomes really powerful when you couple modules with playbooks.

Ansible playbooks are written in YAML and contain one or more ‘plays’.  Let’s look at an example playbook so you can see what I’m talking about…
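A minimal version of that playbook might look like this (the exact handler implementations are my assumption)…

```
---
- hosts: linuxservers
  tasks:
    - name: Install Apache Web Server
      yum: name=httpd state=installed
      notify:
        - openport80
        - startwebserver
  handlers:
    - name: openport80
      # assumption: open TCP 80 in the default firewalld zone
      firewalld: port=80/tcp permanent=true immediate=true state=enabled
    - name: startwebserver
      service: name=httpd state=started
```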

Here’s a fairly basic playbook that installs and starts an Apache web server.  This playbook defines one play with one task.  Plays are defined by specifying the hosts to be part of the play, followed by a series of tasks to execute against them.  Playbooks can contain multiple plays, each with multiple tasks.  In this case, the play has one task, which uses the ‘yum’ module.  In addition to the tasks, you can also define handlers.  These are items that you want to run ONLY if a certain task makes successful changes to the system.  So in this case, we tell the task ‘Install Apache Web Server’ to notify the handlers ‘openport80’ and ‘startwebserver’.  If the task results in the system successfully installing Apache, it will notify the handlers defined for the task.  So let’s save this playbook on the server and run it…

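Assuming the playbook was saved as ‘webserver.yml’ (a name I picked, use whatever you like)…

```
ansible-playbook webserver.yml
```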
As you can see, the means to call a playbook is simple: you just use the ‘ansible-playbook’ command and specify the YAML playbook definition.  In this run, Ansible successfully executed the task and in turn triggered the two defined handlers.  If we browse to the client on port 80, we should see the Apache start page…

Success!  Let’s run the playbook again and see what happens…

Note that this time the task completed without making any changes.  Since nothing changed, there’s no need to notify any of the handlers.  Pretty slick huh?

I hope this first look at Ansible has been helpful.  Stay tuned for more posts on other features and ways to automate with Ansible!


If you’ve made it this far, hopefully you’ve already completed steps similar to those outlined in my previous two posts…

The Lab
Prepping the VMs

If you have, we’re now ready to start installing OpenStack itself.  To do this, I’ve built a set of installation scripts.  All of the files are out on Github…

https://github.com/jonlangemak/openstackbuild

I suggest you pull them from there into a local directory you can work off of.  There is a folder for each VM that needs to be built, and each folder has a file called ‘install’.  This file contains all of the steps required to build each of the three nodes.  The remaining files are all of the configuration files that need to change in order for OpenStack to work in this build.  We’ll be copying these files over to the VMs as part of the install.

A couple of notes before we start…

-The beginning of each install file lists all of the packages that need to be installed for this to work.  I suggest you start the package install on each VM at the same time as it can take some time to complete.

-The controller install has an additional step before the package install which disables a service from running.  Ubuntu’s package manager automatically starts services as part of the installation, which is different from how it’s handled on RHEL based systems.

-The configuration files assume that you used the same IPs, VLANs, and hostnames.  While most of the configuration relies on DNS, there are some hard coded static IPs.  If you are not using the same layout, you can search the config files for flags that look like ‘**CHANGE THIS IF NEEDED**’.  Lines following that flag are specific to this configuration and will need to be changed if you used different IPs, VLANs, or hostnames.  I’m 99% sure I flagged all of the areas, but if I missed something let me know.

-The configuration relies on the upstream network being configured to support all of the defined networks and VLANs as described in earlier posts.  Later posts will rely on these subnets having reachability to the internet as well.

You can take two approaches to install OpenStack using these files…

Completely Manual
As mentioned, each folder has an ‘install’ file that walks you through the build process.  It tells you where to place the config files with the expectation that you’ll delete the existing config file and replace it with the one from the working directory.  This works well, but is also a little more time consuming than I had hoped.

Manual with CURL to drop config files
In each component’s directory I’ve placed a modified ‘install’ file named ‘curlinstall’.  These install files are identical to the local install files, but replace all of the config file placement with curl commands that download the files from a local HTTP server.  In my case, that server is ‘http://tools’.  If you want to take this approach, you can easily modify (find and replace) the curl commands to suit your needs.
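For example, something like this (with ‘http://yourwebserver’ standing in for your own HTTP server) repoints a curlinstall file in one shot…

```
sed -i 's|http://tools|http://yourwebserver|g' curlinstall
```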

Note: Yes, I know.  This is screaming for automation.  I’m hoping to get this changed over to Salt or Ansible once I find time, but for now the focus is getting this built so we can examine the network constructs.

Regardless of which approach you take, the installation and configuration is pretty straightforward.  Follow the install scripts from top to bottom, starting with the controller and then completing the install on compute1 and compute2.  Once you’re done, go ahead and try to access the dashboard portal…

[Screenshot: OpenStack dashboard login page]
You should be able to log in with any of these credentials…

Admin User – admin/openstack
Test tenant 1 – demo/demo
Test tenant 2 – demo2/demo2

Make sure that all of the credentials work and that you can reach the tenant dashboards.  Next, we need to run a test to make sure everything is working as expected.  The first thing we have to do is create an external network for the tenants to consume.  To do this, log into the dashboard as the admin user, head over to the network tab, and create a new network…

Note: Just follow along for now; the next posts will walk through what we’re actually doing and all of the terminology.

Here I’m creating a new network called ‘external’, saying it exists out of the interface ‘public’, is of type ‘VLAN’, and uses VLAN tag 30.  I also declare it as ‘shared’ and ‘external’.  Create the network and make sure the creation succeeds.  You’ll get a message in the upper right corner telling you either way…
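As an aside, the rough neutron CLI equivalent of this dialog, run as the admin user, would be something like…

```
neutron net-create external --shared --router:external \
  --provider:network_type vlan \
  --provider:physical_network public \
  --provider:segmentation_id 30
```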

If it succeeds, go ahead and edit the network by clicking on the network name…

Click on the ‘Create Subnet’ button to define an IP subnet in the network.  If you’re using the same subnets that I am, define the subnet as shown below…

[Screenshots: external subnet definition dialogs]
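The rough CLI equivalent, with placeholder addressing standing in for whatever subnet you use (I’m also assuming DHCP is disabled on the external network, which is typical), would look something like this…

```
neutron subnet-create external 203.0.113.0/24 --name external-subnet \
  --gateway 203.0.113.1 --disable-dhcp \
  --allocation-pool start=203.0.113.100,end=203.0.113.200
```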
Again, make sure it completes successfully…

Now go ahead and log out of the admin user and log in as the ‘demo’ user.  Let’s again head over to the network tab and create a new network using the following settings…

[Screenshots: tenant network creation dialogs]
Make sure the creation is successful…

Once the network is created, head over and create a new router…

Name the router as you wish and then select the ‘external’ network we created under the admin tenant as the router’s external network.  Again, make sure this goes through successfully…

Click the router name to edit it and add a new interface…

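For reference, the router creation and interface steps map roughly to these CLI commands (names are hypothetical)…

```
neutron router-create demorouter
neutron router-gateway-set demorouter external
neutron router-interface-add demorouter <tenant subnet id>
```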
Make sure this succeeds as well…

Now we can try and launch an instance, so head over to the instance tab and launch one…

[Screenshots: instance launch dialog]
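As an aside, the rough CLI equivalent, with hypothetical flavor, image, and instance names, would be…

```
nova boot demoinstance1 --flavor m1.tiny --image cirros \
  --nic net-id=<tenant network id>
```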
Hit launch and watch the status of the instance to see if it launches successfully…
If the instance launches successfully, the status should change to ‘Active’…

Once this happens, select the ‘Associate Floating IP’ action under the instance’s context menu…

Click the plus sign…

Hit ‘Allocate IP’…

Finally, click ‘Associate’ to bind the floating IP to the instance.  If successful, you should see it show up under the IP address column for the instance…
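For reference, the rough CLI equivalent (where you’d first need to look up the instance’s port ID) would be…

```
neutron floatingip-create external
neutron floatingip-associate <floating ip id> <instance port id>
```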

Now head over to the access and security tab and manage the rules in the default security group.  We’re going to add two rules…

[Screenshots: rules allowing ingress ICMP and SSH]
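The two rules map roughly to these CLI commands, run as the demo tenant…

```
neutron security-group-rule-create --protocol icmp --direction ingress default
neutron security-group-rule-create --protocol tcp --port-range-min 22 \
  --port-range-max 22 --direction ingress default
```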
Once these are added, we should be able to access the instance from the external network via the floating IP address…


Ping works, now let’s try SSH…

Note: the default Cirros image credentials are cirros/cubswin:)

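For example…

```
ssh cirros@<floating ip>
```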
Nice!  So it’s all working.  In the next post, we’re going to talk about the basic Linux networking constructs that OpenStack uses to accomplish this.  Stay tuned!

