docker

Some of you will recall that I previously wrote a set of SaltStack states to provision a bare metal Kubernetes cluster.  The idea was that you could use it to quickly deploy (and redeploy) a Kubernetes lab so you could get more familiar with the project and do some lab work on a real cluster.  Kubernetes is a fast-moving project, and I think you’ll find that those states likely no longer work with all of the changes that have been introduced.  As I looked to refresh the posts, I found that I was now much more comfortable with Ansible than I was with SaltStack, so this time around I decided to write the automation using Ansible (I also updated the SaltStack version, but I’ll be focusing on Ansible going forward).

However – before I could automate the configuration I had to remind myself how to do the install manually. To do this, I leaned heavily on Kelsey Hightower’s ‘Kubernetes the hard way’ instructions.  These are a great resource and if you haven’t installed a cluster manually before I suggest you do that before attempting to automate an install.  You’ll find that the Ansible role I wrote VERY closely mimics Kelsey’s directions so if you’re looking to understand what the role does I’d suggest reading through Kelsey’s guide.  A big thank you to him for publishing it!

So let’s get started…

This is what my small lab looks like.  A couple of brief notes on the base setup…

  • All hosts are running a fresh install of Ubuntu 16.04.  The only options selected for package installation were for the OpenSSH server so we could access the servers via SSH
  • The servers all have static IPs as shown in the diagram and use the L3 gateway listed there as their default gateway
  • All servers reference a local DNS server 10.20.30.13 (not pictured) and are resolvable in the local ‘interubernet.local’ domain (ex: ubuntu-1.interubernet.local).  They are also reachable via short name since the configuration also specifies a search domain (interubernet.local).
  • All of these servers can reach the internet through the L3 gateway and use the aforementioned DNS server to resolve public names.  This is important since the nodes will download binaries from the internet during cluster configuration.
  • In my example – I’m using 5 hosts.  You don’t need 5, but I think you’d want at least 3 so you can have 1 master and 2 minions.  The role is configurable, so you can have as many hosts as you want
  • I’ll be using the first host (ubuntu-1) as both the Kubernetes master as well as the Ansible controller.  The remaining hosts will be Ansible clients and Kubernetes minions
  • The servers all have a user called ‘user’ which I’ll be using throughout the configuration

With that out of the way, let’s begin.  The first thing we want to do is get Ansible up and running.  To do that, we’ll start on the Ansible controller (ubuntu-1) by getting SSH prepped for Ansible.  We’ll begin by generating an SSH key for our user (remember, this is a new box; you might already have a key)…
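
For reference, generating the key looks something like this (just hit enter at each prompt)…

# Generate an SSH key pair for 'user'; accept the default location and
# leave the passphrase empty (this is a lab, after all)
ssh-keygen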

To do this, we use the ‘ssh-keygen’ command to create a key for the user. In my case, I just hit enter to accept the defaults and to not set a password on the key (remember – this is a lab). Next we need to copy the public key to all of the servers that the Ansible controller needs to talk to. To perform the copy we’ll use the ‘ssh-copy-id’ command to move the key to all of the hosts…
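
Copying the key out looks something like this (repeat it for each of the five servers, swapping in the right hostname)…

# Push the public key into the remote host's authorized_keys file
# (you'll be prompted for the user's password one last time)
ssh-copy-id user@ubuntu-5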

Above I copied the key over for the user ‘user’ on the server ubuntu-5. You’ll need to do this for all 5 servers (including the master).  Now that that’s in place, make sure the keys are working by trying to SSH directly into the servers…
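
A quick test is just to SSH to each host in turn; with the keys in place you should land at a shell without being prompted for a password…

# Should log you straight in without asking for a password
ssh user@ubuntu-2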

While you’re in each server make sure that Python is installed on each host. Besides the above SSH setup – having Python installed is the only other requirement for the hosts to be able to communicate and work with the Ansible controller…
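
Checking for (and, if needed, installing) Python only takes a second. On Ubuntu 16.04 something like the python-minimal package is enough for Ansible’s purposes…

# Check whether Python is present and install a minimal Python if it isn't
which python || sudo apt-get install -y python-minimal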

In my case Python wasn’t installed (these were really stripped down OS installs so that makes sense), but there’s a good chance your servers will already have it. Once all of the clients are tested we can move on to installing Ansible on the controller node. To do this we’ll use the following commands…
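
One common way to get a recent Ansible on Ubuntu 16.04 is from the Ansible PPA; the commands below are along those lines (adjust if you’d rather use the stock distro packages or pip)…

# Add the Ansible PPA and install Ansible along with Python pip
sudo apt-get update
sudo apt-get install -y software-properties-common
sudo add-apt-repository -y ppa:ansible/ansible
sudo apt-get update
sudo apt-get install -y ansible python-pip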

I won’t bother showing the output since these are all pretty standard operations. Note that in addition to installing Ansible we’re also installing Python pip. Some of the Jinja templating I do in the playbook requires the Python netaddr library.  After you install Ansible and pip, take care of installing the netaddr package to get that out of the way…
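
Installing it is a one-liner…

# netaddr is needed by the Jinja2 templating used in the role
sudo pip install netaddr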

Now we need to tell Ansible what hosts we’re working with. This is done by defining hosts in the ‘/etc/ansible/hosts’ file. The Kubernetes role I wrote expects two host groups: one called ‘masters’ and one called ‘minions’. When you edit the hosts file for the first time there will likely be a series of comments with examples. To clean things up I like to delete all of the example comments and just add the two required groups. In the end my Ansible hosts file looks like this…
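
Based on the lab described above (ubuntu-1 as the master and the rest as minions), the file ends up looking something like this…

[masters]
ubuntu-1

[minions]
ubuntu-2
ubuntu-3
ubuntu-4
ubuntu-5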

You’ll note that the ‘masters’ group is plural but at this time the role only supports defining a single master.

Now that we’ve told Ansible what hosts we’re working with, we can verify that it can actually reach them. To do that, run this command…
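
Ansible’s ping module is the standard connectivity test here…

# Run the 'ping' module against every host in the inventory
ansible all -m ping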

You should see a ‘pong’ result from each host indicating that it worked. Pretty easy, right? Now we need to install the role. To do this we’ll create a new role directory called ‘kubernetes’ and then clone my repository into it like this…
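
Something like this will do it (the repository URL below is a placeholder; substitute the URL of the role’s repo on GitHub)…

# Create the role directory under Ansible's default roles path and clone
# the repo into it
sudo mkdir -p /etc/ansible/roles/kubernetes
cd /etc/ansible/roles/kubernetes
sudo git clone <repository-url> .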

Make sure you put the ‘.’ at the end of the git command, otherwise git will create a new folder inside the kubernetes directory to put all of the files in.  Once you’ve downloaded the repository you need to update the variables that Ansible will use for the Kubernetes installation. To do that, you’ll need to edit the role’s variable file, which should now be located at ‘/etc/ansible/roles/kubernetes/vars/main.yaml’. Let’s take a look at that file…
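
The real file is longer and commented throughout; the trimmed sketch below is only meant to show the shape of the two pieces we care about next (the exact field names inside each host entry, and the example addresses, are illustrative rather than authoritative)…

# host_roles maps each lab host to its Kubernetes role; the 'type' of each
# entry must match the Ansible group the host belongs to
host_roles:
  - name: ubuntu-1
    type: master
  - name: ubuntu-2
    type: minion
  - name: ubuntu-3
    type: minion

# cluster_info holds cluster-wide settings, including the two networks
# described below (example values only; pick ranges unused in your lab)
cluster_info:
  service_network_cidr: 10.11.12.0/24
  cluster_node_cidr: 10.100.0.0/16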

I’ve done my best to make ‘useful’ comments in there, but there’s a lot more that needs to be explained (and will be in a future post). For now, you definitely need to pay attention to the following items…

  • The host_roles list needs to be updated to reflect your actual hosts.  You can have more or fewer, but the type of each host you define in this list needs to match the group it’s a member of in your Ansible hosts file.  That is, a minion type in the vars file needs to be in the minions group in the Ansible hosts file.
  • Under cluster_info you need to make sure you pick two networks that don’t overlap with your existing network.
    • For service_network_cidr pick an unused /24.  This won’t ever get routed on your network, but it should be unique.
    • For cluster_node_cidr pick a large network that you aren’t using (a /16 is ideal).  Kubernetes will allocate a /24 for each host out of this network to use for pods.  You WILL need to route this traffic on your L3 gateway but we’ll walk through how to do that once we get the cluster up and online.

Once the vars file is updated the last thing we need to do is tell Ansible what to do with the role. To do this, we’ll build a simple playbook that looks like this…
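
A minimal playbook that does this looks something like the following…

---
# Apply the kubernetes role to every host in the masters and minions groups
- hosts: masters:minions
  roles:
    - kubernetes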

The playbook says “Run the role kubernetes on hosts in the masters and the minions group”.  Save the playbook as a YAML file somewhere on the system (in my case I just saved it in ~ as k8s_install.yaml). Now all we need to do is run it! To do that run this command…
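
With the playbook saved, the run is a single line…

# --ask-become-pass prompts for the sudo password to use on the remote hosts
ansible-playbook ~/k8s_install.yaml --ask-become-pass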

Note the addition of the ‘--ask-become-pass’ parameter. When you run this command, Ansible will ask you for the sudo password to use on the hosts. Many of the tasks in the role require sudo access, so this is required. An alternative to passing this parameter is to edit the sudoers file on each host and allow the user ‘user’ to perform passwordless sudo. However, using the parameter is just easier to get you up and running quickly.

Once you run the command, Ansible will start doing its thing. The output is rather verbose and there will be lots of it…

If you encounter any failures using the role please contact me (or better yet open an issue in the repo on GitHub).  Once Ansible finishes running we should be able to check the status of the cluster almost immediately…
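
Checking from the master is just a couple of kubectl commands…

# Control plane component health and node status
kubectl get componentstatuses
kubectl get nodes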

The component status should return ‘Healthy’ for each component, and the nodes should all move to a ‘Ready’ state.  The nodes will take a minute or two to transition from ‘NotReady’ to ‘Ready’, so be patient. Once it’s up we need to work on setting up our network routing. As I mentioned above, we need to route the networks Kubernetes allocated for the pods to the appropriate hosts. To find out which host got which network we can use this command, which is pointed out in the ‘Kubernetes the hard way’ documentation…
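
The listing there prints each node’s address alongside the pod /24 it was allocated; it looks roughly like this…

# Print each node's InternalIP and the pod CIDR Kubernetes assigned to it
kubectl get nodes \
  --output=jsonpath='{range .items[*]}{.status.addresses[?(@.type=="InternalIP")].address} {.spec.podCIDR} {"\n"}{end}'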

Slick – ok. So now it’s up to us to make sure that each of those /24s gets routed to the corresponding host. On my gateway, I want the routing to look like this…
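
Purely as an illustration: if your L3 gateway happened to be a Linux box, the static routes would look roughly like the lines below. Every address here is a placeholder; use the node IP and podCIDR pairs returned by the command above, and the equivalent syntax on whatever router or firewall you actually use…

# Route each node's pod /24 to that node's host IP (placeholder values)
sudo ip route add 10.100.0.0/24 via 10.20.30.71
sudo ip route add 10.100.1.0/24 via 10.20.30.72
sudo ip route add 10.100.2.0/24 via 10.20.30.73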

Make sure you add the routes on your L3 gateway before you move on.  Once routing is in place we can deploy our first pods.  The first pod we’re going to deploy is kube-dns, which is used by the cluster for name resolution.  The Ansible role already took care of placing the pod definition files for kube-dns on the controller for you; you just need to tell the cluster to run them…
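
There are two files to hand to kubectl: one for the service and one for the pod. The paths below are placeholders; point them at wherever the role dropped the kube-dns definitions on your master…

# Create the kube-dns service and pod from the definitions the role staged
kubectl create -f /path/to/kubedns-service.yaml
kubectl create -f /path/to/kubedns-pod.yaml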

As you can see there is both a service and a pod definition you need to install by passing the YAML file to the kubectl command with the ‘-f’ parameter. Once you’ve done that we can check to see the status of both the service and the pod…
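
kube-dns lives in the kube-system namespace, so that’s where to look…

# Both the pod(s) and the service should show up in kube-system
kubectl get pods --namespace=kube-system
kubectl get services --namespace=kube-system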

If all went well you should see that each pod is running all three containers (denoted by the 3/3) and that the service is present.  At this point you’ve got yourself your very own Kubernetes cluster.  In the next post we’ll walk through deploying an example pod and step through how Kubernetes does networking.  Stay tuned!

Using CNI with Docker

In our last post we introduced ourselves to CNI (if you haven’t read that yet, I suggest you start there) as we worked through a simple example of connecting a network namespace to a bridge.  CNI managed both the creation of the bridge as well as connecting the namespace to the bridge using a VETH pair.  In this post we’ll explore how to do this same thing but with a container created by Docker.  As you’ll see, the process is largely the same.  Let’s jump right in.

This post assumes that you followed the steps in the first post (Understanding CNI) and have a ‘cni’ directory (~/cni) that contains the CNI binaries.  If you don’t have that, head back to the first post and follow the steps to download the pre-compiled CNI binaries.  It also assumes that you have a default Docker installation.  In my case, I’m using Docker version 1.12.

The first thing we need to do is to create a Docker container.  To do that we’ll run this command…
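
The exact image doesn’t matter much; anything that runs a small web service and has basic networking tools inside will do for the tests later in this post. The container name and image below are just stand-ins (I’m using busybox’s built-in httpd as the placeholder service)…

# Start a container with no network attached; busybox httpd is used here
# purely as a stand-in web service on port 80
sudo docker run --name cnitest --net=none -d busybox httpd -f -p 80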

Notice that when we ran the command we told Docker to use a network of ‘none’. When Docker is told to do this, it will create the network namespace for the container, but it will not attempt to connect the container’s network namespace to anything else.  If we look in the container we should see that it only has a loopback interface…
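
Peeking inside the container (using the stand-in ‘cnitest’ name from above) shows just the loopback…

# Only 'lo' should be listed at this point
sudo docker exec cnitest ip addr show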

So now we want to use CNI to connect the container to something. Before we do that we need some information. Namely, we need a network definition for CNI to consume as well as some information about the container itself.  For the network definition, we’ll create a new definition and specify a few more options to see how they work.  Create the configuration with this command (I assume you’re creating this file in ~/cni)…
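
Here’s a sketch of what that definition might look like, saved as something like ~/cni/mybridge2.conf. The file name, network name, and rangeEnd value are my own choices; the addresses line up with the results we’ll look at further down (a 10.15.30.0/24 subnet, a bridge/gateway of 10.15.30.99, and an extra route for 1.1.1.1/32)…

cat > ~/cni/mybridge2.conf <<"EOF"
{
    "name": "mybridge2",
    "type": "bridge",
    "bridge": "cni_bridge1",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.15.30.0/24",
        "rangeStart": "10.15.30.100",
        "rangeEnd": "10.15.30.200",
        "gateway": "10.15.30.99",
        "routes": [
            { "dst": "0.0.0.0/0" },
            { "dst": "1.1.1.1/32", "gw": "10.15.30.1" }
        ]
    }
}
EOF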

In addition to the parameters we saw in the last post, we’ve also added the following…

  • rangeStart: Defines where CNI should start allocating container IPs from within the defined subnet
  • rangeEnd: Defines the end of the range CNI can use to allocate container IPs
  • gateway: Defines the gateway IP for the subnet.  Previously we hadn’t defined this, so CNI picked the first IP in the subnet for use as the bridge interface.

One thing you’ll notice that’s lacking in this configuration is anything related to DNS.  Hold that thought for now (it’s the topic of the next post).

So now that the network is defined, we need some info about the container. Specifically, we need the path to the container’s network namespace as well as the container ID. To get that, we can grep the output of the ‘docker inspect’ command…
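
With the stand-in ‘cnitest’ container from earlier, that looks like this…

# Pull out the container ID and the network namespace path (SandboxKey)
sudo docker inspect cnitest | grep -E 'SandboxKey|Id'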

In this example I used the ‘-E’ flag with grep to enable extended pattern matching, since I’m looking for both the container ID as well as the SandboxKey. In the world of Docker, the network namespace file location is referred to as the ‘SandboxKey’, and the ‘Id’ is the container ID assigned by Docker.  So now that we have that info, we can build the environmental variables that we’re going to use with the CNI plugin.  Those would be…

  • CNI_COMMAND=ADD
  • CNI_CONTAINERID=1018026ebc02fa0cbf2be35325f4833ec1086cf6364c7b2cf17d80255d7d4a27
  • CNI_NETNS=/var/run/docker/netns/2e4813b1a912
  • CNI_IFNAME=eth0
  • CNI_PATH=`pwd`

Put that all together in a command and you end up with this…
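
Assuming you’re sitting in the ~/cni directory (where the bridge plugin binary lives) and that the network definition was saved as mybridge2.conf as sketched above, the assembled command looks roughly like this…

# Set the CNI variables inline and feed the network definition to the
# bridge plugin on stdin
sudo CNI_COMMAND=ADD \
CNI_CONTAINERID=1018026ebc02fa0cbf2be35325f4833ec1086cf6364c7b2cf17d80255d7d4a27 \
CNI_NETNS=/var/run/docker/netns/2e4813b1a912 \
CNI_IFNAME=eth0 \
CNI_PATH=`pwd` \
./bridge < mybridge2.conf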

The only thing left to do at this point is to run the plugin…

As we saw in the last post, the plugin executes and then provides us some return JSON about what it did.  So let’s look at our host and container again to see what we have…
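
Listing the host’s interfaces shows the new bridge and its VETH pair alongside what was already there…

# Expect to see docker0, cni_bridge0 (from the last post), the new
# cni_bridge1 holding the 'gateway' IP, and the associated VETH ends
ip addr show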

From a host perspective, we have quite a few interfaces now. Since we picked up right where we left off with the last post, we still have the cni_bridge0 interface along with its associated VETH pair. We now also have the cni_bridge1 bridge that we just created along with its associated VETH pair interface.  You can see that the cni_bridge1 interface has the IP address we defined as the ‘gateway’ in the network configuration.  You’ll also notice that the docker0 bridge is there since it was created by default when Docker was installed.

So now what about our container?  Let’s look…
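
Again using the stand-in container name…

# Interfaces and routing table inside the container
sudo docker exec cnitest ip addr show
sudo docker exec cnitest ip route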

As you can see, the container has the network configuration we’d expect…

  • It has an IP address within the defined range (10.15.30.100)
  • Its interface is named ‘eth0’
  • It has a default route pointing at the gateway IP address of 10.15.30.99
  • It has an additional route for 1.1.1.1/32 pointing at 10.15.30.1

And as a final quick test we can attempt to access the service in the container from the host…
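
With the stand-in image’s web service listening on port 80, a simple curl against the container’s CNI-assigned address does the trick…

# Reach the container's service from the host across the new bridge
curl http://10.15.30.100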

So as you can see – connecting a Docker container wasn’t much different than connecting a network namespace. In fact, the process was identical; we just had to account for where Docker stores its network namespace definitions. In our next post we’re going to talk about DNS-related settings for a container and how those play into CNI.

As many of you have noticed, I’ve been neglecting the blog for the past few months.  The main reason for this is that the majority of my free time was being spent generating content for a new book.  I’m pleased to announce that the book, Docker Networking Cookbook, has now been released!

Here’s a brief description of the book…

“Networking functionality in Docker has changed considerably since its first release, evolving to offer a rich set of built-in networking features, as well as an extensible plugin model allowing for a wide variety of networking functionality. This book explores Docker networking capabilities from end to end. Begin by examining the building blocks used by Docker to implement fundamental container networking before learning how to consume built-in networking constructs as well as custom networks you create on your own. Next, explore common third-party networking plugins, including detailed information on how these plugins inter-operate with the Docker engine. Consider available options for securing container networks, as well as a process for troubleshooting container connectivity.  Finally, examine advanced Docker networking functions and their relevant use cases, tying together everything you need to succeed with your own projects.”

The book is available from Packt and I believe Amazon has it as well.  If you happen to buy a copy I would greatly appreciate it if you would send me any and all feedback you have.  This is my first attempt at writing a book so any feedback and critiques you can share would be really great.

A big thank you to all of the folks at Packt that made this possible and worked with me through the editing and publishing process.  I’d also like to thank the technical reviewer Francisco Souza for his review. 

Now that the book is published I look forward to spending my free time blogging again.  Thanks for hanging in there!
