Using CNI with Docker

In our last post we introduced ourselves to CNI (if you haven’t read that yet, I suggest you start there) as we worked through a simple example of connecting a network namespace to a bridge.  CNI managed both the creation of the bridge as well as connecting the namespace to the bridge using a VETH pair.  In this post we’ll explore how to do this same thing but with a container created by Docker.  As you’ll see, the process is largely the same.  Let’s jump right in.

This post assumes that you followed the steps in the first post (Understanding CNI) and have a ‘cni’ directory (~/cni) that contains the CNI binaries.  If you don’t have that – head back to the first post and follow the steps to download the pre-compiled CNI binaries.  It also assumes that you have a default Docker installation.  In my case, I’m using Docker version 1.12.

The first thing we need to do is to create a Docker container.  To do that we’ll run this command…
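
A rough sketch of that command is below.  The container name (‘cnitest’) and the image name are placeholders of my own choosing; the important part is the ‘--net=none’ flag, plus the assumption that the image runs some long-lived service (a simple web server, for instance) that we can test against at the end of the post…

    sudo docker run --name cnitest --net=none -d <your_web_server_image>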

Notice that when we ran the command we told Docker to use a network of ‘none’.  When Docker is told to do this, it will create the network namespace for the container, but it will not attempt to connect the container’s network namespace to anything else.  If we look in the container we should see that it only has a loopback interface…
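
For example, assuming the placeholder container name ‘cnitest’ from above and an image that includes the iproute2 tools (use ‘ifconfig’ instead if that’s what your image ships with)…

    sudo docker exec cnitest ip addr show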

So now we want to use CNI to connect the container to something. Before we do that we need some information. Namely, we need a network definition for CNI to consume as well as some information about the container itself.  For the network definition, we’ll create a new definition and specify a few more options to see how they work.  Create the configuration with this command (I assume you’re creating this file in ~/cni)…
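
Here’s a sketch of what that configuration could look like.  The bridge name (cni_bridge1), the gateway (10.15.30.99), the rangeStart (10.15.30.100), and the 1.1.1.1/32 route via 10.15.30.1 line up with the results we’ll see later in this post; the subnet, rangeEnd, cniVersion, network name, and file name are assumptions on my part that you can adjust…

    cat > ~/cni/mybridge2.conf <<"EOF"
    {
        "cniVersion": "0.2.0",
        "name": "mybridge2",
        "type": "bridge",
        "bridge": "cni_bridge1",
        "isGateway": true,
        "ipMasq": true,
        "ipam": {
            "type": "host-local",
            "subnet": "10.15.30.0/24",
            "rangeStart": "10.15.30.100",
            "rangeEnd": "10.15.30.200",
            "gateway": "10.15.30.99",
            "routes": [
                { "dst": "0.0.0.0/0" },
                { "dst": "1.1.1.1/32", "gw": "10.15.30.1" }
            ]
        }
    }
    EOF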

In addition to the parameters we saw in the last post, we’ve also added the following…

  • rangeStart: Defines where CNI should start allocating container IPs from within the defined subnet
  • rangeEnd: Defines the end of the range CNI can use to allocate container IPs
  • gateway: Defines the IP address that should be assigned to the bridge interface, which acts as the default gateway for the containers.  Previously we hadn’t defined this, so CNI picked the first IP in the subnet for use on the bridge interface.

One thing you’ll notice that’s lacking in this configuration is anything related to DNS.  Hold that thought for now (it’s the topic of the next post).

So now that the network is defined we need some info about the container. Specifically we need the path to the container network namespace as well as the container ID. To get that info, we can grep the info from the ‘docker inspect’ command…
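
Something like this does the trick (again using the placeholder container name ‘cnitest’)…

    sudo docker inspect cnitest | grep -E '"Id"|SandboxKey'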

In this example I used the ‘-E’ flag with grep to enable extended regular expressions, since I’m matching on both the container ID as well as the SandboxKey in a single pass.  In the world of Docker, the network namespace file location is referred to as the ‘SandboxKey’ and the ‘Id’ is the container ID assigned by Docker.  So now that we have that info, we can build the environment variables that we’re going to use with the CNI plugin.  Those would be…

  • CNI_COMMAND=ADD
  • CNI_CONTAINERID=1018026ebc02fa0cbf2be35325f4833ec1086cf6364c7b2cf17d80255d7d4a27
  • CNI_NETNS=/var/run/docker/netns/2e4813b1a912
  • CNI_IFNAME=eth0
  • CNI_PATH=`pwd`

Put that all together in a command and you end up with this…
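
Here’s a sketch of that combined command, assuming you’re running it from the ~/cni directory (so that `pwd` points at the CNI binaries) and that the configuration file is the mybridge2.conf we created above…

    sudo CNI_COMMAND=ADD \
         CNI_CONTAINERID=1018026ebc02fa0cbf2be35325f4833ec1086cf6364c7b2cf17d80255d7d4a27 \
         CNI_NETNS=/var/run/docker/netns/2e4813b1a912 \
         CNI_IFNAME=eth0 \
         CNI_PATH=`pwd` \
         ./bridge < mybridge2.conf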

The only thing left to do at this point is to run the plugin…
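
If everything works, the plugin should print a blob of JSON roughly like the following.  This is only a sketch built from the values used in this walkthrough; the exact fields depend on the CNI version you downloaded…

    {
        "ip4": {
            "ip": "10.15.30.100/24",
            "gateway": "10.15.30.99",
            "routes": [
                { "dst": "0.0.0.0/0" },
                { "dst": "1.1.1.1/32", "gw": "10.15.30.1" }
            ]
        },
        "dns": {}
    }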

As we saw in the last post, the plugin executes and then provides us some return JSON about what it did.  So let’s look at our host and container again to see what we have…
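
On the host side, ‘ip addr’ (or ‘ifconfig’) will list the interfaces, and if you have the bridge-utils package installed, ‘brctl show’ will show which VETH interfaces are attached to which bridge…

    ip addr show
    brctl show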

From a host perspective, we have quite a few interfaces now.  Since we picked up right where we left off with the last post, we still have the cni_bridge0 interface along with its associated VETH pair.  We now also have the cni_bridge1 bridge that we just created along with its associated VETH pair interface.  You can see that the cni_bridge1 interface has the IP address we defined as the ‘gateway’ as part of the network configuration.  You’ll also notice that the docker0 bridge is there since it was created by default when Docker was installed.

So now what about our container?  Let’s look…
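
Again assuming the placeholder container name ‘cnitest’ and an image that ships with the iproute2 tools…

    sudo docker exec cnitest ip addr show
    sudo docker exec cnitest ip route show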

As you can see, the container has the network configuration we’d expect…

  • It has an IP address within the defined range (10.15.30.100)
  • Its interface is named ‘eth0’
  • It has a default route pointing at the gateway IP address of 10.15.30.99
  • It has an additional route for 1.1.1.1/32 pointing at 10.15.30.1

And as a final quick test we can attempt to access the service in the container from the host…
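
For instance, if the image you started at the beginning of the post runs a web server on port 80 (an assumption on my part, so adjust for whatever service your container actually runs), this should now work from the host…

    curl http://10.15.30.100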

So as you can see – connecting a Docker container wasn’t much different than connecting a network namespace.  In fact – the process was identical; we just had to account for where Docker stores its network namespace definitions.  In our next post we’re going to talk about DNS-related settings for a container and how those play into CNI.
