If you’ve been paying attention to the discussions around container networking you’ve likely heard the acronym CNI being used. CNI stands for Container Networking Interface and its goal is to create a generic, plugin-based networking solution for containers. CNI is defined by a spec (read it now – it’s not very long) that has some interesting language in it. Here are a couple of points I found interesting during my first read-through…
- The spec defines a container as being a Linux network namespace. We should be comfortable with that definition as container runtimes like Docker create a new network namespace for each container.
- Network definitions for CNI are stored as JSON files.
- The network definitions are streamed to the plugin through STDIN. That is – there are no configuration files sitting on the host for the network configuration.
- Other arguments are passed to the plugin via environment variables.
- A CNI plugin is implemented as an executable.
- The CNI plugin is responsible for wiring up the container. That is – it needs to do all the work to get the container on the network. In Docker, this would include connecting the container network namespace back to the host somehow.
- The CNI plugin is responsible for IPAM which includes IP address assignment and installing any required routes.
If you’re used to dealing with Docker this doesn’t quite seem to fit the mold. It’s apparent to me that the CNI plugin is responsible for the network end of the container, but it wasn’t initially clear to me how that was actually implemented. So the next question might be – can I use CNI with Docker? The answer is yes, but not as an all-in-one solution. Docker has its own network plugin system called CNM. CNM allows plugins to interact directly with Docker. A CNM plugin can be registered to Docker and used directly from it. That is, you can use Docker to run containers and directly assign their network to the CNM-registered plugin. This works well, but because Docker has CNM, it doesn’t directly integrate with CNI (as far as I can tell). That does not mean, however, that you can’t use CNI with Docker. Recall from the sixth bullet above that the plugin is responsible for wiring up the container. So it seems possible that Docker could be the container runtime – but not handle the networking end of things (more on this in a future post).
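To make that idea a bit more concrete, here’s a rough sketch of what it might look like to use Docker purely as the runtime and let a CNI plugin do the wiring. This isn’t something we need for the rest of this post – the container name is made up, and it assumes the CNI plugins and the ‘mybridge.conf’ file we build further down are already in place…

# Start a container with no networking, then point the CNI bridge plugin at the namespace Docker created
sudo docker run --name cnitest --net=none -d busybox sleep 3600
pid=$(sudo docker inspect -f '{{ .State.Pid }}' cnitest)
sudo CNI_COMMAND=ADD CNI_CONTAINERID=cnitest CNI_NETNS=/proc/$pid/ns/net CNI_IFNAME=eth0 CNI_PATH=`pwd` ./bridge <mybridge.conf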
At this point – I think it’s fair to start looking at what CNI actually does to try to get a better feel for how it fits into the picture. Let’s look at a quick example of using one of the plugins.
Let’s start by downloading the pre-built CNI binaries…
user@ubuntu-1:~$ mkdir cni
user@ubuntu-1:~$ cd cni
user@ubuntu-1:~/cni$ curl -O -L https://github.com/containernetworking/cni/releases/download/v0.4.0/cni-amd64-v0.4.0.tgz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   597    0   597    0     0   1379      0 --:--:-- --:--:-- --:--:--  1381
100 15.3M  100 15.3M    0     0  4606k      0  0:00:03  0:00:03 --:--:-- 5597k
user@ubuntu-1:~/cni$
user@ubuntu-1:~/cni$ tar -xzvf cni-amd64-v0.4.0.tgz
./
./macvlan
./dhcp
./loopback
./ptp
./ipvlan
./bridge
./tuning
./noop
./host-local
./cnitool
./flannel
user@ubuntu-1:~/cni$
user@ubuntu-1:~/cni$ ls
bridge  cni-amd64-v0.4.0.tgz  cnitool  dhcp  flannel  host-local  ipvlan  loopback  macvlan  noop  ptp  tuning
Ok – let’s make sure we understand what we just did there. We first created a directory called ‘cni’ to store the binaries in. We then used the curl command to download the CNI release bundle. When using curl to download a file we need to pass the ‘-O’ parameter to tell curl to save the output to a file rather than printing it to the screen. We also need to pass the ‘-L’ parameter in this case to allow curl to follow redirects, since the URL we’re downloading from actually redirects us elsewhere. Once downloaded, we unpack the archive using the tar command.
After all of that we can see that we have a few new files. For right now, let’s focus on the ‘bridge’ file, which is the bridge plugin. Bridge is one of the plugins that ships with CNI. Its job, as you might have guessed, is to attach a container to a bridge interface. So now that we have the plugins, how do we actually use them? One of the earlier bullet points mentioned that the network configuration is streamed into the plugin through STDIN. So we know we need to use STDIN to get information about the network into the plugin, but that’s not all the info the plugin needs. The plugin also needs additional information such as the action you wish to perform, the namespace you wish to work with, and so on. That information is passed to the plugin via environment variables. Confused? No worries – let’s walk through an example. Let’s first define a network configuration file we wish to use for our bridge…
cat > mybridge.conf <<"EOF"
{
    "cniVersion": "0.2.0",
    "name": "mybridge",
    "type": "bridge",
    "bridge": "cni_bridge0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.15.20.0/24",
        "routes": [
            { "dst": "0.0.0.0/0" },
            { "dst": "1.1.1.1/32", "gw": "10.15.20.1" }
        ]
    }
}
EOF
Above we create a JSON definition for our bridge network. There are some CNI generic definitions listed above as well as some specific to the bridge plugin itself. Let’s walk through them one at a time.
CNI generic parameters
- cniVersion: The version of the CNI spec that the definition works with
- name: The network name
- type: The name of the plugin you wish to use. In this case, the actual name of the plugin executable
- args: Optional additional parameters
- ipMasq: Configure outbound masquerade (source NAT) for this network
- ipam:
- type: The name of the IPAM plugin executable
- subnet: The subnet to allocate out of (this is actually part of the IPAM plugin)
- routes:
- dst: The subnet you wish to reach
- gw: The IP address of the next hop to reach the dst. If not specified the default gateway for the subnet is assumed
- dns:
- nameservers: A list of nameservers you wish to use with this network
- domain: The search domain to use for DNS requests
- search: A list of search domains
- options: A list of options to be passed to the receiver
Plugin (bridge) specific parameters
- isGateway: If true, assigns an IP address to the bridge so containers connected to it may use it as a gateway.
- isDefaultGateway: If true, sets the assigned IP address as the default route.
- forceAddress: Tells the plugin to allocate a new IP address if the previous value has changed.
- mtu: Define the MTU of the bridge.
- hairpinMode: Set hairpin mode for the interfaces on the bridge.
The items we’re using in this example are the ones that appear in our configuration file above. You should play around with the others to get a feel for how they work, but most are fairly straightforward. You’ll also note that one of the items (the subnet, for instance) is actually part of the IPAM plugin. We aren’t going to cover the IPAM plugins in this post (we will later!) but for now just know that we’re using multiple CNI plugins to make this work.
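As an example of what those optional parameters might look like in practice, here’s a variation of our definition that exercises a few of them. This is just an illustrative sketch – the MTU, DNS server, and search domain are made-up values, and we won’t use this file anywhere else in the post (we’ll stick with mybridge.conf for the rest of the walkthrough)…

cat > mybridge_full.conf <<"EOF"
{
    "cniVersion": "0.2.0",
    "name": "mybridge",
    "type": "bridge",
    "bridge": "cni_bridge0",
    "isGateway": true,
    "ipMasq": true,
    "mtu": 1400,
    "hairpinMode": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.15.20.0/24",
        "routes": [
            { "dst": "0.0.0.0/0" }
        ]
    },
    "dns": {
        "nameservers": [ "8.8.8.8" ],
        "search": [ "example.local" ]
    }
}
EOF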
Ok – so now that we have our network definition, we want to run it. However – at this point we’ve only defined characteristics of the bridge. The point of CNI is to network containers, so we need to tell the plugin about the container we want to work with as well. That information is passed to the plugin via environment variables. So our command might look like this…
sudo CNI_COMMAND=ADD CNI_CONTAINERID=1234567890 CNI_NETNS=/var/run/netns/1234567890 CNI_IFNAME=eth12 CNI_PATH=`pwd` ./bridge <mybridge.conf
Let’s walk through this. I think most of you are probably familiar with setting environment variables at the shell or system level. In addition to that, you can also pass them directly to a command. When you do this, they will be used only by the executable you are calling and only during that execution. So in this case, the following variables will be passed to the bridge executable (there’s a quick illustration of this inline syntax right after the list)…
- CNI_COMMAND=ADD – We are telling CNI that we want to add a connection
- CNI_CONTAINERID=1234567890 – We’re telling CNI that the network namespace we want to work with is called ‘1234567890’ (more on this below)
- CNI_NETNS=/var/run/netns/1234567890 – The path to the namespace in question
- CNI_IFNAME=eth12 – The name of the interface we wish to use on the container side of the connection
- CNI_PATH=`pwd` – We always need to tell CNI where the plugin executables live. In this case, since we’re already in the ‘cni’ directory, we just have the variable reference pwd (present working directory). Note that the backticks around pwd are required for it to evaluate correctly.
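If the inline variable syntax looks unfamiliar, here’s a quick illustration (the variable name is made up)…

MYVAR=hello sh -c 'echo $MYVAR'   # prints 'hello' – MYVAR exists only for this one command
echo $MYVAR                       # prints an empty line – the variable was never set in our shell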
Once the variables you wish to pass to the executable are defined, we then pick the plugin we want to use – in this case, bridge. Lastly – we feed the network configuration file into the plugin via STDIN. To do this, just use the input redirection operator ‘<‘. Before we run the command, we need to create the network namespace that the plugin is going to work with. Typically the container runtime would handle this, but since we’re keeping things simple this first go around we’ll just create one ourselves…
sudo ip netns add 1234567890
Once that’s created let’s run the plugin…
user@ubuntu-1:~/cni$ sudo CNI_COMMAND=ADD CNI_CONTAINERID=1234567890 CNI_NETNS=/var/run/netns/1234567890 CNI_IFNAME=eth12 CNI_PATH=`pwd` ./bridge <mybridge.conf
2017/02/17 09:46:01 Error retriving last reserved ip: Failed to retrieve last reserved ip: open /var/lib/cni/networks/mybridge/last_reserved_ip: no such file or directory
{
    "ip4": {
        "ip": "10.15.20.2/24",
        "gateway": "10.15.20.1",
        "routes": [
            { "dst": "0.0.0.0/0" },
            { "dst": "1.1.1.1/32", "gw": "10.15.20.1" }
        ]
    },
    "dns": {}
}user@ubuntu-1:~/cni$
Running the command returns a couple of things. First – it returns an error since the IPAM driver can’t find the file it uses to store IP information locally. If we ran this again for a different namespace, we wouldn’t get this error since the file is created the first time we run the plugin. The second thing we get is a JSON return indicating the relevant IP configuration that was configured by the plugin. In this case, the bridge itself should have received the IP address of 10.15.20.1/24 and the namespace interface would have received 10.15.20.2/24. It also added the default route and the 1.1.1.1/32 route that we defined in the network configuration JSON. So let’s look and see what it did…
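As an aside – if we created a second namespace and ran the plugin again with the same configuration, the host-local IPAM plugin should hand out the next free address in the subnet (10.15.20.3/24). Something like the sketch below would do it – the namespace name is made up, and if you run it while following along you’ll end up with a second veth pair that the rest of this post doesn’t show…

sudo ip netns add myotherns
sudo CNI_COMMAND=ADD CNI_CONTAINERID=myotherns CNI_NETNS=/var/run/netns/myotherns CNI_IFNAME=eth12 CNI_PATH=`pwd` ./bridge <mybridge.conf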
user@ubuntu-1:~/cni$ ifconfig
cni_bridge0 Link encap:Ethernet  HWaddr 0a:58:0a:0f:14:01
          inet addr:10.15.20.1  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::3cd5:6cff:fef9:9066/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:536 (536.0 B)  TX bytes:648 (648.0 B)

ens32     Link encap:Ethernet  HWaddr 00:0c:29:3e:49:51
          inet addr:10.20.30.71  Bcast:10.20.30.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe3e:4951/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:17431176 errors:0 dropped:1240 overruns:0 frame:0
          TX packets:14162993 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2566654572 (2.5 GB)  TX bytes:9257712049 (9.2 GB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:45887226 errors:0 dropped:0 overruns:0 frame:0
          TX packets:45887226 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:21016155576 (21.0 GB)  TX bytes:21016155576 (21.0 GB)

veth1fbfe91d Link encap:Ethernet  HWaddr 26:68:37:93:26:4a
          inet6 addr: fe80::2468:37ff:fe93:264a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:16 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 B)  TX bytes:1296 (1.2 KB)

user@ubuntu-1:~/cni$
Notice we now have a bridge interface called ‘cni_bridge0’ which has the IP address we expected to see. Also note at the bottom we have one side of a veth pair. Recall that we also asked the plugin to enable masquerading. If we look at our host’s iptables rules we’ll see the masquerade and accept rules…
user@ubuntu-1:~/cni$ sudo iptables-save | grep mybridge
-A POSTROUTING -s 10.15.20.0/24 -m comment --comment "name: \"mybridge\" id: \"1234567890\"" -j CNI-26633426ea992aa1f0477097
-A CNI-26633426ea992aa1f0477097 -d 10.15.20.0/24 -m comment --comment "name: \"mybridge\" id: \"1234567890\"" -j ACCEPT
-A CNI-26633426ea992aa1f0477097 ! -d 224.0.0.0/4 -m comment --comment "name: \"mybridge\" id: \"1234567890\"" -j MASQUERADE
user@ubuntu-1:~/cni$
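As another aside – if you’re wondering where the other end of that veth pair lives, it’s the host-side interface we saw in the ifconfig output above (veth1fbfe91d in my case), which the plugin attached to cni_bridge0. You can confirm the pairing by matching interface indexes…

sudo ip netns exec 1234567890 ip link show eth12   # the 'eth12@ifX' suffix shows the peer's interface index on the host
ip link show                                       # the host interface with index X is the other end of the veth pair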
Let’s now look in the network namespace…
user@ubuntu-1:~/cni$ sudo ip netns exec 1234567890 ifconfig
eth12     Link encap:Ethernet  HWaddr 0a:58:0a:0f:14:02
          inet addr:10.15.20.2  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::d861:8ff:fe46:33ac/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:16 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1296 (1.2 KB)  TX bytes:648 (648.0 B)

user@ubuntu-1:~/cni$ sudo ip netns exec 1234567890 ip route
default via 10.15.20.1 dev eth12
1.1.1.1 via 10.15.20.1 dev eth12
10.15.20.0/24 dev eth12  proto kernel  scope link  src 10.15.20.2
user@ubuntu-1:~/cni$
Our namespace is also configured as we expected. The namespace has an interface named ‘eth12’ with an IP address of 10.15.20.2/24 and the routes we defined are also there. So it worked!
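If you want to prove that to yourself, a quick sanity check is to ping from inside the namespace – first the bridge we created, and then (assuming the host has IP forwarding enabled and outbound connectivity) something off-box to exercise the masquerade rule…

sudo ip netns exec 1234567890 ping -c 2 10.15.20.1   # the bridge/gateway address the plugin assigned
sudo ip netns exec 1234567890 ping -c 2 8.8.8.8      # off-box traffic gets source NAT'd by the iptables rule we saw earlier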
This was a simple example but I think it highlights how CNI is implemented and works. Next week we’ll dig further into the CNI plugins as we examine an example of how to use CNI with a container runtime.
Before I wrap up – I do want to comment briefly on one item that I initially got hung up on, and that’s how the plugin is actually called. In our example – we’re calling a specific plugin directly. As such – I was initially confused as to why you needed to specify the location of the plugins with ‘CNI_PATH’. After all – we’re calling a plugin directly, so obviously we already know where it is. The reason is that this is not how CNI is typically used. Typically – you have another application or system that is reading the CNI network definitions and running them. In those cases, the CNI_PATH will already be defined within the system. Since the network configuration file defines which plugin to use (in our case bridge), all the system needs to know is where to find the plugins. To find them, it references the CNI_PATH variable. We’ll talk more about this in future posts where we discuss what other applications use CNI (cough, Kubernetes, cough) so for now just know that the example above shows how CNI works, but does not show a typical use case outside of testing.
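One last practical note for your own lab cleanup – the spec also defines a DEL command that tells the plugin to undo an ADD. Running it with the same variables removes the container-side interface and releases the IP back to IPAM (the bridge itself sticks around). A sketch of cleaning up our example would look like this…

sudo CNI_COMMAND=DEL CNI_CONTAINERID=1234567890 CNI_NETNS=/var/run/netns/1234567890 CNI_IFNAME=eth12 CNI_PATH=`pwd` ./bridge <mybridge.conf
sudo ip netns del 1234567890   # remove the namespace we created by hand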
I always like the way you explain – very meticulously and focussing on important aspects. Keep it up.
I tried walking through your steps below and I get the following error when I run the plugin:
plugins$ sudo CNI_COMMAND=ADD CNI_CONTAINERID=1234567890 CNI_NETNS=/var/run/netns/1234567890 CNI_IFNAME=eth12 CNI_PATH=`pwd` ./bridge <mybridge.conf
./bridge: 2: ./bridge: Syntax error: "(" unexpected
Have you run into this before? I'm running CNI plugin 0.7.0, and I updated CNI version to 0.3.1 above. I'm running Xubuntu on Hyper-V (Windows 10 is the host).
Distributor ID: Ubuntu
Description: Ubuntu 16.04.4 LTS
Release: 16.04
Codename: xenial
Please let me know where I can find the future post for the above topic; I need to understand the use cases of CNI. Thanks for the above information 😉
Great post! I have a question though – where would the other end of the veth pair be?
Great introduction to the basics. Wanted to also know how the plugin executable tears down the newly created network and the bridge interface.
Thank you for the article series — it’s a great introduction to CNI.
One small thing to add regarding the need for the `CNI_PATH` environment variable: even though we run the plugin binary directly, CNI_PATH is still useful to the plugin itself, as a plugin may depend on another plugin to function properly. For example, the bridge plugin may need to run the host-local plugin depending on the given configuration file. If CNI_PATH were set incorrectly, the host-local plugin binary might not be found, and an error would be returned.