In this post, I’d like to cover some of the new Docker network features. Docker 1.9 saw the release of user defined networks and the most recent version 1.10 added some additional features. In this post, we’ll cover the basics of the new network model as well as show some examples of what these new features provide.
So what’s new? Well – lots. To start with, let’s take a look at a Docker host running the newest version of Docker (1.10).
Note: I’m running this demo on CentOS 7 boxes. The default repository had version 1.8 so I had to update to the latest by using the update method shown in a previous post here. Before you continue, verify that ‘docker version’ shows you on the correct release.
You’ll notice that the Docker CLI now provides a new option to interact with the network through the ‘docker network’ command…
Alright – so let’s start with the basics and see what’s already defined…
By default a base Docker installation has these three networks defined. The networks are permanent and cannot be modified. Taking a closer look, you’ll likely notice that these predefined networks are the same network models we had in earlier versions of Docker. You could start a container in bridge mode by default, in host mode by specifying ‘--net=host’, and without an interface by specifying ‘--net=none’. To level set – everything that was there before is still here. To make sure everything still works as expected, let’s run through building a container under each network type.
Note: These exact commands were taken out of my earlier series of Docker networking 101 posts to show that the command syntax has not changed with the addition of multi-host networking. Those posts can be found here.
docker run -d --name web1 --net=host jonlangemak/docker:webinstance1
Executing the above command will spin up a test web server with the container’s network stack mapped directly to that of the host. Once the container is running, we should be able to access the web server through the Docker host’s IP address…
Note: You either need to disable firewalld on the Docker host or add the appropriate rules for this to work.
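If you’d rather keep firewalld running, a sketch like this opens the ports used in this post. The port numbers are the mappings from my examples, so adjust to match yours, and the command is guarded so it simply reports on a box without firewalld…

```shell
# Open the host ports used in this post when firewalld is running;
# on a box without firewalld this just reports and moves on.
if command -v firewall-cmd >/dev/null 2>&1; then
  firewall-cmd --permanent --add-port=80/tcp
  firewall-cmd --permanent --add-port=8081-8082/tcp
  firewall-cmd --reload
  msg="firewalld rules added"
else
  msg="firewalld not present; nothing to do"
fi
echo "$msg"
```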
docker run -d --name web1 -p 8081:80 jonlangemak/docker:webinstance1
docker run -d --name web2 -p 8082:80 jonlangemak/docker:webinstance2
Here we’re running the default bridge mode and mapping ports into the container. Running those two containers should give you the web server you’re looking for on ports 8081 and 8082…
In addition, if we connect to the containers directly, we can see that communication between the two containers occurs directly across the docker0 bridge, never leaving the host…
Here we can see that web1 has an ARP entry for web2. Looking at web2, we can see that its interface MAC address matches that entry…
docker run -d --name web1 --net=none jonlangemak/docker:webinstance1
In this example we can see that the container doesn’t receive any interface at all…
As you can see, all three modes work just as they had in previous versions of Docker. So now that we’ve covered the existing network functions, let’s talk about the new user defined networks…
User defined bridge networks
The easiest user defined network to use is the bridge. Defining a new bridge is pretty easy. Here’s a quick example…
docker network create --driver=bridge \
--subnet=192.168.127.0/24 --gateway=192.168.127.1 \
--ip-range=192.168.127.128/25 testbridge
Here I create a new bridge named ‘testbridge’ and provide the following attributes…
Gateway – In this case I set it to 192.168.127.1, which will be the IP of the bridge created on the Docker host. We can see this by looking at the Docker host’s interfaces…
Subnet – I specified this as 192.168.127.0/24. We can see in the output above that this is the CIDR associated with the bridge.
IP-range – If you wish to define a smaller subnet from which Docker can allocate container IPs, you can use this flag. The subnet you specify here must exist within the bridge subnet itself. In my case, I specified the second half of the defined subnet. When I start a container on this bridge, it will get an IP out of that range…
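If you want to sanity check that math without touching Docker at all, a couple lines of shell arithmetic confirm the range. The .128 starting point and /25 prefix are the values from my example…

```shell
# Pure-shell check of the --ip-range math: 192.168.127.128/25 sits inside
# 192.168.127.0/24 and hands the upper half of the /24 to Docker.
prefix=25
size=$(( 1 << (32 - prefix) ))   # a /25 covers 128 addresses
start=128                        # the range begins at .128
end=$(( start + size - 1 ))      # ...and runs through .255
echo "container IPs come from 192.168.127.${start}-192.168.127.${end}"
```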
Our new bridge acts much like the docker0 bridge. Ports can be mapped to the physical host in the same manner. In the above example, we mapped port 8081 to port 80 in the container. Despite this container being on a different bridge, the connectivity works all the same…
We can make this example slightly more interesting by removing the existing container, removing the ‘testbridge’, and redefining it slightly differently…
docker stop web1
docker rm web1
docker network rm testbridge
docker network create --driver=bridge \
--subnet=192.168.127.0/24 --gateway=192.168.127.1 \
--ip-range=192.168.127.128/25 --internal testbridge
The only change here is the addition of the ‘--internal’ flag. This prevents any external communication from the bridge. Let’s check this out by defining the container like this…
docker run -d --net=testbridge -p 8081:80 --name web1 jonlangemak/docker:webinstance1
You’ll note that in this case, we can no longer access the web server container through the exposed port…
It’s obvious that the ‘--internal’ flag prevents containers attached to the bridge from talking outside of the host. So while we can now define new bridges and associate newly spawned containers to them, that by itself is not terribly interesting. What would be more interesting is the ability to connect existing containers to these new bridges. As luck would have it, we can use the docker network ‘connect’ and ‘disconnect’ commands to add and remove containers from any defined bridge. Let’s start by attaching the container web1 to the default docker0 bridge (bridge)…
docker network connect bridge web1
If we look at the network configuration of the container, we can see that it now has two NICs. One associated with ‘bridge’ (the docker0 bridge), and another associated with ‘testbridge’…
If we check again, we’ll see that we can now once again access the web server through the mapped port across the ‘bridge’ interface…
Next, let’s spin up our web2 container, and attach it to our default docker0 bridge…
docker run -d -p 8082:80 --name web2 jonlangemak/docker:webinstance2
Before we get too far – let’s take a logical look at where we stand…
We have a physical host (docker1) with a NIC called ‘ENS0’ which sits on the physical network with the IP address of 10.20.30.230. That host has 2 Linux bridges called ‘bridge’ (docker0) and ‘testbridge’ each with their own defined IP addresses. We also have two containers, one called web1 which is associated with both bridges and a second, web2, that’s associated with only the native Docker bridge.
Given this diagram, you might assume that web1 and web2 would be able to communicate directly with each other since they are connected to the same bridge. However, if you recall our earlier posts, Docker has something called ICC (Inter Container Communication) mode. When ICC is set to false, containers can’t communicate with each other directly across the docker0 bridge.
Note: There’s a whole section on ICC and linking down below so if you don’t recall don’t worry!
In the case of this example, I have set ICC mode to false, meaning that web1 cannot talk to web2 across the docker0 bridge unless we define a link. However, ICC mode only applies to the default bridge (docker0). If we connect both containers to the bridge ‘testbridge’ they should be able to communicate directly across that bridge. Let’s give it a try…
docker network connect testbridge web2
So let’s try from the container and see what happens…
Success. User defined bridges are pretty easy to define and map containers to. Before we move on to user defined overlays, I want to briefly talk about linking and how it’s changed with the introduction of user defined networks.
Docker linking has been around since the early versions and was commonly mistaken for some kind of network feature or function. In reality, it has very little to do with network policy, particularly in Docker’s default configuration. Let’s take a quick look at how linking worked before user defined networks.
In a default configuration, Docker has the ICC value set to true. In this mode, all containers on the docker0 bridge can talk directly to each other on any port they like. We saw this in action earlier with the bridge mode example where web1 and web2 were able to ping each other. If we change the default configuration and disable ICC, we’ll see a different result. For instance, if we change the ICC value to ‘false’ in ‘/etc/sysconfig/docker’, we’ll notice that the above example no longer works…
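For reference, the relevant line in ‘/etc/sysconfig/docker’ ends up looking something like this. The exact set of flags in OPTIONS will vary by install; ‘--icc=false’ is the piece that matters here…

```shell
# /etc/sysconfig/docker - illustrative OPTIONS line; keep whatever flags you
# already have and add --icc=false to disable inter-container communication
# on the docker0 bridge, then restart the Docker service.
OPTIONS='--icc=false -H unix:///var/run/docker.sock'
```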
If we want web1 to be able to access web2 we can ‘link’ the containers. Linking one container to another allows the linking container to reach the linked container on its exposed ports.
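As a sketch, the legacy link syntax looks like this. I’m assuming the same web images used throughout this post, and I’ve guarded the command so it’s a no-op on a box without Docker…

```shell
# Legacy linking sketch: start web2 with a link to the already-running web1.
# With ICC disabled, the link permits web2 -> web1 on web1's exposed ports.
if command -v docker >/dev/null 2>&1; then
  docker run -d -p 8082:80 --name web2 --link web1:web1 \
    jonlangemak/docker:webinstance2
  msg="web2 started with a link to web1"
else
  msg="docker not installed; skipping"
fi
echo "$msg"
```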
Above, you can see that once the link is in place, I can’t ping web1 from web2, but I can access web1 on its exposed port. In this case, that port is 80. So linking with ICC disabled only allows linked containers to talk to each other on their exposed ports. This is the only way in which linking intersects with network or security policy. The other feature linking gives you is name and service resolution. For instance, let’s look at the environment variables on web2 once we link it to web1…
In addition to the environment variables, you’ll also notice that web2’s hosts file has been updated to include the IP address of the web1 container. This means that I can now access the container by name rather than by IP address. As you can see, linking in previous versions of Docker had its uses, and that same functionality is still available today.
That being said, user defined networks offer a pretty slick alternative to linking. So let’s go back to our example above where web1 and web2 are communicating across the ‘testbridge’ bridge. At this point, we haven’t defined any links at all, but let’s try pinging web2 by name from web1…
Ok – so that’s pretty cool, but how is it working? Let’s check the environment variables and the hosts file on the container…
Nothing at all here that would statically map the name web2 to an IP address. So how is this working? Docker now has an embedded DNS server. Container names are now registered with the Docker daemon and resolvable by any other containers on the same host. However – this functionality ONLY exists on user defined networks. You’ll note that the above ping returned an IP address associated with ‘testbridge’, not the default docker0 bridge.
That means I no longer need to statically link containers together in order for them to be able to communicate via name. In addition to this automatic behavior, you can also define global aliases and links on the user defined networks. For example, now try running these two commands…
docker network disconnect testbridge web1
docker network connect --alias=thebestserver --link=web2:webtwo testbridge web1
Above we removed web1 from ‘testbridge’ and then re-added it, specifying a link and an alias. When used with user defined networks, the link flag functions much the same way as it did in the legacy linking method. Web1 will be able to resolve the container web2 either by its name or by its linked alias ‘webtwo’. In addition, user defined networks also provide what are referred to as ‘network-scoped aliases’. These aliases can be resolved by any container on the user defined network segment. So whereas links are defined by the container that wishes to use the link, aliases are defined by the container advertising the alias. Let’s log into each container and try pinging via the link and the alias…
In the case of web1, it’s able to ping both the defined link as well as the alias name. Let’s try web2…
So we can see that links used with user defined networks are locally significant to the container. On the other hand, aliases are associated with a container when it joins a network and are globally resolvable by any container on that same user defined network.
User defined overlay networks
The second and last built-in user defined network type is the overlay. Unlike the bridge type, the overlay requires an external key value store in order to store information such as networks, endpoints, IP addresses, and discovery information. In most examples that key value store is Consul, but it can also be Etcd or ZooKeeper. So let’s look at the lab we’re going to be using for this example…
Here we have 3 Docker hosts. Docker1 and docker2 live on 10.20.30.0/24 and docker3 lives on 192.168.30.0/24. Starting with a blank slate, all of the hosts have the docker0 bridge defined but no other user defined network or containers running.
The first thing we need to do is to tell all of the Docker hosts where to find the key value store. This is done by editing the Docker configuration settings in ‘/etc/sysconfig/docker’ and adding some option flags. In my case, my ‘OPTIONS’ now look like this…
OPTIONS='-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-store=consul://10.20.30.230:8500/network --cluster-advertise=ens18:2375'
Make sure that you adjust your options to account for the IP address of the Docker host running Consul and the interface name defined under the ‘cluster-advertise’ flag. Update these options on all hosts participating in the cluster and then make sure you restart the Docker service.
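A quick way to confirm the daemon picked up the flags is ‘docker info’, which in this release reports the cluster store and advertise address. The check below is guarded so it exits cleanly on a box without Docker…

```shell
# After the restart, verify the daemon registered the cluster store.
if command -v docker >/dev/null 2>&1; then
  info=$(docker info 2>/dev/null | grep -i 'cluster') \
    || info="no cluster store configured"
else
  info="docker not installed; skipping check"
fi
echo "$info"
```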
Once Docker is back up and running, we need to deploy the aforementioned key value store for the overlay driver to use. As luck would have it, Consul offers their service as a container. So let’s deploy that container on docker1…
docker run -d -p 8500:8500 -h consul --name consul progrium/consul -server -bootstrap
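Once the container is up, Consul’s HTTP API makes for an easy health check – the status endpoint returns the raft leader’s address. The IP below is my docker1 host, so substitute your own…

```shell
# Ask Consul who the raft leader is; a non-empty reply means the key value
# store is ready for the overlay driver to use.
if command -v curl >/dev/null 2>&1; then
  leader=$(curl -s --connect-timeout 2 \
    http://10.20.30.230:8500/v1/status/leader) || leader="consul not reachable"
else
  leader="curl not installed"
fi
echo "consul leader: ${leader}"
```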
Once the Consul container is running, we’re all set to start defining overlay networks. Let’s go over to the docker3 host and define an overlay network…
docker network create -d overlay --subnet=10.10.10.0/24 testoverlay
Now if we look at docker1 or docker2, we should see the new overlay defined…
Perfect, so things are working as expected. Let’s now run one of our web containers on the host docker3…
Note: Unlike bridges, overlay networks do not pre-create the required interfaces on the Docker host until they are used by a container. Don’t be surprised if you don’t see these generated the instant you create the network.
docker run -d --net=testoverlay -p 8081:80 --name web1 jonlangemak/docker:webinstance1
Nothing too exciting here. Much like our other examples, we can now access the web server by browsing to the host docker3 on port 8081…
Let’s fire up the same container on docker2 and see what we get…
So it seems that container names must be unique across a user defined overlay. This makes sense, so let’s instead load the second web instance on this host…
docker run -d --net=testoverlay -p 8082:80 --name web2 jonlangemak/docker:webinstance2
Once this container is running, let’s test the overlay by pinging web1 from web2…
Very cool. If we look at the physical network between docker2 and docker3 we’ll actually see the VXLAN encapsulated packets traversing the network between the two physical docker hosts…
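If you want to see that encapsulation yourself, the overlay data plane is VXLAN over UDP port 4789, so a capture on the physical NIC (ens18 in my lab – substitute your interface name) shows it plainly. The sketch is guarded so it exits cleanly on other boxes…

```shell
# Capture five VXLAN-encapsulated frames on the physical interface.
# Docker's overlay driver uses the standard VXLAN port, UDP/4789.
nic=ens18
if command -v tcpdump >/dev/null 2>&1 && ip link show "$nic" >/dev/null 2>&1; then
  tcpdump -n -i "$nic" -c 5 udp port 4789
  note="capture complete"
else
  note="tcpdump or $nic not available; skipping capture"
fi
echo "$note"
```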
It should be noted that there isn’t a bridge associated with the overlay itself. However – there is a bridge defined on each host which can be used for mapping ports of the physical host to containers that are a member of an overlay network. For instance, let’s look at the interfaces defined on the host docker3…
Notice that there’s a ‘docker_gwbridge’ bridge defined. If we look at the interfaces of the container itself, we see that it also has two interfaces…
Eth0 is a member of the overlay network, but eth1 is a member of the gateway bridge we saw defined on the host. If you need to expose a port from a container on an overlay network, the mapping is done through the ‘docker_gwbridge’ bridge. However, much like the user defined bridge, you can prevent external access by specifying the ‘--internal’ flag during network creation. This prevents the container from receiving the additional interface associated with the gateway bridge. It does not, however, prevent the ‘docker_gwbridge’ from being created on the host.
Since our last example is going to use an internal overlay, let’s delete the web1 and web2 containers as well as the overlay network and rebuild the overlay network using the internal flag…
docker stop web2
docker rm web2
docker stop web1
docker rm web1
docker network rm testoverlay
docker network create -d overlay --internal --subnet=10.10.10.0/24 internaloverlay
docker run -d --net=internaloverlay --name web1 jonlangemak/docker:webinstance1
docker run -d --net=internaloverlay --name web2 jonlangemak/docker:webinstance2
So now we have two containers, each with a single interface on the overlay network. Let’s make sure they can talk to each other…
Perfect, the overlay is working. So at this point – our diagram sort of looks like this…
Not very exciting at this point especially considering we have no means to access the web server running on either of these containers from the outside world. To remedy that, why don’t we deploy a load balancer on docker1. To do this we’re going to use HAProxy so our first step will be coming up with a config file. The sample I’m using looks like this…
global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice

defaults
    mode http
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    stats enable
    stats auth user:password
    stats uri /haproxyStats

frontend main
    bind *:80
    default_backend webservers

backend webservers
    option httpchk HEAD /index.html HTTP/1.0
    server web1 web1:80 check
    server web2 web2:80 check
For the sake of this test, let’s just focus on the backend section, which defines two servers: one called web1 that’s accessible at the address web1:80 and a second called web2 accessible at web2:80. Save this config file on your docker1 host; in my case, I put it in ‘/root/haproxy/haproxy.cfg’. Then we just fire up the container with this syntax…
docker run -d --net=internaloverlay --name haproxy -p 80:80 -v ~/haproxy:/usr/local/etc/haproxy/ haproxy
After this container kicks off, our topology now looks more like this…
So while the HAProxy container can now talk to both the backend servers, we still can’t talk to it on the frontend. If you recall, we defined the overlay as internal so we need to find another way to access the frontend. This can easily be achieved by connecting the HAProxy container to the native docker0 bridge using the network connect command…
docker network connect bridge haproxy
Once this is done, you should be able to hit the front end of the HAProxy container by hitting the docker1 host on port 80 since that’s the port we exposed.
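A quick way to test it is to make a handful of requests against the docker1 host; with both backends healthy, the responses should alternate between the two web instances. The hostname here assumes the lab topology above…

```shell
# Poll the HAProxy front end a few times; round robin should alternate the
# returned page between webinstance1 and webinstance2. 'docker1' is the lab
# host name, so substitute your own host or IP.
host=docker1
for i in 1 2 3 4; do
  if command -v curl >/dev/null 2>&1; then
    page=$(curl -s --connect-timeout 2 "http://${host}/") || page="request failed"
  else
    page="curl not installed"
  fi
  echo "request ${i}: ${page}"
done
```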
And with any luck, you should see the HAProxy node load balancing requests between the two web server containers. Note that this also could have been accomplished by leaving the overlay network as an external network. In that case, the port 80 mapping we did with the HAProxy container would have been exposed through the ‘docker_gwbridge’ bridge and we wouldn’t have needed to add a second bridge interface to the container. I did it that way just to show you that you have options.
Bottom line – Lots of new features in the Docker networking space. I’m hoping to get another blog out shortly to discuss other network plugins. Thanks for reading!