Docker networking 101 – User defined networks

In this post, I’d like to cover some of the new Docker network features.  Docker 1.9 saw the release of user defined networks, and the most recent version, 1.10, added some additional features.  In this post, we’ll cover the basics of the new network model as well as show some examples of what these new features provide.

So what’s new?  Well – lots.  To start with, let’s take a look at a Docker host running the newest version of Docker (1.10). 

Note: I’m running this demo on CentOS 7 boxes.  The default repository had version 1.8 so I had to update to the latest by using the update method shown in a previous post here.  Before you continue, verify that ‘docker version’ shows you on the correct release.

You’ll notice that the Docker CLI now provides a new option to interact with networks through the ‘docker network’ command…

Alright – so let’s start with the basics and see what’s already defined…


By default, a base Docker installation has these three networks defined.  The networks are permanent and cannot be modified.  Taking a closer look, you’ll likely notice that these predefined networks are the same as the network models we had in earlier versions of Docker.  You could start a container in bridge mode by default, in host mode by specifying ‘--net=host’, and without an interface by specifying ‘--net=none’.  To level set – everything that was there before is still here.  To make sure everything still works as expected, let’s run through building a container under each network type.

Note: These exact commands were taken from my earlier series of Docker networking 101 posts to show that the command syntax has not changed with the addition of multi-host networking.  Those posts can be found here.

Host mode
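A command along these lines starts the container in host mode (I’m using the stock `nginx` image purely as a stand-in test web server; substitute whatever web image you prefer):

```shell
# Host mode: the container shares the host's network stack,
# so the web server binds directly to port 80 on the host
docker run -d --name web1 --net=host nginx
```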

Executing the above command will spin up a test web server with the container’s network stack mapped directly to that of the host.  Once the container is running, we should be able to access the web server through the Docker host’s IP address…

Note: You either need to disable firewalld on the Docker host or add the appropriate rules for this to work.

Bridge mode
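Two commands along these lines start the containers in the default bridge mode (again using `nginx` as a stand-in web server image):

```shell
# Bridge mode (the default): publish host ports 8081 and 8082
# to port 80 in each container
docker run -d --name web1 -p 8081:80 nginx
docker run -d --name web2 -p 8082:80 nginx
```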

Here we’re running the default bridge mode and mapping ports into the containers.  Running those two containers should give you the web servers you’re looking for on ports 8081 and 8082…

In addition, if we connect to the containers directly, we can see that communication between the two containers occurs directly across the docker0 bridge never leaving the host…

Here we can see that web1 has an ARP entry for web2.  Looking at web2, we can see the MAC address is identical…

None mode
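A quick way to sketch this mode (`busybox` is just a convenient stand-in image with the `ip` applet):

```shell
# None mode: the container gets no interface besides loopback
docker run --rm --net=none busybox ip addr show
```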

In this example we can see that the container doesn’t receive any interface at all…

As you can see, all three modes work just as they had in previous versions of Docker.  So now that we’ve covered the existing network functions, let’s talk about the new user defined networks…

User defined bridge networks
The easiest user defined network to use is the bridge.  Defining a new bridge is pretty easy.  Here’s a quick example…
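A create command along these lines defines the bridge with the three attributes discussed below (all addresses here are illustrative; substitute your own):

```shell
# Define a new user defined bridge named 'testbridge'.
# Example addressing: a /24 subnet, the bridge IP as the gateway,
# and the upper half of the subnet reserved for container IPs.
docker network create -d bridge \
  --subnet= \
  --gateway= \
  --ip-range= \
  testbridge
```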

Here I create a new bridge named ‘testbridge’ and provide the following attributes…

Gateway – In this case I set it to which will be the IP of the bridge created on the Docker host.  We can see this by looking at the Docker host’s interfaces…

Subnet – I specified this as  We can see in the output above that this is the CIDR associated with the bridge.

IP-range – If you wish to define a smaller subnet from which Docker can allocate container IPs, you can use this flag.  The subnet you specify here must exist within the bridge subnet itself.  In my case, I specified the second half of the defined subnet.  When I start a container, I’ll get an IP out of that range if I assign the container to this bridge…
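Assigning a container to the new bridge and publishing a port looks something like this (`nginx` again as a stand-in image):

```shell
# Attach the container to testbridge and map host port 8081
# to port 80 in the container
docker run -d --name web1 --net=testbridge -p 8081:80 nginx
```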

Our new bridge acts much like the docker0 bridge.  Ports can be mapped to the physical host in the same manner.  In the above example, we mapped port 8081 to port 80 in the container.  Despite this container being on a different bridge, the connectivity works all the same…


We can make this example slightly more interesting by removing the existing container, removing the ‘testbridge’, and redefining it slightly differently…
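The teardown and rebuild would look roughly like this (subnet value is illustrative):

```shell
# Remove the container and the old bridge, then recreate the
# bridge with the --internal flag to block external access
docker rm -f web1
docker network rm testbridge
docker network create -d bridge --internal \
  --subnet= \
  testbridge
```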

The only change here is the addition of the ‘--internal’ flag.  This prevents any external communication from the bridge.  Let’s check this out by defining the container like this…
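That is, the same run command as before, with the container attached to the now-internal bridge (`nginx` as a stand-in image):

```shell
# Same port mapping as before, but on the internal bridge
docker run -d --name web1 --net=testbridge -p 8081:80 nginx
```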

You’ll note that in this case, we can no longer access the web server container through the exposed port…

It’s obvious that the ‘--internal’ flag prevents containers attached to the bridge from talking outside of the host.  So while we can now define new bridges and associate newly spawned containers with them, that by itself is not terribly interesting.  What would be more interesting is the ability to connect existing containers to these new bridges.  As luck would have it, we can use the docker network ‘connect’ and ‘disconnect’ commands to add and remove containers from any defined bridge.  Let’s start by attaching the container web1 to the default docker0 bridge (bridge)…
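The connect command takes the network name followed by the container name:

```shell
# Attach the existing web1 container to the default docker0 bridge
docker network connect bridge web1
```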

If we look at the network configuration of the container, we can see that it now has two NICs.  One associated with ‘bridge’ (the docker0 bridge), and another associated with ‘testbridge’…

If we check again, we’ll see that we can now once again access the web server through the mapped port across the ‘bridge’ interface…

Next, let’s spin up our web2 container, and attach it to our default docker0 bridge…
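Something like this, with `nginx` standing in for the web server image:

```shell
# web2 lands on the default docker0 bridge with host port 8082 published
docker run -d --name web2 -p 8082:80 nginx
```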

Before we get too far – let’s take a logical look at where we stand…

We have a physical host (docker1) with a NIC called ‘ENS0’ which sits on the physical network with the IP address of  That host has two Linux bridges called ‘bridge’ (docker0) and ‘testbridge’, each with their own defined IP addresses.  We also have two containers: one called web1, which is associated with both bridges, and a second, web2, that’s associated with only the native Docker bridge. 

Given this diagram, you might assume that web1 and web2 would be able to communicate directly with each other since they are connected to the same bridge.  However, if you recall our earlier posts, Docker has something called ICC (Inter Container Communication) mode.  When ICC is set to false, containers can’t communicate with each other directly across the docker0 bridge.

Note: There’s a whole section on ICC and linking down below so if you don’t recall don’t worry!

In the case of this example, I have set ICC mode to false, meaning that web1 cannot talk to web2 across the docker0 bridge unless we define a link.  However, ICC mode only applies to the default bridge (docker0).  If we connect both containers to the bridge ‘testbridge’, they should be able to communicate directly across that bridge.  Let’s give it a try…
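In our example web1 is already attached to ‘testbridge’, so only web2 needs connecting:

```shell
# Attach web2 to the user defined bridge as well
docker network connect testbridge web2
```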

So let’s try from the container and see what happens…

Success.  User defined bridges are pretty easy to define and map containers to.  Before we move on to user defined overlays, I want to briefly talk about linking and how it’s changed with the introduction of user defined networks.

Container Linking
Docker linking has been around since the early versions and was commonly mistaken for some kind of network feature or function.  In reality, it has very little to do with network policy, particularly in Docker’s default configuration.  Let’s take a quick look at how linking worked before user defined networks.

In a default configuration, Docker has the ICC value set to true.  In this mode, all containers on the docker0 bridge can talk directly to each other on any port they like.  We saw this in action earlier with the bridge mode example where web1 and web2 were able to ping each other.  If we change the default configuration and disable ICC, we’ll see a different result.  For instance, if we change the ICC value to ‘false’ in ‘/etc/sysconfig/docker’, we’ll notice that the above example no longer works…

If we want web1 to be able to access web2, we can ‘link’ the containers.  Linking a container to another container allows them to talk to each other on the containers’ exposed ports.
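A legacy link is defined at container start time; one way to sketch it (`nginx` as a stand-in image, with web2 re-run to pick up the link):

```shell
# Re-run web2 with a link to web1; with icc=false, the link allows
# the linked containers to reach each other on exposed ports only
docker rm -f web2
docker run -d --name web2 -p 8082:80 --link web1:web1 nginx
```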

Above, you can see that once the link is in place, I can’t ping web1 from web2, but I can access web1 on its exposed port.  In this case, that port is 80.  So linking with ICC disabled only allows linked containers to talk to each other on their exposed ports.  This is the only way in which linking intersects with network or security policy.  The other feature linking gives you is name and service resolution.  For instance, let’s look at the environment variables on web2 once we link it to web1…


In addition to the environment variables, you’ll also notice that web2’s hosts file has been updated to include the IP address of the web1 container.  This means that I can now access the container by name rather than by IP address.  As you can see, linking in previous versions of Docker had its uses, and that same functionality is still available today.

That being said, user defined networks offer a pretty slick alternative to linking.  So let’s go back to our example above where web1 and web2 are communicating across the ‘testbridge’ bridge.  At this point, we haven’t defined any links at all, but let’s try pinging web2 by name from web1…

Ok – so that’s pretty cool, but how is that working?  Let’s check the environment variables and the hosts file on the container…

Nothing at all here that would statically map the name web2 to an IP address.  So how is this working?  Docker now has an embedded DNS server.  Container names are now registered with the Docker daemon and resolvable by any other containers on the same host.  However – this functionality ONLY exists on user defined networks.  You’ll note that the above ping returned an IP address associated with ‘testbridge’, not the default docker0 bridge. 

That means I no longer need to statically link containers together in order for them to be able to communicate via name.  In addition to this automatic behavior, you can also define global aliases and links on the user defined networks.   For example, now try running these two commands…
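The two commands remove web1 from the network and re-add it with a link and an alias (‘webtwo’ is the linked alias used in this example; ‘webone’ is an illustrative network-scoped alias name):

```shell
# Remove web1 from testbridge, then re-attach it with a link to web2
# (resolvable by web1 as 'webtwo') and a network-scoped alias for web1
docker network disconnect testbridge web1
docker network connect --link web2:webtwo --alias webone testbridge web1
```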

Above we removed web1 from ‘testbridge’ and then re-added it, specifying a link and an alias.  When using the link flag with user defined networks, it functions much the same way as it did with the legacy linking method.  Web1 will be able to resolve the container web2 either by its name or by its linked alias ‘webtwo’.  In addition, user defined networks also provide what are referred to as ‘network-scoped aliases’.  These aliases can be resolved by any container on the user defined network segment.  So whereas links are defined by the container that wishes to use the link, aliases are defined by the container advertising the alias.  Let’s log into each container and try pinging via the link and the alias…

In the case of web1, it’s able to ping both the defined link as well as the alias name.  Let’s try web2…

So we can see that links used with user defined networks are locally significant to the container.  On the other hand, aliases are associated with a container when it joins a network and are globally resolvable by any container on that same user defined network.

User defined overlay networks
The second and last built-in user defined network type is the overlay.  Unlike the bridge type, the overlay requires an external key/value store in order to store information such as networks, endpoints, IP addresses, and discovery information.  In most examples that key/value store is Consul, but it can also be etcd or ZooKeeper.  So let’s look at the lab we’re going to be using for this example…
Here we have 3 Docker hosts.  Docker1 and docker2 live on and docker3 lives on  Starting with a blank slate, all of the hosts have the docker0 bridge defined but no other user defined network or containers running.

The first thing we need to do is to tell all of the Docker hosts where to find the key value store.  This is done by editing the Docker configuration settings in ‘/etc/sysconfig/docker’ and adding some option flags.  In my case, my ‘OPTIONS’ now look like this…
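Something along these lines (the Consul host address and the interface name ‘ens0’ are examples; use the address of the host that will run Consul and this host’s own NIC):

```shell
# /etc/sysconfig/docker
# cluster-store points at the Consul host (port 8500),
# cluster-advertise names this host's own interface and port
OPTIONS="--cluster-store=consul:// --cluster-advertise=ens0:2375"
```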

Make sure that you adjust your options to account for the IP address of the Docker host running Consul and the interface name defined under the ‘cluster-advertise’ flag.  Update these options on all hosts participating in the cluster and then make sure you restart the Docker service.

Once Docker is back up and running, we need to deploy the aforementioned key/value store for the overlay driver to use.  As luck would have it, Consul is available as a container.  So let’s deploy that container on docker1…
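A run command along these lines starts a single-node Consul server and publishes its HTTP port:

```shell
# Run Consul in server/bootstrap mode, publishing port 8500 on the host
docker run -d -p 8500:8500 -h consul --name consul progrium/consul -server -bootstrap
```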

Once the Consul container is running, we’re all set to start defining overlay networks.  Let’s go over to the docker3 host and define an overlay network…
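The create command looks just like the bridge case, with the overlay driver specified instead (network name and subnet are illustrative):

```shell
# On docker3 – define a new overlay network
docker network create -d overlay --subnet= testoverlay
```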

Now if we look at docker1 or docker2, we should see the new overlay defined…

Perfect, so things are working as expected.  Let’s now run one of our web containers on the host docker3…
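Assuming the overlay was named ‘testoverlay’, a run command like this attaches the container to it (`nginx` as a stand-in web server image):

```shell
# On docker3 – attach the container to the overlay and publish port 8081
docker run -d --name web1 --net=testoverlay -p 8081:80 nginx
```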

Note: Unlike bridges, overlay networks do not create the required interfaces on the Docker host until they are used by a container.  Don’t be surprised if you don’t see these generated the instant you create the network.

Nothing too exciting here.  Much like our other examples, we can now access the web server by browsing to the host docker3 on port 8081…

Let’s fire up the same container on docker2 and see what we get…

So it seems that container names across a user defined overlay must be unique.  This makes sense, so let’s instead load the second web instance on this host…
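That is, the same command with the second container name (again assuming the overlay is named ‘testoverlay’):

```shell
# On docker2 – the name web1 is taken cluster-wide, so run web2 instead
docker run -d --name web2 --net=testoverlay -p 8082:80 nginx
```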

Once this container is running, let’s test the overlay by pinging web1 from web2…

Very cool.  If we look at the physical network between docker2 and docker3 we’ll actually see the VXLAN encapsulated packets traversing the network between the two physical docker hosts…

It should be noted that there isn’t a bridge associated with the overlay itself.  However, there is a bridge defined on each host which can be used for mapping ports of the physical host to containers that are members of an overlay network.  For instance, let’s look at the interfaces defined on the host docker3…

Notice that there’s a ‘docker_gwbridge’ bridge defined.  If we look at the interfaces of the container itself, we see that it also has two interfaces…

Eth0 is a member of the overlay network, but eth1 is a member of the gateway bridge we saw defined on the host.  In the case that you need to expose a port from a container on an overlay network, you would need to use the ‘docker_gwbridge’ bridge.  However, much like the user defined bridge, you can prevent external access by specifying the ‘--internal’ flag during network creation.  This will prevent the container from receiving an additional interface associated with the gateway bridge.  It does not, however, prevent the ‘docker_gwbridge’ from being created on the host.

Since our last example is going to use an internal overlay, let’s delete the web1 and web2 containers as well as the overlay network and rebuild the overlay network using the internal flag…
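The rebuild would look roughly like this (network name, subnet, and `nginx` image are illustrative; the network only needs to be removed and recreated once since the definition is shared via the key/value store):

```shell
# Tear down the containers and the overlay...
docker rm -f web1                    # on docker3
docker rm -f web2                    # on docker2
docker network rm testoverlay

# ...then recreate the overlay as internal and restart the containers
docker network create -d overlay --internal --subnet= testoverlay
docker run -d --name web1 --net=testoverlay nginx    # on docker3
docker run -d --name web2 --net=testoverlay nginx    # on docker2
```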

So now we have two containers, each with a single interface on the overlay network.  Let’s make sure they can talk to each other…

Perfect, the overlay is working.  So at this point – our diagram sort of looks like this…

Not very exciting at this point, especially considering we have no means to access the web server running on either of these containers from the outside world.  To remedy that, why don’t we deploy a load balancer on docker1?  To do this we’re going to use HAProxy, so our first step will be coming up with a config file.  The sample I’m using looks like this…
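A minimal config along these lines matches the setup described below – a frontend on port 80 and a backend with the two web servers (the timeouts and other values are illustrative defaults, not the exact ones from the original file):

```
global
    daemon
    maxconn 256

defaults
    mode http
    timeout connect 5000ms
    timeout client  50000ms
    timeout server  50000ms

frontend http-in
    bind *:80
    default_backend webservers

backend webservers
    server web1 web1:80 check
    server web2 web2:80 check
```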

For the sake of this test, let’s just focus on the backend section, which defines two servers: one called web1 that’s accessible at the address web1:80, and a second called web2 that’s accessible at the address web2:80.  Save this config file onto your host docker1; in my case, I put it in ‘/root/haproxy/haproxy.cfg’.  Then we just fire up the container with this syntax…
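Something like this, assuming the official `haproxy` image (which reads its config from /usr/local/etc/haproxy/haproxy.cfg) and the overlay name ‘testoverlay’:

```shell
# Run HAProxy on the overlay, mounting the config directory
# and publishing the frontend on host port 80
docker run -d --name haproxy --net=testoverlay -p 80:80 \
  -v /root/haproxy:/usr/local/etc/haproxy:ro haproxy
```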

After this container kicks off, our topology now looks more like this…

So while the HAProxy container can now talk to both the backend servers, we still can’t talk to it on the frontend.  If you recall, we defined the overlay as internal so we need to find another way to access the frontend.  This can easily be achieved by connecting the HAProxy container to the native docker0 bridge using the network connect command…
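One command does it:

```shell
# Give the HAProxy container a second interface on docker0 so its
# published port 80 is reachable from outside the host
docker network connect bridge haproxy
```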

Once this is done, you should be able to hit the front end of the HAProxy container by hitting the docker1 host on port 80 since that’s the port we exposed.

And with any luck, you should see the HAProxy node load balancing requests between the two web server containers.  Note that this also could have been accomplished by leaving the overlay network as an external network.   In that case, the port 80 mapping we did with the HAProxy container would have been exposed through the ‘docker_gwbridge’ bridge and we wouldn’t have needed to add a second bridge interface to the container.  I did it that way just to show you that you have options.

Bottom line – Lots of new features in the Docker networking space.  I’m hoping to get another blog out shortly to discuss other network plugins.  Thanks for reading!


  1. emote’s avatar

    Great article!
    I’m just preparing installation with 2 physical hosts for HA and overlay network feature is very useful for that.

btw. when running `docker network inspect test overlay` from one of the docker hosts, I’m getting only a list of network members from this particular host, and for sure there are other containers on different hosts connected to the network and communication between these two is ok.

Any idea if it is by design or I’m missing something?


    1. Jon Langemak’s avatar

      Is the consul container successfully running? If so, does each Docker host have the correct Docker options as I used in the post?


    2. Sai’s avatar

      Excellent article. Thanks for taking time to write up.
      I contribute to Tradewave and currently we use docker. We had to stitch most of the networking components ourselves, especially around container communication. But we are exploring Kubernetes at the moment.
I am glad to see Docker is putting a lot of effort into networking features. Also, it’s on my todo list to read up on your Kubernetes posts.

I am wondering whether we should build on top of Docker or just move to Kubernetes. If you’re in the SF area, would love to talk more about this.


      1. Jon Langemak’s avatar

Unfortunately I’m not in the SF area frequently but I am a few times a year. Feel free to reach out via email (on the contact page) if you’d like to chat more about it that way.

        Thanks for reading!


      2. Bruce’s avatar


        I’m trying the part under “User defined overlay networks”

        I need to connect multiple Azure machines together so that I can have celery workers added to the pool.
        I am using docker-compose, however, and am running into a couple of issues. I’ll paste my configs so far and hopefully you can help:

        /usr/bin/sudo docker-compose up -d
/usr/bin/sudo docker network create -d overlay --subnet= workernet

        then in the docker-compose.yml file:
        – “8500”
        – “8500:8500”
        image: progrium/consul
        context: .
        – “-server”
        – “-bootstrap”
        I assume that my issue is in how to pass the -server and -bootstrap, as I get the following error message:

        Error response from daemon: error getting pools config from store: could not get pools config from store: Unexpected response code: 500

I get the same error when I run it from the command line: $ sudo docker network create -d overlay --subnet= workernet

        Any guidance would be greatly appreciated, please let me know if you need further info. I’m running in Azure on Ubuntu 16.04. Docker 1.13.0 and docker-compose 1.10.0



        1. Jon Langemak’s avatar

I think it’s probably best to keep compose out of the discussion for now and just focus on the network piece. It sounds like you can’t even pass the network create command to a host currently. Is that accurate? If so – can you paste in your Docker service settings? Not sure if you’re on systemd or not.


          1. Bruce’s avatar

            Hi, sorry, I’m new to docker, so not sure what you mean by “pass to host”. Where would I find service settings? If you mean where I define the services/containers, it’s all in the docker-compose.yml file. Ubuntu 16.04 is indeed systemd.


            1. Jon Langemak’s avatar

              No worries. What I was getting at is that you can’t even manually create networks. Is that correct?

              If so – we need to look at your Docker service settings. Usually part of a systemd drop in file or in /etc/sysconfig/docker


              1. Bruce’s avatar

                ok, as far as I can determine, the service settings are located in /etc/systemd/system/

                lrwxrwxrwx 1 root root 34 Feb 7 15:14 docker.service -> /lib/systemd/system/docker.service

                Description=Docker Application Container Engine
       docker.socket firewalld.service

                # the default is not to use systemd for cgroups because the delegate issues still
                # exists and systemd currently does not support the cgroup feature set required
                # for containers run by docker
ExecStart=/usr/bin/dockerd -H tcp:// -H fd:// -H unix:///var/run/docker.sock --cluster-store=consul:// --cluster-advertise=enp0s3:2375
                ExecReload=/bin/kill -s HUP $MAINPID
                # Having non-zero Limit*s causes performance problems due to accounting overhead
                # in the kernel. We recommend using cgroups to do container-local accounting.
                # Uncomment TasksMax if your systemd version supports it.
                # Only systemd 226 and above support this version.
                # set delegate yes so that systemd does not reset the cgroups of docker containers
                # kill only the docker process, not all processes in the cgroup


              2. Jon Langemak’s avatar

Ok – so it looks like you have the right configuration in there. You need to define the ‘--cluster-advertise’ and the ‘--cluster-store’ parameters to tell Docker where to find Consul. Can you confirm Docker is picking up these settings by running a ‘docker info’? You should see the cluster items defined in that output. Do this on all your hosts. Next we need to confirm that consul is reachable. How are you running Consul?

              3. Bruce’s avatar

                bruce@host:/srv/compose$ sudo docker info
                Containers: 7
                Running: 5
                Paused: 0
                Stopped: 2
                Images: 5
                Server Version: 1.13.0
                Storage Driver: aufs
                Root Dir: /var/lib/docker/aufs
                Backing Filesystem: extfs
                Dirs: 79
                Dirperm1 Supported: true
                Logging Driver: json-file
                Cgroup Driver: cgroupfs
                Volume: local
                Network: bridge host macvlan null overlay
                Swarm: inactive
                Runtimes: runc
                Default Runtime: runc
                Init Binary: docker-init
                containerd version: 03e5862ec0d8d3b3f750e19fca3ee367e13c090e
                runc version: 2f7393a47307a16f8cee44a37b262e8b81021e3e
                init version: 949e6fa
                Security Options:
                Profile: default
                Kernel Version: 4.4.0-62-generic
                Operating System: Ubuntu 16.04.1 LTS
                OSType: linux
                Architecture: x86_64
                CPUs: 1
                Total Memory: 3.859 GiB
                Name: host
                ID: MDVA:H3U3:O33A:LFRW:TZFH:N3SI:2THF:4P3K:BKG2:6BDZ:M5CY:B7ZM
                Docker Root Dir: /var/lib/docker
                Debug Mode (client): false
                Debug Mode (server): false
                WARNING: No swap limit support
                Experimental: false
                Cluster Store: consul://
                Cluster Advertise:
                Insecure Registries:
                Live Restore Enabled: false

                I should also ask, Assuming the actual network my machine is attached to is, should I ne defining a different network for the overlay? And, Should the cluster-store then be on that network?

              4. Bruce’s avatar

                Sorry, missed the bit where you asked how I’m running consul. It’s in the docker-compose.yml file. I’ve included the bit at the top, there’s more below but not relevant:

                version: “2”

                – “5672”
                – “15672”
                – “5672:5672”
                – “15672:15672”
                image: rabbitmq:3.5.1-management

                – “8500”
                – “8500:8500”
                image: progrium/consul
                context: .
                – “-server”
                – “-bootstrap”

                i was trying to pass the server and bootstrap bits in, and this is all I could guess how to do it.

              5. Jon Langemak’s avatar

                Couple of things. Let’s hold off on compose now and focus on getting a overlay network created. Can you start the consul container manually just with docker run? Something like…

docker run -d -p 8500:8500 -h consul --name consul progrium/consul -server -bootstrap

                Once that’s running, make sure that the container is running and has a published port (docker port). Then if we get that far, try defining just a simple overlay network…

                docker network create -d overlay testoverlay

              6. Bruce’s avatar

                Ok, yes, after I removed the consul container that was already there, those two commands ran with success.

              7. Jon Langemak’s avatar

                So you should now see that network on the other nodes when you do a ‘docker network ls’. The next step would be to test running containers on different nodes on that network and seeing if they can reach each other directly.

              8. Bruce’s avatar

                One would hope. Here’s my question: How do I retrieve info about the network? Do you have any idea why I can’t specify a particular subnet? The reason is that once this makes it into production, I assume I’m going to be needing to hardcode certain things, so I’ll need to be able to specifically define the network to be used.

                And to restate: When I create a network in this way, am I trying to actually join the LAN, or am I somehow just making up a new network subnet? IE, if my azure containers are all on, should I be using the same subnet in my overlay create, or should I maybe use like


              9. Jon Langemak’s avatar

                So extend the network command to something like this…

docker network create -d overlay --subnet testoverlay

Does that work? To answer your question, the overlay is its own network. It should be different from the network your hosts live on. If it isn’t, you could have issues. The overlay is an entirely new network that uses the server’s LAN as a transport network. The LAN will never see the IPs from your overlay. Hope that makes sense.

              10. Bruce’s avatar

                Yes, that makes sense,thanks.

                One more question: In my, this statement:

ExecStart=/usr/bin/dockerd -H tcp:// -H fd:// -H unix:///var/run/docker.sock --cluster-store=consul:// --cluster-advertise=enp0s3:2375

                Should the IP of the consul store be on the overlay network or my LAN? If it’s supposed to be on the overlay, how would I determine the IP of the consul container, would that not change each time it’s instantiated?

              11. Jon Langemak’s avatar

                Should be on the LAN. The docker service itself wont communicate across the overlay. That’s just for containers. So make sure that consul is running on a network that is reachable from the Docker hosts.

