Docker Networking 101 – Host mode

In our last post we covered what Docker does with container networking in a default configuration.  In this post, I'd like to start covering the remaining non-default network configuration modes.  There are really four Docker-provided network modes in which you can run containers…

Bridge mode – This is the default.  We saw how this worked in the last post, with the containers being attached to the docker0 bridge.

Host mode – The Docker documentation says that this mode does 'not containerize the container's networking!'.  What this really does is just put the container in the host's network stack.  That is, all of the network interfaces defined on the host will be accessible to the container.  This one is sort of interesting and has some caveats, but we'll talk about those in greater detail below.

Mapped container mode – This mode essentially maps a new container into an existing container's network stack.  This means that while other resources (processes, filesystem, etc.) will be kept separate, network resources such as port mappings and IP addresses of the first container will be shared by the second container.

None – This one is pretty straightforward.  It tells Docker to put the container in its own network stack but not to configure any of the container's network interfaces.  This allows you to create a custom network configuration, which we'll talk about more in a later post.

Keep in mind that all of these modes are applied at the container level, so we can certainly have a mix of different network modes on the same Docker host.

For this post, I’m going to use the same lab I used in the first post but with a minor tweak…

image
Note that my docker1 host now has two IP addresses.   Other than that, everything is the same.

Host Mode
Host mode seems pretty straightforward, but there are some items you need to keep in mind when deploying it.  Namely, some of the configuration I thought might happen automagically doesn't actually happen.  Let's look at an example so you can see what I'm talking about.  Let's start a basic web container on the docker2 host.  I'll do so with this command…
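The run command looks roughly like this.  Note that 'myrepo/web1' is a placeholder image name, not the actual tag from my repo, so substitute your own:

```shell
# Run the web container as a daemon in host networking mode.
# 'myrepo/web1' is a placeholder image name; use the real image tag.
docker run -d --name web1 --net=host myrepo/web1
```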

Note: All of the containers I use in these labs are available in my public repo, so feel free to download them for testing.  There are four images I use in this lab, all of which are running CentOS with Apache.  The only difference is slight configuration changes in the index.html page, so we can see which one is which, as well as some Apache config which I talk about more below.

Note that I'm passing the '--net=host' flag in the docker run command.  Also note that I'm not specifying any port mappings.  Once the image is downloaded, Docker will run the image as a container called 'web1'.  Since we told Docker to run this container as a daemon, let's connect to a bash shell on the container using this command…
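Connecting to a shell in a running container is a docker exec, something like:

```shell
# Open an interactive bash shell inside the running 'web1' container
docker exec -it web1 /bin/bash
```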

Once connected, let’s check and see what network interfaces we have in the container…
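The interface check from inside the container is the usual iproute2 command:

```shell
# List all interfaces and addresses as seen from inside the container
ip addr show
```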

image

Note that we don't have an IP address in the 172.17.0.0/16 address space.  Rather, we actually have all of the interfaces from the docker2 host.  For comparison's sake, let's see them side by side…

image
I hope that makes it obvious: they're totally identical.  This brings up some interesting possibilities, but before we get carried away, let's check and see what's going on with our Apache server.  Since the IP of the container is the IP of the host, one would assume that we should be able to hit our index.html on 10.20.30.101.  Let's give it a try…
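The test from another machine amounts to a simple curl against the docker2 host address from the lab diagram:

```shell
# Try to fetch the container's index page via the docker2 host IP
curl http://10.20.30.101/
```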

image

No dice.  So what's going on?  Recall that Docker makes rather extensive use of iptables for its bridge mode.  Let's take a look at the iptables rule set and see what it has…
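Inspecting the rule set on the host looks something like:

```shell
# List all iptables filter rules with numeric addresses and ports
sudo iptables -nL
```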

image
No rule to allow HTTP.  One might suggest that since we didn't use the '-p' flag, Docker didn't know to make a rule in iptables.  While that seems like a possible fix, it really isn't; starting the container with a port mapping yields the same result.  It's safe to say that you're on your own when it comes to host mode networking.  Docker will expose the host's network stack to the container, but it's then up to you to make the appropriate firewall rules.  So let's add a rule that allows port 80 traffic through iptables…
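A minimal rule would look something like this; the exact chain and rule position may vary depending on your base rule set:

```shell
# Insert a rule at the top of the INPUT chain allowing inbound TCP/80
sudo iptables -I INPUT -p tcp --dport 80 -j ACCEPT
```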

Note: Interestingly enough, you could actually make this rule from the container itself if you were to pass the '--privileged=true' flag in the docker run command.  While this may seem appealing from an automation perspective, it seems unnecessary and possibly a bad idea.

With the iptables rule in place we should be able to browse to the web page through the host IP address…

image
Cool, so now we're up and running in host mode.  One thing to keep in mind is that this mode of operation severely limits the services you can run on a single host; we don't get port mapping anymore.  If we try to run another container that also wants to use port 80, we're going to run into issues.  Let's try it so you can see what I'm talking about.  Let's spin up a second container called webinstance2 on docker2…
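The second container is started the same way as the first; again, the image name here is a placeholder:

```shell
# Start a second host-mode web container on the docker2 host
docker run -d --name webinstance2 --net=host myrepo/web2
```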

If we check we can see that both containers are now running…

image
At this point I can still get to my web1 index page, but what happened with webinstance2?  Let's log into webinstance2 and see what's going on…

Let’s check the service status…

image
Alright, that looks bad.  Let’s try and start httpd…
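Inside the container the checks amount to the usual CentOS 6-style service commands:

```shell
# Check Apache's status, then try to start it; the start should fail
# because port 80 is already bound by web1 in the shared network stack
service httpd status
service httpd start
```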

image
That looks even worse.  So this isn't a Docker problem; it's just the fact that the web1 container is already bound to port 80 on the interfaces of the docker2 host.  We can see this binding by checking out netstat from inside the webinstance2 container…
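The netstat check from inside the container looks like:

```shell
# List listening TCP sockets with owning PID/program name; the PID column
# will be blank for sockets owned by processes outside this container's
# PID namespace
netstat -plnt
```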

image
We don't get the PID info since this container's processes are separate from web1's, but if we head back to the host and run the command again, we can see that the port is being used by httpd…
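And the same command on the docker2 host itself:

```shell
# On the host, the PID/Program column now resolves to httpd
sudo netstat -plnt | grep ':80'
```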

image
So that all sort of makes sense.  You can't bind the same IP address and port to the same service in different containers.  So at this point, I'd argue that our diagram looks a lot more like this…

image
The docker2 host is still there, but the container is really right up front on the physical edge since it's sharing the same network stack as the host.  So this all seems very limiting.  Fortunately, we do have an option for running multiple identical services on the same Docker host.  Recall that docker1 now has two IP addresses, .100 and .200.  If we run two Apache instances in host network mode, one should be able to use .100 and the other .200.

Again – this isn't a Docker configuration problem.  Just as if this were a physical server running Apache, we need to tell Apache where to listen and on what port.  This is done by modifying the Apache config (in my case /etc/httpd/conf/httpd.conf) and setting the 'Listen' directive.  By default, Apache will listen on port 80 on every interface; the default config would look something like 'Listen 80'.  To make this work, we need to change the config to something like what is shown below on each respective container…
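For example, the testweb1 container's config would carry a line like this (testweb2 would use 10.20.30.200 instead):

```apache
# /etc/httpd/conf/httpd.conf – bind Apache to a single host address
Listen 10.20.30.100:80
```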

image

image
Fortunately for you, I already have two containers pre-configured with this configuration.  Testweb1 is set up to listen on 10.20.30.100:80 and testweb2 is listening on 10.20.30.200:80.  Let's run them on docker1 and see the result…
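Running the two pre-configured containers is the same pattern as before; the image names are again placeholders:

```shell
# Start both host-mode web containers on docker1
docker run -d --name testweb1 --net=host myrepo/testweb1
docker run -d --name testweb2 --net=host myrepo/testweb2
```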

Once both are downloaded let’s ensure they are both running…

image
Now let’s test and see what we get on each IP address…

Note: The same rules apply here – add a rule in iptables to allow HTTP on the host.
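Testing each address is again a pair of curls:

```shell
# Each Apache instance should answer only on its own address
curl http://10.20.30.100/
curl http://10.20.30.200/
```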

image image

Looks good, let’s check the docker host to see what it thinks is going on…

image
Nice, so the Docker host sees two Apache processes, one listening on each of its interfaces.  Our final diagram shows each container almost as its own distinct host…

image
So as you can see, host mode networking is a little bit different than bridge mode and requires some additional config to get it working properly.  However, what you get in return is performance: containers running in the host's network stack should see a higher level of performance than those traversing the docker0 bridge and iptables port mappings.

Next up we’ll cover the container in container mode of docker networking, stay tuned!


  1. Bhargav

    Another interesting read. Like the experiment with Host Mode with two containers running on same port with different IP-Address. Is this similar to kubernetes model ?


    1. Jon Langemak

      Very similar. Kubernetes uses the concept of pods. From what I understand a pod is a set of containers that can be deployed together on the same IP address. So you might have a container using port 80 and another using port 443 in the same pod. This clears up the port mapping confusion since each IP (pod) should be able to use the real service port. As to the network side of things I believe the pod IPs are just routed to the docker host.

      I'm still playing around with Kubernetes so this is all just my current understanding. I'm hoping to have more time to play around with it in the coming weeks so I can get a blog out on their network model.


    2. Bhargav

      From whatever limited knowledge I have WRT Kubernetes, multiple pods can be located on the same host. Pod-A with IP-A will have services running on different ports with IP-A, and Pod-B with IP-B will have services running on different ports with IP-B. So, the underlying host NIC will have multiple IPs (IP-A & IP-B) configured. In your example, it is 10.20.30.100 & 10.20.30.200.

      A socket is defined by IP & Port-No, so with experiment you have done, can we call a socket could be dis-aggregated using dockers ?


      1. Jon Langemak

        I'm not sure I'm completely following, but I think you're driving at what Kubernetes can do in terms of pod space being routed. I have a post coming up here shortly that starts the dive into Kubernetes, so maybe your question will be answered then.

        Thanks for reading!


      2. KK

        Why do you think host mode will have better performance? From the post, it seems like host mode still uses iptables.


        1. Jon Langemak

          My thinking was more along the line of the container being in the same network namespace as the host. If you're using the host IP address then it's essentially mapped directly on the host. Do you think it wouldn't be any better than the default modes?


        2. Leo

          fantastic, I like the way you explain docker networking.

          Thanks


        3. sam

          Great read. In fact, I started off by reading your Kubernetes blog. I'm still in exploring mode – which is better, Kubernetes or Docker Swarm? I'm trying to spin up a container using the cassandra image with persistent storage and then link it up with AWS.

          Thanks for posting this

          Sam


        4. Sai

          Quick question on the iptables entry for host mode. You created a TCP entry at port 80 but I don't see source/dest. Is that entry for all IP addresses (0.0.0.0)?


          1. Jon Langemak

            Yes – 0.0.0.0 means all IP addresses on the host. This can be limited if you like.


          2. Binh Thanh Nguyen

            Thanks, nice explanation


          3. Gearoid Maguire

            Great article. Can you help me? I have a Spring app that needs to connect to a MySQL DB; I can run it locally on my Windows laptop. I have Docker for Windows installed and I want to dockerize the web app. I want the container with my web app to communicate with my local MySQL DB on my Windows machine. Is this possible? Here is the command I am running:

            `docker run -it --name myapp --net=host -e "CATALINA_OPTS=-Dspring.profiles.active=dev -DPARAM1=DEV" -p 8080:8080 -p 8005:8005 -p 8009:8009 -p 3306:3306 -v C:\PathToApp\trunk\target\mywar.war:/usr/local/tomcat/webapps/mywar.war tomcat:8.0.38-jre8`


