Docker Networking 101 – Host mode


In our last post we covered what docker does with container networking in a default configuration.  In this post, I'd like to start covering the remaining non-default network configuration modes.  There are really four docker-provided network modes in which you can run containers…

Bridge mode – This is the default; we saw how this worked in the last post with the containers being attached to the docker0 bridge.

Host mode – The docker documentation claims that this mode does ‘not containerize the container’s networking!’.  What this really does is put the container in the host’s network stack.  That is, all of the network interfaces defined on the host will be accessible to the container.  This one is sort of interesting and has some caveats, but we’ll talk about those in greater detail below.

Mapped Container mode – This mode essentially maps a new container into an existing container’s network stack.  This means that while other resources (processes, filesystem, etc.) will be kept separate, network resources such as port mappings and IP addresses of the first container will be shared by the second container.

None – This one is pretty straightforward.  It tells docker to put the container in its own network stack but not to configure any of the container’s network interfaces.  This allows you to create a custom network configuration, which we’ll talk about more in a later post.

Keep in mind that all of these modes are applied at the container level, so we can certainly have a mix of different network modes on the same docker host.
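For reference, each mode is selected with the ‘--net’ flag on the docker run command.  A rough sketch (the image names here are just placeholders):

# Bridge mode (the default; used if --net isn't specified at all)
docker run -d --net=bridge some/image

# Host mode - share the host's network stack
docker run -d --net=host some/image

# Mapped container mode - share the network stack of an existing container named web1
docker run -d --net=container:web1 some/image

# None - give the container its own, unconfigured network stack
docker run -d --net=none some/image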

For this post, I’m going to use the same lab I used in the first post but with a minor tweak…

[image]
Note that my docker1 host now has two IP interfaces.   Other than that, everything is the same.

Host Mode
Host mode seems pretty straightforward, but there are some items you need to keep in mind when deploying it.  Namely, some of the configuration I thought might happen automagically doesn’t actually happen.  Let’s look at an example so you can see what I’m talking about.  Let’s start a basic web container on the docker2 host.  I’ll do so with this command…

docker run -d --name web1 --net=host jonlangemak/docker:webinstance1

Note: All of the containers I use in these labs are available in my public repo so feel free to download them for testing.  There are four images I use in this lab, all of which run CentOS with Apache.  The only differences are slight changes in the index.html page, so we can see which one is which, as well as some Apache config which I talk about more below.

Note that I’m passing the ‘--net=host’ flag in the docker run command.  Also note that I’m not specifying any port mappings.  Once the image is downloaded, docker will run the image as a container called ‘web1’.  Since we told docker to run this container as a daemon, let’s connect to a bash shell on the container using this command…

docker exec -ti web1 /bin/bash

Once connected, let’s check and see what network interfaces we have in the container…
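If you want to check this yourself, something like the following from inside the container will do it (assuming the iproute2 tools are present in the image; ‘ifconfig -a’ works as well):

# List all network interfaces visible inside the web1 container
ip addr show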

[image]

Note that we don’t have an IP address in the 172.17.0.0/16 address space.  Rather, we actually have all of the interfaces from the docker2 host.  For comparison’s sake, let’s see them side by side…

[image]
I hope that makes it obvious, but they’re totally identical.  This brings up some interesting possibilities.  But before we get carried away, let’s check and see what’s going on with our Apache server.  Since the IP of the container is the IP of the host, one would assume that we should be able to hit our index.html on 10.20.30.101.  Let’s give it a try…

[image]

No dice.  So what’s going on?  Recall that docker makes rather extensive use of iptables for its bridging mode.  Let’s take a look at the iptables rule set and see what it has…
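Something like this on the docker2 host will show the rule set (the line numbers come in handy since we’ll be inserting a rule by position in a second):

# Show the INPUT chain with packet counters and rule numbers
iptables -nvL INPUT --line-numbers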

[image]
No rule to allow http.  One might suggest that since we didn’t use the ‘-p’ flag, docker didn’t know to make a rule in iptables.  While that seems like a possible fix, it really isn’t.  Starting the container with a port mapping yields the same result.  That being said, it’s safe to say that you’re on your own when it comes to host mode networking.  Docker will expose the host’s network stack to the container, but it’s then up to you to make the appropriate firewall rules.  So let’s add a rule that allows port 80 traffic through iptables…

iptables -I INPUT 5 -p tcp -m tcp --dport 80 -j ACCEPT

Note: Interestingly enough, you could actually make this rule from the container itself if you were to pass the ‘--privileged=true’ flag in the docker run command.  While this may seem appealing from an automation perspective, it seems unnecessary and possibly a bad idea.
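For completeness, that approach would look something like this (a sketch only; I’m not recommending it):

# Give the container extended privileges so it can modify the (shared) host network stack
docker run -d --name web1 --net=host --privileged=true jonlangemak/docker:webinstance1
# Then, from a shell inside the container
iptables -I INPUT 5 -p tcp -m tcp --dport 80 -j ACCEPT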

With the iptables rule in place we should be able to browse to the web page through the host IP address…
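A quick test from another host (or from docker1) should now return the web1 index page:

# Request the page using the docker2 host's IP address
curl http://10.20.30.101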

[image]
Cool, so now we’re up and running in host mode.  One thing to keep in mind is that this mode of operation severely limits the services you can run on a single host.  That is, we don’t get port mapping anymore.  If we try to run another container that also wants to use port 80, we’re going to run into issues.  Let’s try it so you can see what I’m talking about.  Let’s spin up a second container, web2, using the webinstance2 image on docker2…

docker run -d --name web2 --net=host jonlangemak/docker:webinstance2

If we check we can see that both containers are now running…

[image]
At this point I can still get to my web1 index page but what happened with web2?  Let’s log into web2 and see what’s going on…

docker exec -it web2 /bin/bash

Let’s check the service status…

[image]
Alright, that looks bad.  Let’s try and start httpd…
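In case you want to follow along, the status check and the start attempt inside web2 look roughly like this (assuming the image uses the standard CentOS service scripts; adjust for systemd-based images):

# Check whether Apache is running inside the web2 container
service httpd status
# Try to start it - expect a bind error along the lines of
# "(98)Address already in use: make_sock: could not bind to address 0.0.0.0:80"
service httpd start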

[image]
That looks even worse.  So this isn’t a docker problem, it’s just the fact that the web instance 1 container is already bound to port 80 on the interfaces of the docker2 host.  We can see this binding by checking out netstat from inside the web instance 2 container…

[image]
We don’t get the PID info, since this container’s processes are different from web instance 1’s, but if we head back to the host and run the same command, we can see that the port is being used by httpd…
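The command in both places is something like this (‘ss -lntp’ works too if netstat isn’t installed):

# Inside web2: the port 80 listener shows up, but with no useful PID/program info
netstat -lntp
# On the docker2 host: the same listener shows up owned by httpd
netstat -lntp | grep :80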

[image]
So that all sort of makes sense.  You can’t bind the same IP address and port for the same service from different containers.  So at this point, I’d argue that our diagram looks a lot more like this…

[image]
The docker2 host is still there, but the container is really right up front on the physical edge since it’s sharing the same network stack as the host.  So this all seems very limiting.  Fortunately, we do have an option for running multiple identical services on the same docker host.  Recall that docker1 now has two IP addresses, .100 and .200.  If we run two Apache instances in host network mode, one should be able to use .100 and the other .200.

Again – This isn’t a docker configuration problem.  Just as if this were a physical server running Apache, we need to tell Apache where to listen and on what port.  This is done by modifying the Apache config (in my case /etc/httpd/conf/httpd.conf) and setting the ‘Listen’ directive.  By default, Apache will listen on port 80 on every interface.  The default config would look something like ‘Listen 80’.  To make this work we need to change the config to something like what is shown below on each respective container…

[image]

[image]
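In other words, the ‘Listen’ line in each container’s /etc/httpd/conf/httpd.conf ends up looking something like this:

# First container - bind Apache to the .100 address only
Listen 10.20.30.100:80

# Second container - bind Apache to the .200 address only
Listen 10.20.30.200:80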
Fortunately for you, I already have two containers preconfigured this way.  Testweb1 is set up to listen on 10.20.30.100:80 and testweb2 is listening on 10.20.30.200:80.  Let’s run them on docker1 and see the result…

docker run -d --name web1 --net=host jonlangemak/docker:testweb1
docker run -d --name web2 --net=host jonlangemak/docker:testweb2

Once both are downloaded let’s ensure they are both running…
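A quick ‘docker ps’ on docker1 should show both containers up:

# List running containers on the docker1 host
docker ps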

[image]
Now let’s test and see what we get on each IP address…

Note: The same rules apply here; add a rule in iptables to allow http on the host.

[image] [image]

Looks good, let’s check the docker host to see what it thinks is going on…
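On docker1 that check looks something like this:

# Two httpd listeners, one bound to each of the host's IP addresses
netstat -lntp | grep httpd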

[image]
Nice, so the docker host sees two Apache processes, one listening on each of its interfaces.  Our final diagram shows each container almost as its own distinct host…

[image]
So as you can see, host mode networking is a little bit different from bridge mode and requires some additional config to get it working properly.  However, the tradeoff is performance.  Containers running in the host’s network stack should see a higher level of performance than those traversing the docker0 bridge and iptables port mappings.

Next up we’ll cover the mapped container (container in container) mode of docker networking.  Stay tuned!

22 thoughts on “Docker Networking 101 – Host mode”

  1. Bhargav

    Another interesting read.  I like the experiment with host mode with two containers running on the same port with different IP addresses.  Is this similar to the kubernetes model?

    1. Jon Langemak Post author

      Very similar. Kubernetes uses the concept of pods. From what I understand a pod is a set of containers that can be deployed together on the same IP address. So you might have a container using port 80 and another using port 443 in the same pod. This clears up the port mapping confusion since each IP (pod) should be able to use the real service port. As to the network side of things I believe the pod IPs are just routed to the docker host.

      I’m still playing around with Kubernetes so this is all just my current understanding.  I’m hoping to have more time to play around with it in the coming weeks so I can get a blog out on their network model.

  2. Bhargav

    From whatever limited knowledge I have WRT Kubernetes, multiple pods can be located on the same host.  Pod-A with IP-A will have services running on different ports with IP-A, and Pod-B with IP-B will have services running on different ports with IP-B.  So the underlying host NIC will have multiple IPs (IP-A & IP-B) configured.  In your example, it is 10.20.30.100 & 10.20.30.200.

    A socket is defined by an IP and a port number, so with the experiment you have done, can we say a socket could be disaggregated using docker?

    1. Jon Langemak Post author

      I’m not sure I’m completely following, but I think you’re driving at what kubernetes can do in terms of pod space being routed.  I have a post coming up here shortly that starts the dive into kubernetes, so maybe your question will be answered then.

      Thanks for reading!

  3. KK

    Why do you think host mode will have better performance?  From the post, it seems like host mode still uses iptables.

    1. Jon Langemak Post author

      My thinking was more along the lines of the container being in the same network namespace as the host.  If you’re using the host IP address then it’s essentially mapped directly on the host.  Do you think it wouldn’t be any better than the default modes?


  6. sam

    Great read.  In fact, I started off by reading your kubernetes blog.  I’m still in exploring mode, deciding which is better, kubernetes or docker swarm.  I’m trying to spin up a container using the cassandra image with persistent storage and then link it up with AWS.

    Thanks for posting this

    Sam

  7. Sai

    Quick question on the iptables entry for host mode.  You created a tcp entry at port 80 but I don’t see a source/dest.  Is that entry for all IP addresses (0.0.0.0)?

  8. Gearoid Maguire

    Great article.  Can you help me?  I have a Spring app that needs to connect to a MySQL DB, and I can run it locally on my Windows laptop.  I have Docker for Windows installed and I want to dockerize the web app.  I want the container with my web app to communicate with my local MySQL DB on my Windows machine.  Is this possible?  Here is the command I am running:

    `docker run -it --name myapp --net=host -e "CATALINA_OPTS=-Dspring.profiles.active=dev -DPARAM1=DEV" -p 8080:8080 -p 8005:8005 -p 8009:8009 -p 3306:3306 -v C:\PathToApp\trunk\target\mywar.war:/usr/local/tomcat/webapps/mywar.war tomcat:8.0.38-jre8`


  11. Joe Fisher

    I am new to Docker, but I was under the impression that using the “-p” option in the docker run command would allow you to alias a host port to a docker container port.

    In other words, I want to run multiple instances of the exact same application inside of Docker containers, all on the same server.

    My applications all listen on port 80.

    However, I was under the impression that I could alias the ports using the “docker run” command, where container 1 might be run as follows:

    docker run -p 81:80 ……………

    Container 2 might be run as follows:

    docker run -p 82:80 ……………

    Container 3 might be run as follows:

    docker run -p 83:80 ……………

    And so on…

    Is there a way to create my containers, using the HOST networking configuration, to route incoming traffic in this manner?

    1. Joe Fisher

      I have not been able to test this solution yet, but I managed to get some “direction” on how this could be done.

      I can successfully create a new Docker container, using the following command:

      mkdir /Docker/BASE

      docker create centos:7.6.1810 --mount /Docker/BASE:/Docker/BASE -p 10.10.10.10:8800:80 -p 10.10.10.10:4400:443 /bin/bash

      Note that for security purposes, I did change the IP address in the above example.

      I have not been able to test the above container, because I am getting the following errors when I attempt to start it:

      COMMAND LINE ERRORS:

      Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: "--mount": executable file not found in $PATH": unknown
      Error: failed to start containers: e287091af6dc

      NOTE: I have logging set to debug, with all output going to the /var/log/messages file.

      LOGFILE MESSAGES:

      kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready

      kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready

      kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth4358470: link becomes ready

      kernel: docker0: port 2(veth4358470) entered blocking state

      kernel: docker0: port 2(veth4358470) entered forwarding state

      dockerd: time="2019-04-18T12:51:17.746586086-04:00" level=debug msg="sandbox set key processing took 167.323401ms for container e287091af6dc0f744097284e98cfdc958c97b0634e3626d78f38ae5f349390f6"

      NetworkManager[4643]: [1555606277.7468] device (veth4358470): carrier: link connected

      containerd: time="2019-04-18T12:51:17.839829084-04:00" level=info msg="shim reaped" id=e287091af6dc0f744097284e98cfdc958c97b0634e3626d78f38ae5f349390f6

      dockerd: time="2019-04-18T12:51:17.852340105-04:00" level=error msg="stream copy error: reading from a closed fifo"

      dockerd: time="2019-04-18T12:51:17.852396607-04:00" level=error msg="stream copy error: reading from a closed fifo"

      dockerd: time="2019-04-18T12:51:17.915502629-04:00" level=debug msg="Revoking external connectivity on endpoint infallible_hellman (78338ce5a25ef25f08be59de418bbf45489eda259fc55847f6e4c7000253c141)"

      dockerd: time="2019-04-18T12:51:17.919030220-04:00" level=debug msg="DeleteConntrackEntries purged ipv4:0, ipv6:0"

      kernel: docker0: port 2(veth4358470) entered disabled state

      dockerd: time="2019-04-18T12:51:18.100602888-04:00" level=debug msg="Releasing addresses for endpoint infallible_hellman's interface on network bridge"

