In this post I want to cover what I consider to be the final Docker-provided network mode. We haven't covered the 'none' option yet, but that will come up in future posts when we discuss more advanced deployment options. That being said, let's get right into it.
Mapped container network mode is also referred to as 'container in container' mode. Essentially, all you're doing is placing a new container inside an existing container's network stack. This leads to some interesting options with regard to how containers can consume network services.
In this blog I’ll be using two docker images…
web_container_80 – Runs CentOS image with Apache configured to listen on port 80
web_container_8080 – Runs CentOS image with Apache configured to listen on port 8080
These containers aren’t much different from what we had before, save the fact that I made the index pages a little more noticeable. So let’s download web_container_80 and run it with port 80 on the docker host mapped to port 80 on the container…
docker run -d --name web1 -p 80:80 jonlangemak/docker:web_container_80
Once it's downloaded, let's take a look and make sure it's running…
Here we can see that it’s running and that port 80 on the host has been mapped to port 80 on the container. So let’s try browsing to our host docker1 (10.20.30.100)…
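If you want to reproduce the check yourself, a minimal sketch might look like this (assuming the container name web1 from above and my lab host's address of 10.20.30.100; your host IP will differ)…

```shell
# List running containers and confirm the 80->80 port mapping on web1
docker ps

# Curl the docker host address; Apache in web1 should answer on port 80
curl http://10.20.30.100
```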
Nothing new here, we just mapped a port into a container that’s running in the default bridge mode. Now, let’s run a second container in mapped container mode. We’ll do that with this command…
docker run -d --name web2 --net=container:web1 jonlangemak/docker:web_container_8080
So we run our second container, but we specify that its network mode should be set to 'container' and then specify the container we want to map the new container (web2) into. Pretty easy, right? So let's hop into the web1 and web2 containers with docker exec and see what we have…
So as you can see, the network config on both containers looks identical. This should remind you of the host mode demo we did, where a container saw the exact same network interfaces as the host. Since the containers share network interfaces, it should make sense that they can communicate across them. Take, for instance, the loopback address 127.0.0.1. Let's run some curl commands on the container web1 to see if we can access services on web2…
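To see the shared stack for yourself, you can compare the interfaces from inside each container. A quick sketch, assuming the image ships the iproute2 tools…

```shell
# Show the interfaces as seen from inside each container;
# both commands should return the same interface list
docker exec web1 ip addr
docker exec web2 ip addr
```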
So first we curl to the loopback, which by default uses port 80. As you can see, I get a response from the container running Apache on port 80. Next I curl to the same loopback address on port 8080 and get a response from the container running Apache on port 8080. This brings up some interesting communication options for containers that will always run on the same host. Take, for instance, the web/app/db example. If a user will only ever talk to the web server, why do you need to expose the ports for the app server? The same goes for the app-to-db layer. If they can all communicate over the localhost interface, there isn't a need to expose their service ports externally.
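The curl tests described above could be reproduced with something like the following, assuming curl is available inside the container image…

```shell
# From inside web1, hit the shared loopback on both service ports
docker exec web1 curl -s http://127.0.0.1       # answered by Apache in web1
docker exec web1 curl -s http://127.0.0.1:8080  # answered by Apache in web2
```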
I'm sure you noticed, but we never mapped port 8080 on the container web2 to port 8080 on the docker host. Let's stop the container web2, destroy it, and run it again with the correct port mapping…
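The teardown and re-run would look something like this. Note that on the docker version used here the publish flag was silently ignored; newer docker releases may reject the combination of -p and --net=container outright…

```shell
# Tear down web2...
docker stop web2
docker rm web2

# ...and try to publish port 8080 while joining web1's network stack
docker run -d --name web2 -p 8080:8080 --net=container:web1 \
  jonlangemak/docker:web_container_8080
```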
Interesting, the port mapping didn't stick. This brings up an important point about mapped container network mode: host port mapping must be done first. That is, all of the port mapping needs to be done on the first container. Any container started in mapped container mode can't alter the host port mapping.
So let’s stop web1 and web2, do both port mappings, and then start both containers again…
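Putting that together, the full sequence might look like this. The key point is that both host port mappings land on web1, the container that owns the network stack…

```shell
# Remove both containers
docker stop web2 web1
docker rm web2 web1

# Do all of the host port mapping on the first (bridge mode) container
docker run -d --name web1 -p 80:80 -p 8080:8080 \
  jonlangemak/docker:web_container_80

# Join the second container to web1's network stack
docker run -d --name web2 --net=container:web1 \
  jonlangemak/docker:web_container_8080
```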
Now let’s check and see if we can get to both containers from the host IP…
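That check is just two curls against the docker host address (10.20.30.100 in my lab)…

```shell
# Both services should now answer on the docker host address
curl http://10.20.30.100        # web1 (port 80)
curl http://10.20.30.100:8080   # web2 (port 8080)
```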
Perfect, working as expected. In addition to showing off mapped container mode, this was also a good example of how docker network modes interoperate. The first container ran in bridge mode and the second container ran in mapped container mode. We can obtain very similar results by starting the first container in host mode and the second container in mapped container mode. The commands to do that would look like what's shown below and provide the exact same output as the tests above…
docker run -d --name web1 --net=host jonlangemak/docker:web_container_80
iptables -I INPUT 5 -p tcp -m tcp --dport 80 -j ACCEPT
iptables -I INPUT 5 -p tcp -m tcp --dport 8080 -j ACCEPT
docker run -d --name web2 --net=container:web1 jonlangemak/docker:web_container_8080
So this wraps up our look at the Docker-provided network modes. Up next, we're going to start looking at some of the more advanced, non-default Docker networking options.
Is this docker on docker?
Nope – just docker network namespaces being shared. Is that what you're asking?
Your docker-networking-101 series of articles is really good. I've learned a lot about docker networking here. :-)
And docker networking is so interesting.
Can I do something like this in container mode? Assume there are 2 application stacks (A & B) with 2 containers each:
App-A: Container-1 (C-1) running on port 80, Container-2 (C-2) mapped to C-1 and running on port 8080. This port 8080 is not exposed to the external world.
App-B: Container-100 (C-100) running on port 8000, Container-200 (C-200) mapped to C-100 but running on port 8080. This 8080 is also not exposed to the external world.
Note that both A & B have a container running on port 8080 that is not exposed to the external world.
Can they run on the same machine?
Is this something Kubernetes does?
In some sense, I'm creating an application stack and exposing only its front-end port.
Yes… this is a great series. Well documented. The question I have (maybe I missed it in part 2; I'll have to go revisit) is: if I want to map an IP to a DNS entry and always want that IP to connect to a given container (think OpenStack floating IPs), is there a way to forcefully tell a swarm master to bring up an IP on a docker1 or docker2 host, connect the interface that hosts that IP to the docker bridge, and use that IP for the container being run?
Oh, thanks for the nice trick!
Hi, I saw that --net=container has been replaced by the new network settings in docker. Is there any easy way to make a container route its network through another container using the new network settings?
Can you clarify what you mean by replaced?
Hi Jon, I'm running docker on my Synology NAS and it doesn't support the --net=container parameter, forcing me to choose between host, bridge, or a new network.
I read a few articles saying that --net=container was part of the old parameters (such as links) that are in the process of being deprecated by the newer network settings.
The issue is that although I have managed to get more than one container under the same network, I can't route one through another container like it was possible with --net=container.
OK, I think I got it all wrong. If I don't touch the GUI settings and run everything via the terminal it works properly, so it's an issue with the Synology Docker app, which removes the --net=container option when run via the app.
That's interesting. Sounds like it's just a Synology thing then? I know other apps use that mode, so I don't think it's going away anytime soon.