docker


As many of you have noticed, I've been neglecting the blog for the past few months. The main reason for this is that the majority of my free time was being spent generating content for a new book. I'm pleased to announce that the book, Docker Networking Cookbook, has now been released!

Here’s a brief description of the book…

“Networking functionality in Docker has changed considerably since its first release, evolving to offer a rich set of built-in networking features, as well as an extensible plugin model allowing for a wide variety of networking functionality. This book explores Docker networking capabilities from end to end. Begin by examining the building blocks used by Docker to implement fundamental container networking before learning how to consume built-in networking constructs as well as custom networks you create on your own. Next, explore common third-party networking plugins, including detailed information on how these plugins inter-operate with the Docker engine. Consider available options for securing container networks, as well as a process for troubleshooting container connectivity. Finally, examine advanced Docker networking functions and their relevant use cases, tying together everything you need to succeed with your own projects.”

The book is available from Packt, and I believe Amazon has it as well. If you happen to buy a copy, I would greatly appreciate any and all feedback you have. This is my first attempt at writing a book, so any critiques you can share would be really helpful.

A big thank you to all of the folks at Packt who made this possible and worked with me through the editing and publishing process. I'd also like to thank the technical reviewer, Francisco Souza, for his review.

Now that the book is published I look forward to spending my free time blogging again.  Thanks for hanging in there!


I thought it would be a good idea to revisit my last Kubernetes build, in which I was using Salt to automate the deployment. The setup worked well at the time, but much has changed with Kubernetes since I initially wrote those state files. With that in mind, I wanted to update them to make sure they work with Kubernetes 1.0 and above. You can find my Salt config for this build over at GitHub…

https://github.com/jonlangemak/saltstackv2

A couple of quick notes before we walk through how to use the repo…

-While I used the last version of this repo as a starting point, I've stripped it down to the basics (some of the auxiliary pods aren't here yet). I'll be adding to this constantly and intend to add a lot more functionality to the defined state files.
-All of the Kubernetes-related communication is unsecured – that is, it's all over HTTP. I've already started work on adding an option to use SSL if you so choose.

That being said, let’s jump right into how to use this.  My lab looks like this…

Here we have 3 hosts. K8stest1 will perform the role of the master while k8stest2 and k8stest3 will play the role of nodes, or minions. Each host will be running Docker and will have a routable network segment configured on its docker0 bridge interface. Your upstream layer 3 device will need to have static routes pointing each docker0 bridge network at the respective host's physical interface (192.168.127.100x). In addition to these 3 hosts, I also have a separate build server that acts as the Salt master and initiates the cluster build. That server is called 'kubbuild' and isn't pictured because it only plays a part in the initial configuration. So let's get right into the build…

In my case, the base lab configuration looks like this…

-All hosts are running CentOS 7.1 and are fully updated
-The 3 lab hosts (k8stest[1-3]) are configured as Salt minions and are reachable by the salt-master.  If you don’t know how to do that see the section of this post that talks about configuring the Salt master and minions.

The first thing you need to do is clone my repo onto your build server…
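If git is available on the build server, a simple clone will do. Where the files need to live depends on your Salt file_roots and pillar_roots; the '/srv/pillar/kube_data.sls' path referenced below assumes they end up under /srv…

git clone https://github.com/jonlangemak/saltstackv2.git
# copy or symlink the repo's contents into your Salt file_roots and pillar_roots
# (the later steps in this post assume they live under /srv)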

The next thing we want to do is download the Kubernetes binaries we need. In earlier posts we built them from scratch, but this time we're going to download them instead. All of the Kubernetes releases can be downloaded as a TAR file from GitHub. In this case, let's work off of the 1.1.7 release, so download this TAR file…
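At the time of writing, each Kubernetes release attaches a 'kubernetes.tar.gz' asset to its GitHub release page, so for 1.1.7 the download looks something like this (double-check the URL against the releases page)…

wget https://github.com/kubernetes/kubernetes/releases/download/v1.1.7/kubernetes.tar.gz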

Next we have to unpack this file, and another TAR file inside this one, to get to the actual binaries…
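Something like this – the inner server tarball path reflects the release layout of that era, so adjust if yours differs…

tar -xzf kubernetes.tar.gz
tar -xzf kubernetes/server/kubernetes-server-linux-amd64.tar.gz
# the server binaries land in kubernetes/server/bin/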

Next we move those extracted binaries to the correct place in the Salt folder structure…
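The exact destination depends on how the state files reference the binaries, so check the repo before copying; as a rough sketch (the 'kubebinaries' directory name is just a placeholder)…

mkdir -p /srv/salt/kubebinaries
cp kubernetes/server/bin/kube-apiserver \
   kubernetes/server/bin/kube-controller-manager \
   kubernetes/server/bin/kube-scheduler \
   kubernetes/server/bin/kubelet \
   kubernetes/server/bin/kube-proxy \
   kubernetes/server/bin/kubectl /srv/salt/kubebinaries/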

Alright – That’s the hardest part!  Now let’s go take a look at our Salt pillar configuration.  Take a look at the file ‘/srv/pillar/kube_data.sls’…

All you need to do is update this YAML file with your relevant configuration. The file is essentially a textual version of the network diagram described earlier. Keep in mind that you can add minions later by simply adding to this file – I'll demo that later on. Once you have it updated to match your configuration, let's make sure we can reach our Salt minions and then execute the state to push the configuration out…
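From the Salt master, that looks like this…

salt '*' test.ping          # verify all minions respond
salt '*' state.highstate    # push the full configuration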

Now sit back and enjoy a cup of coffee while Salt does its magic. When it's done, you should see the results of executing the states against the hosts you defined in the 'kube_data.sls' file…

If you scroll back up through all of the results, you will likely see that one section errored out on the master…

This is expected and is a result of the etcd container not coming up in time for the 'pods' state to work. The current fix is to wait until all of the Kubernetes master containers load and then execute the highstate again.

So let’s head over to our master server and see if things are working as expected…
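Assuming kubectl is in your path on the master and pointed at the local API server's insecure port, a quick check looks like this…

kubectl get nodes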

Perfect!  Our 2 nodes have been discovered.  Since we’re going to execute the Salt highstate again, let’s update the config to include another node…

Note: I’m assuming that the server k8stest4 has been added to the Salt master as a minion.

This run should provision the pods as well as the new Kubernetes node, k8stest4. So let's run the highstate again and see what we get…

When the run has finished, let’s head back to the master server and see how many nodes we have…

Perfect!  The Salt config works as expected.  At this point, we have a functioning Kubernetes cluster on our hands.  Let’s make sure everything is working as expected by deploying the guest book demo.  On the master, run this command…
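The guestbook example ships in the release tarball we downloaded earlier, so one way to launch it is to point kubectl at that directory (copy it to the master first if it isn't there; depending on the release you may also need to tweak the frontend service definition to expose it on a node port)…

kubectl create -f kubernetes/examples/guestbook/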

This will create the services and the replication controllers for the example and expose them on the node physical interfaces.  Take note of the port it’s using when you create the services…

Now we just need to wait for the containers to deploy.  Keep an eye on them by checking the pod status…
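For example…

kubectl get pods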

Once Kubernetes finishes deploying the containers, we should see them all listed as ‘Running’…

Now we can try and hit the guest book front end by browsing to a minion on the specified port…

The example should work as expected.  That’s it for now, much more to come soon!


In this post, I'd like to cover some of the new Docker network features. Docker 1.9 saw the release of user defined networks, and the most recent version, 1.10, added some additional features. We'll cover the basics of the new network model as well as show some examples of what these new features provide.

So what’s new?  Well – lots.  To start with, let’s take a look at a Docker host running the newest version of Docker (1.10). 

Note: I’m running this demo on CentOS 7 boxes.  The default repository had version 1.8 so I had to update to the latest by using the update method shown in a previous post here.  Before you continue, verify that ‘docker version’ shows you on the correct release.

You'll notice that the Docker CLI now provides a new set of options to interact with networks through the 'docker network' command…

Alright – so let’s start with the basics and see what’s already defined…
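Listing what's defined out of the box…

docker network ls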


By default, a base Docker installation has these three networks defined. The networks are permanent and cannot be modified. Taking a closer look, you'll likely notice that these predefined networks are the same as the network models we had in earlier versions of Docker. You could start a container in bridge mode by default, in host mode by specifying '--net=host', and without an interface by specifying '--net=none'. To level set – everything that was there before is still here. To make sure everything still works as expected, let's run through building a container under each network type.

Note: These exact commands were taken out of my earlier series of Docker networking 101 posts to show that the command syntax has not changed with the addition of multi-host networking. Those posts can be found here.

Host mode
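The exact image I used in the earlier posts isn't shown here, so as a sketch, with nginx:alpine standing in for the web server image…

docker run -d --net=host nginx:alpine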

Executing the above command will spin up a test web server with the container's network stack mapped directly to that of the host. Once the container is running, we should be able to access the web server through the Docker host's IP address…

Note: You either need to disable firewalld on the Docker host or add the appropriate rules for this to work.

Bridge mode
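Again as a sketch, with nginx:alpine standing in for the original web server image…

docker run -d --name web1 -p 8081:80 nginx:alpine
docker run -d --name web2 -p 8082:80 nginx:alpine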

Here we're running the default bridge mode and mapping ports into the containers. Running those two containers should give you the web servers you're looking for on ports 8081 and 8082…

In addition, if we connect to the containers directly, we can see that communication between the two containers occurs directly across the docker0 bridge, never leaving the host…

Here we can see that web1 has an ARP entry for web2. Looking at web2, we can see that the MAC address is identical…

None mode
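A sketch of the command, with nginx:alpine as the stand-in image and the exec simply showing what interfaces the container ends up with…

docker run -d --name web3 --net=none nginx:alpine
docker exec web3 ip addr    # only the loopback interface is present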

In this example we can see that the container doesn’t receive any interface at all…

As you can see, all three modes work just as they had in previous versions of Docker. So now that we've covered the existing network functions, let's talk about the new user defined networks…

User defined bridge networks
The easiest user defined network to use is the bridge.  Defining a new bridge is pretty easy.  Here’s a quick example…
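The flag values below match the attributes described next; this is the general shape of the command…

docker network create -d bridge \
  --gateway=192.168.127.1 \
  --subnet=192.168.127.0/24 \
  --ip-range=192.168.127.128/25 \
  testbridge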

Here I create a new bridge named ‘testbridge’ and provide the following attributes…

Gateway – In this case I set it to 192.168.127.1, which will be the IP of the bridge created on the Docker host. We can see this by looking at the Docker host's interfaces…

Subnet – I specified this as 192.168.127.0/24.  We can see in the output above that this is the CIDR associated with the bridge.

IP-range – If you wish to define a smaller range from which Docker can allocate container IPs, you can use this flag. The range you specify here must exist within the bridge subnet itself. In my case, I specified the second half of the defined subnet. When I start a container and assign it to this bridge, it will get an IP out of that range…
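For example (nginx:alpine standing in for the original image; remove any web1 left over from the earlier examples first)…

docker rm -f web1
docker run -d --name web1 --net=testbridge -p 8081:80 nginx:alpine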

Our new bridge acts much like the docker0 bridge.  Ports can be mapped to the physical host in the same manner.  In the above example, we mapped port 8081 to port 80 in the container.  Despite this container being on a different bridge, the connectivity works all the same…


We can make this example slightly more interesting by removing the existing container, removing 'testbridge', and redefining it a bit differently…
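The rebuild keeps the same attributes and simply adds the '--internal' flag…

docker rm -f web1
docker network rm testbridge
docker network create -d bridge --internal \
  --gateway=192.168.127.1 \
  --subnet=192.168.127.0/24 \
  --ip-range=192.168.127.128/25 \
  testbridge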

The only change here is the addition of the '--internal' flag. This prevents any external communication from the bridge. Let's check this out by defining the container like this…
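Same container definition as before…

docker run -d --name web1 --net=testbridge -p 8081:80 nginx:alpine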

You’ll note that in this case, we can no longer access the web server container through the exposed port…

It's obvious that the '--internal' flag prevents containers attached to the bridge from talking outside of the host. So while we can now define new bridges and associate newly spawned containers with them, that by itself is not terribly interesting. What would be more interesting is the ability to connect existing containers to these new bridges. As luck would have it, we can use the docker network 'connect' and 'disconnect' commands to add and remove containers from any defined bridge. Let's start by attaching the container web1 to the default docker0 bridge (bridge)…
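Attaching the running web1 container to the default bridge network…

docker network connect bridge web1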

If we look at the network configuration of the container, we can see that it now has two NICs: one associated with 'bridge' (the docker0 bridge) and another associated with 'testbridge'…

If we check again, we’ll see that we can now once again access the web server through the mapped port across the ‘bridge’ interface…

Next, let’s spin up our web2 container, and attach it to our default docker0 bridge…
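Again with nginx:alpine as the stand-in image (remove any web2 left over from the earlier examples first)…

docker rm -f web2
docker run -d --name web2 -p 8082:80 nginx:alpine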

Before we get too far – let’s take a logical look at where we stand…

We have a physical host (docker1) with a NIC called 'ens0' which sits on the physical network with the IP address 10.20.30.230. That host has two Linux bridges, 'bridge' (docker0) and 'testbridge', each with its own defined IP address. We also have two containers: web1, which is associated with both bridges, and web2, which is associated with only the native Docker bridge.

Given this diagram, you might assume that web1 and web2 would be able to communicate directly with each other since they are connected to the same bridge. However, if you recall our earlier posts, Docker has something called ICC (Inter-Container Communication) mode. When ICC is set to false, containers can't communicate with each other directly across the docker0 bridge.

Note: There’s a whole section on ICC and linking down below so if you don’t recall don’t worry!

In the case of this example, I have set ICC mode to false, meaning that web1 cannot talk to web2 across the docker0 bridge unless we define a link. However, ICC mode only applies to the default bridge (docker0). If we connect both containers to the bridge 'testbridge', they should be able to communicate directly across that bridge. Let's give it a try…
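web1 is already attached to 'testbridge', so we just need to connect web2 as well…

docker network connect testbridge web2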

So let’s try from the container and see what happens…

Success.  User defined bridges are pretty easy to define and map containers to.  Before we move on to user defined overlays, I want to briefly talk about linking and how it’s changed with the introduction of user defined networks.

Container Linking
Docker linking has been around since the early versions and was commonly mistaken for some kind of network feature or function. In reality, it has very little to do with network policy, particularly in Docker's default configuration. Let's take a quick look at how linking worked before user defined networks.

In a default configuration, Docker has the ICC value set to true. In this mode, all containers on the docker0 bridge can talk directly to each other on any port they like. We saw this in action earlier with the bridge mode example, where web1 and web2 were able to ping each other. If we change the default configuration and disable ICC, we'll see a different result. For instance, if we change the ICC value to 'false' in '/etc/sysconfig/docker', we'll notice that the above example no longer works…
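On my CentOS hosts that means adding the flag to the OPTIONS line in '/etc/sysconfig/docker' and restarting the service (your existing OPTIONS may differ; '--selinux-enabled' is just what ships by default)…

OPTIONS='--selinux-enabled --icc=false'
systemctl restart docker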

If we want web2 to be able to access web1 with ICC disabled, we can 'link' the containers. Linking a container to another container allows them to talk to each other on the containers' exposed ports.
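A link is specified when the consuming container is started, so as a sketch we recreate web2 with a link pointing at web1 (nginx:alpine standing in again)…

docker rm -f web2
docker run -d --name web2 -p 8082:80 --link web1:web1 nginx:alpine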

Above, you can see that once the link is in place, I can't ping web1 from web2, but I can access web1 on its exposed port. In this case, that port is 80. So linking with ICC disabled only allows linked containers to talk to each other on their exposed ports. This is the only way in which linking intersects with network or security policy. The other feature linking gives you is name and service resolution. For instance, let's look at the environment variables on web2 once we link it to web1…


In addition to the environment variables, you'll also notice that web2's hosts file has been updated to include the IP address of the web1 container. This means that I can now access the container by name rather than by IP address. As you can see, linking in previous versions of Docker had its uses, and that same functionality is still available today.

That being said, user defined networks offer a pretty slick alternative to linking. So let's go back to our example above where web1 and web2 are communicating across the 'testbridge' bridge. At this point, we haven't defined any links at all, but let's try pinging web2 by name from web1…
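For example…

docker exec -it web1 ping -c 2 web2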

Ok – so that's pretty cool, but how is that working? Let's check the environment variables and the hosts file on the container…

There's nothing here that would statically map the name web2 to an IP address. So how is this working? Docker now has an embedded DNS server. Container names are registered with the Docker daemon and are resolvable by any other container on the same host. However – this functionality ONLY exists on user defined networks. You'll note that the above ping returned an IP address associated with 'testbridge', not the default docker0 bridge.

That means I no longer need to statically link containers together in order for them to communicate by name. In addition to this automatic behavior, you can also define global aliases and links on user defined networks. For example, now try running these two commands…
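The two commands look something like this – 'webone' is a placeholder alias name, since the original wasn't shown…

docker network disconnect testbridge web1
docker network connect --link=web2:webtwo --alias=webone testbridge web1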

Above, we removed web1 from 'testbridge' and then re-added it, specifying a link and an alias. When used with user defined networks, the link flag functions much the same way as it did in the legacy linking method: web1 will be able to resolve the container web2 either by its name or by its linked alias 'webtwo'. In addition, user defined networks also provide what are referred to as 'network-scoped aliases'. These aliases can be resolved by any container on the user defined network segment. So whereas links are defined by the container that wishes to use the link, aliases are defined by the container advertising the alias. Let's log into each container and try pinging via the link and the alias…

In the case of web1, it’s able to ping both the defined link as well as the alias name.  Let’s try web2…

So we can see that links used with user defined networks are locally significant to the container. On the other hand, aliases are associated with a container when it joins a network and are globally resolvable by any container on that same user defined network.

User defined overlay networks
The second and last built-in user defined network type is the overlay. Unlike the bridge type, the overlay requires an external key-value store in order to hold information such as networks, endpoints, IP addresses, and discovery data. In most examples that key-value store is Consul, but it can also be etcd or ZooKeeper. So let's look at the lab we're going to be using for this example…
Here we have 3 Docker hosts.  Docker1 and docker2 live on 10.20.30.0/24 and docker3 lives on 192.168.30.0/24.  Starting with a blank slate, all of the hosts have the docker0 bridge defined but no other user defined network or containers running.

The first thing we need to do is tell all of the Docker hosts where to find the key-value store. This is done by editing the Docker configuration settings in '/etc/sysconfig/docker' and adding some option flags. In my case, my 'OPTIONS' now look like this…
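A sketch of the OPTIONS line – the Consul address assumes it will run on docker1 (10.20.30.230 in the earlier diagram), and the advertise interface should match each host's NIC name…

OPTIONS='--selinux-enabled --cluster-store=consul://10.20.30.230:8500 --cluster-advertise=ens0:2376'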

Make sure that you adjust your options to account for the IP address of the Docker host running Consul and the interface name defined under the ‘cluster-advertise’ flag.  Update these options on all hosts participating in the cluster and then make sure you restart the Docker service.

Once Docker is back up and running, we need to deploy the aforementioned key-value store for the overlay driver to use. As luck would have it, Consul is available as a container, so let's deploy that container on docker1…
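A single-node Consul server is plenty for a lab. The progrium/consul image was a common way to run one at the time (an assumption on my part – any Consul instance reachable on port 8500 will do)…

docker run -d --name consul -p 8500:8500 progrium/consul -server -bootstrap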

Once the Consul container is running, we’re all set to start defining overlay networks.  Let’s go over to the docker3 host and define an overlay network…
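The network name and subnet below are placeholders; the important part is the overlay driver…

docker network create -d overlay --subnet=172.16.16.0/24 testoverlay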

Now if we look at docker1 or docker2, we should see the new overlay defined…

Perfect, so things are working as expected.  Let’s now run one of our web containers on the host docker3…
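On docker3, with nginx:alpine standing in for the original image and 'testoverlay' being the placeholder network name from above…

docker run -d --name web1 --net=testoverlay -p 8081:80 nginx:alpine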

Note: Unlike bridges, overlay networks do not create the required interfaces on a Docker host until a container on that host uses the network. Don't be surprised if you don't see these interfaces the instant you create the network.

Nothing too exciting here.  Much like our other examples, we can now access the web server by browsing to the host docker3 on port 8081…

Let’s fire up the same container on docker2 and see what we get…

So it seems that container names across a user defined overlay must be unique – the run fails because the name web1 is already registered on the overlay. This makes sense, so let's instead load the second web instance on this host…
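On docker2 (the 8082 mapping is just an example)…

docker run -d --name web2 --net=testoverlay -p 8082:80 nginx:alpine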

Once this container is running, let’s test the overlay by pinging web1 from web2…
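From docker2…

docker exec -it web2 ping -c 2 web1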

Very cool.  If we look at the physical network between docker2 and docker3 we’ll actually see the VXLAN encapsulated packets traversing the network between the two physical docker hosts…

It should be noted that there isn't a bridge associated with the overlay itself. However, there is a bridge defined on each host which can be used for mapping ports of the physical host to containers that are members of an overlay network. For instance, let's look at the interfaces defined on the host docker3…

Notice that there’s a ‘docker_gwbridge’ bridge defined.  If we look at the interfaces of the container itself, we see that it also has two interfaces…

Eth0 is a member of the overlay network, but eth1 is a member of the gateway bridge we saw defined on the host. If you need to expose a port from a container on an overlay network, that's done through the 'docker_gwbridge' bridge. However, much like the user defined bridge, you can prevent external access by specifying the '--internal' flag during network creation. This prevents the container from receiving an additional interface associated with the gateway bridge. It does not, however, prevent the 'docker_gwbridge' from being created on the host.

Since our last example is going to use an internal overlay, let’s delete the web1 and web2 containers as well as the overlay network and rebuild the overlay network using the internal flag…
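A sketch of the teardown and rebuild – run each container command on the host it lives on; the names and subnet are still placeholders…

docker rm -f web1                 # on docker3
docker rm -f web2                 # on docker2
docker network rm testoverlay
docker network create -d overlay --internal --subnet=172.16.16.0/24 testoverlay
docker run -d --name web1 --net=testoverlay nginx:alpine    # on docker3
docker run -d --name web2 --net=testoverlay nginx:alpine    # on docker2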

So now we have two containers, each with a single interface on the overlay network.  Let’s make sure they can talk to each other…

Perfect, the overlay is working.  So at this point – our diagram sort of looks like this…

Not very exciting at this point, especially considering we have no means to access the web server running on either of these containers from the outside world. To remedy that, let's deploy a load balancer on docker1. We're going to use HAProxy, so our first step will be coming up with a config file. The sample I'm using looks like this…
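A minimal config along those lines might look like this – only the backend server names and addresses come from the description below; the rest is boilerplate you'd adjust to taste…

global
    maxconn 256

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend http-in
    bind *:80
    default_backend webservers

backend webservers
    balance roundrobin
    server web1 web1:80 check
    server web2 web2:80 check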

For the sake of this test, let's just focus on the backend section, which defines two servers: one called web1 that's accessible at the address web1:80, and a second called web2 that's accessible at the address web2:80. Save this config file onto your host docker1; in my case, I put it in '/root/haproxy/haproxy.cfg'. Then we just fire up the container with this syntax…
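Assuming the official haproxy image, which reads its config from /usr/local/etc/haproxy/haproxy.cfg (the image I originally used isn't shown, so treat this as a sketch)…

docker run -d --name haproxy --net=testoverlay -p 80:80 \
  -v /root/haproxy:/usr/local/etc/haproxy:ro haproxy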

After this container kicks off, our topology now looks more like this…

So while the HAProxy container can now talk to both the backend servers, we still can’t talk to it on the frontend.  If you recall, we defined the overlay as internal so we need to find another way to access the frontend.  This can easily be achieved by connecting the HAProxy container to the native docker0 bridge using the network connect command…
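For example…

docker network connect bridge haproxy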

Once this is done, you should be able to hit the front end of the HAProxy container by hitting the docker1 host on port 80 since that’s the port we exposed.

And with any luck, you should see the HAProxy node load balancing requests between the two web server containers. Note that this also could have been accomplished by leaving the overlay network as an external (non-internal) network. In that case, the port 80 mapping we did with the HAProxy container would have been exposed through the 'docker_gwbridge' bridge, and we wouldn't have needed to add a second bridge interface to the container. I did it this way just to show you that you have options.

Bottom line – Lots of new features in the Docker networking space.  I’m hoping to get another blog out shortly to discuss other network plugins.  Thanks for reading!

