Kubernetes 101 – The build


NOTE: Kubernetes has evolved! These directions are no longer entirely accurate because of this.  If you’re looking for current info on deploying Kubernetes, please reference the related config files in my GitHub salt repo.  I’ll be doing my best to keep those as up to date as possible.

In this series of posts we’re going to tackle deploying a Kubernetes cluster.  Kubernetes is the open source container cluster manager that Google released some time ago.  In short, it’s a way to treat a large number of hosts as a single compute instance that you can deploy containers against.  While the system itself is pretty straightforward to use, the install and initial configuration can be a little daunting if you’ve never done it before.  The other reason I’m writing this is that I had a hard time finding all of the pieces needed to build a bare metal kubernetes cluster.  Most of the other blogs you’ll read use some mix of an overlay (Weave or Flannel), so I wanted to document a build that used bare metal hosts along with non-overlay networking.

In this first post we’ll deal with getting things running.  This includes downloading the actual code from github, building it, deploying it to your machines, and configuring the services.  In the following posts we’ll actually start deploying pods (we’ll talk about what those are later on), discuss the deployment model, and dig into how Kubernetes handles container networking.  That being said, let’s jump right into what our topology is going to look like.

Our lab topology will look like this…

So we have a total of 6 hosts.  5 of these hosts will be used for the kubernetes cluster while the 6th one will be used for downloading, building, and distributing the kubernetes code to the cluster.  The network topology is fairly basic, with a multilayer switch (MLS) supplying connectivity for two distinct subnets.  The MLS also acts as the default gateway for the subnets, providing access out to the internet (not pictured).

Note: All of the hosts are starting with a basic configuration.  This includes installing CentOS 7 minimal, configuring a static IP address, configuring a DNS server, and doing a full update. 

Building kubbuild
So let’s dig right in.  The first thing we need to do is prepare the kubbuild host.  Let’s start by downloading the tools we’ll need to build and distribute the code and configuring the services required…
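In rough strokes, that prep looks like this (assuming a stock CentOS 7 box; adjust the package names if your environment differs):

```shell
# git to clone the source, docker for the build container,
# and httpd to serve the finished binaries to the other hosts
yum install -y git docker httpd

# Start both services and enable them at boot
systemctl enable docker httpd
systemctl start docker httpd
```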

So at this point, we have a fairly basic system running Apache and docker.  You might be wondering why we’re running docker on this system.  The kubernetes team came up with a pretty cool way to build the kubernetes binaries.  Rather than having you build a system with all of the required tools (namely Go), the build script downloads a golang container that the build process runs inside of.  Pretty cool, huh?

Let’s get started on the kubernetes build by copying the kubernetes repo from github…
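The clone itself is a one-liner (the repo lived under GoogleCloudPlatform when this was written; it has since moved to the kubernetes org):

```shell
# Clone the kubernetes source into the root folder
cd /
git clone https://github.com/GoogleCloudPlatform/kubernetes.git
```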

This will copy the entire repository down from github.  If you want to put the repository somewhere specific, make sure you cd over to that directory before running the git clone.  Since this machine is being used just for this explicit purpose, I’m ok with the repository being right in the root folder.

Once the clone completes we can build the code.  Like I mentioned, the kubernetes team offers several build scripts that you can use to generate the required binaries to run kubernetes.  We’re going to use the ‘build-cross’ script that builds all the binaries for all of the platforms.  There’s more info on github about the various build scripts here.
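The kickoff looks something like this.  The script name and location have shifted between releases, so treat this as an approximation and check the build directory of your clone:

```shell
cd /kubernetes
# Run the cross-platform build; this happens inside the golang container
./build/make-cross.sh
```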

This will kick off the build process.  It will make sure you have docker installed and then prompt you to download the golang container if you don’t already have it.  This piece of the build can take some time.  The golang container is over 400 meg and the build process itself takes some time to complete.  Go get a cup of coffee and wait for this step to complete…

When the build is complete, you should get some output indicating that the build was successful.  The next step is to get the binaries to all of the kubernetes nodes.  To do this, I’ll copy the files via http from kubbuild over to each of the hosts.

Note: There are probably better or easier ways to do this.  It doesn’t really matter how you do it so long as you get the correct binaries on the correct machines.

So the first thing I’ll do is copy the binaries into my http root.  The compiled binaries we’re looking for should be in the /kubernetes/_output/dockerized/bin/linux/amd64 directory.  Let’s go in there and see what we have…

So we can see the build script generated a series of binaries.  These are the actual executable service files that each host will need to run kubernetes.  Let’s get them copied over to the http root so the other servers can copy them down…
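That copy is a single cp, assuming the default Apache document root:

```shell
# Copy the compiled binaries into the Apache document root
cp /kubernetes/_output/dockerized/bin/linux/amd64/* /var/www/html/
```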

So now we have the kubernetes binaries somewhere where the other servers can pull them down from.  In addition to pulling down the binaries, the servers will also need some systemd unit files to run the kubernetes services.  The ones I use in this lab are available on my github account and can be downloaded like this…
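Something along these lines; the repo name here is a placeholder, so grab the real URL from my github account:

```shell
# Clone the unit files into the http root so the other hosts can pull them
cd /var/www/html
git clone https://github.com/jonlangemak/kubernetes-unit-files.git kube-config
```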

We’ll walk through all of the service files when we build the kubernetes nodes.  So let’s move on to building our first kubernetes node, kubmasta.

Building kubmasta
Kubmasta is going to be the kubernetes control node.  In most deployments there will be more than one of these but for now we’ll start with one just to get our feet wet.  Kubmasta could also act as a minion but in this lab we’ll be keeping all of the minion services off of kubmasta.  So let’s log into kubmasta and start the config…

Let’s start by getting some of the base services installed and configured…
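A sketch of that, with etcd coming from the CentOS repos and the kubernetes binaries pulled from kubbuild over http (the kubbuild hostname and the /opt/kubernetes path are my conventions; adjust to taste):

```shell
# etcd isn't produced by the kubernetes build, so install it from the repos
yum install -y etcd wget

# Pull the master binaries down from kubbuild
mkdir -p /opt/kubernetes
cd /opt/kubernetes
for bin in kube-apiserver kube-controller-manager kube-scheduler kubectl; do
    wget http://kubbuild/$bin
    chmod +x $bin
done

# Put kubectl in the path so we can manage the cluster from here
ln -s /opt/kubernetes/kubectl /usr/local/bin/kubectl
```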

So at this point we have all the appropriate binaries on kubmasta; however, the system doesn’t really know what to do with them.  Since this is a CentOS 7 box, it uses systemd for service initialization.  That being said, we need to define service files for each service so systemd can manage them correctly.  Luckily for you, I’ve done that already.  If you cloned my repository onto kubbuild as shown above, you can now copy the required service files down to kubmasta.  Let’s copy them and take a quick look at the basic structure…
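The copy itself is just another set of http pulls, assuming the unit files sit in a kube-config folder under the http root on kubbuild:

```shell
# Pull the master unit files down into systemd's directory
cd /etc/systemd/system
for unit in etcd kube-apiserver kube-controller-manager kube-scheduler; do
    wget http://kubbuild/kube-config/$unit.service
done
```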

So let’s look at each service file…
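First up is etcd.  A minimal unit for it looks something like this (the ExecStart path is an assumption; point it at wherever your etcd binary landed).  I stage the file locally and copy it into /etc/systemd/system once I’m happy with it:

```shell
# Stage a minimal etcd unit file; copy it to /etc/systemd/system when done
mkdir -p units
cat > units/etcd.service <<'EOF'
[Unit]
Description=Etcd key-value store for kubernetes
After=network.target

[Service]
ExecStart=/usr/bin/etcd

[Install]
WantedBy=multi-user.target
EOF
```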

A service file just tells systemd what to do.  Each service is also called a unit.  In this case, the unit file has 3 sections.  The unit section describes the name of the service along with any service requirements.  In this case, we give the unit a description and tell systemd that this service can start after the network.target starts.  The service section describes what we want the service to do.  The big item to pay attention to here is the ‘ExecStart’ command which tells systemd what process to run.

Note: I owe a blog entry (or two) dedicated just to systemd.  I’m trying to cover enough to get you going without getting stuck in the weeds here.  For now, you can just use my unit files, but I’ll be writing a post later that talks specifically about what each and every item in the unit files does and dives deeper into targets.

The last section is the Install section.  This tells systemd when to launch this process.  In our case, we’re telling systemd that this unit is wanted by the multi-user.target.  For now, just know that this is the equivalent of enabling the service to run at system boot.

So there really wasn’t anything special about the etcd unit.  Let’s take a look at the next unit file for the API server…
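Mine looks roughly like this.  The flag names match the kube-apiserver builds of this vintage, and the portal_net value is just a placeholder subnet, so verify both against your own build with ‘--help’:

```shell
# Stage the API server unit; copy it to /etc/systemd/system when done.
# The portal_net subnet is a placeholder -- pick one you aren't using.
mkdir -p units
cat > units/kube-apiserver.service <<'EOF'
[Unit]
Description=Kubernetes API Server
After=etcd.service
Wants=etcd.service

[Service]
ExecStart=/opt/kubernetes/kube-apiserver \
    --address=0.0.0.0 \
    --port=8080 \
    --etcd_servers=http://127.0.0.1:4001 \
    --portal_net=10.100.0.0/16

[Install]
WantedBy=multi-user.target
EOF
```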

Ok, so this one is a little more interesting.  Let’s focus on the service section of the unit file.  Note that we’re passing quite a few flags to the service when it runs.  If we want to see what these flags are for, we can take a look at the kube-apiserver binary itself.  Let’s see if we can correlate these…

So these descriptions make this pretty straightforward.  We’re defining the address that the API server should listen on as 0.0.0.0, which means all the available interfaces.  We specify the port as 8080, the location of the etcd server (local), and we define a portal net.  The only oddball there is the portal net, but we’ll be covering that later when we talk about services.

Note: The kubernetes team has done a great job with the command documentation.  If you want to see the options you can pass to any of the other binaries, you can do the same thing I did above by passing the ‘--help’ flag to the binary to see the flags you can specify at runtime.

So let’s look at the next service definition for the controller manager…
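Roughly like so; the minion IP addresses are placeholders, so substitute the addresses of your own four minions:

```shell
# Stage the controller manager unit. The minion IPs below are
# placeholders -- substitute the addresses of your own minions.
mkdir -p units
cat > units/kube-controller-manager.service <<'EOF'
[Unit]
Description=Kubernetes Controller Manager
After=kube-apiserver.service
Wants=kube-apiserver.service

[Service]
ExecStart=/opt/kubernetes/kube-controller-manager \
    --master=127.0.0.1:8080 \
    --machines=192.168.10.61,192.168.10.62,192.168.10.63,192.168.10.64

[Install]
WantedBy=multi-user.target
EOF
```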

In the case of the controller manager service we pass the service two variables.  First we tell it who the master is, in this case, that’s the localhost.  Secondly, we tell it where the minions are.  In my case, I define IP addresses for each of the 4 minion machines.  Notice how some of the unit definitions for the services depend on other services as well.  In this case, we’re telling systemd not to start the controller manager until the API server is running.

The last service on the kubmasta is the scheduler service.  The definition is pretty basic…
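Something like this:

```shell
# Stage the scheduler unit; it only needs to know where the master is
mkdir -p units
cat > units/kube-scheduler.service <<'EOF'
[Unit]
Description=Kubernetes Scheduler
After=kube-apiserver.service
Wants=kube-apiserver.service

[Service]
ExecStart=/opt/kubernetes/kube-scheduler --master=127.0.0.1:8080

[Install]
WantedBy=multi-user.target
EOF
```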

Here we just tell the scheduler where the master is.  Again, that’s currently just the localhost.

So now that we have all of our services defined, we can tell systemd to enable and run the services.  We do so with this set of commands…
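That sequence amounts to a daemon-reload followed by an enable and start of each service:

```shell
# Reload systemd so it sees the new unit files, then enable and start them
systemctl daemon-reload
for svc in etcd kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl enable $svc
    systemctl start $svc
done
```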

Once that’s done, kubmasta should be operational.  Let’s check and see if it sees any minion machines yet…
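The check is a single kubectl call (in the kubernetes of this era the nodes were literally called minions):

```shell
# Ask the API server which minions have registered
kubectl get minions
```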

Success!  So we’re successfully interacting with kubernetes.  Granted, none of the minions are running yet, but things are looking good.  Now let’s move on to building the first minion.

Building kubminion1
The process for building the minions is rather similar to the process we used for building the kubmasta with the exception of what services we’re running.  The minions only need to run two services, kube-proxy and kubelet.  Let’s look at the entire build script up front and then dive into the specifics…
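A condensed sketch of the minion build, with the same caveats as before (the kubbuild hostname and the kube-config path under the http root are my conventions; adjust to match your setup):

```shell
# Install docker and wget
yum install -y docker wget

# Pull the minion binaries down from kubbuild
mkdir -p /opt/kubernetes
cd /opt/kubernetes
for bin in kubelet kube-proxy; do
    wget http://kubbuild/$bin
    chmod +x $bin
done

# Pull the minion unit files and the replacement docker config
cd /etc/systemd/system
wget http://kubbuild/kube-config/kubelet.service
wget http://kubbuild/kube-config/kube-proxy.service
wget -O /etc/sysconfig/docker http://kubbuild/kube-config/docker

# Enable and start everything
systemctl daemon-reload
systemctl enable docker kubelet kube-proxy
systemctl start docker kubelet kube-proxy
```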

So this should all look pretty familiar.  Let’s run this on kubminion1 and then take a look at the unit files for the services…
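The kubelet unit looks roughly like this; the minion and kubmasta IP addresses are placeholders:

```shell
# Stage the kubelet unit. The IPs below are placeholders for this
# minion's address and for kubmasta (where etcd runs).
mkdir -p units
cat > units/kubelet.service <<'EOF'
[Unit]
Description=Kubernetes Kubelet
After=network.target

[Service]
ExecStart=/opt/kubernetes/kubelet \
    --address=192.168.10.61 \
    --port=10250 \
    --etcd_servers=http://192.168.10.60:4001

[Install]
WantedBy=multi-user.target
EOF
```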


Here we see that we define the IP and the port for the kubelet service as well as tell it where the etcd server (kubmasta) is.  Pretty straight forward.  Now let’s check out the kube-proxy unit file…
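Again, the kubmasta IP is a placeholder:

```shell
# Stage the kube-proxy unit; it only needs to know where etcd lives
mkdir -p units
cat > units/kube-proxy.service <<'EOF'
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
ExecStart=/opt/kubernetes/kube-proxy --etcd_servers=http://192.168.10.60:4001

[Install]
WantedBy=multi-user.target
EOF
```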

Nothing special here either.  We just tell it where the etcd server is.  You probably noticed that we also copied down a docker config file.  Notice that since we installed docker, it’s configured to use a config file located in ‘/etc/sysconfig’.  The file we copied down overwrites the existing config file and looks like this…
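A sketch of it, staged locally before it gets dropped onto ‘/etc/sysconfig/docker’; the --bip value is minion-specific (each minion gets its own docker0 subnet), so the subnet shown here is only a placeholder:

```shell
# Stage a replacement for /etc/sysconfig/docker.
# The --bip subnet is a per-minion placeholder.
cat > docker-sysconfig <<'EOF'
OPTIONS=--bip=10.10.10.1/24 --iptables=false --ip-masq=false
EOF
```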

Now this is sort of interesting.  We added 3 items to the docker configuration.  We set the IP of the docker0 bridge, we tell docker not to use iptables, and we disable the default mechanism of hiding all the docker0 bridge traffic behind the host IP.  We’ll talk more about why this is required later on, but based on the rest of the minion docker config, we’ll see that our network config now looks like this…


Note: I’m going to talk about the network configuration in MUCH greater depth later on.  However, for now, we’ll add the following routes to our MLS to make sure that the docker0 bridge subnets are reachable…

ip route
ip route
ip route
ip route

So now that we have minion1 configured and the services enabled, let’s go back to kubmasta and see if it sees our minion…

Success!  The next step is to configure the remaining 3 minions.

Building kubminion{2,3,4}

The build scripts to do so are located below…

Build script for kubminion2

Build script for kubminion3

Build script for kubminion4

When all of your minions have been built, you should be able to head back to kubmasta and see them all online…

Awesome!  You’ve built your first kubernetes cluster!  To make sure the communication is working as expected, let’s do a quick container deployment against the cluster.  I’ll do so with this command…
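For example, something like this; the web1 name and the nginx image are placeholders (any image that serves http on port 80 will do), and ‘run-container’ was the kubectl verb of this era:

```shell
# Deploy a single nginx container against the cluster
kubectl run-container web1 --image=nginx --replicas=1 --port=80
```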

Now let’s check and see what kubernetes thinks is running…

Alright, so it told the cluster to deploy the container to kubminion2 and it assigned the container an IP address.  Let’s log into kubminion2 and see what it’s doing…

So it’s downloading the image; let’s give it a minute to finish the download and see what’s running…

So the image is downloaded and running.  Let’s try hitting that IP address that kubernetes assigned to the container and see what happens…


Just what we expect to see.  So it appears that kubernetes is running as it should be.  For now, use the following commands to clean up the cluster.
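Cleanup amounts to removing the replication controller (and with it the pod) behind your test deployment.  The exact verbs vary by release, and web1 is a placeholder name:

```shell
# Drop the replica count to zero so the pod goes away, then delete the rc
kubectl resize --replicas=0 replicationcontrollers web1
kubectl delete replicationcontrollers web1
```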

Keep in mind that we haven’t even scratched the surface in this post but we should now have a solid base to continue building on.  In the following posts we’ll start tearing into the kubernetes constructs and how they map into the cluster.


  1. timo

    thank you for the very interesting article. cannot wait for the next post on kubernetes!

    you create symbolic links
    ln -s /opt/kubernetes/kubectl /usr/local/bin/
    on minions, but don’t download kubectl to those machines. is that intentional?

    for minion1 there is a typo
    “We set the bridge IP of the docker0 bridge with the IP of”
    obv the IP should be


    1. Jon Langemak

      I’m writing the next one right now so it’s coming up shortly. Yes, for now I only want to control the cluster from kubmasta (that’s what kubectl is used for) so I didn’t copy it down to the minions. Making the symlink on the minions was a mistake. Thanks for spotting that! I’ve fixed that and the IP issue you caught. Thanks for reading!


    2. L Chang

      Thank you for sharing the info!


    3. Dinesh

      For the latest versions, kube-controller-manager does not support the machines flag… can you explain how to get through this?


    4. Jack47

      There is a little mistake in the network config diagram. The `kubbuild` machine’s ip should be ``


