Kubernetes 101 – The build

NOTE: Kubernetes has evolved! Because of that, these directions are no longer entirely accurate.  If you’re looking for current info on deploying Kubernetes, please reference the related config files in my GitHub Salt repo.  I’ll be doing my best to keep those as up to date as possible.

In this series of posts we’re going to tackle deploying a Kubernetes cluster.  Kubernetes is the open source container cluster manager that Google released some time ago.  In short, it’s a way to treat a large number of hosts as a single compute instance that you can deploy containers against.  While the system itself is pretty straightforward to use, the install and initial configuration can be a little daunting if you’ve never done it before.  The other reason I’m writing this is because I had a hard time finding all of the pieces needed to build a bare metal kubernetes cluster.  Most of the other blogs you’ll read use some sort of overlay (Weave or Flannel), so I wanted to document a build that used bare metal hosts along with non-overlay networking.

In this first post we’ll deal with getting things running.  This includes downloading the actual code from github, building it, deploying it to your machines, and configuring the services.  In the following posts we’ll actually start deploying pods (we’ll talk about what those are later on), discuss the deployment model, and dig into how Kubernetes handles container networking.  That being said, let’s jump right into what our topology is going to look like.

Our lab topology will look like this…

[Image: lab topology diagram]
So we have a total of 6 hosts.  5 of these hosts will be used for the kubernetes cluster while the 6th will be used for downloading, building, and distributing the kubernetes code to the cluster.  The network topology is fairly basic, with a multilayer switch (MLS) supplying connectivity for two distinct subnets.  The MLS also acts as the default gateway for the subnets, providing access out to the internet (not pictured).

Note: All of the hosts are starting with a basic configuration.  This includes installing CentOS 7 minimal, configuring a static IP address, configuring a DNS server, and doing a full update. 

Building kubbuild
So let’s dig right in.  The first thing we need to do is prepare the kubbuild host.  Let’s start by downloading the tools we’ll need to build and distribute the code and configuring the services required…

!Disable and stop the firewalld service 
systemctl disable firewalld
systemctl stop firewalld

!Download git, docker, and Apache web server
yum install -y git docker httpd

!Configure docker to start on boot and start the service
systemctl enable docker
systemctl start docker

!Configure Apache to start on boot and start the service
systemctl enable httpd
systemctl start httpd

So at this point, we have a fairly basic system running Apache and docker.  You might be wondering why we’re running docker on this system.  The kubernetes team came up with a pretty cool way to build the kubernetes binaries.  Rather than having you build a system that has all of the required tools (namely Go), the build script downloads a golang container that the build process runs inside of.  Pretty cool huh?

Let’s get started on the kubernetes build by cloning the kubernetes repo from github…

git clone https://github.com/GoogleCloudPlatform/kubernetes.git

This will copy the entire repository down from github.  If you want to put the repository somewhere specific, make sure you cd over to that directory before running the git clone.  Since this machine is just being used for this explicit purpose, I’m ok with the repository being right in root’s home directory.

Once the clone completes we can build the code.  Like I mentioned, the kubernetes team offers several build scripts that you can use to generate the required binaries to run kubernetes.  We’re going to use the ‘build-cross’ script that builds all the binaries for all of the platforms.  There’s more info on github about the various build scripts here.

cd kubernetes/build
./run.sh hack/build-cross.sh

This will kick off the build process.  It will make sure you have docker installed and then prompt you to download the golang container if you don’t already have it.  This piece of the build can take a while; the golang container is over 400 meg and the build itself isn’t quick either.  Go get a cup of coffee and wait for this step to complete…

[Image: build output indicating a successful build]
When the build is complete, you should get some output indicating that the build was successful.  The next step is to get the binaries to all of the kubernetes nodes.  To do this, I’ll copy the files via http from kubbuild over to each of the hosts.

Note: There are probably better or easier ways to do this.  It doesn’t really matter how you do it so long as you get the correct binaries on the correct machines.

So the first thing I’ll do is copy the binaries into my http root.  The compiled binaries we’re looking for should be in the /root/kubernetes/_output/dockerized/bin/linux/amd64 directory.  Let’s go in there and see what we have…

[Image: directory listing of the compiled kubernetes binaries]
So we can see the build script generated a series of binaries.  These are the actual executable service files that each host will need to run kubernetes.  Let’s get them copied over to the http root so the other servers can copy them down…

cd /root/kubernetes/_output/dockerized/bin/linux/amd64
cp * /var/www/html

So now we have the kubernetes binaries in a place where the other servers can pull them down from.  In addition to pulling down the binaries, the servers will also need some systemd unit files to run the kubernetes services.  The ones I use in this lab are available on my github account and can be downloaded like this…

cd /var/www/html
git clone https://github.com/jonlangemak/kubernetes_build.git

We’ll walk through all of the service files when we build the kubernetes nodes.  So let’s move onto building our first kubernetes node, kubmasta.

Building kubmasta
Kubmasta is going to be the kubernetes control node.  In most deployments there will be more than one of these but for now we’ll start with one just to get our feet wet.  Kubmasta could also act as a minion but in this lab we’ll be keeping all of the minion services off of kubmasta.  So let’s log into kubmasta and start the config…

Let’s start by getting some of the base services installed and configured…

!Disable and stop the firewalld service 
systemctl disable firewalld 
systemctl stop firewalld

!Download docker and wget
yum install -y docker wget

!Configure docker to start on boot and start the service
systemctl enable docker
systemctl start docker

!Download and install etcd
mkdir /opt/etcd
wget https://github.com/coreos/etcd/releases/download/v0.4.6/etcd-v0.4.6-linux-amd64.tar.gz
tar xzvf etcd-v0.4.6-linux-amd64.tar.gz
cd etcd-v0.4.6-linux-amd64
cp * /opt/etcd

!Cleanup the files
cd ..
rm -f etcd-v0.4.6-linux-amd64.tar.gz
rm -rf etcd-v0.4.6-linux-amd64
ln -s /opt/etcd/etcd /usr/local/bin/

!Copy the required kubernetes binaries over from kubbuild
mkdir /opt/kubernetes
wget -O /opt/kubernetes/kubectl http://kubbuild/kubectl
wget -O /opt/kubernetes/kube-scheduler http://kubbuild/kube-scheduler
wget -O /opt/kubernetes/kube-controller-manager http://kubbuild/kube-controller-manager
wget -O /opt/kubernetes/kube-apiserver http://kubbuild/kube-apiserver
!Set appropriate permissions and create a link to kubectl
ln -s /opt/kubernetes/kubectl /usr/local/bin/
chmod 755 -R /opt/kubernetes

So at this point we have all the appropriate binaries on kubmasta, however the system doesn’t really know what to do with them.  Since this is a CentOS 7 box, it uses systemd for service initialization.  That being said, we need to define service files for each service so systemd can manage them correctly.  Luckily for you, I’ve done that already.  If you cloned my repository onto kubbuild as shown above, you can now copy the required service files down to kubmasta.  Let’s copy them and take a quick look at the basic structure…

wget -O /usr/lib/systemd/system/etcd.service http://kubbuild/kubernetes_build/kubernetes_masta/etcd.service
wget -O /usr/lib/systemd/system/kubernetes-apiserver.service http://kubbuild/kubernetes_build/kubernetes_masta/kubernetes-apiserver.service
wget -O /usr/lib/systemd/system/kubernetes-controller-manager.service http://kubbuild/kubernetes_build/kubernetes_masta/kubernetes-controller-manager.service
wget -O /usr/lib/systemd/system/kubernetes-scheduler.service http://kubbuild/kubernetes_build/kubernetes_masta/kubernetes-scheduler.service

So let’s look at each service file…

[Image: etcd.service unit file]
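
In rough form, the etcd unit file looks something like this.  This is just a sketch assuming the /opt/etcd install from above; the actual file is the one you pulled down from my github repo…

[Unit]
Description=etcd key-value store
After=network.target

[Service]
ExecStart=/opt/etcd/etcd

[Install]
WantedBy=multi-user.target
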
A service file just tells systemd what to do.  Each service is also called a unit.  In this case, the unit file has 3 sections.  The unit section describes the name of the service along with any service requirements.  In this case, we give the unit a description and tell systemd that this service can start after the network.target starts.  The service section describes what we want the service to do.  The big item to pay attention to here is the ‘ExecStart’ command which tells systemd what process to run.

Note: I owe a blog entry (or two) dedicated just to systemd.  I’m trying to cover enough to get you going without getting stuck in the weeds here.  For now, you can just use my unit files, but I will be writing a post later that talks specifically about what each and every item in the unit files does as well as dives deeper into targets.

The last section is the Install section.  This tells systemd when to launch this process.  In our case, we’re telling systemd that this unit is wanted by the multi-user.target.  For now, just know that this is the equivalent of enabling the service to run at system boot.

So there really wasn’t anything special about the etcd unit.  Let’s take a look at the next unit file for the API server…

[Image: kubernetes-apiserver.service unit file]
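
Here’s a rough sketch of what this unit file contains.  The address, port, and etcd values line up with the walkthrough below; the portal_net range shown here is just an example placeholder (pick an unused subnet), and the real file is in the github repo…

[Unit]
Description=Kubernetes API Server
After=etcd.service

[Service]
ExecStart=/opt/kubernetes/kube-apiserver \
    --address=0.0.0.0 \
    --port=8080 \
    --etcd_servers=http://127.0.0.1:4001 \
    --portal_net=10.100.0.0/16

[Install]
WantedBy=multi-user.target
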
Ok, so this one is a little more interesting.  Let’s focus on the service section of the unit file.  Note that we’re passing quite a few variables to the service when it runs.  If we want to see what these variables are for, we can take a look at the kube-apiserver binary itself.  Let’s see if we can correlate these…

[Image: kube-apiserver --help output showing the flag descriptions]
So these descriptions make this pretty straightforward.  We’re defining the address that the API server should listen on as 0.0.0.0, which means all of the available interfaces.  We specify the port as 8080, the location of the etcd server (local), and we define a portal net.  The only oddball there is the portal net, but we’ll be covering that later when we talk about services.

Note: The kubernetes team has done a great job with the command documentation.  If you want to see the options you can pass to any of the other binaries, you can do the same thing I did above by passing the ‘--help’ flag to the binary to see the flags you can specify at run time.

So let’s look at the next service definition for the controller manager…

[Image: kubernetes-controller-manager.service unit file]
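
In sketch form it looks something like this.  The machines list below uses my four minion host addresses (you’ll see those same IPs again in the routing section later); as always, the real file is in the github repo…

[Unit]
Description=Kubernetes Controller Manager
After=kubernetes-apiserver.service

[Service]
ExecStart=/opt/kubernetes/kube-controller-manager \
    --master=127.0.0.1:8080 \
    --machines=10.20.30.62,10.20.30.63,192.168.10.64,192.168.10.65

[Install]
WantedBy=multi-user.target
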
In the case of the controller manager service we pass the service two variables.  First we tell it who the master is, in this case, that’s the localhost.  Secondly, we tell it where the minions are.  In my case, I define IP addresses for each of the 4 minion machines.  Notice how some of the unit definitions for the services depend on other services as well.  In this case, we’re telling systemd not to start the controller manager until the API server is running.

The last service on kubmasta is the scheduler service.  The definition is pretty basic…

[Image: kubernetes-scheduler.service unit file]
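
Something along these lines (again a sketch; grab the real file from the repo)…

[Unit]
Description=Kubernetes Scheduler
After=kubernetes-apiserver.service

[Service]
ExecStart=/opt/kubernetes/kube-scheduler --master=127.0.0.1:8080

[Install]
WantedBy=multi-user.target
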
Here we just tell the scheduler where the master is.  Again, that’s currently just the localhost.

So now that we have all of our services defined, we can tell systemd to enable and run the services.  We do so with this set of commands…

!Tell systemd to reload looking for new or changed unit files
systemctl daemon-reload

!Enable and start all the services we just defined
systemctl enable etcd
systemctl start etcd
systemctl enable kubernetes-apiserver.service
systemctl start kubernetes-apiserver.service
systemctl enable kubernetes-controller-manager
systemctl start kubernetes-controller-manager
systemctl enable kubernetes-scheduler
systemctl start kubernetes-scheduler

Once that’s done, kubmasta should be operational.  Let’s check and see if it sees any minion machines yet…
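
Since kubectl is linked into the path on kubmasta, checking is as simple as asking the API server for its list of minions (the resource is literally called ‘minions’ in this vintage of kubernetes)…

!Ask the cluster which minions it knows about
kubectl get minions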

[Image: kubectl output showing no minions registered yet]
Success!  So we’re successfully interacting with the kubernetes cluster.  Granted, none of the minions are running yet, but things are looking good.  Now let’s move on to building the first minion.

Building kubminion1
The process for building the minions is rather similar to the process we used for building the kubmasta with the exception of what services we’re running.  The minions only need to run two services, kube-proxy and kubelet.  Let’s look at the entire build script up front and then dive into the specifics…

!Disable and stop firewalld (and the legacy iptables service, if installed)
systemctl disable iptables firewalld
systemctl stop iptables firewalld

!Install docker
yum install -y docker

!Copy the required binaries down
mkdir /opt/kubernetes
wget -O /opt/kubernetes/kubelet http://kubbuild/kubelet
wget -O /opt/kubernetes/kube-proxy http://kubbuild/kube-proxy
!Set appropriate permissions
chmod 755 -R /opt/kubernetes

!Copy my unit files down
wget -O /usr/lib/systemd/system/kubernetes-kubelet.service http://kubbuild/kubernetes_build/kubernetes_minion1/kubernetes-kubelet.service
wget -O /usr/lib/systemd/system/kubernetes-proxy.service http://kubbuild/kubernetes_build/kubernetes_minion1/kubernetes-proxy.service
wget -O /etc/sysconfig/docker http://kubbuild/kubernetes_build/kubernetes_minion1/docker

!Enable and start the services
systemctl daemon-reload
systemctl enable docker
systemctl start docker
systemctl enable kubernetes-kubelet
systemctl start kubernetes-kubelet
systemctl enable kubernetes-proxy
systemctl start kubernetes-proxy

So this should all look pretty familiar.  Let’s run this on kubminion1 and then take a look at the unit files for the services…

[Image: kubernetes-kubelet.service unit file]
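
A rough sketch of kubminion1’s kubelet unit is below.  The address is kubminion1’s host IP, the port shown is just the kubelet’s default, and the etcd server points at kubmasta on etcd’s default client port; check the file in the github repo for the exact values I use…

[Unit]
Description=Kubernetes Kubelet
After=docker.service

[Service]
ExecStart=/opt/kubernetes/kubelet \
    --address=10.20.30.62 \
    --port=10250 \
    --etcd_servers=http://kubmasta:4001

[Install]
WantedBy=multi-user.target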

Here we see that we define the IP and the port for the kubelet service as well as tell it where the etcd server (kubmasta) is.  Pretty straightforward.  Now let’s check out the kube-proxy unit file…

[Image: kubernetes-proxy.service unit file]
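
Roughly (another sketch; the real file is in the repo)…

[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
ExecStart=/opt/kubernetes/kube-proxy --etcd_servers=http://kubmasta:4001

[Install]
WantedBy=multi-user.target
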
Nothing special here either.  We just tell it where the etcd server is.  You probably noticed that we also copied down a docker config file.  Since we installed docker from the distro packages, it’s configured to read its options from a config file located in ‘/etc/sysconfig’.  The file we copied down overwrites the existing config file and looks like this…

[Image: /etc/sysconfig/docker config file]
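
The file boils down to a single OPTIONS line along these lines (the flags are standard docker daemon flags; kubminion2, 3, and 4 get 10.10.20.1/24, 10.10.30.1/24, and 10.10.40.1/24 respectively)…

# /etc/sysconfig/docker on kubminion1
OPTIONS="--bip=10.10.10.1/24 --iptables=false --ip-masq=false"
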
Now this is sort of interesting.  We added 3 items to the docker configuration.  We set the IP of the docker0 bridge to 10.10.10.1.  We also tell docker not to use iptables and not to use its default mechanism of hiding all the docker0 bridge traffic behind the host IP.  We’ll talk more about why this is required later on, but once the rest of the minions get their own docker configs, our network config looks like this…

[Image: updated network diagram showing a unique docker0 subnet on each minion]

Note: I’m going to talk about the network configuration in MUCH greater depth later on.  However, for now, we’ll add the following routes to our MLS to make sure that the docker0 bridge subnets are reachable…

ip route 10.10.10.0 255.255.255.0 10.20.30.62
ip route 10.10.20.0 255.255.255.0 10.20.30.63
ip route 10.10.30.0 255.255.255.0 192.168.10.64
ip route 10.10.40.0 255.255.255.0 192.168.10.65

So now that we have minion1 configured and the services enabled, let’s go back to kubmasta and see if it sees our minion…

[Image: kubectl output on kubmasta showing kubminion1 registered]
Success!  The next step is to configure the remaining 3 minions.

Building kubminion{2,3,4}

The process for the remaining minions is the same as kubminion1; the only difference is which directory of unit files and docker config each one pulls down from kubbuild.  The build scripts are below…

Build script for kubminion2

!Disable and stop firewalld (and the legacy iptables service, if installed)
systemctl disable iptables firewalld
systemctl stop iptables firewalld

!Install docker
yum install -y docker

!Copy the required binaries down
mkdir /opt/kubernetes
wget -O /opt/kubernetes/kubelet http://kubbuild/kubelet
wget -O /opt/kubernetes/kube-proxy http://kubbuild/kube-proxy
!Set appropriate permissions
chmod 755 -R /opt/kubernetes

!Copy my unit files down
wget -O /usr/lib/systemd/system/kubernetes-kubelet.service http://kubbuild/kubernetes_build/kubernetes_minion2/kubernetes-kubelet.service
wget -O /usr/lib/systemd/system/kubernetes-proxy.service http://kubbuild/kubernetes_build/kubernetes_minion2/kubernetes-proxy.service
wget -O /etc/sysconfig/docker http://kubbuild/kubernetes_build/kubernetes_minion2/docker

!Enable and start the services
systemctl daemon-reload
systemctl enable docker
systemctl start docker
systemctl enable kubernetes-kubelet
systemctl start kubernetes-kubelet
systemctl enable kubernetes-proxy
systemctl start kubernetes-proxy

Build script for kubminion3

!Disable and stop firewalld (and the legacy iptables service, if installed)
systemctl disable iptables firewalld
systemctl stop iptables firewalld

!Install docker
yum install -y docker

!Copy the required binaries down
mkdir /opt/kubernetes
wget -O /opt/kubernetes/kubelet http://kubbuild/kubelet
wget -O /opt/kubernetes/kube-proxy http://kubbuild/kube-proxy
!Set appropriate permissions
chmod 755 -R /opt/kubernetes

!Copy my unit files down
wget -O /usr/lib/systemd/system/kubernetes-kubelet.service http://kubbuild/kubernetes_build/kubernetes_minion3/kubernetes-kubelet.service
wget -O /usr/lib/systemd/system/kubernetes-proxy.service http://kubbuild/kubernetes_build/kubernetes_minion3/kubernetes-proxy.service
wget -O /etc/sysconfig/docker http://kubbuild/kubernetes_build/kubernetes_minion3/docker

!Enable and start the services
systemctl daemon-reload
systemctl enable docker
systemctl start docker
systemctl enable kubernetes-kubelet
systemctl start kubernetes-kubelet
systemctl enable kubernetes-proxy
systemctl start kubernetes-proxy

Build script for kubminion4

!Disable and stop firewalld (and the legacy iptables service, if installed)
systemctl disable iptables firewalld
systemctl stop iptables firewalld

!Install docker
yum install -y docker

!Copy the required binaries down
mkdir /opt/kubernetes
wget -O /opt/kubernetes/kubelet http://kubbuild/kubelet
wget -O /opt/kubernetes/kube-proxy http://kubbuild/kube-proxy
!Set appropriate permissions
chmod 755 -R /opt/kubernetes

!Copy my unit files down
wget -O /usr/lib/systemd/system/kubernetes-kubelet.service http://kubbuild/kubernetes_build/kubernetes_minion4/kubernetes-kubelet.service
wget -O /usr/lib/systemd/system/kubernetes-proxy.service http://kubbuild/kubernetes_build/kubernetes_minion4/kubernetes-proxy.service
wget -O /etc/sysconfig/docker http://kubbuild/kubernetes_build/kubernetes_minion4/docker

!Enable and start the services
systemctl daemon-reload
systemctl enable docker
systemctl start docker
systemctl enable kubernetes-kubelet
systemctl start kubernetes-kubelet
systemctl enable kubernetes-proxy
systemctl start kubernetes-proxy

When all of your minions have been built, you should be able to head back to kubmasta and see them all online…

[Image: kubectl output showing all four minions registered]
Awesome!  You’ve built your first kubernetes cluster!  To make sure the communication is working as expected, let’s do a quick container deployment against the cluster.  I’ll do so with this command…

kubectl run-container web1 --image=jonlangemak/docker:web_container_80

[Image: output of the run-container command]
Now let’s check and see what kubernetes thinks is running…
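
On kubmasta we just ask for the pod list (same idea as the minion check earlier)…

!List the pods the cluster is tracking
kubectl get pods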

[Image: kubectl output showing the web1 container scheduled to 10.20.30.63 with IP 10.10.20.3]
Alright, so the cluster deployed the container to 10.20.30.63 (kubminion2) and assigned it the IP address 10.10.20.3.  Let’s log into kubminion2 and see what it’s doing…

[Image: kubminion2 downloading the container image]
So it’s downloading the image, let’s give it a minute to finish the download and see what’s running…
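
On kubminion2 the standard docker commands are all we need to watch the progress…

!See which images have been pulled down locally
docker images
!See which containers are actually running
docker ps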

[Image: docker output on kubminion2 showing the running container]
So the image is downloaded and running.  Let’s try hitting that IP address that kubernetes assigned to the container and see what happens…
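
From the MLS, or from any host that has the routes above, a quick curl against the pod IP should do it (10.10.20.3 is the address kubernetes assigned above, and the image serves a page on port 80)…

curl http://10.10.20.3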

[Image: the test web page returned from 10.10.20.3]

Just what we expect to see.  So it appears that kubernetes is running as it should be.  For now, use the following commands to clean up the cluster.

kubectl resize --replicas=0 rc web1
kubectl delete rc web1

Keep in mind that we haven’t even scratched the surface in this post but we should now have a solid base to continue building on.  In the following posts we’ll start tearing into the kubernetes constructs and how they map into the cluster.

5 thoughts on “Kubernetes 101 – The build”

  1. timo

    thank you for the very interesting article. cannot wait for the next post on kubernetes!

    you create symbolic links
    ln -s /opt/kubernetes/kubectl /usr/local/bin/
    on minions, but don’t download kubectl to those machines. is that intentional?

    for minion1 there is a typo
    “We set the bridge IP of the docker0 bridge with the IP of 10.10.20.1.”
    obv the IP should be 10.10.10.1

    1. Jon Langemak Post author

      I’m writing the next one right now so it’s coming up shortly. Yes, for now I only want to control the cluster from kubmasta (that’s what kubectl is used for) so I didn’t copy it down to the minions. Making the symlink on the minions was a mistake. Thanks for spotting that! I’ve fixed that and the IP issue you caught. Thanks for reading!

  2. Dinesh

    for the latest versions, kube-controller-manager does not support the machines flag… can you explain how to get through this?

