Kubernetes with SaltStack revisited


I thought it would be a good idea to revisit my last Kubernetes build, in which I was using Salt to automate the deployment.  The setup worked well at the time, but much has changed with Kubernetes since I initially wrote those state files, so I wanted to update them to make sure they work with Kubernetes 1.0 and above.  You can find my Salt config for this build over at GitHub…

https://github.com/jonlangemak/saltstackv2

A couple of quick notes before we walk through how to use the repo…

-While I used the last version of this repo as a starting point, I’ve stripped it down to the basics (some of the auxiliary pods aren’t here yet).  I’ll be adding to this constantly and I intend to add a lot more functionality to the defined state files.
-All of the Kubernetes related communication is unsecured.  That is, it’s all over HTTP.  I’ve already started work on adding an option to use SSL if you so choose.

With that out of the way, let’s jump right into how to use this.  My lab looks like this…

[Image: lab topology diagram]
Here we have 3 hosts.  k8stest1 will perform the role of the master while k8stest2 and k8stest3 will play the role of nodes, or minions.  Each host will be running Docker and will have a routable network segment configured on its docker0 bridge interface.  Your upstream layer 3 device will need to have static routes pointing each docker0 bridge network at its respective host’s physical interface (192.168.127.100–102) as shown above.  In addition to these 3 hosts, I also have a separate build server that acts as the Salt master and initiates the cluster build.  That server is called ‘kubbuild’ and isn’t pictured because it only plays a part in the initial configuration.  So let’s get right into the build…
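For example, if your upstream layer 3 device happens to be a Linux box, the static routes for the topology above would look something like this (just a sketch, use the equivalent syntax on whatever gear you have)…

# Point each docker0 bridge network at the physical interface of its host
ip route add 172.10.10.0/27 via 192.168.127.100    # k8stest1
ip route add 172.10.10.32/27 via 192.168.127.101   # k8stest2
ip route add 172.10.10.64/27 via 192.168.127.102   # k8stest3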

In my case, the base lab configuration looks like this…

-All hosts are running CentOS 7.1 and are fully updated
-The 3 lab hosts (k8stest[1-3]) are configured as Salt minions and are reachable by the Salt master.  If you don’t know how to do that, see the section of this post that talks about configuring the Salt master and minions; a minimal sketch is also shown below.
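For reference, that minion setup boils down to something like this (this assumes the SaltStack or EPEL repo is already available to yum and that the hostname ‘kubbuild’ resolves from the lab hosts)…

# On each k8stest host - install the minion and point it at the Salt master
yum install -y salt-minion
echo "master: kubbuild" > /etc/salt/minion.d/master.conf
systemctl enable salt-minion
systemctl start salt-minion

# On kubbuild - accept the pending minion keys and verify connectivity
salt-key -A
salt '*' test.ping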

The first thing you need to do is clone my repo onto your build server…

git clone https://github.com/jonlangemak/saltstackv2.git /srv
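Cloning into /srv works because the Salt master’s default file_roots and pillar_roots already point at /srv/salt and /srv/pillar.  If you’ve changed those defaults, something like this in /etc/salt/master will line things back up (shown as a shell heredoc for convenience)…

cat <<'EOF' >> /etc/salt/master
file_roots:
  base:
    - /srv/salt
pillar_roots:
  base:
    - /srv/pillar
EOF
systemctl restart salt-master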

The next thing we want to do is download the Kubernetes binaries we need.  In earlier posts we had built them from scratch, but we’re now going to download them instead.  All of the Kubernetes releases can be downloaded as a TAR file from GitHub.  In this case, let’s work off of the 1.1.7 release, so download this TAR file…

wget https://github.com/kubernetes/kubernetes/releases/download/v1.1.7/kubernetes.tar.gz

Next we have to unpack this file, and another TAR file inside this one, to get to the actual binaries…

tar -xzvf kubernetes.tar.gz
tar -xzvf kubernetes/server/kubernetes-server-linux-amd64.tar.gz

Next we copy those extracted binaries to the correct place in the Salt folder structure…

mkdir /srv/salt/kubebinaries
cd /root/kubernetes/server/bin
cp * /srv/salt/kubebinaries
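As a quick sanity check, the copy should have landed at least the core server binaries in the Salt file root (I’m hedging slightly on the exact file list in the 1.1.7 tarball)…

# Confirm the binaries are where the state files expect them
ls /srv/salt/kubebinaries
# Expect to see at minimum: kube-apiserver, kube-controller-manager,
# kube-scheduler, kubelet, kube-proxy, and kubectl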

Alright – That’s the hardest part!  Now let’s go take a look at our Salt pillar configuration.  Take a look at the file ‘/srv/pillar/kube_data.sls’…

cluster_info:
  domainname: interubernet.local
kube_nodes:
  k8stest1:
    type: master
    ipaddress: 192.168.127.100
    docker0_bip: 172.10.10.1
    docker0_mask: /27
    portal_net: 10.100.0.0/16
  k8stest2:
    type: minion
    ipaddress: 192.168.127.101
    docker0_bip: 172.10.10.33
    docker0_mask: /27
  k8stest3:
    type: minion
    ipaddress: 192.168.127.102
    docker0_bip: 172.10.10.65
    docker0_mask: /27
kube_pods:
  skydns:
    portalip: 10.0.0.10
    dnsname: kubdomain.local

All you need to do is update this YAML file with your relevant configuration.  The above example is just a textual version of the network diagram shown earlier.  Keep in mind that you can add minions later by simply adding them to this file – I’ll demo that later on.  Once you have this updated to match your configuration, let’s make sure we can reach our Salt minions and then execute the state to push the configuration out…
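From the build server that’s just a quick test.ping followed by a highstate, something like this…

# Verify the minions respond, then push the full configuration
salt '*' test.ping
salt '*' state.highstate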

[Image: verifying minion connectivity and kicking off the highstate from the build server]
Now sit back and enjoy a cup of coffee while Salt does its magic.  When it’s done, you should see the results of executing the states against the hosts you defined in the ‘kube_data.sls’ file…

[Image: Salt highstate results]
If you scroll back up through all of the results, you will likely see that it errors out on this section on the master…

[Image: error from the ‘pods’ state on the master]
This is expected and is a result of the etcd container not coming up in time for the ‘pods’ state to work.  The current fix is to wait until all of the Kubernetes master containers load and then just execute the highstate again.
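In other words, once ‘docker ps’ on the master shows the etcd and other master containers running, kick off another highstate.  Running it against just the master is enough, though ‘*’ works fine too (this assumes your minion IDs match the host names)…

# On k8stest1 - confirm the master containers are up
docker ps

# Back on the build server - re-run the highstate against the master
salt 'k8stest1' state.highstate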

So let’s head over to our master server and see if things are working as expected…
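A quick check with kubectl does the trick here since the build drops the kubectl binary on the master (your exact output will differ)…

# On k8stest1 - list the registered nodes
kubectl get nodes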

[Image: node status on the master showing k8stest2 and k8stest3 registered]
Perfect!  Our 2 nodes have been discovered.  Since we’re going to execute the Salt highstate again, let’s update the config to include another node…

cluster_info:
  domainname: interubernet.local
kube_nodes:
  k8stest1:
    type: master
    ipaddress: 192.168.127.100
    docker0_bip: 172.10.10.1
    docker0_mask: /27
    portal_net: 10.100.0.0/16
  k8stest2:
    type: minion
    ipaddress: 192.168.127.101
    docker0_bip: 172.10.10.33
    docker0_mask: /27
  k8stest3:
    type: minion
    ipaddress: 192.168.127.102
    docker0_bip: 172.10.10.65
    docker0_mask: /27
  k8stest4:
    type: minion
    ipaddress: 192.168.127.103
    docker0_bip: 172.10.10.97
    docker0_mask: /27
kube_pods:
  skydns:
    portalip: 10.0.0.10
    dnsname: kubdomain.local

Note: I’m assuming that the server k8stest4 has been added to the Salt master as a minion.
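If it hasn’t been, the same minion setup from earlier applies: install salt-minion on k8stest4, point it at kubbuild, and then accept its key on the build server, roughly like this…

# On kubbuild - accept the new minion's key and confirm it responds
salt-key -a k8stest4
salt 'k8stest4' test.ping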

This run should provision the pods as well as the new Kubernetes node, k8stest4.  So let’s run the highstate again and see what we get…

salt '*' state.highstate

When the run has finished, let’s head back to the master server and see how many nodes we have…

[Image: node status on the master now showing three nodes]
Perfect!  The Salt config works as expected.  At this point, we have a functioning Kubernetes cluster on our hands.  Let’s verify everything by deploying the guest book demo.  On the master, run this command…

kubectl create -f /etc/kubernetes/examples

This will create the services and the replication controllers for the example and expose them on the nodes’ physical interfaces.  Take note of the port it’s using when you create the services…
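If you miss the port in the create output, you can pull it back out of the service afterwards.  A rough example, assuming the front end service is named ‘frontend’ as it is in the stock guest book example…

# On k8stest1 - list the guest book services and inspect the front end
kubectl get services
kubectl describe service frontend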

[Image: service creation output showing the exposed port]
Now we just need to wait for the containers to deploy.  Keep an eye on them by checking the pod status…
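Something like this works for keeping an eye on them (watch is just a convenience here; repeatedly running ‘kubectl get pods’ does the same job)…

# On k8stest1 - poll the pod status until everything shows Running
watch kubectl get pods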

[Image: pod status while the containers deploy]
Once Kubernetes finishes deploying the containers, we should see them all listed as ‘Running’…

[Image: pod status with all pods listed as ‘Running’]
Now we can try and hit the guest book front end by browsing to a minion on the specified port…
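If you’d rather test from the command line, a quick curl against either minion works just as well (substitute the port you noted when the services were created)…

# <service-port> is the port noted earlier during service creation
curl http://192.168.127.101:<service-port>/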

[Image: the guest book front end loaded in a browser]
The example should work as expected.  That’s it for now; much more to come soon!

