Dynamic Kubernetes installation/configuration with SaltStack

I’ve been playing more with SaltStack recently and I realized that my first attempt at using Salt to provision my cluster was a little shortsighted.  The problem was, it only worked for my exact lab configuration.  After playing with Salt some more, I realized that the Salt configuration could be MUCH more dynamic than what I had initially deployed.  With that in mind, I developed a set of Salt states that I believe can be consumed by anyone wanting to deploy a Kubernetes lab on bare metal.  To do this, I used a few more of the features that SaltStack has to offer, namely pillars and the built-in Jinja templating language.

My goal was to let anyone with some Salt experience quickly deploy a fully working Kubernetes cluster.  At the same time, the Salt configuration can be tuned to your specific environment.  Have 3 servers you want to try Kubernetes on?  Have 10?  All you need to do is have some servers that meet the following prerequisites and then tune the Salt config to your environment.

Environment Prerequisites
- You need at least 2 servers, one for the master and one for the minion (it might work with 1 but I haven’t tried it)
- All servers used in the config must be resolvable in local DNS
- All servers used as minions need a subnet routed to them for the docker0 bridge IP space (I use a /27, but you can decide based on your own requirements; see the example routes after this list)
- You have Salt installed and configured on the servers you wish to deploy against.  The Salt master needs to be configured to host files and pillars.  You can see the pillar configuration I did in this post and the base Salt config I did in this post.
- You have Git and Docker installed on the server you intend to build the Kubernetes binaries on (yum -y install docker git)
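
For example, using the lab addressing you’ll see later in this post, an upstream Linux router could route one /27 to each minion with something like the commands below.  This is just a hypothetical illustration; how you actually route the space depends entirely on your network gear…

ip route add 172.10.10.0/27 via 192.168.127.101
ip route add 172.10.10.32/27 via 192.168.127.102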

That’s all you need.  To prove that this works, I built another little lab to try and deploy this against.  It looks like this…

[Image: Lab topology diagram]
In this example, I have 3 servers.  k8stest1 will be used as the Kubernetes master and the remaining two will act as Kubernetes minions.  From a Salt perspective, k8stest1 will be the master and all of the servers (including k8stest1) will be minions.  Notice that all of the servers live on the 192.168.127.0/24 segment and I’ve routed space from 172.10.10.0/24 to each Kubernetes minion in /27 allocations.  Let’s verify that Salt is working as I expect by ‘pinging’ each minion from the master…
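
That’s just Salt’s built-in test module, so the command below is standard Salt rather than anything specific to these states…

salt '*' test.ping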

[Image: Salt test.ping output from the master]
Cool – so Salt can talk to all the servers we’re using in this lab.  The next step is to download and build the Kubernetes code we’ll be running.  To do this, we’ll clone the Kubernetes GitHub repo and then build the binaries…

Note: Kubernetes is changing VERY quickly.  I’ve been doing all of my testing based off of the 0.13 branch and that’s what all of these scripts are built to use.  Once things settle down I’ll update these posts with the most recent stable code, but at the time of this writing the stable version I found was 0.13.  Make sure you clone the right branch!

git clone -b release-0.13 https://github.com/GoogleCloudPlatform/kubernetes.git

Once the code is downloaded, build the binaries using these commands…

cd kubernetes/build
./run.sh hack/build-cross.sh

Again – this will take some time.  Go fill up the coffee cup and wait for the code to finish building.  (Hit Y to accept downloading the golang image before you walk away though!)
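
When the build finishes, the cross-compiled binaries land under the repo’s _output directory.  A quick look at the same path we’ll copy from later should show kubectl, kube-apiserver, kubelet, kube-proxy, and friends…

ls /root/kubernetes/_output/dockerized/bin/linux/amd64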

Once the binaries are built, we can clone down my Salt build scripts from my repository…

git clone https://github.com/jonlangemak/saltstackv2.git /srv/

This will put the code into the ‘/srv/’ directory (and create it if it doesn’t exist).  Next up, we need to move the freshly built binaries into the Salt state directory so that the minions can pull them down from the Salt master…

mkdir /srv/salt/kubebinaries
cd /root/kubernetes/_output/dockerized/bin/linux/amd64
cp * /srv/salt/kubebinaries

The last step before we build the cluster is to tell the Salt configuration what our environment looks like.  This is done by editing the ‘kube_data’ pillar and possibly changing the Salt top.sls file to make sure we match the correct hosts.  Let’s look at the ‘kube_data’ pillar first.  It’s located at ‘/srv/pillar/kube_data.sls’…

kube_master:
  ipaddress: 10.20.30.61
  portal_net: 10.100.0.0/16

kube_minions:
  kubminion1:
    ipaddress: 10.20.30.62
    docker0_bip: 10.10.10.1/24
  kubminion2:
    ipaddress: 10.20.30.63
    docker0_bip: 10.10.20.1/24
  kubminion3:
    ipaddress: 192.168.10.64
    docker0_bip: 10.10.30.1/24
  kubminion4:
    ipaddress: 192.168.10.65
    docker0_bip: 10.10.40.1/24

kube_pods:
  skydns:
    portalip: 10.100.0.10
    dnsname: kubdomain.local

So this is a sample pillar file that I used to build the config in my initial lab.  You’ll notice that I have 4 minions and 1 master defined.  To change this to look like our new environment, I would update the file to look like this…

kube_master:
  ipaddress: 192.168.127.100
  portal_net: 10.100.0.0/16

kube_minions:
  k8stest2:
    ipaddress: 192.168.127.101
    docker0_bip: 172.10.10.1/27
  k8stest3:
    ipaddress: 192.168.127.102
    docker0_bip: 172.10.10.33/27

kube_pods:
  skydns:
    portalip: 10.100.0.10
    dnsname: kubdomain.local

So not too hard, right?  Let’s walk through what I changed.  I had said that I wanted to use k8stest1 (192.168.127.100) as the master, so I changed the relevant IP address under the ‘kube_master’ definition to match k8stest1.  I have 2 minions in this lab, so I removed the other 2 definitions and updated the first 2 to match my new environment.  Notice that you define the minion name and then define its IP address and docker0 bridge as attributes.  If you had more minions, you could define them in here as well.
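
To give you an idea of how a state can consume this pillar, here’s a rough sketch of a minion-side Jinja template that looks up its own entry by hostname and renders the docker0 bridge IP into the Docker config.  The file and option shown are just an illustration of the pattern, not necessarily what the states in my repo actually render…

{# hypothetical fragment: rendered into something like /etc/sysconfig/docker #}
{% set me = grains['host'] %}
{% set minion = pillar['kube_minions'][me] %}
OPTIONS="--bip={{ minion['docker0_bip'] }}"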

Note: The minion names have to be exact and do NOT include the FQDN, just the host part of the name.
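
If you’re not sure what the host part of a given minion’s name is, Salt can tell you.  These are standard Salt commands, nothing specific to this repo…

salt-key -L
salt '*' grains.item host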

The last thing we have to look at is the ‘top.sls’ file for salt.  Let’s look at it to see what the default is…

# This file assumes that you use the words 'master' and 'minion' in your
# naming convention for your nodes.  If that's not the case, you'll need
# to update these so the proper state files match the proper servers 
base:
  '*':
    - baseinstall
  '*minion*':
    - minioninstall
  '*master*':
    - masterinstall
    - pods

Notice that if you happened to name your servers according to role, you might not have to change anything at all.  If your host names include ‘master’ and ‘minion’ based on the role they’ll play in the Kubernetes cluster, then you’re already done.  If not, we need to change the matching.  Let’s update it to fit our lab…

# This file assumes that you use the words 'master' and 'minion' in your
# naming convention for your nodes.  If that's not the case, you'll need
# to update these so the proper state files match the proper servers 
base:
  '*':
    - baseinstall
  'k8stest[2-3]*':
    - minioninstall
  'k8stest1*':
    - masterinstall
    - pods

Above you can see that we used glob matching and specified the hostnames we’re using for the master and minion roles.  Pretty straightforward, just make sure that you match the right servers to the right roles.
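
If you want to confirm the matching before building anything, Salt can show you which state files each minion will receive.  Again, this is plain Salt functionality…

salt '*' state.show_top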

Now all we have to do is let Salt take care of building the Kubernetes cluster for us…

salt '*' state.highstate
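
If you’d rather preview what Salt is going to change before running the real thing, a dry run works too.  The test=True option is standard Salt behavior, not something specific to these states…

salt '*' state.highstate test=True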

Once Salt has finished executing, we should be able to check and see what the Kubernetes cluster is doing…
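
On the Kubernetes master, that’s just a couple of kubectl commands.  The resource names below are from the 0.13-era API, where nodes are still called minions…

kubectl get minions
kubectl get pods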

[Image: kubectl output showing the pods being deployed]

It’s a little hard to see there, but Kubernetes is now busily working to deploy all of the pods.  The default build will provision SkyDNS, Heapster, and the FluentD/ElasticSearch logging combo.  After all of the pods are deployed, we can access those cluster add-ons at the following URLs…
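
The proxy URLs below reference the add-on service names.  If you want to see exactly which services got created, you can list them from the master…

kubectl get services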

Heapster

http://<Kubernetes Master>:8080/api/v1beta1/proxy/services/monitoring-grafana/

[Image: Grafana UI for the monitoring-grafana service]

FluentD/ElasticSearch

http://<Kubernetes Master>:8080/api/v1beta1/proxy/services/kibana-logging/

[Image: Kibana UI for the kibana-logging service]

So there you have it.  Pretty easy to do, right?  You should have a fully operational Kubernetes cluster at this point.  I’d love to have some other people try this out and give me feedback on it.  Let me know what you think!

Comments


Gil Lee

    Hey! I’m a follower of yours 🙂
    Since your post about DNS config in Kubernetes, I’ve not been able to make any progress 🙁
    This time I tried out SaltStack on 7 bare-metal nodes (1 for build & salt-master, 1 for kube-master, and 5 for kube-minions).
    The nodes’ IP addresses are as follows:
    hostname     | IP           | docker0 IP       | role
    giljael-phy1 | 10.244.34.41 | 192.168.2.33/27  | minion1
    giljael-phy2 | 10.244.34.42 | 192.168.2.65/27  | minion2
    giljael-phy3 | 10.244.34.43 | 192.168.2.97/27  | minion3
    giljael-phy4 | 10.244.34.44 | 192.168.2.129/27 | minion4
    giljael-phy5 | 10.244.34.45 | 192.168.2.161/27 | minion5
    giljael-phy6 | 10.244.34.46 | 192.168.2.193/27 | master
    giljael-phy7 | 10.244.34.47 | 192.168.2.225/27 | build & salt-master

    All of the Docker bridge IPs are reachable from each other.
    Up to pod deployment, everything looks good. All of the services, including skydns, are running.
    The following shows part of the kube2sky logs:
    2015/05/15 02:24:48 Setting dns record: elasticsearch-logging.default.kubdomain.local. -> 10.100.102.201:9200
    2015/05/15 02:24:48 Setting dns record: kibana-logging.default.kubdomain.local. -> 10.100.171.158:5601
    2015/05/15 02:24:49 Setting dns record: kube-dns.default.kubdomain.local. -> 10.100.0.10:53
    2015/05/15 02:24:49 Setting dns record: kubernetes.default.kubdomain.local. -> 10.100.0.2:443
    2015/05/15 02:24:49 Setting dns record: kubernetes-ro.default.kubdomain.local. -> 10.100.0.1:80
    2015/05/15 02:24:49 Setting dns record: monitoring-grafana.default.kubdomain.local. -> 10.100.15.206:80
    2015/05/15 02:24:49 Setting dns record: monitoring-heapster.default.kubdomain.local. -> 10.100.151.29:80
    2015/05/15 02:24:49 Setting dns record: monitoring-influxdb.default.kubdomain.local. -> 10.100.63.225:80

    Then, when I run these on the kube-master node:
    $ lynx http://giljael-phy6:8080/api/v1beta1/proxy/services/monitoring-grafana/
    I got the error –
    Error: ‘dial tcp 192.168.2.164:8080: no route to host’
    Trying to reach: ‘http://192.168.2.164:8080/’

    $ lynx http://giljael-phy6:8080/api/v1beta1/proxy/services/kibana-logging/
    Error: ‘dial tcp 192.168.2.165:80: no route to host’
    Trying to reach: ‘http://192.168.2.165:80/’

    Part of the “kubectl get pods” output is shown below:
    # kubectl get pods

    kibana-logging-controller-k1hh2 192.168.2.165 kibana-logging gcr.io/google_containers/kibana:1.2 giljael-phy5/10.244.34.45 kubernetes.io/cluster-service=true,name=kibana-logging Running 30 minutes
    monitoring-influx-grafana-controller-3e013 192.168.2.164 influxdb gcr.io/google_containers/heapster_influxdb:v0.3 giljael-phy5/10.244.34.45 kubernetes.io/cluster-service=true,name=influxGrafana Running 30 minutes
    grafana gcr.io/google_containers/heapster_grafana:v0.6

    Could you let me know what I should check to get these services running? Thanks.
