SaltStack


I’ve been playing more with SaltStack recently and I realized that my first attempt at using Salt to provision my cluster was a little shortsighted. The problem was that it only worked for my exact lab configuration. After playing with Salt some more, I realized that the Salt configuration could be MUCH more dynamic than what I had initially deployed. With that in mind, I developed a set of Salt states that I believe can be consumed by anyone wanting to deploy a Kubernetes lab on bare metal. To do this, I used a few more of the features that SaltStack has to offer, namely pillars and the built-in Jinja templating language.

My goal was to let anyone with some Salt experience quickly deploy a fully working Kubernetes cluster. Better yet, the Salt configuration can be tuned to your specific environment. Have 3 servers you want to try Kubernetes on? Have 10? All you need is some servers that meet the following prerequisites and a Salt config tuned to your environment.

Environment Prerequisites
-You need at least 2 servers, one for the master and one for the minion (might work with 1 but I haven’t tried it)
-All servers used in the config must be resolvable in local DNS
-All servers used as minions need to have a subnet routed to them for the docker0 bridge IP space (I use a /27 but you decide based on your own requirements)
-You have Salt installed and configured on the servers you wish to deploy against.  The Salt master needs to be configured to host files and pillars.  You can see the pillar configuration I did in this post and the base Salt config I did in this post.
-You have Git and Docker installed on the server you intend to build the Kubernetes binaries on (yum -y install docker git)

That’s all you need.  To prove that this works, I built another little lab to try and deploy this against.  It looks like this…

[Diagram: lab topology – k8stest1 through k8stest3 on 192.168.127.0/24, with /27 blocks from 172.10.10.0/24 routed to each minion]
In this example, I have 3 servers. k8stest1 will be used as the Kubernetes master and the remaining two will act as Kubernetes minions. From a Salt perspective, k8stest1 will be the master and all of the servers (including k8stest1) will be minions. Notice that all of the servers live on the 192.168.127.0/24 segment and that I’ve routed space from 172.10.10.0/24 to each Kubernetes minion in /27 allocations. Let’s verify that Salt is working as I expect by ‘pinging’ each minion from the master…
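If you’ve followed the base Salt config, the check is just Salt’s built-in test module:

    salt '*' test.ping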

[Screenshot: test.ping returning True for all three minions]
Cool – So Salt can talk to all the servers we’re using in this lab.  The next step is to download and build the Kubernetes code we’ll be using in this lab.  To do this, we’ll clone the Kubernetes GitHub repo and then build the binaries…

Note: Kubernetes is changing VERY quickly. I’ve been doing all of my testing off of the 0.13 branch and that’s what all of these scripts are built to use. Once things settle down I’ll update these posts with the most recent stable code, but at the time of this writing the stable version I found was 0.13. Make sure you clone the right branch!
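A sketch of the clone step, assuming the upstream repo location at the time and a release-0.13 branch name (verify the exact branch against the repo):

    cd /root
    git clone https://github.com/GoogleCloudPlatform/kubernetes.git
    cd kubernetes
    git checkout release-0.13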

Once the code is downloaded, build the binaries using these commands…
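The tree from that era included a Docker-based release build; something along these lines should kick it off (script names shifted between releases, so check your checkout):

    cd /root/kubernetes
    ./build/release.sh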

Again – this will take some time.  Go fill up the coffee cup and wait for the code to finish building.  (Hit Y to accept downloading the golang image before you walk away though!)

Once the binaries are built, we can clone down my Salt build scripts from my repository…
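Something like this, substituting the actual repo URL from my GitHub page (the path below is a placeholder):

    # placeholder URL - use the repo linked in this post
    git clone https://github.com/<your-user>/<k8s-salt-repo>.git /srv/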

This will put the code into the ‘/srv/’ directory, creating it if it doesn’t exist. Next up, we need to move the freshly built binaries into the Salt state directory so that the minions can pull them down…
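In my case that meant copying the dockerized build output into the directory the states serve binaries from; both paths below are assumptions based on the layout described above, so adjust to your tree:

    # build output location for the Docker-based build (verify against your checkout)
    cp /root/kubernetes/_output/dockerized/bin/linux/amd64/* /srv/salt/kubebinaries/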

The last step before we build the cluster is to tell the Salt configuration what our environment looks like.  This is done by editing the ‘kube_data’ pillar and possibly changing the Salt top.sls file to make sure we match the correct hosts.  Let’s look at the ‘kube_data’ pillar first.  It’s located at ‘/srv/pillar/kube_data.sls’…
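Here’s roughly what that sample looked like – the hostnames come from my initial lab, and the addresses below are purely illustrative:

    kube_master:
      kubmasta:
        ipaddress: 192.168.10.100

    kube_minions:
      kubminion1:
        ipaddress: 192.168.10.101
        docker0: 10.10.10.1/27
      kubminion2:
        ipaddress: 192.168.10.102
        docker0: 10.10.10.33/27
      kubminion3:
        ipaddress: 192.168.10.103
        docker0: 10.10.10.65/27
      kubminion4:
        ipaddress: 192.168.10.104
        docker0: 10.10.10.97/27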

So this is a sample pillar file that I used to build the config in my initial lab.  You’ll notice that I have 4 minions and 1 master defined.  To change this to look like our new environment, I would update the file to look like this…
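Something like this, keeping the same key structure (the minion IPs and docker0 addresses are assumptions for this lab; substitute your own):

    kube_master:
      k8stest1:
        ipaddress: 192.168.127.100

    kube_minions:
      k8stest2:
        ipaddress: 192.168.127.101
        docker0: 172.10.10.1/27
      k8stest3:
        ipaddress: 192.168.127.102
        docker0: 172.10.10.33/27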

So not too hard, right? Let’s walk through what I changed. I said that I wanted to use k8stest1 (192.168.127.100) as the master, so I changed the relevant IP address under the ‘kube_master’ definition to match k8stest1. I have 2 minions in this lab, so I removed the other 2 definitions and updated the first 2 to match my new environment. Notice that you define the minion name and then define its IP address and docker0 bridge as attributes. If you had more minions, you could define them here as well.

Note: The minion names have to be exact and do NOT include the FQDN, just the host part of the name.

The last thing we have to look at is the ‘top.sls’ file for Salt. Let’s look at it to see what the default is…
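The default out of the repo matches on role-based hostnames; it looks something along these lines (the state names here follow the repo’s naming, which may differ slightly):

    base:
      '*':
        - baseinstall
      '*master*':
        - masterinstall
      '*minion*':
        - minioninstall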

Notice that if you happened to name your servers according to role, you might not have to change anything. If your hostnames include ‘master’ or ‘minion’ based on the role each server will play in the Kubernetes cluster, then you’re already done. If not, we’ll need to change this to match our lab. Let’s update it to match our lab…
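For this lab, swapping in the actual hostnames with glob matches might look like this (state names carried over from the defaults above):

    base:
      '*':
        - baseinstall
      'k8stest1*':
        - masterinstall
      'k8stest[23]*':
        - minioninstall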

Above you can see that we used glob matching and specified the hostnames we’re using for the master and minion roles. Pretty straightforward – just make sure that you match the right server to the right role based on the defaults.

Now all we have to do is let Salt take care of building the Kubernetes cluster for us…
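Since the top file drives everything, a highstate run across all of the minions kicks off the build:

    salt '*' state.highstate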

Once Salt has finished executing, we should be able to check and see what the Kubernetes cluster is doing…
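From the master, kubectl should be able to show progress; note that in the 0.13-era API, nodes were still called minions:

    kubectl get pods
    kubectl get minions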

[Screenshot: kubectl output showing the add-on pods being deployed]

Hard to see there (click to make it bigger), but Kubernetes is now busily working to deploy all of the pods. The default build will provision SkyDNS, Heapster, and the Fluentd/Elasticsearch logging combo. After all of the pods are deployed, we can access those cluster add-ons at the following URLs…

Heapster

[Screenshot: Heapster UI]

Fluentd/Elasticsearch

[Screenshot: Fluentd/Elasticsearch logging UI]

So there you have it. Pretty easy, right? You should have a fully operational Kubernetes cluster at this point. I’d love to have some other people try this out and give me feedback on it. Let me know what you think!


In our last post about SaltStack, we introduced the concept of grains.  Grains are bits of information that the Salt minion can pull off the system it’s running on.  SaltStack also has the concept of pillars.  Pillars are sets of data that we can push to the minions and then consume in state or managed files.  When you couple this with the ability to template with Jinja, it becomes VERY powerful.  Let’s take a quick look at how we can start using pillars and templates. 

Prep the Salt Master
The first thing we need to do is to tell Salt that we want to use Pillars.  To do this, we just tell the Salt master where the pillar state files are.  Let’s edit the salt master config file…
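On a default install, that’s /etc/salt/master:

    vi /etc/salt/master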

Now find the ‘Pillar Settings’ section and uncomment the pillar_roots block shown below…

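Uncommented, the relevant block looks like this (this is the stock block that ships commented out in the master config):

    pillar_roots:
      base:
        - /srv/pillar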
Then restart the salt-master service…
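On a systemd-based distro like CentOS 7, that’s:

    systemctl restart salt-master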

So we just told Salt that it should use the ‘/srv/pillar/’ directory for pillar info, so now we need to go create it…
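A quick mkdir takes care of that:

    mkdir -p /srv/pillar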

Now we’re all set. Pillar information is distributed to the minions in much the same way that states are applied. We define a ‘top.sls’ file in the pillar directory and tell it what information to send where. Let’s create an example top file now…

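Based on the description that follows, the top file is simply:

    base:
      '*':
        - kube_master
        - kube_minions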
You can probably figure out that this means Salt should distribute the pillars ‘kube_master’ and ‘kube_minions’ to all (*) of the minions.  So now let’s define these files…

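Roughly like this – two plain lists (the hostnames are from my lab):

    # /srv/pillar/kube_master.sls
    kube_master:
      - kubmasta

    # /srv/pillar/kube_minions.sls
    kube_minions:
      - kubminion1
      - kubminion2
      - kubminion3
      - kubminion4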
So nothing terribly fancy in these two files – just a couple of lists that define my Kubernetes master and minions. Now that we have this configured, let’s run our state against the minions again…
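Running highstate again refreshes the pillar data on the minions as part of the run:

    salt '*' state.highstate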

Once it’s run, let’s use the following command to query all of the pillars that exist on kubminion1…
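The pillar module has an items function for exactly this:

    salt 'kubminion1' pillar.items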

Just like with grains, there are some predefined pillars.  However, notice at the top we now have the two lists we just created…

[Screenshot: pillar.items output showing the kube_master and kube_minions lists at the top]
Cool, huh? But now what do we do with this data? This is where SaltStack starts to shine. We can use this data (or grain data) to template configuration files. This is huge! Let’s look at an example so you can see what I’m talking about. Take the master’s systemd service definition for the kube controller manager. It looks like this…
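A sketch of the unit file – flag names moved around a lot in early Kubernetes releases, so treat the options below as illustrative:

    [Unit]
    Description=Kubernetes Controller Manager

    [Service]
    ExecStart=/usr/bin/kube-controller-manager \
        --machines=kubminion1,kubminion2,kubminion3,kubminion4 \
        --master=http://127.0.0.1:8080

    [Install]
    WantedBy=multi-user.target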

Rather than pre-populating that config file ourselves, why don’t we pull that info in from the pillar? To do that, we use the Jinja templating language, which is integrated directly into SaltStack. Let’s change our file to this…
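The hardcoded minion list becomes a Jinja expression that renders the ‘kube_minions’ pillar (the join filter turns the list into a comma-separated string):

    ExecStart=/usr/bin/kube-controller-manager \
        --machines={{ pillar['kube_minions'] | join(',') }} \
        --master=http://127.0.0.1:8080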

Since we used Jinja in the file, we need to tell Salt about it; otherwise it won’t apply the Jinja logic and will copy the file as is. This is done by modifying the file.managed statement to look like this…
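The key addition is the ‘template: jinja’ argument (the source path here is from my state tree and is illustrative):

    /usr/lib/systemd/system/kube-controller-manager.service:
      file.managed:
        - source: salt://kubmasta/kube-controller-manager.service
        - template: jinja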

Now all we need to do is rerun our state…
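Targeting just the master this time (or simply rerun against everything):

    salt 'kubmasta' state.highstate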

After it completes, we can check out the config file on the host…
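A quick cat on the minion shows the rendered result:

    cat /usr/lib/systemd/system/kube-controller-manager.service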

Pretty awesome huh?  We can use Jinja to pull all sorts of data from pillars or grains directly into the configuration files.  Here are a couple more examples…

Pulling in a Grain from the host
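In a templated file, that’s just:

    {{ grains['host'] }}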

In this case, we’re telling the minion to pull its ‘host’ grain into the template. This returns the exact value of the host (not the FQDN), but the same syntax can be used to pull any grain that’s defined.

Pulling in a specific attribute from a pillar
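This uses Salt’s pillar.get function from Jinja, with its colon-delimited path syntax:

    {{ salt['pillar.get']('kube_master:ipaddress') }}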

This shows us how to pull a specific variable out of a pillar.  Take for example this pillar…
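An illustrative pillar with a couple of nested attributes:

    kube_master:
      ipaddress: 192.168.127.100
      protocol: http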

If I want to pull just the IP address, I can use the exact example shown above. If I wanted the protocol variable, my statement would look like…
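Same function, different leaf:

    {{ salt['pillar.get']('kube_master:protocol') }}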

As you can see, this makes pulling nested data out of a pillar very straightforward.

Pulling pillar information that’s host specific

This is a really interesting and powerful combo.  Say you have information defined in a pillar about your entire cluster.  While this is handy to have on all hosts, sometimes you just want information relevant for the host you’re working on at the time.  For instance, look at this pillar…
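Here’s an illustrative pillar keyed by hostname, followed by the grain-driven lookup (the hostnames and IPs are assumptions):

    # pillar
    kube_hosts:
      kubminion1:
        ipaddress: 192.168.127.101
      kubminion2:
        ipaddress: 192.168.127.102

    # in a template
    {{ salt['pillar.get']('kube_hosts:' ~ grains['host'] ~ ':ipaddress') }}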

When you’re deploying your config files, you want to build a generic template that can be used on each host.  The above syntax tells Salt to pull the value of ‘ipaddress’ from ‘kube_hosts:<the host you’re running on currently>’.  So we can use grain data to pull in more specific data from the pillars. 

In these examples, we’re really just using Jinja to call Salt functions. Jinja can also be used to provide logic in state files. For example, take a look at this example from the Kubernetes GitHub repo…
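A snippet along the lines of what their Salt tree did at the time – using the os_family grain to pick paths (paraphrased from memory, not an exact copy):

    {% if grains['os_family'] == 'RedHat' %}
    {% set environment_file = '/etc/sysconfig/docker' %}
    {% else %}
    {% set environment_file = '/etc/default/docker' %}
    {% endif %}

    {{ environment_file }}:
      file.managed:
        - source: salt://docker/docker-defaults
        - template: jinja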

Above you can see that they’re using the data from the grains to determine things like config file location and what packages need to be installed. 

So again – this was just a taste, but I hope you’re starting to see that all of these components combine to make SaltStack a very powerful tool.


Salt – The basics

In my last post, I showed you how I automated my Kubernetes lab build-out using Salt. This cut the build time by more than 70% (I’m guessing here, but you get the point). In addition, I’ve been making all of my changes for the cluster in Salt rather than applying them directly to the hosts. Not only does this give me better documentation, it allows me to apply changes across multiple nodes very quickly. You might be wondering why I chose Salt since I’ve blogged about Chef in the past. The answer isn’t cut and dried, but Salt just made sense to me. On top of that, there is VERY good documentation out there about all of the states and state functions, so it’s pretty easily consumable. As I walk through the process I used to create the lab build scripts, I hope you’ll start to catch on to some of the reasons that made me decide to learn Salt.

Let’s start by taking a look at my GitHub repo…

[Screenshot: GitHub repo file listing]
While there’s a lot here, the pieces we really want to talk about are the files that end in ‘.sls’. These are what Salt calls ‘state’ files. To quote their own page, Salt defines state files as…

“The core of the Salt State system is the SLS, or SaLt State file. The SLS is a representation of the state in which a system should be in, and is set up to contain this data in a simple format. This is often called configuration management.”

So that being said, we can assume that each of these files represents a ‘state’ of configuration that we can apply to end hosts.  In my case, the master (kubbuild) applies these states to the minions (kubmasta and kubminion[1-4]) to give me the desired end result of a working Kubernetes lab.  State files can be applied to hosts by issuing the following command on the master…
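For example, to apply the ‘baseinstall’ state to everything:

    salt '*' state.sls baseinstall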

This will apply the state you specify to all of the minions. If I want to apply a state file to a single minion, I can do so by removing the wildcard and specifying an exact minion name or a glob for matching. Here are some examples…
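An exact name and a glob, using state names from this repo:

    salt 'kubminion1' state.sls minion1
    salt 'kubminion*' state.sls baseinstall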

So while this makes good sense, it’s not exactly what I did in the last post to apply state to my minions. I ran the command…
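That command was Salt’s highstate:

    salt '*' state.highstate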

This causes the nodes to examine the file ‘top.sls’ to determine what states should be applied to what minions.  Let’s look at my top.sls file to see what we’re talking about…
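Reconstructed from the description below, it looked something along these lines (the grain-matched entries for the remaining minions follow the same pattern):

    base:
      '*':
        - baseinstall
      '*masta*':
        - masterinstall
      'host:kubminion1':
        - match: grain
        - minion1
      'host:kubminion2':
        - match: grain
        - minion2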

The goal of the top file is to apply a state, or a series of states, to a series of minions based on matching criteria. We can see that the 2nd and 3rd lines tell Salt that all (*) hosts should have the state ‘baseinstall’ applied to them.

Note: You’ll notice that you don’t need to add the file extension (.sls) when referencing state files in Salt. Keep this in mind when writing your state files.

The 4th and 5th lines do a glob match for hosts with ‘masta’ in the name and apply the ‘masterinstall’ state to them. The rest of the config uses a different type of matching, referred to as ‘grain matching’. Salt uses the concept of ‘grains’ to define specific attributes of a particular minion. By default, there are a number of predefined grains that you can query against. To see what grains are available, we can use the following command to query them from a particular minion…
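The grains module has an items function for this:

    salt 'kubmasta' grains.items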

This output will show us a list of grains on the kubmasta server.  The output looks like this…

[Screenshot: grains.items output for kubmasta]
This is just a section of the output; there are ~50 predefined grains that you can query against. So in our case, we applied state in the top file based on the grain called ‘host’. We need to tell the top file that we want to match on a grain and then tell it what states to run against those matches. Pretty slick, huh? You can see how powerful this would be if you needed to apply a patch against all ‘CentOS 7’ hosts or something along those lines.

So now that we know how the state files are applied, let’s look at the actual state files.  Let’s take one of the minion configs to use as an example and walk through a couple of the sections…
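Here’s the file-download piece, reconstructed from the description below (note the two-line ‘file:’ / ‘- managed’ form, which comes up again in a moment):

    /etc/sysconfig/docker:
      file:
        - managed
        - source: salt://minion1/docker
        - user: root
        - group: root
        - mode: 644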

In this example, we’re pulling the Docker configuration file down from the master to the host. Recall that in our last post we talked about the master ‘file_roots’ that we defined as ‘/srv/salt/’. This tells the minion to download the file ‘/srv/salt/minion1/docker’ and place it on the minion as ‘/etc/sysconfig/docker’. We also set some attributes of the file, such as owner, group, and permissions.

Before we move on, let’s talk about the layout of this state file as that was something that was initially confusing to me…

The first thing we need to pay attention to is the ID declaration. In this case, I’m defining the file name I want to use as the target. This is a little confusing because the ID here not only identifies this state, it’s also the target destination of the state, AKA where I want to put the file. In other cases we’ll see later, the ID is nothing more than a unique identifier and plays no part in the actual configuration.

Note: An ID can NOT be duplicated, either within a single state file or across a set of state files executed on the same minion.

Once we define an ID, we need to tell it what state we want to use.  In this case, we’re manipulating a file so we want to use the ‘file’ state.

Note: SaltStack has some great doco out there and this doc in particular is awesome since it lists all of the states inherently built into Salt –> http://docs.saltstack.com/en/latest/ref/states/all/

It’s important to note here that the ‘file:’ and ‘- managed’ lines above could be combined into a single line that reads ‘file.managed:’. I didn’t know that at first, and that syntax makes a lot more sense to me, so I think I’ll use it going forward.

So now that we know what state we’re using, we can use one of that state’s functions. In this case, we use the managed function, which allows you to download files from the master and place them on the minions. The rest of the configuration consists of function arguments specifying what file we want to place and the associated ownership.

Now that we know the basic outline, let’s check out some of the other states that the minion1.sls file uses and see what else Salt can do…
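First, a combined package-and-service state, sketched from the description below:

    docker:
      pkg:
        - installed
      service:
        - running
        - enable: True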

Here we have a state that does two things. The ID is ‘docker’, and we’re applying both the pkg state to install Docker and the service state to enable and start the process. Next, here’s an example of how to make a directory on the minion…
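A directory is just the file state’s directory function (the path here is illustrative):

    /opt/kubernetes:
      file.directory:
        - user: root
        - group: root
        - mode: 755
        - makedirs: True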

I hope by this point the state configuration layout is making more sense to you. Let’s look at a couple of other states from the masterinstall.sls file to see a few more things Salt can do…

Create a symlink
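Sketched with illustrative paths:

    /usr/bin/kubectl:
      file.symlink:
        - target: /opt/kubernetes/kubectl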

Running a command on the minion
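The cmd state’s run function executes arbitrary commands; the ID here is just a unique label:

    reload_units:
      cmd.run:
        - name: systemctl daemon-reload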

Downloading and un-tarring a file on the minion
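One way to do it with cmd.run (the URL and paths are placeholders; Salt also has a dedicated archive state for this):

    get_etcd:
      cmd.run:
        - name: 'curl -L <tarball-url> -o /tmp/etcd.tar.gz && tar -xzf /tmp/etcd.tar.gz -C /opt/'
        - unless: 'test -d /opt/etcd'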

So as you can see, Salt is pretty flexible, and this isn’t even scratching the surface of what Salt can do. The next item up for investigation is what Salt calls pillars, which look pretty awesome.

I know this was a brief intro, but I wanted to get some info out there after my last post so you could start digging into Salt some more.  I’m hoping to have some time later this week to dig more into Salt pillars and as I learn more I’ll do my best to blog about them.

