DevOps


Since starting to play with golang I’ve run into a couple of interesting items I thought worth writing about.  For those of you who are seasoned developers, I assure you, this won’t be interesting.  But for those of us who are just getting started, this might be worth reading.

Pointers
Nothing super exciting here if you’ve used them in other languages, but it’s worth talking about since it can be confusing.  Pointers are really just a way to gain access to the ‘real’ variable when you aren’t in the function that defines it.  Put another way, when you call a function that takes a variable, you’re only giving that function a copy of the variable, not the real variable.  Pointers allow us to reference the actual location in memory where the value is stored rather than the value itself.  Examples always make this clearer.  Take for instance this example of code…
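Something like this (a minimal sketch – the function and variable names follow the explanation below, and the values are just for illustration)…

package main

import "fmt"

// rename gets a copy of the string, so anything we do to it
// here never makes it back to the caller.
func rename(myname string) {
	myname = "john"
	fmt.Println("Inside rename:", myname)
}

// pointerrename receives a pointer to the string, so writing
// through it changes the caller's actual variable.
func pointerrename(myname *string) {
	*myname = "john"
}

func main() {
	name := "bob"

	rename(name)
	fmt.Println("After rename:", name)

	pointerrename(&name)
	fmt.Println("After pointerrename:", name)
}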

If we run this sample code, the output we’ll get will look like this…
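Inside rename: john
After rename: bob
After pointerrename: john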

I define two functions: one called ‘rename’ that takes a variable called ‘myname’ of type string, and one called ‘pointerrename’ that takes a variable called ‘myname’ which is a string pointer.  We denote the pointer by using a ‘*’.  Note that in the main function, when I want to pass the pointer to the function, I use a ‘&’.  The ‘&’ tells golang to find the location of the variable and pass the pointer to the function, which is expecting to receive a pointer of type string.

Structs
Structs allow you to sort of create your own variable type.  And when I say your own variable type, I mean more like a combination of multiple variables into one.  Take for instance this example…
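Something along these lines (a minimal sketch – the struct and field names are mine, picked to line up with the method examples further down)…

package main

import "fmt"

// indexcard combines multiple variables into one type.
type indexcard struct {
	name string
	age  int
}

func main() {
	bob := indexcard{name: "Bob", age: 50}
	fmt.Println("Name:", bob.name)
	fmt.Println("Age:", bob.age)
}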

The output from this program will be…
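Name: Bob
Age: 50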

So they’re a sort of nice way to keep track of data that falls into key/value pairs.

Methods
Once you have a struct, you can define a method associated with it.  What distinguishes methods from functions is the receiver: the method declares, right in its definition, the type it operates on.  Take this example for instance…
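Here’s a sketch of what that looks like (the ages and names are just for illustration)…

package main

import "fmt"

type indexcard struct {
	name string
	age  int
}

// makeolder returns a new age without touching the struct itself.
func (person *indexcard) makeolder() int {
	return person.age + 5
}

// makeyounger doesn't return anything – it updates the struct
// directly through the pointer receiver.
func (person *indexcard) makeyounger() {
	person.age = person.age - 5
}

func main() {
	bob := indexcard{name: "Bob", age: 50}

	fmt.Println("Older age:", bob.makeolder())

	bob.makeyounger()
	fmt.Println("Age is now:", bob.age)
}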

Here we have the same struct called ‘indexcard’, but we also have a couple of methods related to the struct.  Notice that these methods define the type of struct they are receiving: a variable named ‘person’ of type ‘indexcard’.  Notice that the ‘person’ variable is a pointer reference.  Also note that we aren’t using the ‘&’ when we call the methods from the main function.  Golang is smart enough to do the pointer magic for us when calling methods.

So what does the output of this look like?…
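Older age: 55
Age is now: 45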

As you can see, we called the methods in two different manners from the main function.  To make Bob older, we called the method and printed the integer it returns, which is Bob’s existing age plus 5.  To make Bob younger, we call the method directly.  The ‘makeyounger’ method doesn’t return a value; instead it updates the struct directly through the pointer.  When we print Bob’s age again back in the main function, we get his much younger age of 45.

Slices
Slices are sort of the array of golang.  I mean – they’re really sort of a wrapper around an array.  The main difference between an array and a slice in golang is that you define the size of an array, whereas with a slice you don’t.  There’s tons of info out there on the differences, but from what I can discern, that’s really it.  So that being said, here’s a quick example…
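(This is a minimal sketch of my own, just to show the difference.)

package main

import "fmt"

func main() {
	// An array's size is fixed and is part of its type.
	var names [3]string
	names[0] = "bob"

	// A slice has no declared size – it grows as you append to it.
	people := []string{"bob", "john"}
	people = append(people, "matt")

	fmt.Println(names, "length:", len(names))
	fmt.Println(people, "length:", len(people))
}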

Slices work much like arrays do but, in my opinion, are easier to work with.

Embedded Types
Embedded types are a way to embed a struct within another struct.  When you do this, you can reference values of the embedded struct directly.  An example will clear up any remaining confusion…
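Here’s a minimal sketch (reusing the ‘indexcard’ struct from earlier; the ‘employee’ type is just for illustration)…

package main

import "fmt"

type indexcard struct {
	name string
	age  int
}

// employee embeds indexcard, so every employee also
// carries the indexcard fields.
type employee struct {
	indexcard
	title string
}

func main() {
	bob := employee{
		indexcard: indexcard{name: "Bob", age: 45},
		title:     "engineer",
	}

	// Fields of the embedded struct can be referenced directly.
	fmt.Println(bob.name, bob.age, bob.title)
}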

So not too much new here, just showing that we can create structs that are composed of other structs.

That’s all for tonight, more this weekend hopefully. 


I’ve been playing more with SaltStack recently and I realized that my first attempt at using Salt to provision my cluster was a little shortsighted.  The problem was, it only worked for my exact lab configuration.  After playing with Salt some more, I realized that the Salt configuration could be MUCH more dynamic than what I had initially deployed.  With that in mind, I developed a set of Salt states that I believe can be consumed by anyone wanting to deploy a Kubernetes lab on bare metal.  To do this, I used a few more of the features that SaltStack has to offer.  Namely, pillars and the built-in Jinja templating language.

My goal was to let anyone with some Salt experience quickly deploy a fully working Kubernetes cluster.  Better yet, the Salt configuration can be tuned to your specific environment.  Have 3 servers you want to try Kubernetes on?  Have 10?  All you need to do is have some servers that meet the following prerequisites and tune the Salt config to your environment.

Environment Prerequisites
-You need at least 2 servers, one for the master and one for the minion (might work with 1 but I haven’t tried it)
-All servers used in the config must be resolvable in local DNS
-All servers used as minions need to have a subnet routed to them for the docker0 bridge IP space (I use a /27 but you decide based on your own requirements)
-You have Salt installed and configured on the servers you wish to deploy against.  The Salt master needs to be configured to host files and pillars.  You can see the pillar configuration I did in this post and the base Salt config I did in this post.
-You have Git and Docker installed on the server you intend to build the Kubernetes binaries on (yum -y install docker git)

That’s all you need.  To prove that this works, I built another little lab to try and deploy this against.  It looks like this…

[Image: lab topology diagram – three k8stest servers on 192.168.127.0/24 with /27 allocations routed to the minions]
In this example, I have 3 servers.  k8stest1 will be used as the Kubernetes master and the remaining two will act as Kubernetes minions.  From a Salt perspective, k8stest1 will be the master and all three servers (including k8stest1) will be minions.  Notice that all the servers live on the 192.168.127.0/24 segment and I’ve routed space from 172.10.10.0/24 to each Kubernetes minion in /27 allocations.  Let’s verify that Salt is working as I expect by ‘pinging’ each minion from the master…

[Image: salt test.ping output – each minion returning True]
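For reference, that check is just Salt’s built-in test ping, run from the master…

salt '*' test.ping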
Cool – So Salt can talk to all the servers we’re using in this lab.  The next step is to download and build the Kubernetes code we’ll be using in this lab.  To do this, we’ll clone the Kubernetes GitHub repo and then build the binaries…

Note: Kubernetes is changing VERY quickly.  I’ve been doing all of my testing based off of the .13 branch and that’s what all of these scripts are built to use.  Once things settle down I’ll update these posts with the most recent stable code but at the time of this writing the stable version I found was .13.  Make sure you clone the right branch!
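Cloning looks something like this (the branch name is my best guess for the .13 release – double-check it against the repo before you clone)…

git clone https://github.com/GoogleCloudPlatform/kubernetes.git
cd kubernetes
git checkout release-0.13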

Once the code is downloaded, build the binaries using these commands…
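From the top of the repo, the dockerized build script does the trick (script names have moved around between releases, so treat this as a sketch)…

./build/release.sh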

Again – this will take some time.  Go fill up the coffee cup and wait for the code to finish building.  (Hit Y to accept downloading the golang image before you walk away though!)

Once the binaries are built, we can clone down my Salt build scripts from my repository…
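The URL below is a placeholder for the actual repo address (note that git will only clone into ‘/srv/’ if it’s empty or doesn’t exist yet)…

git clone <salt-k8s-repo-url> /srv/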

This will put the code into the ‘/srv/’ directory and create it if it doesn’t exist.  Next up, we need to move the freshly built binaries into the salt state directory so that we can pull them from the client…
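The dockerized build drops the binaries under ‘_output/dockerized/bin/linux/amd64’ inside the repo, so the copy looks roughly like this (the destination path is an assumption – use whatever directory the Salt states serve binaries from)…

cp ~/kubernetes/_output/dockerized/bin/linux/amd64/kube* /srv/salt/kubebinaries/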

The last step before we build the cluster is to tell the Salt configuration what our environment looks like.  This is done by editing the ‘kube_data’ pillar and possibly changing the Salt top.sls file to make sure we match the correct hosts.  Let’s look at the ‘kube_data’ pillar first.  It’s located at ‘/srv/pillar/kube_data.sls’…

So this is a sample pillar file that I used to build the config in my initial lab.  You’ll notice that I have 4 minions and 1 master defined.  To change this to look like our new environment, I would update the file to look like this…
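Here’s what that might look like for this lab (keep whatever key structure the sample file uses – the minion IPs and docker0 bridge assignments below are examples from my /27 allocations)…

kube_master:
  ipaddress: 192.168.127.100

kube_minions:
  k8stest2:
    ipaddress: 192.168.127.101
    dockerbridge: 172.10.10.1/27
  k8stest3:
    ipaddress: 192.168.127.102
    dockerbridge: 172.10.10.33/27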

So not too hard, right?  Let’s walk through what I changed.  I had said that I wanted to use k8stest1 (192.168.127.100) as the master, so I changed the relevant IP address under the ‘kube_master’ definition to match k8stest1.  I have 2 minions in this lab, so I removed the other 2 definitions and updated the first 2 to match my new environment.  Notice that you define the minion name and then define its IP address and docker0 bridge as attributes.  If you had more minions, you could define them in here as well.

Note: The minion names have to be exact and do NOT include the FQDN, just the host part of the name.

The last thing we have to look at is the ‘top.sls’ file for salt.  Let’s look at it to see what the default is…
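It looks something like this (a sketch – the state names are assumptions based on how the repo is laid out)…

base:
  '*master*':
    - kube_master
  '*minion*':
    - kube_minion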

Notice that if you happen to have named your servers according to role, you might not have to change anything.  If your host names include ‘master’ and ‘minion’ based on the role they’ll play in the Kubernetes cluster, then you’re already done.  If not, we’ll need to change this to match our lab.  Let’s update it now…
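Something like this, using our lab’s hostnames…

base:
  'k8stest1*':
    - kube_master
  'k8stest[23]*':
    - kube_minion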

Above you can see that we used glob matching and specified the hostnames we’re using for the master and minion roles.  Pretty straightforward – just make sure that you match the right server to the right role based on the defaults.

Now all we have to do is let Salt take care of building the Kubernetes cluster for us…
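That’s just a highstate run against all of the minions…

salt '*' state.highstate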

Once Salt has finished executing, we should be able to check and see what the Kubernetes cluster is doing…

[Image: kubectl output showing the cluster pods being deployed]
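The check itself is just kubectl against the master (depending on where you run it from, you may need to point it at the master’s API address)…

kubectl get pods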

Kubernetes is now busily working to deploy all of the pods.  The default build will provision SkyDNS, Heapster, and the FluentD/ElasticSearch logging combo.  After all of the pods are deployed, we can access those cluster add-ons at the following URLs…

Heapster

[Image: Heapster UI]

FluentD/ElasticSearch

[Image: Kibana dashboard showing the FluentD/ElasticSearch logs]

So there you have it.  Pretty easy to do, right?  You should have a fully operational Kubernetes cluster at this point.  I’d love to have some other people try this out and give me feedback on it.  Let me know what you think!


In our last post about SaltStack, we introduced the concept of grains.  Grains are bits of information that the Salt minion can pull off the system it’s running on.  SaltStack also has the concept of pillars.  Pillars are sets of data that we can push to the minions and then consume in state or managed files.  When you couple this with the ability to template with Jinja, it becomes VERY powerful.  Let’s take a quick look at how we can start using pillars and templates. 

Prep the Salt Master
The first thing we need to do is to tell Salt that we want to use Pillars.  To do this, we just tell the Salt master where the pillar state files are.  Let’s edit the salt master config file…
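On most installs the master config lives at /etc/salt/master…

vi /etc/salt/master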

Now find the ‘Pillar Settings’ section and uncomment the line I have highlighted in red below…

[Image: the ‘Pillar Settings’ section of the salt master config file with the pillar_roots lines highlighted]
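Uncommented, that section ends up looking like this…

pillar_roots:
  base:
    - /srv/pillar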
Then restart the salt-master service…
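On a systemd-based distro that’s…

systemctl restart salt-master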

So we just told Salt that it should use the ‘/srv/pillar/’ directory for pillar info, so now we need to go create it…
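mkdir -p /srv/pillar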

Now we’re all set.  Pillar information is distributed to the minions in much the same way that states are: we define a ‘top.sls’ file in the pillar directory and tell it what information to send where.  Let’s create an example top file now…

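Here’s an example (the pillar file names match the two files we create next)…

base:
  '*':
    - kube_master
    - kube_minions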
You can probably figure out that this means Salt should distribute the pillars ‘kube_master’ and ‘kube_minions’ to all (*) of the minions.  So now let’s define these files…

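Mine look something like this (the minion names come from my lab; the master name here is a stand-in – use your own hostnames)…

# /srv/pillar/kube_master.sls
kube_master:
  - kubmasta1

# /srv/pillar/kube_minions.sls
kube_minions:
  - kubminion1
  - kubminion2
  - kubminion3
  - kubminion4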
So nothing terribly fancy in these two files, just a couple of lists that define my Kubernetes minions and masters.  So now that we have this configured, let’s run our state against the minions again…
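It doesn’t hurt to refresh the pillar data on the minions first, then rerun the state…

salt '*' saltutil.refresh_pillar
salt '*' state.highstate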

Once it’s run, let’s use the following command to query all of the pillars that exist on kubminion1…
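salt 'kubminion1' pillar.items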

Just like with grains, there are some predefined pillars.  However, notice at the top we now have the two lists we just created…

[Image: pillar.items output with the kube_master and kube_minions lists at the top]
Cool huh?  But now what do we do with this data?  This is where SaltStack starts to shine.  We can use this data (or grain data) to template configuration files.  This is huge!  Let’s look at an example so you can see what I’m talking about.  Let’s take the master’s systemd service definition for the kube controller manager.  It looks like this…
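Roughly like this (a sketch – the flags and paths are assumptions based on the 0.13-era binaries, and ‘kubmasta1’ is the stand-in master name from the pillar above; note the hard-coded minion list)…

[Unit]
Description=Kubernetes Controller Manager

[Service]
ExecStart=/usr/bin/kube-controller-manager \
    --master=http://kubmasta1:8080 \
    --machines=kubminion1,kubminion2,kubminion3,kubminion4
Restart=on-failure

[Install]
WantedBy=multi-user.target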

Rather than pre-populating that config file, why don’t we pull that info in from the pillar?  To do that, we use the Jinja templating language, which is integrated directly into SaltStack.  Let’s change our file to this…
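The hard-coded list becomes a Jinja expression that reads the ‘kube_minions’ pillar we built above (same sketch, just templated)…

[Unit]
Description=Kubernetes Controller Manager

[Service]
ExecStart=/usr/bin/kube-controller-manager \
    --master=http://kubmasta1:8080 \
    --machines={{ pillar['kube_minions']|join(',') }}
Restart=on-failure

[Install]
WantedBy=multi-user.target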

Since we used Jinja in the file, we need to tell Salt about it; otherwise it won’t apply the Jinja logic and will copy the file as is.  This is done by modifying the file.managed statement to look like this…
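Something like this (the state ID and paths are assumptions – the key addition is the ‘template: jinja’ line)…

kube-controller-manager-service:
  file.managed:
    - name: /usr/lib/systemd/system/kube-controller-manager.service
    - source: salt://kube_master/kube-controller-manager.service
    - template: jinja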

Now all we need to do is rerun our state…
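salt '*' state.highstate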

After it completes, we can check out the config file on the host…
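cat /usr/lib/systemd/system/kube-controller-manager.service

The ExecStart line should now show the real minion list pulled from the pillar…

--machines=kubminion1,kubminion2,kubminion3,kubminion4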

Pretty awesome huh?  We can use Jinja to pull all sorts of data from pillars or grains directly into the configuration files.  Here are a couple more examples…

Pulling in a Grain from the host
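{{ grains['host'] }}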

In this case, we’re telling the minion to pull its ‘host’ grain into the template.  This returns the exact value of the host (not the FQDN), but the same syntax can be used to pull any grain that’s defined.

Pulling in a specific attribute from a pillar
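(The ‘kube_api’ pillar here is a hypothetical example, shown in full below.)

{{ pillar['kube_api']['ipaddress'] }}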

This shows us how to pull a specific variable out of a pillar.  Take for example this pillar…
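kube_api:
  ipaddress: 192.168.127.100
  protocol: http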

If I want to just pull the ip address, I can use the exact example as shown above.  If I wanted the protocol variable, my statement would look like…
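{{ pillar['kube_api']['protocol'] }}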

As you can see, this makes pulling nested data out of a pillar very straightforward.

Pulling pillar information that’s host specific
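One way to write that (‘kube_hosts’ is the example pillar shown below)…

{{ salt['pillar.get']('kube_hosts:' ~ grains['host'] ~ ':ipaddress') }}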

This is a really interesting and powerful combo.  Say you have information defined in a pillar about your entire cluster.  While this is handy to have on all hosts, sometimes you just want information relevant for the host you’re working on at the time.  For instance, look at this pillar…
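kube_hosts:
  kubminion1:
    ipaddress: 192.168.127.101
  kubminion2:
    ipaddress: 192.168.127.102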

When you’re deploying your config files, you want to build a generic template that can be used on each host.  The above syntax tells Salt to pull the value of ‘ipaddress’ from ‘kube_hosts:<the host you’re running on currently>’.  So we can use grain data to pull in more specific data from the pillars. 

In these examples, we’re really just using Jinja to call Salt functions.  Jinja can also be used to provide logic in state files.  For example, take a look at this example from the Kubernetes GitHub repo…
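It goes something like this (paraphrased from the Kubernetes salt tree – the exact file has likely changed since)…

{% if grains['os_family'] == 'RedHat' %}
{% set environment_file = '/etc/sysconfig/kubelet' %}
{% else %}
{% set environment_file = '/etc/default/kubelet' %}
{% endif %}

pkg-core:
  pkg.installed:
    - names:
      - curl
{% if grains['os_family'] == 'RedHat' %}
      - python
{% else %}
      - apt-transport-https
      - python-apt
{% endif %}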

Above you can see that they’re using the data from the grains to determine things like config file location and what packages need to be installed. 

So again – this was just a taste, but I hope you’re starting to see that all of these components combine to make SaltStack a very powerful tool.

