
Kubernetes offers several different authentication mechanisms or plugins.  The goal of this post is to review each of them and provide a brief example of how they work.  In addition, we’ll talk about the ‘kubeconfig’ file and how it’s used in association with authentication plugins.

Note: In theory there’s no requirement to use any of these authentication plugins.  With the proper configuration, the API server can accept requests over HTTP on any given insecure port you like.  However, doing so is insecure and somewhat limiting, since some features of Kubernetes rely on authentication, so it’s recommended to use one or more of the following plugins.

Kubernetes offers 3 default authentication plugins as of version 1.0.  These plugins are used to authenticate requests against the API server.  Since they’re used for communication to the API, that means that they apply to both the Kubelet and Kube-Proxy running on your server nodes as well as any requests or commands you issue through the kubectl CLI tool.  Let’s take a look at each option…

Client Certificate Authentication
This is the most common method of authentication and is widely used to authenticate nodes back to the master.  This configuration option relies on valid certificates from the client being presented to the API server, which has a defined CA certificate.  The most common method for achieving this is to generate certificates using the ‘make-ca-cert’ shell script from the Kubernetes GitHub page located here…

https://github.com/kubernetes/kubernetes/blob/master/cluster/saltbase/salt/generate-cert/make-ca-cert.sh

To use this, you run the script and pass it the details of your environment.  In my case, running the script looks something like this…
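As a rough sketch, assuming the master’s IP of 192.168.127.100 and that the script defaults are left alone so the certificates land in /srv/kubernetes (the extra names and the service IP below are placeholders you’d adjust for your own cluster):

# first argument is the IP to put in the server cert, the second is a comma
# separated list of extra subject alternative names (placeholders here)
bash make-ca-cert.sh 192.168.127.100 IP:10.100.0.1,DNS:kubernetes,DNS:kubernetes.default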

After running the script, head on over to the ‘/srv/kubernetes’ directory and you should see all of the certs required…

[screenshot: the generated certificates in /srv/kubernetes]

These will be the certificates we use on the API server and on any remote client (Kubelet or kubectl) that needs to authenticate against the API server.  To tell the API server to use certificate authentication, we need to pass the process (or the hyperkube container in my case) these options…
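In my case that ends up being something like this.  The file names match what the script above generated in /srv/kubernetes, and the secure port matches what we’ll use later in the kubeconfig examples (6443 is also the default):

--client-ca-file=/srv/kubernetes/ca.crt
--tls-cert-file=/srv/kubernetes/server.cert
--tls-private-key-file=/srv/kubernetes/server.key
--secure-port=6443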

Note: In addition, since I run the API server using the hyperkube container image, I also need to make sure that the correct volumes are mounted to this container so it can consume these certificates.
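For example, something along these lines; the image tag and the remaining API server flags are placeholders and will vary based on how you run hyperkube, the key piece being the volume mount of the cert directory:

# illustrative only; mount the cert directory into the container read-only
docker run -d --net=host \
  -v /srv/kubernetes:/srv/kubernetes:ro \
  gcr.io/google_containers/hyperkube:v1.1.7 \
  /hyperkube apiserver \
  --client-ca-file=/srv/kubernetes/ca.crt \
  --tls-cert-file=/srv/kubernetes/server.cert \
  --tls-private-key-file=/srv/kubernetes/server.key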

HTTP Basic Authentication
Another option for authentication is to use HTTP basic authentication.  In this mode, you provide the API server with a CSV file containing the account information you wish for it to use.  In its current implementation these credentials last forever and cannot be modified without restarting the API server instance.  This mode is really intended for convenience during testing.  An example CSV file would look something like this…
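Each line is a password, a user name, and a numeric user ID (the values here are made up):

password123,jon,1
password456,admin,2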

Telling the API server to use HTTP basic authentication is as simple as passing this single flag to the API server…
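Assuming the file above was saved on the master as /srv/kubernetes/basicauth.csv (the path is up to you):

--basic-auth-file=/srv/kubernetes/basicauth.csv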

Token Authentication
The last option for authentication is to use Tokens.  Much like the basic authentication option, these tokens are provided to the API server in a CSV file.  The same limitations apply in regards to them being valid forever and requiring a restart of the API server to load new tokens.  These types of authentication tokens are referred to as ‘bearer tokens’ and allow requests to be authenticated by passing a token rather than a standard username/password combination.  An example CSV token file looks like this…
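Each line is a token, a user name, and a numeric user ID.  For example (the second entry is made up):

TokenofTheJon,jon,1
AnotherSecretToken,admin,2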

Token authentication is enabled on the API server by passing this single flag to the API server…
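Assuming the token file above was saved as /srv/kubernetes/tokens.csv:

--token-auth-file=/srv/kubernetes/tokens.csv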

Consuming the authentication plugins
Now that we’ve covered the different configuration options on the master, we need to know how to consume these plugins from a client perspective.  From a node (minion) side of things both the Kubelet and Kube-Proxy service need to be able to talk to the API server.  From a management perspective kubectl also needs to talk to the API server.  Luckily for us, Kubernetes has the ‘kubeconfig’ construct that can be used for both the node services as well as the command line tools.  Let’s take a quick look at a sample Kubeconfig file…

Here’s the kubeconfig I use in my SaltStack Kubernetes build for authentication on the nodes.  Let’s break this down a little bit…
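It looks something like this.  This is a sketch rather than a copy of the real file, so the cluster and user names, the server address, and the certificate paths are illustrative:

apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: /srv/kubernetes/ca.crt
    server: https://192.168.127.100:6443
  name: local-cluster
contexts:
- context:
    cluster: local-cluster
    user: kubelet
  name: kubelet-context
current-context: kubelet-context
users:
- name: kubelet
  user:
    client-certificate: /srv/kubernetes/kubecfg.crt
    client-key: /srv/kubernetes/kubecfg.key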


It’s easiest in my mind to look at this from the bottom up.  The current-context specifies which context we’re using; in this case it’s ‘kubelet-context’.  Under contexts we have a matching ‘kubelet-context’ that specifies a cluster and a user.  Both of those have matching definitions under the clusters and users sections of the file.  So what we really end up with is a context that ties the cluster definition (the API server address and its certificate settings) to the user definition (the credentials used to authenticate against that API server).

So let’s make this a little more interesting and define some more options…
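Here’s a sketch of the expanded file.  The context and cluster names are the ones I’ll reference below; the user names, the basic auth credentials, and the name of the insecure cluster entry are placeholders:

apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: /srv/kubernetes/ca.crt
    server: https://192.168.127.100:6443
  name: cluster-ssl
- cluster:
    insecure-skip-tls-verify: true
    server: https://k8stest1:6443
  name: cluster-sslskip
- cluster:
    server: http://192.168.127.100:8080
  name: cluster-insecure
contexts:
- context:
    cluster: cluster-ssl
    user: user-certauth
  name: context-certauth
- context:
    cluster: cluster-insecure
    user: user-tokenauth
  name: context-tokenauth
- context:
    cluster: cluster-sslskip
    user: user-basicauth
  name: context-basicauth
current-context: context-tokenauth
users:
- name: user-certauth
  user:
    client-certificate: /srv/kubernetes/kubecfg.crt
    client-key: /srv/kubernetes/kubecfg.key
- name: user-tokenauth
  user:
    token: tokenoftheJ0n
- name: user-basicauth
  user:
    username: jon
    password: password123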

Now let’s break that down so we can see what’s associated with what more easily…

This file defines 3 different authentication contexts. 

Context-certauth uses certificates for authentication and accesses the master through the secure URL of https://192.168.127.100:6443

Context-tokenauth uses a token for authentication and accesses the master through the insecure URL of http://192.168.127.100:8080

Context-basicauth uses basic authentication (username/password) and accesses the master through the secure URL of https://k8stest1:6443.

You likely noticed that I have two different clusters defined that both use HTTPS (cluster-ssl and cluster-sslskip).  The difference between the two is solely around the certificates being used.  In the case of cluster-ssl I need to use the IP address in the URL since the cert was built using the IP rather than the name.  In the case of cluster-sslskip, I use the DNS name but tell the system to ignore cert warnings since I may or may not have defined certs to do a proper TLS handshake with. 

So let’s see this in action.  Let’s move to a new workstation that has never talked to my lab Kubernetes cluster.  Let’s download kubectl and try to talk to the cluster…

[screenshot: kubectl failing to connect to the local default of port 8080]
So we can see that by default kubectl attempts to connect to an API server that’s running locally on HTTP over port 8080.  This is why in all of our previous examples kubectl has just worked since we’ve always run it on the master.  So while we can pass kubectl a lot of flags on the CLI, that’s not terribly useful. Rather, we can define the kubeconfig file shown above locally and then use it for connectivity information.  By default, kubectl will look in the path ‘~/.kube/config’ for a config file so let’s create it there and try again…

[screenshot: kubectl successfully returning results using ~/.kube/config]
Awesome!  It works!  Note that our file above lists a ‘current-context’.  Since we didn’t tell kubectl what context to use, the current-context from kubeconfig is used.  So let’s remove that line and then try again…

[screenshot: kubectl run with a context specified on the CLI]
Here we can see that we can pass kubectl a ‘context’ through the CLI.  In this case, we use the basic auth context, but we can use any of the other ones as well…
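For example, using the context names from the file above:

# use the basic auth context for this request
kubectl --context=context-basicauth get pods

# or the certificate context (this will fail until the client certs exist locally)
kubectl --context=context-certauth get pods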

[screenshot: kubectl complaining about missing client certificates when using the certificate context]
We can tell it’s using different contexts because it complains about not having the certs when attempting to do certificate authentication.  This can be remedied by placing the certs on this machine locally.

Kubectl vs Kube-Proxy and Kubelet
The previous example shows how to use kubeconfig with the kubectl CLI tool.  However, the same kubeconfig file is also used by the Kubelet and Kube-Proxy services when defining the authentication for talking to the API server.  In that instance, though, it appears to only be used for defining authentication.  In other words, you still need to pass the API server to the service directly through the ‘master’ or ‘api_servers’ flag.  Based on my testing, while you can define the server in kubeconfig on the nodes, that information is not used when the Kube-Proxy and Kubelet processes attempt to talk to the API server.  Bottom line being that the kubeconfig file is only used for defining authentication parameters for the Kubernetes services.  It is not used to define the API server as it is when using kubectl.
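As a rough sketch of what that ends up looking like on a node (the kubeconfig paths are illustrative; the kubeconfig supplies only the credentials while the flag supplies the API server):

# Kubelet: the API server comes from the flag, the credentials from kubeconfig
kubelet --api-servers=https://192.168.127.100:6443 --kubeconfig=/var/lib/kubelet/kubeconfig

# Kube-Proxy: same idea, using the master flag
kube-proxy --master=https://192.168.127.100:6443 --kubeconfig=/var/lib/kube-proxy/kubeconfig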

SSL Transport requirement
I want to point out that the authentication plugins only work when you are talking to the API server over HTTPS transport.  If you were watching closely, you might have noticed that I had a typo in the above configuration.  My token was defined as ‘TokenofTheJon’ but in the kubeconfig it was configured as ‘tokenoftheJ0n’ with a zero instead of the letter ‘o’.  You’ll also notice that when I used the ‘tokenauth’ context the request did not fail.  The only reason this worked was because that particular context was accessing the API through its insecure port of 8080 over HTTP.  From the Kubernetes documentation…

“Localhost Port – serves HTTP – default is port 8080, change with --insecure-port flag. – default IP is localhost, change with --insecure-bind-address flag. – no authentication or authorization checks in HTTP – protected by need to have host access”

My above example worked because my API server is using an insecure bind address of 0.0.0.0, which means anyone can access the API without authentication.  That’s certainly not a great idea, and I only have it enabled in my lab for testing and troubleshooting.  Not passing authentication across HTTP saves you from accidentally transmitting tokens or credentials in clear text.  However, you likely shouldn’t have your API server answering requests on 8080 for anything besides localhost to begin with.

I hope you see the value and uses of kubeconfig files.  Used appropriately they can certainly make your life easier.  In the next post we’ll talk more about tokens as we discuss Kubernetes secrets and service accounts.


I thought it would be a good idea to revisit my last Kubernetes build in which I was using Salt to automate the deployment.  The setup worked well at the time, but much has changed with Kubernetes since I initially wrote those state files.  That being said, I wanted to update them to make sure they worked with Kubernetes 1.0 and above.  You can find my Salt config for this build over at Github…

https://github.com/jonlangemak/saltstackv2

A couple of quick notes before we walk through how to use the repo…

-While I used the last version of this repo as a starting point, I’ve stripped this down to basics (AKA – Some of the auxiliary pods aren’t here (yet)).  I’ll be adding to this constantly and I do intend to add a lot more functionality to the defined state files.
-All of the Kubernetes related communication is unsecured.  That is – it’s all over HTTP.  I already started work on adding an option to do SSL if you so choose. 

That being said, let’s jump right into how to use this.  My lab looks like this…

[diagram: lab topology with k8stest1 as the master and k8stest2/k8stest3 as the nodes]
Here we have 3 hosts.  K8stest1 will perform the role of the master while k8stest2 and k8stest3 will play the role of nodes or minions.  Each host will be running Docker and will have a routable network segment configured on its Docker0 bridge interface.  Your upstream layer 3 device will need to have static routes pointing each Docker0 bridge network to its respective host’s physical interface (192.168.127.100x) as shown above.  In addition to these 3 hosts, I also have a separate build server that acts as the Salt master and initiates the cluster build.  That server is called ‘kubbuild’ and isn’t pictured because it only plays a part in the initial configuration.  So let’s get right into the build…

In my case, the base lab configuration looks like this…

-All hosts are running CentOS 7.1 and are fully updated
-The 3 lab hosts (k8stest[1-3]) are configured as Salt minions and are reachable by the salt-master.  If you don’t know how to do that see the section of this post that talks about configuring the Salt master and minions.

The first thing you need to do is clone my repo onto your build server…
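Something like this (the destination directory will depend on how your Salt master’s file_roots and pillar_roots are configured):

git clone https://github.com/jonlangemak/saltstackv2.git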

The next thing we want to do is download the Kubernetes binaries we need.  In earlier posts we had built them from scratch but we’re now going to download them instead.  All of the Kubernetes releases can be downloaded as a TAR file from github.  In this case, let’s work off of the 1.1.7 release.  So download this TAR file…
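Something along these lines should pull it down (double check the exact asset name on the release page):

wget https://github.com/kubernetes/kubernetes/releases/download/v1.1.7/kubernetes.tar.gz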

Next we have to unpack this file, and another TAR file inside this one, to get to the actual binaries…
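Roughly like this; the inner tarball path is an assumption based on how the releases have been packaged, so verify it against what you downloaded:

# unpack the release, then unpack the server binaries that ship inside it
tar -xzf kubernetes.tar.gz
tar -xzf kubernetes/server/kubernetes-server-linux-amd64.tar.gz
# the binaries end up under kubernetes/server/bin/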

Next we move those extracted binaries to the correct place in the Salt folder structure…

Alright – That’s the hardest part!  Now let’s go take a look at our Salt pillar configuration.  Take a look at the file ‘/srv/pillar/kube_data.sls’…

All you need to do is update this YAML file with your relevant configuration.  The above example is just a textual version of the network diagram shown earlier.  Keep in mind that you can add minions later by just simply adding onto this file – I’ll demo that later on.  Once you have this updated to match your configuration, let’s make sure we can reach our Salt minions and then execute the state to push the configuration out…
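From the Salt master, that looks something like this:

# verify connectivity to the minions, then push the defined states
salt '*' test.ping
salt '*' state.highstate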

Now sit back and enjoy a cup of coffee while Salt does its magic.  When it’s done, you should see the results of executing the states against the hosts you defined in the ‘kube_data.sls’ file…

[screenshot: Salt highstate results for each defined host]
If you scroll back up through all of the results you will likely see that it errors out on this section of the master…

[screenshot: the state run erroring on the ‘pods’ section of the master]
This is expected and is a result of the etcd container not coming up in time for the ‘pods’ state to work.  The current fix is to wait until all of the Kubernetes master containers load and then just execute the highstate again.

So let’s head over to our master server and see if things are working as expected…
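A quick check of the node list from the master…

kubectl get nodes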

[screenshot: kubectl get nodes listing the two minions]
Perfect!  Our 2 nodes have been discovered.  Since we’re going to execute the Salt highstate again, let’s update the config to include another node…
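The change is just another entry under the minion section of ‘kube_data.sls’.  Something like this, with the key names and addresses being illustrative:

# new minion entry added under the existing minion section of kube_data.sls
# (key names and addresses are illustrative)
k8stest4:
  ipaddress: 192.168.127.103
  docker0: 172.10.10.97/27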

Note: I’m assuming that the server k8stest4 has been added to the Salt master as a minion.

This run should provision the pods as well as the new Kubernetes node, k8stest4.  So let’s run the highstate again and see what we get…

When the run has finished, let’s head back to the master server and see how many nodes we have…

[screenshot: kubectl get nodes now listing three nodes]
Perfect!  The Salt config works as expected.  At this point, we have a functioning Kubernetes cluster on our hands.  Let’s make sure everything is working as expected by deploying the guest book demo.  On the master, run this command…
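Assuming the examples directory from the release tarball is sitting on the master, the command looks something like this (the exact path will depend on where you unpacked things):

# creates the guestbook services and replication controllers
kubectl create -f kubernetes/examples/guestbook/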

This will create the services and the replication controllers for the example and expose them on the node physical interfaces.  Take note of the port it’s using when you create the services…

[screenshot: the guestbook services being created, including the exposed node port]
Now we just need to wait for the containers to deploy.  Keep an eye on them by checking the pod status…
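Something like this will do it:

kubectl get pods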

[screenshot: pod status while the containers are still deploying]
Once Kubernetes finishes deploying the containers, we should see them all listed as ‘Running’…

[screenshot: all of the guestbook pods listed as ‘Running’]
Now we can try and hit the guest book front end by browsing to a minion on the specified port…

[screenshot: the guest book front end loaded in a browser]
The example should work as expected.  That’s it for now, much more to come soon!


I’ve been playing more with SaltStack recently and I realized that my first attempt at using Salt to provision my cluster was a little shortsighted.  The problem was, it only worked for my exact lab configuration.  After playing with Salt some more, I realized that the Salt configuration could be MUCH more dynamic than what I had initially deployed.  That being said, I developed a set of Salt states that I believe can be consumed by anyone wanting to deploy a Kubernetes lab on bare metal.  To do this, I used a few more of the features that SaltStack has to offer.  Namely, pillars and the built-in Jinja templating language.

My goal was to let anyone with some Salt experience be able to quickly deploy a fully working Kubernetes cluster.  That being said, the Salt configuration can be tuned to your specific environment.  Have 3 servers you want to try Kubernetes on?  Have 10?  All you need to do is have some servers that meet the following prerequisites and tune the Salt config to your environment.

Environment Prerequisites
-You need at least 2 servers, one for the master and one for the minion (might work with 1 but I haven’t tried it)
-All servers used in the config must be resolvable in local DNS
-All servers used as minions need to have a subnet routed to them for the docker0 bridge IP space (I use a /27 but you decide based on your own requirements)
-You have Salt installed and configured on the servers you wish to deploy against.  The Salt master needs to be configured to host files and pillars.  You can see the pillar configuration I did in this post and the base Salt config I did in this post.
-You have Git and Docker installed on the server you intend to build the Kubernetes binaries on (yum -y install docker git)

That’s all you need.  To prove that this works, I built another little lab to try and deploy this against.  It looks like this…

[diagram: lab topology with k8stest1 as the Kubernetes master and k8stest2/k8stest3 as minions]
In this example, I have 3 servers.  K8stest1 will be used as the Kubernetes master and the remaining two will act as Kubernetes minions.  From a Salt perspective, k8stest1 will be the master and all the servers (including k8stest1) will be minions.  Notice that all the servers live on the 192.168.127.0/24 segment and I’ve routed space from 172.10.10.0/24 to each Kubernetes minion in /27 allocations.  Let’s verify that Salt is working as I expect by ‘pinging’ each minion from the master…

[screenshot: salt test.ping responses from all three minions]
Cool – So Salt can talk to all the servers we’re using in this lab.  The next step is to download and build the Kubernetes code we’ll be using in this lab.  To do this, we’ll clone the Kubernetes GitHub repo and then build the binaries…
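Something like this; the branch name I’m using here for the .13 code is an assumption, so double check which branch or tag you want before checking it out:

git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
git checkout release-0.13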

Note: Kubernetes is changing VERY quickly.  I’ve been doing all of my testing based off of the .13 branch and that’s what all of these scripts are built to use.  Once things settle down I’ll update these posts with the most recent stable code but at the time of this writing the stable version I found was .13.  Make sure you clone the right branch!

Once the code is downloaded, build the binaries using these commands…
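From the root of the cloned repo, the Docker based build can be kicked off with something like this:

# builds the binaries inside a Docker container; this is the step that
# prompts you to download the golang build image
./build/release.sh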

Again – this will take some time.  Go fill up the coffee cup and wait for the code to finish building.  (Hit Y to accept downloading the golang image before you walk away though!)

Once the binaries are built, we can clone down my Salt build scripts from my repository…

This will put the code into the ‘/srv/’ directory and create it if it doesn’t exist.  Next up, we need to move the freshly built binaries into the salt state directory so that we can pull them from the client…

The last step before we build the cluster is to tell the Salt configuration what our environment looks like.  This is done by editing the ‘kube_data’ pillar and possibly changing the Salt top.sls file to make sure we match the correct hosts.  Let’s look at the ‘kube_data’ pillar first.  It’s located at ‘/srv/pillar/kube_data.sls’…

So this is a sample pillar file that I used to build the config in my initial lab.  You’ll notice that I have 4 minions and 1 master defined.  To change this to look like our new environment, I would update the file to look like this…
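Here’s a sketch of what the updated file could look like.  The key names, minion IP addresses, and bridge assignments shown here are illustrative, so match them to your own environment:

kube_master:
  k8stest1:
    ipaddress: 192.168.127.100

kube_minions:
  k8stest2:
    ipaddress: 192.168.127.101
    docker0: 172.10.10.1/27
  k8stest3:
    ipaddress: 192.168.127.102
    docker0: 172.10.10.33/27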

So not too hard right?  Let’s walk through what I changed.  I had said that I wanted to use k8stest1(192.168.127.100) as the master so I changed the relevant IP address under the ‘kube_master’ definition to match k8stest1.  I have 2 minions in this lab so I removed the other 2 definitions and updated the first 2 to match my new environment.  Notice that you define the minion name and then define its IP address and docker0 bridge as attributes.  If you had more minions you could define them in here as well.

Note: The minion names have to be exact and do NOT include the FQDN, just the host part of the name.

The last thing we have to look at is the ‘top.sls’ file for salt.  Let’s look at it to see what the default is…

Notice  that if you happen to have named your servers according to role, then you might not have to change anything.  If your host names happen to include ‘master’ and ‘minion’ in their name based on the role they’ll play in the Kubernetes cluster, then you’re already done.  If not, we’ll need to change this to match our lab.  Let’s update it to match our lab…
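Here’s a rough version of what mine ends up looking like.  Keep whatever state names the repo’s top.sls already references and just change the host globs to match your servers:

# the state names below (kube-master / kube-minion) are placeholders;
# keep the ones already referenced in the repo's top.sls
base:
  'k8stest1*':
    - kube-master
  'k8stest[2-3]*':
    - kube-minion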

Above you can see that I used glob matching and specified the hostnames we were using for the master and minion roles.  Pretty straightforward, just make sure that you match the right server to the right role based on the defaults.

Now all we have to do is let Salt take care of building the Kubernetes cluster for us…
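Same drill as before, from the Salt master:

salt '*' state.highstate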

Once Salt has finished executing, we should be able to check and see what the Kubernetes cluster is doing…

[screenshot: kubectl output showing the add-on pods being deployed]

Kubernetes is now busily working to deploy all of the pods.  The default build will provision SkyDNS, Heapster, and the fluentD/ElasticSearch logging combo.  After all of the pods are deployed, we can access those cluster add-ons at the following URLs…

Heapster

[screenshot: the Heapster monitoring front end]

FluentD/ElasticSearch

[screenshot: the FluentD/ElasticSearch logging front end]

So there you have it.  Pretty easy to do right?  You should have a fully operational Kubernetes cluster at this point.  Would love to have some other people try this out and give me feedback on it.  Let me know what you think!

