Kubernetes Authentication plugins and kubeconfig

Kubernetes offers several different authentication mechanisms or plugins.  The goal of this post is to review each of them and provide a brief example of how they work.  In addition, we’ll talk about the ‘kubeconfig’ file and how it’s used in association with authentication plugins.

Note: In theory there’s no requirement to use any of these authentication plugins. With the proper configuration, the API server can accept requests over HTTP on any insecure port you like. However, doing so is insecure and somewhat limiting since some features of Kubernetes rely on authentication, so it’s recommended to use one or more of the following plugins.

Kubernetes offers 3 default authentication plugins as of version 1.0.  These plugins are used to authenticate requests against the API server.  Since they’re used for communication to the API, that means that they apply to both the Kubelet and Kube-Proxy running on your server nodes as well as any requests or commands you issue through the kubectl CLI tool.  Let’s take a look at each option…

Client Certificate Authentication
This is the most common method of authentication and is widely used to authenticate nodes back to the master.  This configuration relies on the client presenting a valid certificate to the API server, which is configured with a CA certificate to validate against.  The most common method for generating these certificates is the ‘make-ca-cert’ shell script from the Kubernetes GitHub repo located here…

https://github.com/kubernetes/kubernetes/blob/master/cluster/saltbase/salt/generate-cert/make-ca-cert.sh

To use this I run something that looks like this…

#Script relies on the 'kube-cert' group existing
groupadd -r kube-cert

#Download the script and set permissions
wget https://raw.githubusercontent.com/GoogleCloudPlatform/kubernetes/v0.21.1/cluster/saltbase/salt/generate-cert/make-ca-cert.sh
chmod 775 make-ca-cert.sh

#Run the script passing in your relevant info
./make-ca-cert.sh <master-ip> IP:<master-ip>,IP:10.0.0.1,DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local

In my case, running the script looks like this…

./make-ca-cert.sh 192.168.127.100 IP:192.168.127.100,IP:10.0.0.1,DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local

After running the script, head on over to the ‘/srv/kubernetes’ directory and you should see all of the certs required…

[Image: directory listing of /srv/kubernetes showing the generated certificates and keys]
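
If you want to sanity check the output, openssl can show you the SANs that were baked into the server certificate and confirm it chains to the generated CA (paths below assume the default /srv/kubernetes output location)…

#Show the subject alternative names on the server certificate
openssl x509 -in /srv/kubernetes/server.cert -noout -text | grep -A1 "Subject Alternative Name"

#Verify the server certificate against the generated CA
openssl verify -CAfile /srv/kubernetes/ca.crt /srv/kubernetes/server.cert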

These will be the certificates we use on the API server and on any remote client (Kubelet or kubectl) that needs to authenticate against the API server.  To tell the API server to use certificate authentication, we need to pass the process (or hyperkube container in my case) these options…

--client-ca-file=/etc/kubernetes/ssl/ca.crt
--tls-cert-file=/etc/kubernetes/ssl/server.cert
--tls-private-key-file=/etc/kubernetes/ssl/server.key
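
On the client side, the matching certs can be consumed directly from the CLI.  As a quick sketch (assuming you’ve copied ca.crt, kubecfg.crt, and kubecfg.key down to the client under /etc/kubernetes/ssl), a kubectl call using certificate authentication would look something like this…

#Talk to the secure port using client certificate authentication
kubectl --server=https://192.168.127.100:6443 \
  --certificate-authority=/etc/kubernetes/ssl/ca.crt \
  --client-certificate=/etc/kubernetes/ssl/kubecfg.crt \
  --client-key=/etc/kubernetes/ssl/kubecfg.key \
  get nodes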

Note: Since I run the API server using the hyperkube container image, I also need to make sure that the correct volumes are mounted to this container so it can consume these certificates.
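
As a rough sketch only (the image tag and the rest of the API server flags depend on your build), the relevant pieces of that hyperkube invocation look something like this…

#Sketch - mount the host cert directory into the hyperkube API server container
#(image tag and the remaining API server flags are specific to your build)
docker run -d --net=host \
  -v /etc/kubernetes/ssl:/etc/kubernetes/ssl:ro \
  gcr.io/google_containers/hyperkube:v1.0.6 \
  /hyperkube apiserver \
  --client-ca-file=/etc/kubernetes/ssl/ca.crt \
  --tls-cert-file=/etc/kubernetes/ssl/server.cert \
  --tls-private-key-file=/etc/kubernetes/ssl/server.key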

HTTP Basic Authentication
Another option for authentication is HTTP basic authentication.  In this mode, you provide the API server a CSV file containing the account information you wish for it to use.  In its current implementation these credentials last forever and cannot be modified without restarting the API server instance.  This mode is really intended for convenience during testing.  An example CSV file would look something like this…

#Format
[password],[username],[user ID]

#Example
PasswordOfTheJon,jlangemak,1

Telling the API server to use HTTP basic authentication is as simple as passing this single flag to the API server…

--basic-auth-file=/etc/kubernetes/basicauth.csv
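
With that flag in place, any client that can send a basic auth header can hit the secure port.  A quick test with curl (the -k just skips server cert validation for the sake of the example) looks like this…

#Query the API using HTTP basic authentication
curl -k -u jlangemak:PasswordOfTheJon https://192.168.127.100:6443/api/v1/nodes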

Token Authentication
The last option for authentication is to use tokens.  Much like the basic authentication option, these tokens are provided to the API server in a CSV file.  The same limitations apply with regard to them being valid forever and requiring a restart of the API server to load new tokens.  These types of tokens are referred to as ‘bearer tokens’ and allow requests to be authenticated by passing a token rather than a standard username/password combination.  An example CSV token file looks like this…

#Format
[token],[username],[user ID]

#Example
TokenOfTheJon,jlangemak,1

Token authentication is enabled on the API server by passing this single flag to the API server…

--token-auth-file=/etc/kubernetes/tokenauth.csv
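
Bearer tokens get passed in the HTTP Authorization header, so the equivalent curl test against the secure port looks like this…

#Query the API using a bearer token
curl -k -H "Authorization: Bearer TokenOfTheJon" https://192.168.127.100:6443/api/v1/nodes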

Consuming the authentication plugins
Now that we’ve covered the different configuration options on the master, we need to know how to consume these plugins from a client perspective.  From the node (minion) side of things, both the Kubelet and Kube-Proxy services need to be able to talk to the API server.  From a management perspective, kubectl also needs to talk to the API server.  Luckily for us, Kubernetes has the ‘kubeconfig’ construct that can be used by both the node services and the command line tools.  Let’s take a quick look at a sample kubeconfig file…

apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    certificate-authority: /etc/kubernetes/ssl/ca.crt
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ssl/kubecfg.crt
    client-key: /etc/kubernetes/ssl/kubecfg.key
contexts:
- context:
    cluster: local
    user: kubelet
  name: kubelet-context
current-context: kubelet-context

Here’s the kubeconfig I use in my SaltStack Kubernetes build for authentication on the nodes.  Let’s break this down a little bit…

[Image: the kubeconfig above with the current-context, context, cluster, and user references highlighted]

It’s easiest in my mind to look at this from the bottom up.  The ‘current-context’ is what specifies the context we’re using; in this case it’s ‘kubelet-context’.  Under contexts we have a matching ‘kubelet-context’ that specifies a cluster (‘local’) and a user (‘kubelet’).  Both of those have matching definitions under the users and clusters sections of the file.  So what we really end up with here is something like this…

[Image: diagram showing how kubelet-context resolves to the ‘local’ cluster and ‘kubelet’ user]

So let’s make this a little more interesting and define some more options…

apiVersion: v1
kind: Config
clusters:
- name: cluster-ssl
  cluster:
    certificate-authority: /etc/kubernetes/ssl/ca.crt
    api-version: v1
    server: https://192.168.127.100:6443
- name: cluster-nossl
  cluster:
    api-version: v1
    server: http://k8stest1:8080
- name: cluster-sslskip
  cluster:
    api-version: v1
    server: https://k8stest1:6443
    insecure-skip-tls-verify: true
users:
- name: user-ssl
  user:
    client-certificate: /etc/kubernetes/ssl/kubecfg.crt
    client-key: /etc/kubernetes/ssl/kubecfg.key
- name: user-token
  user:
    token: TokenOfTheJ0n
- name: user-basicauth
  user:
    username: jlangemak
    password: PasswordOfTheJon
contexts:
- context:
    cluster: cluster-ssl
    user: user-ssl
  name: context-certauth
- context:
    cluster: cluster-nossl
    user: user-token
  name: context-tokenauth
- context:
    cluster: cluster-sslskip
    user: user-basicauth
  name: context-basicauth
current-context: context-basicauth

Now let’s look at that with the color coding again so we can see what’s associated with what more easily…

[Image: the kubeconfig above with each context’s cluster and user references color coded]

This file defines 3 different authentication contexts. 

Context-certauth uses certificates for authentication and accesses the master through the secure URL of https://192.168.127.100:6443.

Context-tokenauth uses a token for authentication and accesses the master through the insecure URL of http://k8stest1:8080.

Context-basicauth uses basic authentication (username/password) and accesses the master through the secure URL of https://k8stest1:6443.

You likely noticed that I have two different clusters defined that both use HTTPS (cluster-ssl and cluster-sslskip).  The difference between the two is solely the certificates being used.  In the case of cluster-ssl I need to use the IP address in the URL since the cert was generated with the IP rather than the name.  In the case of cluster-sslskip, I use the DNS name but tell the system to ignore certificate warnings since I may or may not have certs that allow a proper TLS handshake.
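
One other note before we try this out: you don’t have to hand-edit these files.  kubectl has a ‘config’ subcommand that writes the same structures for you.  A rough equivalent of the basic auth context above (same names and addresses) would look something like this…

#Build the basic auth context without hand editing kubeconfig
kubectl config set-cluster cluster-sslskip --server=https://k8stest1:6443 --insecure-skip-tls-verify=true
kubectl config set-credentials user-basicauth --username=jlangemak --password=PasswordOfTheJon
kubectl config set-context context-basicauth --cluster=cluster-sslskip --user=user-basicauth
kubectl config use-context context-basicauth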

So let’s see this in action.  Let’s move to a new workstation that has never talked to my lab Kubernetes cluster.  Let’s download kubectl and try to talk to the cluster…

[Image: kubectl failing to connect to the default API server address of localhost:8080]

So we can see that by default kubectl attempts to connect to an API server that’s running locally on HTTP over port 8080.  This is why in all of our previous examples kubectl has just worked since we’ve always run it on the master.  So while we can pass kubectl a lot of flags on the CLI, that’s not terribly useful. Rather, we can define the kubeconfig file shown above locally and then use it for connectivity information.  By default, kubectl will look in the path ‘~/.kube/config’ for a config file so let’s create it there and try again…

[Image: kubectl successfully returning resources using the kubeconfig file at ~/.kube/config]

Awesome!  It works!  Note that our file above lists a ‘current-context’.  Since we didn’t tell kubectl what context to use, the current-context from kubeconfig is used.  So let’s remove that line and then try again…

[Image: kubectl run with --context=context-basicauth after removing current-context from the file]

Here we can see that we can pass kubectl a ‘context’ through the CLI.  In this case, we use the basic auth context, but we can use any of the other ones as well…

[Image: kubectl run with the other contexts, with context-certauth complaining about missing client certs]

We can tell it’s using different contexts because it complains about not having the certs when attempting to do certificate authentication.  This can be remedied by placing the certs on this machine locally.
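
A minimal sketch of that fix (assuming the certs still live under /srv/kubernetes on the master and that you have SSH access to it) would be something like this…

#Copy the client certs down from the master
mkdir -p /etc/kubernetes/ssl
scp root@192.168.127.100:/srv/kubernetes/ca.crt /etc/kubernetes/ssl/
scp root@192.168.127.100:/srv/kubernetes/kubecfg.crt /etc/kubernetes/ssl/
scp root@192.168.127.100:/srv/kubernetes/kubecfg.key /etc/kubernetes/ssl/

#Now the certificate context should work too
kubectl --context=context-certauth get nodes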

kubectl vs Kube-Proxy and Kubelet
The previous example shows how to use kubeconfig with the kubectl CLI tool.  The same kubeconfig file is also used by the Kubelet and Kube-Proxy services when defining authentication for talking to the API server.  However, in that case it appears to be used only for authentication.  In other words, you still need to pass the API server address to the service directly through the ‘master’ or ‘api_servers’ flag.  Based on my testing, while you can define the server in kubeconfig on the nodes, that information is not used when the Kube-Proxy and Kubelet processes attempt to talk to the API server.  The bottom line is that for the node services the kubeconfig file is only used to define authentication parameters.  It is not used to define the API server as it is when using kubectl.
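
As a sketch (flag names below match the 1.0-era binaries and the kubeconfig path is just an example), the node services end up being started with something like this…

#Kubelet - the API server address comes from its own flag; kubeconfig only supplies credentials
kubelet --api-servers=https://192.168.127.100:6443 \
  --kubeconfig=/etc/kubernetes/kubeconfig

#Kube-Proxy - same idea, the master address is passed directly
kube-proxy --master=https://192.168.127.100:6443 \
  --kubeconfig=/etc/kubernetes/kubeconfig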

SSL Transport requirement
I want to point out that the authentication plugins only work when you’re talking to the API server over HTTPS transport.  If you were watching closely, you might have noticed that I had a typo in the above configuration.  My token was defined as ‘TokenOfTheJon’ but in the kubeconfig it was configured as ‘TokenOfTheJ0n’ with a zero instead of the letter ‘o’.  You’ll also notice that when I used the ‘tokenauth’ context the request did not fail.  The only reason this worked was because that particular context was accessing the API through its insecure port of 8080 over HTTP.  From the Kubernetes documentation here…

“Localhost Port – serves HTTP – default is port 8080, change with --insecure-port flag. – defaults IP is localhost, change with --insecure-bind-address flag. – no authentication or authorization checks in HTTP – protected by need to have host access”

My above example worked because my API server is using an insecure bind address of 0.0.0.0, which means anyone can reach the API on the insecure port without authentication.  That’s certainly not a great idea and I only have it on in my lab for testing and troubleshooting.  Not passing authentication across HTTP does save you from accidentally transmitting tokens or credentials in clear text.  However, you likely shouldn’t have your API server answering requests on 8080 for anything besides localhost to start with.
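
You can see the difference for yourself with a couple of curl requests (the bogus token here is just made up to show it’s never even evaluated on the insecure port)…

#The insecure port answers with no authentication at all
curl http://k8stest1:8080/api/v1/nodes

#A bogus bearer token still "works" here because it's never checked over HTTP
curl -H "Authorization: Bearer NotARealToken" http://k8stest1:8080/api/v1/nodes

#The same bogus token against the secure port gets rejected
curl -k -H "Authorization: Bearer NotARealToken" https://k8stest1:6443/api/v1/nodes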

I hope you see the value and uses of kubeconfig files.  Used appropriately they can certainly make your life easier.  In the next post we’ll talk more about tokens as we discuss Kubernetes secrets and service accounts.

3 thoughts on “Kubernetes Authentication plugins and kubeconfig”

  1. Kobe

    Good stuff Jon.

    Regarding ‘#Run the script passing in your relevant info’:

    Does IP:10.0.0.1 mean the CIDR range or the kubernetes service cluster IP?

    I’ve used these steps on my master node
    1. Generate a signing key:
    openssl genrsa -out /tmp/serviceaccount.key 2048
    2. Update /etc/kubernetes/apiserver:
    KUBE_API_ARGS="--service_account_key_file=/tmp/serviceaccount.key"
    3. Update /etc/kubernetes/controller-manager:
    KUBE_CONTROLLER_MANAGER_ARGS="--service_account_private_key_file=/tmp/serviceaccount.key"

    Everything was working fine until I decided to explore skyDNS. Now I’m getting this:

    Falling back to default configuration, could not read from etcd: 501: All the given peers are not reachable (failed to propose on members [http://127.0.0.1:4001] twice [last error: Get http://127.0.0.1:4001/v2/keys/skydns/config?quorum=false&recursive=false&sorted=false: dial tcp 127.0.0.1:4001: connection refused]) [0]
    skydns: ready for queries on k8s.local. for tcp://0.0.0.0:53 [rcache 0]
    skydns: ready for queries on k8s.local. for udp://0.0.0.0:53 [rcache 0]
    skydns: failure to forward request “read udp 8.8.8.8:53: i/o timeout”

    Thanks
    Sam

  2. James

    Hi Das,

    I’m having an issue authenticating to my new Kubernetes server, version 1.11.0, from the Jenkins plugin.
    Please can you help?

    Below is the error I’m getting:

    Error testing connection https://10.13.52.93:6443: Failure executing: GET at: https://10.13.52.93:6443/api/v1/namespaces/defaults/pods. Message: Unauthorized. Received status: Status(apiVersion=v1, code=401, details=null, kind=Status, message=Unauthorized, metadata=ListMeta(resourceVersion=null, selfLink=null, additionalProperties={}), reason=Unauthorized, status=Failure, additionalProperties={})

    my jenkins version is: 2.12.0
    my k8s version is: 1.11.0

    Jenkins is running on a standalone server and is trying to connect to the Kubernetes cluster.

    Kind Regards,
    James

