Kubernetes 101 – The constructs

In our last post, we got our base lab up and running.  In this post, I'd like to walk through the four main constructs that Kubernetes uses.  Kubernetes defines these constructs through configuration files which can be written in either YAML or JSON.  Let's walk through each construct so we can define it, show a possible configuration, and finally look at an example of how it works on our lab cluster.

Pods
Pods are the basic deployment unit in Kubernetes.  A pod consists of one or more containers.  Recall that Kubernetes is a container cluster management solution.  It concerns itself with workload placement, not individual container placement.  Kubernetes defines a pod as a group of 'closely related containers'.  Some people go as far as saying a pod is a single application.  I'm hesitant about that definition since it seems too broad.  I think what it really boils down to is grouping together containers that make sense as a unit.  From a network point of view, a pod has a single IP address.  All of the containers that run in a pod share that common network namespace.  This also means that the containers in a pod will always be deployed to the same host.

id: "webpod"
kind: "Pod"
apiVersion: "v1beta1"
desiredState: 
  manifest: 
    version: "v1beta1"
    id: "webpod"
    containers: 
      - name: "webpod80"
        image: "jonlangemak/docker:web_container_80"
        cpu: 100
        ports: 
          - containerPort: 80
            hostPort: 80
      - name: "webpod8080"
        image: "jonlangemak/docker:web_container_8080"
        cpu: 100
        ports: 
          - containerPort: 8080
            hostPort: 8080
labels: 
  name: "web"

As mentioned above, you can define the files in either JSON or YAML.  This one is defined in YAML since I'm sort of on a YAML kick at the moment.  Looking at the file, we can tell that this format isn't totally specific to pod definitions.  The first couple of lines define the name or ID of the pod, the type of object this is (a pod), and the API version it's written for.  The next section describes the desired state of the pod.  You can see that we're defining two containers in the pod, one called webpod80 and another called webpod8080.  We define the container image we want to use for each container as well as a few other container-level settings.  Lastly, we define a label for the pod.

So now that we have the file, what do we do with it?  If you noticed during the kubmasta build, we made a symlink for a binary called 'kubectl'.  Kubectl is the command-line tool for interacting with the Kubernetes cluster.  You saw us using it in the last post to query the state of the minions and run a container.  Now we'll use it to deploy a pod to the Kubernetes cluster.  First off, save your definition to a file called 'myfirstpod'.  Then, to load the definition into Kubernetes, use the following command…

kubectl create -f myfirstpod

The command tells Kubernetes to create something based on the file (-f) you specify.  If Kubernetes accepts the command, it should return the name of the pod like this…

[Screenshot: kubectl create returning the pod name 'webpod']
Now we can check the status of the pod with this command…

kubectl get pods

[Screenshot: 'kubectl get pods' output showing webpod with a status of Pending]
A status of Pending in most cases means that the Docker host is in the process of downloading the container images required to run the pod.  If we check again in a couple of minutes, we should see the pod listed as Running…

[Screenshot: 'kubectl get pods' output showing webpod with a status of Running]
So now that the pod is running, what do we really have?  Recall that all containers in a pod share a common IP address.  Also recall that we have two containers in this pod, one running Apache on port 80 and another running Apache on port 8080.  Looking at the output of the 'get pods' command, we can see that Kubernetes assigned the IP address of 10.10.10.4 to this pod.  So let's try browsing to that IP address from outside the cluster…
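If you'd rather test from the command line, the same two checks I'm about to do in a browser look like this with curl (assuming the machine you're testing from has a route to the pod network we built in the lab post)…

# Hit the webpod80 container
curl http://10.10.10.4:80
# Hit the webpod8080 container (same pod IP, different port)
curl http://10.10.10.4:8080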

[Screenshot: browser hitting 10.10.10.4 on port 80 and getting the web_container_80 page]
That looks like what we’d expect, now let’s try accessing the same IP on port 8080…

[Screenshot: browser hitting 10.10.10.4 on port 8080 and getting the web_container_8080 page]
Looks good!  So Kubernetes seems to be running as we’d expect at this point.  Let’s clean up the pod we deployed and move on to the next construct…

kubectl stop pod webpod
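If you want to confirm the cleanup worked, another 'get pods' should come back without webpod in the list…

kubectl get pods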

Services
Services initially confused me.  At first glance, I assumed they were a way to expose services running in the containers to the outside world.  Not the case.  Services allow pods running in the Kubernetes cluster to access services on other pods without contacting them directly through the pod IP.  Let's take a look at a basic service definition…

  id: "webfrontend"
  kind: "Service"
  apiVersion: "v1beta1"
  port: 80
  containerPort: 80
  selector: 
    name: "web"
  labels: 
    name: "webservice"

This definition is even more straightforward than what we saw for the pod.  Essentially, we're saying that we want to define a service that listens on port 80 and sends traffic to the backend port (containerPort) of 80.  The interesting part of this definition is the 'selector' section.  Here we specify a selector of 'name: web', which tells Kubernetes to send traffic for this service to any pod carrying that label, like the pod we defined earlier.  We also give the service itself a label.

Let’s define this service on Kubernetes using the same create command we used before…

kubectl create -f myfirstservice

Once again, we should get a response from kubectl telling us that the service creation was successful…

[Screenshot: kubectl create returning the service name 'webfrontend']
We can verify the service is defined with this command…

kubectl get services

[Screenshot: 'kubectl get services' output showing the webfrontend service alongside the built-in API services]
Note in the output above that the service we created is not the only one present.  Kubernetes uses service definitions for the API service as well.

So while I haven't finished defining or demonstrating services, bear with me for a moment while I put them on hold and talk about the third construct: replication controllers.

Replication Controllers
Replication controllers are sort of an enhanced version of a pod.  Replication controller objects are the means by which pods scale horizontally across the cluster.  When you define a replication controller, you specify how many copies (replicas) of the pod you want in the cluster.  Kubernetes then takes responsibility for the object and ensures that the number of copies you asked for is always present in the cluster.  If the cluster loses a host that had a replicated pod on it, Kubernetes will attempt to start that pod on another host to bring the replication controller back to its desired state.  The same thing happens in reverse: if too many copies of the pod are running, Kubernetes will shut down the extras.  So let's look at a quick definition of a replication controller that I'm calling 'myfirstcontroller'…

id: web-controller
apiVersion: v1beta1
kind: ReplicationController
desiredState:
  replicas: 2
  replicaSelector:
    name: web
  podTemplate:
    desiredState:
      manifest:
        version: v1beta1
        id: webpod
        containers:
          - name: webpod
            image: jonlangemak/docker:web_container_80
            ports:
              - containerPort: 80
                hostPort: 80
    labels:
      name: web

The definition looks very similar to the pod definition.  Let's also create a second replication controller definition file called 'mysecondcontroller' that's almost identical to the one above, with just a different name, a different label/selector, and a different container image…

id: web-controller-2
apiVersion: v1beta1
kind: ReplicationController
desiredState:
  replicas: 2
  replicaSelector:
    name: web8080
  podTemplate:
    desiredState:
      manifest:
        version: v1beta1
        id: webpod
        containers:
          - name: webpod
            image: jonlangemak/docker:web_container_8080
            ports:
              - containerPort: 8080
                hostPort: 8080
    labels:
      name: web8080

Assuming you’ve saved both definitions locally you can use these commands to deploy the replication controllers…

kubectl create -f myfirstcontroller
kubectl create -f mysecondcontroller

So now that we have two replication controllers defined, let's look at the pods and the replication controllers from Kubernetes' point of view…
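For reference, the screenshot below is nothing more than the output of the two 'get' commands (depending on your kubectl build, the shorthand 'rc' works for the second one too)…

kubectl get pods
kubectl get replicationcontrollers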

[Screenshot: pod and replication controller listing showing 4 pods, one still Pending]
As you can see, we now have 4 pods, though one of them is still Pending, meaning that host kubminion4 is still working on downloading the container image.  Kubernetes knows that it's tracking the state of 2 replication controllers and that each controller should maintain 2 pods.  Let's edit our file 'myfirstcontroller' and change the number of replicas from 2 to 3.  After editing the file, run this command…

kubectl update -f myfirstcontroller

Now let’s check the status again…

[Screenshot: 'kubectl get pods' output showing 3 replicas for web-controller]
After the update, Kubernetes brought the number of replicas up to 3, and we can see this reflected in the list of pods.

We can also directly update the controller replica size by using the command below rather than updating the config file…

kubectl resize --current-replicas=2 --replicas=3 rc web-controller

So now we should have 3 pods managed by the web-controller replication controller and 2 pods managed by web-controller-2.  Now let's test how the replication controller keeps track of desired state.  Recall the pod definition we used above?  It had two containers in the pod, one running the web_container_80 image and another running web_container_8080.  Let's try loading that pod into Kubernetes and see what happens…
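For the record, the only commands involved in the next screenshot are redeploying the standalone pod and then checking the pod list a couple of times…

kubectl create -f myfirstpod
kubectl get pods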

[Screenshot: 'kubectl get pods' run right after creating the pod (blue box) and again seconds later (red box)]
The first thing we do is deploy the pod.  The output in the blue box shows the 'get pods' command run directly after deploying the pod.  The output in the red box is the same command run seconds later.

We can see in the blue box that we have all of the replication controllers' replicas (5 in total) in addition to the pod we just created, which gives us a total of 6 pods.  The red box shows a total of 5 pods.  So what happened?

When the replication controller for myfirstcontroller saw that there were 4 pods running with the label 'web', it stopped one of them to bring itself back to the desired state of 3 replicas.  This is an interesting example for a couple of reasons.  First off, the pod we loaded was not the same as the pods the replication controller generates; the only similarity was the label used to identify the pods.  This shows that the replication controller only looks at the pod label, not the pod contents.  Secondly, it shows that the pod label has little or nothing to do with the containers inside of the pod.  Our replication controllers are monitoring pods with labels of web and web8080.  Those pods happen to run the same container images used in the standalone pod definition, but the replication controller doesn't care what we're running in the pod, it only cares about the pod itself.  Maybe a graphic will help convey this point, since I feel like I'm not doing a good job of describing it with words…

So this is our state prior to adding the pod…

[Diagram: cluster state before adding the standalone pod]

Our rules are listed at the top and the diagram shows an accurate representation of the desired replication controller rules.  Notice how each pod runs a single instance of a container, either web_container_80 or web_container_8080.  Now let’s see what things look like immediately after we add the standalone pod…

[Diagram: cluster state immediately after adding the standalone pod]
The replication controller is now out of state.  There are 4 pods labeled web rather than the desired 3.  To get back to the desired state, it has to stop a pod with the label of ‘web’.  It does this and our cluster now looks like this…

[Diagram: cluster state after the replication controller stops one 'web' pod]
So now we're back in state with the desired configuration of the replication controller.  What I wanted to prove with all of this is that Kubernetes doesn't care that we now have a total of 6 containers running instead of 5.  It only cares about the pods themselves.  Would you deploy replication controllers and pods like this in real life?  Likely not, but this example does prove my point.

The last interesting point is something we saw in the output of the last 'kubectl get pods' command.  It caught me off guard, so let's do a little bit of live troubleshooting and see where it takes us.  Notice that the status of our manually deployed pod shows as 'Pending'…

[Screenshot: 'kubectl get pods' showing the manually deployed pod stuck in Pending with no host assigned]
This is a normal status if the node is in the process of downloading the container image, but if it persists, something else is wrong.  Notice that Kubernetes hasn't yet assigned a host to the pod.  That being the case, we can safely assume the problem is with the Kubernetes master rather than with a host (minion) itself.  So let's dig into this a little bit.  The first thing I'll do is check the logs of the Kubernetes scheduler service…

Note: Again, bear with me on some of the commands related to systemd, we’ll cover them in much greater detail in a later post dedicated to systemd.
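For those following along, I'm just tailing the scheduler's journal.  A command along these lines should work, assuming your systemd unit is named something like kubernetes-scheduler (adjust it to whatever you called the unit in your build)…

# Follow the Kubernetes scheduler journal
journalctl -u kubernetes-scheduler -f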

[Screenshot: repeating error in the Kubernetes scheduler journal]

The journal for the Kubernetes scheduler service shows this repeating over and over again.  What does this mean?  I suspect there's some kind of conflict that prevents Kubernetes from scheduling the pod.  The only overlapping thing I can think of is the port, but since each pod gets its own IP, I'm not sure how there could be a conflict.  Let's log into one of the minions and see what's going on.  Let's check and see what's running in Docker…
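On the minion this is nothing more than a 'docker ps'; if you want to zero in on published host ports, a grep along these lines narrows it down to the two ports we care about here…

# Show running containers and pick out anything bound to host port 80 or 8080
docker ps | grep -E '0\.0\.0\.0:(80|8080)'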

[Screenshot: 'docker ps' on a minion showing a container with host port 80 mapped to the container]
So this is interesting: notice how one of the containers has port 80 on the host mapped to port 80 in the container.  This seems odd.  The only ports in use should be the ones on the pods, not on the host itself.  After some digging around on Google Groups, I figured out that I should NOT have been using the 'hostPort' configuration item in my pod and replication controller configurations.  Since I used 'hostPort', there's a port mapping conflict: each of the 4 minions already has a pod bound to host port 80 or 8080.  I went back, took the hostPort mappings out of the replication controller configs, updated them, and then checked the status again.  Still no luck.  I ended up having to delete the replication controllers and reload them.  After I did that, I tried creating the webpod again and had success this time…

[Screenshot: 'kubectl get pods' showing webpod scheduled to a host and running]
So lesson learned: don't use hostPort when defining pods.  At this point, I can hit any of the web containers from outside the cluster on their associated pod IP addresses.  This means that I can browse to…

10.10.30.9 on port 8080
10.10.30.10 on port 8080
10.10.40.4 on port 80
10.10.10.8 on port 80
10.10.30.11 on port 80 and 8080

But is that what I really want to do?  Access services in pods directly?  What if pods want to access services on other pods and not worry about how many pods are running and their associated IP addresses?  Let’s go back to services so we can see…

Services continued
When we looked at services before, we defined a service called ‘webfrontend’.  Let’s check the cluster and make sure it’s still there…

[Screenshot: 'kubectl get services' showing the webfrontend service with IP 10.100.143.163]
Perfect.  So what does this get us?  As I mentioned above, services allow pods to access services hosted on other pods, but in an abstracted sort of way.  Notice how the selector for our service is 'web' and the IP address assigned to it is 10.100.143.163.  By this point, I'm guessing you know what the selector is for, but what about the IP address?  If you recall, during the Kubernetes build post we set a range of IPs on the API service that was referred to as the 'portal net'.  Any service you define gets an IP address out of this range.  So where do we route the portal net to on the physical network?  Nowhere.  The portal net is only locally significant to each Kubernetes host.

Note: I’m going to dive WAY deeper into the mechanics behind Kubernetes networking in my next post.  The goal of this post is to understand the concepts.

To see what I’m talking about, play along with me here and make the following changes…

-Log into each Kubernetes minion that's hosting a 'web' container.  In my case, that would be kubminion1, 3, and 4.
-Use the 'docker exec' command to log into each instance of the container running the web_container_80 image and slightly modify the index.html file located in '/var/www/html'.  In my case, I'll add the name of the minion to the front of the text.  To get the container ID to use in the docker exec command, run 'docker ps' and copy the container ID.  Then exec into the container with 'docker exec -it <container ID> /bin/bash' (see the sketch after this list).
-Save the file and exit the container without killing it (just type exit when you're done)
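Here's a rough sketch of those steps on one minion.  The container ID is a placeholder, the minion name is just my example, and sed is only one of several ways to prepend the text…

# Find the web_container_80 container and shell into it
docker ps | grep web_container_80
docker exec -it <container ID> /bin/bash
# Inside the container, prepend the minion name to the index page
sed -i 's/^/kubminion1 - /' /var/www/html/index.html
exit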

So what we've done is make each pod uniquely identifiable.  This defeats the purpose of services, since each pod is supposed to be an identical replica, but bear with me since it will help explain how services work.

On the last host you make the change on, stay connected to the container.  Let’s take a look at the environment variables accessible in the container…
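A quick way to pick those out is to grep the environment for the service name.  You should see variables along these lines, though the exact set varies by Kubernetes version…

env | grep -i webfrontend
# Expect entries such as WEBFRONTEND_SERVICE_HOST / WEBFRONTEND_SERVICE_PORT
# and/or Docker-link style variables like WEBFRONTEND_PORT_80_TCP_ADDR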

[Screenshot: environment variables inside the container referencing the webfrontend service]
Interesting, so we have some environment variables preset for us that reference the service we created.  This is very similar to the functionality we saw earlier when we talked about container linking with native Docker.  So what happens when we try to connect to that service from the host?  Let's run a couple of curl commands to the service IP and see what we get…
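The loop behind the next screenshot is roughly this, run from somewhere that can reach the portal network (one of the minions or a container on it), with the grep pattern matched to whatever text your index pages contain…

# Hit the webfrontend service IP a handful of times and watch the responses rotate
for i in $(seq 1 6); do
  curl -s http://10.100.143.163:80 | grep -i kub
done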

[Screenshot: repeated curl requests to the service IP returning responses from different pods]
So what we just saw is that the service acts as a sort of load balancer for the pods it's attached to.  Every time we connect to the service IP and port, we get load balanced to a different pod associated with that service.  Again, the idea is for all the pods attached to the service to provide the same function; I just changed the index files so you could see the load balancing first hand.  The selector we defined in the service is what tells Kubernetes which pods to include in the service.  So let's do one last test.  Update the number of replicas in the 'myfirstcontroller' replication controller definition from 3 to 4 (remember, replicas are a replication controller setting, not a service setting).  Change your grep in the curl command from 'kub' to something else in the string, like 'running', and continually run the curl command again.  It shouldn't take long before you see a 4th server responding on the service IP address…

[Screenshot: curl output showing a fourth pod responding on the service IP]
So our 4th unedited replica came online and automatically got inserted into the service load balancing.  This is where the power of our next construct really comes into play…

Labels
Kubernetes uses labels to mark items as being part of a group.  A label is really nothing more than a tag assigned to an object.  When we create pods, services, or replication controllers, we have the option to set labels.  Some objects consume labels, as we just saw in the service example above.  Others, like the replication controller, ensure that all the pods they create carry the same label.  Objects can have multiple labels.  For instance, you might have a set of labels on a replication controller that indicate its pods are production and also part of the web tier.  Services use labels as selectors to know which pods they should send traffic to.  Labels make it very easy to select groups of objects from within the cluster, which makes them a fairly powerful piece of the Kubernetes architecture.
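As a quick illustration of how handy this is, kubectl can filter on labels directly.  Depending on your build, a selector flag along these lines lets you pull back just the pods in a given group…

# List only the pods labeled name=web
kubectl get pods -l name=web
# Or only the 8080 group
kubectl get pods -l name=web8080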

What’s next?
In the next post, we’ll dig deep into how Kubernetes does all of this networking magic as well as talk about how to expose services running in the Kubernetes cluster to hosts outside of the cluster.

6 thoughts on “Kubernetes 101 – The constructs”

  1. Bhargav

In the example you have considered, services run on port 80.  Assuming a service runs on a different port, say X, will a curl to 10.11.143.163 land on the correct port number?  Basically, if that's the case, the application would need logic to learn the port number in addition to the hostname.

  2. david

    the example command:

    kubectl create –f myfirstcontroller
    kubectl create –f mysecondcontroller

    has a non – ascii character for the dash.

    kubectl create -f myfirstcontroller

  3. Shrikar

    Thanks for the detailed posts do you have a link for the next post about exposing services running in kubernetes cluster to hosts outside the cluster?

  4. docker nerd

    Great article but a poor title
    In German lichten is plural and one should
    say Die Lichten and not Das as the heading says

    Sorry I call a mistake out

