Logging in Kubernetes with Fluentd and Elasticsearch

In previous posts, we talked about running skyDNS and Heapster on your Kubernetes cluster. In this post, I want to talk about the last of the cluster ‘addons’ available today in the Kubernetes repository. This add-on is a combination of Fluentd, Elasticsearch, and Kibana that makes a pretty powerful log aggregation system on top of your Kubernetes cluster. One of the major struggles with any large deployment is logging. Having a central place to aggregate logs makes troubleshooting and analysis considerably easier. That being said, let’s jump right into the configuration.

Note: I have an open PR on this add-on to make it a little more flexible from a configuration perspective. Namely, I want to be able to specify the port and protocol used by the API server to access the backend service when using the API server as a service proxy. That being said, some of my pod/controller definitions will be different from what you see on GitHub. I’ll point out the differences below when we come across them.

Update: The PR got merged! We still need to slightly tweak the controller definition, but other than that it should be identical to what you see out on the official repo. More info on the tweaks below when we talk about the controller definition; see ‘Note v2:’.

The first step is to have the Kubernetes nodes collect the logs. This is done with a local Fluentd container running on each node. To get this to run on each host, we’ll use a local manifest on each node to tell it to run the container. If you’ve been reading my other posts, you’ll recall that in the initial cAdvisor deployment we used a manifest to deploy that local container before it got integrated into the kubelet. If you still have the kubelet service configured for manifests, you’re all set and can skip to the next part where we define the manifest. If you don’t and are starting from scratch, follow the directions below…

Note: If you’re going off of my SaltStack repo for my lab build, this entire build is already integrated for you.

The first thing we need to do is to tell the kubelet process to look for manifests. Manifests are almost identical to pod definitions, and they define a container you want the kubelet process to run when it starts. Sort of like a locally significant pod that the kubelet keeps an eye on and ensures is always running. So let’s check out our kubelet systemd unit file…
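A sketch of what that unit file looks like in my lab. The exact binary path and flags vary by distro and Kubernetes version, and the API server address here is a placeholder; the important piece is the ‘--config’ flag:

```ini
# /usr/lib/systemd/system/kubelet.service (sketch; adjust paths/flags to your build)
[Unit]
Description=Kubernetes Kubelet
After=docker.service

[Service]
ExecStart=/usr/bin/kubelet \
    --address=0.0.0.0 \
    --port=10250 \
    --api_servers=http://127.0.0.1:8080 \
    --config=/etc/kubernetes/manifests
Restart=on-failure

[Install]
WantedBy=multi-user.target
```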

Notice the line in the service config that defines a ‘config’ directory.  That tells the kubelet process to look in ‘/etc/kubernetes/manifests’ for any manifest definition files.  If you don’t have this line in your service definition, you’ll need to add it.  So let’s look at the manifest definition we’ll be using for Fluentd…
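A sketch of that Fluentd manifest, dropped into ‘/etc/kubernetes/manifests’ on each node. The apiVersion and image tag are assumptions; match them to the addon version you’re deploying:

```yaml
# /etc/kubernetes/manifests/fluentd-es.yaml (sketch)
apiVersion: v1
kind: Pod
metadata:
  name: fluentd-elasticsearch
spec:
  containers:
  - name: fluentd-elasticsearch
    image: gcr.io/google_containers/fluentd-elasticsearch:1.3  # tag is a guess
    volumeMounts:
    # Fluentd needs to see the host's logs, so mount them in
    - name: varlog
      mountPath: /var/log
    - name: containers
      mountPath: /var/lib/docker/containers
      readOnly: true
  volumes:
  - name: varlog
    hostPath:
      path: /var/log
  - name: containers
    hostPath:
      path: /var/lib/docker/containers
```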

This should look very similar to you since it’s essentially a pod definition we would deploy into the Kubernetes cluster. Once you have the manifest in place and the kubelet service updated, restart the kubelet so it picks up the new config. Once it has, you should see the individual hosts download and start the container defined in the manifest…
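On a systemd host, the restart and check look something like this (run on each node):

```shell
# Pick up the updated unit file and the new manifest directory
sudo systemctl daemon-reload
sudo systemctl restart kubelet

# Shortly after, the kubelet should have pulled and started the Fluentd container
docker ps | grep fluentd
```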

Good, so now what exactly does this container do?  If we look at the code out on GitHub, we can see that the Fluentd config is grabbing logs from a couple of places…
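An abridged sketch of that Fluentd config, reconstructed from memory of the addon (directive details may differ from the current file; the tag and pos_file names match what readers ask about in the comments below):

```
# Tail every Docker container's JSON log on this host
<source>
  type tail
  format json
  path /var/lib/docker/containers/*/*-json.log
  pos_file /var/log/es-containers.log.pos
  tag docker.*
</source>

# Ship everything tagged docker.** to Elasticsearch via its service name
<match docker.**>
  type elasticsearch
  host elasticsearch-logging.default
  port 9200
  logstash_format true
</match>

# Tail the kubelet's own log file
<source>
  type tail
  path /var/log/kubelet.log
  pos_file /var/log/es-kubelet.log.pos
  tag kubelet
</source>

<match kubelet>
  type elasticsearch
  host elasticsearch-logging.default
  port 9200
  logstash_format true
</match>
```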

The first two sections tell Fluentd to grab the container logs. We do this by wildcarding all of the logs in ‘/var/lib/docker/containers/*’. The last two sections grab the kubelet log file off of each local host. So we’re grabbing all of these logs and sending them to ‘elasticsearch-logging.default’. So what is that host? A Kubernetes service, of course. Let’s define that service…
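A sketch of that service. The apiVersion and the ‘name: elasticsearch-logging’ label convention are assumptions; adjust them to match your cluster and the addon’s labels. Running it in the default namespace is what makes the ‘elasticsearch-logging.default’ DNS name work:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging
  namespace: default
  labels:
    name: elasticsearch-logging
spec:
  selector:
    name: elasticsearch-logging
  ports:
  - port: 9200        # Elasticsearch HTTP API
    targetPort: 9200
```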

So this service will get registered in DNS (requires the skyDNS add-on), and that’s how Fluentd will reach Elasticsearch. This of course means that we need to define the pod that will match the given service selector. Let’s do that now…
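A sketch of the Elasticsearch pod. The image is the one readers reference in the comments below; the apiVersion and label layout are assumptions to be matched against your cluster:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: elasticsearch-logging
  labels:
    name: elasticsearch-logging  # must match the service selector above
spec:
  containers:
  - name: elasticsearch-logging
    image: gcr.io/google_containers/elasticsearch:1.8
    ports:
    - containerPort: 9200  # HTTP API
    - containerPort: 9300  # transport
```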

Easy enough. So all of the Fluentd containers will send their logs to this Elasticsearch pod. The next step is to create a Kibana frontend so that we can view and search the logs. Let’s define that…

Note: This is where the changes I made in my PR come into play.  I needed a way to tell the pod what port and protocol to use on the API server.  The definition from GitHub assumes you’re running the API server on port 443 and doesn’t allow you to configure this service to be accessed over another port or protocol.  I added in the ENV variable section and also rebuilt the Docker image to take the new variables into account.  

Note v2: The above note is still accurate, but please note that this is now the official controller definition from the Kubernetes project. The only tweaks made were changing the SCHEME to ‘http’ and inserting the ‘:8080’ since that’s what my API server is listening on. The official image also got updated, so you don’t need to use mine anymore.
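With the tweaks from the notes above applied, the controller definition looks roughly like this. The image tag, the container port, and the env var name for the port are assumptions (only SCHEME is called out above); check the official addon source for the real names:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: kibana-logging
spec:
  replicas: 1
  selector:
    name: kibana-logging
  template:
    metadata:
      labels:
        name: kibana-logging
    spec:
      containers:
      - name: kibana-logging
        image: gcr.io/google_containers/kibana:1.3  # tag is a guess
        env:
        - name: SCHEME
          value: "http"          # my API server proxy is plain HTTP
        - name: APISERVER_PORT   # hypothetical name; the point is passing ':8080'
          value: "8080"
        ports:
        - containerPort: 80      # verify against the port the Kibana image serves on
```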

This definition assumes you’ll be accessing the Kibana frontend through the API server proxy on HTTP.  So now that we have the frontend pod defined, we need to define a service to access it so we can use the API server proxy…
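A sketch of that service; as with the others, the apiVersion, labels, and port are assumptions to verify against the Kibana image you run:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kibana-logging
  labels:
    name: kibana-logging
spec:
  selector:
    name: kibana-logging
  ports:
  - port: 80          # verify against the port the Kibana container exposes
    targetPort: 80
```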

Once you define all four components in the cluster, you should be able to access Kibana through the following URL…
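With my API server listening on plain HTTP port 8080, the proxy URL looks something like this. The API version prefix depends on your cluster version, and ‘api-server’ is a placeholder for your master’s address:

```
http://<api-server>:8080/api/v1beta3/proxy/services/kibana-logging/
```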

Note: You NEED the trailing slash in the URL!!!

Pretty slick huh?  Now let’s see if it actually works.  To do this, I’ll load a pod definition into Kubernetes that launches a test web pod I created. The pod definition looks like this…
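A minimal stand-in for that test pod. Substitute any image that writes to stdout, since Docker’s json-file logs are exactly what Fluentd is tailing:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-test
  labels:
    name: web-test
spec:
  containers:
  - name: web-test
    image: <your-test-web-image>  # any container that logs requests to stdout
    ports:
    - containerPort: 80
```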

Once the pod loads, let’s go to the local node it’s running on and see if it generated any logs that Docker caught…
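On that node, the check looks something like this (the container ID is a placeholder):

```shell
# Find the container for the test pod
docker ps | grep web-test

# Peek at the JSON log file Docker writes for it (the same file Fluentd tails)
sudo tail /var/lib/docker/containers/<container-id>/<container-id>-json.log
```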

Looks like it did, so let’s go back to the Kibana frontend and search for these logs…

Awesome! It’s working as expected. As you can see, this would be a pretty handy tool to have as your cluster continues to grow. Some pretty awesome open source logging functionality here.


  1. Sigmund Lundgren

    Any setup instructions for a bare metal Ubuntu cluster?
    I found some files in cluster/addons/fluentd-elasticsearch, but it seems to be missing the Fluentd pod/RC definitions.

    /S


    1. Michael Lev

      Hi,
      I saw your comment here and I think you asked in another place too.
      You can bring up a pod (or an RC if you wish) of Fluentd, as in cluster/saltbase/salt/fluentd-es/fluentd-es.yaml, and that should work.
      Adding a persistent volume might be a good idea too.


    2. herbalizer404

      Hi,
      Could you explain how did you do the following part exactly:

      Note: This is where the changes I made in my PR come into play. I needed a way to tell the pod what port and protocol to use on the API server. The definition from GitHub assumes you’re running the API server on port 443 and doesn’t allow you to configure this service to be accessed over another port or protocol. I added in the ENV variable section and also rebuilt the Docker image to take the new variables into account.

      I have the same problem with similar pod definitions but with these images:
      - image: gcr.io/google_containers/elasticsearch:1.8

      It tries to reach https://10.254.0.1:443, and my API server works only on 8080.

      I put these env vars:
      env:
      - name: TRANSPORT_PORT
        value: “http”
      - name: HTTP_PORT
        value: “8080”

      But it doesn’t work 🙁


    3. DT

      Hi,

      Newbie to Fluentd. Can you explain how the tag docker.* in the source is used to match docker.**, and why use es-containers.log for pos_file?

      Thanks,
      DT
