In previous posts, we talked about running skyDNS and Heapster on your Kubernetes cluster. In this post, I want to talk about the last of the cluster ‘addons’ available today in the Kubernetes repository. This addon is a combination of Fluentd, Elasticsearch, and Kibana that makes for a pretty powerful log aggregation system on top of your Kubernetes cluster. One of the major struggles with any large deployment is logging; having a central place to aggregate logs makes troubleshooting and analysis considerably easier. That being said, let’s jump right into the configuration.
Note: I have an open PR on this addon to make it a little more flexible from a configuration perspective. Namely, I want to be able to specify the port and protocol used by the API server to access the backend service when using the API server as a service proxy. Because of that, some of my pod/controller definitions will differ from what you see on GitHub; I’ll point out the differences below as we come across them.

Update: The PR got merged! We still need to slightly tweak the controller definition, but other than that it should be identical to what you see out on the official repo. More info on the tweaks below when we talk about the controller definition; see ‘Note v2’.
The first step is to have the Kubernetes nodes collect the logs. This is done with a local Fluentd container running on each node. To get this running on each host, we’ll use a local manifest to tell the kubelet to run the container. If you’ve been reading my other posts, you’ll recall that in the initial cAdvisor deployment we used a manifest to deploy that local container before it got integrated into the kubelet. If you still have the kubelet service configured for manifests, you’re all set and can skip to the next part where we define the manifest. If you don’t and are starting from scratch, follow the directions below…
Note: If you’re going off of my SaltStack repo for my lab build, this entire build is already integrated for you.
The first thing we need to do is tell the kubelet process to look for manifests. Manifests are almost identical to pod definitions, and they define a container you want the kubelet process to run when it starts. Think of it as a locally significant pod that the kubelet keeps an eye on and ensures is always running. So let’s check out our kubelet systemd unit file…
[Unit]
Description=Kubernetes Kubelet
After=etcd.service
After=docker.service
Wants=etcd.service
Wants=docker.service

[Service]
ExecStart=/opt/kubernetes/kubelet \
  --address=10.20.30.62 \
  --port=10250 \
  --hostname_override=10.20.30.62 \
  --api_servers=http://10.20.30.61:8080 \
  --cluster_dns=10.100.0.10 \
  --cluster_domain=kubdomain.local \
  --config=/etc/kubernetes/manifests \
  --logtostderr=true
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
Notice the line in the service config that defines a ‘config’ directory. That tells the kubelet process to look in ‘/etc/kubernetes/manifests’ for any manifest definition files. If you don’t have this line in your service definition, you’ll need to add it. So let’s look at the manifest definition we’ll be using for Fluentd…
version: v1beta2
id: fluentd-to-elasticsearch
containers:
  - name: fluentd-es
    image: gcr.io/google_containers/fluentd-elasticsearch:1.3
    env:
      - name: FLUENTD_ARGS
        value: -qq
    volumeMounts:
      - name: containers
        mountPath: /var/lib/docker/containers
      - name: varlog
        mountPath: /varlog
volumes:
  - name: containers
    source:
      hostDir:
        path: /var/lib/docker/containers
  - name: varlog
    source:
      hostDir:
        path: /var/log
This should look very familiar since it’s essentially a pod definition like the ones we deploy into the Kubernetes cluster. Once you have the manifest in place and the kubelet service updated, restart the kubelet so it picks up the new config. Once it has, you should see the individual hosts download and start the container defined in the manifest…
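To confirm, something like this on each node should do the trick (a quick sketch; container names and output will vary in your environment)…

sudo systemctl daemon-reload
sudo systemctl restart kubelet

# Give the node a minute to pull the image, then check for the container
docker ps | grep fluentd-elasticsearch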
Good, so now what exactly does this container do? If we look at the code out on GitHub, we can see that the Fluentd config is grabbing logs from a couple of places…
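It looks roughly like this. This is a paraphrased excerpt of the td-agent.conf baked into the image, so check the fluentd-elasticsearch image source on GitHub for the exact contents of the version you’re running…

# Tail the JSON log files Docker writes for every container on the host
<source>
  type tail
  format json
  time_key time
  path /var/lib/docker/containers/*/*-json.log
  pos_file /var/log/es-containers.log.pos
  tag docker.*
</source>

# Ship the container logs to Elasticsearch
<match docker.**>
  type elasticsearch
  host elasticsearch-logging.default
  port 9200
  logstash_format true
</match>

# Tail the kubelet log (/varlog is where the manifest mounts the host's /var/log)
<source>
  type tail
  format none
  path /varlog/kubelet.log
  pos_file /varlog/es-kubelet.log.pos
  tag kubelet
</source>

# Ship the kubelet log to Elasticsearch as well
<match kubelet>
  type elasticsearch
  host elasticsearch-logging.default
  port 9200
  logstash_format true
</match>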
The first two sections tell Fluentd to grab the container logs. We do this by wildcarding all of the logs in ‘/var/lib/docker/containers/*’. The last two sections grab the kubelet log file off of each local host. So we’re grabbing all of these logs and sending them to ‘elasticsearch-logging.default’. So what is that host? A Kubernetes service, of course. Let’s define that service…
apiVersion: v1beta1
kind: Service
id: elasticsearch-logging
port: 9200
containerPort: 9200
labels:
  name: elasticsearch-logging
  kubernetes.io/cluster-service: "true"
selector:
  name: elasticsearch-logging
So this service will get registered in DNS (this requires the skyDNS addon), and that’s how Fluentd will reach Elasticsearch. This of course means that we need to define the pod that will match the given service selector. Let’s do that now…
apiVersion: v1beta1
kind: ReplicationController
id: elasticsearch-logging-controller
desiredState:
  replicas: 1
  replicaSelector:
    name: elasticsearch-logging
  podTemplate:
    desiredState:
      manifest:
        version: v1beta1
        id: es-log-ingestion
        containers:
          - name: elasticsearch-logging
            image: gcr.io/google_containers/elasticsearch:1.0
            ports:
              - name: es-port
                containerPort: 9200
              - name: es-transport-port
                containerPort: 9300
            volumeMounts:
              - name: es-persistent-storage
                mountPath: /data
        volumes:
          - name: es-persistent-storage
            source:
              emptyDir: {}
    labels:
      name: elasticsearch-logging
      kubernetes.io/cluster-service: "true"
labels:
  name: elasticsearch-logging
  kubernetes.io/cluster-service: "true"
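If you want to verify Elasticsearch is reachable before moving on, you should be able to hit it through the API server’s service proxy. This is just a sanity check sketch; substitute your own master address and port…

curl http://10.20.30.61:8080/api/v1beta1/proxy/services/elasticsearch-logging/

If it’s healthy, you should get the standard Elasticsearch banner JSON back.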
Easy enough. So all of the Fluentd containers will send their logs to this Elasticsearch pod. The next step is to create a Kibana frontend so that we can view and search the logs. Let’s define that…
Note: This is where the changes I made in my PR come into play. I needed a way to tell the pod what port and protocol to use on the API server. The definition from GitHub assumes you’re running the API server on port 443 and doesn’t allow you to configure this service to be accessed over another port or protocol. I added in the ENV variable section and also rebuilt the Docker image to take the new variables into account.
Note v2: The above note is still accurate, but please note that this is now the official controller definition from the Kubernetes project. The only tweaks made were changing the SCHEME to ‘http’ and inserting the ‘:8080’, since that’s what my API server is listening on. The official image also got updated, so you don’t need to use mine anymore.
apiVersion: v1beta1
kind: ReplicationController
id: kibana-logging-controller
desiredState:
  replicas: 1
  replicaSelector:
    name: kibana-logging
  podTemplate:
    desiredState:
      manifest:
        version: v1beta1
        id: kibana-viewer
        containers:
          - name: kibana-logging
            image: gcr.io/google_containers/kibana:1.2
            env:
              - name: "ES_SCHEME"
                value: "http"
              - name: "ES_HOST"
                value: "\"+window.location.hostname+\":8080/api/v1beta1/proxy/services/elasticsearch-logging"
            ports:
              - name: kibana-port
                containerPort: 80
    labels:
      name: kibana-logging
      kubernetes.io/cluster-service: "true"
labels:
  name: kibana-logging
  kubernetes.io/cluster-service: "true"
This definition assumes you’ll be accessing the Kibana frontend through the API server proxy on HTTP. So now that we have the frontend pod defined, we need to define a service to access it so we can use the API server proxy…
apiVersion: v1beta1
kind: Service
id: kibana-logging
containerPort: 80
port: 5601
labels:
  name: kibana-logging
  kubernetes.io/cluster-service: "true"
selector:
  name: kibana-logging
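For reference, loading the four definitions is just a matter of feeding each file to kubectl. The file names here are my own, so use whatever you saved yours as…

kubectl create -f es-service.yaml
kubectl create -f es-controller.yaml
kubectl create -f kibana-controller.yaml
kubectl create -f kibana-service.yaml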
Once you’ve loaded all four components into the cluster, you should be able to access Kibana through the following URL…
http://<Kubernetes Master>:8080/api/v1beta1/proxy/services/kibana-logging/
Note: You NEED the trailing slash in the URL!!!
Pretty slick huh? Now let’s see if it actually works. To do this, I’ll load a pod definition into Kubernetes that launches a test web pod I created. The pod definition looks like this…
id: "webpod"
kind: "Pod"
apiVersion: "v1beta1"
desiredState:
  manifest:
    version: "v1beta1"
    id: "webpod"
    containers:
      - name: "webpod8080"
        image: "jonlangemak/docker:web_container_8080"
        cpu: 100
        ports:
          - containerPort: 8080
            hostPort: 8080
labels:
  name: "web"
Once the pod loads, let’s go to the local node it’s running on and see if it generated any logs that Docker caught…
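Something like this on the node will show what Docker captured (the container ID will obviously be different on your system)…

docker ps | grep web_container_8080
sudo tail /var/lib/docker/containers/<container id>/<container id>-json.log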
Looks like it did, so let’s go back to the Kibana frontend and search for these logs…
Awesome! It’s working as expected. As you can see, this would be a pretty handy tool to have as your cluster continues to grow. Some pretty awesome open source logging functionality here.
Any setup instructions for a bare metal Ubuntu cluster?
I found some files in cluster/addons/fluentd-elasticsearch, but it seems to be missing the fluentd pod/RC definitions.
/S
Hi,
I saw your comment here, and I think you asked in another place too.
You can bring up a fluentd pod (as an RC if you wish), as in cluster/saltbase/salt/fluentd-es/fluentd-es.yaml, and that should work.
Adding a persistent volume might be a good idea too.
Hi,
Could you explain how exactly you did the following part:
Note: This is where the changes I made in my PR come into play. I needed a way to tell the pod what port and protocol to use on the API server. The definition from GitHub assumes you’re running the API server on port 443 and doesn’t allow you to configure this service to be accessed over another port or protocol. I added in the ENV variable section and also rebuilt the Docker image to take the new variables into account.
I have the same problem with a similar pod definition, but with this image:
- image: gcr.io/google_containers/elasticsearch:1.8
It tries to reach https://10.254.0.1:443, and my API server works only on 8080.
I put in these env vars:
env:
  - name: TRANSPORT_PORT
    value: "http"
  - name: HTTP_PORT
    value: "8080"
But it doesn’t work 🙁
Hi,
Newbie to fluentd. Can you explain how the tag docker.* in the source is used to match docker.**, and why es-containers.log is used for the pos_file?
Thanks,
DT