Kubernetes is a container orchestration tool, but its functionality extends far beyond just orchestrating containers in a narrow sense. It offers a range of additional features that—to a limited extent—address needs such as load balancing, access control, security policy enforcement, and even logging and monitoring. Indeed, Kubernetes’s broad functionality has led some folks to call it an “operating system” in its own right.
That said, many of the extra features that Kubernetes provides are not full-fledged solutions. On the security front, for example, Kubernetes provides some tools to prevent abuse, but it's hardly sufficient on its own to address every security aspect of a given workload. For load balancing, Kubernetes manages the way traffic is distributed to workloads within a cluster, but it won't load-balance your entire network.
The same type of limitation applies to Kubernetes’s logging and monitoring features: While Kubernetes offers some basic logging and monitoring facilities, it’s a far cry from a complete logging and monitoring solution.
Because of these limitations, understanding what Kubernetes can do natively, and when it requires help from external tools to address a particular need, is critical for deploying Kubernetes successfully.
With that reality in mind, let’s take a look at Kubernetes’s built-in logging and monitoring functionality, and at what’s missing out of the box.
The built-in monitoring and logging tooling in Kubernetes is basic but effective for certain types of needs. Essentially, it boils down to two types of functionality: log access and log storage.
Using a command like kubectl logs [pod name], you can read the “logs” of any container running within a Kubernetes cluster.
The caveat here (and the reason “logs” is in scare quotes) is that the “logs” you can access this way are not log files in the traditional sense, but rather the stdout and stderr streams generated by containers as they run. The kubelet captures this output and stores it in a file on the node, which is what kubectl logs reads. That works as long as the pod still exists there: if a container crashes and restarts, you can still retrieve the previous instance’s output, but once a pod is deleted or evicted, its container logs are gone unless you were already shipping them somewhere else. The reason for an eviction is recorded at the cluster (platform) level, in pod status and events, but that tells you why the pod was removed, not what your application logged.
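In practice, reading these streams looks like the following. This is a usage sketch, not something you can run outside a cluster, and the pod and container names are placeholders:

```shell
# Read the stdout/stderr stream of a pod's container.
# "my-pod" and "my-container" are placeholder names.
kubectl logs my-pod

# If the pod runs more than one container, name the container explicitly.
kubectl logs my-pod -c my-container

# If a container crashed and restarted, --previous shows the prior
# instance's output (as long as the pod still exists on the node).
kubectl logs my-pod --previous

# Follow the stream live, or limit output to recent data.
kubectl logs my-pod -f
kubectl logs my-pod --since=1h --tail=100
```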
Kubernetes also logs data from various components of Kubernetes itself to files that you can access by logging into Kubernetes nodes directly.
Specifically, the Kubernetes master node (or nodes, if you have multiple masters) offers log data at /var/log/kube-apiserver.log, /var/log/kube-scheduler.log, and /var/log/kube-controller-manager.log, and each worker node has /var/log/kubelet.log and /var/log/kube-proxy.log files. (On systemd-based nodes, some of these components log to the journal rather than to files, so you may need journalctl instead of the paths above.)
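Checking these component logs from a shell on the node itself might look like this. This is a sketch that assumes you have SSH access to the node; whether the file paths or the journal applies depends on how your distribution runs these components:

```shell
# On a worker node, inspect the component log files directly
# (paths as documented above; they assume file-based logging).
sudo tail -n 50 /var/log/kubelet.log
sudo tail -n 50 /var/log/kube-proxy.log

# On systemd-based nodes, the kubelet typically logs to the journal:
sudo journalctl -u kubelet --since "1 hour ago"
```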
The two types of logging facilities described above come in handy if you need to check information quickly or research a one-time event that occurred within your Kubernetes cluster. They’re kind of akin to the information you could get by running dmesg | tail in a Bash shell on a Linux server, in that they are a quick and easy way of accessing small amounts of information, especially if you already know what kind of information you are looking for.
When it comes to more complex logging and monitoring needs, however, Kubernetes alone doesn’t cut it. Kubernetes lacks native features for the following critical tasks:
Although Kubernetes creates logs for each container and for Kubernetes itself, it doesn’t automatically rotate or archive this data. Instead, it expects you to handle log rotation, and if you don’t, you risk having your log files eat up all of the storage space on your nodes.
For the record, I should point out that most Kubernetes distributions do set up log rotation facilities for you when you install them. However, Kubernetes itself doesn’t handle log rotation, and if your distribution doesn’t provide a solution for this task automatically, you need to implement one manually.
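If you do need to tune rotation yourself, the kubelet exposes two relevant settings in its KubeletConfiguration, which govern per-container log rotation on CRI-based runtimes. A minimal config fragment, assuming your distribution lets you supply kubelet configuration directly:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Rotate a container's log file once it reaches 50 MiB...
containerLogMaxSize: 50Mi
# ...and keep at most 5 files (current plus rotated) per container.
containerLogMaxFiles: 5
```

Note that this only controls rotation on the node; it does nothing to archive or ship the rotated data anywhere.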
Likewise, Kubernetes doesn’t offer any tools for aggregating log data in a single location or merging similar types of logs together. It lets you view logs for containers and nodes on an individual, one-off basis, which is useful if you need to pull some quick information about a particular container or node.
But, what if you want to monitor all of your containers at once, or trace monitoring data related to a particular event across multiple containers or nodes? The only way to do that natively in Kubernetes would be to access each log manually, which is not practical to do at scale.
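To make the point concrete, here is roughly what that manual approach looks like: a loop that dumps the logs of every pod in a namespace into one file. The namespace is a placeholder, and this is exactly the kind of crude stand-in that a real log aggregator replaces:

```shell
# Dump the logs of every pod in a namespace into a single file.
# Fine for a quick look; impractical at scale.
for pod in $(kubectl get pods -n my-namespace -o name); do
  echo "=== $pod ==="
  kubectl logs -n my-namespace "$pod" --all-containers
done > all-pod-logs.txt
```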
Kubernetes will show you log data, but it does nothing to help you read or interpret it. It doesn’t offer visualization features, or even alerts or notifications about monitoring events that could signal a problem.
In most Kubernetes distributions, the container logs available from kubectl are capped at a mere 10 megabytes per container by default. Kubernetes rotates the log once it exceeds this limit, and kubectl only reads the current file, so older data drops out of view.
This may not be much of an issue if your containers generate little log data. But a busy container can churn through 10 megabytes quickly, leaving you with only its most recent messages.
For similar reasons, accessing log data through kubectl is not very helpful if you need information about a historical event. Kubernetes may have rotated that data away long ago to keep the log file under the size limit.
In short, Kubernetes offers enough built-in logging and monitoring functionality to allow you to monitor workloads on a small scale or research one-off events that occurred in the recent past.
However, Kubernetes on its own falls far short of offering a full-fledged logging and monitoring solution. To fill the gaps, you need to pair Kubernetes with external tools that can handle log rotation and aggregation, store historical log data over the long term, and provide you with the analytics features you need to achieve true monitoring visibility.
There are different ways to implement this, with the most common being to run a “sidecar” container in each pod that interfaces between the pod and an external log manager. Setting up this type of solution requires a little extra work. However you choose to fill the gaps Kubernetes leaves, it’s critical to gather data from your entire stack if you want to monitor and provide logging for your Kubernetes workloads at scale.
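The sidecar pattern can be sketched as a pod spec like the one below. This is a minimal illustration, not a production setup: the image names are placeholders, and the “shipper” here is just busybox tailing a shared log file to its own stdout. A real deployment would run a log-forwarding agent image (Fluentd or Fluent Bit, for example) configured to send the data on to your external log manager:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging-sidecar
spec:
  volumes:
    # Shared scratch volume the app writes its log file into.
    - name: app-logs
      emptyDir: {}
  containers:
    - name: app
      image: my-app:latest        # placeholder application image
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app
    - name: log-shipper
      image: busybox              # stand-in for a real log-forwarding agent
      args: [/bin/sh, -c, 'tail -n+1 -F /var/log/app/app.log']
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app
```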