Keeping track of what's going on in Kubernetes isn't easy. It's an environment where things move quickly, individual containers come and go, and a large number of independent processes involving separate users may all be happening at the same time. Container-based systems are by their nature optimized for rapid, efficient response to a heavy load of requests from multiple users in a highly abstracted environment and not for high-visibility, real-time monitoring.
However, the fact remains that you need to know what's happening inside your Kubernetes clusters; the cost of not knowing is simply too high. You need to be able to track bugs, to identify bottlenecks and other performance issues, and to detect security problems—and you may need to do these things in at least close-to-real time. So how can you gain the necessary insight into your Kubernetes system?
If you search around, you may see a lot of answers to that question, but the truth is that they pretty much all come down to this: Look at the logs. Logs are the number one resource for staying on top of application development and deployment, and that is as true for Kubernetes as it is for any other platform.
But all too often, logs themselves aren't that easy to find, let alone manage or read. Kubernetes container logs last only as long as the pod itself, and every platform, tool, and application has its own logging system. How do you organize and keep track of them all? And, when you find them and read them, how do you pick your way through the contents when each log may have its own syntax and method of organization, and any or all of them may be excessively verbose, or cryptic, or both?
To make effective use of the logs that are available, you need to automate the process of finding log data, then gathering and processing that data, presenting it in clearly understandable formats, and turning high-priority items into alerts or other calls to action. You need a comprehensive, powerful tool to handle the complex and demanding tasks required for truly useful log management.
That's where LogDNA comes in. LogDNA does the heavy lifting when it comes to managing logs and log data. It can pull in log data from an extraordinarily wide variety of sources (including Docker, Heroku, Cloud Foundry, AWS, and IBM, plus those here-one-millisecond-gone-the-next Kubernetes container logs), as well as parse, manage, and organize the data; display log contents along with metrics in a visual format; archive and export log data; and generate alerts via your favorite alerting system.
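Because container logs disappear with the pod, collection has to happen at the node level: LogDNA's Kubernetes integration runs an agent as a DaemonSet that tails the container log files on each node and ships them off-cluster before they can vanish. The manifest below is a simplified sketch of that pattern, not the official install file; the Secret name and environment variable are assumptions, though the public `logdna/logdna-agent` image is real.

```yaml
# Simplified sketch of a node-level log-collection DaemonSet.
# Illustrative only -- use LogDNA's published manifest for a real install.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: logdna-agent
spec:
  selector:
    matchLabels:
      app: logdna-agent
  template:
    metadata:
      labels:
        app: logdna-agent
    spec:
      containers:
      - name: logdna-agent
        image: logdna/logdna-agent:latest
        env:
        - name: LOGDNA_AGENT_KEY          # ingestion key, kept in a Secret
          valueFrom:
            secretKeyRef:
              name: logdna-agent-key      # assumed Secret name
              key: logdna-agent-key
        volumeMounts:
        - name: varlog                    # node's log directory, mounted read-only
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
```

Running one agent per node (rather than a sidecar per pod) is what lets the collector outlive any individual pod and capture its final log lines.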
What can LogDNA do for your DevOps team? We've already seen that LogDNA provides access to a wealth of log data, so this may be a better way to ask the question: What can the information contained in application, infrastructure, and platform logs do for your team at key points in your DevOps delivery chain? Let's take a quick look:
For developers, logging at the code-library level can provide key insights into application behavior, including cumulative effects, patterns of action over time, and anomalous behavior that may not generate an error. LogDNA includes official integrations with Node.js, Python, Go, and Ruby, among others, as well as unofficial integrations with Java and other major languages and libraries.
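These code-library integrations typically plug into the language's standard logging facility, so application code keeps calling the logger it already uses while a handler forwards each record to the service. The sketch below shows that pattern with only the Python standard library; the `ForwardingHandler` stub is a stand-in for a real LogDNA handler, and its buffering behavior is an assumption for illustration.

```python
import logging

class ForwardingHandler(logging.Handler):
    """Stub standing in for a LogDNA-style handler: it buffers the
    formatted records that a real handler would ship to the ingestion API."""
    def __init__(self):
        super().__init__()
        self.shipped = []

    def emit(self, record):
        # A real handler would POST this line, plus metadata such as app
        # name and environment, to the logging service.
        self.shipped.append(self.format(record))

logger = logging.getLogger("checkout")
logger.setLevel(logging.INFO)
handler = ForwardingHandler()
handler.setFormatter(logging.Formatter("%(levelname)s %(name)s %(message)s"))
logger.addHandler(handler)

# Anomalies that never raise an error still leave a trail in the logs.
logger.info("cart total recalculated")
logger.warning("retrying payment gateway (attempt 2)")

print(handler.shipped[0])  # INFO checkout cart total recalculated
```

Because the integration sits behind the standard `logging` interface, swapping the stub for the real handler changes where records go without touching application code.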
LogDNA also makes it easy to monitor the development process using GitHub event integration. You can monitor team activities for individual repositories including push, pull, commit, fork, create, and comment events.
At the deployment end, operating-system-specific or platform-specific integrations can provide additional insight into individual apps, as well as allow you to monitor application interactions with other apps and the deployment infrastructure.
For quality assurance, application, platform, and infrastructure logs can all provide important information. For performance monitoring, for example, log data from both the deployment and host infrastructure can give you valuable insight into time and resource use at the level of individual containers and microservices; it can also help you uncover resource-use conflicts and other potential bottlenecks.
At the level of operations, you need quick, centralized access to key log data items not only from your deployment and hosting platforms but also from infrastructure-related management and resource services. Real-time and near real-time log monitoring makes it possible to anticipate such things as shifting load requirements over time and changing patterns of user access, as well as to make the necessary adjustments before those shifts become a problem.
LogDNA integrates with most major alerting services in addition to providing generic email alerts. You can set up alerts based on the number or frequency of specific log events, then use the alert service's features to notify the appropriate team members based on the time, nature, and severity of the alert.
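An alert based on the number or frequency of log events boils down to a sliding-window threshold. The hypothetical sketch below (class name, parameters, and match string are illustrative, not LogDNA's API) shows the underlying logic such a rule encodes:

```python
from collections import deque

class FrequencyAlert:
    """Fire when more than `threshold` matching events occur within
    `window` seconds of each other."""
    def __init__(self, threshold, window):
        self.threshold = threshold
        self.window = window
        self.times = deque()

    def record(self, timestamp, line, needle="ERROR"):
        """Return True if this log line trips the alert."""
        if needle not in line:
            return False
        self.times.append(timestamp)
        # Drop matches that have aged out of the sliding window.
        while self.times and timestamp - self.times[0] > self.window:
            self.times.popleft()
        return len(self.times) > self.threshold

alert = FrequencyAlert(threshold=3, window=60)
events = [(0, "ERROR db timeout"), (10, "INFO ok"), (20, "ERROR db timeout"),
          (30, "ERROR db timeout"), (40, "ERROR db timeout")]
fired = [t for t, line in events if alert.record(t, line)]
print(fired)  # [40] -- the fourth error within 60 s trips the alert
```

In practice the alerting service evaluates rules like this server-side and hands the result to your notification channel of choice, so the on-call engineer sees one alert instead of four raw error lines.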
Security is no longer an option in DevOps—it's a necessity. Log aggregation, monitoring, and analysis can and should play a key role in your security operations. Platform and infrastructure-level service provider logs are typically the best (and often the only) way to track user access; to detect unusual load patterns, anomalous behavior, and suspicious incidents (such as repeated login errors); and to trace the actions of intruders. Your alert-system integration should include any security-related events and enable quick security-team access to all relevant log data.
What's the bottom line? Maybe it's this: Your application, service, and infrastructure logs are a valuable resource, and if you don't make good use of them, you are depriving your operation of important—and in many ways vital—tools. LogDNA makes it easy to harness the full power of your system's logs and put it to work for your DevOps team.