How Log Management Improves Your Release Cycle

Learning objectives
  • Understand the DevOps release cycle
  • Learn about software development automation
  • See how to use logs in CI/CD

In a previous post, we discussed the vital role that logging plays in a microservices architecture. This post expands on the role of logging and log management and explores the benefits they bring to your release cycle.

We’ll begin our journey by looking at the DevOps release cycle and breaking it down into its parts, discussing the benefits of this approach and how to optimize it. DevOps teams find that they can fully realize the advantages of this cycle when they automate the process, so we’ll examine the role that logs play in supporting automation efforts. Finally, we’ll point you to resources for learning more and suggest some next steps.


The DevOps Release Cycle

In the past, the traditional software development life cycle began with the development team building a solution in its entirety. Once development was complete, they passed it on to the quality assurance (QA) team for testing. Assuming the product got QA approval, it was then handed to an operations team to deploy and manage in the production environment. While this process worked, it wasn’t efficient, and it often resulted in adversarial relationships between the teams.

Most modern development teams have adopted the DevOps approach for developing and releasing software into a production environment. The objective of DevOps is to centralize all aspects of development and operations in a single team. When practiced effectively, this shortens the release cycle and improves quality. A single team is responsible for managing the entire lifecycle of a product, which increases the sense of ownership and reduces friction. DevOps can be incredibly effective, but it requires a paradigm shift in the way teams work, and teams need access to new and more efficient tools.

The DevOps release cycle works as follows:

  1. A small unit of work is selected, and the team designs and builds a solution.
  2. The solution includes additions to the test suite, such as unit tests, integration tests and performance tests.
  3. The code is reviewed and merged into the main code base.
  4. The code merge triggers a continuous integration (CI) pipeline, which executes all tests within the test suite, and may also run static analysis and security scans against the updated code base.
  5. If successful, a continuous delivery or deployment (CD) pipeline builds and packages the code, and deploys it into either a test or a production environment.
  6. The team monitors the new deployment to determine its effectiveness and stability, and takes any additional action to ensure that it works as expected.

Steps 4, 5, and 6 are often combined into a single utility, known as a CI/CD pipeline, and the effectiveness of this pipeline has a massive bearing on the team’s productivity and success.
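The flow above can be sketched as a minimal pipeline driver. Every stage name and function here is a hypothetical placeholder for illustration, not the API of any real CI system:

```python
# Minimal sketch of the release flow described above. Each stage is a
# hypothetical placeholder; a real pipeline would invoke test runners,
# scanners, and deployment tooling instead of simple callables.

def run_pipeline(commit, stages):
    """Run a change through each stage in order; stop at the first failure."""
    for name, stage in stages:
        if not stage(commit):  # a falsy result rejects the change
            return f"rejected at: {name}"
    return "deployed"

# Example stages mirroring steps 4-6: tests, scans, build/deploy, monitoring.
stages = [
    ("test suite", lambda c: c["tests_pass"]),
    ("static analysis and security scans", lambda c: c["scans_pass"]),
    ("build, package, and deploy", lambda c: True),
    ("post-deployment monitoring", lambda c: c["healthy"]),
]
```

A change that fails its test suite never reaches the deployment stage, which is exactly the gatekeeping behavior the numbered steps describe.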


The Power of Automation

We use automation in software development to streamline processes and ensure that repeatable steps are always performed consistently, without relying on human input or intervention. An effective CI/CD pipeline leverages the power of automation to take changes to the code base and move them through a carefully designed series of tests and validations until they are either rejected or deployed into an environment.

The automated processes within the pipeline rely on feedback to validate their results. Feedback during the testing process comes from the results of each of the tests. Once the product is deployed, the pipeline needs access to information that validates the performance of the application or service itself. An effective log management solution is critical at this point.


Using Logs to Drive Automation

Traditionally, software engineers have used logs to troubleshoot applications and gain insights into the performance of the services they support. Modern log management systems like LogDNA aggregate, index, and analyze logs automatically to provide these insights and expose them programmatically to utilities like a CI/CD pipeline.

The CI/CD pipeline can use data from the log management system in a couple of ways. One is as a validation source for a canary deployment within a microservices architecture. For example, the team might run a specific service in a high-availability configuration, with several containers or instances behind a load balancer.

Let’s consider an example of a cluster of five instances running a particular service. Instead of replacing all the instances at once, the pipeline deploys a single instance with the new code. Once the deployment succeeds, it configures the load balancer to route 5% of the traffic to the new instance. The log management system collects the logs from the new instance, and the pipeline can query the system for those logs to determine error rates, success rates, and metrics such as latency and processing times.
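As a rough sketch, summarizing the canary’s logs might look like the snippet below. The entry fields (`status`, `latency_ms`) are assumed for illustration and do not reflect any particular log management system’s schema:

```python
# Hypothetical sketch: turning structured log entries fetched from a log
# management system's query API into canary health metrics. The field
# names are illustrative assumptions, not a real log schema.

def summarize_logs(entries):
    """Compute an error rate and average latency from structured log entries."""
    if not entries:
        return {"error_rate": 0.0, "avg_latency_ms": 0.0}
    errors = sum(1 for e in entries if e["status"] >= 500)  # 5xx = server error
    avg_latency = sum(e["latency_ms"] for e in entries) / len(entries)
    return {"error_rate": errors / len(entries), "avg_latency_ms": avg_latency}

# Example: four requests observed on the canary, one of which failed.
canary_logs = [
    {"status": 200, "latency_ms": 42},
    {"status": 200, "latency_ms": 38},
    {"status": 503, "latency_ms": 120},
    {"status": 200, "latency_ms": 40},
]
summary = summarize_logs(canary_logs)
```

The same summary would be computed for the instances running the previous version, giving the pipeline two comparable sets of numbers.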

The pipeline compares these results with those from the instances running the previous version of the software. If the results fall within an acceptable range or show an improvement, the release is deemed successful, and the remaining instances are updated or replaced with new instances executing the new code. If the results show an increase in errors or a degradation in performance, the pipeline fails the deployment and removes the canary instance.
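The promote-or-rollback decision itself can be expressed as a small comparison. This is a hedged sketch: the metric names and threshold values are illustrative assumptions, not recommended settings:

```python
# Hypothetical sketch of the canary decision. The thresholds are
# illustrative assumptions; real tolerances depend on the service's SLOs.

def canary_decision(canary, baseline,
                    max_error_increase=0.01, max_latency_ratio=1.10):
    """Promote the canary only if its error rate and latency stay within
    tolerance of the baseline instances' results."""
    if canary["error_rate"] > baseline["error_rate"] + max_error_increase:
        return "rollback"  # more errors than the previous version allows
    if canary["avg_latency_ms"] > baseline["avg_latency_ms"] * max_latency_ratio:
        return "rollback"  # latency degraded beyond the allowed ratio
    return "promote"       # within range or improved: roll out everywhere

baseline = {"error_rate": 0.02, "avg_latency_ms": 55.0}
good_canary = {"error_rate": 0.02, "avg_latency_ms": 50.0}
bad_canary = {"error_rate": 0.10, "avg_latency_ms": 50.0}
```

On a "promote" result, the pipeline would proceed to replace the remaining instances; on "rollback", it would fail the deployment and remove the canary, as described above.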


Learning More

Automating the code deployment process is just one use case where log management can drive automation and improve DevOps performance. In future articles, we’ll discuss how log management is an essential component of a successful digital transformation project. We’ll also talk about how you can leverage log management to understand the impact that code changes have on your pipeline. 

If you’d like to learn more about setting up a CI/CD pipeline and how to integrate log management, I’d recommend an excellent article by Thu Nguyen, “Maximize Observability of your CI/CD Pipeline with LogDNA,” which provides additional insights and advice for setting up a new pipeline.

Mike Mackrory
Mike Mackrory is a global citizen who has settled down in the Pacific Northwest - for now. By day he works as an Engineering Manager for a DevOps team, and by night he writes and tinkers with other technology projects. When he's not tapping on the keys, he can be found trail-running, hiking and exploring both the urban and the rural landscape with his kids. Always happy to help out another developer, he has a definite preference for helping those who bring gifts of gourmet donuts, craft beer and/or Single-malt Scotch.