Amazon CloudWatch is a log monitoring and management service from Amazon Web Services (AWS). It’s designed to give DevOps engineers, site reliability engineers (SREs), IT administrators, and software engineers access to the metrics, data, and statistics they need to understand their operations. It allows users to gather metrics and logs from three sources: vendor logs that are supported and published by AWS on behalf of the customer, logs published by AWS services, and custom logs. CloudWatch also allows users to collect and aggregate both predefined and custom metrics. The built-in default metrics pull from AWS services, including AWS Lambda, API Gateway, EC2, S3, ECS, and DynamoDB.
CloudWatch is optimized for AWS logs. Although it supports log ingestion from sources outside of AWS, you can only ingest them using the CloudWatch agent; it doesn’t support ingestion via Syslog, APIs, or code libraries. Depending on your needs, setup can require significant customization work, which may mean bringing in an expert on your team to get started. Once you’ve managed to get your logs into the service, searching them in the CloudWatch user interface (UI) and command line interface (CLI) is tedious and complex. Other expected features like data visualizations and alerts are extremely limited in CloudWatch. It is missing integrations with commonly used tools like Slack and PagerDuty and doesn’t support Webhooks, making it difficult for engineers to receive notifications of issues when they arise.
Lastly, CloudWatch offers complicated pricing and is notorious for surprise overages. With the paid tier, users pay for utilization every month. The first 10,000 metrics are billed at $.30 per metric per month, the next 240,000 at $.10, the next 750,000 at $.05, and anything over 1,000,000 metrics at $.02 per metric per month. Up to 5 statistics for the same metric in a single GetMetricData API request are included. Beyond that, billing starts at $.01 per 1,000 metrics requested, except for GetMetricWidgetImage, which bills at $.02 per 1,000 metrics requested. Metric streams are available at $.003 per 1,000 metric updates.
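To see how those metric tiers stack up, here is a minimal sketch of the math, using the per-metric rates quoted above. The function and constant names are ours for illustration, not an AWS API:

```python
# Illustrative calculator for CloudWatch's tiered custom-metric pricing.
# Rates are the per-metric-per-month figures quoted in this article.
METRIC_TIERS = [
    (10_000, 0.30),         # first 10,000 metrics: $.30 each
    (240_000, 0.10),        # next 240,000: $.10 each
    (750_000, 0.05),        # next 750,000: $.05 each
    (float("inf"), 0.02),   # beyond 1,000,000: $.02 each
]

def monthly_metric_cost(metric_count: int) -> float:
    """Estimate the monthly bill for a given number of metrics."""
    cost, remaining = 0.0, metric_count
    for tier_size, rate in METRIC_TIERS:
        in_tier = min(remaining, tier_size)
        cost += in_tier * rate
        remaining -= in_tier
        if remaining <= 0:
            break
    return cost

# 15,000 metrics: 10,000 * $.30 + 5,000 * $.10, roughly $3,500/month
estimate = monthly_metric_cost(15_000)
```

Note that the marginal rate drops as volume grows, so a bill is hard to eyeball without walking the tiers like this.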
Dashboards cost $3 per dashboard per month. Alarms range in price from $.10 to $.90 per alarm per month, depending on the resolution and nature of the alarm.
There is no cost to transfer log data in, but data transfer out is priced based on destination and volume. Collection is billed at $.50/GB, storage at $.03/GB, and analysis at $.005/GB. Vendor logs are billed at tiered rates up to 10 TB.
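Unlike the tiered metric rates, the log charges above are flat per-GB rates across three separate dimensions, so an estimate is a straightforward sum. A rough sketch (names are ours, not an AWS API):

```python
# Rough estimator for CloudWatch Logs charges using the flat per-GB
# rates quoted in this article.
COLLECT_PER_GB = 0.50   # ingestion
STORE_PER_GB = 0.03     # retention
ANALYZE_PER_GB = 0.005  # data scanned by queries

def monthly_log_cost(ingested_gb: float, stored_gb: float,
                     analyzed_gb: float) -> float:
    """Sum the three per-GB charges for one month."""
    return (ingested_gb * COLLECT_PER_GB
            + stored_gb * STORE_PER_GB
            + analyzed_gb * ANALYZE_PER_GB)

# e.g. 100 GB ingested and retained, 50 GB scanned by queries
estimate = monthly_log_cost(100, 100, 50)
```

Notice that ingestion dominates: at these rates, collecting a gigabyte costs more than sixteen times as much as storing it.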
Events bill users at a rate of $1 per million events and $1 per million cross-account events.
CloudWatch Contributor Insights bills at $.50 per rule per month plus $.02 per one million log events that match the rule. Contributor Insights for DynamoDB also bills at $.50 per rule per month, but matching events bill at $.03 per one million events per month.
Canaries run at $.0012 per canary run, although they may incur additional charges from the other AWS services they use.
If you made it through all of that, congrats. If you’re confused, take a deep breath and read the next section—your logging bill doesn’t have to give you anxiety.
A lot of people on the internet have nice things to say about LogDNA. We chose this review from G2 Crowd because it sums up the benefit of using LogDNA over CloudWatch pretty darn well.
Use the LogDNA Agent, Syslog, APIs, or code libraries to aggregate your logs from any source and we’ll intelligently detect the log type, automatically parse them, and index them in a way that makes keyword search fast and easy to use. You can save Views for a shortcut to a specific set of filters and search queries, visualize your logs using Boards, Graphs, and Screens, and receive real-time Alerts in PagerDuty, Slack, email, or anywhere else via a custom Webhook.
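To make the API path above concrete, here is a minimal sketch of preparing a log line for an HTTP ingestion endpoint. The payload shape (a "lines" array with timestamp, app, and level fields) is an assumption for illustration; check the LogDNA API documentation for the exact contract before relying on it:

```python
# Sketch of building a JSON payload for a hosted log-ingestion API.
# The field names here are illustrative assumptions, not a verified
# LogDNA contract -- consult the official API docs for the real shape.
import json
import time

def build_log_payload(line: str, app: str, level: str = "INFO") -> str:
    """Serialize one log line into a JSON request body."""
    body = {
        "lines": [
            {
                "timestamp": int(time.time() * 1000),  # ms since epoch
                "line": line,
                "app": app,
                "level": level,
            }
        ]
    }
    return json.dumps(body)

payload = build_log_payload("user login failed", app="auth-service",
                            level="WARN")
# POST this payload to the ingestion endpoint, authenticating with your
# ingestion key (e.g. via urllib.request or the `requests` library).
```

Tagging each line with an app and level at ingestion time is what makes the automatic parsing and filtered Views described above possible.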
LogDNA charges based on the volume of logs stored for a set period of time. Simple, right? All of our self-service plans are compliant with SOC 2 Type 2, PCI-DSS, GDPR, EU-US Privacy Shield, and CCPA, and we offer a HIPAA-compliant logging plan—no contract needed! We offer multiple control features to self-service customers including Exclusion Rules, which allow you to filter out logs by source, by app, or by specific queries that you want to see in Live Tail and be alerted on, but that you don’t want to store. Since we charge based on retention, these logs won’t count towards your bill. You can also automatically archive logs to Amazon S3, IBM Cloud Object Storage, Google Cloud Storage, Azure Blob Storage, and more.
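The idea behind an exclusion rule can be sketched as a simple predicate applied before storage: matching lines still flow through live streaming and alerting, but are never retained, so they never hit the bill. The rule shape below is our illustration, not LogDNA's actual rule engine:

```python
# Illustrative sketch of an exclusion rule: a predicate that decides
# whether a log should be dropped before storage. Not LogDNA's real
# rule engine -- field names and matching logic are assumptions.

def make_exclusion_rule(apps=(), sources=(), query=None):
    """Return a predicate: True means 'exclude this log from storage'."""
    def matches(log: dict) -> bool:
        if log.get("app") in apps:
            return True
        if log.get("source") in sources:
            return True
        if query and query in log.get("line", ""):
            return True
        return False
    return matches

# Drop noisy health checks and ping requests, keep everything else.
exclude = make_exclusion_rule(apps={"healthcheck"}, query="GET /ping")
logs = [
    {"app": "healthcheck", "line": "ok"},
    {"app": "api", "line": "GET /ping 200"},
    {"app": "api", "line": "POST /login 500"},
]
stored = [log for log in logs if not exclude(log)]
```

Here only the failed login would be retained; the health checks and pings remain visible in live streaming but cost nothing to keep.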
For enterprise customers who need advanced control features, LogDNA launched the Spike Protection Bundle, which includes Index Rate Alerting and Usage Quotas. When there’s an unexpected spike in ingestion, Index Rate Alerting provides actionable insight into which applications, sources, or tags produced the data spike, as well as any recently added sources. Usage Quotas let you specify soft and hard thresholds for the volume of log files to store. When the threshold is met, LogDNA will notify you and apply your defined configurations.
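The soft/hard threshold behavior described above can be sketched in a few lines: cross the soft threshold and you get notified, cross the hard threshold and your configured action kicks in. The class and return values are our illustration, not LogDNA's API:

```python
# Sketch of the soft/hard threshold idea behind Usage Quotas.
# Names and return values are illustrative assumptions.

class UsageQuota:
    def __init__(self, soft_gb: float, hard_gb: float):
        assert soft_gb <= hard_gb, "soft threshold must not exceed hard"
        self.soft_gb = soft_gb
        self.hard_gb = hard_gb

    def check(self, used_gb: float) -> str:
        """Return the action for the current usage level."""
        if used_gb >= self.hard_gb:
            return "apply-config"  # hard limit: enforce configured action
        if used_gb >= self.soft_gb:
            return "notify"        # soft limit: warn, keep indexing
        return "ok"

# Warn the team at 80 GB, enforce the configured action at 100 GB.
quota = UsageQuota(soft_gb=80, hard_gb=100)
```

The value of the two-level design is the buffer between thresholds: the notification arrives while there is still room to investigate the spike before enforcement starts.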
Earlier this month, we announced our early access program for Variable Retention, which gives you the flexibility to save logs in LogDNA only for the amount of time that they're relevant. This makes it possible to ingest new types of log data while keeping cost under control, allowing teams to leverage their logs for even more use cases. Our engineering team has even more control features on the product roadmap that will become available over the course of the next year.
Simply put, if you want to centralize your logs from any source, be alerted when something’s not right, search them with natural language, and visualize them to identify trends and anomalies, choose LogDNA. Get started with your fully-featured 14-day free trial now.