Docker logs are an indispensable tool for monitoring container operations, providing valuable insights into system behavior and streamlining the troubleshooting process. Gaining proficiency in accessing, interpreting, and managing these logs is key to maximizing performance in containerized environments. This guide offers a clear, step-by-step approach—from fundamental log viewing commands to advanced logging techniques. Discover how to craft an effective logging strategy and unlock critical insights from your containers with ease.
What are Docker logs?
Docker logs provide a detailed record of events, messages, and output generated by a container and the application running inside it. These logs capture everything sent to the container's stdout and stderr streams, offering critical insights into its behavior and performance.
Key components of Docker logs include:
- Application messages, warnings, and errors generated within the container
- Kernel messages and system errors from the container's operating system
- Lifecycle events such as container start and stop times
- Resource usage metrics like CPU, memory, and disk consumption
- Network activity, including incoming and outgoing requests and responses
- Security-related events, such as vulnerabilities or unauthorized access attempts
By analyzing Docker logs, you gain an in-depth understanding of a container’s operations, enabling more effective monitoring, troubleshooting, and optimization.
This abundance of information allows DevOps teams to pinpoint the root causes of issues swiftly, eliminating the need for direct container access. Logs are especially valuable in complex microservices architectures, where traditional debugging methods often prove inadequate. In distributed systems running across multiple containers, comprehensive logging serves as a vital lens into container operations. It enables teams to trace request flows between services, identify bottlenecks, and maintain seamless system performance.
Why view Docker logs?
Docker logs are an invaluable resource for troubleshooting containerized applications that fail or behave unexpectedly. When a container crashes or an application encounters errors, logs often reveal critical details such as exception messages, stack traces, or warnings leading up to the failure—insights that would otherwise remain concealed within the container. Additionally, these logs can uncover performance bottlenecks by highlighting response time trends, resource spikes, or slow database queries that may be hindering your application’s performance.
Proactively monitoring Docker logs goes beyond simple troubleshooting—it helps establish a baseline for container behavior, enabling you to detect anomalies before they disrupt production systems. This approach allows you to identify potential issues, such as memory leaks or connection errors, long before they affect end users. Additionally, Docker logs play a vital role in security audits and compliance, offering a detailed record of system activity, authentication attempts, and data access patterns. These logs are often essential for meeting regulatory standards like GDPR, HIPAA, or SOC 2 certification, making them a cornerstone of both operational efficiency and accountability.
How to see Docker logs
By default, Docker stores logs as JSON-formatted files on the host system at `/var/lib/docker/containers/<container_id>/<container_id>-json.log`. But you don’t need to access these files directly to check your logs—Docker maintains them and makes them accessible through the `docker logs` command.
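For context, each line of that json-file log is a single JSON object holding the log payload, the stream it came from, and a timestamp. Here’s a sketch of that shape (the request line shown is invented for illustration):

```shell
# Print a sample entry in the shape Docker's json-file driver writes:
# one JSON object per line with the log payload, stream, and timestamp.
# (The log content below is invented.)
cat <<'EOF'
{"log":"GET /health 200\n","stream":"stdout","time":"2025-03-20T10:00:01.123456789Z"}
EOF
```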
To view a container’s logs, run the `docker logs` command followed by the container ID or name in your terminal. This will show you all log messages generated by the container since it started.
Using the docker logs command
The `docker logs` command is the primary tool for accessing container logs. Here are some common use cases and how to apply them:
- View all logs for a container:
To check all logs for a given container, run:
docker logs <container_id>
Replace `<container_id>` with the actual ID or name of the container—for example:
docker logs my-project
- View all logs with timestamps:
To see all logs with corresponding timestamps, run:
docker logs --timestamps <container_id>
or
docker logs -t <container_id>
- View all logs with additional details:
To see all logs with extra information that might be stored (such as environment variables or labels), run:
docker logs --details <container_id>
- View recent log entries:
To view the most recent log entries, run:
docker logs --tail <number> <container_id>
or
docker logs -n <number> <container_id>
Replace `<number>` with the number of recent lines you want to view—for example:
docker logs --tail 50 my-project
This will display the 50 most recent lines from the `my-project` container.
- View logs from a specific time:
To see all logs generated after a certain time, run:
docker logs --since <timestamp> <container_id>
Replace `<timestamp>` with the specific date or time—for example:
docker logs --since 2025-03-20 my-project
This will display all logs from the `my-project` container created since March 20, 2025.
You can also use relative durations—for example:
docker logs --since 45m my-project
This will display all logs from the `my-project` container created in the last 45 minutes.
- View logs up to a specific time:
To check all logs generated before a certain time, run:
docker logs --until <timestamp> <container_id>
Replace `<timestamp>` with the specific date or time—for example:
docker logs --until 2025-03-20 my-project
This will display all logs from the `my-project` container created before March 20, 2025.
You can also use relative durations—for example:
docker logs --until 30m my-project
This will display all logs from the `my-project` container created more than 30 minutes ago.
- View real-time log monitoring:
To view log output in the console as it’s being generated in real-time, run:
docker logs --follow <container_id>
or
docker logs -f <container_id>
This feature is essential for live debugging.
- Combine options:
You can combine any of the options above to fine-tune your log output.
For example, the command below displays the last 100 log entries with timestamps for the `my-project` container:
docker logs -n 100 -t my-project
| Docker logs command options | Description |
|---|---|
| `--timestamps`, `-t` | Show timestamps with each log entry. |
| `--details` | Display additional log details. |
| `--tail`, `-n` | Specify the number of lines to show from the end of the logs. |
| `--since` | Display logs created after a specified timestamp or duration. |
| `--until` | Display logs created before a specified timestamp or duration. |
| `--follow`, `-f` | Stream log output to the console in real time. |
Filtering Docker logs
The most common patterns for filtering docker logs include the aforementioned tailing and following to trace recent events. To fine-tune your filtering even further, you can use Unix tools like grep and awk to search for specific text within the entries.
Here’s how to structure `docker logs` commands with grep:
docker logs <container_id> | grep pattern
- To show all entries containing the word “error”, run:
docker logs <container_id> | grep "error"
- To search for multiple patterns, use grep with the `-E` flag—for example, to find all potential sources of errors, you could run:
docker logs <container_id> | grep -E "error|warn|critical"
You can add the `-i` flag for case-insensitive matching. The `-v` flag inverts the match, showing lines that don’t contain the specified text.
This only scratches the surface of grepping docker logs, but hopefully gives you an idea of what’s possible.
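Beyond grep, awk can match on a whole field instead of a substring. The sample lines below are stand-ins for `docker logs` output (the timestamp-level-message format is an assumption); in practice you would pipe `docker logs <container_id>` into awk instead of printf:

```shell
# Keep only entries whose second whitespace-separated field (assumed
# here to be the log level) is exactly ERROR or WARN.
printf '%s\n' \
  '2025-03-20T10:00:01Z INFO request served' \
  '2025-03-20T10:00:02Z ERROR db timeout' \
  '2025-03-20T10:00:03Z WARN slow query' |
  awk '$2 == "ERROR" || $2 == "WARN"'
```

Unlike a plain grep for "error", this won’t accidentally match lines where the word merely appears in the message text.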
Advanced Docker logging techniques
We’ve covered the most common docker logs commands you’ll need in your arsenal from day to day, but there are other, more sophisticated techniques to consider when establishing and scaling your log management.
- Structured logging: You can design your applications to output logs in JSON format rather than plain text to make them machine-readable and easier to parse, filter, and analyze.
- Aggregation across services: In microservice architectures, you can implement correlation IDs that follow requests across different services so you can trace entire transaction flows through distributed systems.
- Log sampling: For high-volume applications, consider sampling logs rather than collecting every event to reduce storage requirements while still maintaining visibility into system behavior.
- External storage and analysis: For production environments, you can direct logs to a specialized platform like New Relic for advanced querying, visualization, and alerting capabilities beyond native Docker functionality.
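To make the structured-logging idea above concrete, here’s a minimal shell sketch. The `log` helper and its field names (`ts`, `level`, `msg`) are invented for illustration; a real application would use its language’s JSON logging library and simply write the lines to stdout for Docker to capture:

```shell
# Hypothetical helper that emits one JSON object per log line to stdout.
log() {
  printf '{"ts":"%s","level":"%s","msg":"%s"}\n' \
    "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$1" "$2"
}

log INFO "request served"
log ERROR "db timeout"
```

Because every line is valid JSON with consistent keys, downstream tools can filter on fields such as `level` rather than relying on fragile text matching.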
Configuring docker logging drivers
Docker supports various logging drivers that determine where container logs are sent and how they’re formatted. Here are the most commonly used drivers:
- JSON file (default):
docker run --log-driver=json-file nginx
- Syslog:
docker run --log-driver=syslog nginx
- Fluentd:
docker run --log-driver=fluentd nginx
- AWS CloudWatch:
docker run --log-driver=awslogs nginx
- Setting a default driver: To configure a default logging driver for all containers, modify the Docker `daemon.json` file as shown below:
{
  "log-driver": "syslog"
}
Replace syslog with your driver of choice. Note that `daemon.json` must be valid JSON, so it cannot contain comments or trailing commas.
Best practices for managing docker logs
Effective Docker logging requires a thoughtful approach that balances visibility and performance. Here are some best practices to consider implementing in your containerized environments:
- Always log to `stdout` and `stderr`: Ensure applications in containers write logs to standard output channels rather than internal files to prevent data loss when containers are removed or restarted.
- Structure your logs as JSON: Format log messages as JSON objects with consistent fields for easier parsing, filtering, and analysis.
- Secure sensitive information: Never log sensitive data like passwords, authentication tokens, or personal information. Implement masking, redaction, or encryption for any potentially sensitive fields.
- Use container orchestration logging features: If you’re using Kubernetes or Docker Swarm, leverage their built-in logging capabilities which are designed to handle containerized applications at scale.
- Monitor log volume and performance impact: Be aware that excessive logging can impact application performance. Consider adjusting verbosity levels based on environment (such as development vs. production) and implementing sampling for high-volume events.
Log rotation and retention strategies
Without proper management, log growth can spiral out of control and cause containers to consume excessive disk space, potentially crashing services. The simplest way to avoid this is to manage log storage with the `max-size` and `max-file` options available to Docker’s `json-file` logging driver. Here’s an example of how to implement these options:
{
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "3",
"labels": "production_status",
"env": "os,customer"
}
}
Next steps
For production systems, where reliability and performance are paramount, take your log management to the next level with New Relic. By routing Docker logs to New Relic, you gain unified visibility across your entire stack, empowering you with real-time analytics, customizable dashboards, and intelligent alerts to quickly identify patterns and resolve issues with ease.
Our AI-powered log management tools go beyond the basics, cutting through the noise to uncover actionable insights. They automatically correlate logs with metrics and traces, helping you reduce mean time to resolution and focus on what matters most.
Don’t let logging be an afterthought. Make it a cornerstone of your observability strategy with New Relic and ensure your systems are running smoothly, no matter the demand.
The views expressed on this blog are those of the author and do not necessarily reflect the views of New Relic. Any solutions offered by the author are environment-specific and not part of the commercial solutions or support offered by New Relic. Please join us exclusively at the Explorers Hub (discuss.newrelic.com) for questions and support related to this blog post. This blog may contain links to content on third-party sites. By providing such links, New Relic does not adopt, guarantee, approve or endorse the information, views or products available on such sites.