In traditional server environments, application logs are written to a file such as /var/log/app.log.
In Kubernetes, however, you must collect logs from many short-lived pods (applications) across multiple nodes in the cluster, which makes a single static log file impractical.
Ways to collect logs in Kubernetes
Basic logging using stdout and stderr
The default Kubernetes logging framework captures the standard output (stdout) and standard error output (stderr) from each container on the node to a log file.
You can see the logs of a particular container by running the following commands:
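For example, with kubectl logs (pod and container names below are placeholders):

```shell
# Print logs from a pod (single-container case)
kubectl logs <pod-name>

# Print logs from a specific container in a multi-container pod
kubectl logs <pod-name> -c <container-name>

# Print logs from the previous, terminated instance of the container
kubectl logs <pod-name> --previous

# Stream logs as they are written
kubectl logs -f <pod-name>
```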
By default, if a container restarts, the kubelet keeps one terminated container with its logs.
If a pod is evicted from the node, all corresponding containers are also evicted along with their logs.
Cluster-level logging using node logging agent
With a cluster-level logging setup, you can access logs even after a pod is deleted.
The logging agent is commonly a container that exposes logs or pushes logs to a backend.
Because the logging agent must run on every node, run the agent as a DaemonSet.
Managing logs on different platforms
Google Kubernetes Engine (GKE) cluster
For container and system logs, by default GKE deploys a per-node logging agent fluent-bit that reads container logs, adds helpful metadata, and then stores them in Cloud Logging. The logging agent checks for container logs in the following sources:
Standard output and standard error logs from containerized processes
kubelet and container runtime logs
Logs for system components, such as VM startup scripts
For events, GKE uses a deployment in the kube-system namespace which automatically collects events and sends them to Logging. For more details, see Managing GKE logs.
Use kubectl get pods -n kube-system to ensure the fluent-bit pods are up and running.
This command fetches the textPayload field from all logs in the given timestamp range for the container, pod, and cluster specified in the command, and writes that information to the gcloudlogging.log file.
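A command matching that description might look like the following sketch; the cluster name, pod name, and timestamps are placeholders, and the filter assumes GKE's k8s_container resource labels:

```shell
# Hypothetical example: replace the cluster, pod, and timestamp values
gcloud logging read \
  'resource.type="k8s_container"
   resource.labels.cluster_name="my-cluster"
   resource.labels.pod_name="my-pod"
   timestamp>="2024-01-01T00:00:00Z"
   timestamp<="2024-01-02T00:00:00Z"' \
  --format='value(textPayload)' > gcloudlogging.log
```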
Amazon EKS cluster
Amazon EKS does not include a per-node logging agent. You can install an agent such as the fluent-bit DaemonSet to aggregate Kubernetes logs and send them to AWS CloudWatch Logs.
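One common option is the aws-for-fluent-bit Helm chart from the eks-charts repository. A minimal install might look like this sketch; the release name and region value are assumptions for illustration:

```shell
# Add the AWS eks-charts repository and install fluent-bit as a DaemonSet
helm repo add eks https://aws.github.io/eks-charts
helm upgrade --install aws-for-fluent-bit eks/aws-for-fluent-bit \
  --namespace kube-system \
  --set cloudWatch.region=us-east-1   # region is a placeholder
```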
Download the values file from grafana/loki-stack and customize it for your use case. In the following example, the values file deploys only Promtail, Loki, and Grafana.
For Loki, this configuration stores logs on a running Kubernetes cluster with 50 GB of persistent storage. The disk is provisioned automatically through the available CSI driver.
Depending on your Kubernetes setup or managed Kubernetes vendor, you may need a different StorageClass. Run kubectl get storageclass to list the available storage classes in your cluster.
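With the values file prepared, the stack can be installed from the grafana Helm repository; the release name, namespace, and values filename below are assumptions:

```shell
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update

# Install Promtail, Loki, and Grafana using the customized values file
helm upgrade --install loki grafana/loki-stack \
  --namespace loki --create-namespace \
  -f values.yaml
```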
Find the Grafana password. By default, Grafana uses basic authentication. The username is admin; you can get the password from the loki-grafana secret in the loki namespace:
```shell
kubectl get secret loki-grafana -n loki \
  -o template \
  --template '{{ index .data "admin-password" }}' | base64 -d; echo
```
Port-Forward from localhost to Grafana
After you have the username and password, port-forward with kubectl port-forward and access Grafana from your local machine on port 8080:
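For example, forwarding local port 8080 to the Grafana service; the service name loki-grafana matches the loki-stack chart defaults and may differ if your release is named differently:

```shell
# Forward localhost:8080 to the Grafana service port
kubectl port-forward -n loki svc/loki-grafana 8080:80
```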
To see the Grafana dashboard, open localhost:8080. Use admin as the username and the password you fetched from the secret.
Use LogQL queries to explore the logs on the Grafana dashboard. For details, see the official LogQL documentation.
Use the logcli command to fetch logs from the command line.
Port-Forward from localhost to Loki pod
Port-forward with kubectl port-forward so logcli can access Loki from your local machine on port 8080:
```shell
kubectl get pod -n loki -l app=loki
NAME     READY   STATUS    RESTARTS   AGE
loki-0   1/1     Running   0          5d20h

kubectl port-forward -n loki loki-0 8080:3100
Forwarding from 127.0.0.1:8080 -> 3100
Forwarding from [::1]:8080 -> 3100
```
For logcli to access Loki, export the Loki address and port number:
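With the port-forward above in place, logcli reads the Loki address from the LOKI_ADDR environment variable; the sample query below is illustrative:

```shell
export LOKI_ADDR=http://localhost:8080

# List available labels, then run a sample LogQL query
logcli labels
logcli query '{namespace="kube-system"}' --limit 20
```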