Avoiding Log Loss from Short-Lived Job Containers

May 25, 2020
kubernetes logging

In multiple situations and clusters I’ve encountered the problem of logs being lost from short-lived containers; i.e., a container that is dynamically spun up to complete a single job and then exits. These containers often exist for only a couple of seconds, which makes it challenging to collect and forward their logs reliably.


There are two main log-collection methods in Kubernetes: (1) a log-forwarding sidecar running alongside the application container, or (2) a node-based collector agent watching the log locations on the host machine (typically under /var/log/containers).

Log-forwarding Sidecar

Due to outstanding issues around the lifecycle of sidecar containers, this is not a viable solution for short-lived containers. There is no guarantee that the log-forwarding sidecar will start in time to forward any logs, and no guarantee it will live long enough to forward shutdown logs. In fact, without explicit checks and logic built into your application/deployment, there is no guarantee that the sidecar container will run at all during the life of your job.

This approach can also be prohibitively resource intensive in large clusters, as every container requires its own log forwarder.

The “explicit checks” mentioned above would involve having your application explicitly check the status of the log forwarder, not start work until it is available, and not fully shut down until the log forwarder has forwarded all logs.
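
For concreteness, here is a rough sketch of what that kind of coordination can look like in a Job spec. The images, the forwarder’s /ready endpoint on port 2020, and the trailing sleep are all assumptions for illustration, not a standard API:

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: short-lived-job
    spec:
      template:
        spec:
          restartPolicy: Never
          volumes:
          - name: logs
            emptyDir: {}                            # shared between app and sidecar
          containers:
          - name: app
            image: example.com/my-job:1.0           # placeholder image
            volumeMounts:
            - name: logs
              mountPath: /var/log/job
            command: ["/bin/sh", "-c"]
            args:
            - |
              # Wait for the sidecar before doing any work (endpoint is hypothetical).
              until wget -q -O /dev/null http://localhost:2020/ready; do sleep 1; done
              /run-job >> /var/log/job/job.log 2>&1
              # Give the sidecar a window to flush before exiting.
              sleep 10
          - name: log-forwarder
            image: example.com/log-forwarder:1.0    # placeholder; tails /var/log/job
            volumeMounts:
            - name: logs
              mountPath: /var/log/job
              readOnly: true

Note that even with this in place, a plain sidecar keeps the pod Running after the job container exits, so the Job never completes on its own without additional shutdown signalling, which is exactly the lifecycle gap described above.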

This is a clear break in the responsibility boundaries of your application, and it re-implements a set of guarantees that Kubernetes itself should be addressing in the future.

Node-based Log Forwarder

This is the most common implementation. It is much less resource intensive, and it bypasses most of the lifecycle-coordination issues present in the sidecar approach.
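
A minimal sketch of that pattern is below; the collector image is a placeholder, but Fluent Bit, Fluentd, and Filebeat all publish DaemonSet manifests of roughly this shape:

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: log-collector
      namespace: logging
    spec:
      selector:
        matchLabels:
          app: log-collector
      template:
        metadata:
          labels:
            app: log-collector
        spec:
          containers:
          - name: collector
            image: example.com/log-collector:1.0   # placeholder image
            volumeMounts:
            - name: varlog
              mountPath: /var/log
              readOnly: true
          volumes:
          - name: varlog
            hostPath:
              path: /var/log                       # /var/log/pods and /var/log/containers live here

Depending on the container runtime, the files under /var/log/containers may be symlinks into a runtime-specific directory (for Docker, /var/lib/docker/containers), which the collector also needs mounted in order to follow them.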

Note that with this solution it is possible for a container with a very high log volume to “starve out” co-located pods, causing other logs from the node to be delayed or dropped. That concern is out of scope here, but it is worth monitoring for.

The issue that arises with a node-based log forwarder and short-lived containers is that the container’s log files can be removed from the node before they are identified and forwarded. This typically comes down to the performance of your log forwarder: new log files are not identified and opened before the kubelet cleans them up. For example, if your forwarding agent checks for new files every 15 seconds and your container only lives for 10 seconds, the logs may be gone before your forwarder ever opens the file. Even if your forwarder acts on file-system events for new files, it may be throttled by overall throughput and still not get to the file before it is cleaned up.
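
If your forwarder polls for new files, that interval is usually configurable. As one illustration, Fluent Bit’s tail input has a Refresh_Interval setting (Fluentd’s in_tail has an equivalent refresh_interval); the path and tag here are just placeholders:

    # Scan for new files matching Path more frequently than the 60s default,
    # shrinking the window in which a short-lived container's log file can
    # appear and be deleted between scans.
    [INPUT]
        Name              tail
        Path              /var/log/containers/*.log
        Tag               kube.*
        Refresh_Interval  5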

The source of both of these problems is the default kubelet garbage-collection configuration, which sets minimum-container-ttl-duration to 0 (every finished container is immediately eligible for garbage collection). I personally think immediate cleanup is a very poor default (60s would be much more reasonable). At 0, your log collector is in a race with the garbage collector.

If you are having log-forwarding issues, your best first step is to give yourself a bit of buffer room here. By increasing this value, you give your log forwarder extra time to pick up the log files of these short-lived containers before they are cleared from the node.
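
How you set the flag depends on how your kubelet is launched. On kubeadm-provisioned nodes, for example, extra kubelet flags are commonly supplied via KUBELET_EXTRA_ARGS; the 60s value is just the suggestion from above:

    # /etc/default/kubelet (DEB-based distros) or /etc/sysconfig/kubelet (RPM-based)
    # Keep finished containers around for at least 60s before they become eligible
    # for garbage collection, giving the log forwarder time to pick up their files.
    KUBELET_EXTRA_ARGS=--minimum-container-ttl-duration=60s

Managed platforms and other provisioning tools expose kubelet flags through their own node-configuration mechanisms, so check how your nodes are bootstrapped; the kubelet also needs to be restarted for the new value to take effect.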


Hopefully this works out for you, or was at least helpful. If you have any questions, don’t hesitate to shoot me an email, or follow me on Twitter @nrmitchi.