Working Around Kubernetes Sidecar Shutdowns

September 4, 2020
kubernetes devops sre

Over 2 years ago I wrote a quick Kubernetes controller to ensure that “sidecar” containers were shut down after the “main” container in a pod exited. The issue we were seeing was fairly straightforward: we had a container running in a pod to accomplish some application logic, as well as a number of “sidecar” containers to provide supporting functionality to the application. Examples of this are log forwarders, SQL proxies, networking proxies, or, in our case, metric collectors. We had a metric collection sidecar running alongside a CronJob pod, which prevented the CronJob from ever “succeeding”.

But why is this necessary in the first place? Why does it make sense for a sidecar container to stay alive after the main container has exited? In short, it doesn’t, but this is a long-standing issue in Kubernetes that we have to deal with. There has been an open proposal to address this issue for years; however, it stumbled into the much larger issue of “container lifecycle management”, and is currently on hold while #sig-node addresses some underlying node shutdown issues.

There is now a lot of lifecycle logic encapsulated in that KEP; my controller, however, focuses only on terminating sidecar containers after the main container exits, so that the pod can fully complete.

Its logic is fairly simple (see the sketch after this list):

  • You define a list of “sidecar containers” as an annotation in your deployable PodSpec
  • When a container within a monitored pod exits, the controller checks if all still-running containers are in the sidecar list
  • If they are, the controller sends a SIGTERM to each of those containers
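
To make that decision step concrete, here is a minimal sketch, assuming the sidecar names are declared in a comma-separated “sidecars” annotation (the annotation key and the pod built in main are invented for illustration; the real controller also has to watch pod events and actually deliver the SIGTERM, for example by exec’ing a kill inside each sidecar container):

```go
package main

import (
	"fmt"
	"strings"

	corev1 "k8s.io/api/core/v1"
)

// sidecarsToTerminate returns the still-running sidecar containers if, and
// only if, every container still running in the pod is a declared sidecar.
func sidecarsToTerminate(pod *corev1.Pod) []string {
	declared := map[string]bool{}
	for _, name := range strings.Split(pod.Annotations["sidecars"], ",") {
		if name = strings.TrimSpace(name); name != "" {
			declared[name] = true
		}
	}

	var running []string
	for _, cs := range pod.Status.ContainerStatuses {
		if cs.State.Running == nil {
			continue // this container has already exited
		}
		if !declared[cs.Name] {
			return nil // a non-sidecar is still running; leave the pod alone
		}
		running = append(running, cs.Name)
	}
	return running
}

func main() {
	// A pod whose "main" container has exited while a metrics sidecar lives on.
	pod := &corev1.Pod{}
	pod.Annotations = map[string]string{"sidecars": "metrics-collector"}
	pod.Status.ContainerStatuses = []corev1.ContainerStatus{
		{Name: "app", State: corev1.ContainerState{
			Terminated: &corev1.ContainerStateTerminated{ExitCode: 0}}},
		{Name: "metrics-collector", State: corev1.ContainerState{
			Running: &corev1.ContainerStateRunning{}}},
	}
	fmt.Println(sidecarsToTerminate(pod)) // [metrics-collector]
}
```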

By following this simple logic, we were able to move past the immediate issue we were seeing without much concern.

I will be the first to admit that the code in that repository is far from top-notch. In fact, I am embarrassed by it. The controller was written as a stop-gap solution while the referenced KEP was finalized and implemented. I never expected that we would be here 2 years later (4 years after the original issue was filed) still having to work around it. The initial project has been forked multiple times, improvements have been made, and I have done a terrible job of pulling improvements upstream and making the tool easy to use (again, based on the belief that it would not be necessary long-term). Lemonade’s fork (work by @igorshapiro) and Riskified’s fork (work by @nisan270390) look particularly promising. Now that I’m seeing how long this tool is likely to be useful, there are many improvements that I want to pull back into the initial project.

Much larger projects in the Kubernetes ecosystem, such as Istio, Linkerd, and even the Google Cloud SQL proxy, have used sidecars as their implementation method, and have all had to build in workarounds for this core issue.

Other members of the community have attempted to build tools to solve this in similar ways. One example is Karl Isenberg (@karlkfi), who attempted to standardize a tombstone approach that teams can build into their application images. While I am personally not a fan of this approach (I would rather the deployable images not have to worry about maintaining their own lifecycle), it is an option that has worked for them.
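
To illustrate the tombstone idea (this is only a sketch under my own assumptions, not Karl’s actual tool): the main container writes a “tombstone” file to a volume shared with the sidecars when it exits, and each sidecar runs a small watcher that shuts the sidecar down once that file appears. The /graveyard path and file name below are invented for the example; in a real setup the watcher would typically wrap the sidecar’s own process and forward it a signal rather than simply exiting itself.

```go
// Sidecar-side watcher: exit once the main container reports its death by
// writing a tombstone file to a shared emptyDir volume.
package main

import (
	"log"
	"os"
	"time"
)

const tombstone = "/graveyard/main.dead" // hypothetical shared-volume path

func main() {
	for {
		if _, err := os.Stat(tombstone); err == nil {
			log.Println("main container wrote its tombstone; shutting down")
			os.Exit(0)
		}
		time.Sleep(2 * time.Second)
	}
}
```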

If you’re interested in other alternatives that have been considered, I recommend taking a look at the Design Doc for the Sidecar Containers KEP.


The fact is that application and tool developers having to consider this situation at all breaks the premise of Kubernetes as a container orchestrator. When you have to start building your applications specifically to run within Kubernetes, Kubernetes can no longer be considered a “container orchestrator”; you have to start treating it as a unique deployment target. For some system-level tools (such as Istio, Envoy, and other network proxies) that might be okay, as well as for applications specifically wanting to use more advanced Kubernetes features, but it does not feel like an acceptable state of the world for user-facing, platform-agnostic web applications or APIs.


Hopefully either the controller approach or the tombstone approach can help bridge the gap until Kubernetes is able to address these issues natively.