Find a way to make configuration changes cause a pod restart
Overview
As seen in #36 (closed), we added the annotations recommended in the Helm and Kubernetes documentation; however, these checksums don't function the way we want because of when and how they are generated. We'll need to derive a method to ensure that all pods using a configuration item (ConfigMap or Secret) are restarted when that configuration item changes.
Technical problem
We currently follow the recommendations of the master branch of Helm to attempt to restart Pods that have a change requiring a restart. The problem is that the checksum annotations are created from the raw content of the template, not from its rendered output. We end up with a checksum that never changes, because the raw template is not altered; only the values used to render it change. I don't immediately see a way to alter this behavior to create the checksums from the rendered output without a high likelihood of hitting circular behavior in the templating engine.
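For reference, the recommended approach takes the shape of a checksum annotation on the Deployment's pod template, roughly like the sketch below. The expression shown is the one from Helm's tips-and-tricks documentation and the file name is illustrative; it may not match exactly what our templates do.

```yaml
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        # The intent: when this annotation's value changes, the pod template
        # changes, and Kubernetes performs a rolling update of the Deployment.
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
```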
Also, K8S essentially considers Secrets and ConfigMaps immutable.
Possible Solutions
When we create the ConfigMaps via templates, include some sort of versioning in the name so that we can reliably cause the Deployment to restart: we'd have created a new ConfigMap and configured the Deployment to make use of it, thereby altering the Volume portion of the Deployment's specification, which should cause K8S to reload the pods. A secondary concern is ensuring that superseded ConfigMaps are properly cleaned up as they are replaced.
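A minimal sketch of the versioned-name idea, assuming the ConfigMap and the Deployment are rendered from the same template file so they can share the computed name. The `appConfig` values key, object names, and image are hypothetical; a real chart would probably compute the name in a shared helper in `_helpers.tpl`.

```yaml
{{- /* Hash the rendered config data and embed a short form of it in the name. */ -}}
{{- $configData := toYaml .Values.appConfig -}}
{{- $configName := printf "%s-config-%s" .Release.Name (sha256sum $configData | trunc 8) -}}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ $configName }}
data:
  app.yaml: |
{{ $configData | indent 4 }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-app
spec:
  selector:
    matchLabels:
      app: {{ .Release.Name }}-app
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-app
    spec:
      containers:
        - name: app
          image: example/app:latest
          volumeMounts:
            - name: config
              mountPath: /etc/app
      volumes:
        - name: config
          configMap:
            # Changing values produces a new name here, which alters the pod
            # spec and triggers a rolling update of the Deployment.
            name: {{ $configName }}
```

ConfigMaps with superseded names would accumulate across upgrades, which is the cleanup concern noted above.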
It may be possible to get the JSON form of .Values in Helm and perform checksums on that; only testing and investigation will show whether this is viable. This approach also runs up against the immutability consideration from above.
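A rough sketch of that approach, assuming the whole of .Values is hashed (the annotation key and its placement are hypothetical):

```yaml
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        # Serialize the chart values to JSON and hash the result; any change
        # to the values then alters the pod template and triggers a rolling update.
        checksum/values: {{ .Values | toJson | sha256sum }}
```

Hashing all of .Values would also restart pods for unrelated value changes, so a workable version would probably hash only the subset of values that feed the relevant ConfigMap or Secret.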
Further issues
Neither of the above deals with changes to Secrets. A real-world example would be password and certificate rotation.
cc @marin @pcarranza