The person making the manual changes needs admin access to both the cluster and the storage solution being used.
First, make the desired changes to the disk outside the cluster (resize the disk in GKE, create a new disk from a snapshot or clone, etc.).
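On GKE that step might look something like the sketch below; the disk names, snapshot name, and zone are all placeholders, and it assumes you have the gcloud CLI set up:

```shell
# Resize an existing GCE persistent disk in place
gcloud compute disks resize my-gitlab-data --size=200GB --zone=us-central1-a

# Or create a new disk from a snapshot of the old one
gcloud compute disks snapshot my-gitlab-data --snapshot-names=my-gitlab-data-snap --zone=us-central1-a
gcloud compute disks create my-gitlab-data-new --source-snapshot=my-gitlab-data-snap --zone=us-central1-a
```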
Next, use kubectl edit on the existing PersistentVolumes to change their reclaim policy to Retain. This ensures the system does not delete your existing volumes during PVC deletion.
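A kubectl patch one-liner works here too (the PV name is a placeholder):

```shell
# Mark the PV as Retain so deleting its PVC does not delete the underlying disk
kubectl patch pv my-gitlab-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

# Verify the change took effect
kubectl get pv my-gitlab-pv -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'
```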
After this, the specifics diverge a bit depending on whether you are using a new disk or re-using the same one. But from a high level (there's a rough end-to-end sketch after this list):
- Ensure you have a PersistentVolume whose config reflects your changes and points at your disk.
- Re-create the PVC with the new options.
- Restart your pods so they mount using the new options.
- If you are working with a StatefulSet, delete the StatefulSet object but keep its pods.
- Run helm upgrade with persistence settings that already match what you manually put in the PVCs (this will re-create the StatefulSet object).
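A rough end-to-end sketch of that flow, assuming a StatefulSet named my-gitlab with a volume claim template named data (so the PVC is data-my-gitlab-0), a PV named my-gitlab-pv, and a chart value persistence.size; every name and the values key are placeholders for your own setup. Note that older kubectl spells the orphan delete as --cascade=false:

```shell
# 1. Clear the old claim from the (Retained) PV so it can be re-bound,
#    and make sure its spec points at the right disk
kubectl patch pv my-gitlab-pv -p '{"spec":{"claimRef":null}}'

# 2. Re-create the PVC with the new options, bound explicitly to the existing PV
#    (the delete may not finalize until no pod is using the claim)
kubectl delete pvc data-my-gitlab-0
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-my-gitlab-0
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: standard   # must match the PV's storageClassName
  volumeName: my-gitlab-pv
  resources:
    requests:
      storage: 200Gi
EOF

# 3. Restart the pod so it mounts with the new options
kubectl delete pod my-gitlab-0

# 4. Delete the StatefulSet object but leave its pods running
#    (--cascade=orphan on current kubectl; --cascade=false on older releases)
kubectl delete statefulset my-gitlab --cascade=orphan

# 5. Re-create the StatefulSet via helm with matching persistence settings
helm upgrade my-gitlab gitlab/gitlab --set persistence.size=200Gi
```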
Yikes. Why do you have to restart the pods and/or delete the StatefulSet? Wouldn't that happen anyway since you are doing a helm upgrade? Just curious.
@joshlambert the StatefulSet's PVC template is immutable, so you can't change the storage class or size, or pretty much anything, without deleting the whole StatefulSet first.
And pods don't take on the new volume information until they are restarted (even using the upcoming resize functionality in 1.11), though you may not always need them to (with just a size change, for example).
And yes, in this case the helm upgrade would most likely restart the pods for you. The earlier restart was more just so you know it's working before possibly killing your StatefulSet.
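For reference, once that resize functionality is available (and the storage class has allowVolumeExpansion: true), a size-only change could be just a PVC patch; the PVC and pod names below are placeholders:

```shell
# Grow the claim in place; requires a storage class with allowVolumeExpansion: true
kubectl patch pvc data-my-gitlab-0 -p '{"spec":{"resources":{"requests":{"storage":"200Gi"}}}}'

# Some volume types still need a pod restart to finish the filesystem resize
kubectl delete pod my-gitlab-0
```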