Stream errors when using kubectl rollout status deployment
Deployed the agent today on a test cluster (GitLab SaaS). K8s versions:
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.16", GitCommit:"e37e4ab4cc8dcda84f1344dda47a97bb1927d074", GitTreeState:"clean", BuildDate:"2021-10-27T16:25:59Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.15-eks-9c63c4", GitCommit:"9c63c4037a56f9cad887ee76d55142abd4155179", GitTreeState:"clean", BuildDate:"2021-10-20T00:21:03Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
When doing deployments, "kubectl rollout status deployment" throws the following errors:
Waiting for deployment "scion" rollout to finish: 1 old replicas are pending termination...
W0215 18:26:23.252956 137 reflector.go:424] k8s.io/client-go/tools/watch/informerwatcher.go:146: watch of *unstructured.Unstructured ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 5; INTERNAL_ERROR") has prevented the request from succeeding
Waiting for deployment "scion" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "scion" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "scion" rollout to finish: 1 old replicas are pending termination...
deployment "scion" successfully rolled out
Deployment was successful.
But the deployment is successful. From the agent logs:
{"level":"debug","time":"2022-02-15T18:26:23.241Z","msg":"Canceled connection","mod_name":"reverse_tunnel","error":"rpc error: code = Canceled desc = context canceled","correlation_id":"01FVZA183GJYEAYD1N86PAZWBN"}
{"level":"debug","time":"2022-02-15T18:26:24.745Z","msg":"Handled a connection successfully","mod_name":"reverse_tunnel"}
{"level":"debug","time":"2022-02-15T18:26:30.530Z","msg":"Canceled connection","mod_name":"reverse_tunnel","error":"rpc error: code = Canceled desc = context canceled","correlation_id":"01FVZA5EWT1ZN6WVEM16QMRBHC"}
I don't get the errors if I switch back to the Kubernetes integration with cluster certificates. Thanks.
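In case it helps others hitting this: since the watch stream error is transient and the rollout does complete, one stopgap in CI is to wrap the rollout check in a retry. This is just a sketch, not anything confirmed by GitLab; the attempt count, sleep, and the `--timeout` flag value are arbitrary choices on my part:

```shell
#!/bin/sh
# Hypothetical retry wrapper: re-runs a command up to a maximum
# number of attempts, sleeping briefly between failures.
# Usage: retry <max_attempts> <command...>
retry() {
  max=$1; shift
  attempt=1
  while ! "$@"; do
    if [ "$attempt" -ge "$max" ]; then
      echo "command failed after $max attempts" >&2
      return 1
    fi
    attempt=$((attempt + 1))
    sleep 2
  done
}

# Example with the "scion" deployment from this report:
# retry 3 kubectl rollout status deployment/scion --timeout=120s
```

This doesn't address the underlying reverse_tunnel disconnects, it just keeps the pipeline green while the watch reconnects.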