GitLab Runner with Kubernetes executor - job pod assumes the worker node role instead of the pod's IAM role
I have created a GitLab runner with the Kubernetes executor using the Helm chart and the values.yaml provided by GitLab. Whenever I run a pipeline, the job runs in a newly created job pod. This pod has an IAM role assigned for specific operations, but instead it assumes the role of the worker node it is running on: `aws sts get-caller-identity` returns the worker node role rather than the role attached to the pod. I have updated the security context as well, but the worker node role is still assumed. I have also created the IAM role properly and attached the OIDC provider trust relationship to it. Please find attached the values.yaml and the result of the `aws sts get-caller-identity` call, which shows the worker node role being assumed. I have tried multiple suggestions from blog posts but nothing works. Inputs are highly appreciated, as this has become a blocker for me and there is not much documentation on it.

Attachments: gitlab-values.rtf, WorkerNode-role.rtf
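For context, this is roughly the shape of configuration I would expect to need for IRSA with this chart. Key names vary across gitlab-runner chart versions, and the role ARN, service account name, and account ID below are placeholders, so treat this as a sketch rather than my exact values.yaml. The point it illustrates: the runner manager pod and the job pods can run under *different* service accounts, and the `eks.amazonaws.com/role-arn` annotation must be on the service account that the job pods actually use.

```yaml
# Sketch only -- verify key names against your gitlab-runner chart version.
# Role ARN, account ID, and service account name are placeholders.

# Service account for the runner *manager* pod:
serviceAccount:
  create: true
  name: gitlab-runner
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/gitlab-runner-job-role

runners:
  config: |
    [[runners]]
      [runners.kubernetes]
        # The *job* pods run under this service account -- it must carry the
        # IRSA annotation too, or the jobs fall back to the node role.
        service_account = "gitlab-runner"
        [runners.kubernetes.pod_security_context]
          # Non-root containers may need fs_group to read the projected
          # web identity token file mounted by the EKS webhook.
          fs_group = 65534
```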
Expected Behaviour - The running job pod should assume the role attached to it instead of the worker node role.
Actual Behaviour - The worker node role is assumed, and the job therefore fails with a 403 Forbidden error when accessing the S3 bucket.
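One way to confirm which identity a job pod actually picked up is to compare the `Arn` from `aws sts get-caller-identity` against the role ARN that the EKS pod-identity webhook injects as `AWS_ROLE_ARN` (if that variable is missing inside the pod, the SDK falls back to the node's instance profile). The snippet below simulates this check with sample values; the role names and account ID are hypothetical:

```shell
# Value the EKS webhook would inject when IRSA is wired up (hypothetical role):
AWS_ROLE_ARN="arn:aws:iam::123456789012:role/gitlab-runner-job-role"

# Sample `aws sts get-caller-identity` output showing the node role in use:
CALLER_JSON='{"UserId":"AROAEXAMPLE:i-0abc","Account":"123456789012","Arn":"arn:aws:sts::123456789012:assumed-role/eks-worker-node-role/i-0abc"}'

# Extract the Arn field and the bare name of the expected role.
CALLER_ARN=$(printf '%s' "$CALLER_JSON" | sed -n 's/.*"Arn": *"\([^"]*\)".*/\1/p')
EXPECTED_ROLE=$(basename "$AWS_ROLE_ARN")

# Report whether the pod is using its own role or fell back to the node role.
case "$CALLER_ARN" in
  *"/$EXPECTED_ROLE/"*) echo "pod assumed its own role" ;;
  *)                    echo "fallback: node role in use: $CALLER_ARN" ;;
esac
```

Running the same extraction inside a real job pod (with the actual `get-caller-identity` output) makes it easy to see at a glance whether the token-based role or the instance profile won.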