Enforce Cloudflare Authenticated Origin Pulls
Production Change
Change Summary
Note: This issue is marked confidential until the change is completed, after which it can be made public.
Currently it is possible to bypass Cloudflare if you know the origin IP of the GCP load balancer ingressing GitLab.com (https://gitlab.com/gitlab-com/gl-infra/production/-/issues/2803 and gitlab-org/gitlab#368397 (closed)).
(Note: This does not allow spoofing the X-Forwarded-For header, as HAProxy only honors it when the request originates from Cloudflare IP ranges.)
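The X-Forwarded-For safeguard mentioned above could look like the following HAProxy fragment. This is a hedged sketch only: the ACL name and the IP-list file path are assumptions for illustration, not the actual production configuration.

```
# Sketch: trust X-Forwarded-For solely from Cloudflare source ranges.
# ACL name and file path are illustrative assumptions.
acl from_cloudflare src -f /etc/haproxy/cloudflare_ips.lst
http-request del-header X-Forwarded-For unless from_cloudflare
```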
This has been a known issue for a long time. It was fixed on gstg during the initial rollout of Cloudflare, but was not done in production at the time because internal resources needed to bypass Cloudflare.
Now that production services no longer use the main ingress port of HAProxy directly, but a separate ingress port, we can lock down port 443 on HAProxy to require mTLS between Cloudflare and ourselves.
Cloudflare already presents its mTLS client certificate to our origin; we only need to make HAProxy enforce it. That enforcement is what this change will accomplish.
This target configuration is exactly how gstg has been running for about two years. Still, because of the potential for bugs in the rollout, this is a C2 change.
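The enforcement itself amounts to adding client-certificate verification to the HAProxy bind line. A hypothetical sketch follows; the certificate and CA file paths are illustrative assumptions, and the real change lives in the linked chef-repo MR:

```
# Sketch of the HAProxy frontend change; paths are illustrative.
frontend https
    # Before: terminate TLS without verifying a client certificate
    # bind :443 ssl crt /etc/haproxy/ssl/gitlab.pem
    # After: require Cloudflare's Authenticated Origin Pulls client cert
    bind :443 ssl crt /etc/haproxy/ssl/gitlab.pem verify required ca-file /etc/haproxy/ssl/cloudflare-origin-pull-ca.pem
```

With `verify required`, HAProxy rejects any TLS handshake on port 443 that does not present a client certificate signed by the configured CA, which is what closes the direct-to-origin bypass.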
Change Details
- Services Impacted - ServiceAPI, ServiceWeb, ServiceGit (HTTPS), ServiceCloudflare, ServiceWebsockets
- Change Technician - @T4cC0re
- Change Reviewer - @igorwwwwwwwwwwwwwwwwwwww
- Time tracking - 60 minutes
- Downtime Component - No planned downtime
Detailed steps for the change
Pre-change Steps - validation steps before the change
- Validate there is no traffic destined for int.gprd.gitlab.net arriving on port 443 of the load balancer (ssh fe-01-lb-gprd, then execute the command below):
date --utc ; sudo tcpdump -Q in -Ai any -s 1500 "tcp port 443 and (tcp[((tcp[12] & 0xf0) >>2)] = 0x16) && (tcp[((tcp[12] & 0xf0) >>2)+5] = 0x01)" | awk 'match($0,/[a-z][a-z0-9\-]{2,}(\.[a-z][a-z0-9\-]+)+\.?(:[0-9]+)?/) {print substr($0,RSTART,RLENGTH)}' | grep gitlab.net | sed "s/^/`date` | /"
Leave this running for 30 minutes; it should NOT show anything related to int.gprd.gitlab.net.
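For reviewers unfamiliar with raw BPF offsets, the filter in the tcpdump command above matches TLS ClientHello packets. This annotation is an explanation added here, not part of the original runbook:

```shell
# (tcp[12] & 0xf0) >> 2  -> TCP data offset, i.e. header length in bytes
#                           (the high nibble counts 32-bit words; <<4 then
#                            >>2 is the same as multiplying by 4)
# tcp[<offset>]   = 0x16 -> first payload byte: TLS "handshake" record type
# tcp[<offset>+5] = 0x01 -> handshake message type: ClientHello
# Example: a data-offset nibble of 0x8 means an 8-word, 32-byte TCP header:
printf '%d\n' $(( (0x80 & 0xf0) >> 2 ))   # prints 32
```

The awk/grep pipeline then pulls any gitlab.net hostname (the SNI) out of the ASCII dump of those ClientHellos.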
Change Steps - steps to take to execute the change
Estimated Time to Complete (mins) - 60
- Set label change::in-progress: /label ~change::in-progress
- Disable chef on all (fe-XX-lb-gprd) HAProxy nodes.
- Remove the Draft: state from https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/merge_requests/2257 and merge it.
- Run chef on fe-01-lb-gprd.
- Monitor error rates for 10 minutes; if there are any noticeable problems, roll back the MR.
- Run chef on the remaining fleet and re-enable chef:
  - ssh fe-XX-lb-gprd sudo chef-client-enable
  - ssh fe-XX-lb-gprd sudo -i chef-client
  - Wait a few seconds in between each node, to ensure there is enough splay between HAProxy reloads across machines.
- Set label change::complete: /label ~change::complete
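The fleet roll-out in the steps above can be sketched as a small shell loop. The node names and the 30-second pause are illustrative assumptions, not from the runbook:

```shell
# Re-enable chef and converge each remaining load balancer in turn,
# pausing between nodes so HAProxy reloads across machines do not overlap.
# Node list and pause length are assumptions for illustration.
rollout() {
  for node in "$@"; do
    ssh "$node" sudo chef-client-enable
    ssh "$node" sudo -i chef-client
    sleep 30   # splay between HAProxy reloads
  done
}
# Example usage: rollout fe-02-lb-gprd fe-03-lb-gprd
```

Running the nodes serially rather than in parallel keeps at least most of the fleet serving traffic with a known-good config at any point during the roll-out.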
Rollback
Rollback steps - steps to be taken in the event of a need to rollback this change
Estimated Time to Complete (mins) - 20
- Turn off HAProxy on fe-01-lb-gprd forcefully (ssh fe-01-lb-gprd, then sudo systemctl disable haproxy and sudo systemctl kill haproxy). This prevents the node from handling traffic, and the GCP LB will direct traffic to the other nodes.
- Roll back https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/merge_requests/2257.
- Run chef on fe-01-lb-gprd (this restores the configuration, as well as re-enabling and restarting HAProxy).
- Re-enable chef on all (fe-XX-lb-gprd) HAProxy nodes.
- Set label change::aborted: /label ~change::aborted
Monitoring
Key metrics to observe
- Metric: Origin unreachable
- Location: https://dashboards.gitlab.net/d/sPqgMv9Zk/cloudflare-traffic-overview?orgId=1&refresh=5m&viewPanel=12
- What changes to this metric should prompt a rollback: intermittent unreachable requests are fine, but if there is a consistent non-zero value for more than 5 minutes, roll back.
Change Reviewer checklist
Check if the following applies:

- The scheduled day and time of execution of the change is appropriate.
- The change plan is technically accurate.
- The change plan includes estimated timing values based on previous testing.
- The change plan includes a viable rollback plan.
- The specified metrics/monitoring dashboards provide sufficient visibility for the change.

Check if the following applies:

- The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
- The change plan includes success measures for all steps/milestones during the execution.
- The change adequately minimizes risk within the environment/service.
- The performance implications of executing the change are well-understood and documented.
- The specified metrics/monitoring dashboards provide sufficient visibility for the change.
  - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
- The change has a primary and secondary SRE with knowledge of the details available during the change window.
- The labels blocks deployments and/or blocks feature-flags are applied as necessary.
Change Technician checklist
Check if all items below are complete:

- The change plan is technically accurate.
- This Change Issue is linked to the appropriate Issue and/or Epic.
- Change has been tested in staging and results noted in a comment on this issue.
- A dry-run has been conducted and results noted in a comment on this issue.
- The change execution window respects the Production Change Lock periods.
- For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
- For C1 and C2 change issues, the SRE on-call has been informed prior to change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
- For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
- For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue.
- Release managers have been informed (if needed! Cases include DB changes) prior to the change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
- There are currently no active incidents that are severity::1 or severity::2.
- If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.