HAProxy processes linger for long periods of time after reloads

We reload haproxy from time to time, whether for a configuration change or because a node is having issues. Because we hold on to existing connections to limit errors for clients, each reload can leave an old process lingering until those connections finish. This has led to some interesting findings: https://gitlab.com/gitlab-org/release/framework/issues/365. In the linked example, a process survived for over 24 hours. This causes problems during deploys: we rotate server states, and when multiple haproxy processes are giving us feedback, we end up performing inconsistent actions on that node. There is a potential for a given backend to be drained from only 1 of the n+1 running haproxy processes.
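As a rough way to spot this on a node, here is a minimal sketch (assuming a Linux host with /proc; the 24-hour threshold is illustrative only) that lists every running haproxy process and how long it has been alive:

```python
#!/usr/bin/env python3
"""Sketch: list running haproxy processes and how long each has been alive."""
import os
import time

CLK_TCK = os.sysconf("SC_CLK_TCK")    # clock ticks per second
LINGER_THRESHOLD = 24 * 3600          # flag processes older than 24h (arbitrary)


def boot_time() -> float:
    """System boot time in seconds since the epoch, from /proc/stat."""
    with open("/proc/stat") as f:
        for line in f:
            if line.startswith("btime"):
                return float(line.split()[1])
    raise RuntimeError("btime not found in /proc/stat")


def haproxy_processes():
    """Yield (pid, age_seconds) for every process whose comm is 'haproxy'."""
    boot = boot_time()
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue
        try:
            with open(f"/proc/{entry}/comm") as f:
                if f.read().strip() != "haproxy":
                    continue
            with open(f"/proc/{entry}/stat") as f:
                # Field 22 (1-indexed) is starttime in clock ticks since boot;
                # split after the closing paren so spaces in comm don't shift fields.
                fields = f.read().rpartition(")")[2].split()
                starttime = int(fields[19]) / CLK_TCK
        except (FileNotFoundError, ProcessLookupError):
            continue  # process exited while we were reading it
        yield int(entry), time.time() - (boot + starttime)


if __name__ == "__main__":
    for pid, age in sorted(haproxy_processes(), key=lambda p: -p[1]):
        flag = "  <-- lingering?" if age > LINGER_THRESHOLD else ""
        print(f"pid {pid:>7}  up {age / 3600:6.1f}h{flag}")
```

Run on a node shortly after a reload, this should show any old workers still draining connections alongside the freshly started process.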

From the HAProxy management guide, https://cbonte.github.io/haproxy-dconv/1.8/management.html#9.2:

> It is important to understand that when multiple haproxy processes are started on the same sockets, any process may pick up the request and will output its own stats.
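A quick sketch of how to observe this: repeatedly send `show info` over the stats socket and print the `Pid` line of whichever process answered. The socket path below is an assumption and must match the `stats socket` line in our haproxy.cfg:

```python
#!/usr/bin/env python3
"""Sketch: ask the HAProxy stats socket which process answers."""
import socket

SOCKET_PATH = "/run/haproxy/admin.sock"  # assumption: adjust to the configured stats socket


def show_info_pid() -> str:
    """Send 'show info' and return the Pid line from whichever process picked it up."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(SOCKET_PATH)
        sock.sendall(b"show info\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    for line in b"".join(chunks).decode().splitlines():
        if line.startswith("Pid:"):
            return line
    return "Pid: <not reported>"


if __name__ == "__main__":
    # Differing Pid values across runs mean different processes are
    # picking up requests on the shared socket.
    for _ in range(5):
        print(show_info_pid())
```

If the reported PIDs differ between runs, stats queries (and any admin commands such as server drains) are landing on different processes.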

Use this issue to track an investigation into what causes haproxy to leave connections lingering for so long, determine what we should do going forward (this has been problematic in the past), and spin up any follow-up issues needed to remediate the findings.

/cc @gitlab-com/gl-infra
