Workhorse should keep WebSocket connections alive with periodic pings

Problem

Long-running LLM requests can sometimes take over a minute before any data is sent to the client. This can result in the WebSocket connection being dropped by Cloudflare or other proxy layers.

See gitlab-org/modelops/applied-ml/code-suggestions/ai-assist!3376 (merged)

Solution

Workhorse should periodically send keepalive messages over the WebSocket connection, matching what it already does for gRPC connections. A rough sketch of the idea is shown below.
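A minimal sketch of such a keepalive loop, assuming the connection is a `*websocket.Conn` from `github.com/gorilla/websocket`; the function name `keepAlive` and the interval constants are illustrative, not the actual Workhorse implementation:

```go
package main

import (
	"time"

	"github.com/gorilla/websocket"
)

const (
	pingPeriod   = 30 * time.Second // assumed interval between keepalive pings
	writeTimeout = 10 * time.Second // deadline for writing a control frame
)

// keepAlive sends periodic ping control frames until done is closed, so that
// intermediate proxies (e.g. Cloudflare) don't drop an idle connection while a
// long-running upstream request produces no data.
func keepAlive(conn *websocket.Conn, done <-chan struct{}) {
	ticker := time.NewTicker(pingPeriod)
	defer ticker.Stop()

	for {
		select {
		case <-ticker.C:
			deadline := time.Now().Add(writeTimeout)
			if err := conn.WriteControl(websocket.PingMessage, nil, deadline); err != nil {
				// The write failed; the connection is likely gone, so stop pinging.
				return
			}
		case <-done:
			return
		}
	}
}
```

WriteControl is safe to call concurrently with other write methods, so the ping loop can run in its own goroutine alongside the code streaming the LLM response.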
