GitLab.com / GitLab Infrastructure Team / reliability / Issues / #4106
Closed (moved)
Issue created Apr 26, 2018 by Alex Hanselka (@ahanselka), Owner

Severe site degradation due to database load

At around 01:26 UTC, a database failover was accidentally performed, leading to a split-brain situation. Fortunately, the application fleet continued to follow the true primary.

We shut down postgres-01 since it was the rogue primary. During our investigation, we found that both postgres-03 and postgres-04 were trying to follow postgres-01. As such, we are rebuilding replication on postgres-03 as I write this issue, and will rebuild postgres-04 once that finishes.
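For context, a check like the following shows which upstream each replica is actually streaming from (paths and data directory are illustrative for a PostgreSQL 9.6-era install, where the upstream is configured via `primary_conninfo` in `recovery.conf`; they are assumptions, not taken from this issue):

```shell
# On each replica, see which upstream recovery.conf points at
# (illustrative data directory; adjust for the actual install).
sudo grep primary_conninfo /var/lib/postgresql/9.6/main/recovery.conf

# Or ask the running server which host it is streaming from.
sudo -u postgres psql -c "SELECT conninfo FROM pg_stat_wal_receiver;"
```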

Unfortunately, progress is slowed because we are simultaneously taking a pg_basebackup for WAL-E; we have not had a full basebackup since the correct failover earlier today.
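A WAL-E base backup of this kind is typically pushed with `backup-push`; the `envdir` path and data directory below follow common WAL-E conventions and are assumptions, not details confirmed in this issue:

```shell
# Take a fresh base backup and push it to object storage via WAL-E.
# /etc/wal-e.d/env holds the storage credentials as envdir files
# (conventional layout; adjust paths for the actual install).
sudo -u postgres envdir /etc/wal-e.d/env \
    wal-e backup-push /var/lib/postgresql/9.6/main
```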

This also means that ALL read traffic is going to postgres-02, which has led to some slow performance; however, the site remains up. We have also stopped sidekiq-cluster on the gprd besteffort nodes, as those jobs issue a very large query that the database cannot handle at this time.
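Stopping the workers might look roughly like the following; the exact service management on these nodes is not stated in the issue, so this sketch assumes an Omnibus-style install managed by gitlab-ctl:

```shell
# On each gprd besteffort node, stop the sidekiq-cluster processes
# so they stop issuing the expensive query against the database.
# (Assumes gitlab-ctl service management; adjust for the actual setup.)
sudo gitlab-ctl stop sidekiq-cluster
```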

cc/ @gl-infra

Edited Apr 26, 2018 by John Jarvis