IDs will run out in the future

Problem

Yesterday, @NikolayS raised a concern on Slack (https://gitlab.slack.com/archives/C3NBYFJ6N/p1535651249000100) that the id column in the ci_build_trace_sections table could overflow in the future. Today the table holds about 250 million rows, which leaves room for roughly 1.9 billion more. Once that capacity is consumed, no new records can be created, and as a side effect, any feature that uses the ci_build_trace_sections table will break.
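
For reference, a quick headroom check could look like the sketch below (this assumes read access to the database and the default <table>_id_seq sequence name that SERIAL generates):

    -- Rough headroom check for the 4-byte id column of ci_build_trace_sections.
    -- Assumes the default SERIAL sequence name (ci_build_trace_sections_id_seq).
    SELECT
      last_value                                AS current_id,
      2147483647 - last_value                   AS ids_left,
      round(100.0 * last_value / 2147483647, 2) AS percent_used
    FROM ci_build_trace_sections_id_seq;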

Statistics on gitlab.com (Date: 31st August 2018)

Tables with SERIAL type on id (4-byte integer - MAX: 2,147,483,647)

  • ci_build_trace_sections ... 248,269,283 (Growth rate: ? count/month)
  • ci_build_trace_section_names ... ? (Growth rate: ? count/month)
  • ci_job_artifacts ... ? (Growth rate: ? count/month)
  • ci_builds ... ? (Growth rate: ? count/month)

Tables with BIGSERIAL type on id (8-byte integer - MAX: 9,223,372,036,854,775,807)

  • ci_build_trace_chunks
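
Several numbers above are still missing; as a sketch, a catalog query like the one below can at least enumerate every table whose id column is still a 4-byte integer, together with an approximate row count (n_live_tup is an estimate from the statistics collector, not an exact count):

    -- Sketch: find all tables whose "id" column is still a 4-byte integer,
    -- with an approximate row count from the statistics collector.
    SELECT c.table_name,
           s.n_live_tup AS approx_rows
    FROM information_schema.columns AS c
    JOIN pg_stat_user_tables AS s
      ON s.schemaname = c.table_schema
     AND s.relname    = c.table_name
    WHERE c.table_schema = 'public'
      AND c.column_name  = 'id'
      AND c.data_type    = 'integer'
    ORDER BY s.n_live_tup DESC;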

Actions

  • Set up Prometheus alerts so that we are notified when a column with an auto-increment integer id is about to reach the maximum value of its type.
  • Recreate the table with a bigint id (https://hackernoon.com/the-night-the-postgresql-ids-ran-out-9430a2dbb895).
  • Fortunately, ci_build_trace_sections is not used by any actual feature yet, so we can simply wipe the data and recreate the table with BIGSERIAL (see the SQL sketch after this list).
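
A minimal sketch of that last option in plain SQL (in practice this would be wrapped in a Rails migration; the sequence name assumes the default SERIAL naming):

    -- Sketch: wipe the disposable data and widen the id column to 8 bytes.
    BEGIN;

    -- Discard the existing rows; ALTER TYPE on an empty table is cheap.
    TRUNCATE ci_build_trace_sections;

    -- Widen the id column from integer (4 bytes) to bigint (8 bytes).
    ALTER TABLE ci_build_trace_sections
      ALTER COLUMN id TYPE bigint;

    -- On PostgreSQL 10+ serial sequences are typed as well, so the backing
    -- sequence would also need widening there (not applicable on 9.x):
    -- ALTER SEQUENCE ci_build_trace_sections_id_seq AS bigint;

    COMMIT;

For tables whose data has to be kept, the same ALTER COLUMN ... TYPE bigint works, but it rewrites the whole table under an exclusive lock, so it would need a maintenance window or a more careful batched approach.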

/cc @ayufan @grzesiek @erushton @NikolayS @yorickpeterse
