reset-data job fails with Parallel::DeadWorker during issue seeding
Summary
The reset-data CI job is failing during the database seeding phase, specifically while seeding issues. The job ran for approximately 47 minutes before failing with a Parallel::DeadWorker error.
Error Details
- Job ID: 11659963063
- Pipeline: https://gitlab.com/gitlab-org/gitlab-development-kit/-/jobs/11659963063
- Status: Canceled
- Duration: ~47 minutes (12:51:33 - 13:45:28 UTC)
Stack Trace
rake aborted!
Parallel::DeadWorker: Parallel::DeadWorker
/home/gdk/.gitlab-ci-cache/ruby/gem/gems/parallel-1.27.0/lib/parallel.rb:83:in `rescue in work'
/home/gdk/.gitlab-ci-cache/ruby/gem/gems/parallel-1.27.0/lib/parallel.rb:80:in `work'
...
Caused by:
EOFError: end of file reached
<internal:marshal>:34:in `load'
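For context, the parallel gem forks worker processes and passes results back to the parent over a pipe using Marshal; an EOFError raised inside Marshal.load typically means a worker died (for example, OOM-killed) before writing its result, which the gem then surfaces as Parallel::DeadWorker, as seen in the trace above. A minimal sketch of that failure mode (hypothetical reproduction, not the actual seeding code):

reader, writer = IO.pipe

pid = fork do
  reader.close
  Process.kill('KILL', Process.pid)  # simulate the worker being killed (e.g. by the OOM killer)
  Marshal.dump(:result, writer)      # never reached, so nothing is written back
end

writer.close
Process.waitpid(pid)

begin
  Marshal.load(reader)               # parent hits end of pipe instead of a marshaled result
rescue EOFError => e
  puts e.message                     # "end of file reached", which parallel wraps as DeadWorker
end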
Failure Point
The job was executing the 09_issues.rb seed file when it failed:
== Seed from /home/gdk/gdk/gitlab/db/fixtures/development/09_issues.rb
Timeline
The job successfully completed several seeding phases before failing during issue seeding:
- ✅ Base work item types (0.18s)
- ✅ Default organization (0.12s)
- ✅ Admin user (7.47s)
- ✅ Application settings (0.64s)
- ✅ Users and namespaces (3.22s)
- ✅ Projects (34.10s)
- ✅ Project features, routes, labels (various times)
- ✅ Teams (5.92s)
- ✅ Milestones (1.31s)
- ❌ Issues seeding - Failed after ~41 minutes
Environment
- Ruby: 3.3.9
- Parallel gem: 1.27.0
- Container: registry.gitlab.com/gitlab-org/gitlab-development-kit/mise-bootstrapped-gdk-installed:pl-openssl-3-6-mitigation
Impact
This failure prevents the reset-data verification from completing, which could impact:
- Development environment setup
- CI pipeline reliability
- Developer productivity when resetting GDK data
Potential Causes
- Memory/Resource exhaustion - The parallel processing during issue seeding may be consuming too much memory (a monitoring sketch follows this list)
- Process communication failure - The EOFError suggests inter-process communication broke down
- Timeout issues - Long-running parallel processes may be timing out
- Container resource limits - The CI container may have insufficient resources for parallel issue creation
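To check the memory hypothesis, a lightweight sampler can log the resident set size of the seeding process while issues are created. This is a hypothetical sketch assuming a Linux container with /proc available; it is not part of the GDK seed code:

# Log RSS of a process every 30 seconds (Linux only; reads /proc/<pid>/status).
def rss_mb(pid = Process.pid)
  File.read("/proc/#{pid}/status")[/VmRSS:\s+(\d+)\s+kB/, 1].to_i / 1024
end

sampler = Thread.new do
  loop do
    puts "[mem] parent RSS: #{rss_mb} MB"
    sleep 30
  end
end

# ... run the issue seeding here ...

sampler.kill

The parallel workers would need similar per-process sampling (or a check of the kernel OOM-killer log) to pinpoint which process grows, but even parent-side numbers help tell memory pressure apart from a plain IPC bug.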
Suggested Investigation
- Check if this is a recurring issue in recent pipelines
- Review memory usage during the issues seeding phase
- Consider reducing parallelism for the issues seeding step
- Investigate if container resource limits need adjustment
- Add better error handling/retry logic for parallel processing failures (see the sketch after this list)
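As a rough sketch of that last item, retry logic could wrap the parallel call and reduce the worker count on each attempt; seed_issue and the issues collection are placeholder names, not the actual GDK seed code:

require 'parallel'

MAX_ATTEMPTS = 3

def seed_issues_with_retry(issues, workers:)
  attempts = 0
  begin
    attempts += 1
    Parallel.each(issues, in_processes: workers) { |issue| seed_issue(issue) }  # seed_issue is a placeholder
  rescue Parallel::DeadWorker
    raise if attempts >= MAX_ATTEMPTS
    warn "Worker died (attempt #{attempts}/#{MAX_ATTEMPTS}); retrying with fewer workers"
    workers = [workers / 2, 1].max  # back off parallelism before retrying
    retry
  end
end

Note that retrying the whole collection assumes the seeding step is idempotent (or can tolerate duplicates); otherwise the retry should operate on smaller batches, as outlined in the workaround below.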
Workaround
For immediate relief, consider the options below (a combined sketch follows the list):
- Reducing the number of parallel workers for issue seeding
- Adding retry logic for the seeding process
- Splitting the issue seeding into smaller batches
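A hypothetical sketch combining these options, with fewer workers plus smaller batches so a single dead worker loses less progress; issues_to_seed and create_issue! are illustrative names, not the actual seed code:

require 'parallel'

WORKERS    = 2      # reduced from the default of Parallel.processor_count
BATCH_SIZE = 500    # smaller slices limit how much work a dead worker takes with it

issues_to_seed.each_slice(BATCH_SIZE) do |batch|
  Parallel.each(batch, in_processes: WORKERS) do |attributes|
    create_issue!(attributes)  # placeholder for the per-issue seeding step
  end
end

Batching also keeps each Parallel.each call short, which makes an occasional Parallel::DeadWorker cheaper to retry.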