# Optimize CI/CD Pipeline Performance

## CI/CD Pipeline Optimization Opportunities
Our current CI/CD pipeline has several optimization opportunities that could significantly reduce build times, especially for non-build stages. While the build job is already well-optimized with 8 parallel chunks, other stages can be improved.
## Current Performance Issues
- Duplicate dependency installations (staging + production)
- Sequential execution of jobs that could run in parallel
- Suboptimal caching strategy
- Heavy Docker images
- Excessive memory allocation
- Missing conditional job execution
## Optimization Recommendations
### 1. Cache Optimization

**Current issue:** Using `yarn install --frozen-lockfile` in both the staging and production installs without optimal caching.

**Solution:** Improve the cache configuration:

```yaml
.cache_definition: &cache_definition
  key:
    files:
      - yarn.lock
  paths:
    - node_modules/
    - .yarn/cache/  # Add yarn cache
  policy: pull-push  # Use pull-push for better cache utilization
```
### 2. Parallel Job Execution

**Current issue:** Lint jobs run sequentially.

**Solution:** Run linting tasks in parallel:

```yaml
# Option 1: Parallel matrix
lint-and-prettier:
  parallel: 2

# Option 2: Separate jobs
eslint:
  stage: lint
  script: yarn eslint:diff --fix

prettier:
  stage: lint
  script: yarn prettier:diff:fix
```
### 3. Dependency Installation Optimization

**Current issue:** Two separate install jobs with different caching policies.

**Solution:** Consolidate to a single optimized install:

```yaml
install-dependencies:
  stage: prepare
  script:
    - yarn install --frozen-lockfile --prefer-offline
  cache:
    <<: *cache_definition
    policy: pull-push
  artifacts:
    paths:
      - node_modules/
    expire_in: 1 week
```
### 4. Test Stage Improvements

**Current issue:** Tests run sequentially when they could be parallel.

**Solution:**

- Run `vitest` and `check_file_naming` in parallel
- Use `--reporter=basic` for faster CI output
- Add more granular `changes` conditions
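As a sketch, the steps above could look like the following, assuming `vitest` and `check_file_naming` are existing yarn scripts in this project (the exact script names and stage layout in the real pipeline may differ):

```yaml
# Sketch: run the test suite and the file-naming check as sibling jobs
# in the same stage so the runner schedules them concurrently.
vitest:
  stage: test
  script:
    - yarn vitest run --reporter=basic  # basic reporter trims CI log output

check_file_naming:
  stage: test
  script:
    - yarn check_file_naming  # assumed yarn script from the bullets above
  rules:
    - changes:
        - "**/*.vue"
        - "**/*.ts"
```

Because both jobs live in the same stage, GitLab starts them together; the `changes` rule keeps the naming check from running on unrelated commits.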
### 5. Deployment Optimization

**Current issue:** Downloads external artifacts during deployment, adding latency.

**Solutions:**
- Cache buyer-experience artifacts
- Use GitLab's dependency proxy
- Implement parallel artifact downloads
- Consider artifact mirroring
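A minimal sketch of the artifact-caching idea, assuming the buyer-experience artifacts are fetched into a local directory by a script (`fetch-artifacts.sh`, the cache path, and the `BUYER_EXPERIENCE_REF` variable are all hypothetical placeholders, not names from the actual pipeline):

```yaml
# Sketch: cache externally downloaded artifacts keyed on the upstream ref,
# so repeat deployments skip the download when nothing changed.
deploy:
  stage: deploy
  cache:
    key: buyer-experience-${BUYER_EXPERIENCE_REF}  # hypothetical variable
    paths:
      - tmp/buyer-experience/
    policy: pull-push
  script:
    - ./scripts/fetch-artifacts.sh  # hypothetical script; should no-op on a warm cache
    - ./scripts/deploy.sh           # hypothetical deploy entry point
```

The key choice here is keying the cache on the upstream ref rather than the pipeline, so any deployment job against the same buyer-experience revision gets a warm cache.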
### 6. Docker Image Optimization

**Current issue:** Using `node:22.16-slim`, which is larger than necessary.

**Solution:** Switch to Alpine-based images:

```yaml
default:
  image: node:22.16-alpine  # Smaller, faster startup
```
### 7. Conditional Job Execution

**Current issue:** Jobs run even when unrelated files change.

**Solution:** Add granular change detection:

```yaml
lint-and-prettier:
  rules:
    - if: *mr_condition
      changes:
        - "**/*.{js,ts,vue,yml,yaml}"
        - "package.json"
        - "yarn.lock"
```
### 8. Memory and Resource Optimization

**Current issue:** `NODE_OPTIONS: '--max-old-space-size=50000'` allocates roughly 50 GB of memory.

**Solution:**

- Reduce to 8000-16000 (8-16 GB) for most jobs
- Use `saas-linux-xlarge-amd64` only for memory-intensive jobs
- Profile actual memory usage to right-size the allocation
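The right-sizing idea above can be sketched like this; the heap sizes and the `saas-linux-xlarge-amd64` tag come from the bullets, while the `build-memory-heavy` job name is purely illustrative:

```yaml
# Sketch: default to a moderate heap, and reserve the large runner
# for the one job profiling shows actually needs it.
default:
  variables:
    NODE_OPTIONS: '--max-old-space-size=8192'  # 8 GB default instead of 50 GB

build-memory-heavy:  # illustrative job name
  tags:
    - saas-linux-xlarge-amd64  # large runner only here
  variables:
    NODE_OPTIONS: '--max-old-space-size=16384'  # 16 GB for this job only
```

Note that `--max-old-space-size` is specified in megabytes, which is why the original value of 50000 amounts to ~50 GB.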
## Expected Performance Improvements

| Optimization | Estimated Time Savings |
|---|---|
| Parallel linting | 2-3 minutes |
| Better caching | 1-2 minutes per job |
| Optimized installs | 30-60 seconds |
| Alpine images | 10-30 seconds per job |
| Conditional execution | Variable (skips unnecessary jobs) |

**Total estimated savings:** 4-7 minutes per pipeline run
## Implementation Priority

- **High Impact, Low Risk:** Cache optimization, Alpine images, memory tuning
- **Medium Impact, Low Risk:** Parallel linting, conditional execution
- **High Impact, Medium Risk:** Dependency installation consolidation
- **Medium Impact, Medium Risk:** Deployment optimizations
## Acceptance Criteria

- [ ] Implement cache optimization with yarn cache inclusion
- [ ] Switch to Alpine-based Docker images
- [ ] Optimize memory allocation (reduce `NODE_OPTIONS`)
- [ ] Implement parallel linting execution
- [ ] Add granular conditional job execution
- [ ] Consolidate dependency installation jobs
- [ ] Optimize test stage for parallel execution
- [ ] Document deployment artifact caching strategy
- [ ] Measure and validate performance improvements
- [ ] Update pipeline documentation
## Additional Notes
- These optimizations should not affect the build job performance (already well-optimized)
- Changes should be implemented incrementally to validate each improvement
- Consider creating a separate branch to test optimizations before merging
- Monitor pipeline success rates during implementation to catch any regressions