Expose cluster infrastructure metadata in KAS agent API (IP ranges, endpoints, deployment details)
Problem Statement
When using GitLab's Kubernetes Agent Service (KAS) for deployments, there is currently no way to retrieve cluster infrastructure metadata (IP ranges, endpoints, node IPs, network CIDR) through the GitLab API. This creates a significant gap for:
- Security auditing: Unable to verify which IP ranges are associated with production deployments
- Compliance tracking: Cannot correlate agent names with actual infrastructure endpoints
- Deployment verification: Difficult to confirm that deployments occurred through specific agents to specific clusters
- Infrastructure management: No programmatic way to map agents to their underlying Kubernetes infrastructure
Current Limitations
The existing KAS agent API endpoints provide:
- Agent metadata (name, ID, config project)
- Agent tokens and authentication
- Internal communication details
- Usage metrics
But do NOT expose:
- Cluster API endpoint IP address
- Node IP ranges (external and internal)
- Pod CIDR ranges
- Cluster network information
- Deployment infrastructure location
Example Current API Response
```json
{
  "id": 177,
  "name": "eks-dwainaina_pro-sit-a",
  "config_project": {
    "id": 220,
    "name": "dwainaina_pro",
    "path_with_namespace": "devops/dwainaina_pro"
  },
  "created_at": "2025-01-13T10:53:40.475Z",
  "created_by_user_id": 49
}
```
Missing: No cluster infrastructure details to verify production deployment location or IP ranges.
Proposed Solution
Add an optional `cluster_info` field to the KAS agent API response that exposes infrastructure metadata:
```json
{
  "id": 177,
  "name": "eks-dwainaina_pro-sit-a",
  "config_project": {
    "id": 220,
    "name": "dwainaina_pro",
    "path_with_namespace": "devops/dwainaina_pro"
  },
  "created_at": "2025-01-13T10:53:40.475Z",
  "created_by_user_id": 49,
  "cluster_info": {
    "api_endpoint": "https://10.0.1.50:6443",
    "api_endpoint_ip": "10.0.1.50",
    "cluster_name": "eks-prod-cluster",
    "cluster_context": "arn:aws:eks:eu-west-1:123456789:cluster/eks-prod-cluster",
    "node_ips": {
      "external": ["52.1.2.3", "52.1.2.4", "52.1.2.5"],
      "internal": ["10.0.1.10", "10.0.1.11", "10.0.1.12"]
    },
    "pod_cidr": "10.2.0.0/16",
    "service_cidr": "10.96.0.0/12",
    "agent_pod_ip": "10.2.1.50",
    "agent_namespace": "gitlab-agent",
    "deployment_status": "connected",
    "last_connection": "2026-02-11T10:13:05.953Z"
  }
}
```
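Because the field is optional, clients should tolerate its absence on instances that don't emit it yet. A minimal consumer sketch against the proposed response shape (the `summarize` helper is illustrative, not part of any existing API):

```python
import json

# Sample payload matching the proposed schema above (values taken from this proposal).
payload = json.loads("""{
  "id": 177,
  "name": "eks-dwainaina_pro-sit-a",
  "cluster_info": {
    "api_endpoint_ip": "10.0.1.50",
    "deployment_status": "connected",
    "node_ips": {"internal": ["10.0.1.10", "10.0.1.11", "10.0.1.12"]}
  }
}""")

def summarize(agent: dict) -> str:
    """Describe an agent, tolerating servers that don't emit cluster_info yet."""
    info = agent.get("cluster_info")
    if info is None:  # optional field: absent on instances without this feature
        return f"{agent['name']}: no cluster metadata"
    return (f"{agent['name']} -> {info['api_endpoint_ip']} "
            f"({info['deployment_status']}, "
            f"{len(info['node_ips']['internal'])} internal node IPs)")

print(summarize(payload))
```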
Use Cases
- Security Audit: Verify that agent "prod-agent" is actually connected to the production cluster with IP range 10.0.0.0/16
- Compliance Report: Generate audit trail showing which agents deployed to which infrastructure
- Incident Response: Quickly identify which cluster an agent is connected to during security incidents
- Infrastructure Mapping: Programmatically build inventory of agent-to-cluster relationships
- CI/CD Pipeline Verification: Confirm deployment target infrastructure before executing critical jobs
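The security-audit use case above can be sketched with the standard-library `ipaddress` module; `audit_agent` and its expected-CIDR parameter are hypothetical helper names, and the input mirrors the `node_ips` shape proposed earlier:

```python
import ipaddress

def audit_agent(cluster_info: dict, expected_cidr: str) -> list[str]:
    """Return internal node IPs that fall OUTSIDE the expected production range."""
    network = ipaddress.ip_network(expected_cidr)
    return [ip for ip in cluster_info["node_ips"]["internal"]
            if ipaddress.ip_address(ip) not in network]

# Sample values from the proposed response, checked against the audit range 10.0.0.0/16.
cluster_info = {"node_ips": {"internal": ["10.0.1.10", "10.0.1.11", "10.0.1.12"]}}
violations = audit_agent(cluster_info, "10.0.0.0/16")
print("audit passed" if not violations else f"unexpected IPs: {violations}")
```

An empty result means every internal node IP sits inside the declared production range; anything returned is a candidate for incident-response follow-up.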
Implementation Considerations
- Scope: Optional field to maintain backward compatibility
- Permissions: Respect existing agent access controls
- Performance: Cache cluster info to avoid excessive Kubernetes API calls
- Security: Only expose information accessible to authenticated users with appropriate permissions
- Availability: Support both GitLab.com and Self-Managed instances
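The caching consideration could take the form of a per-agent TTL cache inside KAS, so repeated API reads don't translate into repeated Kubernetes API calls. A minimal sketch, assuming a pluggable fetch function and a 60-second TTL (both are illustrative choices, not existing KAS internals):

```python
import time
from typing import Callable

class ClusterInfoCache:
    """Per-agent TTL cache so repeated API reads don't hit the Kubernetes API."""

    def __init__(self, fetch: Callable[[int], dict], ttl_seconds: float = 60.0):
        self._fetch = fetch  # hypothetical fetcher that reads live data from the cluster
        self._ttl = ttl_seconds
        self._entries: dict[int, tuple[float, dict]] = {}

    def get(self, agent_id: int) -> dict:
        now = time.monotonic()
        entry = self._entries.get(agent_id)
        if entry is not None and now - entry[0] < self._ttl:
            return entry[1]  # still fresh: serve the cached copy
        info = self._fetch(agent_id)  # expired or missing: refresh from the cluster
        self._entries[agent_id] = (now, info)
        return info
```

Within one TTL window, any number of API reads for the same agent cost a single upstream fetch; staleness is bounded by the TTL, which matters mostly for the `deployment_status` and `last_connection` fields.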
Related Issues
- #220912 (closed) - Private project access for agents
- #389393 - Agent access to private projects for GitOps
Acceptance Criteria
- KAS agent API returns cluster infrastructure metadata
- Metadata includes API endpoint IP, node IPs, and CIDR ranges
- Field is optional and backward compatible
- Documentation updated with new fields
- Works with both GitLab.com and Self-Managed instances
- Respects existing permission model