
feat: Add EKS Kubeconfig Generation with Private Cluster Support

Overview

This MR merges two feature MRs that together provide EKS private-cluster access capabilities:

  1. !87 - EKS Kubeconfig Generation - Generate ready-to-use kubeconfig files via HTTP API
  2. !88 - SSM Port Forwarding - Enable secure tunneling to private EKS clusters and other AWS resources

These features work together to enable seamless CI/CD access to both public and private EKS clusters without requiring additional tools or complex configuration.

Issue #4

Child MRs

!87 (closed) - EKS Kubeconfig Generation

Adds a new /kubeconfig endpoint that generates complete, ready-to-use kubeconfig files for AWS EKS clusters with pre-authenticated tokens, enabling CI/CD pipelines to obtain kubeconfig files via a simple HTTP request without requiring an AWS CLI installation.
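
For pipelines that would rather not shell out to curl, the same endpoint can be called from any HTTP client. A minimal sketch using Python's requests library, with the service hostname and cluster name as illustrative values (matching the usage example further down):

# Minimal sketch: fetch a ready-to-use kubeconfig over HTTP.
# The hostname and cluster name are illustrative, not fixed by this MR.
import requests

resp = requests.get(
    "http://aws-auth-provider/kubeconfig",
    params={"cluster_name": "my-cluster"},
    timeout=30,
)
resp.raise_for_status()

with open("kubeconfig.yaml", "w") as f:
    f.write(resp.text)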

!88 (closed) - SSM Port Forwarding

Adds AWS Systems Manager (SSM) port forwarding capabilities to enable secure tunneling to EC2 instances and private resources. Uses a dual-port architecture with a Python TCP proxy to forward traffic from external clients to SSM-managed instances.
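
As a rough illustration of the dual-port idea (the port numbers and structure below are assumptions for the sketch, not the MR's actual implementation): an external-facing listener accepts client connections and pipes the bytes to the local port opened by the SSM port-forwarding session.

# Sketch of a dual-port TCP proxy: external clients connect on one port,
# traffic is piped to the local port where the SSM session listens.
# Port numbers are illustrative; the real service wires these up dynamically.
import asyncio

EXTERNAL_PORT = 8443   # where kubectl / external clients connect
SSM_LOCAL_PORT = 9443  # where the SSM port-forwarding session listens locally


async def pipe(reader, writer):
    try:
        while data := await reader.read(65536):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()


async def handle_client(client_reader, client_writer):
    ssm_reader, ssm_writer = await asyncio.open_connection("127.0.0.1", SSM_LOCAL_PORT)
    await asyncio.gather(
        pipe(client_reader, ssm_writer),
        pipe(ssm_reader, client_writer),
    )


async def main():
    server = await asyncio.start_server(handle_client, "0.0.0.0", EXTERNAL_PORT)
    async with server:
        await server.serve_forever()


if __name__ == "__main__":
    asyncio.run(main())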


How These Features Work Together: Private EKS Cluster Support

The combination of these two MRs enables access to private EKS clusters. When the /kubeconfig endpoint detects a private EKS cluster (one with endpointPublicAccess: false and endpointPrivateAccess: true), it automatically:

  1. Validates Requirements - Ensures the instance_id parameter is provided (required for private clusters)
  2. Establishes SSM Tunnel - Uses the SSM port forwarding feature (!88 (closed)) to create a tunnel through the specified EC2 instance
  3. Proxies Kubernetes API - Forwards traffic to the private Kubernetes API endpoint
  4. Configures TLS - Sets insecure-skip-tls-verify: true since traffic is proxied through localhost, and verifies the TLS certificate manually by making a test request to /version
  5. Returns Kubeconfig - Provides a fully functional kubeconfig with the proxy URL and authentication token (a condensed sketch of this flow follows the list)
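
A condensed sketch of that flow, assuming boto3 and illustrative names (this is not the actual handler code; token generation and the session-manager-plugin data plane are omitted):

# Condensed sketch of the private-cluster flow described above.
import boto3


def private_cluster_server(cluster_name, instance_id, region):
    eks = boto3.client("eks", region_name=region)
    cluster = eks.describe_cluster(name=cluster_name)["cluster"]
    vpc = cluster["resourcesVpcConfig"]

    is_private = vpc["endpointPrivateAccess"] and not vpc["endpointPublicAccess"]
    if not is_private:
        return cluster["endpoint"], False  # public endpoint, normal TLS

    # 1. Validate requirements
    if not instance_id:
        raise ValueError("instance_id is required for private clusters")

    # 2. Establish an SSM tunnel to the private API endpoint through the instance
    api_host = cluster["endpoint"].removeprefix("https://")
    ssm = boto3.client("ssm", region_name=region)
    ssm.start_session(
        Target=instance_id,
        DocumentName="AWS-StartPortForwardingSessionToRemoteHost",
        Parameters={
            "host": [api_host],
            "portNumber": ["443"],
            "localPortNumber": ["443"],
        },
    )
    # The session-manager-plugin (not shown) binds the local port; the service
    # then probes GET /version through the tunnel before returning a kubeconfig.

    # 3./4. Proxy + TLS: point clients at localhost and skip hostname verification
    return "https://localhost:443", True


server, skip_tls = private_cluster_server("my-private-cluster", "i-1234567890abcdef0", "us-east-1")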

Private Cluster Requirements

  • An instance_id parameter pointing to a running EC2 instance in the EKS cluster's VPC
  • The SSM agent installed and configured on that instance (a quick preflight check is sketched after this list)
  • IAM permissions for SSM Session Manager
  • Network reachability from the instance to the private Kubernetes API endpoint (typically the same VPC/subnets)
  • IAM permissions for EKS cluster access (API-mode access entries or the aws-auth ConfigMap)
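
Before requesting a kubeconfig, the first two requirements can be checked up front; a small sketch assuming boto3 (the instance ID and region are illustrative):

# Optional preflight sketch: confirm the chosen instance is SSM-managed and online.
import boto3


def is_ssm_managed(instance_id, region="us-east-1"):
    ssm = boto3.client("ssm", region_name=region)
    resp = ssm.describe_instance_information(
        Filters=[{"Key": "InstanceIds", "Values": [instance_id]}]
    )
    info = resp.get("InstanceInformationList", [])
    return bool(info) and info[0].get("PingStatus") == "Online"


if not is_ssm_managed("i-1234567890abcdef0"):
    raise SystemExit("Instance is not registered with SSM or is offline")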

Required IAM Permissions

  • AWS Systems Manager
    • StartSession
    • DescribeInstanceInformation
  • Amazon Elastic Kubernetes Service
    • DescribeCluster
  • AWS Security Token Service 
    • GetCallerIdentity
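
The actions above translate into a small IAM policy. A hedged sketch that creates it with boto3 (the policy name is illustrative, and Resource should be scoped down for production):

# Illustrative IAM policy covering the actions listed above; attach it to the
# principal that aws-auth-provider runs as. Resource is "*" only for brevity.
import json
import boto3

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:StartSession",
                "ssm:DescribeInstanceInformation",
                "eks:DescribeCluster",
                "sts:GetCallerIdentity",
            ],
            "Resource": "*",
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="aws-auth-provider-eks-private",  # illustrative name
    PolicyDocument=json.dumps(policy_document),
)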

Private Cluster Usage Example

# Generate kubeconfig for a private cluster
curl -s "http://aws-auth-provider/kubeconfig?cluster_name=my-private-cluster&instance_id=i-1234567890abcdef0" > kubeconfig.yaml

# Use the kubeconfig (SSM tunnel is active as long as aws-auth-provider is running)
export KUBECONFIG=kubeconfig.yaml
kubectl get nodes
kubectl get pods -n kube-system

Example Kubeconfig Output for Private Cluster

apiVersion: v1
kind: Config
clusters:
- name: curious-jazz-mushroom
  cluster:
    server: https://localhost:443 # Proxied through SSM
    insecure-skip-tls-verify: true # Required as we changed the hostname
contexts:
- name: curious-jazz-mushroom-context
  context:
    cluster: curious-jazz-mushroom
    user: kubectl-user
    namespace: default
current-context: curious-jazz-mushroom-context
users:
- name: kubectl-user
  user:
    token: k8s-aws-v1.verySecretToken
preferences: {}
metadata:
  token-ttl-minutes: 15
  token-expires-at: '2025-10-05T14:00:17.660947Z'
  cluster-arn: arn:aws:eks:us-east-1:009093122950:cluster/curious-jazz-mushroom
  cluster-version: '1.33'
  cluster-status: ACTIVE
  authentication-mode: API
  endpoint-public-access: false
  endpoint-private-access: true
  note: 'This is a PRIVATE cluster. SSM port forwarding has been established. TLS
    hostname verification is disabled (insecure-skip-tls-verify: true) because traffic
    is proxied through localhost. Ensure your AWS IAM user/role is mapped in the cluster''s
    access entries (for API mode) or aws-auth ConfigMap (for CONFIG_MAP mode). The
    SSM session will remain active as long as the aws-auth-provider service is running.'
  ssm-forwarded: true
  original-endpoint: https://curious-jazz-mushroom.gr7.us-east-1.eks.amazonaws.com
  tls-hostname-verification: disabled