Istio Architecture Internals: Control Plane and Data Plane

Introduction

Istio is one of the most widely adopted service meshes in Kubernetes environments. However, understanding what actually happens internally, beyond "creating a VirtualService routes traffic", makes a significant difference in day-to-day operations and troubleshooting.

This post analyzes the internal mechanisms of how Istio's control plane and data plane interact, and how user-defined CRDs are translated into Envoy configurations.

Istio Architecture Overview

Istio consists of two major parts:

┌─────────────────────────────────────────────────┐
│ Control Plane                                   │
│  ┌───────────────────────────────────────────┐  │
│  │                istiod                     │  │
│  │  ┌─────────┐  ┌─────────┐  ┌──────────┐  │  │
│  │  │  Pilot  │  │ Citadel │  │  Galley  │  │  │
│  │  │  (xDS)  │  │  (CA)   │  │(Validate)│  │  │
│  │  └─────────┘  └─────────┘  └──────────┘  │  │
│  └───────────────────────────────────────────┘  │
├─────────────────────────────────────────────────┤
│ Data Plane                                      │
│  ┌──────────┐  ┌──────────┐  ┌──────────┐       │
│  │ App + EP │  │ App + EP │  │ App + EP │       │
│  │ (Pod A)  │  │ (Pod B)  │  │ (Pod C)  │       │
│  └──────────┘  └──────────┘  └──────────┘       │
│                                                 │
│ EP = Envoy Proxy (istio-proxy sidecar)          │
└─────────────────────────────────────────────────┘

istiod: The Unified Control Plane

Historical Background

Before Istio 1.5, Pilot, Citadel, Galley, and Mixer were each deployed as separate microservices. Starting with Istio 1.5, they were unified into a single binary called istiod, and Mixer was completely removed in 1.8.

Pilot: Traffic Management Engine

Pilot is the core of istiod, performing the following roles:

  1. Service Discovery: Watches the Kubernetes API server to track Service, Endpoint, and Pod changes
  2. Configuration Translation: Converts Istio CRDs (VirtualService, DestinationRule, etc.) into Envoy configuration
  3. xDS Server: Pushes translated configurations to each Envoy proxy via gRPC streams
Kubernetes API Server
        │
        ▼
   ┌─────────┐
   │  Pilot  │ ← Watches Istio CRDs (VirtualService, DestinationRule, Gateway...)
   │         │ ← Watches Kubernetes resources (Service, Endpoints, Pod...)
   └────┬────┘
        │ xDS (gRPC stream)
        ▼
   ┌─────────┐
   │  Envoy  │ ← Receives LDS, RDS, CDS, EDS, SDS
   └─────────┘

Citadel: Certificate Management

Citadel (now integrated into istiod) manages workload identity and certificates in the mesh:

  • Acts as a CA (Certificate Authority)
  • Issues X.509 certificates to each workload
  • Assigns identity based on the SPIFFE standard
  • Automatic certificate rotation (default 24 hours)

Galley: Configuration Validation

Galley validates Istio configuration:

  • Configuration validation via Kubernetes Admission Webhook
  • CRD schema validation
  • Cross-reference integrity checking (e.g., whether a Gateway referenced by a VirtualService exists)

Envoy Sidecar Injection Mechanism

MutatingWebhookConfiguration

Istio's sidecar injection leverages Kubernetes MutatingAdmissionWebhook:

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: istio-sidecar-injector
webhooks:
  - name: sidecar-injector.istio.io
    namespaceSelector:
      matchLabels:
        istio-injection: enabled
    rules:
      - apiGroups: ['']
        apiVersions: ['v1']
        operations: ['CREATE']
        resources: ['pods']

When a pod is created, the following process occurs:

1. kubectl apply -f deployment.yaml
2. Kubernetes API Server calls the Admission Webhook
3. istiod Sidecar Injector modifies the pod spec
4. Modified pod spec is returned
5. Pod is created with the modified spec
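Injection is opt-in at the namespace level via the `istio-injection: enabled` label matched by the webhook's `namespaceSelector` above, and can be overridden per workload. As a sketch, an abbreviated Deployment manifest using the documented `sidecar.istio.io/inject` annotation (also supported as a label in newer Istio versions) to exclude a single workload:

```yaml
# Abbreviated manifest: opt one workload out of injection
# even though its namespace is labeled istio-injection=enabled
apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-app
spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "false"
```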

Injected Containers

Two containers are added during sidecar injection:

1. istio-init (Init Container)

initContainers:
  - name: istio-init
    image: proxyv2
    command:
      - istio-iptables
      - '-p'
      - '15001' # Envoy outbound port
      - '-z'
      - '15006' # Envoy inbound port
      - '-u'
      - '1337' # istio-proxy UID (traffic from this UID is excluded from redirect)
      - '-m'
      - 'REDIRECT'
    securityContext:
      capabilities:
        add: ['NET_ADMIN', 'NET_RAW']

The istio-init container sets up iptables rules to redirect all inbound/outbound traffic to the Envoy proxy.

2. istio-proxy (Sidecar Container)

containers:
  - name: istio-proxy
    image: proxyv2
    ports:
      - containerPort: 15090 # Prometheus metrics
      - containerPort: 15021 # Health check
    env:
      - name: ISTIO_META_CLUSTER_ID
        value: 'Kubernetes'
      - name: PILOT_CERT_PROVIDER
        value: 'istiod'

iptables Traffic Redirect

The flow of iptables rules set by istio-init:

[Inbound Traffic]
External -> Pod IP:Port
  -> iptables PREROUTING
  -> ISTIO_INBOUND chain
  -> REDIRECT to 15006 (Envoy inbound listener)
  -> Envoy processes then forwards to localhost:AppPort

[Outbound Traffic]
App -> External Service IP:Port
  -> iptables OUTPUT
  -> ISTIO_OUTPUT chain
  -> REDIRECT to 15001 (Envoy outbound listener)
  -> Envoy processes then forwards to actual destination

[Exception]
Traffic from UID 1337 (istio-proxy) is excluded from redirect -> prevents infinite loop
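As a simplified sketch of what istio-init programs into the NAT table (abbreviated; the real setup also builds the ISTIO_INBOUND/ISTIO_IN_REDIRECT chains for port 15006 and adds further exclusions):

```
# Chain that performs the actual redirect to Envoy's outbound port
iptables -t nat -N ISTIO_REDIRECT
iptables -t nat -A ISTIO_REDIRECT -p tcp -j REDIRECT --to-ports 15001

# Route all outgoing TCP traffic through the ISTIO_OUTPUT chain
iptables -t nat -A OUTPUT -p tcp -j ISTIO_OUTPUT

# Skip redirect for Envoy's own traffic (UID 1337) and loopback,
# then redirect everything else to Envoy
iptables -t nat -A ISTIO_OUTPUT -m owner --uid-owner 1337 -j RETURN
iptables -t nat -A ISTIO_OUTPUT -d 127.0.0.1/32 -j RETURN
iptables -t nat -A ISTIO_OUTPUT -j ISTIO_REDIRECT
```

The `--uid-owner 1337 -j RETURN` rule is what implements the exception described above: without it, Envoy's own outbound connections would be redirected back to Envoy forever.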

xDS Protocol In Detail

xDS ("x Discovery Service", where the x stands in for the resource type being discovered) is the API protocol through which Envoy dynamically receives its configuration.

xDS API Types

API    Full Name                     Role
LDS    Listener Discovery Service    Listener configuration (ports, protocols)
RDS    Route Discovery Service       HTTP routing rules
CDS    Cluster Discovery Service     Upstream cluster definitions
EDS    Endpoint Discovery Service    Actual endpoint list within clusters
SDS    Secret Discovery Service      TLS certificates and keys

Configuration Push Flow

[1] User creates a VirtualService
[2] Pilot detects the change via Kubernetes API watch
[3] Pilot translates VirtualService to Envoy RDS configuration
[4] Related CDS and EDS configurations are also generated
[5] Pushed to the workload's Envoy via gRPC stream
[6] Envoy hot-reloads the new configuration (no connection drops)

ADS (Aggregated Discovery Service)

Istio uses ADS to consolidate all xDS responses into a single gRPC stream. This ensures configuration consistency:

  • Ordering guarantee between CDS and EDS (cluster definition first, endpoints after)
  • Ordering guarantee between LDS and RDS
  • Atomic configuration updates
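A toy model (not Istio code) of why this ordering matters: EDS responses reference clusters by name, so a proxy that applied endpoints before the corresponding CDS cluster existed would hold a dangling reference. ADS lets istiod sequence the stream so that never happens:

```python
# Toy model of ADS ordering: endpoints (EDS) reference clusters (CDS),
# so the cluster must be applied first.
class ProxyConfig:
    def __init__(self):
        self.clusters = {}

    def apply_cds(self, name):
        # Register the cluster definition (empty endpoint list for now)
        self.clusters[name] = []

    def apply_eds(self, name, endpoints):
        # Applying EDS for an unknown cluster would be an error
        if name not in self.clusters:
            raise KeyError(f"EDS for unknown cluster {name}")
        self.clusters[name] = endpoints

cfg = ProxyConfig()
cfg.apply_cds("outbound|9080||reviews.default.svc.cluster.local")
cfg.apply_eds("outbound|9080||reviews.default.svc.cluster.local",
              ["10.1.0.5:9080", "10.1.0.6:9080"])
```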

Verifying Configuration

# Check Envoy listeners for a specific pod
istioctl proxy-config listeners PODNAME.NAMESPACE

# Check route configuration
istioctl proxy-config routes PODNAME.NAMESPACE

# Check cluster configuration
istioctl proxy-config clusters PODNAME.NAMESPACE

# Check endpoints
istioctl proxy-config endpoints PODNAME.NAMESPACE

# Full Envoy configuration dump
istioctl proxy-config all PODNAME.NAMESPACE -o json

Envoy Filter Chain Architecture

The Envoy proxy processes requests through a hierarchical filter chain:

[Request Flow]

Listener (port binding)
Filter Chain (select matching filter chain)
    ├── Network Filters
    │   ├── TCP Proxy Filter (L4)
    │   └── HTTP Connection Manager (L7)
    │       │
    │       ├── HTTP Filters
    │       │   ├── RBAC Filter (authorization)
    │       │   ├── JWT Authn Filter (authentication)
    │       │   ├── Fault Injection Filter
    │       │   ├── CORS Filter
    │       │   ├── Stats Filter (metrics)
    │       │   └── Router Filter (final routing)
    │       │
    │       └── Route Configuration
    │           ├── Virtual Host selection
    │           └── Route matching and Cluster determination
Cluster (upstream selection)
Endpoint (actual target pod)

Listener Structure

Listener 0.0.0.0:15006 (Inbound)
├── FilterChain: App port (e.g., 8080)
│   ├── TLS Inspector
│   ├── HTTP Connection Manager
│   │   ├── istio_authn filter
│   │   ├── envoy.filters.http.rbac
│   │   └── envoy.filters.http.router
│   └── Route: inbound|8080|http|service.ns.svc.cluster.local
└── FilterChain: Default (passthrough)

Listener 0.0.0.0:15001 (Outbound)
├── FilterChain: Per-service matching
│   ├── HTTP Connection Manager
│   │   ├── envoy.filters.http.fault
│   │   ├── envoy.filters.http.cors
│   │   ├── istio.stats
│   │   └── envoy.filters.http.router
│   └── Route: Per-service VirtualHost
└── FilterChain: PassthroughCluster (unmatched traffic)

Workload Identity: SPIFFE

SPIFFE ID Scheme

Istio uses the SPIFFE (Secure Production Identity Framework For Everyone) standard:

spiffe://TRUST_DOMAIN/ns/NAMESPACE/sa/SERVICE_ACCOUNT

Example:
spiffe://cluster.local/ns/production/sa/frontend
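The scheme above is purely mechanical; a minimal sketch of building and parsing such an identity (the function names are illustrative, not Istio APIs):

```python
def spiffe_id(trust_domain: str, namespace: str, service_account: str) -> str:
    """Build a workload identity following the SPIFFE scheme Istio uses."""
    return f"spiffe://{trust_domain}/ns/{namespace}/sa/{service_account}"

def parse_spiffe_id(uri: str) -> dict:
    """Split a SPIFFE ID back into components, assuming Istio's /ns/.../sa/... layout."""
    trust_domain, path = uri.removeprefix("spiffe://").split("/", 1)
    _, ns, _, sa = path.split("/")
    return {"trust_domain": trust_domain, "namespace": ns, "service_account": sa}

ident = spiffe_id("cluster.local", "production", "frontend")
print(ident)  # spiffe://cluster.local/ns/production/sa/frontend
```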

Certificate Issuance Flow (CSR Flow)

[1] istio-agent (in-pod) generates a key pair
[2] Creates a CSR (Certificate Signing Request)
[3] Sends CSR to istiod (gRPC, authenticated via bootstrap token)
[4] istiod validates the CSR:
    - ServiceAccount token validity
    - Namespace membership verification
[5] istiod CA signs the X.509 certificate
[6] Returns signed certificate to istio-agent
[7] istio-agent delivers the certificate to Envoy via SDS
[8] Envoy uses the certificate for mTLS

SDS (Secret Discovery Service)

Certificates are delivered to Envoy via SDS, not through the filesystem:

  • istio-agent acts as a local SDS server
  • Envoy requests certificates via the SDS API
  • No Envoy restart needed for certificate rotation
  • Certificates are not written to disk, improving security

Configuration Translation: From Istio CRDs to Envoy Config

VirtualService Translation Example

A user-defined VirtualService:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews-route
spec:
  hosts:
    - reviews
  http:
    - match:
        - headers:
            end-user:
              exact: jason
      route:
        - destination:
            host: reviews
            subset: v2
    - route:
        - destination:
            host: reviews
            subset: v1

This is translated into Envoy RDS configuration:

{
  "name": "reviews.default.svc.cluster.local:9080",
  "virtual_hosts": [
    {
      "name": "reviews.default.svc.cluster.local:9080",
      "domains": ["reviews.default.svc.cluster.local"],
      "routes": [
        {
          "match": {
            "prefix": "/",
            "headers": [
              {
                "name": "end-user",
                "string_match": {
                  "exact": "jason"
                }
              }
            ]
          },
          "route": {
            "cluster": "outbound|9080|v2|reviews.default.svc.cluster.local"
          }
        },
        {
          "match": {
            "prefix": "/"
          },
          "route": {
            "cluster": "outbound|9080|v1|reviews.default.svc.cluster.local"
          }
        }
      ]
    }
  ]
}
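Note that the order of the `routes` array preserves the order of the VirtualService's `http` rules: Envoy evaluates routes top-down and the first match wins, which is why the specific header match must precede the catch-all. A toy sketch of that selection logic (illustrative only, not Envoy code):

```python
def select_cluster(routes, request_headers):
    # Envoy evaluates routes in order; the first route whose required
    # headers all match wins. A route with no header requirements
    # matches everything, so it must come last.
    for route in routes:
        required = route.get("headers", {})
        if all(request_headers.get(k) == v for k, v in required.items()):
            return route["cluster"]
    return None

routes = [
    {"headers": {"end-user": "jason"},
     "cluster": "outbound|9080|v2|reviews.default.svc.cluster.local"},
    {"headers": {},
     "cluster": "outbound|9080|v1|reviews.default.svc.cluster.local"},
]

print(select_cluster(routes, {"end-user": "jason"}))  # -> the v2 cluster
print(select_cluster(routes, {}))                     # -> the v1 cluster
```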

DestinationRule Translation Example

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews-destination
spec:
  host: reviews
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100

This is translated into Envoy CDS configuration:

{
  "name": "outbound|9080|v1|reviews.default.svc.cluster.local",
  "type": "EDS",
  "eds_cluster_config": {
    "service_name": "outbound|9080|v1|reviews.default.svc.cluster.local"
  },
  "circuit_breakers": {
    "thresholds": [
      {
        "max_connections": 100
      }
    ]
  },
  "transport_socket": {
    "name": "envoy.transport_sockets.tls",
    "typed_config": {
      "common_tls_context": {
        "tls_certificate_sds_secret_configs": [
          {
            "name": "default",
            "sds_config": {
              "api_config_source": {
                "api_type": "GRPC",
                "grpc_services": [
                  {
                    "envoy_grpc": {
                      "cluster_name": "sds-grpc"
                    }
                  }
                ]
              }
            }
          }
        ]
      }
    }
  }
}
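The cluster names appearing in both translations follow a fixed convention: `direction|port|subset|FQDN`, with an empty subset segment when no DestinationRule subset applies. A small sketch of that convention (the helper name is illustrative):

```python
def envoy_cluster_name(direction: str, port: int, subset: str, host: str) -> str:
    # Istio names Envoy clusters as direction|port|subset|FQDN;
    # subset is the empty string when no DestinationRule subset applies.
    return f"{direction}|{port}|{subset}|{host}"

name = envoy_cluster_name("outbound", 9080, "v1",
                          "reviews.default.svc.cluster.local")
print(name)  # outbound|9080|v1|reviews.default.svc.cluster.local
```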

Configuration Synchronization and Debugging

Checking Sync Status with proxy-status

$ istioctl proxy-status
NAME                    CDS    LDS    EDS    RDS    ECDS   ISTIOD
frontend-v1-xxx.prod    SYNCED SYNCED SYNCED SYNCED        istiod-abc
reviews-v1-yyy.prod     SYNCED SYNCED SYNCED SYNCED        istiod-abc
ratings-v1-zzz.prod     STALE  SYNCED SYNCED SYNCED        istiod-abc

Status codes:

  • SYNCED: Proxy has received the latest configuration
  • NOT SENT: istiod has not yet sent the configuration
  • STALE: istiod sent the configuration but did not receive an ACK

Comparing Configuration Differences

# Dump the proxy's full current configuration
istioctl proxy-config all PODNAME.NAMESPACE -o json > proxy-config.json

# Passing a pod name to proxy-status diffs istiod's view against the proxy's live config
istioctl proxy-status PODNAME.NAMESPACE

Performance Considerations

Control Plane Scaling

  • istiod can be horizontally scaled (multiple replicas)
  • Each Envoy connects to a single istiod instance
  • If istiod fails, Envoy continues operating with the last received configuration

Optimization in Large-Scale Meshes

  1. Use Sidecar resources: Limit the scope of services each workload needs to know
  2. Set exportTo: Limit CRD visibility scope
  3. Event batching: istiod batches multiple changes within a short period before pushing
  4. Incremental xDS (Delta xDS): Send only changed portions

# Optimize Envoy memory with a Sidecar resource
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: default
  namespace: production
spec:
  egress:
    - hosts:
        - './*' # Same namespace services
        - 'istio-system/*' # Istio system services
        - 'monitoring/prometheus' # Specific external service
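Item 2 above (exportTo) complements the Sidecar resource from the producer side: instead of each consumer limiting what it imports, the owner of a config limits where it is exported. A sketch on a VirtualService:

```yaml
# Restrict this VirtualService's visibility to its own namespace,
# so istiod does not push it to proxies in other namespaces
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews-route
spec:
  exportTo:
    - '.' # current namespace only; the default '*' exports mesh-wide
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
```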

Conclusion

Understanding Istio's internal architecture provides these benefits:

  1. Improved troubleshooting: Quickly diagnose xDS sync issues, sidecar injection failures, certificate expiration, etc.
  2. Performance optimization: Properly leverage Sidecar resources, connection pool tuning, and configuration scope limits
  3. Enhanced security: Understand how mTLS works and configure PeerAuthentication correctly

In the next post, we will dive deeper into the internals of the traffic management engine.