- 1. Introduction: The Kubernetes Networking We Knew Is Gone
- 2. The Departure of IPVS and the Grand Entrance of nftables
- 3. "Goodbye Sidecars" — The Innovation of Istio Ambient Mesh
- 4. The Evolution of Certification Exams: The Era of 100% Hands-On, More Demanding Than Theory
- 5. Beyond Ingress: Gateway API as the New Routing Standard
- 6. CNI Is No Longer Just a "Pipe": The Center of Observability
- 7. Conclusion: Preparing for the Next Generation of Kubernetes
- References
1. Introduction: The Kubernetes Networking We Knew Is Gone
The complexity of the Kubernetes ecosystem has always loomed like a towering mountain for operators. But now, the very terrain of that mountain is shifting. The arrival of the latest v1.35 release goes beyond mere feature additions — it signals the end of networking standards we took for granted over the past decade.
The answer to "Why should we care about these changes now?" is clear: clinging to legacy approaches leads directly to uncontrollable technical debt. From an architect's perspective, it is time to see through the essence of the disruptive innovations currently underway.
The five critical changes covered in this article are:
- IPVS Deprecation and the Rise of nftables (KEP-5495, KEP-3866)
- Transition to Istio Ambient Mesh — Service mesh without sidecars
- Certification Exams Go 100% Hands-On — Introduction of ICA and changes to CKA/CKAD
- Gateway API — The new routing standard replacing Ingress
- Cilium eBPF CNI — The network interface that became the center of observability
2. The Departure of IPVS and the Grand Entrance of nftables
2.1 Why Is IPVS Being Retired?
IPVS (IP Virtual Server), long responsible for high-performance load balancing in large-scale clusters, has been officially deprecated as of Kubernetes v1.35 (KEP-5495). This is not a simple generational shift but an inevitable consequence of the evolution of the Linux kernel networking stack.
kube-proxy Mode Evolution Timeline:
| Event | Version | KEP |
|---|---|---|
| nftables mode Alpha | v1.29 | KEP-3866 |
| nftables mode Beta | v1.31 | KEP-3866 |
| nftables mode GA | v1.33 | KEP-3866 |
| IPVS mode Deprecated | v1.35 | KEP-5495 |
| IPVS mode Removal (planned) | ~v1.38 | KEP-5344 (under discussion) |
2.2 Technical Comparison: iptables vs IPVS vs nftables
Legacy iptables suffered severe performance degradation in large-scale environments due to its O(N) complexity — requiring linear traversal of all rules as the rule count grew — and the global lock bottleneck that occurred with every rule update.
IPVS solved this with hashmap-based O(1) performance, but it carried the structural debt of maintaining an entirely separate kernel subsystem for networking.
nftables combines the strengths of both approaches, delivering flexibility and performance simultaneously within a modern, unified kernel API as the next-generation standard.
| Category | iptables (Legacy) | IPVS (Deprecated) | nftables (GA) |
|---|---|---|---|
| Data Plane Complexity | O(N) linear traversal | O(1) hashmap | O(1) Verdict Map |
| Control Plane Updates | Full rule rewrite | Incremental updates | Delta-only updates |
| Kernel API | netfilter (legacy) | Separate subsystem | Unified modern API |
| Minimum Kernel Required | 2.6+ | 2.6+ | 5.13+ |
| IPv4/IPv6 Handling | Separate management | Separate management | Unified management |
| K8s v1.35 Status | Default (legacy) | Deprecated | Recommended mode |
2.3 Performance Benchmarks: The Overwhelming Advantage of nftables
According to benchmarks published on the official Kubernetes blog, nftables data plane latency remains constant regardless of the number of services:
Data Plane Latency (first packet, p50):
| Service Count | iptables mode | nftables mode |
|---|---|---|
| 5,000 | ~50-100 μs | ~5-10 μs |
| 10,000 | ~100+ μs | ~5-10 μs |
| 30,000 | ~300+ μs | ~5-10 μs |
The key to this difference lies in the Verdict Map data structure. While iptables creates individual rule chains per service, nftables manages all services through a single hash table:
iptables approach (individual rules per service):
-A KUBE-SERVICES -m comment --comment "ns1/svc1:p80 cluster IP" \
-m tcp -p tcp -d 172.30.0.41 --dport 80 -j KUBE-SVC-XPGD46QRK7WJZT7O
-A KUBE-SERVICES -m comment --comment "ns2/svc2:p443 cluster IP" \
-m tcp -p tcp -d 172.30.0.42 --dport 443 -j KUBE-SVC-GNZBNJ2PO5MGZ6GT
nftables approach (single Verdict Map):
table ip kube-proxy {
    map service-ips {
        type ipv4_addr . inet_proto . inet_service : verdict
        elements = {
            172.30.0.41 . tcp . 80 : goto service-ns1/svc1/tcp/p80,
            172.30.0.42 . tcp . 443 : goto service-ns2/svc2/tcp/p443,
        }
    }
    chain services {
        ip daddr . meta l4proto . th dport vmap @service-ips
    }
}
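The complexity difference between the two approaches can be illustrated with a small, hypothetical Python model (the addresses and chain names mirror the snippets above; this is not real kernel code): the iptables path scans an ordered rule list one entry at a time, while the nftables verdict map resolves in a single hash-table probe, modeled here as a dict lookup.

```python
# Hypothetical model of the two lookup strategies (illustration only).

# iptables-style: an ordered list of rules, checked one by one -> O(N)
iptables_rules = [
    (("172.30.0.41", "tcp", 80), "KUBE-SVC-XPGD46QRK7WJZT7O"),
    (("172.30.0.42", "tcp", 443), "KUBE-SVC-GNZBNJ2PO5MGZ6GT"),
]

def iptables_lookup(packet):
    for match, target in iptables_rules:  # linear traversal of every rule
        if match == packet:
            return target
    return "DROP"

# nftables-style: one verdict map keyed on (daddr, proto, dport) -> O(1)
verdict_map = {
    ("172.30.0.41", "tcp", 80): "service-ns1/svc1/tcp/p80",
    ("172.30.0.42", "tcp", 443): "service-ns2/svc2/tcp/p443",
}

def nftables_lookup(packet):
    return verdict_map.get(packet, "DROP")  # single hash-table probe

pkt = ("172.30.0.42", "tcp", 443)
print(iptables_lookup(pkt))  # KUBE-SVC-GNZBNJ2PO5MGZ6GT
print(nftables_lookup(pkt))  # service-ns2/svc2/tcp/p443
```

Adding a service appends one dict entry rather than rewriting a rule chain, which is why the p50 latency in the table above stays flat as the service count grows.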
2.4 Practical Migration Guide: From IPVS to nftables
Prerequisites
- Kubernetes v1.31 or later (v1.33+ recommended — nftables GA)
- Linux kernel 5.13 or later (RHEL 9+, Ubuntu 22.04+, Debian 12+)
- If using Calico: v3.30 or later
- Recommended to perform during a maintenance window
Step 1: Check Current Mode
kubectl logs -n kube-system daemonset/kube-proxy | grep -i ipvs
# Expected output: "Using ipvs Proxier"
Step 2: Modify the kube-proxy ConfigMap
kubectl edit configmap -n kube-system kube-proxy
Change mode: ipvs to mode: nftables.
Or for kubeadm-based clusters, use the following configuration during cluster initialization:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: v1.33.0
networking:
  podSubnet: '192.168.0.0/16'
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: nftables
Step 3: Restart the kube-proxy DaemonSet
kubectl rollout restart -n kube-system daemonset/kube-proxy
Note: Changing the ConfigMap alone does not apply the changes. You must restart the DaemonSet.
Step 4: Verify nftables Mode Activation
kubectl logs -n kube-system daemonset/kube-proxy | grep -i nftables
Step 5: Verify nftables Rules on the Node
sudo nft list chain ip kube-proxy services
Step 6: If Using Calico CNI — Switch to nftables Data Plane
kubectl patch installation default --type=merge \
-p '{"spec":{"calicoNetwork":{"linuxDataplane":"Nftables"}}}'
kubectl logs -f -n calico-system daemonset/calico-node | grep -i nftables
# Expected output: "Parsed value for NFTablesMode: Enabled"
For AKS (Azure) Clusters
Save the kube-proxy configuration as kube-proxy.json:
{
  "enabled": true,
  "mode": "NFTABLES"
}
az aks update \
--resource-group <resourceGroup> \
--name <clusterName> \
--kube-proxy-config kube-proxy.json
2.5 Migration Considerations
| Item | Description |
|---|---|
| Restart required after ConfigMap change | ConfigMap changes alone are not applied. You must run rollout restart |
| Calico rolling restart | Patching Calico triggers a restart of all calico-node Pods — temporary network disruption |
| Rollback compatibility | Rolling back from nftables to iptables/IPVS requires kube-proxy v1.29+ (includes auto-cleanup code) |
| localhost NodePort behavior change | nftables mode does not enable the route_localnet sysctl |
| When switching to eBPF | When migrating to the eBPF data plane, kube-proxy must first be changed to iptables mode |
"With IPVS support ending, continuing to use it is not simply a matter of maintaining legacy technology. As upstream support diminishes, testing and bug fixes decrease, ultimately leading to unexpected failures and incompatibility with the latest features." — From IPVS to NFTables: A Migration Guide for Kubernetes v1.35
3. "Goodbye Sidecars" — The Innovation of Istio Ambient Mesh
3.1 The Limitations of the Sidecar Model
The sidecar proxy model, once synonymous with the very definition of service mesh, is fading. The approach of injecting an Envoy proxy into every pod hit the following limitations:
- Resource waste: Proxy overhead of ~0.20 vCPU, ~60 MB memory per Pod
- Complex upgrades: All Pods must be restarted when changing the Envoy version
- Application lifecycle interference: Sidecar injection is coupled with Pod deployment
3.2 Ambient Mode Architecture
Istio's Ambient Mode reached GA in Istio v1.24 (November 2024), eliminating these constraints. Announced in September 2022, it was developed over 26 months by Solo.io, Google, Microsoft, Intel, and other contributors.
Core Architecture — Two-Layer Separation:
| Layer | Component | Role |
|---|---|---|
| L4 | ztunnel (DaemonSet) | Shared per-node proxy — mTLS, L4 auth/authz, TCP telemetry |
| L7 | Waypoint Proxy (Deployment) | Optional L7 processing — HTTP routing, traffic shifting, L7 authz |
ztunnel (Zero Trust Tunnel):
- Rust-based lightweight proxy — ~0.06 vCPU, ~12 MB memory per node
- mTLS tunneling via HBONE (HTTP-Based Overlay Network Environment) protocol
- Operates as a single DaemonSet per node
- Even intra-node traffic passes through ztunnel for uniform policy enforcement
Traffic Flow Comparison:
[L4 Only - No Waypoint]
Source Pod -> ztunnel(source) --HBONE/mTLS--> ztunnel(dest) -> Dest Pod
[L7 - With Waypoint]
Source Pod -> ztunnel(source) --HBONE--> Waypoint --HBONE--> ztunnel(dest) -> Dest Pod
3.3 Performance Comparison: Ambient vs Sidecar
Latency (1KB HTTP request, Istio v1.24 benchmark):
| Mode | p90 Latency | p99 Latency |
|---|---|---|
| Ambient (L4, ztunnel) | ~0.16 ms | ~0.20 ms |
| Ambient (L7, Waypoint) | ~0.40 ms | ~0.50 ms |
| Sidecar | ~0.63 ms | ~0.88 ms |
Resource Usage (1,000 req/s, 1KB payload):
| Component | CPU | Memory |
|---|---|---|
| Sidecar (Envoy per Pod) | ~0.20 vCPU | ~60 MB |
| Waypoint (L7 Envoy) | ~0.25 vCPU | ~60 MB |
| ztunnel (per node) | ~0.06 vCPU | ~12 MB |
Actual Savings:
- Switching to Ambient mode yields 73% CPU reduction (measured by Solo.io)
- Approximately 1.3 CPU cores saved per namespace
- Over 90% overhead reduction in some use cases
- One user achieved a 45% reduction in container count after migrating from AWS App Mesh
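The per-unit figures in the table above make the savings easy to estimate. A back-of-the-envelope comparison for a hypothetical cluster of 100 Pods spread across 10 nodes (the cluster size is an illustrative assumption, not from a benchmark):

```python
# Back-of-the-envelope mesh CPU overhead, using the per-unit figures above.
# The cluster size (100 Pods, 10 nodes) is a hypothetical example.
SIDECAR_VCPU_PER_POD = 0.20   # Envoy sidecar, one per Pod
ZTUNNEL_VCPU_PER_NODE = 0.06  # ztunnel DaemonSet, one per node

pods, nodes = 100, 10

sidecar_total = pods * SIDECAR_VCPU_PER_POD       # proxy cost scales with Pods
ambient_l4_total = nodes * ZTUNNEL_VCPU_PER_NODE  # proxy cost scales with nodes

print(f"Sidecar:      {sidecar_total:.1f} vCPU")    # 20.0 vCPU
print(f"Ambient (L4): {ambient_l4_total:.1f} vCPU") # 0.6 vCPU
print(f"Reduction:    {1 - ambient_l4_total / sidecar_total:.0%}")
```

The key structural point: sidecar overhead grows with Pod count, while ztunnel overhead grows only with node count, so the gap widens as Pod density increases. (Waypoint proxies, when deployed, add back per-namespace or per-service L7 cost.)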
3.4 Practical Deployment Guide
Prerequisites
# Install Gateway API CRDs
kubectl get crd gateways.gateway.networking.k8s.io &> /dev/null || \
kubectl apply --server-side -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.4.0/experimental-install.yaml
Method 1: istioctl (Quick Start)
istioctl install --set profile=ambient --skip-confirmation
Method 2: Helm (Recommended for Production)
# 1. Install Base chart
helm install istio-base istio/base -n istio-system --create-namespace --wait
# 2. Install istiod (control plane)
helm install istiod istio/istiod -n istio-system --set profile=ambient --wait
# 3. Install CNI agent
helm install istio-cni istio/cni -n istio-system --set profile=ambient --wait
# 4. Install ztunnel DaemonSet
helm install ztunnel istio/ztunnel -n istio-system --wait
Enrolling Workloads in the Mesh
# Namespace-level enrollment — No Pod restart required!
kubectl label namespace default istio.io/dataplane-mode=ambient
# Exclude individual Pods
kubectl label pod <pod-name> istio.io/dataplane-mode=none
Deploying a Waypoint Proxy (When L7 Features Are Needed)
# Deploy a Waypoint to a namespace
istioctl waypoint apply -n default --enroll-namespace
# Deploy a per-service Waypoint
istioctl waypoint apply -n default --name reviews-svc-waypoint
kubectl label service reviews istio.io/use-waypoint=reviews-svc-waypoint
Waypoint Gateway API YAML Example:
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: waypoint
  namespace: default
  labels:
    istio.io/waypoint-for: service
spec:
  gatewayClassName: istio-waypoint
  listeners:
  - name: mesh
    port: 15008
    protocol: HBONE
3.5 Ambient vs Sidecar: When to Choose What
| Scenario | Recommended Mode |
|---|---|
| Only L4 zero-trust encryption needed | Ambient |
| Resource-constrained environments | Ambient |
| Production environments where Pod restarts are difficult | Ambient (no restart needed) |
| Gradual L7 feature adoption | Ambient (selective Waypoint deployment) |
| Multi-cluster / multi-network | Sidecar (Ambient support in development) |
| VM workloads | Sidecar (Ambient VM support in development) |
| Maximum per-Pod security isolation | Sidecar |
4. The Evolution of Certification Exams: The Era of 100% Hands-On, More Demanding Than Theory
Changes in technology are immediately reflected in certification trends. CKA and CKAD have already fully established themselves as performance-based exams.
4.1 CKA (Certified Kubernetes Administrator)
| Domain | Weight |
|---|---|
| Troubleshooting | 30% |
| Cluster Architecture, Installation & Configuration | 25% |
| Services & Networking | 20% |
| Workloads & Scheduling | 15% |
| Storage | 10% |
4.2 CKAD (Certified Kubernetes Application Developer)
| Domain | Weight |
|---|---|
| Application Environment, Configuration & Security | 25% |
| Services & Networking | 20% |
| Application Design & Build | 20% |
| Application Deployment | 20% |
| Application Observability & Maintenance | 15% |
4.3 ICA (Istio Certified Associate) — New Certification
A particularly noteworthy change is the emergence of the ICA (Istio Certified Associate). This is not merely an additional certification — it is designed as an essential gateway to validate the foundational knowledge needed for transitioning to sidecar-free architectures like Ambient Mesh.
ICA Exam Domains (Updated August 12, 2025):
| Domain | Weight | Key Competencies |
|---|---|---|
| Traffic Management | 35% | VirtualService, DestinationRule, Gateway, ServiceEntry, traffic shifting, Circuit Breaking, Failover |
| Securing Workloads | 25% | mTLS, PeerAuthentication, AuthorizationPolicy, JWT authentication |
| Installation & Configuration | 20% | istioctl/Helm installation, Sidecar/Ambient mode deployment, canary/in-place upgrades |
| Troubleshooting | 20% | Control plane diagnostics, data plane diagnostics, configuration issue resolution |
Exam Details:
| Item | Details |
|---|---|
| Istio Version | v1.26 |
| Exam Duration | 2 hours |
| Format | Online proctored, 15-20 hands-on tasks |
| Passing Score | 68% |
| Cost | $250 (includes 1 free retake) |
| Allowed References | Istio official docs, Istio Blog, Kubernetes docs |
Passing Tip: You have approximately 6-8 minutes per task. Secure points by solving familiar tasks first, and hands-on practice with VirtualService/DestinationRule/AuthorizationPolicy configuration is essential. 2-3 months of hands-on experience is recommended.
5. Beyond Ingress: Gateway API as the New Routing Standard
5.1 The Limitations of Ingress
Legacy Ingress lacked L4 protocol support, forcing reliance on NGINX ConfigMaps or complex annotations as workarounds. Gateway API overcomes these limitations and evolves traffic management into a declarative, structured approach.
Gateway API reached GA with v1.0 (October 2023) and continues to evolve through the latest v1.4 (November 2025).
5.2 Role-Oriented Resource Separation (Role-Oriented Design)
The most innovative feature of Gateway API is its role-based resource separation:
| Role | Managed Resource | Responsibility |
|---|---|---|
| Infrastructure Provider (Ian) | GatewayClass | Defines underlying implementation (Envoy, AWS NLB, etc.), cluster scope |
| Cluster Operator (Chihiro) | Gateway | Instantiates load balancer, listener/TLS config, namespace access control |
| Application Developer (Ana) | HTTPRoute / GRPCRoute | Defines service routing rules (paths, headers, weights) |
5.3 Core Resource Guide
| Resource | Scope | Stability | Description |
|---|---|---|---|
| GatewayClass | Cluster | GA (v1) | Defines common configuration for a set of Gateways. Similar to IngressClass |
| Gateway | Namespace | GA (v1) | Defines how traffic is received (addresses, listeners, TLS) |
| HTTPRoute | Namespace | GA (v1) | HTTP/HTTPS traffic routing (hosts, paths, headers) |
| GRPCRoute | Namespace | GA (v1.1+) | Dedicated gRPC traffic routing |
| TCPRoute | Namespace | Experimental | L4 TCP port mapping |
| UDPRoute | Namespace | Experimental | L4 UDP port mapping |
| TLSRoute | Namespace | Experimental | SNI-based TLS connection multiplexing |
Resource Relationships: GatewayClass -> Gateway -> Routes -> Services
5.4 Practical YAML Examples
Canary Deployment — Traffic Split (90/10)
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: canary-split
spec:
  parentRefs:
  - name: production-gateway
  hostnames:
  - 'app.example.com'
  rules:
  - backendRefs:
    - name: app-v1
      port: 8080
      weight: 90
    - name: app-v2
      port: 8080
      weight: 10
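The weight semantics above can be modeled as weighted random backend selection: each request picks a backend with probability weight / sum(weights). A small sketch (the backend names match the manifest; the request count and seed are arbitrary):

```python
import random
from collections import Counter

# Hypothetical model of HTTPRoute backendRefs weights: each request is
# assigned to a backend with probability weight / sum(weights).
backends = [("app-v1", 90), ("app-v2", 10)]

def pick_backend(rng):
    names, weights = zip(*backends)
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(42)  # fixed seed so the illustration is reproducible
counts = Counter(pick_backend(rng) for _ in range(10_000))
print(counts["app-v1"], counts["app-v2"])  # roughly 9000 / 1000
```

Note that the split is probabilistic per request, not per client: without session affinity, one user's requests may land on both versions.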
A/B Testing — Header-Based Routing
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: ab-testing
spec:
  parentRefs:
  - name: main-gateway
  hostnames:
  - 'api.example.com'
  rules:
  # Requests with X-API-Version: v2 header -> route to v2
  - matches:
    - headers:
      - name: X-API-Version
        value: 'v2'
    backendRefs:
    - name: api-v2
      port: 8080
  # Default requests -> route to v1
  - backendRefs:
    - name: api-v1
      port: 8080
Combined Test Traffic + Production Split
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: combined-routing
spec:
  parentRefs:
  - name: production-gateway
  hostnames:
  - 'app.example.com'
  rules:
  # traffic: test header -> route directly to v2
  - matches:
    - headers:
      - name: traffic
        value: test
    backendRefs:
    - name: app-v2
      port: 8080
  # Production traffic -> 90/10 split
  - backendRefs:
    - name: app-stable
      port: 8080
      weight: 90
    - name: app-canary
      port: 8080
      weight: 10
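The routing logic of this combined manifest — a specific header match takes precedence, and the weighted rule acts as the default — can be sketched as a simple decision function (a hypothetical model, not the Gateway implementation's actual matching code):

```python
# Hypothetical model of the combined HTTPRoute above: rules are evaluated
# with more specific matches taking precedence, and the match-less
# weighted rule serving as the default.
def route(headers):
    if headers.get("traffic") == "test":
        return "app-v2"  # test traffic bypasses the canary split entirely
    # production traffic falls through to the 90/10 weighted backendRefs
    return "weighted(app-stable:90, app-canary:10)"

print(route({"traffic": "test"}))  # app-v2
print(route({}))                   # weighted(app-stable:90, app-canary:10)
```

This pattern lets a QA team exercise the canary deterministically (via the header) while real users are exposed to it only at the 10% weight.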
5.5 Migrating from Ingress to Gateway API
Automated conversion using the ingress2gateway tool:
# Install the tool
go install github.com/kubernetes-sigs/ingress2gateway@latest
# Convert existing Ingress resources to Gateway API
ingress2gateway print
Migration Strategy:
- Install Gateway API CRDs
- Choose a Gateway API implementation (Istio, Envoy Gateway, NGINX Gateway Fabric, etc.)
- Auto-convert existing Ingress resources with ingress2gateway
- Run Gateway API and Ingress in parallel for validation
- Remove the existing Ingress after traffic switchover
6. CNI Is No Longer Just a "Pipe": The Center of Observability
6.1 Cilium: The Next-Generation eBPF-Based CNI
CNI (Container Network Interface) has moved beyond its simple role of connecting IPs between pods. Cilium, built on eBPF technology, unifies networking, security, and observability into a single platform.
Cilium Replacing kube-proxy — Full Replacement with eBPF:
helm install cilium cilium/cilium --version 1.19.1 \
--namespace kube-system \
--set kubeProxyReplacement=true \
--set k8sServiceHost=${API_SERVER_IP} \
--set k8sServicePort=${API_SERVER_PORT}
Supported Service Types: ClusterIP, NodePort, LoadBalancer, externalIPs, hostPort
Load Balancing Algorithms:
- Random (default): Random backend selection
- Maglev: Consistent hashing — minimal disruption on backend changes (loadBalancer.algorithm=maglev)
Forwarding Modes:
- SNAT (default): Standard Source NAT
- DSR (Direct Server Return): Backend responds directly to client — eliminates extra hops
- Hybrid: DSR for TCP, SNAT for UDP
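The Maglev property mentioned above — minimal disruption when backends change — comes from how each backend fills a fixed-size lookup table via its own pseudo-random slot preference. A toy sketch of that table-population idea (not Cilium's actual implementation; real tables use a large prime size such as 65521, while m=13 here keeps the demo readable):

```python
import hashlib

# Toy sketch of Maglev lookup-table population.

def _h(name, seed):
    # deterministic per-backend hash (md5 chosen only for the demo)
    return int(hashlib.md5(f"{seed}:{name}".encode()).hexdigest(), 16)

def maglev_table(backends, m=13):
    offset = {b: _h(b, 0) % m for b in backends}
    skip = {b: _h(b, 1) % (m - 1) + 1 for b in backends}  # coprime to prime m
    table = [None] * m
    next_idx = {b: 0 for b in backends}
    filled = 0
    while filled < m:
        for b in backends:
            # each backend claims its next preferred empty slot in turn
            while True:
                slot = (offset[b] + next_idx[b] * skip[b]) % m
                next_idx[b] += 1
                if table[slot] is None:
                    table[slot] = b
                    filled += 1
                    break
            if filled == m:
                break
    return table

t_before = maglev_table(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
t_after = maglev_table(["10.0.0.1", "10.0.0.2"])  # one backend removed
# Because each surviving backend's slot preferences are unchanged, most
# slots it did not inherit from the removed backend keep their owner --
# that is the "minimal disruption" property.
print(t_before)
print(t_after)
```

A flow is mapped to a backend by hashing its 5-tuple into a table slot, so connection-to-backend assignments stay largely stable across backend churn.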
6.2 Hubble: Visualizing the Black Box Network
Cilium's Hubble brings visibility into what was once a black-box network, down to the L7 traffic level:
| Component | Role |
|---|---|
| Hubble (per-node) | Runs on each node, provides local flows via Unix Domain Socket |
| Hubble Relay | Aggregates flows across the entire cluster or multi-cluster |
| Hubble CLI | Command-line flow querying and filtering |
| Hubble UI | Web interface — service dependency maps, flow filtering visualization |
Monitoring Scope:
- L3/L4: Source/destination IP, ports, TCP connection state, DNS resolution issues, connection timeouts
- L7: HTTP requests/responses (method, path, status code), Kafka topics, gRPC calls, DNS queries
- Encrypted traffic filtering support
- Automatic service dependency graph discovery
Hubble-Enabled Installation:
helm install cilium cilium/cilium --version 1.19.1 \
--namespace kube-system \
--set kubeProxyReplacement=true \
--set k8sServiceHost=${API_SERVER_IP} \
--set k8sServicePort=${API_SERVER_PORT} \
--set hubble.enabled=true \
--set hubble.relay.enabled=true \
--set hubble.ui.enabled=true
6.3 Core Security Features of Cilium
Enhanced NetworkPolicy:
- Kubernetes standard NetworkPolicy plus Cilium-specific CiliumNetworkPolicy / CiliumClusterwideNetworkPolicy
- L3, L4, and L7 level policy enforcement (e.g., allow only GET /api/v1/users)
- Identity-based policies instead of IP-based — stable even when Pods change
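What an L7 rule like "allow only GET /api/v1/users" actually means can be shown with a tiny model (enforcement really happens in Cilium's datapath and its embedded proxy, not in Python; the rule list here is a hypothetical illustration):

```python
# Hypothetical model of an L7 allow rule. Unlike an L3/L4 policy, which
# sees only IPs/ports, an L7 policy matches on HTTP method and path.
ALLOWED = [("GET", "/api/v1/users")]

def l7_allowed(method, path):
    return any(method == m and path == p for m, p in ALLOWED)

print(l7_allowed("GET", "/api/v1/users"))     # True: method and path match
print(l7_allowed("DELETE", "/api/v1/users"))  # False: wrong method
print(l7_allowed("GET", "/admin"))            # False: path not allowed
```

An L4-only policy would have to allow the whole port, admitting DELETE /api/v1/users along with GET; the L7 rule narrows access to exactly the intended operation.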
Transparent Encryption (3 Options):
| Method | Description |
|---|---|
| IPsec | Encrypts traffic between all Cilium-managed endpoints |
| WireGuard | Default: Pod-to-Pod. Options: Node-to-Node, Pod-to-Node |
| Ztunnel (Beta) | Transparent encryption and authentication of TCP connections without sidecars |
6.4 CNI Comparison: Flannel vs Calico vs Cilium
| Feature | Flannel | Calico | Cilium |
|---|---|---|---|
| Data Plane | VXLAN / host-gw | iptables or eBPF | eBPF (native) |
| NetworkPolicy | Not supported | L3/L4 | L3/L4/L7 |
| kube-proxy Replacement | Not possible | Not possible | Possible (eBPF) |
| Encryption | Not supported | WireGuard | IPsec, WireGuard, Ztunnel |
| Observability | None | Basic flow logs | Hubble (L3-L7) |
| BGP | Not supported | Native | Supported (v1.10+) |
| Multi-Cluster | Not supported | Federation | Cluster Mesh |
| Resource Usage | Very low | Low-Medium | Medium-High |
| Complexity | Very low | Medium | High |
Selection Guide:
- Flannel: Small-scale dev/test clusters, when NetworkPolicy is not needed
- Calico: On-premises/hybrid environments requiring BGP peering, stable L3/L4 policies
- Cilium: Large-scale high-performance environments, when L7 policies/observability are needed, when you want to eliminate kube-proxy
6.5 CNI Migration Tips: From Calico to Cilium
There is a critical point that architects must not overlook here. When performing a live migration from Calico to Cilium, legacy routing mode must be enabled:
# Cilium Migration Helm Values
ipam:
  mode: 'cluster-pool'
  operator:
    clusterPoolIPv4PodCIDRList: ['10.245.0.0/16']  # Different CIDR from Calico
cni:
  customConf: true
  uninstall: false
policyEnforcementMode: 'never'  # Disable policies during migration
bpf:
  hostLegacyRouting: true  # Critical! Use Linux routing stack
tunnelPort: 8473  # Different port from Calico default (8472)
Migration Process (per node):
- Install Cilium in auxiliary mode (without CNI management)
- Per node: cordon -> drain -> assign Cilium label -> restart Cilium agent -> reboot
- Verify Pods on each node are using the new Cilium CIDR
- After full migration: enable policies, remove Calico, clean up iptables rules (automatic on reboot)
"Choosing the right CNI plugin is not simply a matter of connectivity. It is the most critical architectural decision that determines the overall performance, security posture, and operational complexity of your cluster." — Comparing Kubernetes CNI Plugins: Calico, Cilium, Flannel, and Weave
7. Conclusion: Preparing for the Next Generation of Kubernetes
Kubernetes networking is undergoing a massive transformation:
| Area | Before | After |
|---|---|---|
| kube-proxy | iptables/IPVS | nftables (GA v1.33) |
| Service Mesh | Sidecar Proxy | Ambient Mode (GA Istio 1.24) |
| Ingress | Ingress + Annotation | Gateway API (GA v1.0+) |
| CNI | Simple networking | eBPF-based unified platform (Cilium) |
The ultimate destination of these changes is the achievement of Operational Excellence through performance optimization and operational efficiency.
The transformation has already begun, and the grace period is shorter than you think. Is your cluster prepared for the departure of IPVS? Now is the golden window to commit to a future-oriented architecture.
References
- KEP-3866: nftables-based kube-proxy backend
- KEP-5495: Deprecate ipvs mode in kube-proxy
- NFTables mode for kube-proxy (Kubernetes Official Blog)
- Virtual IPs and Service Proxies (Kubernetes Docs)
- Kubernetes v1.35 Release Highlights
- From IPVS to NFTables: A Migration Guide (Tigera)
- Istio Ambient Mode GA Announcement
- Istio Ambient Mode Architecture
- Istio Ambient Install Guide
- Sidecar or Ambient? (Istio Docs)
- ICA Certification (Linux Foundation)
- Changes Coming to ICA Exam
- Gateway API Official Docs
- Gateway API v1.0 GA Announcement
- ingress2gateway Tool
- Cilium Documentation
- Cilium kube-proxy Replacement
- Hubble Observability
- Cilium Transparent Encryption
- Migrating a Cluster to Cilium
- CNI Comparison: Calico vs Cilium vs Flannel (2025)
- Configure kube-proxy (AKS)