Kubernetes CKA Certification Practice Exam (55 Questions + 10 Practical Simulations)

CKA Exam Overview

The CKA (Certified Kubernetes Administrator) is a hands-on, performance-based exam offered by CNCF (Cloud Native Computing Foundation).

| Item | Details |
| --- | --- |
| Duration | 120 minutes |
| Questions | 17 performance-based tasks |
| Passing Score | 66% or above |
| Format | Online, proctored |
| Validity | 3 years |
| Kubernetes Version | Latest stable release |

Domain Weights

| Domain | Weight |
| --- | --- |
| Cluster Architecture, Installation & Configuration | 25% |
| Workloads & Scheduling | 15% |
| Services & Networking | 20% |
| Storage | 10% |
| Troubleshooting | 30% |

kubectl Command Cheat Sheet

# Cluster info
kubectl cluster-info
kubectl get nodes -o wide
kubectl describe node NODE_NAME

# Pod management
kubectl run nginx --image=nginx --restart=Never
kubectl get pods -A -o wide
kubectl describe pod POD_NAME
kubectl logs POD_NAME -c CONTAINER_NAME
kubectl exec -it POD_NAME -- /bin/sh

# Deployment management
kubectl create deployment myapp --image=nginx --replicas=3
kubectl scale deployment myapp --replicas=5
kubectl rollout status deployment myapp
kubectl rollout undo deployment myapp
kubectl rollout history deployment myapp

# Expose services
kubectl expose deployment myapp --port=80 --type=ClusterIP
kubectl expose deployment myapp --port=80 --type=NodePort

# Edit resources
kubectl edit deployment myapp
kubectl patch deployment myapp -p '{"spec":{"replicas":3}}'

# RBAC
kubectl create serviceaccount mysa
kubectl create clusterrole myrole --verb=get,list,watch --resource=pods
kubectl create clusterrolebinding mybinding --clusterrole=myrole --serviceaccount=default:mysa

# etcd backup/restore
ETCDCTL_API=3 etcdctl snapshot save /backup/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# Node management
kubectl cordon NODE_NAME
kubectl drain NODE_NAME --ignore-daemonsets --delete-emptydir-data
kubectl uncordon NODE_NAME

# Certificate verification
kubeadm certs check-expiration

# jsonpath examples (curly braces must be quoted)
kubectl get pod mypod -o jsonpath='{.spec.nodeName}'
kubectl get nodes -o jsonpath='{.items[*].metadata.name}'

Multiple Choice Questions (Q1 ~ Q55)

Domain 1: Cluster Architecture, Installation & Configuration

Q1. When initializing a Kubernetes cluster with kubeadm init, what is the default pod network CIDR?

A) 10.0.0.0/8 B) 192.168.0.0/16 C) Must be explicitly specified (no default) D) 172.16.0.0/12

Answer: C

Explanation: kubeadm init does not automatically set a default pod network CIDR. You must specify it with the --pod-network-cidr flag. The value depends on your CNI plugin: Calico typically uses 192.168.0.0/16, while Flannel uses 10.244.0.0/16.

Q2. What is the correct etcdctl command to restore an etcd snapshot?

A) etcdctl snapshot load B) etcdctl snapshot restore C) etcdctl backup restore D) etcdctl cluster restore

Answer: B

Explanation: Use ETCDCTL_API=3 etcdctl snapshot restore to restore an etcd snapshot. Specify the restore location with --data-dir, then restart the etcd service with the new data directory configured.

Q3. What is the key difference between ClusterRole and Role?

A) ClusterRole applies to only one namespace B) Role can access cluster-wide resources C) ClusterRole can apply to cluster-scoped resources and across all namespaces D) ClusterRole and Role are functionally identical

Answer: C

Explanation: ClusterRole can grant access to cluster-scoped resources (nodes, persistentvolumes, etc.) and namespace-scoped resources across all namespaces. Role is valid only within a specific namespace.

Q4. What is the difference between RoleBinding and ClusterRoleBinding?

A) RoleBinding cannot reference a ClusterRole B) ClusterRoleBinding applies to a specific namespace only C) RoleBinding grants permissions within a specific namespace; ClusterRoleBinding grants permissions cluster-wide D) Both have identical scope

Answer: C

Explanation: RoleBinding grants access to resources within a specific namespace. ClusterRoleBinding grants access cluster-wide. When a RoleBinding references a ClusterRole, the ClusterRole's permissions apply only within that namespace.
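
As a sketch of that last point, a RoleBinding (hypothetical names below) can reference a reusable ClusterRole while confining its permissions to a single namespace:

```yaml
# Hypothetical example: ClusterRole reused, but scoped to the dev namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods        # hypothetical binding name
  namespace: dev         # permissions apply only in this namespace
subjects:
- kind: ServiceAccount
  name: mysa             # hypothetical ServiceAccount
  namespace: dev
roleRef:
  kind: ClusterRole      # cluster-wide definition, namespace-scoped grant
  name: pod-reader       # hypothetical ClusterRole
  apiGroup: rbac.authorization.k8s.io
```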

Q5. What is the correct order for upgrading a cluster with kubeadm?

A) Control plane → Worker nodes → etcd B) etcd → Control plane → Worker nodes C) Worker nodes → Control plane → etcd D) kubeadm upgrade plan → kubeadm upgrade apply → upgrade kubelet/kubectl

Answer: D

Explanation: The correct upgrade sequence: 1) kubeadm upgrade plan to check available versions, 2) kubeadm upgrade apply vX.Y.Z to upgrade control plane, 3) Upgrade kubelet and kubectl, 4) Run kubeadm upgrade node on each worker, 5) Upgrade worker node kubelet.

Q6. What is the default location for Kubernetes API server certificates?

A) /etc/ssl/kubernetes/ B) /var/lib/kubernetes/ C) /etc/kubernetes/pki/ D) /home/kubernetes/certs/

Answer: C

Explanation: In kubeadm-provisioned clusters, certificates reside in /etc/kubernetes/pki/. This includes apiserver.crt, apiserver.key, ca.crt, ca.key, and other component certificates.

Q7. What does kubectl config use-context do?

A) Creates a new kubeconfig file B) Changes the currently active context C) Adds a new user to the cluster D) Changes the namespace

Answer: B

Explanation: kubectl config use-context CONTEXT_NAME switches the active context in the kubeconfig file. In the CKA exam, you must switch contexts for each question, making this a critical command.

Q8. What is the correct command to add a taint to a node?

A) kubectl label node NODE_NAME key=value:NoSchedule B) kubectl taint node NODE_NAME key=value:NoSchedule C) kubectl annotate node NODE_NAME key=value:NoSchedule D) kubectl mark node NODE_NAME key=value:NoSchedule

Answer: B

Explanation: Use kubectl taint nodes NODE_NAME key=value:effect to add a taint. Effects can be NoSchedule, PreferNoSchedule, or NoExecute. To remove: kubectl taint nodes NODE_NAME key=value:NoSchedule-

Q9. What is the correct way to create a Static Pod?

A) Use the kubectl create pod command B) Place a YAML manifest in the kubelet-watched directory (default: /etc/kubernetes/manifests/) C) Make a direct API call to kube-apiserver D) Write data directly to etcd

Answer: B

Explanation: Static Pods are created by placing Pod manifest files in the directory watched by kubelet (default: /etc/kubernetes/manifests/). kubelet detects the file and runs the Pod. Control plane components like kube-apiserver, etcd, controller-manager, and scheduler run as Static Pods.

Q10. In kubeadm join, what are --token and --discovery-token-ca-cert-hash used for?

A) Generating cluster certificates B) Authentication credentials for a worker node to securely join the control plane C) Configuring network CNI D) Joining the etcd cluster

Answer: B

Explanation: --token is a Bootstrap Token for temporary authentication between worker nodes and the control plane. --discovery-token-ca-cert-hash is a hash of the CA certificate that prevents man-in-the-middle attacks. Regenerate with kubeadm token create --print-join-command.

Domain 2: Workloads & Scheduling

Q11. Which field in a Deployment controls the maximum number of unavailable Pods during a rolling update?

A) maxSurge B) maxUnavailable C) minReadySeconds D) revisionHistoryLimit

Answer: B

Explanation: spec.strategy.rollingUpdate.maxUnavailable specifies the maximum number (or percentage) of Pods that can be unavailable during a rolling update. maxSurge specifies the maximum number of additional Pods that can be created above the desired count. Both default to 25%.
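
A minimal Deployment spec fragment showing both fields (values chosen for illustration):

```yaml
# Rolling update tuning; both fields default to 25% if omitted
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most 1 Pod below the desired count during the update
      maxSurge: 1         # at most 1 extra Pod above the desired count
```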

Q12. Which QoS class is assigned when a Pod's requests and limits are set to identical values?

A) BestEffort B) Burstable C) Guaranteed D) Reserved

Answer: C

Explanation: Guaranteed QoS requires every container in the Pod to have CPU and memory requests equal to their limits. Burstable applies when at least one container sets a request or limit but the Pod does not meet the Guaranteed criteria. BestEffort applies when no container sets any requests or limits at all.
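
A Pod spec fragment that would receive the Guaranteed class (requests equal to limits in every container):

```yaml
# Guaranteed QoS: requests == limits for CPU and memory
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "500m"
        memory: "256Mi"
      limits:
        cpu: "500m"       # equal to the request
        memory: "256Mi"   # equal to the request
```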

Q13. What is the advantage of nodeAffinity over nodeSelector?

A) nodeSelector supports more complex rules B) nodeAffinity supports required/preferred rules and operators like In, NotIn, Exists, etc. C) nodeSelector uses annotations instead of labels D) nodeAffinity is deprecated

Answer: B

Explanation: nodeAffinity is more expressive: it supports requiredDuringSchedulingIgnoredDuringExecution (hard requirements) and preferredDuringSchedulingIgnoredDuringExecution (soft requirements), plus operators In, NotIn, Exists, DoesNotExist, Gt, Lt.
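
A sketch combining a hard requirement with a weighted preference (the disktype and zone labels are hypothetical):

```yaml
# Pod spec fragment: hard rule plus soft preference
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:   # hard requirement
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values: ["ssd", "nvme"]
      preferredDuringSchedulingIgnoredDuringExecution:  # soft preference
      - weight: 1
        preference:
          matchExpressions:
          - key: zone
            operator: In
            values: ["us-east-1a"]
```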

Q14. When should you use a DaemonSet?

A) To maintain a fixed number of Pod replicas B) To run one Pod on each (or selected) node C) To run stateful applications with ordered Pod management D) To run batch jobs

Answer: B

Explanation: DaemonSet ensures one Pod per node across all (or node-selector-matching) nodes. Common use cases include node monitoring agents, log collectors, and network plugins. New nodes automatically get a Pod when they join the cluster.

Q15. What is the naming format for Pods in a StatefulSet?

A) pod-random-suffix B) statefulset-name-0, statefulset-name-1, ... C) statefulset-name-random D) pod-0, pod-1, ...

Answer: B

Explanation: StatefulSet Pods have predictable names: statefulset-name-0, statefulset-name-1, and so on. These stable identities are one of the core features StatefulSet provides, enabling ordered deployment, scaling, and deletion.

Q16. What are the completions and parallelism fields in a Job?

A) completions: simultaneous Pods, parallelism: total tasks B) completions: total tasks to complete, parallelism: simultaneous Pod count C) completions: retry count, parallelism: timeout D) Both fields control retry behavior

Answer: B

Explanation: completions specifies the total number of Pods that must complete successfully. parallelism specifies how many Pods can run simultaneously. Example: completions=10, parallelism=3 processes 10 tasks 3 at a time.
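
The example in the explanation could be written as the following Job manifest:

```yaml
# 10 successful completions, at most 3 Pods running at once
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-job
spec:
  completions: 10    # total successful Pod completions required
  parallelism: 3     # concurrent Pods
  template:
    spec:
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo processing; sleep 5"]
      restartPolicy: Never
```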

Q17. How do you configure timezone in a CronJob?

A) Include timezone info in the spec.schedule field B) Use spec.timeZone with an IANA timezone name (Kubernetes 1.27+) C) Only possible via cluster-wide configuration D) CronJob does not support timezones

Answer: B

Explanation: Since Kubernetes 1.27 (GA), spec.timeZone accepts IANA timezone names like "America/New_York" or "Asia/Tokyo". Before 1.27, all schedules were UTC-only.
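
A CronJob spec fragment illustrating the field:

```yaml
# CronJob spec fragment (Kubernetes 1.27+)
spec:
  schedule: "0 9 * * 1-5"   # 09:00 on weekdays
  timeZone: "Asia/Tokyo"    # IANA name; without it, the schedule is interpreted as UTC
```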

Q18. What is the difference between Pod Affinity and Pod Anti-Affinity?

A) Pod Affinity targets specific nodes; Pod Anti-Affinity targets Pods on specific nodes B) Pod Affinity schedules near specific Pods; Pod Anti-Affinity schedules away from specific Pods C) Both concepts are identical D) Pod Anti-Affinity is deprecated

Answer: B

Explanation: Pod Affinity schedules a Pod on the same node/zone as Pods with matching labels (e.g., co-locating a cache with its app). Pod Anti-Affinity schedules a Pod away from Pods with matching labels (e.g., spreading replicas across nodes for availability).

Q19. In kubectl get pod --field-selector=status.phase=Running, what does field-selector do?

A) Filters by label B) Filters by specific field values on the resource object C) Filters by annotation D) Filters by namespace

Answer: B

Explanation: --field-selector filters by actual resource field values like status.phase, metadata.name, or spec.nodeName. This differs from --selector / -l which filters by labels.

Q20. What is the purpose of a Toleration?

A) Forces a Pod to schedule on a specific node B) Allows a Pod to be scheduled on nodes with matching taints C) Removes Pods from a specific node D) Adds new taints to a node

Answer: B

Explanation: Taints on nodes repel Pods that don't tolerate them. Tolerations on Pods declare they can "tolerate" specific taints. Having a toleration doesn't guarantee the Pod will land on that node — it just permits scheduling there.
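
A Pod spec fragment tolerating the taint key=value:NoSchedule (matching the taint syntax from Q8):

```yaml
# Toleration matching the taint added with: kubectl taint nodes NODE_NAME key=value:NoSchedule
spec:
  tolerations:
  - key: "key"
    operator: "Equal"
    value: "value"
    effect: "NoSchedule"
```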

Domain 3: Services & Networking

Q21. What is the characteristic of a ClusterIP service?

A) Accessible from outside the cluster B) Accessible through a specific port on each node C) Provides a virtual IP accessible only within the cluster D) Automatically creates a load balancer

Answer: C

Explanation: ClusterIP services assign a virtual IP (the Cluster IP) that is only reachable within the cluster. They cannot be accessed externally and are used for inter-service communication. ClusterIP is the default service type.

Q22. What is the allowed port range for a NodePort service?

A) 1-1024 B) 1024-65535 C) 30000-32767 D) 8000-9000

Answer: C

Explanation: The default NodePort range is 30000-32767. This can be changed with the --service-node-port-range flag on kube-apiserver. If not specified, a port is automatically assigned within this range.

Q23. What is the DNS name format for a service in CoreDNS?

A) service-name.cluster.local B) service-name.namespace.svc.cluster.local C) namespace.service-name.cluster.local D) service-name.default.kubernetes.local

Answer: B

Explanation: The fully qualified DNS name for a service is service-name.namespace.svc.cluster.local. Within the same namespace, service-name alone is sufficient. Pod DNS follows the format pod-ip-dashes.namespace.pod.cluster.local.

Q24. What is a prerequisite for applying a NetworkPolicy?

A) kube-proxy must be in IPVS mode B) A CNI plugin that supports NetworkPolicy is required C) The cluster must run on a cloud provider D) An Ingress Controller must be installed

Answer: B

Explanation: NetworkPolicy enforcement requires a compatible CNI plugin. Calico, Cilium, and WeaveNet support NetworkPolicy. Flannel does not natively enforce NetworkPolicy. Without a compatible CNI, creating NetworkPolicy objects has no effect on traffic.

Q25. What is required for an Ingress resource to work?

A) A NodePort service must exist B) A LoadBalancer service must exist C) An Ingress Controller must be deployed in the cluster D) kube-proxy must be running

Answer: C

Explanation: An Ingress resource alone does not process traffic. An Ingress Controller (nginx, Traefik, HAProxy, etc.) must be deployed to watch Ingress resources and handle actual routing.

Q26. What is the default mode for kube-proxy?

A) userspace B) iptables C) IPVS D) ebpf

Answer: B

Explanation: The default kube-proxy mode is iptables. IPVS mode supports more load-balancing algorithms and offers better performance at scale. CNIs like Cilium can handle service routing via eBPF without kube-proxy.

Q27. What does setting externalTrafficPolicy: Local on a service do?

A) Blocks external traffic B) Routes traffic only to Pods on the receiving node, preserving the client IP C) Distributes traffic evenly across all nodes D) Only applies to LoadBalancer services

Answer: B

Explanation: With externalTrafficPolicy: Local, external traffic is delivered only to nodes that host the targeted Pod, preserving the original client IP. However, connections to nodes without the Pod are rejected. The default Cluster policy SNATs traffic and routes it to any node.
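
As a sketch, a NodePort Service with the Local policy (service name and selector are illustrative):

```yaml
# externalTrafficPolicy: Local preserves the client source IP (no SNAT),
# but nodes without a matching Pod reject the connection
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: NodePort
  externalTrafficPolicy: Local
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 80
```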

Q28. What is the role of a CNI (Container Network Interface) plugin?

A) Manages container runtimes B) Handles network connectivity between Pods and IP address assignment C) Manages communication with the Kubernetes API server D) Mounts storage volumes

Answer: B

Explanation: CNI plugins configure network interfaces when Pods are created, assign IP addresses, and set up routing for Pod-to-Pod communication. Flannel, Calico, Cilium, WeaveNet, and Canal are common CNI plugins.

Q29. What is the behavior of a headless service (clusterIP: None)?

A) The service cannot be accessed externally B) DNS returns individual Pod IPs directly, allowing clients to choose Pods C) Routes to a random Pod without load balancing D) Can only be used with StatefulSets

Answer: B

Explanation: A headless service with clusterIP: None has no virtual IP. DNS returns the actual Pod IPs directly. Combined with StatefulSet, each Pod gets a stable DNS name like pod-0.service.namespace.svc.cluster.local.
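
A minimal headless Service manifest (names illustrative):

```yaml
# Headless service: no virtual IP; DNS returns the Pod IPs directly
apiVersion: v1
kind: Service
metadata:
  name: myapp-headless
spec:
  clusterIP: None
  selector:
    app: myapp
  ports:
  - port: 80
```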

Q30. In a NetworkPolicy, what does podSelector: {} (empty selector) mean?

A) Selects no Pods B) Selects all Pods in the namespace C) Selects all Pods across all namespaces D) Selects only system Pods

Answer: B

Explanation: An empty podSelector: {} selects all Pods in the namespace where the NetworkPolicy is deployed. This is commonly used to apply a default-deny policy to all Pods in a namespace.
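
The default-deny pattern mentioned above looks like this:

```yaml
# Default-deny ingress for every Pod in the namespace where this is applied
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}    # empty selector = all Pods in this namespace
  policyTypes:
  - Ingress          # Ingress listed with no rules, so all inbound traffic is denied
```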

Domain 4: Storage

Q31. What happens with a PersistentVolume's Reclaim Policy of Retain when the PVC is deleted?

A) The PV is automatically deleted B) The PV data is wiped and reset C) The PV is retained and must be manually reclaimed by an admin D) The PV is automatically bound to another PVC

Answer: C

Explanation: With the Retain policy, the PV persists after the PVC is deleted. The PV enters the Released state and the data is preserved. An admin must manually delete or reconfigure the PV to make it available again. Delete removes the PV; Recycle is deprecated.

Q32. What does the ReadWriteMany (RWX) access mode mean?

A) Read/write from one node B) Read-only from multiple nodes simultaneously C) Read/write from multiple nodes simultaneously D) Read/write from only one Pod

Answer: C

Explanation: ReadWriteMany (RWX) allows the volume to be mounted read-write by multiple nodes simultaneously. NFS, CephFS, and Azure File support this mode. ReadWriteOnce (RWO) allows read-write from one node; ReadOnlyMany (ROX) allows read-only from multiple nodes.

Q33. What is the primary role of a StorageClass?

A) Manages existing PVs B) Defines storage properties for dynamic PV provisioning C) Directly assigns storage to Pods D) Blocks access to network storage

Answer: B

Explanation: StorageClass defines how PVs are dynamically provisioned when a PVC is created. It specifies the provisioner (e.g., kubernetes.io/aws-ebs), reclaimPolicy, and parameters. volumeBindingMode: WaitForFirstConsumer delays PV creation until a Pod is scheduled.
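
A sketch of a StorageClass using the in-tree AWS EBS provisioner named in the explanation (class name and parameters are illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd              # hypothetical name
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer   # delay PV creation until a Pod is scheduled
```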

Q34. What are the characteristics of an emptyDir volume?

A) Data persists across node restarts B) Deleted when the Pod is removed; useful for sharing data between containers in a Pod C) Can be shared across multiple Pods D) Accesses external storage without a PVC

Answer: B

Explanation: emptyDir is created when a Pod is assigned to a node and deleted when the Pod is removed. It is useful for sharing files between containers within the same Pod. By default it uses node disk; with medium: Memory it uses tmpfs.
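
A sketch of two containers sharing an emptyDir, with the optional tmpfs backing:

```yaml
# Two containers in one Pod sharing an emptyDir volume
apiVersion: v1
kind: Pod
metadata:
  name: shared-cache
spec:
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo hello > /cache/data; sleep 3600"]
    volumeMounts:
    - mountPath: /cache
      name: cache
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - mountPath: /cache
      name: cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory    # tmpfs; omit this line to use node disk instead
```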

Q35. What is the advantage of mounting a ConfigMap as a volume instead of injecting it as environment variables?

A) Faster than environment variable injection B) Changes to the ConfigMap are automatically reflected in the mounted files at runtime C) Can store encrypted data D) Data persists like permanent storage

Answer: B

Explanation: When a ConfigMap is volume-mounted, updates to the ConfigMap are automatically propagated to the mounted files (within kubelet's syncFrequency). ConfigMaps injected as environment variables are not updated until the Pod is restarted.
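
A Pod spec fragment mounting a ConfigMap as files (app-config is a hypothetical ConfigMap name):

```yaml
# Each key in app-config becomes a file under /etc/config
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - mountPath: /etc/config
      name: config
  volumes:
  - name: config
    configMap:
      name: app-config
```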

Domain 5: Troubleshooting

Q36. When a Pod is in CrashLoopBackOff state, what should you check first?

A) Use kubectl get pod to list Pods B) Use kubectl logs to check container logs C) Delete and recreate the Pod D) Restart the node

Answer: B

Explanation: CrashLoopBackOff means the container is repeatedly crashing and being restarted. Check kubectl logs POD_NAME for stdout/stderr output. Use kubectl logs POD_NAME --previous for logs from the last crash. kubectl describe pod shows Events too.

Q37. What are the main causes of and solutions for an ImagePullBackOff error?

A) Insufficient CPU/memory for the Pod B) Incorrect image name/tag or missing credentials for a private registry C) NetworkPolicy blocking traffic D) PVC binding failure

Answer: B

Explanation: ImagePullBackOff causes: 1) wrong image name or tag, 2) image doesn't exist, 3) missing imagePullSecrets for private registry, 4) network connectivity issue. Check kubectl describe pod Events for the specific error message.

Q38. What should you check when a node is in NotReady state?

A) Only check kube-apiserver logs B) kubelet service status, logs, network connectivity, disk/memory pressure C) Only check etcd status D) Delete and recreate Pods

Answer: B

Explanation: Diagnosing NotReady: 1) systemctl status kubelet to check kubelet state, 2) journalctl -u kubelet -f for kubelet logs, 3) Check disk/memory/PID pressure in kubectl describe node, 4) Verify network connectivity, 5) Check container runtime (containerd/docker) status.

Q39. What command checks the health of an etcd cluster?

A) kubectl get etcd B) etcdctl endpoint health C) systemctl status etcd D) kubectl describe etcd

Answer: B

Explanation: ETCDCTL_API=3 etcdctl endpoint health --endpoints=https://127.0.0.1:2379 --cacert=... --cert=... --key=... checks cluster health. etcdctl endpoint status shows per-member status including leader information.

Q40. What is the correct way to run a command inside a running Pod?

A) kubectl exec POD_NAME COMMAND B) kubectl exec -it POD_NAME -- COMMAND C) kubectl run POD_NAME -- COMMAND D) kubectl attach POD_NAME -c COMMAND

Answer: B

Explanation: kubectl exec -it POD_NAME -- /bin/sh opens an interactive shell. -i keeps stdin open, -t allocates a TTY. Add -c CONTAINER_NAME for multi-container Pods. Single command: kubectl exec POD_NAME -- ls /etc

Q41. What command shows Pod resource usage?

A) kubectl get pod --resources B) kubectl top pod C) kubectl describe pod --metrics D) kubectl stats pod

Answer: B

Explanation: kubectl top pod displays CPU and memory usage per Pod. Requires Metrics Server installed in the cluster. kubectl top node shows node-level resource usage.

Q42. What are the main reasons a Pod stays in Pending state?

A) Container is crashing B) Image cannot be found C) Scheduler cannot find a suitable node, or PVC is not bound D) Network connectivity issue

Answer: C

Explanation: Pending Pod causes: 1) Insufficient resources (CPU/memory) on all nodes, 2) taint/toleration mismatch, 3) No node satisfying nodeSelector/affinity, 4) PVC not bound, 5) Scheduler not running. Check kubectl describe pod Events for specifics.

Q43. How do you check kube-apiserver logs?

A) kubectl logs kube-apiserver -n default B) kubectl logs -n kube-system kube-apiserver-NODENAME C) systemctl logs kube-apiserver D) journalctl -u kubernetes

Answer: B

Explanation: The kube-apiserver runs as a Static Pod. Access its logs with kubectl logs -n kube-system kube-apiserver-controlplane (where "controlplane" is the node name). Alternatively: crictl logs CONTAINER_ID.

Q44. Why does a container enter OOMKilled state and how do you fix it?

A) CPU limit exceeded — increase limits.cpu B) Memory limit exceeded — increase limits.memory or optimize app memory usage C) Disk space insufficient — increase PVC size D) Network timeout — add retry settings

Answer: B

Explanation: OOMKilled (Out of Memory Killed) occurs when a container exceeds its limits.memory. Solutions: 1) Increase limits.memory appropriately, 2) Check for memory leaks in the application, 3) For JVM apps, adjust heap size with -Xmx.

Q45. What command shows Kubernetes events sorted by time?

A) kubectl get events B) kubectl get events --sort-by=.metadata.creationTimestamp C) kubectl describe events D) kubectl logs events

Answer: B

Explanation: kubectl get events --sort-by=.metadata.creationTimestamp sorts events chronologically. Filter by namespace: kubectl get events -n NAMESPACE. Watch in real time: kubectl get events --watch. Pod events also appear in kubectl describe pod.

Q46. Where is the kubelet configuration file located?

A) /etc/kubernetes/kubelet.conf B) /var/lib/kubelet/config.yaml C) /etc/kubelet/config.yaml D) /usr/local/kubernetes/kubelet.conf

Answer: B

Explanation: The default kubelet configuration file is /var/lib/kubelet/config.yaml. Systemd service settings are in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf or /etc/default/kubelet. systemctl status kubelet shows the active configuration.

Q47. Why would a PersistentVolumeClaim remain in Pending state?

A) No Pods are running B) No PV satisfies the requirements (storageClass, accessMode, capacity) or dynamic provisioning fails C) Namespace is incorrectly configured D) Node has insufficient disk space

Answer: B

Explanation: PVC Pending causes: 1) Required StorageClass doesn't exist, 2) No PV with the required accessMode, 3) No PV with sufficient capacity, 4) Dynamic provisioning failed, 5) Waiting for Pod scheduling with volumeBindingMode: WaitForFirstConsumer. Check with kubectl describe pvc.

Q48. What additional columns does kubectl get pod -o wide show?

A) Container logs B) IP address, node name, gateway C) Resource usage D) Environment variables

Answer: B

Explanation: The -o wide flag adds IP address, node name (NODE column), Nominated Node, and Readiness Gates to the standard output. Useful for quickly seeing which node a Pod runs on and what IP it has.

Q49. What event message appears when kube-scheduler cannot schedule a Pod?

A) Error B) Warning FailedScheduling C) Normal SchedulingFailed D) Critical NoNodeAvailable

Answer: B

Explanation: When kube-scheduler cannot schedule a Pod, a Warning FailedScheduling event is generated. Check kubectl describe pod POD_NAME Events section for details like "0/3 nodes are available: 3 Insufficient memory."

Q50. What does kubectl rollout restart deployment DEPLOY_NAME do?

A) Rolls back the deployment to the previous version B) Restarts all Pods in a rolling fashion C) Deletes and recreates the Deployment D) Pauses the Pods

Answer: B

Explanation: kubectl rollout restart deployment restarts all Pods in the Deployment using a rolling update strategy. Useful for applying ConfigMap/Secret updates without changing the image or spec. Monitor with kubectl rollout status.

Q51. How do you attach a ServiceAccount to a Pod?

A) kubectl attach B) Set the spec.serviceAccountName field to the ServiceAccount name C) kubectl link D) Add serviceaccount=NAME to Pod labels

Answer: B

Explanation: Set spec.serviceAccountName in the Pod or Deployment spec. If not specified, the default ServiceAccount of the namespace is automatically mounted. The token is available at /var/run/secrets/kubernetes.io/serviceaccount/.
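
A minimal Pod manifest attaching the mysa ServiceAccount from the cheat sheet:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sa-pod
spec:
  serviceAccountName: mysa    # must exist in the Pod's namespace
  containers:
  - name: app
    image: nginx
```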

Q52. How do you create a temporary Pod for network debugging in a Kubernetes cluster?

A) kubectl debug node B) kubectl run tmp --image=busybox --rm -it -- /bin/sh C) kubectl create pod debug D) kubectl network debug

Answer: B

Explanation: kubectl run tmp --image=busybox --rm -it -- /bin/sh creates a temporary debugging Pod. --rm auto-deletes it on exit, -it provides interactive TTY. The nicolaka/netshoot image includes extensive network debugging tools.

Q53. What is the relationship between cluster, user, and context in a kubeconfig file?

A) cluster and user are independent and unrelated to context B) A context combines a cluster and user (plus optional namespace) to define how to access a cluster with specific credentials C) A user contains a cluster D) A cluster contains users and contexts

Answer: B

Explanation: A kubeconfig has three sections: clusters (server address and CA), users (credentials), and contexts (cluster + user + namespace combinations). A context defines which cluster to connect to and with which credentials. kubectl config use-context changes the active context.

Q54. How do you check environment variables inside a running Pod?

A) kubectl get pod -o env B) kubectl exec POD_NAME -- env C) kubectl describe pod --env D) kubectl get configmap

Answer: B

Explanation: kubectl exec POD_NAME -- env lists all environment variables in the container. Use kubectl exec POD_NAME -- printenv VARIABLE_NAME for a specific variable. kubectl describe pod also shows the configured environment variables.

Q55. Which of the following is NOT a control plane component?

A) kube-apiserver B) kube-scheduler C) kubelet D) kube-controller-manager

Answer: C

Explanation: kubelet is a node agent running on every node (control plane and worker nodes), not a control plane component. Control plane components: kube-apiserver, etcd, kube-scheduler, kube-controller-manager. Node components: kubelet, kube-proxy, container runtime.


Practical Simulations (P1 ~ P10)

P1. etcd Backup and Restore

Scenario: Save an etcd snapshot to /opt/etcd-backup.db on the control plane node, then restore it.

# Backup etcd
ETCDCTL_API=3 etcdctl snapshot save /opt/etcd-backup.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# Verify snapshot
ETCDCTL_API=3 etcdctl snapshot status /opt/etcd-backup.db --write-out=table

# Restore to new data directory
ETCDCTL_API=3 etcdctl snapshot restore /opt/etcd-backup.db \
  --data-dir=/var/lib/etcd-restore

# Edit /etc/kubernetes/manifests/etcd.yaml:
# Change --data-dir to /var/lib/etcd-restore
# Update hostPath volumes to match

P2. RBAC Configuration

Scenario: Configure RBAC so that the developer ServiceAccount in namespace dev can get, list, watch, create, update, and delete pods and deployments.

kubectl create namespace dev
kubectl create serviceaccount developer -n dev

kubectl create role developer-role \
  --verb=get,list,watch,create,update,delete \
  --resource=pods,deployments \
  -n dev

kubectl create rolebinding developer-binding \
  --role=developer-role \
  --serviceaccount=dev:developer \
  -n dev

# Verify
kubectl auth can-i get pods --as=system:serviceaccount:dev:developer -n dev
kubectl auth can-i delete deployments --as=system:serviceaccount:dev:developer -n dev
kubectl auth can-i get nodes --as=system:serviceaccount:dev:developer -n dev  # should be no

P3. Multi-Container Pod Logging

Scenario: Create a Pod with nginx and busybox containers in namespace app, and retrieve logs from the busybox container.

# Create the namespace first so the Pod can be placed in it
kubectl create namespace app

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: multi-container
  namespace: app
spec:
  containers:
  - name: nginx
    image: nginx:latest
  - name: busybox
    image: busybox:latest
    command: ['sh', '-c', 'while true; do echo "Hello from busybox"; sleep 5; done']
EOF

kubectl logs multi-container -c busybox -n app
kubectl logs multi-container -c busybox -n app -f
kubectl exec -it multi-container -c nginx -n app -- nginx -v

P4. Node Maintenance (Drain and Uncordon)

Scenario: Put worker-node-1 into maintenance mode and restore it after maintenance.

kubectl get nodes
kubectl cordon worker-node-1
kubectl drain worker-node-1 \
  --ignore-daemonsets \
  --delete-emptydir-data \
  --force

kubectl get pods -o wide -A | grep worker-node-1

# After maintenance is complete
kubectl uncordon worker-node-1
kubectl get nodes

P5. PersistentVolume and PersistentVolumeClaim

Scenario: Create a hostPath PV with 10Gi capacity, then create a PVC and Pod that use it.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual    # match on PV and PVC so they bind even if a default StorageClass exists
  hostPath:
    path: /data/my-pv
EOF

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: manual    # matches the PV above
  resources:
    requests:
      storage: 5Gi
EOF

kubectl get pv my-pv
kubectl get pvc my-pvc

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pvc-pod
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - mountPath: /data
      name: storage
  volumes:
  - name: storage
    persistentVolumeClaim:
      claimName: my-pvc
EOF

P6. NetworkPolicy Configuration

Scenario: Configure a NetworkPolicy so that Pods in the backend namespace only accept traffic on port 8080 from Pods labeled role: frontend in the frontend namespace.

# namespaceSelector matches namespace labels, so label the frontend namespace first
kubectl label namespace frontend name=frontend

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-policy
  namespace: backend
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: frontend
      podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 8080
EOF

kubectl get networkpolicy -n backend
kubectl describe networkpolicy backend-policy -n backend

P7. Deployment Upgrade and Rollback

Scenario: Upgrade the webapp Deployment from nginx:1.25 to nginx:1.26, then roll back if issues arise.

kubectl create deployment webapp --image=nginx:1.25 --replicas=3
kubectl set image deployment/webapp nginx=nginx:1.26
kubectl rollout status deployment/webapp
kubectl rollout history deployment/webapp
kubectl rollout history deployment/webapp --revision=2
kubectl rollout undo deployment/webapp
kubectl rollout undo deployment/webapp --to-revision=1

# Verify current image
kubectl get deployment webapp -o jsonpath='{.spec.template.spec.containers[0].image}'

P8. CronJob Creation

Scenario: Create a CronJob that prints the current time every 5 minutes.

cat <<EOF | kubectl apply -f -
apiVersion: batch/v1
kind: CronJob
metadata:
  name: print-time
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: time-printer
            image: busybox
            command:
            - /bin/sh
            - -c
            - date; echo "CronJob executed successfully"
          restartPolicy: OnFailure
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
EOF

kubectl get cronjob print-time
kubectl get jobs

P9. Ingress Configuration

Scenario: Create an Ingress to make myapp-service (port 80) accessible at myapp.example.com.

kubectl create deployment myapp --image=nginx --replicas=2
kubectl expose deployment myapp --port=80 --name=myapp-service

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-service
            port:
              number: 80
EOF

kubectl get ingress myapp-ingress
kubectl describe ingress myapp-ingress

P10. Certificate Renewal and Cluster Health Check

Scenario: Check certificate expiration dates, renew them, and perform a full cluster health check.

# Check certificate expiration
kubeadm certs check-expiration

# Renew all certificates
kubeadm certs renew all

# Renew individual certificates
kubeadm certs renew apiserver
kubeadm certs renew apiserver-kubelet-client

# Verify with openssl
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text -noout | grep -A2 Validity

# Cluster health check
kubectl get pods -n kube-system
kubectl cluster-info
kubectl get nodes -o wide
kubectl get pods -A
kubectl get pods -A | grep -v Running | grep -v Completed

# Node conditions
kubectl describe nodes | grep -A5 "Conditions:"

Exam Tips

  1. Switch contexts: Always run kubectl config use-context CONTEXT_NAME at the start of each question
  2. Use imperative commands: Faster than writing YAML for most tasks
  3. Template generation: --dry-run=client -o yaml to generate and then edit YAML
  4. vim setup: :set et ts=2 sw=2 nu for YAML editing
  5. Aliases: alias k=kubectl and export do="--dry-run=client -o yaml"
  6. Documentation: kubernetes.io/docs bookmarks are permitted