Authors
- Youngju Kim (@fjvbn20031)
CKA Extra 30 Practice Questions - Advanced Scenarios
These 30 additional questions focus on advanced scenarios frequently tested in the CKA exam. Topics include etcd backup/restore, kubeadm upgrade edge cases, complex RBAC, NetworkPolicy troubleshooting, and more.
Questions 1-10: etcd Backup/Restore and Cluster Management
Question 1: What is the correct procedure for restoring an etcd snapshot?
What is the correct procedure when restoring an etcd snapshot and restarting the cluster?
A) Use the existing data-dir as-is after restoring the snapshot
B) Specify a new data-dir during snapshot restore, update the etcd configuration to point to the new data-dir, then restart
C) Only restart kubelet on all nodes after restoring the snapshot
D) Snapshot restore automatically restarts etcd
Answer: B
The etcdutl snapshot restore command restores data to a new data-dir. After restoration, you must update the etcd configuration file (or static pod manifest) to point to the new data-dir path and restart etcd. Overwriting the existing data-dir risks data corruption.
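A minimal sketch of this flow, assuming a kubeadm cluster with the default static Pod manifest path; the snapshot path and new data-dir are illustrative:

```shell
# Restore the snapshot into a NEW data-dir (never overwrite the old one)
etcdutl snapshot restore /backup/etcd-snapshot.db \
  --data-dir /var/lib/etcd-restored

# Then point etcd at the new data-dir: in a kubeadm cluster, edit the hostPath
# volume in /etc/kubernetes/manifests/etcd.yaml from /var/lib/etcd to
# /var/lib/etcd-restored. kubelet watches that directory and recreates the
# etcd static Pod with the restored data automatically.
```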
Question 2: What certificate files are required for etcd backup?
Which combination of certificate-related flags must be specified when running etcdctl snapshot save?
A) --cacert, --cert, --key
B) Only --ca-file and --cert-file
C) Only --tls-cert
D) Backup is possible without certificates
Answer: A
When etcd is TLS-protected, you must specify --cacert (CA certificate), --cert (client certificate), and --key (client key) for etcdctl. Additionally, the --endpoints flag should be used to specify the etcd endpoint.
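An example using the certificate paths a kubeadm install typically uses; verify them against your etcd static Pod manifest before relying on them:

```shell
# Save a snapshot from a TLS-protected etcd (paths are kubeadm defaults)
ETCDCTL_API=3 etcdctl snapshot save /backup/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
```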
Question 3: What precautions are needed when re-adding a removed etcd member?
In a 3-node etcd cluster, after removing a member and re-adding the same node, what must be done?
A) Keep the existing data-dir and just restart
B) Delete the existing data-dir, add as a new member using etcdctl member add, and set initial-cluster-state to existing
C) Just run etcdctl member update
D) Restart the entire cluster
Answer: B
When re-adding a removed member, the existing data directory must be deleted. Then register the new member with etcdctl member add and set initial-cluster-state=existing to join the existing cluster.
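A sketch of the rejoin sequence; the node names, IP addresses, and data-dir path are illustrative:

```shell
# On the rejoining node: the stale data directory must go first
rm -rf /var/lib/etcd/member

# From a healthy member: register the node as a new member
etcdctl member add node3 --peer-urls=https://10.0.0.3:2380

# Then start etcd on node3 with flags such as:
#   --initial-cluster-state=existing
#   --initial-cluster=node1=https://10.0.0.1:2380,node2=https://10.0.0.2:2380,node3=https://10.0.0.3:2380
```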
Question 4: How to recover from etcd quorum loss?
In a 3-node etcd cluster, 2 nodes fail simultaneously causing quorum loss. How do you recover?
A) The remaining 1 node recovers automatically
B) Save a snapshot from the remaining node, create a single-node cluster with --force-new-cluster flag, then add remaining members
C) Restarting all 3 nodes will auto-recover
D) Just run etcdctl defrag
Answer: B
When quorum is lost, normal read/write operations are impossible. You must save a snapshot from the surviving node, create a new single-node cluster using --force-new-cluster, then add the remaining members one by one to reconstruct the cluster.
Question 5: What metrics to check when etcd performance degrades?
Which metrics should be checked first when etcd performance is degraded?
A) etcd_server_proposals_committed_total
B) etcd_disk_wal_fsync_duration_seconds and etcd_disk_backend_commit_duration_seconds
C) etcd_network_client_grpc_received_bytes_total
D) etcd_server_version
Answer: B
The most common cause of etcd performance degradation is disk latency. You should first check WAL fsync latency (etcd_disk_wal_fsync_duration_seconds) and backend commit latency (etcd_disk_backend_commit_duration_seconds). If these values are high, SSD usage or disk I/O optimization is needed.
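One way to inspect these histograms directly, assuming kubeadm's default certificate paths:

```shell
# Scrape etcd's /metrics endpoint and filter the disk-latency histograms
curl -sk https://127.0.0.1:2379/metrics \
  --cacert /etc/kubernetes/pki/etcd/ca.crt \
  --cert /etc/kubernetes/pki/etcd/server.crt \
  --key /etc/kubernetes/pki/etcd/server.key |
  grep -E 'etcd_disk_(wal_fsync|backend_commit)_duration_seconds'
```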
Question 6: What is the correct order for kubeadm upgrade?
What is the correct order when upgrading a cluster from 1.29 to 1.30 using kubeadm?
A) Upgrade worker nodes first, then control plane
B) Upgrade kubeadm -> upgrade kubelet and kubectl -> upgrade worker nodes, starting from control plane
C) Upgrading kubectl alone handles everything automatically
D) Run kubeadm upgrade apply simultaneously on all nodes
Answer: B
The correct order is: 1) Upgrade kubeadm on the control plane first, 2) Run kubeadm upgrade plan to verify, 3) Run kubeadm upgrade apply for control plane components, 4) Upgrade kubelet and kubectl then restart, 5) Drain worker nodes and repeat the same procedure.
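The steps above can be sketched as follows, assuming Debian/Ubuntu packages from the pkgs.k8s.io repositories; the version pin is illustrative:

```shell
# On the first control plane node
apt-get update && apt-get install -y kubeadm=1.30.0-1.1   # 1) upgrade kubeadm first
kubeadm upgrade plan                                       # 2) verify the upgrade path
kubeadm upgrade apply v1.30.0                              # 3) upgrade control plane components
apt-get install -y kubelet=1.30.0-1.1 kubectl=1.30.0-1.1   # 4) upgrade kubelet and kubectl
systemctl daemon-reload && systemctl restart kubelet

# 5) For each worker: drain it, upgrade kubeadm/kubelet there,
#    run `kubeadm upgrade node`, restart kubelet, then uncordon.
```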
Question 7: How to handle drain failure during kubeadm upgrade?
Draining a worker node fails due to DaemonSet Pods. How do you resolve this?
A) Delete the DaemonSet first
B) Add the --ignore-daemonsets flag to the drain command
C) Add only the --force flag
D) Upgrade directly without drain
Answer: B
Adding the --ignore-daemonsets flag to kubectl drain will ignore Pods managed by DaemonSets and proceed with the drain. DaemonSet Pods must always be present on nodes, so excluding them from eviction is the correct approach.
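For example (the node name is illustrative; --delete-emptydir-data is often needed as well when Pods use emptyDir volumes):

```shell
# Evict everything except DaemonSet-managed Pods
kubectl drain node01 --ignore-daemonsets --delete-emptydir-data

# After the upgrade, make the node schedulable again
kubectl uncordon node01
```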
Question 8: Why might kubelet fail to start after kubeadm upgrade?
After running kubeadm upgrade apply, kubelet fails to start. What is the most likely cause?
A) kube-proxy configuration error
B) The kubelet package is still at the old version, causing version mismatch
C) CNI plugin issue
D) etcd is stopped
Answer: B
kubeadm upgrade apply only upgrades control plane components. kubelet and kubectl packages must be upgraded separately. Version mismatch can cause kubelet to fail to start properly.
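A quick way to confirm and fix the skew; the package manager commands assume Debian/Ubuntu and the version pin is illustrative:

```shell
kubectl version            # control plane (server) version
kubelet --version          # package version installed on the node
journalctl -u kubelet -f   # startup errors often point at the mismatch

# Fix: bring the packages up to the control plane version, then restart
apt-get install -y kubelet=1.30.0-1.1 kubectl=1.30.0-1.1
systemctl daemon-reload && systemctl restart kubelet
```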
Question 9: What to note when upgrading a multi-control-plane cluster?
What is the correct method for upgrading an HA cluster with 3 control plane nodes?
A) Run kubeadm upgrade apply simultaneously on all control planes
B) Run kubeadm upgrade apply on the first control plane, then kubeadm upgrade node on the rest
C) Run kubeadm upgrade apply on any node
D) Upgrade worker nodes first, then control plane
Answer: B
In an HA cluster, only run kubeadm upgrade apply on the first control plane node. On remaining control plane nodes, run kubeadm upgrade node to update local kubelet configuration and certificates.
Question 10: How to configure etcd auto-compaction?
etcd data keeps growing and you receive space shortage warnings. How do you set up auto-compaction?
A) Add --auto-compaction-retention=1 --auto-compaction-mode=periodic to etcd startup options
B) You can only compact manually with etcdctl compact
C) etcd auto-compacts so no configuration is needed
D) You need to manually replace boltdb
Answer: A
Setting --auto-compaction-mode=periodic and --auto-compaction-retention on etcd enables automatic compaction at the configured interval. In periodic mode, a retention value of 1 means roughly one hour of history is kept. Revision mode is also available. Note that compaction only frees logical history; running defrag afterwards is recommended to actually reclaim disk space.
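A sketch of the relevant flags and the follow-up defrag, assuming kubeadm's default etcd manifest and certificate paths:

```shell
# Flags added to the etcd command in /etc/kubernetes/manifests/etcd.yaml:
#   - --auto-compaction-mode=periodic
#   - --auto-compaction-retention=1     # keep roughly 1h of history

# After compaction, reclaim the freed space on disk
etcdctl defrag \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
```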
Questions 11-20: RBAC and NetworkPolicy Advanced Scenarios
Question 11: What is the correct RBAC design combining ClusterRole and Role?
To allow reading only Pods in the dev namespace while also reading Node information across all namespaces, what setup is correct?
A) Put all permissions in one ClusterRole and bind with ClusterRoleBinding
B) Create a Role for Pod reading in the dev namespace with RoleBinding, and a ClusterRole for Node reading with ClusterRoleBinding
C) Put both Pod and Node permissions in one Role
D) Grant permissions directly to the ServiceAccount
Answer: B
Namespace-scoped resources (Pods) need Role + RoleBinding, while cluster-scoped resources (Nodes) need ClusterRole + ClusterRoleBinding. A Role only applies to a specific namespace, and Nodes are cluster-scoped resources requiring a ClusterRole.
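This split can be created imperatively; the ServiceAccount dev:app-sa is a hypothetical subject:

```shell
# Namespace-scoped: read Pods only in the dev namespace
kubectl create role pod-reader --verb=get,list,watch --resource=pods -n dev
kubectl create rolebinding pod-reader-binding --role=pod-reader \
  --serviceaccount=dev:app-sa -n dev

# Cluster-scoped: read Nodes across the cluster
kubectl create clusterrole node-reader --verb=get,list,watch --resource=nodes
kubectl create clusterrolebinding node-reader-binding --clusterrole=node-reader \
  --serviceaccount=dev:app-sa
```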
Question 12: What is the purpose of aggregated ClusterRoles in RBAC?
What is the correct description of how ClusterRoles with aggregationRule work?
A) They automatically merge rules from other ClusterRoles into a single ClusterRole
B) They combine Roles from multiple namespaces
C) They automatically reduce ClusterRole permissions
D) They automatically create ServiceAccounts
Answer: A
aggregationRule uses label selectors to automatically merge rules from other ClusterRoles. Built-in ClusterRoles like admin, edit, and view use this mechanism. When adding new CRDs, creating ClusterRoles with appropriate labels automatically adds permissions to existing roles.
Question 13: How to disable automatic ServiceAccount token mounting?
How do you prevent ServiceAccount tokens from being automatically mounted in a specific Pod?
A) Delete the ServiceAccount
B) Set automountServiceAccountToken to false in the Pod spec or ServiceAccount
C) Remove all RBAC permissions
D) Delete the namespace
Answer: B
Set the automountServiceAccountToken field to false in either the Pod spec or the ServiceAccount itself. Pod-level settings take precedence over ServiceAccount-level settings. This prevents the token from being mounted at /var/run/secrets/kubernetes.io/serviceaccount.
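A minimal Pod-level example; the Pod name and image are illustrative:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: no-token-pod
spec:
  automountServiceAccountToken: false   # Pod-level setting overrides the ServiceAccount's
  containers:
  - name: app
    image: nginx
EOF
```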
Question 14: How to allow traffic only from Pods in a specific namespace with NetworkPolicy?
To allow a Pod in the production namespace to receive traffic only from Pods in the monitoring namespace, what should you do?
A) Use namespaceSelector in ingress rules to select the monitoring namespace
B) Only set egress rules
C) Create a policy that allows all traffic
D) Just change the Pod labels
Answer: A
Use namespaceSelector in the NetworkPolicy ingress rules to select the monitoring namespace labels. The monitoring namespace must have appropriate labels set. Using namespaceSelector together with podSelector allows for more fine-grained control.
Question 15: What is the difference between AND/OR combining namespaceSelector and podSelector?
What is the difference between these two NetworkPolicy ingress configurations?
Setup A: namespaceSelector and podSelector in the same from entry
Setup B: namespaceSelector and podSelector as separate from entries
A) No difference
B) Setup A is AND (both conditions must match), Setup B is OR (either condition allows traffic)
C) Setup A is the OR condition
D) Both are OR conditions
Answer: B
When namespaceSelector and podSelector are in the same from entry, it is an AND condition (only Pods matching both conditions are allowed). When they are separate from entries, it is an OR condition (matching either one allows traffic). This distinction is critical in NetworkPolicy and frequently tested.
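A sketch of the AND form; the namespace and labels are illustrative (the kubernetes.io/metadata.name label is set automatically on namespaces since v1.21):

```shell
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-monitoring-prometheus
  namespace: production
spec:
  podSelector: {}
  policyTypes: ["Ingress"]
  ingress:
  - from:
    - namespaceSelector:              # AND: same list item as the podSelector below
        matchLabels:
          kubernetes.io/metadata.name: monitoring
      podSelector:
        matchLabels:
          app: prometheus
EOF
```

For the OR form, turn the podSelector into its own list item under from (prefix it with `-`), so either selector alone admits traffic.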
Question 16: How to set up a Default Deny NetworkPolicy?
How do you default deny all ingress traffic in a specific namespace?
A) All traffic is denied by default without any NetworkPolicy
B) Create a NetworkPolicy with an empty podSelector (selecting all Pods) and empty ingress rules
C) Just leave egress rules empty
D) Add an annotation to the namespace
Answer: B
Setting podSelector to an empty value selects all Pods in the namespace. Specifying Ingress in policyTypes without defining any ingress rules denies all inbound traffic. This is the standard pattern for default deny policies.
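The standard pattern looks like this; the namespace name is illustrative:

```shell
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}                # empty selector = every Pod in the namespace
  policyTypes:
  - Ingress                      # Ingress listed but no ingress rules: deny all inbound
EOF
```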
Question 17: What is a common NetworkPolicy troubleshooting mistake?
You applied a NetworkPolicy but Pods can still communicate. What is the most likely cause?
A) Too many NetworkPolicies are applied
B) The CNI plugin does not support NetworkPolicy (e.g., Flannel)
C) The Pod was not restarted
D) kube-proxy is stopped
Answer: B
Some CNI plugins like Flannel do not support NetworkPolicy. You need a CNI that supports NetworkPolicy such as Calico, Cilium, or Weave Net. Creating NetworkPolicy resources has no effect if the CNI does not support them.
Question 18: What does the impersonate permission mean in RBAC?
What can a user do when granted the impersonate verb permission?
A) Change other users' passwords
B) Make API requests as another user, group, or ServiceAccount
C) Delete user accounts
D) Access all Secrets
Answer: B
The impersonate permission allows using --as and --as-group flags in kubectl commands to impersonate other users or groups. This is useful for testing and debugging RBAC policies but should be granted carefully as it is a powerful permission.
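For example (the user jane, group developers, and ServiceAccount dev:app-sa are hypothetical):

```shell
# Check what another identity is allowed to do
kubectl auth can-i list secrets --as=jane --as-group=developers

# Issue a request as a ServiceAccount to verify its RBAC bindings
kubectl get pods -n dev --as=system:serviceaccount:dev:app-sa
```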
Question 19: How to restrict external traffic using CIDR blocks in NetworkPolicy?
How do you allow egress from a Pod only to a specific external IP range (10.0.0.0/8)?
A) Set CIDR in ingress rules
B) Specify ipBlock with cidr: 10.0.0.0/8 in egress rules
C) Set externalIPs in the Service
D) Set hostNetwork to true on the Pod
Answer: B
Using ipBlock in NetworkPolicy egress rules allows you to control external traffic based on CIDR blocks. The except field can be used to exclude specific subnets. This method is useful for restricting traffic to external services outside the cluster.
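A sketch with illustrative names and labels:

```shell
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-to-internal-range
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api                     # illustrative Pod label
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/8
        except:
        - 10.0.5.0/24              # optionally carve out a subnet
EOF
```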
Question 20: How to use resourceNames for fine-grained RBAC control?
How do you grant read access to only one specific ConfigMap?
A) Grant get permission for all ConfigMaps
B) Specify configmaps in the Role resources and list the specific ConfigMap name in resourceNames
C) Add an annotation to the ConfigMap
D) Separate namespaces
Answer: B
Using the resourceNames field in a Role's rules allows you to restrict permissions to specific named resources. For example, setting resources: configmaps, verbs: get, resourceNames: my-config grants read access only to the ConfigMap named my-config.
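For example (namespace and names illustrative):

```shell
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-my-config
  namespace: dev
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["my-config"]   # only this ConfigMap
  verbs: ["get"]                 # note: list/watch cannot be restricted by resourceNames
EOF
```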
Questions 21-30: CSI, kubelet, Audit Logging, Certificates, Custom Scheduler
Question 21: What is the correct description of CSI driver components?
What are the main components of a CSI (Container Storage Interface) driver?
A) Only the Controller Plugin is needed
B) It consists of a Controller Plugin (provisioning, attaching) and a Node Plugin (mounting, formatting)
C) kubelet has built-in CSI functionality
D) kube-proxy manages storage
Answer: B
CSI drivers consist of a Controller Plugin and a Node Plugin. The Controller Plugin handles volume creation/deletion and attach/detach operations, typically deployed as a Deployment. The Node Plugin handles volume mount/unmount and formatting, deployed as a DaemonSet.
Question 22: What is the role of kubelet eviction threshold settings?
When kubelet's eviction-hard is set with memory.available below 100Mi, what happens?
A) The node automatically reboots
B) When available memory falls below 100Mi, kubelet evicts Pods starting from lowest priority
C) Only blocks scheduling of new Pods
D) All Pods on the node terminate simultaneously
Answer: B
When the eviction-hard threshold is crossed, kubelet immediately evicts Pods to reclaim resources. It ranks candidates by whether their usage exceeds their requests and by Pod priority, which in practice means BestEffort Pods tend to be evicted first, then Burstable Pods exceeding their requests, with Guaranteed Pods last. The node also gets a condition such as MemoryPressure added.
Question 23: What is correct about static Pods in kubelet?
Which statement about static Pods is correct?
A) They can be directly deleted with kubectl
B) kubelet directly manages them by watching manifest files in a specific directory
C) They are managed by a ReplicaSet
D) They are stored directly in etcd
Answer: B
Static Pods are managed by kubelet which watches YAML files in a designated directory (default: /etc/kubernetes/manifests). A mirror Pod is created on the API server for visibility, but kubectl delete will not work as kubelet recreates them. Control plane components typically run as static Pods.
Question 24: What is the RequestResponse audit logging level?
What does the RequestResponse level in Kubernetes API server audit policy do?
A) Records only event metadata
B) Records request metadata and request body
C) Records request metadata, request body, and response body
D) Records nothing
Answer: C
Audit policy levels are: None (no recording), Metadata (metadata only), Request (metadata + request body), RequestResponse (metadata + request body + response body). RequestResponse is the most detailed level, useful for tracking Secret access and critical API calls, but generates very large log volumes.
Question 25: How to configure audit logging for specific resources only?
How do you configure audit logging at RequestResponse level only for Secret resource access?
A) Set all resources to RequestResponse
B) In the audit policy rules, specify secrets as the resource and set level to RequestResponse
C) Just set --audit-log-path on kube-apiserver
D) Configure logging directly in etcd
Answer: B
In the audit policy file rules section, you can specify resource groups and resource names to set logging levels for specific resources. Rules are evaluated top to bottom, and the first matching rule applies.
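A sketch of such a policy file, referenced by the kube-apiserver --audit-policy-file flag; the file path is conventional, not mandatory:

```shell
cat <<'EOF' > /etc/kubernetes/audit-policy.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse        # full bodies, but only for Secrets
  resources:
  - group: ""                   # core API group
    resources: ["secrets"]
- level: Metadata               # everything else: metadata only
EOF
```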
Question 26: How to renew Kubernetes component certificates?
The control plane certificates for a kubeadm-installed cluster are about to expire. How do you renew them?
A) Certificates are permanent and never need renewal
B) Renew with kubeadm certs renew all, then restart control plane components
C) Deleting certificates will auto-regenerate them
D) You need to install a new cluster
Answer: B
Use kubeadm certs renew all to renew all certificates. After renewal, restart the static Pods for kube-apiserver, kube-controller-manager, kube-scheduler, and etcd. kubeadm issues certificates with a default validity period of 1 year.
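A sketch of the renewal flow on a control plane node; briefly moving the static Pod manifests out of the watched directory is one common way to force kubelet to restart them:

```shell
kubeadm certs check-expiration   # see what is about to expire
kubeadm certs renew all          # renew every kubeadm-managed certificate

# Restart the control plane static Pods so they pick up the new certificates
mkdir -p /etc/kubernetes/manifests-stopped
mv /etc/kubernetes/manifests/*.yaml /etc/kubernetes/manifests-stopped/
sleep 20
mv /etc/kubernetes/manifests-stopped/*.yaml /etc/kubernetes/manifests/
```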
Question 27: What does kubeadm certs check-expiration show?
What information does the kubeadm certs check-expiration command display?
A) Only kubelet certificates
B) Expiration dates, remaining validity periods, and CA information for all kubeadm-managed certificates
C) External CA certificate information
D) TLS Secret expiration information
Answer: B
kubeadm certs check-expiration shows certificate expiration dates, remaining validity periods, and issuing CA information for API server, controller manager, scheduler, etcd, and other components in a table format. It is useful for certificate management planning.
Question 28: How to deploy a custom scheduler?
How do you deploy an additional custom scheduler alongside the default scheduler?
A) You must modify the default scheduler
B) Deploy a separate scheduler as a Deployment and specify the scheduler name in the Pod spec's schedulerName field
C) All Pods automatically use the custom scheduler
D) Change kubelet configuration
Answer: B
A custom scheduler can be deployed as a separate Deployment. For a Pod to use a specific scheduler, specify the custom scheduler name in spec.schedulerName. Pods without this specification will be handled by the default scheduler (default-scheduler).
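A minimal Pod using a hypothetical scheduler name my-scheduler; the Pod name and image are illustrative:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: custom-scheduled-pod
spec:
  schedulerName: my-scheduler    # must match the custom scheduler's registered name
  containers:
  - name: app
    image: nginx
EOF
```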
Question 29: How to use scopes in ResourceQuota?
How do you apply resource quotas only to BestEffort QoS Pods?
A) Apply the same quota to all Pods
B) Use scopeSelector in ResourceQuota to specify the BestEffort scope
C) Only PriorityClass can be used for restriction
D) Use only LimitRange
Answer: B
Using the scopeSelector or scopes field in ResourceQuota, you can apply quotas to Pods matching specific conditions like BestEffort, NotBestEffort, Terminating, and NotTerminating scopes. For example, specifying the BestEffort scope applies quotas only to Pods without resource requests/limits.
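For example (namespace and quota value illustrative; the BestEffort scope only supports the pods quota):

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: besteffort-quota
  namespace: dev
spec:
  hard:
    pods: "5"          # at most 5 BestEffort Pods in this namespace
  scopes:
  - BestEffort         # quota counts only BestEffort-QoS Pods
EOF
```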
Question 30: What is the difference between LimitRange and ResourceQuota?
What is the main difference between LimitRange and ResourceQuota?
A) Both limit total resources for the entire namespace
B) LimitRange sets defaults and min/max resources for individual Pods/containers, while ResourceQuota limits total resource usage for the entire namespace
C) ResourceQuota only applies to individual containers
D) LimitRange applies cluster-wide
Answer: B
LimitRange sets default resource requests/limits and min/max values at the individual container or Pod level. ResourceQuota limits the total amount of resources (CPU, memory, Pod count, etc.) that can be used across the entire namespace. Using both together enables fine-grained resource management.
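A LimitRange illustrating the per-container side of this contrast; the namespace and values are illustrative:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: LimitRange
metadata:
  name: container-defaults
  namespace: dev
spec:
  limits:
  - type: Container
    default:               # applied as limits when a container specifies none
      memory: 256Mi
    defaultRequest:        # applied as requests when none are given
      memory: 128Mi
    max:                   # hard per-container ceiling
      memory: 1Gi
EOF
```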