
[Golden Kubestronaut] CKAD Extra 30 Practice Questions - Advanced Scenarios


These 30 additional questions focus on advanced scenarios for the CKAD exam. Topics include multi-container patterns (Ambassador, Adapter), Helm chart troubleshooting, CRDs, admission webhooks, advanced probes, the Gateway API, and more.


Questions 1-10: Advanced Multi-Container Patterns and Helm

Question 1: What is the primary use of the Ambassador container pattern?

What is the Ambassador container pattern used for?

A) Collecting logs and sending them externally
B) Proxying connections to external services on behalf of the main container, simplifying connection management
C) Performing initialization tasks
D) Transforming data and passing it to the main container

Answer: B

In the Ambassador pattern, the main container communicates via localhost, and the Ambassador container proxies connections to external services (e.g., databases, caches). This separates service discovery, connection pooling, TLS handling, etc. from the main application.
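A minimal sketch of the pattern might look like the following; the image names and the Redis port are illustrative assumptions, not part of the exam material:

```yaml
# Hypothetical Ambassador Pod: the app only ever talks to localhost,
# while the ambassador container proxies to the real external Redis.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-ambassador
spec:
  containers:
  - name: main-app
    image: myapp:1.0            # assumed application image
    env:
    - name: REDIS_HOST
      value: "localhost"        # the app is unaware of the real endpoint
  - name: redis-ambassador
    image: redis-proxy:1.0      # assumed proxy image
    ports:
    - containerPort: 6379       # proxies localhost:6379 to external Redis
```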

Question 2: What distinguishes the Adapter container pattern?

What is the characteristic of the Adapter container pattern?

A) It proxies traffic for the main container
B) It transforms the main container's output data into a standardized format
C) It performs the same tasks as the main container
D) It only handles initialization tasks

Answer: B

The Adapter pattern transforms the main container's output (logs, metrics, etc.) into a standardized format. For example, converting various log formats to a common JSON format, or transforming application-specific metrics to Prometheus format.

Question 3: What happens when an Init container fails?

If a Pod has 3 Init containers defined and the second one fails, what happens?

A) The third Init container runs
B) The main container starts immediately
C) The second Init container is retried based on the Pod's restartPolicy, and the third does not run until it succeeds
D) The Pod is immediately deleted

Answer: C

Init containers run sequentially, and each must succeed before the next one starts. If one fails, it is retried based on the restartPolicy. With Always or OnFailure, it retries; with Never, the Pod enters Failed state. All Init containers must succeed before the main container starts.
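A sketch of sequential init containers, assuming a hypothetical `db-service` that must be reachable before the main container starts:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Always        # a failed init container is retried
  initContainers:
  - name: wait-for-db          # runs first; must succeed before the next
    image: busybox:1.36
    command: ['sh', '-c', 'until nc -z db-service 5432; do sleep 2; done']
  - name: run-migrations       # placeholder second init step
    image: busybox:1.36
    command: ['sh', '-c', 'echo migrating']
  containers:
  - name: main                 # starts only after all init containers succeed
    image: nginx:1.25
```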

Question 4: How to override values.yaml in a Helm chart?

How do you override specific values from values.yaml during helm install?

A) You must directly modify the values.yaml file
B) Use the --set flag or -f flag to specify a custom values file
C) Only environment variables can override values
D) You must use ConfigMaps

Answer: B

During helm install or helm upgrade, use --set key=value to override individual values, or -f custom-values.yaml to specify a separate values file. --set takes precedence over -f. For complex overrides, using a file is recommended.
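A hedged example of a custom values file, with the equivalent commands shown as comments (the chart path, release name, and keys are assumptions):

```yaml
# custom-values.yaml (hypothetical chart values)
replicaCount: 3
image:
  tag: "2.1.0"
# Apply with either of:
#   helm install myapp ./chart -f custom-values.yaml
#   helm install myapp ./chart --set replicaCount=3 --set image.tag=2.1.0
```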

Question 5: How to rollback a Helm release?

After helm upgrade causes issues, how do you rollback to a previous version?

A) You must helm delete and reinstall
B) Use the helm rollback command to rollback to a previous revision
C) Use kubectl rollout undo
D) Change values.yaml to previous values and upgrade again

Answer: B

Use helm rollback RELEASE_NAME REVISION to rollback to a specific revision. Check revision history with helm history RELEASE_NAME and specify the desired revision number. The rollback itself is recorded as a new revision.

Question 6: How to preview Helm template rendering results?

How do you check the rendered YAML manifests before installing a Helm chart?

A) Use helm install --dry-run or helm template commands
B) Only kubectl apply --dry-run can be used
C) You must read values.yaml directly
D) Helm does not support previews

Answer: A

helm template renders the chart locally and outputs YAML. helm install --dry-run additionally performs server-side validation during rendering. Neither command creates actual resources, making both useful for pre-deployment verification.

Question 7: How to manage Helm chart dependencies?

How do you manage dependencies on other charts in a Helm chart?

A) Manually copy all YAML files
B) Define them in the dependencies section of Chart.yaml and run helm dependency update
C) Use a requirements.txt file
D) Manage dependencies with kubectl

Answer: B

Define dependent chart names, versions, and repositories in the dependencies section of Chart.yaml. Running helm dependency update downloads dependent charts to the charts/ directory. You can selectively enable dependencies using condition or tags.
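A sketch of a Chart.yaml dependencies section; the Bitnami redis chart is used here purely as a plausible example:

```yaml
# Chart.yaml (excerpt) — hypothetical dependency on a redis subchart
apiVersion: v2
name: myapp
version: 0.1.0
dependencies:
- name: redis
  version: "18.x.x"
  repository: https://charts.bitnami.com/bitnami
  condition: redis.enabled   # toggled from values.yaml
# Then download the subchart into charts/ with:
#   helm dependency update
```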

Question 8: How to share volumes between containers in a multi-container Pod?

How do you share files between a sidecar container and the main container?

A) Sharing is only possible through the network
B) Define an emptyDir volume and mount it in both containers using volumeMounts
C) Direct filesystem access between containers is possible
D) Only ConfigMaps can be used

Answer: B

Define an emptyDir volume at the Pod level and mount it in each container's volumeMounts to share the same directory. emptyDir is created/deleted with the Pod lifecycle, and setting medium: Memory uses tmpfs.
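A minimal sketch of the shared-volume setup; the log path and sidecar image are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-logs
spec:
  volumes:
  - name: logs
    emptyDir: {}               # add "medium: Memory" to use tmpfs instead
  containers:
  - name: main
    image: nginx:1.25
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx  # nginx writes logs here
  - name: log-shipper            # sidecar reading the same directory
    image: busybox:1.36
    command: ['sh', '-c', 'tail -F /logs/access.log']
    volumeMounts:
    - name: logs
      mountPath: /logs           # same volume, different mount path
```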

Question 9: What are Helm hooks used for?

What is the purpose of hooks like pre-install and post-install in Helm?

A) Monitoring Kubernetes events
B) Allowing Jobs or other resources to run at specific points during chart install/upgrade/delete
C) Automatically configuring network policies
D) Managing Pod restart policies

Answer: B

Helm hooks execute actions at specific points in the release lifecycle. For example, pre-install for database migration, post-install for initial data loading, pre-delete for backup operations. Hooks are specified as annotations on resources.
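A sketch of a pre-install migration Job using hook annotations; the Job name and image are assumptions:

```yaml
# Hypothetical pre-install/pre-upgrade migration Job
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "0"                 # ordering among hooks
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: migrate
        image: myapp-migrations:1.0            # assumed migration image
```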

Question 10: How do Sidecar containers differ from Init containers in execution?

What is the behavior of Sidecar containers (initContainers with restartPolicy: Always) introduced in Kubernetes 1.28+?

A) They start simultaneously with the main container
B) They start in Init container order but do not wait to complete before the next runs, and continue running until Pod termination
C) They start after the main container
D) They behave identically to Init containers

Answer: B

In Kubernetes 1.28+, setting restartPolicy: Always on initContainers creates native sidecars. They start in Init container order but the next container proceeds without waiting for completion. They run throughout the Pod lifecycle and terminate after the main container during Pod shutdown.
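The native sidecar syntax is compact; the agent image below is an assumption:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-demo
spec:
  initContainers:
  - name: log-agent
    image: fluent-bit:2.2      # assumed logging-agent image
    restartPolicy: Always      # makes this a native sidecar (1.28+)
  containers:
  - name: main
    image: nginx:1.25          # starts once the sidecar has started
```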


Questions 11-20: CRD, Admission Webhooks, API Migration

Question 11: What is the role of a CRD (CustomResourceDefinition)?

What becomes possible when you create a CRD?

A) You can modify the schema of existing Kubernetes resources
B) You can define new API resource types and create custom resources manageable via kubectl
C) You can modify the Kubernetes core API
D) You can change Pod runtime behavior

Answer: B

Creating a CRD registers a new resource type in the Kubernetes API. CRUD operations are possible via kubectl, and data is stored in etcd. In the Operator pattern, CRDs are used to declaratively define the desired state.

Question 12: How to define validation schemas in a CRD?

How do you set up field validation for custom resources in a CRD?

A) A separate Webhook must be used
B) Define JSON Schema in the CRD's spec.versions.schema.openAPIV3Schema
C) Store validation rules in a ConfigMap
D) Kubernetes does not support custom resource validation

Answer: B

Defining JSON Schema in the openAPIV3Schema field enables automatic validation of field types, required fields, patterns, etc. during custom resource creation/modification. For more complex validation, a Validating Admission Webhook can be added.
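A sketch of a CRD with a validation schema; the group, kind, and field names are all hypothetical:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com        # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            required: ["size"]     # rejected at creation if missing
            properties:
              size:
                type: integer
                minimum: 1         # rejected if size < 1
```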

Question 13: How does a Validating Admission Webhook work?

When is a Validating Admission Webhook called and what does it do?

A) Called after the resource is stored in etcd
B) Called after authentication/authorization passes but before etcd storage, to allow or deny the request
C) Called before kubectl command execution
D) Called at Pod startup

Answer: B

Validating Admission Webhooks are called after the API request passes authentication, authorization, and mutation stages. They inspect the request and decide to allow or deny it. They cannot modify request content; use Mutating Admission Webhooks for modifications.

Question 14: What is the execution order of Mutating and Validating Webhooks?

When both types of webhooks are configured, what is the execution order?

A) Validating runs first
B) Mutating runs first, then Validating runs
C) They run simultaneously
D) The order is random

Answer: B

The admission processing order is: 1) Mutating Admission Webhooks run first to modify the request, 2) Then Validating Admission Webhooks validate the modified request. This order allows Validating to verify defaults added by Mutating.

Question 15: When is API version migration needed?

What needs to be done when a specific resource API version is deprecated in Kubernetes?

A) Nothing needs to be done
B) All manifests and tools using the deprecated API version must be updated to the new API version
C) Kubernetes migrates automatically
D) Downgrade to a previous cluster version

Answer: B

When an API version is deprecated, it is removed after a certain number of releases. You must update the apiVersion in manifests using kubectl convert or manually. All manifests in CI/CD pipelines, Helm charts, and operational scripts must be checked.

Question 16: What is the purpose of kubectl convert?

What is the kubectl convert command used for?

A) Converting container image formats
B) Converting Kubernetes resource manifest API versions to different versions
C) Converting YAML to JSON
D) Encoding Secrets

Answer: B

kubectl convert converts the apiVersion of resource manifests. For example, converting a Deployment from apps/v1beta1 to apps/v1. This plugin requires separate installation and is useful during API migrations.

Question 17: What is the role of failurePolicy in Admission Webhooks?

What does failurePolicy: Ignore mean in an Admission Webhook configuration?

A) If the webhook call fails, the API request is denied
B) If the webhook call fails, the webhook is skipped and the API request is allowed
C) The webhook is disabled
D) Errors are only logged

Answer: B

failurePolicy: Ignore skips the webhook and allows the request when the webhook server is unreachable or times out. failurePolicy: Fail (default) denies the API request on webhook failure. Critical security webhooks typically use Fail, while auxiliary features use Ignore.
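A sketch of a webhook configuration with an explicit failurePolicy; the webhook name, service, and path are assumptions:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: pod-policy                 # hypothetical configuration name
webhooks:
- name: pods.example.com
  failurePolicy: Fail              # deny API requests if the webhook is down
  clientConfig:
    service:
      name: webhook-svc            # assumed in-cluster webhook Service
      namespace: default
      path: /validate
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  admissionReviewVersions: ["v1"]
  sideEffects: None
```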

Question 18: What is the purpose of additionalPrinterColumns in CRDs?

What effect does setting additionalPrinterColumns in a CRD have?

A) The resource storage format changes
B) Custom columns are added to kubectl get output, showing important fields at a glance
C) Validation rules are added to the resource
D) API server performance improves

Answer: B

additionalPrinterColumns allows displaying specific fields of custom resources as columns in kubectl get output. You specify jsonPath to display particular field values from spec or status. This is useful for quickly assessing resource state.
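An excerpt of the relevant CRD field (it sits inside each entry of spec.versions); the jsonPath values assume a hypothetical .status.ready field:

```yaml
# CRD versions[] excerpt — adds READY and AGE columns to kubectl get
additionalPrinterColumns:
- name: Ready
  type: string
  jsonPath: .status.ready          # assumed status field
- name: Age
  type: date
  jsonPath: .metadata.creationTimestamp
```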

Question 19: What is the role of CRD subresources (status, scale)?

What benefit does enabling the status subresource in a CRD provide?

A) The status field is automatically updated
B) Spec and status updates are separated, allowing controllers to update status independently
C) The status field is deleted
D) Validation is disabled

Answer: B

Enabling the status subresource creates a /status endpoint. This separates spec updates from status updates. Users change the spec, and controllers change the status, enabling separation of concerns. Separate RBAC permission management is also possible.

Question 20: How to use namespaceSelector with Webhooks?

How do you make an Admission Webhook apply only to specific namespaces?

A) It must apply to all namespaces
B) Specify label matching conditions in the namespaceSelector of the webhook configuration
C) Separate webhooks must be created for each namespace
D) Add annotations to Pods

Answer: B

Setting matchLabels or matchExpressions in the namespaceSelector field of ValidatingWebhookConfiguration or MutatingWebhookConfiguration applies the webhook only to resources in matching namespaces. This is commonly used to exclude system namespaces like kube-system.
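An excerpt showing one common shape of this selector, using the automatic per-namespace name label to exclude kube-system:

```yaml
# Webhook entry excerpt — skip kube-system via the built-in name label
namespaceSelector:
  matchExpressions:
  - key: kubernetes.io/metadata.name
    operator: NotIn
    values: ["kube-system"]
```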


Questions 21-30: Advanced Probes, Topology, Gateway API

Question 21: What is the purpose of Startup Probes?

When are Startup Probes used?

A) To verify that a container terminated normally
B) To delay liveness/readiness checks until initialization completes for slow-starting applications
C) To manage traffic routing
D) To monitor disk usage

Answer: B

A startup probe runs only during container startup and suppresses the liveness and readiness probes until it succeeds; once it succeeds, it never runs again. For slow-starting applications, such as Java apps or those loading large datasets, this prevents the liveness probe from restarting the container before initialization completes.
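A container-level sketch; the /healthz endpoint and port are assumptions:

```yaml
# Container excerpt: allow up to 30 × 10s = 300s for startup
startupProbe:
  httpGet:
    path: /healthz               # assumed health endpoint
    port: 8080
  failureThreshold: 30
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  periodSeconds: 10              # only begins once startupProbe succeeds
```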

Question 22: How to configure gRPC probes?

How do you set up gRPC-based health check probes in Kubernetes 1.24+?

A) You must use grpcurl command in an exec probe
B) Use the grpc field in the probe to specify the port
C) Only HTTP probes are available
D) TCP socket probes must be used instead

Answer: B

The grpc probe type is enabled by default (beta) from Kubernetes 1.24 and became GA in 1.27. Specifying grpc.port in livenessProbe or readinessProbe performs health checks using the gRPC Health Checking Protocol. Optionally, the service field can specify a particular service.
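A probe excerpt; the port and service name are assumptions:

```yaml
# Container excerpt — gRPC health check against port 9090
livenessProbe:
  grpc:
    port: 9090
    # service: my-service        # optional, assumed service name
  initialDelaySeconds: 5
  periodSeconds: 10
```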

Question 23: What is the purpose of Ephemeral Containers?

What are the characteristics of ephemeral containers created by kubectl debug?

A) They are permanently added to the Pod
B) They temporarily add debugging containers to running Pods for troubleshooting without Pod restart
C) They create new Pods
D) They only run at node level

Answer: B

Ephemeral containers can be added to running Pods using kubectl debug. They cannot specify probes, ports, or resource limits, and allow diagnosing issues using images with debugging tools without Pod restart. They are particularly useful for troubleshooting distroless containers.

Question 24: What is the purpose of Pod Topology Spread Constraints?

What is the purpose of using topologySpreadConstraints?

A) To limit Pod resource usage
B) To evenly distribute Pods across topology domains (nodes, zones, regions)
C) To set network policies
D) To manage storage

Answer: B

topologySpreadConstraints distribute Pods evenly across topology domains like nodes, availability zones, and regions. maxSkew specifies the maximum imbalance between domains, and whenUnsatisfiable determines behavior when constraints cannot be met (DoNotSchedule or ScheduleAnyway).
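A Pod-spec excerpt; the app: web label is an assumption:

```yaml
# Pod spec excerpt — spread matching Pods evenly across zones
topologySpreadConstraints:
- maxSkew: 1                     # at most 1 Pod difference between zones
  topologyKey: topology.kubernetes.io/zone
  whenUnsatisfiable: DoNotSchedule
  labelSelector:
    matchLabels:
      app: web                   # assumed Pod label to count
```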

Question 25: What does maxSkew mean in topologySpreadConstraints?

What does setting maxSkew: 1 mean?

A) Maximum of 1 Pod is created
B) The Pod count difference between topology domains must not exceed 1
C) Deploy to only 1 domain
D) Scheduling delay of 1 second

Answer: B

maxSkew: 1 means the maximum difference in matching Pod count between topology domains must not exceed 1. For example, when distributing Pods across 3 zones, the Pod count difference between zones stays within 1. Smaller values ensure more even distribution.

Question 26: What are the key differences between Gateway API and Ingress?

What improvements does Gateway API offer over traditional Ingress?

A) Gateway API is identical to Ingress
B) Gateway API provides role-based separation (infra admins, app developers), multi-protocol support (HTTP, TCP, gRPC), and more granular routing rules
C) Gateway API only supports HTTP
D) Gateway API can only be used outside the cluster

Answer: B

Gateway API supports role-based management through resources like GatewayClass, Gateway, and HTTPRoute. Infrastructure teams manage GatewayClass and Gateway, while app teams manage Routes. It supports various protocols (HTTP, TCP, TLS, gRPC) and features like header-based routing and weighted distribution.

Question 27: How to set up weight-based traffic splitting in HTTPRoute?

How do you distribute traffic 80:20 between two services in Gateway API's HTTPRoute?

A) Use Ingress annotations
B) Specify two services in HTTPRoute's backendRefs with weight set to 80 and 20 respectively
C) Change the Service selector
D) Use a separate load balancer

Answer: B

Specifying multiple services in HTTPRoute rules.backendRefs with weight fields enables weight-based traffic distribution. This is useful for canary deployments, A/B testing, and gradual rollouts.
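A sketch of the 80:20 split; the route, gateway, and service names are assumptions:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: canary-route             # hypothetical names throughout
spec:
  parentRefs:
  - name: my-gateway
  rules:
  - backendRefs:
    - name: app-stable
      port: 80
      weight: 80                 # ~80% of traffic
    - name: app-canary
      port: 80
      weight: 20                 # ~20% of traffic
```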

Question 28: What is the purpose of projected volumes in Pods?

What is the projected volume type used for?

A) Mounting external storage
B) Combining multiple volume sources (Secret, ConfigMap, downwardAPI, serviceAccountToken) into a single directory mount
C) Encrypting volumes
D) Sharing volumes with other Pods

Answer: B

Projected volumes combine data from multiple sources into a single mount point. Secrets, ConfigMaps, Downward API, and ServiceAccount tokens can be mounted together in one directory. This is frequently used with bound service account tokens.
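A volume excerpt combining the four source types; the ConfigMap and Secret names are assumptions:

```yaml
# Pod volumes excerpt — several sources under one mount point
volumes:
- name: combined
  projected:
    sources:
    - configMap:
        name: app-config         # assumed ConfigMap
    - secret:
        name: app-secret         # assumed Secret
    - downwardAPI:
        items:
        - path: "pod-name"
          fieldRef:
            fieldPath: metadata.name
    - serviceAccountToken:
        path: token
        expirationSeconds: 3600  # bound, auto-rotated token
```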

Question 29: How to implement Canary deployments in Kubernetes?

How do you implement a simple Canary deployment using only native Kubernetes resources?

A) Set the Deployment strategy to Canary
B) Create two Deployments connected to the same Service with identical label selectors, and adjust the canary Deployment replicas to manage traffic ratio
C) Use only StatefulSet
D) Only possible with DaemonSet

Answer: B

Kubernetes has no native Canary deployment strategy. However, creating two Deployments matching the same Service selector and adjusting replica counts enables simple Canary deployments. For more sophisticated control, use weighted routing with Istio or Gateway API.
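A sketch of the canary half of this setup; the labels, names, and image are assumptions, and a matching web-stable Deployment with more replicas is implied:

```yaml
# Both Deployments carry app: web, so a Service selecting app: web
# sends traffic to both. 9 stable + 1 canary replica ≈ 10% canary traffic.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-canary
spec:
  replicas: 1                    # vs. replicas: 9 on web-stable
  selector:
    matchLabels:
      app: web
      version: canary
  template:
    metadata:
      labels:
        app: web                 # matched by the shared Service selector
        version: canary          # distinguishes canary Pods
    spec:
      containers:
      - name: web
        image: myapp:2.0         # assumed canary image
```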

Question 30: How to do node-level debugging with kubectl debug?

How do you access a node's filesystem using kubectl debug?

A) Only accessible via SSH
B) Use kubectl debug node/NODE_NAME --image=IMAGE to create a debugging Pod on the node with access to the host filesystem
C) kubelet must be restarted
D) A DaemonSet must be deployed to the node

Answer: B

The kubectl debug node/NODE_NAME command creates a privileged Pod on the target node and mounts the host filesystem at /host. Using chroot /host provides direct access to the host environment, useful for checking kubelet logs, network configuration, and disk status.