AWS DevOps Engineer Professional (DOP-C02) Practice Exam: 75 Questions

This practice exam covers the key domains of the AWS DevOps Engineer Professional (DOP-C02) exam, including SDLC Automation, Configuration Management, Monitoring, and High Availability.


Question 1

A company uses AWS CodePipeline for CI/CD. They want to automatically roll back a deployment if error rates exceed 5% in the 10 minutes after deployment. What is the MOST automated solution?

A) Set up CloudWatch alarms and manually trigger rollback
B) Use CodeDeploy with automatic rollback triggered by CloudWatch alarms
C) Use CodePipeline approval actions to pause for monitoring
D) Use AWS Lambda to monitor and manually invoke rollback

Answer: B

Explanation: CodeDeploy supports automatic rollback that can be triggered by CloudWatch alarms. You configure the deployment group to monitor specific CloudWatch alarms (like error rate) and automatically roll back if the alarm enters the ALARM state within a defined period. This is the most automated and native solution without requiring manual intervention or custom Lambda functions.
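As a sketch, a CodeDeploy deployment group with alarm-triggered automatic rollback might look like this in CloudFormation (the application, service role, and alarm name are hypothetical placeholders):

```yaml
Resources:
  DeploymentGroup:
    Type: AWS::CodeDeploy::DeploymentGroup
    Properties:
      ApplicationName: !Ref MyApplication         # hypothetical CodeDeploy application
      ServiceRoleArn: !GetAtt CodeDeployRole.Arn  # hypothetical service role
      AlarmConfiguration:
        Enabled: true
        Alarms:
          - Name: HighErrorRateAlarm              # CloudWatch alarm tracking the 5% error rate
      AutoRollbackConfiguration:
        Enabled: true
        Events:
          - DEPLOYMENT_FAILURE
          - DEPLOYMENT_STOP_ON_ALARM              # roll back when the alarm fires mid-deployment
```

If `HighErrorRateAlarm` enters the ALARM state during the deployment, CodeDeploy stops the deployment and redeploys the last known-good revision without any manual step.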


Question 2

A team uses AWS CodeBuild for their CI pipeline. They notice build times have increased significantly because dependencies are downloaded on every build. How can they reduce build times?

A) Increase CodeBuild compute type
B) Use CodeBuild local caching or S3 caching for dependencies
C) Use a larger EC2 instance for builds
D) Run builds in parallel

Answer: B

Explanation: CodeBuild supports caching to store reusable build artifacts (like npm modules, Maven dependencies) between builds. Caches can be stored locally in the build environment or in S3. This significantly reduces build times by avoiding re-downloading unchanged dependencies. Increasing compute type (A) speeds up the build itself but not the download time. EC2 (C) is not how CodeBuild works. Parallel builds (D) don't reduce individual build time.
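For an npm project, the `cache.paths` section of the buildspec names the directories to persist between builds; the cache type (local or S3) is selected on the CodeBuild project itself. A minimal sketch:

```yaml
version: 0.2
phases:
  install:
    commands:
      - npm ci          # restores from cache when node_modules is unchanged
  build:
    commands:
      - npm run build   # hypothetical build script
cache:
  paths:
    - 'node_modules/**/*'
```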


Question 3

A company needs to deploy a CloudFormation stack to 50 AWS accounts across 5 regions simultaneously. Which service simplifies this?

A) CloudFormation Nested Stacks
B) AWS CloudFormation StackSets
C) AWS CDK
D) AWS Service Catalog

Answer: B

Explanation: AWS CloudFormation StackSets allow you to create, update, or delete CloudFormation stacks across multiple accounts and regions with a single operation. It's specifically designed for multi-account, multi-region deployments. Nested Stacks (A) organize stacks hierarchically but within one account. CDK (C) generates CloudFormation templates but doesn't manage cross-account deployment. Service Catalog (D) manages product portfolios.


Question 4

A company wants to enforce that all EC2 instances have a specific tag before they can be launched. How should they implement this?

A) Use IAM policies with tag-based conditions
B) Use AWS Config rules to remediate missing tags
C) Use AWS Service Control Policies (SCPs) to require tags
D) Use a combination of IAM policies with aws:RequestTag condition and tag policies in AWS Organizations

Answer: D

Explanation: The most comprehensive approach uses IAM policies with the aws:RequestTag condition key to deny resource creation without required tags, combined with AWS Organizations tag policies to standardize tag values. IAM policies alone (A) require tagging on creation but don't standardize values. Config rules (B) are detective, not preventive. SCPs (C) can deny actions but require careful configuration.
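The IAM half of this pattern can be sketched as a deny statement that blocks instance launches when a required tag is absent (the `CostCenter` tag key is an example):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyRunInstancesWithoutCostCenterTag",
      "Effect": "Deny",
      "Action": "ec2:RunInstances",
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "Null": { "aws:RequestTag/CostCenter": "true" }
      }
    }
  ]
}
```

The `Null` condition evaluates to true when no `CostCenter` tag is supplied in the request, so the launch is denied; an Organizations tag policy then standardizes the allowed values for that key.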


Question 5

A company's CodePipeline deploys to production using CodeDeploy. They want a deployment strategy that deploys to 25% of instances at a time, waits for health checks, then proceeds. Which CodeDeploy deployment configuration should they use?

A) CodeDeployDefault.AllAtOnce
B) CodeDeployDefault.HalfAtATime
C) CodeDeployDefault.OneAtATime
D) Custom deployment configuration with 25% batch size

Answer: D

Explanation: CodeDeploy lets you create custom deployment configurations that specify the number or percentage of instances that must remain healthy during a deployment. Because the batch size is expressed through the minimum healthy hosts value, a custom configuration with minimum healthy hosts set to 75% deploys to 25% of instances at a time. HalfAtATime (B) deploys to 50% at a time. OneAtATime (C) deploys to one instance at a time. AllAtOnce (A) deploys to all instances simultaneously.


Question 6

A team wants to implement GitOps for their Kubernetes workloads on EKS. Which AWS service or combination provides this capability?

A) AWS CodePipeline with CodeDeploy
B) AWS CodePipeline with Flux or ArgoCD
C) AWS CodeCommit with Lambda
D) AWS Elastic Beanstalk

Answer: B

Explanation: GitOps for Kubernetes uses tools like Flux or ArgoCD that continuously reconcile the desired state in Git with the actual state in Kubernetes. AWS CodePipeline can trigger these tools or they can work independently, continuously pulling from CodeCommit/GitHub. CodePipeline with CodeDeploy (A) is for traditional deployments. Lambda (C) is not a GitOps tool. Elastic Beanstalk (D) is for managed application deployments.


Question 7

A company needs to store sensitive configuration values used in CodeBuild build specs. The values should be encrypted and access-controlled. Where should they store these values?

A) In the CodeBuild environment variables (plaintext)
B) In the buildspec.yml file in the repository
C) In AWS Secrets Manager or SSM Parameter Store, referenced in the buildspec
D) In an S3 bucket encrypted with SSE-S3

Answer: C

Explanation: AWS Secrets Manager or SSM Parameter Store SecureString parameters provide encrypted storage with IAM-based access control. CodeBuild can natively reference Secrets Manager secrets and SSM parameters in environment variables, retrieving them at build time. Storing plaintext in environment variables (A) exposes them in logs. Storing in the repository (B) is a security risk. S3 (D) requires additional code to retrieve and doesn't provide the same access control integration.
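The buildspec `env` block references both stores directly; CodeBuild resolves the values at build start and injects them as environment variables. A sketch (the parameter path, secret name, and JSON key are hypothetical):

```yaml
version: 0.2
env:
  parameter-store:
    DB_HOST: /myapp/prod/db-host           # hypothetical SecureString parameter
  secrets-manager:
    DB_PASSWORD: prod/myapp/db:password    # hypothetical secret name and JSON key
phases:
  build:
    commands:
      - ./run-build.sh   # DB_HOST and DB_PASSWORD are available as env vars here
```

The CodeBuild service role needs `ssm:GetParameters` and `secretsmanager:GetSecretValue` (plus `kms:Decrypt` for the relevant keys) for this resolution to succeed.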


Question 8

A company uses AWS CodeArtifact to manage their npm packages. They want to ensure that CodeBuild uses CodeArtifact as the npm registry. What do they need to configure?

A) Set the npm registry URL in package.json
B) Configure the npm registry using the CodeArtifact login command in the pre-build phase
C) Create a VPC endpoint for CodeArtifact
D) Use CodeBuild environment variables to set the registry

Answer: B

Explanation: AWS CodeArtifact provides a login command (aws codeartifact login --tool npm) that configures npm to use CodeArtifact as the registry and retrieves a temporary authorization token. This command should be run in the pre-build phase of the buildspec.yml. Simply setting the URL in package.json (A) doesn't handle authentication. VPC endpoints (C) are for network connectivity. Environment variables alone (D) don't configure the npm client.
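In the buildspec this looks roughly like the following (domain and repository names are placeholders):

```yaml
version: 0.2
phases:
  pre_build:
    commands:
      # Configures npm to use CodeArtifact and fetches a temporary auth token
      - aws codeartifact login --tool npm --domain my-domain --repository my-repo
  build:
    commands:
      - npm ci
      - npm run build   # hypothetical build script
```

The token the login command retrieves is short-lived (12 hours by default), so running it in `pre_build` on every build keeps authentication fresh.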


Question 9

A company needs to implement a pipeline that can deploy infrastructure using CloudFormation with a manual approval stage before production deployment. They also need to notify the team via email when approval is needed. Which services should they combine?

A) CodePipeline + SNS + Manual Approval action
B) CodePipeline + SQS + Lambda
C) CodeCommit + CloudWatch Events + Email
D) Systems Manager + SNS

Answer: A

Explanation: AWS CodePipeline natively supports Manual Approval actions. When the pipeline reaches the approval stage, it can send a notification via SNS (which can trigger email) and pauses until a reviewer approves or rejects. This is the native, purpose-built solution for pipeline approvals with notifications. SQS+Lambda (B) adds unnecessary complexity. The other options (C, D) don't provide native pipeline approval functionality.


Question 10

A company wants to ensure their AWS infrastructure always complies with their security policies. They need automatic remediation when drift is detected. Which service should they use?

A) AWS CloudTrail
B) Amazon Inspector
C) AWS Config with remediation actions
D) AWS Security Hub

Answer: C

Explanation: AWS Config continuously monitors resource configurations and evaluates them against Config Rules. When rules are violated, Config can automatically trigger remediation actions (using Systems Manager Automation documents) to bring resources back into compliance. CloudTrail (A) logs API calls. Inspector (B) scans for vulnerabilities. Security Hub (D) aggregates findings but doesn't automatically remediate.


Question 11

A company runs a microservices application on ECS. They need to perform blue/green deployments with automatic rollback if the new version fails health checks. Which approach is BEST?

A) ECS with CodeDeploy blue/green deployment type
B) CloudFormation with rolling updates
C) ECS rolling update deployment
D) Manual update with ECS task definition revision

Answer: A

Explanation: ECS integrates with CodeDeploy to provide blue/green deployments. CodeDeploy shifts traffic from the old (blue) to the new (green) task set gradually, monitors health checks, and automatically rolls back if health checks fail. CloudFormation rolling updates (B) are for EC2 Auto Scaling. ECS rolling updates (C) are less sophisticated and harder to roll back. Manual updates (D) are error-prone.


Question 12

A DevOps team needs to manage configuration of hundreds of EC2 instances including software installation and configuration changes. They want a fully managed solution that doesn't require agents. Which service is BEST?

A) Ansible on a control node
B) AWS Systems Manager Run Command and State Manager
C) AWS OpsWorks Chef/Puppet
D) Custom scripts via EC2 user data

Answer: B

Explanation: AWS Systems Manager Run Command allows running commands across multiple EC2 instances without needing SSH access or managing an Ansible control node. State Manager provides desired state configuration to maintain instance configurations. The SSM Agent is pre-installed on Amazon Linux/Windows AMIs. Ansible (A) requires managing a control node. OpsWorks (C) requires Chef/Puppet expertise. EC2 user data (D) only runs at instance launch.


Question 13

A company's CodePipeline pipeline fails because a CloudFormation stack deployment fails. They want to investigate why the stack failed without losing the failed state. What should they do?

A) Check CloudTrail logs for the API calls
B) Check the CloudFormation stack events in the console or CLI
C) Re-run the pipeline and add debug logging
D) Check CodeBuild logs

Answer: B

Explanation: CloudFormation stack events provide a detailed timeline of what happened during stack creation/update, including which resource failed, why it failed, and the error message. The stack remains in a failed state (ROLLBACK_COMPLETE or UPDATE_ROLLBACK_COMPLETE) until you take action, preserving the failure information. CloudTrail (A) shows the API calls but not resource-level failures. CodeBuild logs (D) are for build stage failures.


Question 14

A company needs to implement continuous compliance monitoring for their AWS resources. They want to be notified immediately when a resource violates a compliance policy. What should they set up?

A) CloudTrail with EventBridge rules
B) AWS Config rules with SNS notifications
C) Amazon Inspector with SNS
D) AWS Security Hub with EventBridge

Answer: B

Explanation: AWS Config rules continuously evaluate resource configurations. Config can be configured to send notifications via SNS when the compliance status changes (e.g., from COMPLIANT to NON_COMPLIANT). This provides real-time notifications when policies are violated. CloudTrail with EventBridge (A) detects API calls, not configuration compliance. Inspector (C) is for vulnerability scanning. Security Hub (D) aggregates findings from multiple services.


Question 15

A company wants to implement Infrastructure as Code using AWS CDK. They want to reuse infrastructure components across multiple applications. How should they structure their CDK code?

A) Copy and paste CloudFormation templates
B) Create CDK Constructs and publish them as libraries
C) Use CDK Aspects for reuse
D) Use CloudFormation nested stacks

Answer: B

Explanation: AWS CDK Constructs are reusable infrastructure components that can be packaged as libraries and shared via npm, PyPI, or Maven. Teams can create their own construct libraries that encapsulate best practices (e.g., a secure S3 bucket construct with encryption and versioning enabled). CDK Aspects (C) are for traversing and modifying CDK trees (like applying tags). CloudFormation templates (A, D) are not CDK.


Question 16

A company's Lambda function is deployed via CodePipeline. After deploying a new version, they want 10% of traffic to go to the new version for 5 minutes; if no errors occur in that window, all remaining traffic should then shift to the new version. Which CodeDeploy deployment type should they use?

A) Linear10PercentEvery10Minutes
B) Canary10Percent5Minutes
C) AllAtOnce
D) Linear10PercentEvery1Minute

Answer: B

Explanation: CodeDeploy for Lambda supports canary deployments. Canary10Percent5Minutes shifts 10% of traffic to the new Lambda version, waits 5 minutes, and if no CloudWatch alarms are triggered, shifts the remaining 90%. This matches the requirement. Linear deployments (A, D) shift traffic gradually in equal increments. AllAtOnce (C) shifts all traffic immediately.
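With AWS SAM, which wires the Lambda alias, CodeDeploy, and rollback alarms together, this canary configuration might be sketched as follows (the function, alarm, and paths are hypothetical):

```yaml
Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler          # hypothetical handler
      Runtime: python3.12
      CodeUri: ./src
      AutoPublishAlias: live        # SAM publishes a new version and shifts the alias
      DeploymentPreference:
        Type: Canary10Percent5Minutes
        Alarms:
          - !Ref ErrorsAlarm        # hypothetical CloudWatch alarm; firing triggers rollback
```

SAM creates the CodeDeploy application and deployment group behind the scenes; the alias receives 10% of traffic for 5 minutes, then the remaining 90% if `ErrorsAlarm` stays in the OK state.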


Question 17

A company uses AWS CodeCommit. They want to enforce code review - no direct commits to the main branch, all changes must go through pull requests reviewed by at least 2 people. How do they enforce this?

A) Use IAM policies to deny codecommit:GitPush to main branch
B) Use CodeCommit approval rule templates
C) Use branch protection rules in CodeCommit
D) Both A and B

Answer: D

Explanation: Enforcing pull requests in CodeCommit requires two components: (1) IAM policies that deny direct push to the main branch for regular developers, and (2) CodeCommit approval rule templates that require a minimum number of approvals on pull requests before they can be merged. Approval rule templates alone don't prevent direct pushes. IAM policies alone don't enforce the approval count on PRs.
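The IAM half follows a documented AWS pattern: deny push and merge actions whenever the request references the protected branch (the repository name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyDirectChangesToMain",
      "Effect": "Deny",
      "Action": [
        "codecommit:GitPush",
        "codecommit:DeleteBranch",
        "codecommit:PutFile",
        "codecommit:MergeBranchesByFastForward",
        "codecommit:MergeBranchesBySquash",
        "codecommit:MergeBranchesByThreeWay"
      ],
      "Resource": "arn:aws:codecommit:*:*:MyRepo",
      "Condition": {
        "StringEqualsIfExists": {
          "codecommit:References": ["refs/heads/main"]
        },
        "Null": { "codecommit:References": "false" }
      }
    }
  ]
}
```

The `Null` condition ensures the deny also applies to pushes that don't carry reference information, closing a bypass; an approval rule template requiring 2 approvers then governs the pull requests themselves.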


Question 18

A company wants to monitor the performance of their application and automatically scale based on custom metrics (like queue depth). Which combination of AWS services achieves this?

A) CloudWatch custom metrics + Application Auto Scaling + target tracking scaling policy
B) X-Ray + EC2 Auto Scaling
C) CloudWatch Logs + Lambda + DynamoDB
D) EventBridge + Step Functions

Answer: A

Explanation: Publishing custom metrics to CloudWatch, then using Application Auto Scaling with target tracking scaling policies allows automatic scaling of various resources (ECS tasks, DynamoDB capacity, etc.) based on custom metrics like queue depth. This is the native AWS solution for custom metric-based scaling. X-Ray (B) is for distributed tracing. Lambda+DynamoDB (C) doesn't provide auto-scaling. EventBridge+Step Functions (D) is for event orchestration.


Question 19

A team needs to ensure that any changes to their production environment are tracked and can be audited. They use CloudFormation. What feature provides a history of all changes to their stacks?

A) CloudTrail API logging
B) CloudFormation change sets
C) CloudFormation drift detection
D) AWS Config CloudFormation stack tracking

Answer: D

Explanation: AWS Config records the configuration history of CloudFormation stacks as a resource type. It tracks all changes to stack configurations over time, providing an audit trail. CloudTrail (A) logs the API calls but not the full stack configuration at each point. Change sets (B) show what will change before applying. Drift detection (C) shows the current difference between the template and actual resources.


Question 20

A company needs to deploy their application across multiple environments (dev, staging, prod) using separate AWS accounts. They want a single pipeline that promotes artifacts through environments. How should they design this?

A) Create three separate pipelines, one per environment
B) Create one pipeline that cross-account deploys using IAM roles
C) Use Elastic Beanstalk for multi-environment management
D) Use AWS Organizations with separate CodePipeline in each account

Answer: B

Explanation: A single CodePipeline can deploy across multiple AWS accounts by assuming cross-account IAM roles. The pipeline in the tooling account uses roles in the target accounts (dev, staging, prod) to deploy. This provides a single source of truth for the pipeline while maintaining account isolation. Separate pipelines (A) duplicate configuration and make synchronization difficult. Elastic Beanstalk (C) doesn't address multi-account deployment. Separate pipelines in each account (D) are hard to maintain.


Question 21

A company needs to implement a self-healing mechanism for their EC2-based application. If an instance fails health checks, it should be automatically replaced. What should they implement?

A) CloudWatch alarms with SNS notifications
B) EC2 Auto Scaling Group with ELB health checks
C) AWS Lambda to monitor and restart instances
D) AWS Systems Manager maintenance windows

Answer: B

Explanation: EC2 Auto Scaling Groups with ELB health checks automatically terminate and replace instances that fail health checks. This is the native self-healing mechanism for EC2. CloudWatch alarms with SNS (A) notify but don't automatically replace. Lambda (C) can monitor but adds custom complexity. Maintenance windows (D) are for scheduled maintenance, not self-healing.


Question 22

A company wants to detect and alert on unauthorized API calls in their AWS account. Which combination of services provides this capability?

A) VPC Flow Logs + CloudWatch
B) AWS CloudTrail + Amazon EventBridge + SNS
C) AWS Config + SNS
D) Amazon GuardDuty + Security Hub

Answer: B

Explanation: AWS CloudTrail records all API calls. EventBridge can create rules that match specific CloudTrail events (like unauthorized API calls, root account usage, security group changes) and trigger SNS notifications. This is the standard pattern for API-level alerting. VPC Flow Logs (A) capture network traffic. Config (C) monitors resource configurations. GuardDuty (D) provides ML-based threat detection but uses a different model.
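One way to sketch the EventBridge rule is a pattern that matches CloudTrail-delivered events whose error code indicates a denied call (note that EventBridge receives CloudTrail management events, not read-only calls like Get/List/Describe):

```json
{
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "errorCode": ["AccessDenied", "UnauthorizedOperation"]
  }
}
```

The rule's target would be an SNS topic, so each matching event produces an immediate notification to subscribers.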


Question 23

A company's build pipeline needs to build Docker images and push them to Amazon ECR. What permissions does the CodeBuild service role need?

A) AmazonEC2ContainerRegistryFullAccess
B) AmazonEC2ContainerRegistryPowerUser
C) Custom policy allowing ecr:GetAuthorizationToken and relevant push permissions
D) AmazonECS-FullAccess

Answer: C

Explanation: Following least privilege, the CodeBuild service role needs specific ECR permissions: ecr:GetAuthorizationToken (for Docker login), ecr:BatchCheckLayerAvailability, ecr:PutImage, ecr:InitiateLayerUpload, ecr:UploadLayerPart, ecr:CompleteLayerUpload. AmazonEC2ContainerRegistryPowerUser (B) includes all needed permissions but also more than needed. FullAccess (A) includes delete permissions. AmazonECS-FullAccess (D) is for ECS management.


Question 24

A company uses CloudFormation to manage their infrastructure. They want to prevent accidental deletion of production databases. How can they implement this?

A) Use CloudFormation Retention policies on the database resource
B) Set the DeletionPolicy attribute to Retain on the RDS resource
C) Use IAM policies to deny cloudformation:DeleteStack
D) Use CloudFormation termination protection

Answer: B

Explanation: Setting DeletionPolicy: Retain on a CloudFormation resource (like an RDS instance) means that when the stack is deleted, the resource is retained (not deleted). This prevents accidental database deletion through CloudFormation stack operations. Termination protection (D) prevents the entire stack from being deleted but the question asks specifically about the database resource. Retention policies (A) are for specific behaviors. IAM policies (C) prevent all stack deletions.
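In a template this is a single attribute on the resource; `UpdateReplacePolicy` is the natural companion so the old database also survives an update that forces replacement (the sizing and secret reference below are illustrative placeholders):

```yaml
Resources:
  ProdDatabase:
    Type: AWS::RDS::DBInstance
    DeletionPolicy: Retain        # keep the DB if the stack or this resource is deleted
    UpdateReplacePolicy: Retain   # keep the old DB if an update forces replacement
    Properties:
      Engine: postgres
      DBInstanceClass: db.t3.medium   # illustrative instance class
      AllocatedStorage: "20"
      MasterUsername: dbadmin
      MasterUserPassword: '{{resolve:secretsmanager:prod/db:SecretString:password}}'  # hypothetical secret
```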


Question 25

A company uses CodePipeline and wants to run integration tests after deployment to staging. The tests run in an external testing framework. How should they integrate this into the pipeline?

A) Use a CodeBuild stage to run the tests
B) Use a Lambda invoke action to trigger the external framework
C) Use a Jenkins action in CodePipeline
D) Use a manual approval step after deployment

Answer: A

Explanation: A CodeBuild stage in CodePipeline can be used to run integration tests. The buildspec.yml in the test phase can invoke external testing frameworks, run test scripts, and report results. If tests fail, the CodeBuild stage fails and the pipeline stops. This is the most integrated approach within AWS. Lambda invocations (B) are for simple event-driven tasks. Jenkins (C) requires managing Jenkins infrastructure. Manual approval (D) is not automated.


Question 26

A company needs to manage thousands of EC2 instances and wants to run patching operations at scale without SSH access. Which service and approach should they use?

A) AWS Systems Manager Patch Manager with maintenance windows
B) EC2 user data scripts for patching
C) Ansible Tower for automated patching
D) AWS OpsWorks with Chef recipes

Answer: A

Explanation: AWS Systems Manager Patch Manager can automatically patch EC2 instances based on patch baselines and patch groups. Maintenance windows schedule patching operations. The SSM Agent on instances receives patch instructions without needing SSH. Compliance reports show patching status. EC2 user data (B) only runs at launch. Ansible Tower (C) requires managing infrastructure. OpsWorks (D) requires Chef expertise.


Question 27

A company wants to implement canary testing for their application. They deploy to a small subset of users and monitor metrics before full rollout. What combination enables automatic rollback?

A) Route 53 weighted routing + CloudWatch + manual process
B) AWS CodeDeploy canary deployment + CloudWatch alarms
C) ALB weighted target groups + CloudWatch + Lambda
D) CloudFront functions + CloudWatch

Answer: B

Explanation: AWS CodeDeploy canary deployments shift a percentage of traffic to the new version and monitor CloudWatch alarms. If alarms are triggered during the baking period, CodeDeploy automatically rolls back to the previous version. This provides fully automated canary deployment with rollback. Route 53 weighted routing (A) requires manual rollback. ALB weighted target groups (C) require custom automation. CloudFront functions (D) are for edge request manipulation.


Question 28

A team needs to ensure that their CodeBuild builds run in an isolated environment with no internet access. They need to access private repositories and ECR. What should they configure?

A) CodeBuild within a VPC with no internet access, using VPC endpoints for AWS services
B) CodeBuild with security groups blocking outbound traffic
C) CodeBuild with a NAT gateway that has restrictive egress rules
D) Use private CodeBuild environments

Answer: A

Explanation: Running CodeBuild inside a VPC without a NAT gateway provides no internet access. VPC endpoints for CodeCommit, ECR, S3, and other AWS services allow the build to access private repositories and ECR through the AWS private network. Security groups (B) blocking all outbound would also block AWS service access. NAT gateway with restrictions (C) still provides internet access. There are no "private CodeBuild environments" (D).


Question 29

A company's application uses feature flags stored in SSM Parameter Store. They need to reload feature flags without restarting their application. Which approach is BEST?

A) Restart the application on SSM parameter changes
B) Use AWS AppConfig with dynamic configuration delivery
C) Poll SSM Parameter Store periodically from the application
D) Use DynamoDB to store feature flags

Answer: B

Explanation: AWS AppConfig is designed specifically for dynamic configuration delivery to applications without restarts. It supports gradual rollouts of configuration changes, validation, and monitoring. Applications use the AppConfig Agent or SDK to receive configuration updates in real-time. Polling SSM periodically (C) works but is not as sophisticated as AppConfig. Restarting the application (A) causes downtime. DynamoDB (D) requires custom polling code.


Question 30

A company needs to implement log aggregation from multiple EC2 instances and Lambda functions for centralized analysis. Which service should they use?

A) Amazon S3 with Athena
B) Amazon CloudWatch Logs with Log Insights
C) Amazon Kinesis Data Streams
D) ELK Stack on EC2

Answer: B

Explanation: Amazon CloudWatch Logs natively integrates with EC2 (via CloudWatch agent) and Lambda (automatic integration). CloudWatch Logs Insights provides powerful query capabilities for centralized log analysis. This is the most integrated AWS-native solution. S3 with Athena (A) works but requires exporting logs. Kinesis (C) is for streaming, not log storage/analysis. ELK on EC2 (D) requires managing infrastructure.


Question 31

A company uses AWS X-Ray for distributed tracing. They notice high latency in one service. How can they identify which downstream call is causing the latency?

A) Check CloudWatch metrics for the service
B) Use the X-Ray Service Map and trace details to identify high-latency segments
C) Enable enhanced monitoring on RDS
D) Check EC2 instance CPU metrics

Answer: B

Explanation: AWS X-Ray Service Map provides a visual representation of the application components and their connections. Each segment and subsegment in a trace represents a unit of work, showing the time spent. By examining trace details, you can identify which specific downstream call (database, API, microservice) is causing high latency. CloudWatch metrics (A) provide aggregate data. Enhanced RDS monitoring (C) is for database internals. EC2 CPU metrics (D) don't show distributed call latency.


Question 32

A team needs to implement a GitFlow branching strategy with their CI/CD pipeline. Feature branches should trigger builds, pull requests should trigger integration tests, and merges to main should trigger production deployment. How should they configure CodePipeline?

A) Create one pipeline that triggers on all branch events
B) Use CodePipeline for main branch deployment and CodeBuild triggers for feature branches via EventBridge rules
C) Create separate pipelines for each branch
D) Use a single Jenkins pipeline

Answer: B

Explanation: The best approach uses multiple mechanisms: (1) EventBridge rules that trigger CodeBuild projects for feature branch builds and PR checks when CodeCommit events occur, and (2) a CodePipeline that triggers only on the main branch for production deployment. Separate pipelines for each branch (C) are impractical at scale. A single pipeline on all branches (A) can lead to unintended deployments. Jenkins (D) requires managing infrastructure.


Question 33

A company needs to implement secrets rotation for an RDS database password. The rotation should happen automatically every 30 days without application downtime. Which service provides this capability?

A) AWS KMS key rotation
B) AWS Secrets Manager with automatic rotation
C) SSM Parameter Store with a Lambda rotator
D) AWS Certificate Manager

Answer: B

Explanation: AWS Secrets Manager supports automatic rotation of database credentials. For RDS, it has built-in rotation Lambda functions that update the password in both Secrets Manager and the RDS instance. Applications retrieve the current secret from Secrets Manager, so rotation is transparent. KMS key rotation (A) is for encryption keys. SSM Parameter Store (C) can rotate with a custom Lambda but requires more custom code. ACM (D) is for SSL certificates.
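A sketch of the rotation schedule in CloudFormation, using the AWS-hosted rotation function for a PostgreSQL database (the secret resource and rotation type are illustrative):

```yaml
Resources:
  DbSecretRotation:
    Type: AWS::SecretsManager::RotationSchedule
    Properties:
      SecretId: !Ref DbSecret                  # hypothetical AWS::SecretsManager::Secret
      HostedRotationLambda:
        RotationType: PostgreSQLSingleUser     # AWS-managed rotation function
      RotationRules:
        AutomaticallyAfterDays: 30             # matches the 30-day requirement
```

Because the application fetches the credential from Secrets Manager at connection time rather than caching it indefinitely, the rotation is transparent and causes no downtime.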


Question 34

A company uses CodePipeline with CodeBuild. Their builds access resources in a private VPC. CodeBuild builds are failing with network timeout errors when trying to access external npm packages during builds. What is the issue and solution?

A) Add an internet gateway to the CodeBuild VPC
B) Ensure the CodeBuild VPC has a NAT gateway and that the CodeBuild subnet routes through it
C) Use CodeBuild outside the VPC
D) Add an elastic IP to the CodeBuild instance

Answer: B

Explanation: When CodeBuild runs inside a VPC, it loses its default internet access. To access external internet resources (like npm packages), the VPC needs a NAT gateway in a public subnet, and the CodeBuild subnet's route table must route internet-bound traffic through the NAT gateway. Using CodeArtifact (proxy for npm) would also resolve this. Adding an internet gateway (A) alone doesn't route traffic from private subnets. Moving outside VPC (C) loses access to private resources.


Question 35

A company needs to implement chaos engineering to test their application's resilience. They want to inject failures into AWS services in a controlled way. Which AWS service supports this?

A) AWS Systems Manager Automation
B) AWS Fault Injection Simulator (FIS)
C) AWS CloudFormation drift detection
D) Amazon DevOps Guru

Answer: B

Explanation: AWS Fault Injection Simulator (FIS) is a managed chaos engineering service that allows you to run controlled experiments by injecting faults into AWS infrastructure (like terminating EC2 instances, adding network latency, causing CPU stress, throttling API calls). Systems Manager Automation (A) runs operational tasks. CloudFormation drift detection (C) finds configuration drift. DevOps Guru (D) uses ML to detect operational anomalies.


Question 36

A company wants to implement an automated security scanning step in their CI/CD pipeline that checks for known vulnerabilities in their container images before deployment. Which service should they integrate?

A) Amazon Inspector with ECR integration
B) AWS WAF
C) Amazon GuardDuty
D) AWS Security Hub

Answer: A

Explanation: Amazon Inspector integrates with Amazon ECR to automatically scan container images for software vulnerabilities (CVEs). When a new image is pushed to ECR, Inspector scans it and reports findings. The pipeline can check Inspector findings via an API call or EventBridge notification and fail the build if critical vulnerabilities are found. WAF (B) is for web app protection. GuardDuty (C) is for threat detection. Security Hub (D) aggregates findings.


Question 37

A company has a microservices application where different teams own different services. They want each team to be able to deploy their service independently without impacting other services. What deployment strategy supports this?

A) Deploy all services together in one pipeline
B) Independent deployment pipelines per service with contract testing
C) Use feature flags to control rollouts
D) Manual deployments with change management

Answer: B

Explanation: Independent deployment pipelines per service, combined with contract testing (to ensure services still communicate correctly), enables teams to deploy independently without coordination. Contract tests verify that the service interfaces remain compatible. Deploying all together (A) creates tight coupling. Feature flags (C) control feature visibility but don't enable independent deployment pipelines. Manual deployments (D) eliminate the CI/CD benefit.


Question 38

A company needs to ensure that their CloudFormation templates always use the latest approved AMI IDs. They store approved AMI IDs in SSM Parameter Store. How can they reference the latest AMI in CloudFormation?

A) Hard-code the AMI ID in the template and update manually
B) Use AWS::SSM::Parameter resource to look up the AMI ID at deploy time
C) Use the resolve:ssm:/path/to/ami dynamic reference in the CloudFormation template
D) Use a Lambda-backed custom resource to retrieve the AMI

Answer: C

Explanation: CloudFormation dynamic references allow you to reference SSM Parameter Store values directly in templates using the syntax {{resolve:ssm:/path/to/parameter}}. When the stack is deployed or updated, CloudFormation retrieves the current value from SSM. This ensures the latest approved AMI is always used. AWS::SSM::Parameter (B) is for creating SSM parameters, not retrieving them. Lambda custom resources (D) add complexity. Hard-coding (A) requires manual updates.
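The dynamic-reference syntax can be illustrated with a minimal template fragment, written here as a Python dict (equivalent to the JSON template form). The parameter path `/golden/ami/latest` is a made-up example.

```python
import json

# Minimal CloudFormation template fragment showing the {{resolve:ssm:...}}
# dynamic reference. The SSM parameter path is illustrative.
template = {
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                # Resolved by CloudFormation at deploy/update time,
                # so the stack always picks up the current parameter value.
                "ImageId": "{{resolve:ssm:/golden/ami/latest}}",
                "InstanceType": "t3.micro",
            },
        }
    }
}
print(json.dumps(template, indent=2))
```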


Question 39

A DevOps team needs to implement progressive delivery with feature flags. They want to gradually expose a new feature to users based on user attributes. Which service is BEST suited for this?

A) Route 53 weighted routing B) AWS AppConfig with feature flag configuration C) ALB listener rules based on headers D) CloudFront Lambda@Edge

Answer

Answer: B

Explanation: AWS AppConfig supports feature flag configurations that can be deployed gradually using deployment strategies. Applications can evaluate feature flags at runtime to enable features for specific users or percentages. Route 53 weighted routing (A) routes at DNS level, not per-user. ALB rules (C) route based on request attributes but don't provide the feature flag management interface. Lambda@Edge (D) can implement feature flags but requires custom code.


Question 40

A company runs CodePipeline for their deployment. They want to receive a Slack notification when the pipeline fails. What is the MOST efficient solution?

A) Poll the CodePipeline API every minute from a Lambda function B) Create an EventBridge rule for CodePipeline state changes that triggers a Lambda function sending to Slack C) Use CloudWatch metrics alarms for pipeline failures D) Use SNS with email subscription and forward to Slack manually

Answer

Answer: B

Explanation: EventBridge (formerly CloudWatch Events) emits events for CodePipeline stage and pipeline state changes. An EventBridge rule can filter for FAILED state changes and trigger a Lambda function that sends a message to Slack (using the Slack webhook API). This is event-driven, efficient, and requires no polling. Polling (A) is inefficient. CloudWatch alarms (C) don't directly monitor pipeline state. SNS with manual Slack forwarding (D) is not automated.
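The Lambda side of this pattern can be sketched as below. The event shape follows the documented CodePipeline "Pipeline Execution State Change" detail fields; the Slack message format is an assumption, and the actual webhook POST is left as a comment.

```python
# Sketch of a Lambda handler for a CodePipeline pipeline-state-change
# EventBridge event. Message wording is illustrative; the HTTP POST to the
# Slack webhook is commented out.

def build_slack_message(event):
    detail = event["detail"]
    return {"text": f"Pipeline {detail['pipeline']} is {detail['state']} "
                    f"(execution {detail['execution-id']})"}

def handler(event, context):
    if event["detail"]["state"] == "FAILED":
        payload = build_slack_message(event)
        # urllib.request.urlopen(SLACK_WEBHOOK_URL,
        #                        json.dumps(payload).encode())
        return payload
    return None  # ignore non-failure states

sample = {"detail": {"pipeline": "app-pipeline", "state": "FAILED",
                     "execution-id": "abc-123"}}
print(handler(sample, None))
```

In practice the EventBridge rule itself would filter on `"state": ["FAILED"]`, so the handler's state check is a defensive second layer.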


Question 41

A company uses CloudFormation nested stacks. The parent stack update fails because a nested stack update failed. How do they identify which nested stack failed and why?

A) Check the parent stack events B) Check the Events tab for each nested stack individually C) Check the parent stack Events - it references the failed nested stack; then check that nested stack's events D) Check CloudTrail logs

Answer

Answer: C

Explanation: When a nested stack fails, the parent stack events show a failure referencing the nested stack (e.g., "Embedded stack failed to reach COMPLETE state"). By clicking on the nested stack name in the events or navigating to it directly, you can see its events which show the specific resource that failed and the error message. Checking parent stack events (A) gives a partial picture. Checking all nested stacks (B) is inefficient. CloudTrail (D) shows API calls, not stack-level resource failures.


Question 42

A company needs to implement a database migration as part of their CI/CD pipeline. The migration must be applied before deploying new application code. How should they structure this?

A) Include migration scripts in the application code and run at startup B) Add a dedicated migration step in the pipeline before the deployment step, using CodeBuild to run migrations C) Run migrations manually before each deployment D) Use RDS automated backup for migration management

Answer

Answer: B

Explanation: Adding a dedicated migration stage in CodePipeline before the deployment stage ensures migrations run first. A CodeBuild action can connect to the database and run migration scripts. If migrations fail, the pipeline stops and the new application code is not deployed, preventing version mismatches. Running at startup (A) can cause issues with multiple instances running simultaneously. Manual migrations (C) eliminate automation benefits. RDS backups (D) are for data recovery.


Question 43

A team wants to enforce that all CodeBuild projects use approved base images and that the buildspec.yml is stored in the CodeCommit repository (not inline). Which approach enforces this?

A) Code review process B) AWS Config rules and IAM policies for CodeBuild C) Amazon Inspector scanning D) Manual audits

Answer

Answer: B

Explanation: AWS Config can evaluate CodeBuild project configurations and flag non-compliant projects (e.g., projects using unapproved images or with inline buildspecs). IAM policies can restrict which images are allowed in CodeBuild project configurations. This provides both detection (Config) and prevention (IAM). Code reviews (A) and manual audits (D) are not automated. Inspector (C) scans for vulnerabilities, not configuration compliance.


Question 44

A company uses AWS CodeDeploy for EC2 deployments. They need to run a script to drain connections from a load balancer before the deployment starts. Where should this script be placed in the AppSpec?

A) BeforeInstall hook B) AfterInstall hook C) BeforeBlockTraffic hook D) AfterBlockTraffic hook

Answer

Answer: C

Explanation: The CodeDeploy AppSpec lifecycle events for load balancer deployments include: BeforeBlockTraffic (before the load balancer blocks traffic to instances), BlockTraffic (load balancer blocks traffic), AfterBlockTraffic (after traffic is blocked). To drain connections before traffic is blocked by the load balancer, use the BeforeBlockTraffic hook. BeforeInstall (A) runs after traffic is already blocked. AfterInstall (B) runs after application installation.
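The documented in-place (behind a load balancer) lifecycle event order makes the choice concrete; the list below encodes it so the ordering claims can be checked mechanically.

```python
# CodeDeploy in-place deployment lifecycle event order when instances are
# behind a load balancer, as documented. Encoded as a list so relative
# ordering can be asserted.
LIFECYCLE_ORDER = [
    "BeforeBlockTraffic", "BlockTraffic", "AfterBlockTraffic",
    "ApplicationStop", "DownloadBundle", "BeforeInstall", "Install",
    "AfterInstall", "ApplicationStart", "ValidateService",
    "BeforeAllowTraffic", "AllowTraffic", "AfterAllowTraffic",
]

def runs_before(a, b):
    return LIFECYCLE_ORDER.index(a) < LIFECYCLE_ORDER.index(b)

# Draining must happen before the LB blocks traffic:
print(runs_before("BeforeBlockTraffic", "BlockTraffic"))  # True
# BeforeInstall only runs after traffic is already blocked:
print(runs_before("BlockTraffic", "BeforeInstall"))       # True
```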


Question 45

A company wants to implement GitOps principles - the Git repository is the single source of truth for infrastructure. Any change to the repository should automatically be reflected in the infrastructure. How should they implement this for CloudFormation?

A) Use CodePipeline triggered by CodeCommit changes B) Manually apply CloudFormation changes C) Use CloudFormation StackSets D) Use AWS CDK Pipelines

Answer

Answer: A

Explanation: CodePipeline triggered by CodeCommit (or GitHub) changes implements GitOps for CloudFormation. Any commit to the repository triggers the pipeline, which applies the CloudFormation changes. This ensures the deployed infrastructure always matches the repository. CDK Pipelines (D) also implements this pattern but specifically for CDK applications. Manual changes (B) violate GitOps principles. StackSets (C) manage multi-account and multi-Region deployments but still need something to trigger them on each commit.



Question 46

A company's application deployed on ECS is experiencing intermittent issues. They need to correlate application logs with infrastructure metrics and distributed traces. Which combination provides this visibility?

A) CloudWatch Logs + CloudWatch Metrics + X-Ray B) S3 + Athena + X-Ray C) Elasticsearch + Kibana + X-Ray D) CloudTrail + CloudWatch + Inspector

Answer

Answer: A

Explanation: CloudWatch Logs (application logs), CloudWatch Metrics (infrastructure metrics), and X-Ray (distributed traces) form the observability trinity on AWS. CloudWatch Container Insights provides ECS-specific metrics and logs. X-Ray correlates traces with CloudWatch using trace IDs. S3+Athena (B) doesn't provide real-time correlation. Elasticsearch+Kibana (C) requires managing infrastructure. CloudTrail (D) is for API audit, not application observability.


Question 47

A company runs a multi-region application and needs to ensure deployments are coordinated across regions - first deploying to us-east-1 and then to eu-west-1 only if us-east-1 is healthy. What is the BEST approach?

A) Two separate pipelines with manual trigger for the second region B) A single CodePipeline with sequential regional deployment stages and health check gates C) Use CloudFormation StackSets for both regions simultaneously D) Manual deployments with change management

Answer

Answer: B

Explanation: A single CodePipeline with sequential stages for each region implements the required flow: deploy to us-east-1, run health checks (CodeBuild test stage), then deploy to eu-west-1 only if health checks pass. This provides automated, sequential deployment with gates. Two separate pipelines (A) require manual coordination. StackSets (C) deploy to multiple regions simultaneously, not sequentially with gates. Manual deployments (D) eliminate automation.


Question 48

A company uses EC2 Auto Scaling with a launch template. They want to use a new AMI but want to test it on a few instances before updating all instances. Which feature should they use?

A) Launch configuration update B) Instance refresh with a warm pool and phased replacement percentage C) Manual instance termination and relaunch D) Create a new Auto Scaling Group

Answer

Answer: B

Explanation: EC2 Auto Scaling Instance Refresh allows rolling updates of instances to use new launch templates. By setting the MinHealthyPercentage and configuring checkpoints, you can test on a percentage of instances before proceeding. Warm pools can pre-warm replacement instances. Launch configurations (A) are deprecated. Manual termination (C) is error-prone. Creating a new ASG (D) doesn't integrate with the existing deployment.


Question 49

A team needs to implement a robust rollback strategy for database schema changes in their CI/CD pipeline. What is the BEST practice?

A) Store a database dump before each migration B) Use reversible migrations (up/down) with automated rollback capability in the pipeline C) Use point-in-time recovery from RDS backups D) Snapshot the database before each deployment

Answer

Answer: B

Explanation: Reversible migrations with up/down scripts allow the application to apply (up) migrations when deploying and reverse (down) them when rolling back. The pipeline can automatically run the down migration if deployment fails. This is faster than restoring from snapshots. Database dumps (A) and snapshots (D) are slow to restore. PITR (C) is even slower and may lose data from the period after the snapshot.
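The up/down pattern can be sketched with a toy in-memory "schema"; the `Migration` type and the dict standing in for a database are assumptions for illustration, not any particular migration framework's API.

```python
# Minimal sketch of a reversible migration runner: each migration is an
# (up, down) pair, and a failed deployment replays the downs in reverse.
# The in-memory schema dict stands in for a real database.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Migration:
    name: str
    up: Callable[[dict], None]
    down: Callable[[dict], None]

def apply_all(schema: dict, migrations: List[Migration]) -> list:
    applied = []
    for m in migrations:
        m.up(schema)
        applied.append(m)
    return applied

def rollback(schema: dict, applied: list) -> None:
    # Reverse order: undo the most recent change first.
    for m in reversed(applied):
        m.down(schema)

schema = {"columns": ["id"]}
add_email = Migration(
    "add_email",
    up=lambda s: s["columns"].append("email"),
    down=lambda s: s["columns"].remove("email"),
)
applied = apply_all(schema, [add_email])
rollback(schema, applied)
print(schema)  # back to {'columns': ['id']}
```

Real frameworks (Flyway, Liquibase, Alembic, Rails migrations) implement exactly this shape; the pipeline's rollback step simply invokes the framework's "down"/"undo" command.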


Question 50

A company needs to ensure their Docker builds are reproducible and secure. They want to scan for secrets accidentally committed to the Docker image layers. Which approach should they use?

A) CodeBuild with manual image inspection B) Multi-stage Docker builds to minimize attack surface + Amazon Inspector for vulnerability scanning C) Use ECR image scanning with Inspector D) Manually review Dockerfiles before build

Answer

Answer: C

Explanation: Amazon Inspector integrates with ECR to scan container images both for vulnerabilities (CVEs in packages) and for secrets accidentally embedded in image layers. ECR enhanced scanning with Inspector provides automated detection without manual review. Multi-stage builds (B) reduce image size but don't detect secrets. Manual review (A, D) is not scalable.


Question 51

A company uses CodePipeline and wants to implement a strategy where the deployment stops automatically if the error rate increases post-deployment, and notifies the team. What is the BEST approach?

A) Set up CloudWatch alarms that send emails B) Use CodeDeploy deployment group with CloudWatch alarm-based automatic rollback + SNS notification C) Write a Lambda function to monitor and stop the pipeline D) Use manual monitoring with a checklist

Answer

Answer: B

Explanation: CodeDeploy deployment groups can be configured with CloudWatch alarms for automatic rollback and with triggers that send SNS notifications when deployment events occur (including rollbacks). This is the fully integrated, native solution for automatic rollback with notification. CloudWatch email alarms (A) notify but don't stop/rollback deployments. Lambda (C) adds custom complexity. Manual monitoring (D) is not automated.


Question 52

A company manages hundreds of Lambda functions and needs to ensure all functions have appropriate reserved concurrency limits to prevent one function from consuming all concurrency. How can they enforce this at scale?

A) Manually set reserved concurrency for each function B) Use AWS Config rule to check Lambda functions without reserved concurrency C) Use Service Quotas to limit Lambda concurrency D) Use IAM policies to prevent function creation without reserved concurrency

Answer

Answer: B

Explanation: An AWS Config custom rule can evaluate all Lambda functions and report non-compliant ones that don't have reserved concurrency set. Config can also trigger remediation actions. This provides visibility at scale without manual checking. Manual configuration (A) doesn't scale. Service Quotas (C) limit total account concurrency but not per-function enforcement. IAM policies (D) can't require the presence of a configuration attribute during creation.
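The core of such a custom Config rule is a small evaluation function; the simplified configuration-item shape below is an assumption, and the `put_evaluations` call that reports results back to Config is shown only as a comment.

```python
# Sketch of the evaluation logic inside a custom AWS Config rule Lambda:
# a Lambda function is compliant only if reserved concurrency is set.
# The configuration-item shape is simplified for illustration.

def evaluate(function_config: dict) -> str:
    reserved = function_config.get("ReservedConcurrentExecutions")
    return "COMPLIANT" if reserved is not None else "NON_COMPLIANT"

# The real rule Lambda would report results back to Config with:
#   config.put_evaluations(Evaluations=[...],
#                          ResultToken=event["resultToken"])
print(evaluate({"FunctionName": "orders", "ReservedConcurrentExecutions": 50}))
print(evaluate({"FunctionName": "billing"}))
```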


Question 53

A company uses AWS CloudFormation and needs to execute a custom script during stack creation to configure a third-party service. Which resource type should they use?

A) CloudFormation Init (cfn-init) B) AWS::CloudFormation::CustomResource with Lambda C) AWS::CloudFormation::WaitCondition D) AWS::CloudFormation::Stack

Answer

Answer: B

Explanation: CloudFormation Custom Resources backed by Lambda allow you to execute arbitrary code during stack create/update/delete operations. The Lambda function can call third-party APIs, configure external services, and return data back to the CloudFormation template. cfn-init (A) is for configuring EC2 instances. WaitCondition (C) is for waiting on signals from EC2 instances. AWS::CloudFormation::Stack (D) is for nested stacks.
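A custom resource Lambda must send a signed response back to the pre-signed S3 URL CloudFormation passes in `event["ResponseURL"]`. The sketch below builds that response body (field names follow the documented custom resource response format); the example stack values and the default physical resource ID are placeholders, and the PUT itself is left as a comment.

```python
import json

# Sketch of the response body a Lambda-backed custom resource returns to
# CloudFormation. Example values are placeholders; the PUT is commented out.

def build_response(event, status="SUCCESS", data=None, reason=""):
    return {
        "Status": status,  # "SUCCESS" or "FAILED"
        "Reason": reason,
        "PhysicalResourceId": event.get("PhysicalResourceId",
                                        "third-party-config"),
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
        # Values under Data are readable in the template via Fn::GetAtt.
        "Data": data or {},
    }

event = {"StackId": "arn:aws:cloudformation:us-east-1:111111111111:stack/demo",
         "RequestId": "req-1", "LogicalResourceId": "ThirdPartySetup"}
body = build_response(event, data={"ApiKeyId": "example"})
# urllib.request.urlopen(urllib.request.Request(
#     event["ResponseURL"], json.dumps(body).encode(), method="PUT"))
print(json.dumps(body))
```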


Question 54

A DevOps team needs to implement health checks at the application level (not just instance level) for their Auto Scaling Group. They want the ASG to replace unhealthy instances based on application health. How should they implement this?

A) Use EC2 instance status checks B) Configure the ASG to use ELB health checks C) Use CloudWatch alarms to trigger instance termination D) Configure a custom health check endpoint and use EC2 health checks

Answer

Answer: B

Explanation: When an ASG is configured to use ELB health checks, it considers an instance unhealthy if the load balancer reports it as unhealthy (based on the ALB/NLB target group health check, which probes an application-level endpoint such as /health). The ASG then automatically terminates and replaces the unhealthy instance. EC2 status checks (A) only verify the underlying EC2 infrastructure. CloudWatch alarms (C) trigger scaling policies, not direct replacement. Pairing a custom endpoint with EC2 health checks (D) doesn't work because EC2 health checks never consult application endpoints.



Question 55

A company needs to implement a strategy for managing environment-specific configuration in their CI/CD pipeline. Each environment has different database endpoints, API keys, and feature flags. What is the BEST approach?

A) Store all configurations in the code repository B) Use environment-specific SSM Parameter Store paths per environment C) Hard-code environment-specific values in the build artifacts D) Use different CodeBuild projects for each environment

Answer

Answer: B

Explanation: Using SSM Parameter Store with environment-specific paths (e.g., /dev/database/endpoint, /prod/database/endpoint) allows applications and pipelines to retrieve environment-specific configurations at runtime. Access can be controlled via IAM policies per environment. Storing configs in the repository (A) risks exposing secrets. Hard-coding in build artifacts (C) creates different artifacts per environment. Different CodeBuild projects (D) don't address the config management problem.
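A small helper makes the path convention concrete. The `/{env}/{service}/{key}` layout is an illustrative convention, not an AWS requirement; the boto3 lookup is shown only as a comment.

```python
# Sketch of environment-scoped SSM parameter paths. The path layout is an
# assumed convention for illustration.

def param_path(env: str, service: str, key: str) -> str:
    return f"/{env}/{service}/{key}"

# Applications resolve values at runtime, e.g. with boto3:
#   ssm.get_parameter(Name=param_path("prod", "database", "endpoint"),
#                     WithDecryption=True)
# IAM policies can then grant each environment's role access only to its
# own prefix (e.g. arn:...:parameter/prod/*).
print(param_path("dev", "database", "endpoint"))   # /dev/database/endpoint
print(param_path("prod", "database", "endpoint"))  # /prod/database/endpoint
```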


Question 56

A company wants to implement immutable infrastructure - instead of updating existing servers, they replace them with new ones on each deployment. Which AWS service and pattern best supports this?

A) EC2 Auto Scaling with Instance Refresh using a golden AMI pipeline B) In-place CodeDeploy deployments on existing EC2 instances C) AWS Systems Manager patching on existing instances D) Manual instance updates via SSH

Answer

Answer: A

Explanation: Immutable infrastructure uses an AMI pipeline (CodeBuild to build and bake a golden AMI with the new code, then Instance Refresh the Auto Scaling Group with the new AMI). New instances are launched from the new AMI and old instances are terminated. In-place deployments (B) modify existing instances, violating immutable principles. SSM patching (C) modifies existing instances. Manual updates (D) are error-prone.


Question 57

A company has multiple CodePipeline pipelines. They want a centralized dashboard to view the status of all pipelines across their AWS organization. What should they implement?

A) AWS Management Console with multiple browser tabs B) Amazon CloudWatch custom dashboard with CodePipeline metrics C) AWS DevOps Guru D) AWS Organizations console

Answer

Answer: B

Explanation: Amazon CloudWatch dashboards can aggregate CodePipeline execution metrics and status across multiple pipelines. Combined with EventBridge events from the pipelines, a Lambda function can maintain a DynamoDB table of pipeline statuses that a CloudWatch dashboard then displays. CloudWatch also provides pipeline-level metrics directly. DevOps Guru (C) uses ML for anomaly detection, not a pipeline dashboard. The Organizations console (D) manages accounts, not pipelines.



Question 58

A company needs to ensure their ECS task definitions always pull the latest image from ECR with digest verification (not using 'latest' tag). How should they implement this in their pipeline?

A) Use latest tag in the ECS task definition B) In the CodeBuild post-build stage, output the image URI with digest and use it to update the task definition C) Use ECR tag mutability to prevent overwriting D) Manually update the task definition after each build

Answer

Answer: B

Explanation: After a CodeBuild build pushes an image to ECR, it can retrieve the full image URI with the digest (SHA256 hash) using docker inspect or the ECR API. This digest URI is written to an artifact file (imagedefinitions.json) that the Amazon ECS deploy action in CodePipeline uses to update the ECS task definition. Using digests ensures the exact image is deployed, not whatever 'latest' points to. ECR tag immutability (C) prevents overwriting tags but doesn't ensure digest-based deployment.
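The post-build step can be sketched as below. The repository URI, container name, and digest are placeholders; in a real buildspec the digest would come from `docker inspect` or the ECR `describe-images` API rather than being hard-coded.

```python
import json

# Sketch of a post-build step writing imagedefinitions.json with a
# digest-pinned image URI. The repo URI, container name, and digest are
# placeholders for illustration.

repo_uri = "123456789012.dkr.ecr.us-east-1.amazonaws.com/app"
digest = ("sha256:"
          "0000000000000000000000000000000000000000000000000000000000000000")

# imagedefinitions.json maps each container name to the exact image to run.
image_definitions = [{"name": "app-container",
                      "imageUri": f"{repo_uri}@{digest}"}]
with open("imagedefinitions.json", "w") as f:
    json.dump(image_definitions, f)

print(open("imagedefinitions.json").read())
```

The buildspec would declare `imagedefinitions.json` as the build's output artifact so the ECS deploy action can consume it.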


Question 59

A company needs to implement a compliance check that runs automatically after every CloudFormation deployment to verify the stack is compliant with security standards. What should they implement in their CodePipeline?

A) Manual review step B) CodeBuild action running AWS CLI to check Config rule compliance for the stack's resources C) Amazon Inspector scan D) GuardDuty findings review

Answer

Answer: B

Explanation: After a CloudFormation deployment, a CodeBuild action can run AWS CLI commands to check AWS Config rule compliance for the resources created by the stack (aws configservice get-compliance-details-by-resource). If any resources are NON_COMPLIANT, the CodeBuild stage fails and the pipeline stops. Manual review (A) is not automated. Inspector (C) scans for vulnerabilities. GuardDuty (D) is for threat detection.


Question 60

A company needs to implement a solution to prevent CloudFormation stack deployments that don't comply with their security standards (e.g., S3 buckets without encryption). What is the preventive control?

A) CloudFormation hooks (AWS CloudFormation Guard) B) AWS Config rules C) AWS Security Hub D) AWS Trusted Advisor

Answer

Answer: A

Explanation: AWS CloudFormation Guard is a policy-as-code tool that validates CloudFormation templates against custom rules before deployment. CloudFormation Hooks can invoke Guard or Lambda functions before resource creation to block non-compliant resources. Config rules (B) are detective (after deployment). Security Hub (C) aggregates findings. Trusted Advisor (D) provides recommendations but doesn't prevent deployments.


Question 61

A company needs to ensure that all EC2 instances in their Auto Scaling Group are replaced within 24 hours of a new AMI becoming available. They want this to happen automatically. What should they implement?

A) EventBridge rule for new AMI events triggering Instance Refresh B) Manual instance refresh when notified of new AMIs C) Use spot instances that get replaced frequently D) Set the maximum instance lifetime in the Auto Scaling Group

Answer

Answer: D

Explanation: EC2 Auto Scaling's Maximum Instance Lifetime feature automatically replaces instances after a specified duration. Setting it to 86400 seconds (24 hours) ensures all instances are replaced within 24 hours, picking up new AMIs if the launch template is updated. EventBridge triggering Instance Refresh (A) requires a custom automation. Manual refresh (B) is not automated. Spot instances (C) may be replaced but not guaranteed within 24 hours.


Question 62

A company needs to run a security scan on their application code in the CodeBuild pipeline. They want to detect hard-coded secrets and credentials in the source code. Which tool should they integrate?

A) Amazon Inspector B) AWS Secrets Manager C) git-secrets or truffleHog integrated in the CodeBuild buildspec D) Amazon Macie

Answer

Answer: C

Explanation: Tools like git-secrets, truffleHog, or Gitleaks are designed to scan source code for hard-coded secrets and credentials. These can be integrated into the CodeBuild buildspec as a build step. If secrets are found, the build fails. Amazon Inspector (A) scans container images and EC2 instances. Secrets Manager (B) stores secrets but doesn't scan code. Macie (D) scans S3 objects for sensitive data, not source code.
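A toy version of what these scanners do is shown below. The two regex patterns are heavily simplified assumptions; real tools ship many more rules plus entropy heuristics, so this is a sketch of the idea, not a substitute.

```python
import re

# Toy version of what git-secrets/truffleHog do: scan text for patterns
# that look like AWS credentials. Patterns are simplified for illustration.

PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                        # access key ID
    re.compile(r"(?i)aws_secret_access_key\s*=\s*\S{40}"),  # secret key
]

def find_secrets(text: str) -> list:
    return [m.group(0) for p in PATTERNS for m in p.finditer(text)]

sample = "config:\n  aws_access_key_id = AKIAABCDEFGHIJKLMNOP\n"
hits = find_secrets(sample)
print(hits)
# A buildspec step would exit non-zero whenever hits is non-empty,
# failing the build before the secret can ship.
```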


Question 63

A company uses CodePipeline to deploy to multiple ECS services. Each service has its own CodeDeploy deployment group. When one service deployment fails, they want only that service's deployment to roll back, not all services. How should they structure their pipeline?

A) Single CodeDeploy deployment group for all services B) Separate parallel deploy stages for each service using separate CodeDeploy deployment groups C) Sequential deployment with a single CloudFormation stack D) Deploy all services in a single CodeDeploy deployment

Answer

Answer: B

Explanation: Using separate parallel deploy actions (each with their own CodeDeploy deployment group) for each ECS service in CodePipeline allows individual service rollbacks when a deployment fails. If one service's deployment fails, only that deployment rolls back while other services continue. A single deployment group (A) or single deployment (D) for all services would roll back all services. Sequential CloudFormation (C) doesn't leverage CodeDeploy's rollback capabilities.


Question 64

A company needs to implement a solution to automatically detect and remediate EC2 instances that are not using IMDSv2 (Instance Metadata Service version 2). What should they implement?

A) AWS Config rule + SSM Automation remediation B) Amazon Inspector scan C) Manual audit and script execution D) CloudTrail monitoring

Answer

Answer: A

Explanation: An AWS Config rule can detect EC2 instances where MetadataOptions.HttpTokens is not set to required (IMDSv2). When non-compliant instances are found, an SSM Automation remediation document can automatically update the instance metadata options to require IMDSv2. This provides both detection and automatic remediation. Inspector (B) is for vulnerabilities. Manual audit (C) is not scalable. CloudTrail (D) logs API calls.


Question 65

A DevOps team uses Terraform for infrastructure management alongside CloudFormation. They need to store Terraform state files securely with versioning and locking. What should they use?

A) Local filesystem B) S3 bucket with versioning enabled + DynamoDB for state locking C) CodeCommit repository D) AWS CodeArtifact

Answer

Answer: B

Explanation: The recommended backend for Terraform on AWS is S3 (with versioning for state history and recovery) combined with DynamoDB for state locking (prevents concurrent state modifications). This is the standard, well-documented pattern for secure Terraform state management on AWS. Local filesystem (A) doesn't support team collaboration or DR. CodeCommit (C) would require committing state files which is not recommended. CodeArtifact (D) is a package manager.


Question 66

A company uses CodePipeline for deployment to EKS. They want to validate their Kubernetes manifests for security issues (like running as root, no resource limits) before deploying. Which approach should they take?

A) Manual review of manifests B) Use OPA (Open Policy Agent) or Kyverno policies integrated as a CodeBuild stage C) Use Amazon Inspector D) Use GuardDuty runtime monitoring

Answer

Answer: B

Explanation: OPA (Open Policy Agent) with conftest or Kyverno can validate Kubernetes manifests against policy rules (like requiring non-root users, resource limits, read-only filesystems) in a CodeBuild stage. If policies fail, the build fails and manifests are not deployed. This is shift-left security. Inspector (C) scans running workloads. GuardDuty (D) detects threats at runtime. Manual review (A) doesn't scale.
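The spirit of such policies can be sketched in plain Python (real pipelines would express these as Rego or Kyverno rules; the check names below are assumptions). The manifest fields follow the standard Kubernetes Pod spec.

```python
# Toy policy checks in the spirit of an OPA/conftest or Kyverno stage:
# containers must not run as root and must declare resource limits.
# Check wording is illustrative; field names follow the k8s Pod spec.

def violations(pod_spec: dict) -> list:
    problems = []
    for c in pod_spec.get("containers", []):
        sec = c.get("securityContext", {})
        if not sec.get("runAsNonRoot"):
            problems.append(f"{c['name']}: must set runAsNonRoot")
        if "limits" not in c.get("resources", {}):
            problems.append(f"{c['name']}: missing resource limits")
    return problems

pod = {"containers": [{"name": "web",
                       "securityContext": {"runAsNonRoot": True},
                       "resources": {"limits": {"cpu": "500m"}}}]}
print(violations(pod))  # []
```

In the CodeBuild stage, a non-empty violations list (or a failing `conftest test` / `kyverno apply` exit code) fails the build before anything reaches the cluster.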


Question 67

A company needs to implement a "break glass" procedure for production access. Normally no one has direct production access, but in emergencies, specific individuals can gain temporary elevated access. How should this be implemented?

A) Share the root account credentials B) Use IAM roles with time-limited assume role permissions via AWS STS C) Create permanent admin users for break glass scenarios D) Use AWS Organizations with SCP bypass

Answer

Answer: B

Explanation: Break glass access uses IAM roles that can be assumed for a limited duration via AWS STS. Access is tightly controlled (only specific users can assume the role), audited via CloudTrail, and time-limited (STS session duration). The role assumption can trigger SNS alerts. Sharing root credentials (A) is never acceptable. Permanent admin users (C) are always available, which defeats the "break glass" purpose. SCPs (D) apply to every principal in the account and cannot be selectively bypassed for emergencies.



Question 68

A company's CodeBuild project generates test reports. They want to view test results trend over time. Which CodeBuild feature provides this?

A) CloudWatch Logs B) CodeBuild Test Reports with report groups C) S3 artifact storage D) X-Ray tracing

Answer

Answer: B

Explanation: AWS CodeBuild Test Reports allow you to create report groups that aggregate test results from multiple builds over time. CodeBuild parses test result files (JUnit XML, Cucumber JSON, etc.) and displays pass/fail statistics, trends, and test durations in the CodeBuild console. CloudWatch Logs (A) store raw log output. S3 artifacts (C) store files but don't visualize test trends. X-Ray (D) is for distributed tracing.


Question 69

A company needs to implement Blue/Green deployments for their RDS database to test new schema changes before routing production traffic. Which service supports this for relational databases?

A) RDS Blue/Green Deployments B) RDS Multi-AZ failover C) RDS Read Replicas with DNS switching D) DMS for schema migration

Answer

Answer: A

Explanation: Amazon RDS Blue/Green Deployments is a feature that creates a staging environment (green) that is synchronized with the production database (blue). You can apply schema changes to the green environment, test them, and then perform a switchover with minimal downtime. RDS Multi-AZ (B) is for high availability failover. Read replicas (C) are read-only. DMS (D) is for migration between different database types.


Question 70

A team needs to implement a policy that prevents developers from deploying to production directly from their local machines. All deployments must go through the CI/CD pipeline. How should they enforce this?

A) Team policy and developer education B) IAM policies denying direct deployment actions + CodePipeline with IAM execution roles for deployments C) VPN-based access control D) Network ACLs

Answer

Answer: B

Explanation: IAM policies can deny developers the ability to directly invoke CodeDeploy, update ECS services, or push to ECR/EKS. The CodePipeline uses a dedicated service role that has the necessary permissions. This means deployments can only happen through the pipeline. Team policies (A) are not enforced technically. VPN (C) and NACLs (D) control network access, not deployment permissions.


Question 71

A company has a monorepo containing multiple services. They want their CI/CD pipeline to only build and deploy services that have changed files. How should they implement this?

A) Always build and deploy all services B) Use CodeBuild with a pre-build step that checks git diff to identify changed services C) Use separate repositories for each service D) Use Lambda functions to detect changes

Answer

Answer: B

Explanation: In a CodeBuild pre-build step, a script can run git diff to identify which directories (services) have changed since the last successful build. The script can then set environment variables or output files that control which subsequent build/deploy steps are executed. This is a common pattern for monorepo CI/CD optimization. Always building all services (A) wastes time and resources. Separate repos (C) changes the architecture significantly. Lambda detection (D) adds complexity.
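The change filter can be sketched as a function mapping changed file paths (as printed by `git diff --name-only`) to top-level service directories. The one-directory-per-service layout is an assumption for illustration.

```python
# Sketch of the monorepo change filter: map changed file paths to the
# top-level service directories that own them. Layout is assumed to be
# one directory per service.

def changed_services(changed_files, known_services):
    hit = set()
    for path in changed_files:
        top = path.split("/", 1)[0]
        if top in known_services:
            hit.add(top)
    return sorted(hit)

# e.g. the output of: git diff --name-only HEAD~1 HEAD
diff_output = ["orders/app.py", "orders/Dockerfile", "shared/README.md"]
print(changed_services(diff_output, {"orders", "billing"}))  # ['orders']
```

The pre-build step would export the resulting list (as environment variables or a file) so later buildspec phases build and deploy only those services.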


Question 72

A company needs to implement automated load testing as part of their deployment pipeline. After deploying to staging, they want to run load tests and only promote to production if latency stays below 200ms at P99. What should they implement?

A) Manual load testing before each deployment B) CodeBuild stage running a load testing tool (like Locust or k6) that fails if P99 latency exceeds 200ms C) CloudWatch alarms monitoring production metrics D) AWS Trusted Advisor checks

Answer

Answer: B

Explanation: A CodeBuild stage after staging deployment can run load testing tools like Locust, k6, or JMeter. The test script defines the load profile and assertions (P99 latency < 200ms). If assertions fail, CodeBuild returns a non-zero exit code, failing the pipeline stage and preventing promotion to production. Manual testing (A) is not automated. CloudWatch alarms (C) monitor production, not staging pre-deployment tests. Trusted Advisor (D) provides cost/performance recommendations.
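The gate itself reduces to a percentile check, sketched below with the nearest-rank method. The 200 ms budget mirrors the scenario; the sample latencies are made up, and real tools like k6 expose this as a built-in threshold rather than hand-rolled code.

```python
import math

# Sketch of the P99 gate a load-test stage applies: compute the 99th
# percentile of observed latencies and fail when it exceeds the budget.
# Sample data is invented; k6/Locust provide this as built-in thresholds.

def p99(latencies_ms):
    ordered = sorted(latencies_ms)
    rank = math.ceil(0.99 * len(ordered))  # nearest-rank percentile
    return ordered[rank - 1]

samples = [120] * 98 + [350, 400]  # two slow outliers out of 100 requests
budget_ms = 200
result = p99(samples)
print(result, "FAIL" if result > budget_ms else "PASS")
```

In the pipeline, the FAIL branch would exit non-zero so the CodeBuild stage fails and the production deploy stage never runs.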


Question 73

A company needs to implement drift detection for their CloudFormation stacks and automatically remediate when drift is detected. What is the BEST approach?

A) Manually check for drift weekly B) CloudWatch Events (EventBridge) rule that triggers a Lambda to detect drift and initiate remediation via CodePipeline C) AWS Config rules for CloudFormation resources D) Enable CloudFormation termination protection

Answer

Answer: B

Explanation: An automated solution: EventBridge scheduled rule triggers a Lambda function that initiates CloudFormation drift detection for all stacks. If drift is detected, the Lambda triggers CodePipeline to re-deploy the CloudFormation stack (bringing it back to desired state). This is fully automated. Manual checks (A) are not continuous. Config rules (C) can detect drift but aren't specifically designed for CloudFormation stack drift. Termination protection (D) prevents deletion, not drift.


Question 74

A company needs to implement feature flags that can be changed in production without redeployment. The flags should take effect within 60 seconds of being changed. Which is the BEST solution?

A) Environment variables in ECS task definitions (requires redeployment) B) AWS AppConfig with an agent that polls for changes every 45 seconds C) SSM Parameter Store polled every 60 seconds D) DynamoDB with a Lambda function checking every minute

Answer

Answer: B

Explanation: AWS AppConfig is specifically designed for dynamic configuration delivery. The AppConfig agent (or SDK) polls for configuration changes and can be configured to check every 45 seconds, ensuring changes take effect within 60 seconds. AppConfig also supports deployment strategies and validation. ECS environment variables (A) require task restarts. SSM polling (C) works but lacks AppConfig's validation and deployment strategy features. DynamoDB+Lambda (D) requires custom code and has scheduling lag.


Question 75

A DevOps team needs to implement a solution that automatically tags all AWS resources created by their CI/CD pipeline with the pipeline name, commit ID, and deployment timestamp. Which approach is MOST comprehensive?

A) Manually add tags to each resource B) Use CloudFormation tagging at the stack level and propagate to resources C) Use a combination of CloudFormation stack tags + CodePipeline environment variables passed to CloudFormation parameters + Tag policies in AWS Organizations D) Use Lambda to apply tags after resource creation via CloudTrail events

Answer

Answer: C

Explanation: The most comprehensive approach: (1) CloudFormation stack tags automatically propagate to resources created by the stack, (2) CodePipeline passes commit ID and pipeline name as CloudFormation parameters used as tags, (3) AWS Organizations tag policies enforce tag standards. This handles tagging at the pipeline source, propagation through CloudFormation, and governance through Organizations. Manual tagging (A) doesn't scale. Lambda post-creation tagging (D) has race conditions and misses some resources.


Practice exam complete. Review all domains: SDLC Automation, Configuration Management and IaC, Monitoring and Logging, Policies and Standards Automation, Incident and Event Response, and High Availability and Fault Tolerance.