Authors: Youngju Kim (@fjvbn20031)
AWS Solutions Architect Professional (SAP-C02) Practice Exam: 75 Questions
This practice exam covers the advanced topics of the AWS SAP-C02 exam including organizational design, multi-account strategies, hybrid connectivity, cost optimization, and complex architectural patterns.
Question 1
A company has 200 AWS accounts and wants to implement a centralized logging strategy. All CloudTrail logs should be consolidated in a dedicated logging account and be tamper-proof. Which architecture achieves this?
A) Enable CloudTrail in each account and export to S3 in each account
B) Use CloudTrail organization trail with logs delivered to an S3 bucket in a dedicated logging account with S3 Object Lock
C) Use CloudWatch Logs Cross-Account Observability
D) Use AWS Config aggregator in the logging account
Answer
Answer: B
Explanation: An AWS Organizations CloudTrail organization trail automatically enables CloudTrail in all accounts and delivers all events to a single S3 bucket in the management/logging account. Combining this with S3 Object Lock in compliance mode makes the logs tamper-proof (cannot be deleted or modified). Cross-Account CloudWatch Observability (C) is for metrics/logs, not specifically CloudTrail. Config aggregator (D) is for resource configuration data.
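The tamper-proof requirement maps to a bucket-level Object Lock configuration. A minimal sketch of that configuration as a Python dict, mirroring the shape boto3's `put_object_lock_configuration` expects; the retention period is an assumed example:

```python
# Sketch: S3 Object Lock settings for the central CloudTrail log bucket.
# The retention period is hypothetical; adjust to your compliance mandate.

def object_lock_config(retention_days: int) -> dict:
    """Compliance mode: objects cannot be deleted or overwritten by any
    user, including the root user, until the retention period expires."""
    return {
        "ObjectLockEnabled": "Enabled",
        "Rule": {
            "DefaultRetention": {
                "Mode": "COMPLIANCE",  # unlike GOVERNANCE, cannot be bypassed
                "Days": retention_days,
            }
        },
    }

config = object_lock_config(365)
print(config["Rule"]["DefaultRetention"]["Mode"])  # COMPLIANCE
```

Note that Object Lock must be enabled when the bucket is created; it cannot be retrofitted onto an existing log bucket.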
Question 2
A financial company needs to migrate on-premises workloads to AWS. They have strict compliance requirements that certain data must never leave a specific AWS Region. How should they enforce this?
A) Use IAM policies with resource-based conditions
B) Implement Service Control Policies (SCPs) denying actions outside specific regions
C) Use VPC endpoint policies
D) Configure CloudTrail to alert on cross-region activity
Answer
Answer: B
Explanation: SCPs in AWS Organizations can deny all actions in all regions except the allowed regions using the aws:RequestedRegion condition key. This provides a preventive guardrail that even account administrators cannot bypass. IAM policies (A) must be applied per user/role and can be overridden by account admins. VPC endpoint policies (C) restrict VPC traffic. CloudTrail alerting (D) is detective, not preventive.
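A sketch of such a region guardrail SCP, written as a Python dict. The region list is illustrative, and real deployments exempt global services (IAM, CloudFront, etc.) via `NotAction`, as shown:

```python
# Sketch: SCP denying every action outside approved regions.
# Region list and exempted services are illustrative examples.

APPROVED_REGIONS = ["eu-west-1", "eu-central-1"]

region_guardrail_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideApprovedRegions",
            "Effect": "Deny",
            "NotAction": [
                # Global services that must remain callable from any region
                "iam:*", "organizations:*", "route53:*",
                "cloudfront:*", "support:*",
            ],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": APPROVED_REGIONS}
            },
        }
    ],
}
```

Attached at the root or an OU, this denies requests targeting any non-approved region regardless of the caller's IAM permissions.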
Question 3
A company runs a global application across 3 AWS regions. They need a global database that provides low-latency reads in each region and handles failover automatically. The application is read-heavy (90% reads). What should they use?
A) Amazon RDS Multi-AZ with cross-region read replicas
B) Amazon Aurora Global Database
C) Amazon DynamoDB Global Tables
D) Amazon ElastiCache Global Datastore
Answer
Answer: B
Explanation: Amazon Aurora Global Database provides a single Aurora database spanning multiple regions with sub-second read replication lag and automatic failover in under 1 minute. It's designed for this exact use case - global reads with high availability. Aurora Global Database is specifically for relational workloads. DynamoDB Global Tables (C) is for NoSQL workloads. ElastiCache Global Datastore (D) is for caching, not primary database storage.
Question 4
A company uses AWS Organizations with 50 accounts. They need to ensure all accounts use only approved EC2 instance types to control costs. How should they implement this with minimal operational overhead?
A) Create IAM policies in each account restricting instance types
B) Use an SCP that denies launching non-approved instance types
C) Use AWS Budgets with instance type filters
D) Use AWS Config rules in each account
Answer
Answer: B
Explanation: An SCP can deny ec2:RunInstances when the instance type is not in the approved list using the ec2:InstanceType condition. This applies to all accounts in the OU/Organization with a single policy, requiring no per-account configuration. IAM policies (A) must be managed in each account. AWS Budgets (C) monitors costs but doesn't prevent launches. Config rules (D) are detective, not preventive, and require management in each account.
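The instance-type guardrail can be sketched the same way, with a toy evaluator illustrating how `StringNotEquals` against a list behaves. The approved list is hypothetical:

```python
# Sketch: SCP denying ec2:RunInstances for non-approved instance types,
# plus a toy check of the condition's semantics. Types are examples.

APPROVED_TYPES = ["t3.micro", "t3.small", "m5.large"]

instance_type_scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnapprovedInstanceTypes",
        "Effect": "Deny",
        "Action": "ec2:RunInstances",
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {
            "StringNotEquals": {"ec2:InstanceType": APPROVED_TYPES}
        },
    }],
}

def launch_denied(instance_type: str) -> bool:
    """StringNotEquals against a list matches (so the Deny applies)
    when the value equals none of the listed values."""
    cond = instance_type_scp["Statement"][0]["Condition"]["StringNotEquals"]
    return instance_type not in cond["ec2:InstanceType"]

print(launch_denied("m5.large"))      # False: approved, launch allowed
print(launch_denied("p4d.24xlarge"))  # True: blocked by the SCP
```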
Question 5
A company needs to migrate a large Oracle database to AWS. The source database is 10 TB and they have a 1 Gbps Direct Connect connection. The migration must complete in 5 days with minimal downtime. What strategy should they use?
A) Use AWS Snowball Edge to transfer data, then AWS DMS for ongoing replication
B) Use AWS DMS full-load migration over Direct Connect
C) Use pg_dump/pg_restore over Direct Connect
D) Use AWS DataSync to migrate Oracle data
Answer
Answer: A
Explanation: For 10 TB over 1 Gbps, transfer would take about 22 hours - feasible but risky. Using Snowball Edge for the initial bulk data transfer followed by AWS DMS CDC (Change Data Capture) for ongoing replication is the safest approach. The Snowball transfers the bulk data, then DMS syncs changes. When changes are caught up, the cutover is minimal. DMS full-load over Direct Connect (B) is possible but riskier. pg_dump (C) is for PostgreSQL, not Oracle. DataSync (D) is for file transfers.
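The 22-hour figure is easy to verify. This sketch assumes decimal units and 100% link utilization, which real migrations never achieve over a shared Direct Connect link; that gap is why the Snowball-plus-CDC approach adds a safety margin:

```python
# Back-of-the-envelope check of "10 TB over 1 Gbps takes about 22 hours".

def transfer_hours(terabytes: float, gbps: float) -> float:
    bits = terabytes * 1e12 * 8      # TB -> bits (decimal units)
    seconds = bits / (gbps * 1e9)    # bits / (bits per second)
    return seconds / 3600

hours = transfer_hours(10, 1.0)
print(round(hours, 1))  # 22.2
```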
Question 6
A company has a hub-and-spoke network with a Transit Gateway. They need to prevent spoke VPCs from communicating directly with each other, only allowing communication through the hub (inspection VPC). How should they configure this?
A) Use Transit Gateway route tables with separate route tables for each spoke
B) Use VPC peering between hub and spoke VPCs
C) Create Transit Gateway with separate route tables per domain and use the inspection VPC as a hub with security appliances
D) Use AWS PrivateLink for all inter-VPC communication
Answer
Answer: C
Explanation: This is the segmentation architecture for Transit Gateway. Create separate route tables: one for spokes (pointing default route to inspection VPC attachment) and one for the hub/inspection VPC (with routes back to all spokes). This forces all spoke-to-spoke traffic through the inspection VPC where security appliances (NGFW, IDS/IPS) are deployed. VPC peering (B) doesn't allow transit routing. PrivateLink (D) is for service connectivity.
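The two route tables can be sketched as plain dicts with a toy next-hop lookup. Attachment IDs and CIDRs are hypothetical:

```python
# Sketch of the Transit Gateway segmentation route tables described above.
# Attachment IDs and spoke CIDRs are hypothetical examples.

SPOKE_A = "tgw-attach-spokeA"
SPOKE_B = "tgw-attach-spokeB"
INSPECTION = "tgw-attach-inspection"

# Spoke route table: only a default route, pointing at the inspection VPC.
spoke_route_table = {
    "associations": [SPOKE_A, SPOKE_B],
    "routes": {"0.0.0.0/0": INSPECTION},
}

# Inspection route table: routes back to each spoke CIDR.
inspection_route_table = {
    "associations": [INSPECTION],
    "routes": {"10.1.0.0/16": SPOKE_A, "10.2.0.0/16": SPOKE_B},
}

def next_hop(table: dict, dest_cidr: str) -> str:
    # Toy lookup: exact match first, else fall back to the default route
    return table["routes"].get(dest_cidr, table["routes"].get("0.0.0.0/0"))

# Spoke A sending to spoke B's CIDR is steered through inspection:
print(next_hop(spoke_route_table, "10.2.0.0/16"))  # tgw-attach-inspection
```

Because the spoke table carries no spoke-to-spoke routes, direct spoke communication is impossible; traffic can only transit the inspection hub.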
Question 7
A media company needs to process video uploads. Videos vary in size from 100 MB to 50 GB. Processing time varies from 5 minutes to 8 hours. They need a cost-effective, scalable architecture. What should they use?
A) AWS Lambda for all video processing
B) AWS Batch with EC2 Spot Instances for processing jobs, triggered by SQS
C) EC2 Auto Scaling Group with On-Demand instances
D) AWS Fargate tasks triggered by S3 events
Answer
Answer: B
Explanation: AWS Batch is designed for long-running compute jobs. Using Spot Instances reduces cost by up to 90%. SQS decouples the upload trigger from processing. Batch automatically manages the compute infrastructure. Lambda (A) has a 15-minute timeout limit - unsuitable for 8-hour jobs. EC2 Auto Scaling (C) with On-Demand is more expensive. Fargate (D) is good but doesn't provide the same cost optimization as Batch with Spot.
Question 8
A company is implementing a multi-account strategy. They want to separate billing, security, logging, and workloads into dedicated accounts. What is the recommended AWS Organizations structure?
A) All accounts in a single flat OU under the management account
B) Management account > Security OU (security tooling, logging) + Workloads OU (dev, staging, prod) with separate SCPs
C) Separate Organizations for each department
D) A single account with resource tagging for separation
Answer
Answer: B
Explanation: The AWS recommended Landing Zone structure uses: Management Account (for billing and org management only), Security OU (containing security tooling account and logging account), and Workloads OU (containing dev/staging/prod accounts). Each OU has tailored SCPs. Using separate Organizations (C) prevents centralized governance. A single account (D) doesn't provide isolation. A flat structure (A) doesn't allow differentiated SCP application.
Question 9
A company runs microservices on ECS. Service discovery is needed - services must be able to call each other using stable DNS names. Which solution should they implement?
A) Hard-code IP addresses in service configurations
B) Use Route 53 private hosted zones with manual DNS registration
C) Use AWS Cloud Map with ECS service discovery integration
D) Use an Application Load Balancer for all inter-service communication
Answer
Answer: C
Explanation: AWS Cloud Map provides service discovery for AWS resources. ECS integrates natively with Cloud Map to automatically register/deregister service instances as they start/stop. Services can discover each other using DNS names through Cloud Map. Route 53 private hosted zones (B) require manual registration. Hard-coding IPs (A) breaks when tasks restart. ALB (D) for all inter-service communication adds latency and cost.
Question 10
A company needs to implement a data lake that ingests streaming data, batch data, and enables multiple access patterns (SQL queries, ML training, BI dashboards). What architecture should they use?
A) Store everything in a single RDS database
B) Use S3 as the data lake storage with Kinesis Firehose for streaming, Glue for ETL, Athena for SQL, and SageMaker for ML
C) Use Redshift as the central data store
D) Use a single DynamoDB table with different indexes
Answer
Answer: B
Explanation: The modern AWS data lake architecture uses S3 as the foundation (separate zones: raw, processed, curated). Kinesis Firehose handles streaming ingestion, Glue handles batch ETL and cataloging, Athena provides serverless SQL, SageMaker accesses S3 for ML. This architecture separates storage from compute and supports multiple access patterns without moving data. Redshift (C) is optimized for structured data warehousing. DynamoDB (D) is for operational NoSQL workloads.
Question 11
A company has a hybrid architecture with on-premises servers and AWS. They need to ensure on-premises servers can resolve AWS Route 53 private hosted zone DNS names. What should they implement?
A) Direct all on-premises DNS to Route 53 public resolver
B) Use Route 53 Resolver inbound endpoints to allow on-premises DNS to query Route 53
C) Route 53 Resolver outbound endpoints to forward on-premises DNS queries
D) Copy all private DNS records to on-premises DNS servers
Answer
Answer: B
Explanation: Route 53 Resolver inbound endpoints are IP addresses in the VPC that on-premises DNS servers can forward queries to. When on-premises DNS queries for a Route 53 private hosted zone name, it forwards to the inbound endpoint IP, which resolves using Route 53. Outbound endpoints (C) are for forwarding AWS DNS queries to on-premises DNS. Public resolver (A) doesn't resolve private hosted zone names.
Question 12
A company stores PII data in S3 and needs to automatically discover, classify, and protect this data across hundreds of S3 buckets. Which service automates this?
A) AWS Config
B) Amazon Macie
C) Amazon Inspector
D) AWS Security Hub
Answer
Answer: B
Explanation: Amazon Macie uses ML to automatically discover, classify, and protect sensitive data in S3, including PII (names, credit card numbers, SSNs, etc.). It continuously monitors S3 and generates findings. Macie provides data discovery and classification at scale. AWS Config (A) monitors resource configurations. Inspector (C) scans for vulnerabilities. Security Hub (D) aggregates findings.
Question 13
A company runs a critical application with a 99.99% availability requirement. The application has a stateful component that requires in-memory caching. What architecture provides the highest availability for the caching layer?
A) Amazon ElastiCache Redis single node
B) Amazon ElastiCache Redis Multi-AZ with automatic failover
C) Amazon ElastiCache Redis Global Datastore (multi-region)
D) Local in-memory cache on EC2 instances
Answer
Answer: C
Explanation: For 99.99% availability (requiring multi-region), ElastiCache Redis Global Datastore provides replication across multiple AWS regions with sub-second replication lag and automatic failover. A single node (A) has no redundancy. Multi-AZ (B) provides ~99.95% availability within a region. Global Datastore (C) provides the highest availability by spanning regions. Local in-memory cache (D) is not shared across instances.
Question 14
A company needs to implement a cost-effective strategy for running dev/test workloads. Dev instances should automatically stop at 7 PM and start at 9 AM on weekdays. What should they implement?
A) Manual start/stop with developer reminders
B) AWS Instance Scheduler (solution from AWS Solutions Library)
C) CloudWatch Alarms with Auto Scaling
D) EC2 Auto Scaling scheduled actions
Answer
Answer: B
Explanation: AWS Instance Scheduler is an AWS solution that automatically starts and stops EC2 and RDS instances on a defined schedule. It supports multiple schedules, time zones, and can manage instances across multiple accounts and regions. EC2 Auto Scaling scheduled actions (D) work for ASGs but not standalone instances. Manual start/stop (A) is unreliable. CloudWatch alarms (C) are for metric-based actions, not time-based.
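The schedule logic itself is simple; a toy version of the rule the scheduler applies (run 9 AM to 7 PM on weekdays), using naive local time for brevity where the real solution is timezone-aware:

```python
# Toy version of the Instance Scheduler rule: instances run 09:00-19:00
# on weekdays and are stopped otherwise. Naive local time for simplicity.
from datetime import datetime

def should_run(now: datetime) -> bool:
    is_weekday = now.weekday() < 5   # Mon=0 .. Fri=4
    in_hours = 9 <= now.hour < 19    # 9 AM inclusive, 7 PM exclusive
    return is_weekday and in_hours

print(should_run(datetime(2024, 5, 6, 10, 0)))   # Monday 10:00 -> True
print(should_run(datetime(2024, 5, 6, 19, 30)))  # Monday 19:30 -> False
print(should_run(datetime(2024, 5, 4, 10, 0)))   # Saturday -> False
```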
Question 15
A large enterprise is migrating to AWS. They have 500+ on-premises applications. The migration team needs to prioritize applications based on complexity and interdependencies. Which approach and tool should they use?
A) Manually document each application
B) Use AWS Migration Hub with application discovery and dependency analysis
C) Use AWS Well-Architected Tool
D) Use Amazon Inspector to scan applications
Answer
Answer: B
Explanation: AWS Migration Hub provides a central location to track migration progress. AWS Application Discovery Service (part of Migration Hub) collects data on on-premises servers and applications, including performance data and network connections (dependencies). This data helps prioritize migrations. The Well-Architected Tool (C) evaluates architectures against best practices. Inspector (D) scans for security vulnerabilities.
Question 16
A company needs to implement a data sovereignty solution. Customer data from EU customers must only be stored and processed in EU regions. How should they architect this?
A) Use a single global AWS account with data stored in EU regions
B) Implement separate AWS accounts for EU customers with SCPs restricting data to EU regions, and a data residency enforcement strategy
C) Use CloudFront to route EU traffic to EU origins
D) Use IAM conditions to restrict data storage to EU regions
Answer
Answer: B
Explanation: Separate AWS accounts for EU customers, with SCPs that deny any request whose aws:RequestedRegion is not an EU region, enforce data sovereignty at the organizational policy level. This prevents even admin users from accidentally storing data outside EU regions. A single account (A) relies on IAM controls, which are hard to enforce consistently. CloudFront routing (C) handles traffic but not data storage. IAM conditions (D) must be in every policy and can be missed.
Question 17
A company has a Direct Connect connection from their data center. They want to use Direct Connect for primary connectivity and Site-to-Site VPN as backup. How should they configure the BGP routing?
A) Set equal BGP metrics for both connections
B) Prefer Direct Connect by setting a higher BGP MED on the VPN; use BGP route preference
C) Use Route 53 health checks to switch between connections
D) Configure static routes for failover
Answer
Answer: B
Explanation: For Direct Connect primary with VPN backup, configure BGP so the Direct Connect path is preferred. By default, AWS prefers routes learned over Direct Connect to identical routes learned over a Site-to-Site VPN. On the on-premises side, use AS-PATH prepending or a higher MED (Multi-Exit Discriminator) on the VPN advertisement to make the backup path less preferred in the return direction. Route 53 health checks (C) provide DNS-based failover, not network-path failover. Static routes (D) don't provide dynamic failover.
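A toy model of why AS-PATH prepending demotes the VPN: with other attributes equal, BGP prefers the shorter AS-PATH. The ASNs are hypothetical:

```python
# Toy BGP-style path selection: with equal local preference, the shorter
# AS-PATH wins, so prepending the site ASN onto the VPN advertisement
# demotes it to a backup path. ASNs are hypothetical.

def best_path(routes: dict) -> str:
    """routes: path name -> AS-path (list of ASNs). Returns the winner."""
    return min(routes, key=lambda name: len(routes[name]))

routes = {
    "direct-connect": [65001],
    "vpn-backup": [65001, 65001, 65001],  # prepended twice to deprioritize
}
print(best_path(routes))  # direct-connect

# If the Direct Connect route is withdrawn, the VPN path takes over:
print(best_path({"vpn-backup": [65001, 65001, 65001]}))  # vpn-backup
```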
Question 18
A company uses AWS Organizations and wants to provide centralized DNS for all accounts. All accounts should be able to resolve internal domain names. What is the recommended architecture?
A) Deploy Route 53 private hosted zones in each account
B) Share Route 53 Resolver rules using AWS RAM; centralize DNS in a shared services account
C) Use public hosted zones for all internal DNS
D) Deploy DNS servers on EC2 in each account
Answer
Answer: B
Explanation: The recommended architecture: (1) Create Route 53 Resolver rules (with outbound endpoints) in a shared services account that forward queries for on-premises domains to on-premises DNS. (2) Associate the central private hosted zones with VPCs in each account. (3) Share the Resolver rules via AWS RAM so all accounts can use them. This centralizes DNS management while making resolution available across all accounts. Per-account Route 53 deployment (A) duplicates management. EC2 DNS servers (D) require infrastructure management.
Question 19
A company runs a multi-tenant SaaS application. Each tenant's data is in a separate DynamoDB table with a tenant-specific prefix. The application uses Cognito for authentication. How should they implement tenant isolation using IAM?
A) Create separate AWS accounts per tenant
B) Use Cognito User Pools with IAM roles mapped to Cognito groups, using IAM policy conditions with dynamodb:LeadingKeys
C) Use DynamoDB resource-based policies per table
D) Implement application-level tenant isolation without IAM
Answer
Answer: B
Explanation: Cognito Identity Pools can map authenticated users to IAM roles based on their Cognito group membership (tenant group). IAM policies can restrict DynamoDB access using the dynamodb:LeadingKeys condition, limiting access to only items with the tenant's prefix. This implements fine-grained access control at the IAM level. Separate accounts per tenant (A) is too expensive for many tenants. Application-level isolation (D) relies on application code correctness.
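A sketch of the per-tenant policy as a Python dict. The table ARN and the "tenant-a#" prefix are hypothetical; in practice the prefix is usually injected via a policy variable from the Cognito identity rather than hard-coded:

```python
# Sketch: IAM policy restricting DynamoDB access to one tenant's items
# via dynamodb:LeadingKeys. Table ARN and prefix are hypothetical.

def tenant_policy(table_arn: str, tenant_prefix: str) -> dict:
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:PutItem"],
            "Resource": table_arn,
            "Condition": {
                # Every partition key touched must start with the tenant prefix
                "ForAllValues:StringLike": {
                    "dynamodb:LeadingKeys": [f"{tenant_prefix}*"]
                }
            },
        }],
    }

policy = tenant_policy(
    "arn:aws:dynamodb:us-east-1:111122223333:table/SaasData", "tenant-a#"
)
print(policy["Statement"][0]["Condition"]["ForAllValues:StringLike"])
```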
Question 20
A company has a distributed application that uses multiple queues and topics. Messages flow: Producer → SNS → SQS → Consumer. They're seeing message duplication in the consumer. How should they handle idempotency?
A) Switch to SQS FIFO queues for exactly-once processing
B) Use DynamoDB to track processed message IDs with conditional writes for idempotency
C) Remove SNS and use SQS directly
D) Use SQS message timers to reduce duplication
Answer
Answer: B
Explanation: Using DynamoDB with conditional writes to track message IDs implements idempotent processing. Before processing a message, check if the ID exists in DynamoDB; if it does, skip processing. The conditional write ensures only one instance processes each message even with at-least-once delivery. SQS FIFO (A) provides exactly-once within the queue but messages from SNS may still be duplicated. Removing SNS (C) changes the architecture unnecessarily. Message timers (D) don't prevent duplication.
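A minimal simulation of the dedup pattern, using an in-memory set as a stand-in for DynamoDB's `attribute_not_exists` conditional PutItem (a real consumer would also put a TTL on the dedup records):

```python
# Simulation of the idempotent-consumer pattern. claim() stands in for a
# DynamoDB conditional write that raises ConditionalCheckFailedException
# for every caller after the first.

processed: set[str] = set()
results: list[str] = []

def claim(message_id: str) -> bool:
    """Returns True only for the first caller with this ID."""
    if message_id in processed:
        return False
    processed.add(message_id)
    return True

def handle(message_id: str, body: str) -> None:
    if not claim(message_id):
        return  # duplicate delivery - skip without side effects
    results.append(body)

handle("msg-1", "charge card")
handle("msg-1", "charge card")  # at-least-once redelivery
print(results)  # ['charge card'] - processed exactly once
```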
Question 21
A company needs to implement a CI/CD pipeline that deploys a microservices application to multiple AWS regions. The deployment must be sequential (primary region first, then secondary) and include automated testing. What architecture achieves this?
A) Separate CodePipelines per region with manual triggers
B) A single CodePipeline with sequential deployment stages for each region, with CodeBuild testing gates between stages
C) AWS CodeDeploy with multi-region capability
D) Use Lambda to coordinate deployments across regions
Answer
Answer: B
Explanation: A single CodePipeline with sequential stages achieves ordered deployment: deploy to primary region → run automated tests → deploy to secondary region → run tests. The pipeline can assume cross-account/cross-region roles to deploy. Testing gates between stages prevent promotion if tests fail. Separate pipelines per region (A) require manual coordination. CodeDeploy (C) doesn't provide the pipeline orchestration needed. Lambda coordination (D) adds custom complexity.
Question 22
A company uses Amazon EKS with multiple node groups. They want to ensure that production workloads always run on dedicated nodes and are not preempted by other workloads. What Kubernetes features should they use?
A) Resource quotas only
B) Node affinity rules with dedicated node groups + Kubernetes taints and tolerations
C) Pod priority and preemption
D) Horizontal Pod Autoscaler
Answer
Answer: B
Explanation: Taints on dedicated production nodes (e.g., dedicated=production:NoSchedule) prevent non-production pods from scheduling there. Tolerations on production pods allow them to schedule on tainted nodes. Node affinity rules ensure production pods prefer/require production nodes. This combination ensures production workload isolation. Resource quotas (A) limit resource consumption but don't prevent scheduling on same nodes. HPA (D) scales pods but doesn't control placement.
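A toy model of the scheduling rule: a pod may land on a tainted node only if it tolerates every NoSchedule taint. The key/value names are the hypothetical ones from the explanation:

```python
# Toy taint/toleration check mirroring the Kubernetes NoSchedule rule.
# Key and value names ("dedicated=production") are hypothetical.

def tolerates(taints: list[dict], tolerations: list[dict]) -> bool:
    for taint in taints:
        if taint["effect"] != "NoSchedule":
            continue
        matched = any(
            t["key"] == taint["key"] and t.get("value") == taint.get("value")
            for t in tolerations
        )
        if not matched:
            return False  # pod is repelled by this taint
    return True

prod_node_taints = [{"key": "dedicated", "value": "production", "effect": "NoSchedule"}]
prod_pod_tolerations = [{"key": "dedicated", "value": "production", "effect": "NoSchedule"}]
batch_pod_tolerations: list[dict] = []  # no tolerations

print(tolerates(prod_node_taints, prod_pod_tolerations))   # True
print(tolerates(prod_node_taints, batch_pod_tolerations))  # False
```

Tolerations only permit scheduling; the node affinity rules mentioned above are what actively steer production pods onto the dedicated nodes.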
Question 23
A company wants to implement a "FinOps" practice. They need to allocate costs to business units, set budgets per team, and get recommendations for cost optimization. What AWS services should they use?
A) AWS Billing console only
B) AWS Cost Allocation Tags + AWS Cost Explorer + AWS Budgets + AWS Compute Optimizer
C) AWS CloudTrail cost tracking
D) AWS Trusted Advisor only
Answer
Answer: B
Explanation: The FinOps practice on AWS uses: Cost Allocation Tags to tag resources by business unit/team, Cost Explorer to analyze spending by tags/services, Budgets to set alerts when spending exceeds thresholds per team/project, and Compute Optimizer to identify rightsizing opportunities. CloudTrail (C) is for API logging. Trusted Advisor (D) provides some cost checks but isn't comprehensive for FinOps. The AWS Billing console alone (A) doesn't provide granular allocation.
Question 24
A company needs to build a real-time fraud detection system. Transactions come in at 10,000 per second. Each transaction must be evaluated against fraud rules in milliseconds. What architecture handles this?
A) Store transactions in RDS and run batch fraud detection jobs
B) Kinesis Data Streams → Lambda for rule evaluation → DynamoDB for results + ElastiCache for feature lookup
C) SQS → EC2 fleet for detection
D) API Gateway → RDS for fraud checking
Answer
Answer: B
Explanation: Kinesis Data Streams can handle 10,000+ events per second. Lambda (triggered by Kinesis) evaluates fraud rules in real-time. ElastiCache provides sub-millisecond feature lookup (account history, velocity checks). DynamoDB stores results with high throughput. This architecture handles the scale and latency requirements. Batch processing (A) doesn't meet millisecond requirements. SQS → EC2 (C) has higher operational overhead. API Gateway → RDS (D) doesn't handle the throughput.
Question 25
A company has multiple AWS accounts and wants to enforce a standard set of security controls across all accounts. New accounts should automatically have security controls applied. What service orchestrates this?
A) AWS CloudFormation in each account
B) AWS Control Tower
C) AWS Config in each account
D) Manual account setup
Answer
Answer: B
Explanation: AWS Control Tower automates the setup of a multi-account AWS environment based on best practices. It provisions accounts with pre-configured security controls (guardrails), sets up a landing zone, and ensures new accounts (vended through Account Factory) are automatically provisioned with required controls. CloudFormation (A) in each account doesn't handle account vending automation. Config (C) monitors but doesn't automate account setup. Manual setup (D) doesn't scale.
Question 26
A company runs a stateful application on ECS that requires persistent connections to a database. During ECS service updates (rolling deployments), database connection pool exhaustion occurs. What is the BEST solution?
A) Increase the database instance size
B) Add RDS Proxy to manage connection pooling and handle graceful connection management during ECS rollouts
C) Use ECS blue/green deployments only
D) Increase the database max connections
Answer
Answer: B
Explanation: RDS Proxy maintains persistent connection pools to the database and multiplexes application connections. During rolling ECS updates, new tasks connect to the Proxy while old tasks drain - the Proxy handles the connection management transparently. It prevents connection exhaustion during deployments by pooling and reusing connections. Increasing DB size (A) or max connections (D) are temporary fixes. Blue/green (C) switches all at once, which can cause a connection spike.
Question 27
A company has a complex VPC architecture with many subnets and route tables. After a network change, connectivity is broken. How should they troubleshoot this systematically?
A) Check Security Groups first, then Network ACLs
B) Use VPC Reachability Analyzer to test specific paths and identify the blocking component
C) Enable VPC Flow Logs and wait for data
D) Use AWS CloudTrail to see what changed
Answer
Answer: B
Explanation: VPC Reachability Analyzer is a network diagnostic tool that analyzes the network configuration between two endpoints without generating actual traffic. It identifies exactly which component (security group rule, route table, NACL, etc.) is blocking connectivity and explains why. Security group checks (A) are manual and time-consuming. Flow Logs (C) require traffic to be generated. CloudTrail (D) shows what changed but doesn't diagnose current connectivity.
Question 28
A company uses S3 for storing large datasets. Analysts frequently query the same subsets of data. They want to reduce Athena query costs. What should they implement?
A) Copy frequently queried data to a separate S3 bucket
B) Use S3 Intelligent-Tiering for all objects
C) Partition the data properly and use Athena query result caching + AWS Glue partition indexes
D) Move data to Redshift
Answer
Answer: C
Explanation: Athena charges based on data scanned. Proper partitioning (by date, region, etc.) reduces the data scanned per query. Athena result caching reuses query results for identical queries. Glue partition indexes speed up partition pruning. S3 Intelligent-Tiering (B) reduces storage costs but doesn't reduce Athena query costs. Moving to Redshift (D) changes the architecture significantly. Copying data (A) increases storage costs.
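A toy illustration of partition pruning: with date partitions encoded in the S3 key layout (Hive-style `dt=` prefixes), a query filtering on the partition column only scans the matching prefixes. Bucket and table names are hypothetical:

```python
# Toy partition-pruning demo. Only partitions matching the predicate
# are scanned, which is what cuts Athena's per-query cost.

objects = [
    "s3://lake/sales/dt=2024-05-01/part-0.parquet",
    "s3://lake/sales/dt=2024-05-02/part-0.parquet",
    "s3://lake/sales/dt=2024-05-03/part-0.parquet",
]

def pruned(keys: list[str], dt: str) -> list[str]:
    """Return only the objects in the partition the query filters on."""
    return [k for k in keys if f"dt={dt}/" in k]

scanned = pruned(objects, "2024-05-02")
print(len(scanned), "of", len(objects), "files scanned")  # 1 of 3 files scanned
```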
Question 29
A company needs to migrate from a monolithic Oracle application to a microservices architecture on AWS. The migration should minimize risk and allow rollback. What migration pattern should they use?
A) Big-bang: rewrite everything at once
B) Strangler Fig pattern: incrementally replace monolith functionality with microservices
C) Lift-and-shift the monolith first, then refactor
D) Run monolith and microservices forever in parallel
Answer
Answer: B
Explanation: The Strangler Fig pattern involves gradually replacing parts of the monolith with new microservices. New functionality is built as microservices, and existing functionality is migrated piece by piece. A facade (such as API Gateway) routes each request to either the monolith or a microservice. This minimizes risk because traffic can always be routed back to the monolith if a new service fails. Big-bang (A) carries high risk. Lift-and-shift then refactor (C) is a reasonable two-phase approach but by itself doesn't provide the incremental, reversible cutover the question asks for. Running both forever (D) doubles operational costs.
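A toy facade router for the pattern: migrated paths go to microservices, everything else still hits the monolith, and rollback is just removing a route entry. Route and service names are hypothetical:

```python
# Toy Strangler Fig facade: routes migrated so far go to microservices,
# all remaining traffic falls through to the monolith. Names are examples.

MIGRATED = {"/orders": "orders-service", "/inventory": "inventory-service"}

def route(path: str) -> str:
    for prefix, service in MIGRATED.items():
        if path.startswith(prefix):
            return service
    return "monolith"  # default: not yet strangled

print(route("/orders/42"))  # orders-service
print(route("/billing/7"))  # monolith - not yet migrated
```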
Question 30
A company has strict encryption requirements. All data at rest must be encrypted with customer-managed keys. The key rotation must be automatic and auditable. How should they implement this?
A) Use AWS-managed KMS keys (aws/s3, etc.)
B) Use Customer Managed KMS Keys with automatic key rotation enabled and CloudTrail logging of all key usage
C) Use SSE-C (customer-provided keys) for S3
D) Use client-side encryption with application-managed keys
Answer
Answer: B
Explanation: Customer managed KMS keys with automatic annual rotation enabled provide auditable, customer-controlled encryption. CloudTrail logs every KMS API call, including each encrypt and decrypt operation, giving a complete audit trail of key usage. Customer managed keys give full control over the key policy, rotation, and usage. AWS-managed keys (A) cannot have custom key policies. SSE-C (C) requires managing keys outside AWS. Client-side encryption (D) increases application complexity and makes auditing harder.
Question 31
A company needs to implement a multi-region active-active architecture for their application. Users should be routed to the nearest healthy region. What combination provides this?
A) Route 53 latency routing + health checks + identical infrastructure in each region
B) CloudFront with multiple origins
C) Global Accelerator with endpoint groups in each region
D) Both A and C are valid approaches
Answer
Answer: D
Explanation: Both Route 53 latency routing with health checks (A) and Global Accelerator (C) can implement multi-region active-active routing. Route 53 uses DNS-based routing with TTL-dependent failover (slower). Global Accelerator uses AWS anycast IP addresses and routes at the network level (faster failover, typically sub-minute). The choice depends on requirements: applications needing fast failover prefer Global Accelerator; DNS-based routing is simpler and already used for most web applications.
Question 32
A company is designing an event-driven architecture for an e-commerce platform. Order events must trigger inventory updates, payment processing, and notifications. Each component must be independently scalable and failures should not block other components. What architecture should they use?
A) Synchronous API calls between services
B) Amazon EventBridge with event buses for order events, with separate consumers per function (inventory, payment, notifications)
C) A single SQS queue for all events
D) SNS with all consumers subscribed to one topic
Answer
Answer: B
Explanation: Amazon EventBridge provides an event bus where order events are published. Different event rules route to separate targets (Lambda, SQS queues, Step Functions) for inventory, payment, and notifications. Each consumer processes independently and failures in one don't affect others. EventBridge provides retry and dead-letter queue support. Synchronous API calls (A) create tight coupling. A single SQS queue (C) means only one consumer per message. SNS with one topic (D) fans out but lacks EventBridge's routing and schema filtering capabilities.
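A much-simplified version of EventBridge rule matching: an event matches a pattern when, for every field the pattern names, the event's value is one of the listed values. The event shape and names are hypothetical:

```python
# Simplified EventBridge-style rule matching. Real patterns also support
# nesting and operators like prefix/numeric matching; this toy covers
# only flat fields with value lists. Names are hypothetical.

def matches(pattern: dict, event: dict) -> bool:
    return all(event.get(field) in allowed for field, allowed in pattern.items())

payment_rule = {"source": ["ecommerce.orders"], "detail-type": ["OrderPlaced"]}
event = {
    "source": "ecommerce.orders",
    "detail-type": "OrderPlaced",
}

print(matches(payment_rule, event))  # True - routed to the payment target
print(matches({"detail-type": ["OrderCancelled"]}, event))  # False - ignored
```

Each rule delivers an independent copy of the event to its target, which is what keeps the inventory, payment, and notification consumers decoupled.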
Question 33
A company wants to implement a robust disaster recovery strategy. Their RTO is 30 minutes and RPO is 5 minutes. They use a complex multi-tier application. What DR strategy achieves this within cost constraints?
A) Backup and restore to a secondary region
B) Pilot Light with automated failover - keep minimal core components running, use CloudFormation to scale up
C) Warm Standby - keep a scaled-down version running, scale up on failover
D) Multi-Site Active/Active - full infrastructure in both regions
Answer
Answer: C
Explanation: Warm Standby keeps a scaled-down but fully functional version running in the DR region. For 5-minute RPO, database replication must be near-real-time (Aurora Global Database or DMS CDC). For 30-minute RTO, you only need to scale up existing resources (no provisioning time). Backup and Restore (A) typically has hours of RTO. Pilot Light (B) requires starting and provisioning servers (potentially > 30 minutes for complex apps). Multi-Site (D) meets the requirement but is more expensive.
Question 34
A company has a complex IAM permission structure. They need to understand what a specific IAM role can do and whether it complies with least privilege. What tool provides this analysis?
A) AWS IAM Access Analyzer
B) AWS CloudTrail last accessed information
C) AWS IAM Policy Simulator
D) All of the above serve different purposes
Answer
Answer: D
Explanation: All three tools serve different purposes for IAM analysis: (1) IAM Access Analyzer identifies resources shared externally and generates least-privilege policies based on CloudTrail access activity. (2) CloudTrail last accessed information shows when IAM entities last used services/actions, helping identify unused permissions. (3) IAM Policy Simulator tests what API actions a policy allows/denies. For a comprehensive least-privilege analysis, use all three together.
Question 35
A company is deploying a new SaaS product on AWS. They need to provide each customer with their own isolated environment but manage it centrally. What AWS feature enables this efficiently?
A) Deploy a separate CloudFormation stack per customer manually B) AWS Service Catalog with portfolio sharing to customer accounts, or AWS Control Tower Account Factory for Terraform (AFT) C) Terraform modules for each customer D) Manual account creation for each customer
Answer
Answer: B
Explanation: AWS Service Catalog allows publishing standardized products (CloudFormation stacks) that customers can self-service deploy into their accounts. For account-level isolation, Control Tower Account Factory (and its Terraform variant AFT) automates account vending with consistent configurations. Service Catalog enforces standardization while enabling self-service. Manual deployment (A, D) doesn't scale. Terraform alone (C) requires additional orchestration.
Question 36
A company runs workloads on EC2 Spot Instances. They need to handle Spot interruptions gracefully. The application processes messages from SQS. What should they implement?
A) Only use On-Demand instances B) Use EC2 Instance Interruption notices to drain the current task, then signal SQS to make the message visible again before termination C) Use multiple smaller Spot Instance pools with Spot Fleet and Auto Scaling D) Use checkpointing to S3 and SQS message visibility timeout management on interruption
Answer
Answer: D
Explanation: The robust approach: when a Spot interruption notice is received (2 minutes before termination), save the checkpoint to S3, then change the SQS message visibility timeout to 0 (making it immediately visible for another consumer). This ensures no work is lost. Using multiple Spot pools (C) reduces interruption frequency but doesn't handle them when they occur. Draining and releasing (B) is part of the solution but checkpointing ensures resumability.
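The interruption sequence in the explanation can be sketched as a small handler. This is a minimal illustration with injected clients and hypothetical names (`handle_spot_interruption`, the checkpoint key layout); a real worker would receive the notice from the instance metadata endpoint or an EventBridge rule.

```python
import json

def handle_spot_interruption(receipt_handle, state, sqs, s3, bucket, queue_url):
    """On a Spot interruption notice (~2 minutes' warning): checkpoint
    in-progress state to S3, then zero the SQS visibility timeout so
    another consumer can pick the message up immediately."""
    # 1. Persist a resumable checkpoint (here: JSON state to S3).
    s3.put_object(Bucket=bucket,
                  Key=f"checkpoints/{receipt_handle}.json",
                  Body=json.dumps(state))
    # 2. Make the in-flight message visible to other consumers right away.
    sqs.change_message_visibility(QueueUrl=queue_url,
                                  ReceiptHandle=receipt_handle,
                                  VisibilityTimeout=0)
```

The clients are passed in rather than created with boto3 so the logic can be unit-tested with fakes; the call shapes mirror the real `put_object` and `change_message_visibility` APIs.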
Question 37
A company needs to comply with PCI DSS requirements. Their payment processing application runs on AWS. The cardholder data environment (CDE) must be isolated from other workloads. What architecture implements this?
A) Use separate security groups for the CDE workloads B) Isolate the CDE in a separate AWS account with strict SCPs and dedicated Direct Connect or VPN for all CDE traffic C) Use network ACLs to isolate the CDE VPC D) Use S3 bucket policies to restrict CDE data access
Answer
Answer: B
Explanation: PCI DSS requires strict isolation of the CDE. Separate AWS accounts provide the strongest isolation - different blast radius, separate audit trail, dedicated IAM and network controls. SCPs ensure the CDE account cannot be misconfigured to reduce security. Dedicated connectivity (Direct Connect virtual interface) for CDE traffic avoids sharing network paths with non-CDE workloads. Security groups (A) and NACLs (C) provide some isolation but not the level required for PCI compliance.
Question 38
A company has an application that needs to process events in the order they were received. The application uses multiple consumers. Which service guarantees ordered processing per partition?
A) Amazon SQS Standard Queue B) Amazon SQS FIFO Queue C) Amazon Kinesis Data Streams with multiple shards D) Amazon SNS
Answer
Answer: C
Explanation: Amazon Kinesis Data Streams guarantees ordering within a shard. By using a partition key that maps related events to the same shard, you ensure ordered processing for that group while still scaling across multiple shards. SQS FIFO (B) guarantees ordering per message group within a single queue but supports 300 API calls per second without batching (3,000 messages per second with batching, and more in high-throughput mode). Kinesis supports much higher aggregate throughput with per-shard ordering guarantees. SQS Standard (A) doesn't guarantee order. SNS (D) doesn't guarantee order.
Question 39
A company uses AWS Organizations and wants to prevent any account from creating public S3 buckets, regardless of account administrator permissions. How do they implement this?
A) S3 Block Public Access setting in each account
B) SCP denying s3:PutBucketPublicAccessBlock with false value
C) SCP denying s3:PutBucketAcl and s3:PutBucketPolicy actions that grant public access, combined with enforcing S3 Block Public Access through the SCP
D) AWS Config rule in each account
Answer
Answer: C
Explanation: An SCP can deny the actions that would make buckets public — s3:PutBucketAcl with a public canned ACL and s3:PutBucketPolicy statements granting public access. In addition, an SCP can deny s3:PutAccountPublicAccessBlock and s3:PutBucketPublicAccessBlock outright, so no account admin can relax the Block Public Access settings once they are enabled. Config rules (D) detect but don't prevent. Per-account S3 settings (A) can be changed by account admins. The SCP approach (C) prevents the change at the organizational level.
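A sketch of such an SCP, written here as a Python dict for readability. Statement IDs are illustrative; the actions and the `s3:x-amz-acl` condition key are real, but an actual policy should be reviewed against current AWS documentation.

```python
import json

# Hedged sketch of an SCP: lock Block Public Access settings and deny
# requests that attach a public canned ACL. Sids are illustrative.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyDisablingBlockPublicAccess",
            "Effect": "Deny",
            "Action": ["s3:PutAccountPublicAccessBlock",
                       "s3:PutBucketPublicAccessBlock"],
            "Resource": "*",
        },
        {
            "Sid": "DenyPublicCannedAcls",
            "Effect": "Deny",
            "Action": ["s3:PutBucketAcl", "s3:PutObjectAcl"],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": ["public-read", "public-read-write"]
                }
            },
        },
    ],
}
policy_json = json.dumps(scp, indent=2)
```

Because SCPs are evaluated before IAM policies, even an account's administrators cannot bypass these denies.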
Question 40
A company needs to implement a zero-trust network architecture on AWS. All service-to-service communication must be authenticated and authorized, even within the same VPC. What combination implements this?
A) VPC security groups with tight rules B) AWS App Mesh (service mesh) with mTLS + AWS IAM Roles for service accounts (IRSA for EKS) or ECS task roles C) Network ACLs with strict rules D) PrivateLink for all service communication
Answer
Answer: B
Explanation: Zero-trust requires authentication and authorization for all service-to-service communication. AWS App Mesh provides a service mesh with mutual TLS (mTLS) for certificate-based authentication between services. IAM Roles for Service Accounts (IRSA for EKS) or ECS task roles provide identity for service-level authorization. Together, these ensure all communication is authenticated and authorized. Security groups (A) provide network-level controls but not application-layer identity. PrivateLink (D) provides private connectivity but not mutual authentication.
Question 41
A company runs a batch analytics job every night using EMR. The job processes data in S3 and writes results to S3. Currently it uses On-Demand instances. How can they significantly reduce EMR cluster costs?
A) Use smaller instance types B) Use Spot Instances for task nodes and On-Demand for core/master nodes C) Move to EC2 Reserved Instances D) Use EMR Serverless
Answer
Answer: B
Explanation: EMR clusters have core nodes (store data in HDFS, must be stable) and task nodes (add compute capacity, stateless). Using Spot Instances for task nodes (where interruption doesn't lose data) provides up to 90% cost savings on the majority of cluster cost. Core and master nodes should use On-Demand for reliability. EMR Serverless (D) is also a good option for batch workloads as it eliminates cluster management, but the question specifically asks about cost reduction on EMR clusters.
Question 42
A company needs to migrate from on-premises Active Directory to the cloud. Applications use LDAP and Kerberos authentication. Some applications must continue to use the same AD domain name. What AWS service provides this capability?
A) Amazon Cognito B) AWS IAM Identity Center C) AWS Managed Microsoft AD D) AWS Directory Service Simple AD
Answer
Answer: C
Explanation: AWS Managed Microsoft AD is a fully managed Microsoft Active Directory service that supports LDAP, Kerberos, and other AD protocols. It can be used as the primary directory or in a trust relationship with existing on-premises AD. Applications can join the same AD domain. Cognito (A) is for consumer identity management. IAM Identity Center (B) is for AWS access management using SAML/SCIM. Simple AD (D) is a Samba-based implementation that doesn't support all AD features.
Question 43
A company is designing a serverless API. They need the API to scale to millions of requests per second and minimize cold start latency for critical paths. What should they implement?
A) Large memory Lambda with standard provisioning B) Lambda with Provisioned Concurrency for critical paths + Lambda Power Tuning for optimal configuration C) Lambda with reserved concurrency D) EC2-based API behind API Gateway
Answer
Answer: B
Explanation: Provisioned Concurrency keeps Lambda instances initialized and ready, eliminating cold starts for critical paths. Lambda Power Tuning helps find the optimal memory/CPU configuration for best performance/cost. Reserved Concurrency (C) limits the maximum concurrency but doesn't eliminate cold starts. EC2 (D) requires managing servers. Large memory alone (A) reduces cold start duration but doesn't eliminate it like provisioned concurrency does.
Question 44
A company needs to implement a blue/green deployment for their Aurora database schema changes. After testing the new schema, they need to cut over with minimal downtime. What is the recommended approach?
A) Create a read replica, promote it, and point applications to the new instance B) Use Aurora Blue/Green Deployments with the switchover feature C) Dump and restore the database with new schema D) Use DMS to migrate between two Aurora instances
Answer
Answer: B
Explanation: Aurora Blue/Green Deployments creates a staging environment synchronized with production. You apply schema changes to the staging (green) environment, test them, and then perform a managed switchover. The switchover is fast (typically seconds) and Aurora handles connection draining and switching. It provides a tested, safe way to apply database changes with minimal downtime and easy rollback. Read replica promotion (A) has longer downtime. Dump/restore (C) has significant downtime. DMS (D) adds operational complexity.
Question 45
A company uses a microservices architecture with 50+ services. They need to trace requests across all services to identify performance bottlenecks. The tracing must work across Lambda, ECS, and EC2. What should they use?
A) CloudWatch Logs correlation B) AWS X-Ray with the X-Ray SDK instrumented in all services C) VPC Flow Logs D) CloudWatch Application Insights
Answer
Answer: B
Explanation: AWS X-Ray provides end-to-end distributed tracing across Lambda, ECS, EC2, and other AWS services. The X-Ray SDK instruments applications to create trace segments that show the complete request path, timing, and any errors. The X-Ray Service Map visualizes service dependencies and identifies bottlenecks. CloudWatch Logs (A) require manual correlation. VPC Flow Logs (C) capture network-level data. CloudWatch Application Insights (D) uses patterns for monitoring but doesn't provide request-level tracing.
Question 46
A company has a legacy application that uses a shared database. Multiple services write to the same tables causing contention. They want to modernize without a complete rewrite. What pattern addresses this?
A) Add more database read replicas B) Implement the Database-per-Service pattern with data synchronization using DMS CDC or Kafka C) Increase the database instance size D) Add caching with ElastiCache
Answer
Answer: B
Explanation: The Database-per-Service pattern separates the shared database into service-specific databases. DMS with CDC or Kafka can synchronize data between databases during transition. This eliminates contention and allows services to choose optimal database types. While not a "zero rewrite" solution, it's less risky than a complete rewrite. More read replicas (A) helps reads but not write contention. Increasing size (C) is a temporary fix. Caching (D) reduces read load but not write contention.
Question 47
A company wants to implement a landing zone with security best practices for new AWS accounts. New accounts should have CloudTrail enabled, GuardDuty enabled, Security Hub enabled, and specific IAM configurations applied automatically. What service automates this?
A) AWS CloudFormation StackSets B) AWS Control Tower C) AWS Config Aggregator D) AWS Organizations
Answer
Answer: B
Explanation: AWS Control Tower provides a landing zone that automatically provisions new accounts with security best practices including CloudTrail, Config, GuardDuty, and Security Hub. It uses CloudFormation StackSets under the hood but provides a higher-level abstraction with account factory, guardrails (detective and preventive), and a management dashboard. StackSets alone (A) require manual orchestration. Config Aggregator (C) provides visibility. Organizations (D) provides the structure but not automatic security tooling.
Question 48
A company needs to ensure their S3 data lake is queryable even when files have inconsistent schemas (some columns missing, different data types). They use Parquet files and Athena. What feature should they enable?
A) S3 Select B) Athena schema-on-read with Glue schema registry C) AWS Glue ETL to normalize schemas before storing D) Apache Iceberg table format with schema evolution support
Answer
Answer: D
Explanation: Apache Iceberg (supported in Athena) provides schema evolution - you can add columns, rename columns, and change compatible types without rewriting data. It maintains schema versions and handles different file schemas transparently. Glue Schema Registry (B) validates schemas for streaming data. Glue ETL (C) can normalize schemas but requires processing all data before querying. S3 Select (A) is for filtering within individual objects.
Question 49
A company needs to implement secure file transfer for partners. Partners should upload files to an S3 bucket using SFTP. The SFTP server must have a fixed IP address and support multiple users with different home directories. What should they use?
A) EC2 with OpenSSH and Elastic IP B) AWS Transfer Family for SFTP with S3 backend C) S3 presigned URLs for partners D) AWS DataSync with SFTP source
Answer
Answer: B
Explanation: AWS Transfer Family provides managed SFTP, FTPS, and FTP servers backed by S3 or EFS. It supports fixed IP addresses (static Elastic IPs or custom hostname), user management (IAM or custom identity providers), and logical directory mapping (home directories per user). EC2 with SSH (A) requires managing server infrastructure and patching. S3 presigned URLs (C) don't provide SFTP protocol support. DataSync SFTP (D) is for copying data from an existing SFTP server to AWS.
Question 50
A company runs their workload across 3 regions for high availability. They need a global configuration management solution that allows them to manage configuration centrally and have it available in all regions with millisecond access times. What should they use?
A) S3 with cross-region replication B) DynamoDB Global Tables for configuration storage C) Parameter Store in each region D) CloudFormation StackSets
Answer
Answer: B
Explanation: DynamoDB Global Tables automatically replicates data across multiple AWS regions with low-latency reads and writes in each region. For configuration that changes infrequently but needs global availability with low latency, Global Tables are ideal. S3 with CRR (A) has higher latency (file-based). Parameter Store (C) in each region requires manual synchronization. CloudFormation StackSets (D) deploys resources, not configuration storage.
Question 51
A company has a security requirement that all API keys and secrets must be stored with automatic rotation enabled. Some secrets are used by on-premises servers. Which service supports both automatic rotation and external access?
A) AWS KMS B) AWS Secrets Manager with Lambda rotation function accessible via VPC endpoint or internet C) SSM Parameter Store with SecureString D) IAM roles for on-premises servers
Answer
Answer: B
Explanation: AWS Secrets Manager supports automatic rotation via Lambda rotation functions and can be accessed from on-premises servers via HTTPS (through the internet or via VPC endpoint if using Direct Connect/VPN). KMS (A) manages encryption keys, not application secrets. SSM Parameter Store (C) supports SecureString but lacks built-in automatic rotation for arbitrary secrets. IAM roles (D) are for AWS resources, not on-premises servers (though IAM Roles Anywhere extends this capability).
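A Secrets Manager rotation Lambda is invoked once per rotation step (`createSecret`, `setSecret`, `testSecret`, `finishSecret`); the step names are the real ones the service sends. The dispatch skeleton below is a hedged sketch — the `handlers` map is injected for testability, where a real function would call the database or API being rotated.

```python
def rotation_handler(event, handlers):
    """Skeleton of a Secrets Manager rotation Lambda. The service
    invokes the function four times per rotation, once per step, and
    the function dispatches on event['Step']. `handlers` maps a step
    name to a callable (injected here so the sketch is testable)."""
    step = event["Step"]
    if step not in ("createSecret", "setSecret", "testSecret", "finishSecret"):
        raise ValueError(f"unknown rotation step: {step}")
    return handlers[step](event["SecretId"], event["ClientRequestToken"])
```

Each step is idempotent in a real implementation, because Secrets Manager may retry a step that did not report success.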
Question 52
A company wants to implement a global load balancing solution that provides consistent performance for users regardless of their location, with sub-second failover. What should they use?
A) Route 53 with latency-based routing B) AWS Global Accelerator C) CloudFront with multiple origins D) Elastic Load Balancer in each region
Answer
Answer: B
Explanation: AWS Global Accelerator uses AWS's global network infrastructure and anycast IP addresses to route users to the optimal endpoint. It provides sub-second failover (doesn't depend on DNS TTL) and consistently routes users to the lowest-latency endpoint using the AWS backbone network. Route 53 (A) relies on DNS TTL which means failover can take minutes. CloudFront (C) is optimized for HTTP/HTTPS CDN workloads. ELB (D) is regional only.
Question 53
A company needs to build a serverless data pipeline that processes CSV files uploaded to S3, transforms them, and loads them into Redshift. The pipeline must handle failures and retry processing. What architecture should they use?
A) Lambda directly writing to Redshift B) S3 Event → EventBridge → Step Functions → Glue ETL job → Redshift COPY command C) S3 Event → SQS → Lambda → Redshift D) Kinesis Firehose directly to Redshift
Answer
Answer: B
Explanation: Step Functions provides orchestration with retry logic and error handling. EventBridge routes S3 events to Step Functions. The workflow can: validate the file, run a Glue ETL job for transformation, use Redshift COPY command for efficient bulk loading, handle errors with retry and dead-letter logic. Lambda directly writing to Redshift (A) doesn't handle large datasets efficiently. SQS → Lambda → Redshift (C) lacks orchestration. Kinesis Firehose (D) requires streaming data, not S3 events.
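The retry and error handling can be seen in the Step Functions definition itself (Amazon States Language, shown as a Python dict). State names, the job name, and the COPY SQL are illustrative; the `glue:startJobRun.sync` and Redshift Data API integrations are real service integration patterns.

```python
# Hedged sketch of the pipeline's state machine: run the Glue job with
# retries, then bulk-load Redshift via COPY. Names/SQL are illustrative.
definition = {
    "StartAt": "RunGlueETL",
    "States": {
        "RunGlueETL": {
            "Type": "Task",
            "Resource": "arn:aws:states:::glue:startJobRun.sync",
            "Parameters": {"JobName": "csv-transform"},
            "Retry": [{"ErrorEquals": ["States.ALL"],
                       "IntervalSeconds": 30,
                       "MaxAttempts": 3,
                       "BackoffRate": 2.0}],
            "Catch": [{"ErrorEquals": ["States.ALL"],
                       "Next": "NotifyFailure"}],
            "Next": "LoadRedshift",
        },
        "LoadRedshift": {
            "Type": "Task",
            "Resource": "arn:aws:states:::aws-sdk:redshiftdata:executeStatement",
            "Parameters": {"Sql": "COPY sales FROM 's3://bucket/out/'"},
            "End": True,
        },
        "NotifyFailure": {"Type": "Fail", "Error": "ETLFailed"},
    },
}
```

The `Retry` block gives exponential backoff (30s, 60s, 120s) before the `Catch` routes to the failure state — exactly the retry/error handling the question asks for.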
Question 54
A company's application uses DynamoDB. They need to audit all reads and writes to specific high-sensitivity items. CloudTrail doesn't capture item-level read/write details. What should they use?
A) Enable DynamoDB point-in-time recovery B) Use DynamoDB Streams to capture all changes and write to CloudWatch Logs for auditing C) Use AWS Backup for DynamoDB D) Enable DynamoDB Encryption at rest
Answer
Answer: B
Explanation: DynamoDB Streams captures a time-ordered sequence of all item-level changes (inserts, updates, deletes) including the old and new item values. By processing the stream with Lambda and writing to CloudWatch Logs (or S3), you create an audit trail of all changes. For read auditing, you'd need application-level logging. PITR (A) is for recovery, not auditing. Backup (C) creates snapshots. Encryption (D) protects data at rest.
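A stream-consuming Lambda for this audit trail might look like the sketch below. The record fields (`eventName`, `dynamodb.Keys`, `OldImage`, `NewImage`) match the real DynamoDB Streams event shape with the NEW_AND_OLD_IMAGES view type; the `log` sink is injected so the destination (CloudWatch Logs, S3) is pluggable.

```python
import json

def audit_handler(event, log=print):
    """Sketch of a Lambda fed by a DynamoDB Stream: emit one JSON audit
    line per item-level change, including before/after images."""
    lines = []
    for record in event.get("Records", []):
        entry = {
            "event": record["eventName"],            # INSERT / MODIFY / REMOVE
            "keys": record["dynamodb"].get("Keys"),
            "old": record["dynamodb"].get("OldImage"),
            "new": record["dynamodb"].get("NewImage"),
        }
        line = json.dumps(entry)
        log(line)
        lines.append(line)
    return lines
```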
Question 55
A company is planning their AWS architecture and needs to choose the right Availability Zone strategy. They have a 3-tier application. What is the recommended AZ deployment strategy?
A) Deploy all tiers in a single AZ to minimize latency B) Deploy all tiers across 3 AZs minimum, with at least one instance per AZ per tier C) Deploy the web tier in 3 AZs and DB tier in a single AZ D) Use a single AZ with multiple instances
Answer
Answer: B
Explanation: Deploying each tier across at least 3 AZs ensures the application survives any single AZ failure. Every new AWS Region launches with at least 3 AZs, and ALB/NLB automatically route around unhealthy AZs. Each tier (web, app, database) must have replicas in all AZs to avoid a single point of failure. Single AZ deployment (A, D) is not resilient. Partial deployment (C) makes the DB tier a single point of failure.
Question 56
A company needs to implement a solution where developers can request temporary, time-limited AWS credentials for accessing production resources for debugging. Access should be audited and automatically expire. What should they implement?
A) Share IAM user credentials temporarily B) Use AWS STS AssumeRole with a time-limited session + condition for MFA requirement; log via CloudTrail C) Create temporary IAM users D) Use root account credentials
Answer
Answer: B
Explanation: STS AssumeRole allows generating temporary credentials with a defined expiration (minimum 15 minutes, maximum 12 hours). Adding an MFA condition (aws:MultiFactorAuthPresent: true) ensures the developer authenticated with MFA before assuming the role. CloudTrail logs the AssumeRole call and all subsequent API calls. This is secure, auditable, and automatically expiring. Sharing credentials (A, D) is a security risk. Temporary IAM users (C) still create long-lived credentials.
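The MFA-gated role can be sketched as a trust policy (shown as a Python dict; the account ID is an example, the `aws:MultiFactorAuthPresent` condition key is real).

```python
# Hedged sketch of the debugging role's trust policy. The principal
# account ID is an example placeholder.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
        "Action": "sts:AssumeRole",
        "Condition": {
            "Bool": {"aws:MultiFactorAuthPresent": "true"}
        },
    }],
}
# A developer then assumes the role with a capped session, e.g.:
#   sts.assume_role(RoleArn=role_arn, RoleSessionName="debug-jane",
#                   DurationSeconds=3600,
#                   SerialNumber=mfa_device_arn, TokenCode="123456")
```

The role's MaxSessionDuration caps how long the temporary credentials live, and every AssumeRole call plus all subsequent API activity appears in CloudTrail under the session name.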
Question 57
A company runs a containerized microservices application on EKS. They want to implement autoscaling based on custom metrics from their application. What should they implement?
A) Kubernetes HPA with CPU/memory metrics only B) KEDA (Kubernetes Event-Driven Autoscaling) or HPA with custom metrics from CloudWatch via KEDA/Adapter C) EKS managed node group auto-scaling only D) Manual pod scaling
Answer
Answer: B
Explanation: KEDA (Kubernetes Event-Driven Autoscaling) supports scaling Kubernetes workloads based on many event sources including SQS queue depth, CloudWatch metrics, Prometheus metrics, etc. Alternatively, the Kubernetes Custom Metrics Adapter allows exposing custom CloudWatch metrics to HPA. Both approaches enable custom metric-based autoscaling beyond CPU/memory. HPA with only CPU/memory (A) doesn't meet the requirement. Node group scaling (C) scales the cluster, not individual services.
Question 58
A company needs to implement cost allocation for their multi-account, multi-region AWS environment. Different business units own different accounts. How should they implement cost allocation and reporting?
A) Manual review of individual account bills B) AWS Cost Categories + Cost Allocation Tags + AWS Cost Explorer with multi-account consolidation through AWS Organizations C) AWS Trusted Advisor for cost optimization only D) Separate billing for each account
Answer
Answer: B
Explanation: AWS Organizations provides consolidated billing across all accounts. Cost Allocation Tags (applied to resources by business unit, team, environment) enable granular cost attribution. AWS Cost Categories allow creating rules to categorize costs (e.g., map specific accounts to business units). Cost Explorer provides visualization and reporting across all accounts. Manual review (A) doesn't scale. Trusted Advisor (C) provides recommendations but not detailed cost allocation. Separate billing (D) loses consolidated billing benefits.
Question 59
A company is building a high-performance trading platform on AWS. Latency must be sub-millisecond between components. What deployment approach minimizes latency?
A) Deploy across multiple AZs for high availability B) Use EC2 cluster placement groups with Enhanced Networking and Elastic Fabric Adapter (EFA) C) Use Lambda for all compute D) Deploy in multiple regions
Answer
Answer: B
Explanation: EC2 Cluster Placement Groups place instances physically close together in the same AZ, minimizing network latency (on the order of tens of microseconds between instances). Enhanced Networking (SR-IOV) provides lower latency and higher throughput. Elastic Fabric Adapter (EFA) provides HPC-grade networking with OS-bypass for ultra-low latency. Multiple AZs (A) increase latency as cross-AZ traffic traverses more network hops. Lambda (C) has cold start latency. Multiple regions (D) dramatically increase latency.
Question 60
A company uses CloudFormation to manage their infrastructure. They have a complex stack that deploys 50+ resources. A stack update is failing due to a circular dependency error. How should they resolve this?
A) Delete the entire stack and recreate it B) Identify the circular dependency using the stack events, break the cycle using CloudFormation Export/Import or SSM Parameters for cross-stack references C) Run the update multiple times until it succeeds D) Update resources manually then import them into CloudFormation
Answer
Answer: B
Explanation: Circular dependencies occur when Resource A depends on Resource B and Resource B depends on Resource A. The solution is to break the cycle by extracting one resource into a separate stack and referencing it via CloudFormation Exports or SSM Parameter Store. Stack events identify which resources have the circular dependency. Stack deletion (A) risks data loss. Re-running the update (C) won't fix a circular dependency. Manual updates (D) create drift.
Question 61
A company needs to implement a solution for securely connecting their AWS workloads to SaaS applications (like Salesforce, ServiceNow) without exposing their VPC to the internet. What should they implement?
A) Direct internet access from the VPC B) AWS PrivateLink with VPC endpoints for SaaS providers that support it, or NAT Gateway for others C) VPN connection to each SaaS provider D) AWS Direct Connect to each SaaS provider
Answer
Answer: B
Explanation: AWS PrivateLink allows creating interface VPC endpoints to access SaaS services that publish their endpoints as PrivateLink services. Traffic stays on the AWS network without requiring internet access. For SaaS providers that don't support PrivateLink, a NAT Gateway provides internet connectivity without exposing the VPC directly. VPN/Direct Connect (C, D) to each SaaS provider is operationally complex and many SaaS providers don't offer dedicated connectivity.
Question 62
A company runs a social media application. They need to implement user timeline generation - each user's timeline aggregates posts from all accounts they follow. The application has 50 million users and each user follows 500 accounts on average. What approach handles this at scale?
A) Query DynamoDB at read time for all followed accounts B) Fanout-on-write: when a post is created, write to each follower's timeline (precomputed timeline in DynamoDB or ElastiCache) C) Use Athena to query S3 for timelines D) Use Amazon Neptune for graph traversal
Answer
Answer: B
Explanation: Fan-out on write precomputes timelines when posts are created. When User A posts, Lambda/Kinesis writes the post to each of A's followers' timeline caches. Read performance is O(1) - just read the cached timeline. This trades write complexity for read simplicity, which is optimal for read-heavy social media. Fan-in on read (A) queries all followed accounts at read time - too slow for 500 follows. Athena (C) has minutes of latency. Neptune graph traversal (D) works but has higher latency than cached timelines.
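The fan-out-on-write path can be sketched with an in-memory stand-in for the timeline cache (names like `TimelineStore` and `fan_out_post` are illustrative; production would write to DynamoDB or ElastiCache, typically via a Kinesis/Lambda consumer).

```python
from collections import defaultdict

class TimelineStore:
    """In-memory stand-in for a DynamoDB/ElastiCache timeline table."""
    def __init__(self):
        self.timelines = defaultdict(list)
    def prepend(self, user_id, post):
        self.timelines[user_id].insert(0, post)  # newest first

def fan_out_post(author, post, followers_of, store):
    """Fan-out-on-write: push the new post onto every follower's
    precomputed timeline, so a timeline read is a single cached fetch."""
    for follower in followers_of(author):
        store.prepend(follower, post)
```

The write cost is O(followers) per post, but reads become O(1) — the right trade-off for a read-heavy feed (celebrity accounts with huge follower counts are usually special-cased with fan-in on read).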
Question 63
A company needs to implement a solution to manage and rotate credentials for a legacy application that cannot be modified. The application reads credentials from a configuration file on the filesystem. What is the BEST solution?
A) Manually update the configuration file when credentials expire B) Use AWS Secrets Manager with a sidecar container or process that reads the secret and updates the configuration file automatically C) Store credentials in S3 and have the application read from S3 D) Use EC2 instance profile credentials
Answer
Answer: B
Explanation: A sidecar pattern (for containers) or a background process (for EC2) can retrieve secrets from Secrets Manager and write them to the expected configuration file location. When Secrets Manager rotates the secret, the sidecar detects the change and updates the file. The legacy application continues to read from its expected file location without modification. Manual updates (A) are error-prone. S3 storage (C) doesn't provide the rotation management. EC2 instance profiles (D) provide AWS API credentials, not application-specific credentials.
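One iteration of the sidecar loop might look like this sketch. The `fetch_secret` callable stands in for a Secrets Manager GetSecretValue call (injected here so the sketch runs anywhere); the file path and JSON format are illustrative.

```python
import json
import pathlib

def sync_secret_to_file(fetch_secret, path, last_value=None):
    """One pass of a sidecar/agent loop: fetch the current secret and
    rewrite the legacy app's config file only when the value changed.
    `fetch_secret` abstracts the Secrets Manager call."""
    value = fetch_secret()
    if value != last_value:
        pathlib.Path(path).write_text(json.dumps(value))
    return value
```

A real sidecar would run this on a timer (or react to a rotation event via EventBridge) and might also signal the legacy process to re-read its configuration.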
Question 64
A company uses AWS CodePipeline for deployments. They need to implement environment promotion where an artifact is deployed to staging, tested, and then promoted to production only after explicit sign-off. What mechanism should they use?
A) Automatic promotion after successful tests B) Manual Approval action in CodePipeline with SNS notification to approvers C) Separate pipelines for staging and production D) CloudWatch alarms as gates
Answer
Answer: B
Explanation: CodePipeline's Manual Approval action pauses the pipeline and sends an SNS notification to the configured approvers. The designated people review the staging deployment and explicitly approve or reject promotion. The approval includes a URL to review deployment details. This implements the "explicit sign-off" requirement. Automatic promotion (A) doesn't provide sign-off. Separate pipelines (C) require manual triggering of the production pipeline. CloudWatch alarms (D) are metric-based, not human sign-off.
Question 65
A company has a security requirement to prevent data exfiltration via S3. EC2 instances should only be able to access S3 buckets owned by the company's AWS accounts. What mechanism enforces this?
A) S3 bucket policies in each bucket
B) IAM policies on EC2 instance profiles
C) S3 VPC endpoint with an endpoint policy restricting access to specific bucket ARNs or requiring aws:ResourceAccount condition
D) Network ACLs blocking S3 traffic
Answer
Answer: C
Explanation: An S3 Gateway VPC Endpoint with an endpoint policy can restrict which S3 buckets can be accessed through the endpoint. Using the aws:ResourceAccount condition in the endpoint policy restricts access to buckets owned by specific account IDs. This prevents EC2 instances from using the VPC endpoint to access buckets in other accounts (exfiltration to external buckets). IAM policies (B) can restrict access to specific buckets but a misconfigured policy could allow others. Bucket policies (A) are per-bucket. NACLs (D) can't distinguish between S3 buckets.
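An endpoint policy using this condition might look like the sketch below (account IDs are example placeholders; `aws:ResourceAccount` is a real global condition key that matches the account owning the target resource).

```python
# Hedged sketch of an S3 Gateway VPC endpoint policy: allow S3 access
# through the endpoint only to buckets owned by the company's accounts.
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCompanyBucketsOnly",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": "*",
        "Condition": {
            "StringEquals": {
                "aws:ResourceAccount": ["111122223333", "444455556666"]
            }
        },
    }],
}
```

Because the policy is evaluated on the endpoint itself, even an instance with broad IAM permissions cannot push data to a bucket in an attacker-controlled account through this network path.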
Question 66
A company needs to migrate petabytes of data from an on-premises NAS to S3. The data migration must complete in 30 days. Their WAN connection is 1 Gbps. What solution meets this requirement?
A) AWS DataSync over the 1 Gbps WAN connection B) AWS Snowball Edge Compute Optimized devices in parallel C) AWS Direct Connect temporary upgrade to 10 Gbps D) Upload directly to S3 over the internet
Answer
Answer: B
Explanation: At 1 Gbps fully utilized, you can move only about 324 TB in 30 days (1 Gbps ≈ 10.8 TB/day), so a network transfer cannot finish petabytes in time. AWS Snowball Edge devices hold tens of terabytes each (up to 80 TB usable on the Storage Optimized variant); for petabyte-scale migrations, multiple devices are loaded concurrently and shipped in parallel, bypassing the WAN bottleneck entirely. DataSync over the 1 Gbps WAN (A) is far too slow for petabytes. A Direct Connect upgrade (C) can take weeks to months to provision. Direct internet upload (D) is slower still.
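The sizing argument is simple arithmetic; a back-of-the-envelope helper (names are illustrative):

```python
def days_to_transfer(data_tb, link_gbps, utilization=0.8):
    """Rough WAN transfer time: decimal terabytes over a link of the
    given speed at the given sustained utilization."""
    bits = data_tb * 1e12 * 8                       # TB -> bits
    seconds = bits / (link_gbps * 1e9 * utilization)
    return seconds / 86400
```

Even at 100% utilization, 1 PB (≈1,000 TB) over 1 Gbps takes roughly 93 days — three times the 30-day window before any protocol overhead — which is why physical transfer with Snowball devices wins at this scale.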
Question 67
A company runs Lambda functions that process sensitive financial data. They need to ensure Lambda functions only process data when running in the expected VPC and with expected security group configurations. What security mechanism should they use?
A) VPC security groups on Lambda B) Lambda resource policies C) Lambda function URL with IAM authorization D) VPC configuration on Lambda + Lambda layer for security validation
Answer
Answer: A
Explanation: Lambda functions can be configured to run inside a VPC with specific subnets and security groups. The function's traffic then flows through elastic network interfaces in that VPC, so the security groups and route tables control exactly which network resources the function can reach (the security groups govern the function's outbound traffic; Lambda itself accepts no inbound network connections). Lambda resource policies (B) control who can invoke the function. A function URL with IAM (C) is for HTTP invocations. A validation layer (D) is application-level, not infrastructure-level.
Question 68
A company is implementing AWS Well-Architected reviews. They need to identify and remediate common architectural issues across 100+ workloads. What approach is most scalable?
A) Manual review of each workload individually B) Use AWS Well-Architected Tool with custom lenses, API to automate reviews, and AWS Trusted Advisor for automated checks C) Only review new workloads D) Use AWS Cost Explorer for architectural recommendations
Answer
Answer: B
Explanation: The AWS Well-Architected Tool provides a systematic framework for reviewing workloads. For scale: use the API to programmatically create and submit reviews, custom lenses for company-specific standards, and integrate with AWS Trusted Advisor for automated real-time checks across all accounts. Manual reviews (A) don't scale to 100+ workloads. Only reviewing new workloads (C) misses existing issues. Cost Explorer (D) is for cost analysis.
Question 69
A company uses AWS Direct Connect for their primary connectivity and wants redundant connectivity with both hardware and geographic redundancy. What should they implement?
A) Two Direct Connect connections from the same location B) Two Direct Connect connections from two different Direct Connect locations connected to two different AWS Direct Connect routers C) One Direct Connect + one VPN D) Two Direct Connect connections using LAG
Answer
Answer: B
Explanation: For maximum resilience, AWS recommends two Direct Connect connections from different physical DX locations, terminating on different AWS Direct Connect routers (to avoid a single-router failure). This provides both geographic redundancy (different DX locations) and hardware redundancy (different AWS routers). Same location (A) leaves a geographic single point of failure. DX + VPN (C) provides a backup path but not equivalent bandwidth. LAG (D) aggregates connections at the same location, so it adds no geographic redundancy.
Question 70
A company needs to implement a solution that allows their on-premises applications to use Amazon SQS and SNS without traffic going over the internet. What should they implement?
A) Use NAT Gateway for internet access to SQS/SNS B) Create VPC Interface Endpoints for SQS and SNS and use them from Direct Connect C) Deploy SQS/SNS equivalent on-premises D) Use S3 as a message broker instead
Answer
Answer: B
Explanation: VPC Interface Endpoints for SQS and SNS create private IP addresses in your VPC that resolve to SQS/SNS. When connected via Direct Connect (with private virtual interface routing to the VPC), on-premises applications can access SQS and SNS through the Direct Connect connection without internet traversal. The DNS for the endpoint resolves to private IPs accessible via Direct Connect. NAT Gateway (A) routes through internet. On-premises equivalents (C) are not AWS services. S3 as a broker (D) is a completely different architecture.
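Interface endpoints also accept an endpoint policy to narrow what can be done through them. A minimal sketch for the SQS endpoint, assuming a hypothetical account ID and queue name (neither appears in the question):

```python
import json

# Hedged sketch of a VPC endpoint policy for the SQS interface endpoint:
# only the send/receive actions this workload needs, scoped to one queue.
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": ["sqs:SendMessage", "sqs:ReceiveMessage"],
        "Resource": "arn:aws:sqs:us-east-1:123456789012:orders-queue",
    }],
}
print(json.dumps(endpoint_policy, indent=2))
```

A similarly scoped policy on the SNS endpoint (e.g. `sns:Publish` on specific topic ARNs) keeps the private path least-privilege as well.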
Question 71
A company uses Kubernetes on EKS. They want to use AWS services like DynamoDB, S3, and SQS from their pods without managing static credentials. What should they use?
A) Store credentials in Kubernetes secrets B) IAM Roles for Service Accounts (IRSA) - associate IAM roles with Kubernetes service accounts C) Use EC2 instance profiles on the node D) Use environment variables with IAM credentials
Answer
Answer: B
Explanation: IAM Roles for Service Accounts (IRSA) associates IAM roles with Kubernetes service accounts using OIDC federation. Pods using the service account get temporary AWS credentials via the projected service account token, without needing static credentials or relying on the node-level EC2 instance profile. This implements least-privilege at the pod level. Static credentials in Kubernetes secrets (A) or environment variables (D) are security risks. EC2 instance profiles (C) give all pods on the node the same permissions, violating least privilege.
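The OIDC federation described above hinges on the IAM role's trust policy. A sketch of its standard shape, with a placeholder account ID, OIDC provider ID, namespace (`prod`), and service account name (`app-sa`):

```python
import json

# Sketch of an IRSA trust policy: the role trusts the EKS cluster's OIDC
# provider and is scoped to exactly one Kubernetes service account via the
# `sub` claim. All identifiers below are placeholders.
oidc = "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Federated": f"arn:aws:iam::123456789012:oidc-provider/{oidc}"},
        "Action": "sts:AssumeRoleWithWebIdentity",
        "Condition": {
            "StringEquals": {f"{oidc}:sub": "system:serviceaccount:prod:app-sa"}
        },
    }],
}
print(json.dumps(trust_policy, indent=2))
```

Pods using the annotated service account then receive temporary credentials for this role only, while other pods on the same node get nothing.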
Question 72
A company runs multiple EC2 instances that need to communicate securely. Instead of managing TLS certificates manually, they want automated certificate management. What should they implement?
A) Self-signed certificates on each instance B) AWS Private Certificate Authority (ACM Private CA) with automatic certificate renewal and distribution C) Let's Encrypt certificates (public CA) D) AWS Certificate Manager (ACM) for all internal certificates
Answer
Answer: B
Explanation: AWS Private CA creates a private CA hierarchy that issues certificates for internal resources. ACM integrates with Private CA to automatically renew and deploy those certificates, and applications can call the ACM API to request certificates for internal hostnames. Public ACM certificates (D) cannot be exported to EC2 instances; they are deployable only on integrated services such as ALB and CloudFront. Self-signed certificates (A) require manual lifecycle management. Let's Encrypt (C) requires publicly resolvable domain names and is generally unsuitable for internal services.
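The key difference from a public certificate request is one extra field. A sketch of the parameters for ACM's RequestCertificate API when backed by a private CA; the internal domain name and CA ARN below are placeholders:

```python
import json

# Hypothetical RequestCertificate parameters: supplying CertificateAuthorityArn
# routes the request to the private CA instead of the public Amazon CA.
request_params = {
    "DomainName": "payments.internal.example.com",
    "CertificateAuthorityArn": (
        "arn:aws:acm-pca:us-east-1:123456789012:"
        "certificate-authority/11111111-2222-3333-4444-555555555555"
    ),
}
print(json.dumps(request_params, indent=2))
```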
Question 73
A company needs to implement a solution that allows an S3 bucket to be accessed by specific on-premises IP ranges only. The bucket should not be accessible from the internet. What should they implement?
A) S3 bucket ACL with IP restrictions
B) S3 bucket policy with aws:SourceIp condition restricting to on-premises IP ranges
C) S3 VPC endpoint with endpoint policy + Direct Connect for on-premises access
D) IAM policies with IP conditions for all users
Answer
Answer: C
Explanation: The most robust solution: (1) create an S3 Gateway VPC endpoint; (2) attach a bucket policy that denies access unless the request arrives through the VPC endpoint (using the aws:SourceVpce condition); (3) route on-premises access over Direct Connect into the VPC and then through the endpoint. All access, from AWS and from on-premises, then flows through controlled network paths. IP-based conditions (B) still permit requests over the internet from the listed ranges, so the bucket remains internet-reachable, and the ranges are tedious to maintain as they change. ACLs (A) are legacy and limited. Per-user IAM IP conditions (D) must be maintained on every principal and share the same internet-exposure problem.
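A sketch of the bucket policy described in step (2), using a placeholder bucket name and endpoint ID. The Deny-with-StringNotEquals pattern blocks every request that does not arrive via the named endpoint, including requests from the internet:

```python
import json

# Hedged sketch: deny all S3 access to the bucket unless the request comes
# through the Gateway VPC endpoint. Bucket name and vpce ID are placeholders.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnlessThroughVpce",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::finance-data",
            "arn:aws:s3:::finance-data/*",
        ],
        "Condition": {"StringNotEquals": {"aws:SourceVpce": "vpce-0abc1234"}},
    }],
}
print(json.dumps(bucket_policy, indent=2))
```

Note the Resource list covers both the bucket ARN (for list operations) and the object ARN pattern (for object operations); omitting either leaves a gap.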
Question 74
A company is planning to build a machine learning platform on AWS. Data scientists need access to GPU instances for training but these instances should only run when jobs are executing, not all the time. What architecture minimizes cost while providing on-demand GPU access?
A) Keep GPU instances running 24/7 B) Use SageMaker Training Jobs with GPU instances (pay per training job duration) C) Use EC2 Spot GPU instances in an ASG with scale-to-zero D) Use AWS Batch with GPU-enabled compute environments
Answer
Answer: B
Explanation: SageMaker Training Jobs provision GPU compute only for the duration of the training job and automatically terminate the instances afterward; you pay only for the time the job runs, with no instance management. EC2 Spot in an ASG (C) can scale to zero but requires managing scaling triggers and cluster setup. AWS Batch (D) with GPU compute environments also suits batch ML workloads but adds queue and environment management. Always-on instances (A) are the most expensive option.
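A rough illustration of the cost gap. The $12/hour rate below is purely illustrative, not an actual AWS price; the point is that an always-on instance bills every hour of the month while a training job bills only for its runtime:

```python
# Hypothetical GPU rate and workload: 40 hours of actual training per month.
HOURLY_RATE = 12.0                     # illustrative $/hour, not a real price
always_on = HOURLY_RATE * 24 * 30      # 720 billed hours per month
per_job = HOURLY_RATE * 40             # bill only for job runtime
print(always_on, per_job)              # → 8640.0 480.0
```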
Question 75
A company is using AWS Organizations with Control Tower. They need to implement a new security control that checks whether S3 bucket logging is enabled. The control should automatically remediate non-compliant buckets. What is the BEST approach?
A) Create a Config rule in each account manually B) Use Control Tower's customizations (Customizations for Control Tower - CfCT) to deploy a Config rule with SSM Automation remediation to all accounts C) Use an SCP to require bucket logging at creation D) Use Security Hub to enable the check
Answer
Answer: B
Explanation: CfCT (Customizations for Control Tower) allows deploying custom AWS Config rules, SCPs, and other resources to all accounts managed by Control Tower. Deploying a Config rule with an SSM Automation remediation document to all accounts ensures: (1) the rule checks all S3 buckets, (2) non-compliant buckets are automatically remediated by enabling logging. This scales automatically to new accounts created through Control Tower. Manual Config rule creation (A) doesn't scale. SCPs (C) are preventive but can't retroactively fix existing buckets. Security Hub (D) aggregates findings but doesn't directly remediate.
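A sketch of the two CloudFormation resources such a CfCT package could push to every account: the managed Config rule plus its automatic remediation. The rule identifier `S3_BUCKET_LOGGING_ENABLED` is the AWS managed rule for this check; the `AWS-ConfigureS3BucketLogging` SSM document name is an assumption to verify against your environment, and remediation parameters (target bucket, automation role) are omitted here:

```python
import json

# Hedged sketch of a CfCT-deployed template fragment (expressed as Python
# dicts): a managed Config rule and an automatic SSM remediation.
resources = {
    "BucketLoggingRule": {
        "Type": "AWS::Config::ConfigRule",
        "Properties": {
            "ConfigRuleName": "s3-bucket-logging-enabled",
            "Source": {"Owner": "AWS", "SourceIdentifier": "S3_BUCKET_LOGGING_ENABLED"},
        },
    },
    "BucketLoggingRemediation": {
        "Type": "AWS::Config::RemediationConfiguration",
        "Properties": {
            "ConfigRuleName": "s3-bucket-logging-enabled",
            "TargetType": "SSM_DOCUMENT",
            "TargetId": "AWS-ConfigureS3BucketLogging",  # assumed SSM document name
            "Automatic": True,
        },
    },
}
print(json.dumps(resources, indent=2))
```

Because CfCT re-runs its pipeline for accounts newly enrolled in Control Tower, the rule and remediation reach new accounts without manual action.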
Practice exam complete. Focus on the key SAP-C02 domains: Organizational Complexity, New Solutions, Migration Planning, Cost Control, and Continuous Improvement.