
AWS Solutions Architect Associate (SAA-C03) Practice Exam: 65 Questions


This practice exam covers the key domains of the AWS SAA-C03 exam. Use these questions to test your knowledge before taking the real exam.


Question 1

A company runs a web application on EC2 instances behind an Application Load Balancer. The application stores session state in local instance memory. Users are occasionally losing session data. What is the BEST solution?

A) Enable sticky sessions on the ALB
B) Store session state in Amazon ElastiCache
C) Use a Network Load Balancer instead
D) Increase the number of EC2 instances

Answer: B

Explanation: Storing session state in local instance memory is not scalable or resilient. The best solution is to use an external state store like Amazon ElastiCache (Redis or Memcached), which allows any instance to retrieve the session data. Sticky sessions (A) can help but sessions are lost if an instance fails. A Network Load Balancer (C) doesn't address session storage. Adding more instances (D) worsens the problem.


Question 2

A company needs to ensure S3 objects are encrypted at rest. The security team requires the encryption keys be managed by the company, not AWS. Which solution meets this requirement?

A) SSE-S3 (Server-Side Encryption with S3 managed keys)
B) SSE-KMS with AWS managed keys
C) SSE-C (Server-Side Encryption with customer-provided keys)
D) Client-side encryption with AWS KMS

Answer: C

Explanation: SSE-C allows customers to provide and manage their own encryption keys. AWS handles the encryption/decryption process but does not store the keys. SSE-S3 (A) uses AWS-managed keys. SSE-KMS with AWS managed keys (B) still has AWS controlling the key material. SSE-C is the most direct answer where the company fully controls the keys.
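As a sketch of what an SSE-C upload looks like in practice (bucket and object names are hypothetical), the request carries the key on every call; boto3's `s3.put_object` accepts these parameters and base64-encodes the key and computes its MD5 on the wire:

```python
import os

# Hypothetical 256-bit key managed entirely by the company (AWS never stores it).
customer_key = os.urandom(32)

# Request parameters for an SSE-C upload via boto3's s3.put_object (sketch).
put_params = {
    "Bucket": "example-bucket",   # hypothetical bucket name
    "Key": "report.csv",
    "Body": b"quarterly numbers",
    "SSECustomerAlgorithm": "AES256",
    "SSECustomerKey": customer_key,
}

# The same key must be supplied again on every GetObject call;
# without it, S3 cannot decrypt the object.
get_params = {k: put_params[k]
              for k in ("Bucket", "Key", "SSECustomerAlgorithm", "SSECustomerKey")}
```

Losing the key means losing the data, which is exactly the trade-off of keeping key management out of AWS's hands.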


Question 3

A solutions architect needs to design a system where an application writes data to a queue and a separate service processes it asynchronously. Processing must occur exactly once, and messages must not be lost if the processor fails. Which service is BEST?

A) Amazon SNS
B) Amazon SQS Standard Queue
C) Amazon SQS FIFO Queue
D) Amazon Kinesis Data Streams

Answer: C

Explanation: Amazon SQS FIFO Queue provides exactly-once processing and preserves message order. Messages are not lost because SQS is durable. SQS Standard (B) provides at-least-once delivery. SNS (A) is a pub/sub service without queuing for retry. Kinesis (D) is designed for streaming data, not simple queue processing.
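A minimal sketch of the producer side, assuming a hypothetical queue URL and payload: FIFO queues require a MessageGroupId (ordering is per group), and a deduplication ID lets SQS discard retransmits within its 5-minute deduplication window.

```python
import hashlib
import json

order = {"order_id": "A-1001", "amount": 42}   # hypothetical payload
body = json.dumps(order, sort_keys=True)

# Parameters for sqs.send_message against a FIFO queue (sketch).
send_params = {
    "QueueUrl": "https://sqs.us-east-1.amazonaws.com/123456789012/orders.fifo",
    "MessageBody": body,
    "MessageGroupId": "orders",   # messages in one group are delivered in order
    # Content-based hash so a retried send of the same body is deduplicated.
    "MessageDeduplicationId": hashlib.sha256(body.encode()).hexdigest(),
}
```

Enabling content-based deduplication on the queue itself would make the explicit MessageDeduplicationId optional.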


Question 4

A company has a VPC with public and private subnets. EC2 instances in the private subnet need to download software updates from the internet but must not be directly accessible from the internet. What should the architect configure?

A) Internet Gateway attached to the private subnet
B) NAT Gateway in the public subnet with a route from the private subnet
C) NAT Gateway in the private subnet
D) VPC Peering connection

Answer: B

Explanation: A NAT Gateway must be placed in a public subnet and allows instances in private subnets to initiate outbound internet connections. The private subnet route table must have a route pointing to the NAT Gateway. Placing the NAT Gateway in a private subnet (C) would not work because it needs internet access itself. An Internet Gateway on a private subnet (A) would make it public. VPC Peering (D) is for connecting VPCs, not internet access.


Question 5

An application stores files that are accessed frequently for the first 30 days, rarely for the next 90 days, and can then be deleted. Which S3 lifecycle configuration is MOST cost-effective?

A) S3 Standard for 30 days, then S3 Glacier Instant Retrieval, delete at 120 days
B) S3 Standard for 30 days, then S3 Standard-IA, delete at 120 days
C) S3 Standard for 30 days, then S3 Glacier Deep Archive, delete at 120 days
D) S3 Intelligent-Tiering for all objects, delete at 120 days

Answer: A

Explanation: S3 Glacier Instant Retrieval is designed for data that is rarely accessed but requires millisecond retrieval. It offers lower storage costs than Standard-IA for data accessed less than once per quarter. The lifecycle policy moves data after 30 days and deletes after 120 days. Standard-IA (B) is more expensive than Glacier Instant Retrieval for rarely accessed data. Glacier Deep Archive (C) has retrieval times of hours. Intelligent-Tiering (D) has monitoring fees.
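Option A maps directly onto a lifecycle rule. A sketch of that configuration as the dict boto3 would accept (bucket name is hypothetical):

```python
# Lifecycle rule: Standard for 30 days -> Glacier Instant Retrieval,
# then expire (delete) at day 120.
lifecycle = {
    "Rules": [
        {
            "ID": "frequent-then-archive",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},   # empty prefix = apply to all objects
            "Transitions": [
                {"Days": 30, "StorageClass": "GLACIER_IR"}
            ],
            "Expiration": {"Days": 120},
        }
    ]
}

# Would be applied with:
# s3.put_bucket_lifecycle_configuration(
#     Bucket="example-bucket", LifecycleConfiguration=lifecycle)
```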


Question 6

A company runs a stateless web application on EC2 instances and wants to automatically scale based on CPU utilization. What is the correct combination of services?

A) Auto Scaling Group with CloudWatch Alarms
B) Elastic Load Balancer with CloudWatch Events
C) Auto Scaling Group with AWS Config
D) CloudFormation with CloudWatch Alarms

Answer: A

Explanation: An Auto Scaling Group with CloudWatch Alarms is the standard solution for EC2 auto-scaling. CloudWatch monitors CPU utilization and triggers the ASG to scale in or out. ELB (B) distributes traffic but doesn't make scaling decisions. AWS Config (C) is a configuration compliance service. CloudFormation (D) is infrastructure as code and doesn't auto-scale dynamically.


Question 7

A company needs a relational database that automatically scales storage and supports multi-AZ deployments without manual intervention. Which service is the BEST fit?

A) Amazon RDS with Multi-AZ deployment
B) Amazon Aurora
C) Amazon DynamoDB
D) Amazon Redshift

Answer: B

Explanation: Amazon Aurora automatically scales storage from 10 GB to 128 TB in 10 GB increments as needed. It supports Multi-AZ by default with up to 15 read replicas across multiple AZs. Amazon RDS (A) requires manual storage scaling. DynamoDB (C) is a NoSQL database. Redshift (D) is a data warehouse.


Question 8

A company wants to allow users from a partner company (with their own AWS accounts) to access specific S3 buckets. What is the MOST secure approach?

A) Create IAM users for the partner company in your account
B) Share the S3 bucket access keys
C) Use IAM roles with cross-account trust relationships
D) Make the S3 bucket public

Answer: C

Explanation: IAM roles with cross-account trust relationships allow users from one AWS account to assume a role in another account, following least privilege without sharing long-term credentials. Creating IAM users (A) is harder to manage. Sharing access keys (B) is a security risk. Making the bucket public (D) is insecure.


Question 9

An application on EC2 needs to access an S3 bucket securely without storing credentials on the instance. What is the recommended approach?

A) Store AWS access keys in environment variables
B) Use IAM roles attached to the EC2 instance
C) Store credentials in AWS Secrets Manager and retrieve at runtime
D) Hard-code credentials in the application code

Answer: B

Explanation: IAM roles attached to EC2 instances provide temporary security credentials through the instance metadata service. The application can call AWS APIs without hard-coded credentials. This is the AWS best practice. Storing keys in environment variables (A) or hard-coding (D) risks credential exposure. Secrets Manager (C) is better than hard-coding but still requires initial credentials to access.


Question 10

A company needs to migrate an on-premises database to AWS with minimal downtime while the database is actively used. Which service provides continuous data replication?

A) AWS Database Migration Service (DMS) with ongoing replication
B) AWS Backup
C) AWS DataSync
D) AWS Snowball

Answer: A

Explanation: AWS DMS supports ongoing replication using Change Data Capture (CDC) that continuously replicates data changes from the source to the target database. This minimizes downtime. AWS Backup (B) is for backup/restore. DataSync (C) is for file storage migration. Snowball (D) is for offline bulk data transfer.


Question 11

A company runs a global web application and wants to reduce latency for users worldwide. The application uses static and dynamic content. Which service is BEST?

A) AWS Global Accelerator
B) Amazon CloudFront
C) Route 53 with latency-based routing
D) Elastic Load Balancer

Answer: B

Explanation: Amazon CloudFront is a CDN that caches both static and dynamic content at edge locations globally, reducing latency for users worldwide. Global Accelerator (A) improves availability using the AWS global network but is better for non-HTTP applications. Route 53 latency routing (C) routes users to the nearest region but doesn't cache content. ELB (D) operates within a region.


Question 12

An application requires a file system mounted by multiple EC2 instances simultaneously across multiple Availability Zones. Which storage solution should be used?

A) Amazon EBS gp3 volume
B) Amazon EFS (Elastic File System)
C) Amazon S3
D) Amazon FSx for Windows File Server

Answer: B

Explanation: Amazon EFS provides a shared NFS file system that can be mounted by multiple EC2 instances simultaneously across multiple AZs. EBS (A) can only be attached to one instance at a time. S3 (C) is object storage, not a traditional file system. FSx for Windows (D) is for Windows workloads using SMB protocol.


Question 13

A company needs to run fault-tolerant batch processing jobs that can be interrupted and restarted. Which EC2 pricing model offers the lowest cost?

A) On-Demand Instances
B) Reserved Instances
C) Spot Instances
D) Dedicated Hosts

Answer: C

Explanation: Spot Instances use spare EC2 capacity at discounts of up to 90% compared to On-Demand pricing. They can be interrupted with a 2-minute notice, which is acceptable for fault-tolerant batch jobs that can be checkpointed and restarted. On-Demand (A) is more expensive. Reserved (B) requires commitment. Dedicated Hosts (D) are for compliance/licensing requirements.


Question 14

A company wants to receive email alerts when their AWS billing exceeds a threshold. What should they configure?

A) AWS CloudTrail
B) AWS Budgets with an alert action
C) AWS Config Rules
D) Amazon GuardDuty

Answer: B

Explanation: AWS Budgets allows you to set cost and usage thresholds and receive alerts (via SNS or email) when those thresholds are breached or forecasted to be breached. CloudTrail (A) is for API call logging. AWS Config (C) monitors resource configurations. GuardDuty (D) is for threat detection.


Question 15

A web application needs to authenticate users through social identity providers like Google and Facebook. Which AWS service provides this capability?

A) AWS IAM Identity Center
B) Amazon Cognito User Pools
C) AWS Directory Service
D) Amazon Cognito Identity Pools

Answer: B

Explanation: Amazon Cognito User Pools provides user authentication and supports federation with social identity providers like Google, Facebook, and Amazon. IAM Identity Center (A) is for workforce SSO. Directory Service (C) is for Microsoft AD integration. Cognito Identity Pools (D) provides temporary AWS credentials to authenticated users but relies on User Pools for authentication.


Question 16

A company uses Amazon RDS MySQL and needs to improve read performance for a reporting application without affecting the primary database. What should they do?

A) Create a Multi-AZ standby replica
B) Create an RDS Read Replica
C) Use ElastiCache in front of RDS
D) Increase the RDS instance size

Answer: B

Explanation: RDS Read Replicas allow creation of read-only copies that serve read traffic, offloading the primary instance. They use asynchronous replication. Multi-AZ standbys (A) are for failover only and cannot serve read traffic in RDS MySQL. ElastiCache (C) caches query results but requires application changes. Increasing instance size (D) doesn't distribute load.


Question 17

An EC2 instance needs to store temporary computation data that doesn't need to persist if the instance stops. Which option provides the HIGHEST I/O performance?

A) Amazon EBS gp3
B) Amazon EFS
C) Instance Store
D) Amazon S3

Answer: C

Explanation: Instance Store (ephemeral storage) is physically attached to the host computer and provides very high I/O performance. Data is lost when the instance stops or terminates. For temporary computation data, this is acceptable. EBS gp3 (A) is persistent with lower peak performance than instance store. EFS (B) is a network file system with higher latency. S3 (D) is object storage.


Question 18

A company needs to control outbound internet traffic from EC2 instances in private subnets, allowing specific domains and blocking all others. Which service should they use?

A) Security Groups
B) Network ACLs
C) AWS Network Firewall
D) AWS WAF

Answer: C

Explanation: AWS Network Firewall provides stateful firewall rules including domain-based filtering (domain allow/block lists). Security Groups (A) and Network ACLs (B) work with IP addresses and ports, not domain names. AWS WAF (D) is for HTTP/HTTPS web application protection placed in front of ALB or CloudFront.


Question 19

A company wants to run containers without managing the underlying infrastructure. They need a serverless container solution. Which service should they use?

A) Amazon ECS with EC2 launch type
B) Amazon EKS with managed node groups
C) AWS Fargate
D) Amazon ECR

Answer: C

Explanation: AWS Fargate is a serverless compute engine for containers that works with ECS and EKS. You don't need to provision or manage servers. ECS with EC2 (A) requires managing EC2 instances. EKS with managed node groups (B) still requires managing EC2 nodes. ECR (D) is a container registry, not a compute service.


Question 20

A company needs to process petabyte-scale data with complex SQL queries for analytics. Which service is MOST appropriate?

A) Amazon RDS
B) Amazon DynamoDB
C) Amazon Redshift
D) Amazon Aurora

Answer: C

Explanation: Amazon Redshift is a petabyte-scale data warehouse designed for complex analytical queries (OLAP). It uses columnar storage and massively parallel processing (MPP). RDS (A) and Aurora (D) are OLTP relational databases not optimized for petabyte-scale analytics. DynamoDB (B) is a NoSQL database not suitable for complex SQL analytics.


Question 21

A developer wants a deployment strategy that routes a small percentage of traffic to a new version, monitors it, and gradually increases traffic if no errors occur. Which strategy is this?

A) Blue/Green deployment
B) Canary deployment
C) Rolling deployment
D) In-place deployment

Answer: B

Explanation: Canary deployment routes a small percentage of traffic (e.g., 5-10%) to the new version while the majority uses the old version. Traffic gradually shifts if no issues are detected. Blue/Green (A) switches all traffic at once between two environments. Rolling (C) gradually replaces old instances with new ones. In-place (D) updates existing instances directly.


Question 22

A company needs to create a VPN connection between their on-premises data center and their AWS VPC. Which components are required on the AWS side?

A) Internet Gateway and Customer Gateway
B) Virtual Private Gateway and Customer Gateway
C) Transit Gateway and Direct Connect
D) NAT Gateway and Customer Gateway

Answer: B

Explanation: A Site-to-Site VPN requires a Virtual Private Gateway (VGW) on the AWS side and a Customer Gateway (CGW) representing the on-premises VPN device. The VGW is attached to the VPC. Direct Connect (C) is a dedicated network connection, not VPN. NAT Gateway (D) is for internet access.


Question 23

An application needs to send notifications to multiple subscribers simultaneously including email recipients, mobile devices, and Lambda functions. Which service should be used?

A) Amazon SQS
B) Amazon SNS
C) Amazon EventBridge
D) AWS Step Functions

Answer: B

Explanation: Amazon SNS (Simple Notification Service) is a pub/sub messaging service that can fan out messages to multiple subscribers simultaneously including email, SMS, mobile push, Lambda, SQS, and HTTP endpoints. SQS (A) is a point-to-point queue. EventBridge (C) is for event routing between services. Step Functions (D) orchestrates workflows.


Question 24

A company stores customer data in S3 and needs to replicate it to another AWS Region for disaster recovery. What feature should they enable?

A) S3 Transfer Acceleration
B) S3 Cross-Region Replication (CRR)
C) S3 Versioning only
D) S3 Multi-Part Upload

Answer: B

Explanation: S3 Cross-Region Replication (CRR) automatically replicates objects to a destination bucket in a different AWS Region. It requires versioning on both source and destination buckets. Transfer Acceleration (A) speeds up uploads. Versioning alone (C) is required for CRR but doesn't replicate across regions by itself. Multi-Part Upload (D) is for large file uploads.
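A sketch of what a minimal CRR configuration looks like as the dict passed to `s3.put_bucket_replication` (the role and bucket ARNs are hypothetical); versioning must already be on for both buckets:

```python
# Minimal cross-region replication configuration (sketch).
replication = {
    "Role": "arn:aws:iam::123456789012:role/s3-crr-role",  # role S3 assumes
    "Rules": [
        {
            "ID": "dr-copy",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},   # empty filter = replicate every new object
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {
                "Bucket": "arn:aws:s3:::example-dr-bucket",
                # Optional: store DR copies in a cheaper class.
                "StorageClass": "STANDARD_IA",
            },
        }
    ]
}
```

Note that replication applies only to objects written after the rule is enabled; existing objects need S3 Batch Replication.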


Question 25

A company runs a stateful application using ECS with Fargate and needs persistent storage. How should they handle this?

A) Use ECS Task Definition volumes with host mount points
B) Mount an Amazon EFS file system in the Fargate task
C) Store data in the Fargate task's local filesystem
D) Use an EBS volume attached to the Fargate task

Answer: B

Explanation: Amazon EFS integrates with Fargate and allows tasks to mount a shared, persistent file system. EFS data persists beyond the task lifecycle. Fargate doesn't expose host access (A). Local Fargate filesystem (C) is ephemeral and lost when the task stops. EBS volumes (D) cannot be directly attached to Fargate tasks.


Question 26

A company wants to create and manage AWS resources in a repeatable way, track changes, and roll back if needed. Which service should they use?

A) AWS Systems Manager
B) AWS CloudFormation
C) AWS CodeDeploy
D) AWS OpsWorks

Answer: B

Explanation: AWS CloudFormation is the native IaC service that defines AWS infrastructure as JSON or YAML templates. It supports drift detection, change sets, and rollback on failure. Systems Manager (A) manages operational tasks. CodeDeploy (C) is for application deployment. OpsWorks (D) is a configuration management service using Chef/Puppet.


Question 27

A company runs an e-commerce application with unpredictable traffic spikes and needs the database to handle sudden increases in connections. Which solution is MOST appropriate?

A) Use Aurora with Auto Scaling
B) Use RDS Proxy in front of the database
C) Use ElastiCache to cache all queries
D) Increase the RDS max_connections parameter

Answer: B

Explanation: Amazon RDS Proxy manages database connection pooling, allowing the application to scale without overwhelming the database with too many direct connections. Aurora Auto Scaling (A) helps with read replicas but doesn't directly address connection exhaustion. ElastiCache (C) reduces database load but doesn't manage connections. Increasing max_connections (D) has limits based on instance memory.


Question 28

A company needs to comply with regulations requiring all API calls to AWS services be logged. Which service provides this functionality?

A) Amazon CloudWatch
B) AWS CloudTrail
C) AWS Config
D) Amazon GuardDuty

Answer: B

Explanation: AWS CloudTrail records all API calls made to AWS services, including the caller's identity, time, source IP, request parameters, and response. CloudWatch (A) monitors metrics and logs but doesn't record API calls. AWS Config (C) records resource configuration changes. GuardDuty (D) uses CloudTrail data for threat detection.


Question 29

A Lambda function needs to access a private RDS database in a private subnet. How should the Lambda function be configured?

A) Use VPC endpoints for RDS
B) Configure the Lambda function to run inside the VPC with the RDS database
C) Make the RDS database publicly accessible
D) Use an API Gateway to proxy requests to RDS

Answer: B

Explanation: Lambda functions can be configured to run inside a VPC by specifying the VPC, subnets, and security groups. This allows Lambda to access resources in private subnets like RDS. VPC endpoints (A) are for accessing AWS services privately, not RDS. Making RDS public (C) is a security risk. API Gateway proxy to RDS (D) adds unnecessary complexity.


Question 30

A company is designing microservices on AWS. Each service needs to communicate asynchronously. Which service is MOST appropriate?

A) Amazon SQS
B) AWS AppSync
C) Amazon API Gateway
D) AWS Direct Connect

Answer: A

Explanation: Amazon SQS is ideal for asynchronous inter-service communication in microservices architectures, providing decoupling and resilience. AppSync (B) is a GraphQL service. API Gateway (C) is for synchronous HTTP APIs. Direct Connect (D) is for dedicated network connectivity to AWS.


Question 31

A company needs to store database credentials securely and automatically rotate them. Which service should they use?

A) AWS Systems Manager Parameter Store
B) AWS Secrets Manager
C) AWS Key Management Service (KMS)
D) IAM roles

Answer: B

Explanation: AWS Secrets Manager is designed for storing and automatically rotating secrets like database credentials. It integrates natively with RDS, Redshift, and DocumentDB for automatic rotation. Parameter Store (A) can store secrets but lacks built-in automatic rotation. KMS (C) manages encryption keys. IAM roles (D) provide AWS service access, not database credentials.


Question 32

A company needs to protect their web application against SQL injection and cross-site scripting (XSS) attacks. Which service should they implement?

A) AWS Shield
B) AWS WAF
C) Amazon GuardDuty
D) AWS Network Firewall

Answer: B

Explanation: AWS WAF (Web Application Firewall) protects web applications from common exploits including SQL injection and XSS by inspecting HTTP/HTTPS requests and blocking malicious traffic. AWS Shield (A) protects against DDoS attacks. GuardDuty (C) is a threat detection service. Network Firewall (D) provides network-level protection.


Question 33

A company wants to receive alerts when EC2 CPU utilization exceeds 80% for 5 minutes. What should they configure?

A) AWS CloudTrail with an EventBridge rule
B) Amazon CloudWatch Metric Alarm
C) AWS Config rule
D) Amazon Inspector

Answer: B

Explanation: Amazon CloudWatch Metric Alarms monitor specific metrics like CPUUtilization over a specified period and trigger actions when thresholds are breached. CloudTrail with EventBridge (A) is for API events, not metric monitoring. AWS Config (C) monitors configuration compliance. Inspector (D) is for vulnerability assessment.
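A sketch of the parameters such an alarm would pass to `cloudwatch.put_metric_alarm` (instance ID and SNS topic ARN are hypothetical); "above 80% for 5 minutes" translates to one 300-second evaluation period over the threshold:

```python
# Parameters for cloudwatch.put_metric_alarm (sketch).
alarm_params = {
    "AlarmName": "high-cpu-web",
    "Namespace": "AWS/EC2",
    "MetricName": "CPUUtilization",
    "Dimensions": [{"Name": "InstanceId", "Value": "i-0abc123def456"}],
    "Statistic": "Average",
    "Period": 300,                 # seconds per evaluation period
    "EvaluationPeriods": 1,        # breach one period -> alarm
    "Threshold": 80.0,
    "ComparisonOperator": "GreaterThanThreshold",
    # Notify an SNS topic (hypothetical ARN) when the alarm fires.
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
}
```

For "80% for 15 minutes" you would instead use Period=300 with EvaluationPeriods=3.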


Question 34

A company needs a CDN for static and dynamic content with DDoS protection and 24/7 DDoS response team support. Which combination should they use?

A) CloudFront with AWS Shield Standard
B) Global Accelerator with AWS WAF
C) Route 53 with Elastic Load Balancer
D) CloudFront with AWS Shield Advanced

Answer: D

Explanation: CloudFront serves static and dynamic content from global edge locations. AWS Shield Advanced provides enhanced DDoS protection with 24/7 DRT support, real-time attack diagnostics, and cost protection during DDoS events. Shield Standard (A) is free but provides basic protection only. Global Accelerator (B) is for routing. Route 53 + ELB (C) doesn't provide CDN.


Question 35

A company has RTO of 15 minutes and RPO of 5 minutes for disaster recovery. Which DR strategy is MOST appropriate?

A) Backup and Restore
B) Pilot Light
C) Warm Standby
D) Multi-Site Active/Active

Answer: C

Explanation: Warm Standby maintains a scaled-down but fully functional version of the production environment in the DR region. With near-real-time replication, RPO of 5 minutes is achievable and scaling up to meet traffic can be done within 15 minutes. Backup and Restore (A) typically has RTO/RPO of hours. Pilot Light (B) has slower RTO as core services need to be started. Multi-Site Active/Active (D) offers near-zero RTO/RPO but is more expensive.


Question 36

A company wants millisecond response times for simple key-based queries on unstructured data at scale. Which service should they use?

A) Amazon RDS PostgreSQL
B) Amazon Redshift
C) Amazon DynamoDB
D) Amazon Aurora

Answer: C

Explanation: Amazon DynamoDB is a fully managed NoSQL database that provides single-digit millisecond performance at any scale. It's optimized for simple key-value and document lookups. RDS PostgreSQL (A) and Aurora (D) are relational databases. Redshift (B) is a data warehouse for analytics.


Question 37

A company needs to enforce HTTPS for all S3 API calls through corporate proxy servers. Which solution is MOST appropriate?

A) Use S3 Transfer Acceleration
B) Enforce HTTPS via an S3 bucket policy denying HTTP
C) Use a VPN connection to AWS
D) Use AWS Direct Connect

Answer: B

Explanation: An S3 bucket policy with a condition denying requests where aws:SecureTransport is false enforces HTTPS for all S3 API calls. This works through corporate proxies as long as they allow HTTPS. Transfer Acceleration (A) speeds up uploads but doesn't enforce HTTPS. VPN (C) or Direct Connect (D) are for network connectivity.
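The standard pattern is a Deny statement conditioned on aws:SecureTransport. A sketch of that policy document (bucket name hypothetical); note both the bucket ARN and the `/*` object ARN must be listed so the deny covers bucket-level and object-level calls:

```python
import json

# Bucket policy denying any S3 request made over plain HTTP.
# aws:SecureTransport evaluates to "false" for non-TLS requests.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ]
}
policy_json = json.dumps(policy)
# Would be applied with:
# s3.put_bucket_policy(Bucket="example-bucket", Policy=policy_json)
```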


Question 38

An application uploads files to S3 and processes them with Lambda. Failed Lambda executions must be retried without reprocessing successful ones. What architecture is BEST?

A) S3 Event Notification to Lambda directly
B) S3 Event Notification to SQS, then Lambda with Dead Letter Queue
C) S3 Event Notification to SNS, then Lambda
D) CloudWatch Events to Lambda

Answer: B

Explanation: Using SQS between S3 events and Lambda enables retry logic. If Lambda fails, the message returns to the queue and is retried. After configured failures, messages go to a DLQ for investigation. Direct S3 to Lambda (A) has limited retry. SNS (C) doesn't persist messages for retry. CloudWatch Events (D) is for scheduled or service events.


Question 39

A company is setting up a data lake on S3 and needs to catalog and discover metadata for analytics queries. Which service should they use?

A) Amazon Athena
B) AWS Glue Data Catalog
C) Amazon Redshift Spectrum
D) Amazon QuickSight

Answer: B

Explanation: AWS Glue Data Catalog is a centralized metadata repository storing table definitions and schemas for data in S3. It integrates with Athena, Redshift Spectrum, and EMR. Athena (A) is a query service that uses the Glue catalog. Redshift Spectrum (C) queries S3 data but needs a catalog. QuickSight (D) is a BI visualization tool.


Question 40

A company needs to implement automatic failover between AWS regions for their application with minimal RTO. Which Route 53 routing policy should be configured?

A) Weighted routing
B) Latency-based routing
C) Failover routing with health checks
D) Geolocation routing

Answer: C

Explanation: Route 53 Failover routing with health checks monitors the primary endpoint and automatically routes traffic to the secondary endpoint if the primary becomes unhealthy. Weighted routing (A) distributes traffic by weight. Latency-based (B) routes to the lowest latency endpoint. Geolocation (D) routes based on user location.


Question 41

A company needs to implement least-privilege access for developers who need specific EC2 and S3 access but must not be able to delete any resources. How should this be implemented?

A) Add developers to the PowerUserAccess managed policy
B) Create custom IAM policies with specific Allow statements and explicit Deny for delete actions
C) Use Service Control Policies (SCPs) at the account level
D) Use IAM permission boundaries only

Answer: B

Explanation: Custom IAM policies with specific Allow statements for required actions and explicit Deny for delete actions implement least privilege. Explicit Deny overrides any Allow statements. PowerUserAccess (A) gives too broad access. SCPs (C) are for AWS Organization accounts. Permission boundaries (D) set maximum permissions but don't automatically restrict to specific resources.
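A sketch of such a policy document (the action lists are illustrative, not exhaustive, and the Resource scope would normally be narrowed to specific ARNs):

```python
# Custom IAM policy: allow the needed EC2/S3 actions, with an explicit Deny
# on destructive actions. An explicit Deny always overrides any Allow.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowDevWork",
            "Effect": "Allow",
            "Action": [
                "ec2:Describe*", "ec2:RunInstances",
                "s3:GetObject", "s3:PutObject", "s3:ListBucket",
            ],
            "Resource": "*",   # scope to specific ARNs in a real policy
        },
        {
            "Sid": "BlockDeletes",
            "Effect": "Deny",
            "Action": [
                "ec2:TerminateInstances",
                "s3:DeleteObject", "s3:DeleteBucket",
            ],
            "Resource": "*",
        },
    ]
}
```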


Question 42

An application reads DynamoDB data and requires strongly consistent reads. What should be specified?

A) Set ReadCapacityUnits to high values
B) Use TransactGetItems API
C) Specify ConsistentRead = true in the GetItem or Query call
D) Enable DynamoDB Streams

Answer: C

Explanation: DynamoDB supports eventually consistent (default) and strongly consistent reads. Setting ConsistentRead = true in GetItem, Query, or Scan ensures the most recent data. Strong reads consume twice the read capacity of eventually consistent reads. TransactGetItems (B) is for ACID transactions. DynamoDB Streams (D) is for change data capture.
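The difference is a single flag on the read call. A sketch of the parameters for the low-level `dynamodb.get_item` client call (table and key names are hypothetical):

```python
# Parameters for dynamodb.get_item with a strongly consistent read (sketch).
# Without ConsistentRead, the default is an eventually consistent read.
get_params = {
    "TableName": "Orders",
    "Key": {"order_id": {"S": "A-1001"}},  # low-level client attribute format
    "ConsistentRead": True,   # doubles the read-capacity cost of the call
}
```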


Question 43

A company wants to centralize security findings from GuardDuty, Inspector, and Macie across multiple accounts. Which service should they use?

A) AWS CloudTrail
B) Amazon CloudWatch
C) AWS Security Hub
D) Amazon Detective

Answer: C

Explanation: AWS Security Hub aggregates, organizes, and prioritizes security findings from AWS security services and third-party tools across multiple accounts. CloudTrail (A) is for API logging. CloudWatch (B) is for monitoring metrics. Amazon Detective (D) helps investigate security incidents but doesn't aggregate findings.


Question 44

A company uses EC2 instances in an Auto Scaling Group behind an ALB. During scale-in events, in-flight requests to terminating instances are being dropped. How should this be resolved?

A) Increase the Health Check Grace Period
B) Enable Connection Draining (Deregistration Delay) on the ALB
C) Use lifecycle hooks to delay termination
D) Enable Cross-Zone Load Balancing

Answer: B

Explanation: Connection Draining (Deregistration Delay in ALB) allows the load balancer to stop sending new requests to a deregistering instance while allowing existing in-flight connections to complete. The default delay is 300 seconds. Health Check Grace Period (A) is for new instances. Lifecycle hooks (C) pause termination but don't specifically handle in-flight requests. Cross-Zone Load Balancing (D) distributes traffic across zones.


Question 45

A company needs end-to-end encryption for S3 data with keys generated and controlled within their own hardware security module (HSM). Which option meets this requirement?

A) SSE-S3
B) SSE-KMS with AWS managed keys
C) SSE-KMS with customer managed keys in AWS CloudHSM custom key store
D) SSE-C

Answer: C

Explanation: AWS CloudHSM provides dedicated HSMs. Using CloudHSM as a custom key store for AWS KMS means keys are generated and never leave the HSM. SSE-S3 (A) and SSE-KMS with AWS managed keys (B) use AWS-controlled HSMs. SSE-C (D) requires the customer to manage keys externally and pass them with each request.


Question 46

A three-tier web application should have the database tier inaccessible from the internet while the web tier needs internet access. What VPC design is BEST?

A) All tiers in public subnets
B) Web tier in public subnet, application and database tiers in private subnets
C) All tiers in private subnets with a NAT Gateway
D) Web tier in private subnet, database in public subnet

Answer: B

Explanation: In a three-tier architecture, the web tier is in a public subnet (accessible from internet via ALB) while application and database tiers are in private subnets (not directly accessible from internet). All tiers in public subnets (A) exposes the database. All tiers private (C) means the web tier can't receive direct internet traffic. Option D is backwards.


Question 47

A company needs Lambda to process events from a specific source in strict order. Which event source should they use?

A) Amazon S3 Event Notifications
B) Amazon SQS Standard Queue
C) Amazon SQS FIFO Queue
D) Amazon SNS

Answer: C

Explanation: Amazon SQS FIFO queues preserve message order and ensure each message is processed exactly once. Lambda can poll FIFO queues and process messages in order per message group. SQS Standard (B) doesn't guarantee order. S3 Events (A) and SNS (D) don't guarantee delivery order.


Question 48

A company has variable traffic with idle periods and wants to minimize costs while scaling quickly when traffic increases. Which compute option is BEST?

A) Large EC2 On-Demand instance running 24/7 B) AWS Lambda with appropriate memory configuration C) EC2 Reserved Instance D) EC2 Spot Instance fleet

Answer

Answer: B

Explanation: AWS Lambda charges only for actual execution time and is ideal for variable traffic with idle periods: there is no cost while idle, and it scales automatically when traffic increases. A 24/7 EC2 instance (A) wastes money during idle periods. Reserved Instances (C) require paying even when idle. Spot Instances (D) can be interrupted and are better for batch workloads.


Question 49

A company needs to transfer 50 TB of data to AWS. Their internet connection is 100 Mbps and needs to complete within 5 days. Which service should they use?

A) AWS DataSync over the internet B) S3 Transfer Acceleration C) AWS Snowball Edge D) AWS Direct Connect

Answer

Answer: C

Explanation: At 100 Mbps, transferring 50 TB would take approximately 46 days. AWS Snowball Edge is a physical device loaded with data and shipped to AWS, completing the transfer much faster. DataSync (A) and Transfer Acceleration (B) still use the internet connection. Direct Connect (D) setup takes weeks.
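The 46-day figure in the explanation follows directly from the link math. A quick check, assuming the full 100 Mbps line rate is available and using decimal terabytes:

```python
def transfer_days(terabytes, mbps):
    """Days to move the given data volume over a link of the given speed."""
    bits = terabytes * 1e12 * 8            # decimal TB -> bits
    seconds = bits / (mbps * 1e6)          # Mbps -> bits per second
    return seconds / 86400

# 50 TB over a 100 Mbps link:
days = transfer_days(50, 100)
print(round(days, 1))  # ~46.3 days, far beyond the 5-day deadline
```

Real throughput would be lower still (protocol overhead, link contention), which only strengthens the case for a physical transfer device.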


Question 50

A company uses AWS Organizations and needs to ensure all accounts have CloudTrail enabled and cannot disable it. Which approach enforces this?

A) IAM policies in each account B) AWS Config rules in each account C) Service Control Policies (SCPs) at the Organization level D) AWS CloudFormation StackSets

Answer

Answer: C

Explanation: SCPs can restrict maximum available permissions for accounts in an organization. An SCP can deny cloudtrail:StopLogging and cloudtrail:DeleteTrail actions for all accounts, preventing anyone from disabling CloudTrail. IAM policies (A) can be overridden by account admins. Config rules (B) detect but don't prevent. CloudFormation StackSets (D) deploy resources but don't prevent manual changes.
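Such an SCP is a short JSON document; here it is sketched as a Python dict. The `cloudtrail:` action names are real CloudTrail API actions; the `Sid` is an arbitrary label.

```python
import json

# Deny statements in an SCP apply to every principal in the attached accounts,
# including account administrators (the management account is exempt from SCPs).
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyCloudTrailDisable",
        "Effect": "Deny",
        "Action": [
            "cloudtrail:StopLogging",
            "cloudtrail:DeleteTrail",
            "cloudtrail:UpdateTrail",
        ],
        "Resource": "*",
    }],
}
print(json.dumps(scp, indent=2))
```

Pairing this SCP with a CloudFormation StackSet that deploys the trail gives both enforcement (SCP) and provisioning (StackSet).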


Question 51

A company's DynamoDB table experiences high read latency during peak hours. What is the MOST cost-effective solution?

A) Increase DynamoDB read capacity units B) Add DynamoDB Accelerator (DAX) as a caching layer C) Move to Amazon Aurora D) Create DynamoDB Global Tables

Answer

Answer: B

Explanation: DynamoDB Accelerator (DAX) is a fully managed in-memory caching service for DynamoDB that delivers up to 10x performance improvement. Increasing read capacity units (A) improves throughput but not latency. Aurora (C) is a relational database. Global Tables (D) are for multi-region replication.
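DAX is API-compatible with DynamoDB, so applications get caching transparently; the pattern underneath is a read-through cache. This sketch shows that pattern with a plain dict, purely to illustrate the idea — it is not the DAX client API.

```python
import time

class ReadThroughCache:
    """Sketch of the read-through caching pattern DAX applies to DynamoDB reads."""
    def __init__(self, backing_get, ttl_seconds=300):
        self.backing_get = backing_get  # stands in for a DynamoDB GetItem call
        self.ttl = ttl_seconds
        self.store = {}

    def get(self, key):
        hit = self.store.get(key)
        if hit and time.time() - hit[1] < self.ttl:
            return hit[0]               # cache hit: no table read, low latency
        value = self.backing_get(key)   # cache miss: read the table, then cache
        self.store[key] = (value, time.time())
        return value

table_reads = []
cache = ReadThroughCache(lambda k: table_reads.append(k) or f"item-{k}")
cache.get("pk1")
cache.get("pk1")
assert table_reads == ["pk1"]  # second read was served from cache
```

The cost argument follows from this: repeated reads of hot items hit the cache instead of consuming read capacity units.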


Question 52

EC2 instances in a private subnet need to access Amazon S3 without going through the internet. What should they implement?

A) NAT Gateway B) Internet Gateway C) S3 Gateway VPC Endpoint D) AWS Direct Connect

Answer

Answer: C

Explanation: An S3 Gateway VPC Endpoint allows EC2 instances in private subnets to access S3 directly through the AWS private network without internet access. Gateway endpoints for S3 and DynamoDB are free. NAT Gateway (A) routes through the internet. Internet Gateway (B) requires a public IP. Direct Connect (D) is for on-premises connectivity.


Question 53

A company needs to migrate on-premises VMware VMs to AWS EC2 instances with minimal disruption. Which service simplifies this migration?

A) AWS Database Migration Service B) AWS Application Migration Service (MGN) C) AWS DataSync D) AWS Snowball Edge

Answer

Answer: B

Explanation: AWS Application Migration Service (MGN) is designed to simplify migration of physical servers, virtual machines, and cloud instances to AWS. It continuously replicates source servers and allows non-disruptive testing before cutover. DMS (A) is for database migration. DataSync (C) is for file transfer. Snowball (D) is for bulk data transfer.


Question 54

A company needs to prevent S3 buckets from becoming public across all accounts in their AWS Organization. Which solution provides the strongest preventive control?

A) S3 Block Public Access at the account level across all accounts B) AWS Config rule s3-bucket-public-read-prohibited C) S3 Bucket Policies on each bucket D) IAM policies restricting s3:PutBucketAcl

Answer

Answer: A

Explanation: S3 Block Public Access at the account level overrides any bucket-level ACL settings that would grant public access. When enabled across all accounts (via SCP enforcement or per account), it prevents any bucket from becoming public. AWS Config (B) detects but doesn't prevent. Bucket policies (C) must be set per bucket. IAM policies (D) can be bypassed by the root account.
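Account-level Block Public Access consists of four settings; the names below are the real S3 setting names. The boto3 call is shown only as a comment (the account ID is AWS's documentation example, and a live call needs credentials).

```python
# The four account-level S3 Block Public Access settings:
block_all = {
    "BlockPublicAcls": True,        # reject requests that set public ACLs
    "IgnorePublicAcls": True,       # ignore any existing public ACLs
    "BlockPublicPolicy": True,      # reject bucket policies granting public access
    "RestrictPublicBuckets": True,  # restrict access to buckets with public policies
}
# Applied per account with boto3, roughly:
#   boto3.client("s3control").put_public_access_block(
#       AccountId="111122223333",
#       PublicAccessBlockConfiguration=block_all)
assert all(block_all.values())
```

An SCP denying `s3:PutAccountPublicAccessBlock` can then stop member accounts from turning the settings back off.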


Question 55

An application on EC2 needs to retrieve configuration parameters including database connection strings at startup. Which service is MOST appropriate?

A) AWS Secrets Manager B) AWS Systems Manager Parameter Store C) Amazon S3 D) AWS CodePipeline

Answer

Answer: B

Explanation: AWS Systems Manager Parameter Store is designed for storing configuration data. It supports hierarchical storage, versioning, and IAM access control. It's cost-effective for configuration parameters (free for standard parameters). Secrets Manager (A) is better for frequently rotated secrets. S3 (C) is for object storage. CodePipeline (D) is for CI/CD.


Question 56

A company wants to implement a hub-and-spoke network topology connecting multiple VPCs and on-premises networks via Direct Connect. Which service should be at the hub?

A) VPC Peering B) AWS Transit Gateway C) Virtual Private Gateway D) AWS PrivateLink

Answer

Answer: B

Explanation: AWS Transit Gateway acts as a regional hub connecting multiple VPCs and on-premises networks, simplifying architecture by eliminating complex VPC peering meshes. VPC Peering (A) is point-to-point and doesn't scale well. Virtual Private Gateway (C) is for individual VPC-to-VPN connections. PrivateLink (D) is for private service connectivity.


Question 57

A company runs a stateful application and needs auto-scaling without breaking user sessions. What approach is BEST?

A) Use sticky sessions on ALB B) Store session state in ElastiCache and enable auto-scaling C) Use instance store for session data D) Increase minimum instance count to prevent scaling in

Answer

Answer: B

Explanation: Externalizing session state to ElastiCache (Redis) allows any instance to serve any user's session, enabling seamless auto-scaling without session loss. Sticky sessions (A) create session affinity but sessions are lost if the instance terminates. Instance store (C) is lost on termination. Option D prevents scaling and costs more.
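The key property is that the session store outlives any single instance. In this sketch a shared dict stands in for ElastiCache; the class is invented for illustration, not a real client library.

```python
class SessionStore:
    """Any instance can read or write sessions via a shared backend
    (a dict here; ElastiCache for Redis in a real deployment)."""
    def __init__(self, backend):
        self.backend = backend

    def save(self, session_id, data):
        self.backend[session_id] = data

    def load(self, session_id):
        return self.backend.get(session_id)

shared = {}                          # shared external store
instance_a = SessionStore(shared)
instance_b = SessionStore(shared)
instance_a.save("sess-42", {"user": "alice", "cart_items": 3})
# Instance A is terminated during scale-in; instance B serves the next request:
assert instance_b.load("sess-42")["user"] == "alice"
```

Because no instance holds unique state, the Auto Scaling Group can add or remove instances freely.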


Question 58

A company needs to stream millions of clickstream events per second to S3 for analysis. Which service should they use for ingestion?

A) Amazon SQS B) Amazon Kinesis Data Firehose C) Amazon EventBridge D) AWS Batch

Answer

Answer: B

Explanation: Amazon Kinesis Data Firehose is a fully managed service for delivering real-time streaming data to destinations such as Amazon S3, Amazon Redshift, and Amazon OpenSearch Service. It automatically scales for high-volume data streams. SQS (A) is a message queue, not designed for high-volume streaming to S3. EventBridge (C) is for event routing. AWS Batch (D) is for batch jobs.


Question 59

A company needs to give third-party auditors read-only access to their AWS account. The auditors have their own AWS accounts. What is the MOST secure approach?

A) Create IAM users with read-only permissions and share credentials B) Create an IAM role with SecurityAudit managed policy and trust the auditor's AWS account C) Create an IAM user group with read-only access D) Share root account credentials

Answer

Answer: B

Explanation: Creating an IAM role with the SecurityAudit managed policy and a trust relationship with the auditor's AWS account allows auditors to assume the role using their own credentials. No long-term credentials are shared, and access can be revoked at any time by modifying the trust policy. Creating IAM users (A, C) requires sharing credentials. Sharing root credentials (D) is never acceptable.
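The trust policy on such a role looks like the following, sketched as a Python dict. The structure and the `sts:AssumeRole` / `sts:ExternalId` names are real; the auditor account ID and external ID value are hypothetical placeholders.

```python
import json

# Trust policy: only principals in the auditor's account may assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::999988887777:root"},  # hypothetical auditor account
        "Action": "sts:AssumeRole",
        # External ID guards against the confused-deputy problem (value is illustrative):
        "Condition": {"StringEquals": {"sts:ExternalId": "audit-2024"}},
    }],
}
# The SecurityAudit AWS managed policy is then attached to the role
# to grant the read-only permissions.
print(json.dumps(trust_policy, indent=2))
```

Revoking access is a single edit: remove the auditor principal from the trust policy.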


Question 60

When a new order is placed, three downstream services need to be notified: inventory, payment, and notification. What architecture pattern is BEST?

A) API Gateway calling each service sequentially B) SNS topic with separate SQS queues for each service (fan-out pattern) C) Direct Lambda invocations in parallel D) Single SQS queue shared by all three services

Answer

Answer: B

Explanation: In the SNS fan-out pattern, a message is published once to an SNS topic and multiple SQS queues subscribe to it. Each service reads from its own dedicated SQS queue, providing decoupling, independent scaling, and per-service failure handling. Sequential API Gateway calls (A) create tight coupling. Direct Lambda invocations (C) are synchronous. A shared SQS queue (D) means only one service gets each message.
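The fan-out behavior can be shown with a toy simulation: one publish, one independent copy per subscribed queue. The class is illustrative only, not the SNS API.

```python
from collections import defaultdict

class SnsTopicSim:
    """Toy fan-out: every subscribed queue receives its own copy of each message."""
    def __init__(self):
        self.queues = defaultdict(list)

    def subscribe(self, queue_name):
        self.queues[queue_name]  # ensure the queue entry exists

    def publish(self, message):
        for queue in self.queues.values():
            queue.append(dict(message))  # independent copy per subscriber

topic = SnsTopicSim()
for name in ("inventory", "payment", "notification"):
    topic.subscribe(name)
topic.publish({"order_id": 1001, "total": 49.99})
assert all(len(topic.queues[n]) == 1 for n in ("inventory", "payment", "notification"))
```

Contrast with option D: with a single shared queue, that one message would be consumed by exactly one of the three services.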


Question 61

A company stores petabytes of data in S3 and needs to run SQL queries without loading data into a database. Which service should they use?

A) Amazon RDS B) Amazon Athena C) Amazon Redshift D) AWS Glue ETL

Answer

Answer: B

Explanation: Amazon Athena is a serverless interactive query service that runs SQL queries directly on data stored in S3 using standard SQL. You pay only for queries run. No data loading is required. RDS (A) and Redshift (C) require loading data. Glue ETL (D) is for data transformation, not querying.


Question 62

An Auto Scaling Group spans 2 AZs with a minimum of 2 instances. What change ensures the application stays available if one AZ fails?

A) Increase minimum instances to 4 across 2 AZs B) Extend the ASG to span 3 AZs with a minimum of 3 instances C) Add a second ALB in a different region D) Enable Enhanced Networking on EC2 instances

Answer

Answer: B

Explanation: Spanning 3 AZs with a minimum of 3 instances (one per AZ) ensures that if one AZ fails, the application continues running in the other 2 AZs. With only 2 AZs (A), if one fails you have a single instance remaining. Cross-region ALB (C) is complex. Enhanced Networking (D) improves networking performance, not availability.
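The availability reasoning is simple arithmetic, assuming the ASG spreads instances evenly across AZs:

```python
def surviving_capacity(azs, min_instances):
    """Instances left after losing one AZ, assuming an even spread."""
    per_az = min_instances // azs
    return min_instances - per_az

# 2 AZs, min 2: losing one AZ leaves a single instance (no redundancy)
assert surviving_capacity(2, 2) == 1
# 3 AZs, min 3: losing one AZ still leaves 2 instances across 2 AZs
assert surviving_capacity(3, 3) == 2
```

The ASG will also launch replacements in the surviving AZs, but capacity during the failure window is what the question is testing.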


Question 63

A company's S3 objects are replicated to another bucket. They need to ensure the replica objects retain the same access controls as the source. What feature ensures this?

A) S3 Versioning B) S3 Replication with Replica Ownership settings C) S3 Object Lock D) S3 Lifecycle policies

Answer

Answer: B

Explanation: S3 Replication allows configuring replica ownership. By default, replicas are owned by the same AWS account that owns the source object; the owner override option transfers ownership of replicas to the destination bucket owner. Configuring the ownership settings ensures replica objects have appropriate access controls in the destination account. Versioning (A) is required for replication but doesn't control ownership. Object Lock (C) is for immutability. Lifecycle policies (D) manage object transitions.



Question 64

A company needs to analyze logs from multiple AWS accounts centrally. They want a cost-effective solution that doesn't require provisioning servers. Which architecture is BEST?

A) Send all logs to CloudWatch Logs in a central account B) Send all logs to an S3 bucket in a central account, use Athena to query C) Deploy an ELK stack on EC2 D) Use Amazon Redshift to store and query logs

Answer

Answer: B

Explanation: Centralizing logs in an S3 bucket and using Amazon Athena for serverless queries is cost-effective and doesn't require provisioning servers. You pay only for queries and S3 storage. CloudWatch Logs (A) can be expensive for large volumes. ELK on EC2 (C) requires server management. Redshift (D) requires loading data and cluster management.


Question 65

A company has EC2 instances with consistent usage for 2 years. What is the MOST cost-effective purchasing option?

A) On-Demand Instances B) Spot Instances C) Reserved Instances with 3-year term All Upfront D) Savings Plans 1-year No Upfront

Answer

Answer: C

Explanation: For workloads with consistent 2+ years of usage, Reserved Instances with a 3-year term All Upfront or Compute Savings Plans provide the highest discount (up to 72% compared to On-Demand). On-Demand (A) has no discount. Spot Instances (B) can be interrupted. 1-year No Upfront Savings Plans (D) offer less discount than 3-year All Upfront. For steady-state workloads, committing to 3 years with upfront payment maximizes savings.
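The cost comparison is straightforward to model. The rates below are hypothetical, chosen only to show the shape of the calculation — they are not real AWS pricing, and the 60% discount is an assumed illustrative figure.

```python
def total_cost(hourly_rate, years, upfront=0.0):
    """Total cost over the term: upfront payment plus hourly charges."""
    return upfront + hourly_rate * 24 * 365 * years

# Hypothetical rates for one instance over 3 years (not real AWS pricing):
on_demand = total_cost(0.10, 3)                                # $0.10/hr, no commitment
ri_all_upfront = total_cost(0.0, 3, upfront=on_demand * 0.40)  # assumed 60% discount

savings = 1 - ri_all_upfront / on_demand
print(f"{savings:.0%}")  # 60% under the assumed discount
```

The general pattern the exam tests: longer terms and larger upfront payments yield deeper discounts, so steady-state workloads favor 3-year All Upfront commitments.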


Practice exam complete. Review missed questions and focus on those AWS service domains for further study.