## Authors

- Youngju Kim (@fjvbn20031)
## Exam Overview

The AWS Certified SysOps Administrator – Associate exam (SOA-C02) validates your ability to deploy, manage, and operate workloads on AWS.

| Item | Details |
|---|---|
| Exam Code | SOA-C02 |
| Duration | 180 minutes |
| Questions | 65 (multiple choice + hands-on labs) |
| Passing Score | 720 / 1000 |
| Languages | English, Japanese, Korean, Simplified Chinese |

## Domain Breakdown

| Domain | Topic | Weight |
|---|---|---|
| 1 | Monitoring, Logging and Remediation | 20% |
| 2 | Reliability and Business Continuity | 16% |
| 3 | Deployment, Provisioning and Automation | 18% |
| 4 | Security and Compliance | 16% |
| 5 | Networking and Content Delivery | 18% |
| 6 | Cost and Performance Optimization | 12% |

## Key SysOps Concepts

### CloudWatch Essentials

- Standard metrics: EC2 CPU, network, disk I/O (5-minute granularity)
- Detailed monitoring: 1-minute granularity (additional cost)
- Custom metrics: Memory usage, disk utilization published via agent or API
- Logs Insights: Interactive query engine for CloudWatch Logs
- Contributor Insights: Identifies top contributors causing high request volumes

### High Availability Patterns

- Multi-AZ: Multiple Availability Zones in one Region — automatic failover for RDS
- Multi-Region: Active-passive or active-active across Regions — DR and latency
- RTO vs RPO: Recovery Time Objective (downtime) vs Recovery Point Objective (data loss)
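The RTO/RPO distinction is easy to check mechanically: RTO bounds downtime, RPO bounds data loss, and both are measured in time. A short Python sketch with illustrative numbers (the specific thresholds below are examples, not exam values):

```python
def meets_dr_objectives(restore_minutes, data_loss_minutes,
                        rto_minutes, rpo_minutes):
    """RTO bounds downtime; RPO bounds data loss (both measured in time)."""
    return restore_minutes <= rto_minutes and data_loss_minutes <= rpo_minutes

# A Multi-AZ failover (about 2 minutes, no committed-transaction loss)
# easily meets a 1-hour RTO / 5-minute RPO.
assert meets_dr_objectives(2, 0, rto_minutes=60, rpo_minutes=5)

# Restoring last night's snapshot meets the RTO but loses up to 12 hours
# of data, so it fails the 5-minute RPO.
assert not meets_dr_objectives(45, 12 * 60, rto_minutes=60, rpo_minutes=5)
```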

## Practice Questions

### Domain 1: Monitoring, Logging and Remediation

Q1. An operations team wants to monitor memory utilization of EC2 instances in CloudWatch. Memory is not included in the default EC2 metrics. What is the correct approach?
A) Enable detailed monitoring on the EC2 instances B) Install the CloudWatch agent and publish custom metrics C) Use AWS Systems Manager to automatically track memory D) Enable CloudTrail to collect memory metrics
Answer: B
Explanation: EC2 memory and disk utilization are not included in the default CloudWatch metrics because they require OS-level access. Installing the CloudWatch agent on each EC2 instance allows you to collect and publish these as custom metrics to CloudWatch namespaces.
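As a sketch of what the agent setup looks like, here is a minimal CloudWatch agent configuration that collects memory and disk utilization on Linux. The field names follow the agent's `metrics_collected` schema, and `CWAgent` is the agent's default namespace; the exact measurements you collect will vary:

```python
import json

# Minimal CloudWatch agent config (Linux) publishing memory and disk
# utilization as custom metrics under the CWAgent namespace.
agent_config = {
    "metrics": {
        "namespace": "CWAgent",
        "metrics_collected": {
            "mem": {"measurement": ["mem_used_percent"]},
            "disk": {"measurement": ["used_percent"], "resources": ["/"]},
        },
    }
}

print(json.dumps(agent_config, indent=2))
```

In practice the agent reads this file locally, or (at fleet scale) fetches it from an SSM Parameter Store parameter so all instances share one configuration.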
Q2. A CloudWatch alarm is showing INSUFFICIENT_DATA state. What does this state indicate?
A) The alarm threshold has been breached B) There is not enough data to evaluate the alarm C) The metric data is within the normal range D) The CloudWatch service has experienced a failure
Answer: B
Explanation: INSUFFICIENT_DATA means the alarm was just created, the metric data is not available, or not enough data has been collected to determine the alarm state. The three alarm states are ALARM (threshold exceeded), OK (within range), and INSUFFICIENT_DATA.
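The three-state logic can be illustrated with a toy evaluator. This is a deliberate simplification: the real service also handles missing-data treatment, M-out-of-N datapoints, percentiles, and more:

```python
def alarm_state(datapoints, threshold, evaluation_periods):
    """Toy model of CloudWatch alarm evaluation for a GreaterThanThreshold
    comparison. Illustrative only."""
    if len(datapoints) < evaluation_periods:
        return "INSUFFICIENT_DATA"   # too few datapoints to evaluate
    recent = datapoints[-evaluation_periods:]
    if all(value > threshold for value in recent):
        return "ALARM"               # breached for every evaluation period
    return "OK"                      # within the normal range

assert alarm_state([], 80, 3) == "INSUFFICIENT_DATA"   # newly created alarm
assert alarm_state([50, 60, 70], 80, 3) == "OK"
assert alarm_state([85, 90, 95], 80, 3) == "ALARM"
```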
Q3. A company wants to automatically remediate any S3 bucket that has public access enabled. What is the most effective approach using AWS native services?
A) Connect CloudTrail events to a Lambda function B) Use an AWS Config rule with SSM Automation for auto-remediation C) Add a bucket policy condition to block public access D) Configure GuardDuty alerts to SNS and process with Lambda
Answer: B
Explanation: AWS Config supports automatic remediation using SSM Automation documents. You can associate the s3-bucket-public-read-prohibited managed rule with the AWS-DisableS3BucketPublicReadWrite automation document to automatically block public access whenever it is enabled.
Q4. What is the difference between Management Events and Data Events in AWS CloudTrail?
A) Management events are free; data events are charged B) Management events record control plane operations on AWS resources; data events record data plane operations within resources C) Management events only capture IAM activity; data events cover all services D) Management events are regional; data events are global
Answer: B
Explanation: Management events (control plane) cover operations that modify AWS resource configuration, such as creating an EC2 instance or modifying an IAM policy. Data events (data plane) cover high-volume operations within resources, such as S3 GetObject/PutObject or Lambda invocations. Data events must be explicitly enabled because of their high volume.
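To make the management/data split concrete, here is the shape of a CloudTrail advanced event selector that enables S3 object-level (data plane) logging for one bucket. The field names follow CloudTrail's advanced event selector schema; the bucket ARN is a placeholder:

```python
import json

# Advanced event selector enabling S3 data events for a single bucket.
# "example-bucket" is a placeholder name.
s3_data_event_selector = [{
    "Name": "Log S3 object-level events for one bucket",
    "FieldSelectors": [
        {"Field": "eventCategory", "Equals": ["Data"]},
        {"Field": "resources.type", "Equals": ["AWS::S3::Object"]},
        {"Field": "resources.ARN",
         "StartsWith": ["arn:aws:s3:::example-bucket/"]},
    ],
}]

print(json.dumps(s3_data_event_selector, indent=2))
```

Management events are logged by default; a selector like this is what you add (via the console or `put-event-selectors`) to opt in to the high-volume data events.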
Q5. Which scenario best describes a use case for CloudWatch Contributor Insights?
A) Setting CPU utilization threshold alarms on EC2 instances B) Identifying the top IP addresses generating the most errors C) Aggregating logs from multiple regions into a single dashboard D) Automatically archiving CloudTrail logs to S3
Answer: B
Explanation: CloudWatch Contributor Insights analyzes log data and generates time series data showing the top contributors affecting system performance. A classic use case is identifying the top IP addresses causing 404 errors or generating the highest request volume from ALB access logs.
Q6. An EC2 instance's CPU utilization spikes every day at the same time. Which tool best helps visualize and investigate this pattern?
A) AWS Trusted Advisor B) CloudWatch dashboards with Metric Math C) AWS Cost Explorer D) AWS Personal Health Dashboard
Answer: B
Explanation: CloudWatch dashboards allow you to visualize multiple metrics over time. Metric Math enables you to perform calculations across multiple metrics. The combination provides time-series visualization that makes recurring daily patterns easy to identify and analyze.
Q7. When using AWS Systems Manager Patch Manager to patch EC2 instances, what must be defined first?
A) The AMI ID and latest snapshots B) A Patch Baseline and a Maintenance Window C) VPC security groups and subnets D) CloudWatch log groups and alarms
Answer: B
Explanation: Patch Manager requires a Patch Baseline (defines which patches are approved for installation) and a Maintenance Window (defines when patching occurs). Patch Groups logically organize instances so different baselines can be applied to different sets of instances.
Q8. In a CloudWatch Logs Insights query, what does "filter @message like /ERROR/" do?
A) Deletes all log events with ERROR severity B) Filters log events where the message contains the string "ERROR" C) Creates a CloudWatch alarm for ERROR-type events D) Exports ERROR events to S3 automatically
Answer: B
Explanation: The Logs Insights filter command selects log events that match the specified condition. The like /ERROR/ syntax uses a regex pattern to match events where the @message field contains the string "ERROR", returning only those matching records.
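The selection performed by `filter @message like /ERROR/` can be reproduced locally with a regex over sample events, which makes the semantics clear (the log lines below are invented):

```python
import re

# Sample log events; @message is the raw log line, as in Logs Insights.
events = [
    {"@message": "2024-01-01 INFO request served"},
    {"@message": "2024-01-01 ERROR upstream timeout"},
    {"@message": "2024-01-01 WARN retrying"},
    {"@message": "2024-01-01 ERROR database unreachable"},
]

# Equivalent of: filter @message like /ERROR/
errors = [e for e in events if re.search(r"ERROR", e["@message"])]

print(len(errors))  # 2 matching events
```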
Q9. What is the difference between the AWS Personal Health Dashboard and the AWS Service Health Dashboard?
A) Personal Health Dashboard is paid; Service Health Dashboard is free B) Personal Health Dashboard shows events affecting your specific AWS resources; Service Health Dashboard shows the overall status of all AWS services globally C) Personal Health Dashboard shows global services only; Service Health Dashboard shows regional services D) Both dashboards provide identical information with different UIs
Answer: B
Explanation: The AWS Personal Health Dashboard provides personalized information about AWS events that specifically affect your account's resources. The AWS Service Health Dashboard (status.aws.amazon.com) is a public page showing the general status of AWS services across all Regions.
Q10. What does CloudTrail Insights detect?
A) Abnormal public access to S3 buckets B) Unusual API call volume in an AWS account C) Abnormal CPU utilization on EC2 instances D) IAM user password changes
Answer: B
Explanation: CloudTrail Insights continuously monitors CloudTrail management events and detects unusual patterns in API call volume or error rates. It can identify anomalies such as a sudden spike in the number of IAM CreateUser API calls, which may indicate a security breach or operational issue.

### Domain 2: Reliability and Business Continuity

Q11. In an RDS Multi-AZ deployment, what happens when the primary instance fails?
A) The application must be manually reconfigured to point to the standby B) AWS automatically updates the DNS record to point to the standby instance C) A read replica is automatically promoted to primary D) A new RDS instance is created automatically in the same Availability Zone
Answer: B
Explanation: During a Multi-AZ failover, AWS automatically flips the DNS CNAME record of the DB endpoint to the standby instance. This typically completes within 60–120 seconds. Because the application uses the same endpoint, no code change is needed.
Q12. Which statement about Amazon EBS snapshots is correct?
A) Snapshots are stored incrementally, but each snapshot can be used to restore the full volume independently B) Snapshots are always full copies, consuming large amounts of storage C) Snapshots can only be used within the same Region D) Deleting a snapshot also deletes data from previous snapshots
Answer: A
Explanation: EBS snapshots use incremental storage — the first snapshot captures the entire volume, and subsequent snapshots capture only changed blocks. However, AWS manages the block references internally, so each snapshot is self-sufficient for a full volume restore. Snapshots can also be copied to other Regions.
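The "incremental storage, independent restore" property can be demonstrated with a small block-level simulation (a sketch, not how EBS is implemented internally):

```python
def stored_blocks(snapshots):
    """Blocks physically stored per snapshot: the first snapshot stores
    every block, later snapshots store only blocks that changed."""
    stored, previous = [], {}
    for snap in snapshots:
        delta = {b: d for b, d in snap.items() if previous.get(b) != d}
        stored.append(delta)
        previous = {**previous, **snap}
    return stored

def restore(stored_deltas, index):
    """Restore the full volume from snapshot `index` by layering deltas;
    unchanged blocks are referenced from earlier snapshots."""
    volume = {}
    for delta in stored_deltas[: index + 1]:
        volume.update(delta)
    return volume

vol_t1 = {0: "a", 1: "b", 2: "c"}   # initial volume (3 blocks)
vol_t2 = {0: "a", 1: "B", 2: "c"}   # only block 1 changed

deltas = stored_blocks([vol_t1, vol_t2])
assert len(deltas[0]) == 3          # first snapshot: full copy
assert len(deltas[1]) == 1          # second snapshot: 1 changed block
assert restore(deltas, 1) == vol_t2 # yet it restores the full volume
```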
Q13. What is the primary use case for Amazon Data Lifecycle Manager (DLM)?
A) Managing S3 object lifecycle policies B) Automating the creation, retention, and deletion of EBS snapshots and AMIs C) Scheduling RDS automated backup windows D) Managing CloudWatch Logs retention periods
Answer: B
Explanation: Amazon DLM automates the lifecycle of EBS-backed resources. You create lifecycle policies that define schedules (e.g., daily at 5 AM) for creating snapshots and AMIs, and retention rules (e.g., keep last 7 snapshots) for automatic cleanup.
Q14. What is required when configuring Route 53 Failover routing?
A) Multiple regions with weight assignments B) Primary and secondary records with health checks attached C) Geolocation-based routing rules D) CloudFront distributions as origin targets
Answer: B
Explanation: Failover routing requires a Primary record with an associated health check and a Secondary record. When the primary's health check fails, Route 53 automatically routes traffic to the secondary record. The health check can use HTTP, HTTPS, or TCP.
Q15. What is the maximum retention period for RDS Point-in-Time Recovery (PITR)?
A) 7 days B) 14 days C) 35 days D) 90 days
Answer: C
Explanation: With automated backups enabled, RDS supports PITR for up to 35 days. Transaction logs are uploaded to S3 every 5 minutes. The default retention period is 7 days and can be extended to a maximum of 35 days.
Q16. A global web application must remain available even if an entire AWS Region fails. What architecture best achieves this?
A) Multi-AZ RDS with Auto Scaling group B) Route 53 failover with standby infrastructure in another Region C) CloudFront distribution with Origin Group D) A combination of Route 53 failover and CloudFront Origin Group across multiple Regions
Answer: D
Explanation: Surviving a full Region outage requires a multi-Region architecture. Route 53 failover handles DNS-level switching, while CloudFront Origin Group automatically fails over to a backup origin at the CDN layer. Combining both mechanisms provides the most resilient architecture.
Q17. You want to share an EBS snapshot with another AWS account. What is the correct method?
A) Copy the snapshot to S3 and use a bucket policy to share it B) Add the target AWS account ID to the snapshot permissions C) Delegate cross-account IAM role access to the snapshot D) Snapshots can only be shared through AWS Organizations
Answer: B
Explanation: EBS snapshots can be shared with specific AWS accounts by adding their account IDs to the snapshot's permissions (Modify Permissions). Snapshots can also be made public. When sharing encrypted snapshots, the associated KMS key must also be shared with the target account.
Q18. What is a "Calculated Health Check" in Amazon Route 53?
A) A check that calculates response time and compares it against a threshold B) A health check that combines the results of multiple child health checks C) A feature that automatically selects the endpoint with the lowest RTT D) A health check driven by a CloudWatch alarm state
Answer: B
Explanation: A Calculated Health Check aggregates the results of multiple child health checks using AND/OR logic. This lets you represent the health of a complex system (e.g., combining web, database, and cache health checks) as a single health check for failover routing.
Q19. Which definition correctly describes RTO (Recovery Time Objective)?
A) The maximum amount of data loss tolerated after a disaster B) The maximum acceptable time to restore normal operations after a disaster C) The actual time taken to restore backup data D) The maximum cost investment in disaster recovery systems
Answer: B
Explanation: RTO is the maximum acceptable duration from the moment of a disaster to when normal business operations resume. RPO is the maximum acceptable amount of data loss measured in time. Lower RTO/RPO targets generally require higher investment.
Q20. A company wants to centrally manage backups across EC2, EBS, RDS, and DynamoDB. What is the primary advantage of using AWS Backup?
A) Reduces backup costs for each service by 50% B) Centrally manages backups for multiple AWS services using policy-based plans C) Automatically determines optimal backup frequency D) Automatically encrypts backups and transfers them to another account
Answer: B
Explanation: AWS Backup provides a central console for creating, scheduling, and managing backups across EC2, EBS, RDS, Aurora, DynamoDB, EFS, and Storage Gateway. Backup plans define backup frequency, retention, and lifecycle rules, enabling consistent governance across all supported services.

### Domain 3: Deployment, Provisioning and Automation

Q21. Which CloudFormation resource change requires replacement (deletion and recreation of the resource)?
A) Changing an EC2 instance type B) Changing an S3 bucket name C) Changing EC2 tags D) Changing a CloudWatch alarm threshold
Answer: B
Explanation: Some CloudFormation resource properties are immutable. The S3 BucketName property requires replacement — the existing bucket is deleted and a new one is created. EC2 instance type changes cause an interruption but not replacement. Tag changes are non-interrupting updates.
Q22. What is the purpose of a CloudFormation Stack Policy?
A) Restricts which IAM users can deploy CloudFormation stacks B) Prevents updates to specific stack resources during stack updates C) Limits the maximum number of resources in a stack D) Protects a production stack from deletion
Answer: B
Explanation: A Stack Policy is a JSON document that defines the update actions allowed on specific resources during a stack update. For example, you can use a stack policy to prevent accidental replacement or deletion of a production database or other critical resources.
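A representative stack policy might look like the sketch below: deny replacement or deletion of one critical resource during updates while allowing everything else. `ProductionDatabase` is a hypothetical logical resource ID; the `Update:*` action names are the ones stack policies use:

```python
import json

# Stack policy sketch: protect one logical resource during stack updates.
# "ProductionDatabase" is a placeholder logical resource ID.
stack_policy = {
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": ["Update:Replace", "Update:Delete"],
            "Resource": "LogicalResourceId/ProductionDatabase",
        },
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "Update:*",
            "Resource": "*",
        },
    ]
}

print(json.dumps(stack_policy, indent=2))
```

Note that a stack policy governs updates only; preventing stack deletion altogether is a separate feature (termination protection).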
Q23. What is the primary security benefit of AWS Systems Manager Session Manager?
A) Allows SSH access without an IAM role on the instance B) Provides secure shell access to EC2 instances without opening inbound ports or using a bastion host C) Enables both RDP and SSH over a single port D) Automatically upgrades the instance OS remotely
Answer: B
Explanation: Session Manager eliminates the need to open inbound SSH/RDP ports (22/3389) or manage bastion hosts. All session activity is logged to CloudTrail and optionally streamed to CloudWatch Logs or S3. The instance needs the SSM Agent and an IAM instance profile with the AmazonSSMManagedInstanceCore policy.
Q24. What is CloudFormation Drift Detection?
A) Measures the time it takes to deploy a CloudFormation stack B) Detects whether stack resources have been modified outside of CloudFormation C) Automatically analyzes dependencies between stacks D) Automatically updates templates to the latest resource versions
Answer: B
Explanation: Drift detection compares the actual configuration of stack resources against the expected configuration defined in the CloudFormation template. It identifies resources that have been manually modified outside of CloudFormation, helping maintain infrastructure consistency.
Q25. What is AWS OpsWorks primarily used for?
A) Cost optimization of EC2 instances B) Server configuration automation using Chef or Puppet C) Serverless application deployment automation D) Container orchestration
Answer: B
Explanation: AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. OpsWorks Stacks uses Chef recipes to automate server configuration. OpsWorks for Chef Automate and OpsWorks for Puppet Enterprise provide fully managed configuration management servers.
Q26. What happens when you enable Scale-In Protection on an EC2 instance within an Auto Scaling group?
A) The instance will not be terminated during scale-in events B) The instance type is automatically upgraded C) The protected instance is terminated first during scale-in D) Health checks are disabled for the instance
Answer: A
Explanation: Instance scale-in protection prevents Auto Scaling from terminating a specific instance during a scale-in activity. This is useful for protecting instances running critical batch jobs or stateful workloads. The protection does not prevent termination if the instance fails its health check.
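The effect of scale-in protection on candidate selection can be sketched in a few lines. This is a simplification: real Auto Scaling also applies its termination policy (AZ rebalancing, oldest launch configuration, and so on) when picking among unprotected instances:

```python
def select_for_termination(instances, protected, desired_decrease):
    """Toy scale-in selection: protected instances are never chosen."""
    candidates = [i for i in instances if i not in protected]
    return candidates[:desired_decrease]

fleet = ["i-aaa", "i-bbb", "i-ccc", "i-ddd"]
victims = select_for_termination(
    fleet, protected={"i-aaa", "i-bbb"}, desired_decrease=2
)
assert victims == ["i-ccc", "i-ddd"]   # protected instances survive scale-in
```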
Q27. What is the primary use case for AWS Service Catalog?
A) Comparing and optimizing costs across AWS services B) Enabling end users to self-provision IT-approved products as a portfolio C) Automating third-party software purchases from AWS Marketplace D) Monitoring resource usage across accounts from a single dashboard
Answer: B
Explanation: AWS Service Catalog lets organizations create and manage catalogs of approved IT products (backed by CloudFormation templates) that users can self-provision. This provides governance and compliance while enabling developer autonomy — users can only deploy pre-approved, standardized configurations.
Q28. In AWS Systems Manager Automation, what is a "Runbook"?
A) A PDF documenting incident response procedures B) An SSM document defining a series of automated steps C) A boot script for EC2 instance OS startup D) A parameter file for CloudFormation stacks
Answer: B
Explanation: An SSM Automation Runbook (also called an Automation document) is a YAML or JSON file that defines a sequence of steps to automate operational tasks such as creating AMIs, patching instances, and remediating misconfigurations. AWS provides managed runbooks and you can create custom ones.
Q29. What is the purpose of a CloudFormation Change Set?
A) Manages stack changes using GitOps workflows B) Previews which resources will be added, modified, or replaced before executing a stack update C) Deploys stack changes simultaneously to multiple Regions D) Automatically tests changes and rolls back if they fail
Answer: B
Explanation: A Change Set lets you preview the impact of a proposed stack update before executing it. It shows which resources will be added, modified, or deleted, and whether any resources will be replaced (which could cause data loss). This is a critical safety check before updating production stacks.
Q30. You need to patch a large fleet of EC2 instances with minimal service disruption. What is the best approach?
A) Terminate all instances simultaneously and relaunch with a new AMI B) Use Systems Manager Patch Manager with a Maintenance Window and rate control C) Redeploy all instances using CloudFormation D) Use Lambda to SSH into each instance and apply patches
Answer: B
Explanation: Systems Manager Patch Manager automates OS patching across large instance fleets. Maintenance Windows define when patching occurs (e.g., off-peak hours). Rate control (concurrency and error threshold) limits how many instances are patched simultaneously, minimizing service disruption during the process.
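Rate control's concurrency limit amounts to splitting the fleet into waves, which is easy to sketch. The real feature also halts the operation once the error threshold is exceeded, which this toy version omits:

```python
def patch_batches(instance_ids, max_concurrency):
    """Split a fleet into waves no larger than max_concurrency, mirroring
    the concurrency half of Patch Manager / Run Command rate control."""
    return [instance_ids[i:i + max_concurrency]
            for i in range(0, len(instance_ids), max_concurrency)]

fleet = [f"i-{n:04x}" for n in range(10)]
waves = patch_batches(fleet, max_concurrency=4)

# Only 4 instances are out of service at any moment.
assert [len(w) for w in waves] == [4, 4, 2]
```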

### Domain 4: Security and Compliance

Q31. What is the role of an IAM Permissions Boundary?
A) Restricts IAM users to specific AWS Regions B) Defines the maximum permissions that an IAM entity can have C) Restricts services accessible without MFA D) Limits the root account to the same permissions as regular users
Answer: B
Explanation: A Permissions Boundary is a managed policy attached to an IAM user or role that defines the maximum permissions the entity can have. The effective permissions are the intersection of the identity-based policy and the permissions boundary. This prevents privilege escalation when delegating IAM management.
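The intersection behavior can be modeled with plain sets. This is a simplified view: real policy evaluation also considers explicit denies, SCPs, and session policies:

```python
# Effective permissions = identity-based policy ∩ permissions boundary.
identity_policy = {"s3:GetObject", "s3:PutObject", "iam:CreateUser"}
boundary        = {"s3:GetObject", "s3:PutObject", "s3:ListBucket"}

effective = identity_policy & boundary
assert effective == {"s3:GetObject", "s3:PutObject"}

# iam:CreateUser is denied even though the identity policy grants it,
# because it falls outside the boundary.
assert "iam:CreateUser" not in effective
# s3:ListBucket is also denied: the boundary alone never grants anything.
assert "s3:ListBucket" not in effective
```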
Q32. What is the primary function of Amazon Macie?
A) Scans EC2 instances for vulnerabilities B) Automatically discovers and protects sensitive data (PII, etc.) in S3 C) Detects abnormal network traffic within a VPC D) Automatically corrects overly permissive IAM policies
Answer: B
Explanation: Amazon Macie uses machine learning to automatically discover sensitive data — such as PII, financial records, and credentials — stored in S3 buckets. It also evaluates S3 bucket security posture (public access, encryption status, access control lists) and generates findings for review.
Q33. In AWS Organizations, which accounts are NOT affected by Service Control Policies (SCPs)?
A) Only the root account B) The management account (formerly master account) C) Specific IAM users and roles only D) Marketplace purchases only
Answer: B
Explanation: SCPs apply to all accounts in the organization hierarchy (root, OUs, individual accounts) except the management account (formerly master account). SCPs can use allowlist strategies (specify what is allowed) or denylist strategies (deny specific actions while allowing everything else).
Q34. Which data sources does Amazon GuardDuty analyze to detect threats?
A) Only application logs from EC2 instances B) VPC Flow Logs, DNS query logs, and CloudTrail event logs C) All objects in all S3 buckets D) IAM policies and user behavioral patterns
Answer: B
Explanation: GuardDuty analyzes VPC Flow Logs, Route 53 DNS resolver query logs, and CloudTrail management and S3 data events. It uses machine learning, anomaly detection, and threat intelligence feeds to identify threats like cryptomining, compromised instances, and unauthorized access.
Q35. When automatic key rotation is enabled for a KMS key, what happens to data previously encrypted with that key?
A) Existing data is immediately re-encrypted with the new key material B) A new KMS key is created annually and all existing data is migrated C) The key is deactivated and a new key is created D) New key material is generated, but existing data is NOT automatically re-encrypted
Answer: D
Explanation: Automatic key rotation generates new cryptographic key material annually but does NOT re-encrypt existing data. AWS retains all previous key material versions so existing ciphertext can still be decrypted. New data is encrypted with the current key material. If you need to re-encrypt existing data, you must do so manually.
Q36. What type of issue does IAM Access Analyzer identify?
A) IAM user passwords that do not meet complexity requirements B) Resources that are unintentionally shared with external entities C) IAM roles with excessive permissions D) IAM users without MFA enabled
Answer: B
Explanation: IAM Access Analyzer identifies S3 buckets, IAM roles, KMS keys, SQS queues, Lambda functions, and Secrets Manager secrets whose policies grant access to external principals (other AWS accounts, AWS services, or the internet). It helps identify unintended public or cross-account exposure.
Q37. What is the primary function of AWS Security Hub?
A) Provides external auditor access for compliance reviews B) Aggregates security findings from GuardDuty, Macie, Inspector, and other services in a central place C) Automatically applies security patches to EC2 instances D) Blocks malicious network traffic in real time
Answer: B
Explanation: AWS Security Hub provides a comprehensive view of your security posture by aggregating, organizing, and prioritizing findings from multiple AWS security services and third-party tools. It also runs automated security checks against CIS AWS Foundations Benchmark and AWS Foundational Security Best Practices standards.
Q38. What does Amazon Inspector scan?
A) S3 buckets for public access misconfigurations B) EC2 instances and container images (ECR) for software vulnerabilities C) IAM policies for excessive permissions D) CloudTrail logs for anomalous activity
Answer: B
Explanation: Amazon Inspector automatically discovers and scans EC2 instances and Amazon ECR container images for software vulnerabilities (CVEs) and unintended network exposure. Inspector v2 scans EC2 instances through the SSM Agent and continuously reassesses them as new CVEs are published.
Q39. What is the key difference between AWS CloudHSM and AWS KMS?
A) CloudHSM is software-based encryption; KMS is hardware-based B) CloudHSM provides dedicated hardware security modules; KMS is a managed multi-tenant service C) CloudHSM is free; KMS is paid D) CloudHSM is regional; KMS is global
Answer: B
Explanation: AWS CloudHSM gives you exclusive access to FIPS 140-2 Level 3 validated hardware security modules for full key control. KMS is a multi-tenant managed service where AWS manages the underlying HSMs. CloudHSM is required when regulations mandate single-tenant HSM or when you need full control over key material.
Q40. In AWS Organizations SCPs, what is the difference between an implicit deny and an explicit deny?
A) Both operate identically in SCPs B) Explicit deny overrides identity-based policy allows; implicit deny simply means the action is not permitted in the SCP C) Implicit deny can be overridden by another SCP allowing the action; explicit deny cannot be overridden by any other policy D) Explicit deny applies to the root account; implicit deny applies to member accounts only
Answer: C
Explanation: An implicit deny in an SCP means the action is not listed as allowed. A parent OU or root SCP that does allow the action will override this. An explicit Deny statement in an SCP always wins — no other policy (identity-based or resource-based) can override an explicit deny.
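The implicit/explicit distinction for a single SCP can be expressed as a toy evaluator (real SCP evaluation intersects every SCP along the OU path; this sketch models one level):

```python
def scp_allows(action, allowed_actions, denied_actions):
    """Toy SCP evaluation: an explicit deny always wins; otherwise the
    action must appear in an Allow statement (absence = implicit deny)."""
    if action in denied_actions:
        return False                    # explicit deny: never overridable
    return action in allowed_actions    # implicit deny if not listed

assert scp_allows("s3:GetObject", {"s3:GetObject"}, set())
assert not scp_allows("s3:GetObject", {"s3:GetObject"}, {"s3:GetObject"})
assert not scp_allows("ec2:RunInstances", {"s3:GetObject"}, set())
```

The implicit-deny case is the one another policy level can still permit; the explicit-deny case is final regardless of what any identity-based or resource-based policy says.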

### Domain 5: Networking and Content Delivery

Q41. Which is a key limitation of VPC Peering?
A) Cross-Region VPC peering is not supported B) Transitive routing is not supported through VPC peering connections C) VPC peering is limited to a maximum of five VPCs D) Peered VPCs must have identical CIDR ranges
Answer: B
Explanation: VPC peering does not support transitive routing. If VPC A is peered with VPC B and VPC B is peered with VPC C, VPC A cannot communicate with VPC C through VPC B. For transitive connectivity between many VPCs, use AWS Transit Gateway.
Q42. What is the primary benefit of AWS Transit Gateway?
A) Connects multiple VPCs and on-premises networks through a central hub B) Accelerates internet traffic using the AWS global network C) Automates routing between subnets within a VPC D) Provides private access to AWS services without internet exposure
Answer: A
Explanation: AWS Transit Gateway is a network transit hub that connects VPCs and on-premises networks through a central hub, eliminating the complex mesh topology of VPC peering. It supports transitive routing, route tables per attachment, and can scale to thousands of VPCs. It also integrates with Direct Connect and VPN.
Q43. When should you use CloudFront Cache Invalidation?
A) When changing the origin server for a CloudFront distribution B) When you have uploaded a new version of a file to S3 but CloudFront still serves the old cached version C) When changing the CloudFront price class D) When adding a new custom domain to a CloudFront distribution
Answer: B
Explanation: Cache invalidation removes objects from CloudFront edge caches before their TTL expires. You specify paths (e.g., /images/*) to invalidate. Note that versioned file names (e.g., app.v2.js) are more efficient and cost-effective than frequent invalidations because they avoid per-invalidation charges.
Q44. What is the primary use case for Route 53 Weighted Routing?
A) Routing users to the geographically closest endpoint B) Distributing traffic across multiple endpoints for canary deployments or A/B testing C) Automatically routing to an alternate endpoint when the primary fails D) Routing to the endpoint with the lowest latency
Answer: B
Explanation: Weighted routing assigns a numerical weight (0–255) to each record. Traffic is distributed proportionally to the assigned weights. This is ideal for canary deployments (e.g., sending 10% of traffic to a new version) and A/B testing. A weight of 0 stops traffic from being routed to that record.
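The proportional split is simple arithmetic: each record receives weight divided by the sum of all weights. A quick check of the canary example:

```python
def traffic_share(weights):
    """Route 53 weighted routing: each record gets weight / total."""
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# Canary deployment: send roughly 10% of traffic to the new version.
shares = traffic_share({"app-v1": 90, "app-v2": 10})
assert shares["app-v2"] == 0.10
assert shares["app-v1"] == 0.90

# Weight 0 drains a record entirely.
assert traffic_share({"app-v1": 100, "app-v2": 0})["app-v2"] == 0.0
```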
Q45. What is the functional difference between a NAT Gateway and an Internet Gateway?
A) NAT Gateways are in public subnets; Internet Gateways are in private subnets B) An Internet Gateway enables bidirectional communication between the VPC and the internet; a NAT Gateway allows outbound-only internet access from private subnets C) NAT Gateways cover all Availability Zones; Internet Gateways cover only a single AZ D) Internet Gateways work only with EC2; NAT Gateways work with all AWS services
Answer: B
Explanation: An Internet Gateway enables two-way communication between a VPC and the internet (resources must have a public IP or EIP). A NAT Gateway allows resources in private subnets to initiate outbound connections to the internet while blocking inbound connections initiated from the internet.
Q46. What are the two types of VPC Endpoints?
A) Public endpoints and private endpoints B) Gateway endpoints (S3 and DynamoDB) and Interface endpoints (PrivateLink-based) C) HTTP endpoints and HTTPS endpoints D) Regional endpoints and global endpoints
Answer: B
Explanation: Gateway endpoints use routing table entries to route traffic to S3 and DynamoDB without internet exposure and have no additional charge. Interface endpoints (AWS PrivateLink) create ENIs in your subnet to provide private connectivity to supported AWS services and partner services with per-hour and data processing charges.
Q47. When should you use CloudFront Signed Cookies instead of Signed URLs?
A) Signed URLs support HTTPS only; Signed Cookies support both HTTP and HTTPS B) Signed URLs are for single-file access control; Signed Cookies are for controlling access to multiple files or an entire path C) Signed URLs are generated by the origin server; Signed Cookies are auto-generated by CloudFront D) Signed URLs are permanent; Signed Cookies are session-based
Answer: B
Explanation: Signed URLs are best for individual file access with expiration (e.g., one-time download link). Signed Cookies are better when you need to grant access to multiple files matching a path pattern — for example, allowing premium subscribers to access all content under /premium/* without changing every URL in your application.
Q48. What is the key feature of AWS Global Accelerator?
A) Provides the same content caching as CloudFront B) Improves TCP/UDP application performance by routing traffic through the AWS global network C) Operates as a regional CDN service caching dynamic content D) Accelerates on-premises traffic through VPN connections
Answer: B
Explanation: AWS Global Accelerator uses anycast IP addresses to route user traffic to the nearest AWS edge location, then forwards it via the AWS private global network to the optimal endpoint. Unlike CloudFront, it does not cache content — it's designed for non-HTTP use cases and applications requiring consistent, low-latency performance.
Q49. What is the difference between Security Groups and Network ACLs (NACLs)?
A) Security Groups operate at the instance level and are stateful; NACLs operate at the subnet level and are stateless B) Security Groups support only allow rules; NACLs support only deny rules C) NACLs operate at the instance level; Security Groups operate at the subnet level D) Security Groups support a maximum of 10 rules; NACLs support unlimited rules
Answer: A
Explanation: Security Groups are stateful (return traffic is automatically allowed) and operate at the ENI level. NACLs are stateless (both inbound and outbound rules must explicitly allow return traffic) and operate at the subnet level. Security Groups can only allow traffic; NACLs can both allow and deny traffic.
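The NACL side of this comparison, ordered numbered rules with first-match-wins and an implicit final deny, can be sketched directly. Statefulness is the other half: unlike a security group, a NACL would also need a separate rule for the return traffic (ephemeral ports), which this sketch of a single direction does not show:

```python
def nacl_evaluate(rules, port):
    """NACL evaluation sketch: rules are checked in ascending rule number;
    the first rule whose port range matches wins. If nothing matches,
    the implicit final rule (*) denies the traffic."""
    for number, action, (lo, hi) in sorted(rules):
        if lo <= port <= hi:
            return action
    return "deny"  # implicit deny (*)

rules = [
    (100, "allow", (443, 443)),      # allow HTTPS
    (200, "deny",  (0, 65535)),      # deny everything else explicitly
]

assert nacl_evaluate(rules, 443) == "allow"   # rule 100 matches first
assert nacl_evaluate(rules, 22) == "deny"     # falls through to rule 200
assert nacl_evaluate([], 443) == "deny"       # implicit deny with no rules
```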
Q50. What is the key difference between Route 53 Latency-Based Routing and Geolocation Routing?
A) Latency-based routing routes by country; geolocation routing measures actual latency B) Latency-based routing sends traffic to the Region with the lowest measured network latency; geolocation routing routes based on the geographic location of the DNS query origin C) Both routing methods work identically but with different configurations D) Latency-based routing works only at the Region level; geolocation routing works at country, continent, and subdivision levels
Answer: B
Explanation: Latency-based routing measures the actual network latency between the user and each AWS Region and routes to the Region with the lowest latency. Geolocation routing routes based on the geographic location inferred from the DNS resolver's IP address, enabling region-specific content serving.
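The difference between the two policies shows up concretely in how the record sets are defined. A minimal sketch below contrasts the two shapes as they would appear in a Route 53 `change_resource_record_sets` payload; the domain name, IPs, and set identifiers are hypothetical placeholders.

```python
# Latency-based record: Route 53 answers with the record whose Region
# has the lowest measured latency from the resolver's location.
latency_record = {
    "Name": "app.example.com",
    "Type": "A",
    "SetIdentifier": "us-east-1",
    "Region": "us-east-1",          # distinguishing field: an AWS Region
    "TTL": 60,
    "ResourceRecords": [{"Value": "203.0.113.10"}],
}

# Geolocation record: Route 53 matches the query's inferred location
# against a continent/country/subdivision, regardless of latency.
geolocation_record = {
    "Name": "app.example.com",
    "Type": "A",
    "SetIdentifier": "europe",
    "GeoLocation": {"ContinentCode": "EU"},  # distinguishing field: a location
    "TTL": 60,
    "ResourceRecords": [{"Value": "203.0.113.20"}],
}

# The distinguishing keys make the exam difference concrete:
routing_keys = {
    "latency": "Region" in latency_record,
    "geolocation": "GeoLocation" in geolocation_record,
}
```

Remember the rule of thumb: latency routing optimizes for speed, geolocation routing optimizes for *where the user is* (compliance, localized content), even when that is not the fastest path.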
Domain 6: Cost and Performance Optimization
Q51. What is the primary capability of AWS Cost Explorer?
A) Real-time cost alerts and automatic cost-reduction actions B) Visualizing historical costs and usage, forecasting future costs, and identifying spending patterns C) Automatically purchasing Reserved Instances for cost savings D) Automatically distributing costs across multiple accounts
Answer: B
Explanation: Cost Explorer visualizes up to 12 months of historical AWS cost and usage data, provides 12-month cost forecasts, and allows filtering and grouping by service, Region, account, and tags. It also provides Reserved Instance and Savings Plans purchase recommendations.
Q52. What is the key difference between Savings Plans and Reserved Instances?
A) Savings Plans commit to specific instance types; Reserved Instances commit to spending amounts B) Compute Savings Plans apply flexibly regardless of instance type, size, Region, or OS; EC2 Reserved Instances commit to specific instance families and Regions C) Savings Plans offer only 1-year commitments; Reserved Instances offer only 3-year commitments D) Savings Plans apply only to EC2; Reserved Instances apply to all AWS services
Answer: B
Explanation: Compute Savings Plans offer the most flexibility — they apply to EC2, Fargate, and Lambda regardless of instance type, size, OS, Region, or tenancy. EC2 Instance Savings Plans apply to a specific instance family in a Region. Standard Reserved Instances provide the highest discount for a specific instance type and Region.
Q53. How does Amazon S3 Intelligent-Tiering work?
A) It sends notifications to manually reclassify objects to storage classes B) It monitors access patterns and automatically moves objects to the most cost-effective storage tier C) It moves all objects to Glacier to minimize costs D) It keeps frequently accessed objects in S3 Standard and deletes the rest
Answer: B
Explanation: S3 Intelligent-Tiering monitors object access patterns and automatically moves objects between the Frequent Access, Infrequent Access (after 30 consecutive days without access), and Archive Instant Access (after 90 days) tiers. The Archive Access (90+ days) and Deep Archive Access (180+ days) tiers are opt-in. There are no retrieval fees, and tier transitions incur no charge.
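The opt-in archive tiers are enabled per bucket with a configuration like the sketch below, shaped like the input to boto3's `s3.put_bucket_intelligent_tiering_configuration`. The configuration ID is a hypothetical example.

```python
# Sketch: enabling the opt-in archive tiers for Intelligent-Tiering.
# The Frequent / Infrequent / Archive Instant Access transitions happen
# automatically and are NOT configured here.
intelligent_tiering_config = {
    "Id": "archive-old-objects",
    "Status": "Enabled",
    "Tierings": [
        {"Days": 90, "AccessTier": "ARCHIVE_ACCESS"},
        {"Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS"},
    ],
}

# S3 enforces minimum ages: 90 days for Archive, 180 for Deep Archive.
minimums = {"ARCHIVE_ACCESS": 90, "DEEP_ARCHIVE_ACCESS": 180}
valid = all(t["Days"] >= minimums[t["AccessTier"]]
            for t in intelligent_tiering_config["Tierings"])
```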
Q54. How much advance notice is provided before an EC2 Spot Instance is interrupted?
A) 30 seconds B) 2 minutes C) 5 minutes D) 15 minutes
Answer: B
Explanation: AWS provides a 2-minute interruption notice via instance metadata (at /latest/meta-data/spot/instance-action) and an Amazon EventBridge event (EC2 Spot Instance Interruption Warning). Applications can use this notice to checkpoint work, drain connections, or save state before the instance is reclaimed.
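A minimal sketch of handling the notice: on a real instance you would poll `http://169.254.169.254/latest/meta-data/spot/instance-action` (with an IMDSv2 token); here a sample payload is parsed offline to compute how long the application has left.

```python
from datetime import datetime, timezone

# Sample of the JSON body the metadata endpoint returns once an
# interruption is scheduled (timestamp is hypothetical).
sample_payload = {"action": "terminate", "time": "2024-06-01T12:02:00Z"}

def seconds_until_interruption(payload, now):
    """Return seconds remaining before the Spot instance is reclaimed."""
    when = datetime.strptime(payload["time"], "%Y-%m-%dT%H:%M:%SZ")
    when = when.replace(tzinfo=timezone.utc)
    return (when - now).total_seconds()

now = datetime(2024, 6, 1, 12, 0, 0, tzinfo=timezone.utc)
remaining = seconds_until_interruption(sample_payload, now)  # 120.0
```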
Q55. What is the purpose of RDS Performance Insights?
A) Analyzes and optimizes RDS instance costs B) Visualizes database load and identifies the SQL queries, applications, or hosts causing performance bottlenecks C) Automatically optimizes RDS backups to reduce storage costs D) Automatically rewrites and optimizes database queries
Answer: B
Explanation: RDS Performance Insights monitors database load using the DB Load metric and breaks it down by wait events, SQL statements, users, and hosts. This helps DBAs quickly identify which queries are consuming the most resources and causing performance degradation.
Q56. When is Scheduled Scaling in EC2 Auto Scaling most useful?
A) Scaling out immediately when CPU utilization exceeds 80% B) Scaling out proactively before predictable traffic increases (e.g., every morning at 9 AM) C) Replacing Spot Instances with On-Demand instances automatically D) Scaling in when costs exceed a defined threshold
Answer: B
Explanation: Scheduled Scaling allows you to set specific times to adjust minimum, maximum, or desired capacity. For predictable traffic patterns (like business hours or weekly peaks), scheduled scaling ensures capacity is ready before the load arrives, avoiding the lag inherent in reactive dynamic scaling.
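As an illustration, the parameters for a weekday 09:00 UTC scale-out might look like the sketch below, shaped like the input to boto3's `autoscaling.put_scheduled_update_group_action`. The group name and capacity values are hypothetical.

```python
# Sketch: scale out every weekday morning before traffic arrives.
scheduled_action = {
    "AutoScalingGroupName": "web-asg",
    "ScheduledActionName": "scale-out-business-hours",
    # Cron format: minute hour day-of-month month day-of-week
    "Recurrence": "0 9 * * 1-5",
    "MinSize": 4,
    "MaxSize": 12,
    "DesiredCapacity": 8,
}

# Sanity check: desired capacity must sit within [MinSize, MaxSize].
capacity_ok = (scheduled_action["MinSize"]
               <= scheduled_action["DesiredCapacity"]
               <= scheduled_action["MaxSize"])
```

A matching evening action would scale the group back down, so capacity tracks the predictable daily pattern without waiting for reactive alarms.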
Q57. What can AWS Budgets do when a budget threshold is exceeded?
A) Automatically terminates resources when costs exceed the budget B) Sends alerts and can execute automated actions such as stopping EC2/RDS instances C) Consolidates billing across all AWS accounts D) Automatically purchases RIs to reduce costs
Answer: B
Explanation: AWS Budgets can alert via email or SNS topics when actual or forecasted cost, usage, or RI/Savings Plans coverage thresholds are breached. With Budgets Actions, you can automatically apply a restrictive IAM policy, attach a service control policy (SCP), or stop EC2 and RDS instances when a budget threshold is exceeded.
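A sketch of a monthly cost budget with an 80%-of-actual-spend alert, shaped like the `Budget` and `NotificationsWithSubscribers` inputs to boto3's `budgets.create_budget`. The $1,000 limit, account ID, and SNS topic ARN are hypothetical placeholders.

```python
budget = {
    "BudgetName": "monthly-cost-budget",
    "BudgetLimit": {"Amount": "1000", "Unit": "USD"},
    "TimeUnit": "MONTHLY",
    "BudgetType": "COST",
}

notification = {
    "Notification": {
        "NotificationType": "ACTUAL",       # or FORECASTED
        "ComparisonOperator": "GREATER_THAN",
        "Threshold": 80.0,                  # percent of BudgetLimit
        "ThresholdType": "PERCENTAGE",
    },
    "Subscribers": [
        {"SubscriptionType": "SNS",
         "Address": "arn:aws:sns:us-east-1:123456789012:budget-alerts"},
    ],
}

# The alert fires once actual spend exceeds 80% of $1,000, i.e. $800.
alert_at = (float(budget["BudgetLimit"]["Amount"])
            * notification["Notification"]["Threshold"] / 100)
```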
Q58. What can an S3 Lifecycle policy accomplish?
A) Automatically notify when total S3 bucket costs exceed a threshold B) Transition objects to different storage classes or delete them after a specified number of days C) Automatically block access from specific IPs D) Automatically replicate data between S3 buckets
Answer: B
Explanation: S3 Lifecycle policies let you define rules to transition objects to cheaper storage classes (e.g., S3-IA after 30 days, Glacier after 90 days) and expire (delete) objects after a specified period. They can also clean up incomplete multipart uploads and expired object delete markers.
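The transition/expiration rules in the explanation can be sketched as a lifecycle configuration, shaped like the input to boto3's `s3.put_bucket_lifecycle_configuration`. The rule ID and `logs/` prefix are hypothetical.

```python
lifecycle_config = {
    "Rules": [
        {
            "ID": "tier-down-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},
            # Housekeeping mentioned in the explanation:
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
        }
    ]
}

rule = lifecycle_config["Rules"][0]
# Transitions must be in increasing order of Days, with expiration last.
days = [t["Days"] for t in rule["Transitions"]]
ordered = days == sorted(days) and rule["Expiration"]["Days"] > days[-1]
```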
Q59. A company wants to allocate AWS costs by department. What is the most effective approach?
A) Create separate AWS accounts for each department B) Apply Cost Allocation Tags to resources and analyze them in Cost Explorer C) Use AWS Organizations consolidated billing D) Manually calculate per-resource costs in CloudWatch dashboards
Answer: B
Explanation: Cost Allocation Tags (e.g., Department=Engineering) can be applied to AWS resources. After activating the tags in the Billing console, Cost Explorer can filter, group, and export costs by tag. This enables department-level cost reporting without requiring separate accounts.
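Once the tag is activated, per-department reporting can be automated. A sketch below shapes a query like boto3's `ce.get_cost_and_usage` request and sums a sample response; the dates, tag key, and amounts are illustrative only.

```python
# Query: monthly unblended cost, grouped by the Department tag.
query = {
    "TimePeriod": {"Start": "2024-01-01", "End": "2024-04-01"},
    "Granularity": "MONTHLY",
    "Metrics": ["UnblendedCost"],
    "GroupBy": [{"Type": "TAG", "Key": "Department"}],
}

# Cost Explorer returns tag groups as "Key$Value" strings; a sample
# (abbreviated) response group list:
sample_groups = [
    {"Keys": ["Department$Engineering"],
     "Metrics": {"UnblendedCost": {"Amount": "1200.50"}}},
    {"Keys": ["Department$Marketing"],
     "Metrics": {"UnblendedCost": {"Amount": "310.25"}}},
]
totals = {g["Keys"][0].split("$", 1)[1]:
          float(g["Metrics"]["UnblendedCost"]["Amount"])
          for g in sample_groups}
```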
Q60. How does Predictive Scaling in EC2 Auto Scaling work?
A) Scales based on current CPU utilization in real time B) Uses machine learning to analyze historical traffic patterns and proactively adjust capacity before demand arrives C) Maintains a fixed number of instances during specified time windows D) Scales based on the number of messages in an SQS queue
Answer: B
Explanation: Predictive Scaling uses machine learning to analyze the Auto Scaling group's historical load data — at least 24 hours of data is required, and up to 14 days is used for the forecast. It generates a capacity forecast and pre-scales before anticipated increases, eliminating the lag of reactive scaling and ensuring capacity is available when traffic arrives.
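A sketch of a predictive scaling policy, shaped like the input to boto3's `autoscaling.put_scaling_policy`. The group name and the 50% CPU target are hypothetical; starting in forecast-only mode lets you validate the predictions before the policy is allowed to change capacity.

```python
predictive_policy = {
    "AutoScalingGroupName": "web-asg",
    "PolicyName": "predictive-cpu",
    "PolicyType": "PredictiveScaling",
    "PredictiveScalingConfiguration": {
        "MetricSpecifications": [
            {
                "TargetValue": 50.0,  # keep forecast CPU around 50%
                "PredefinedMetricPairSpecification": {
                    "PredefinedMetricType": "ASGCPUUtilization"
                },
            }
        ],
        # Validate forecasts first; switch to "ForecastAndScale" later.
        "Mode": "ForecastOnly",
        "SchedulingBufferTime": 300,  # launch instances 5 minutes early
    },
}
```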
Mixed Scenario Questions
Q61. A production EC2 instance becomes unresponsive. CloudWatch shows CPU at 100% and a network traffic spike. What should you do first?
A) Immediately terminate the instance and launch a new one B) Create CloudWatch alarms and configure SNS notifications C) Analyze VPC Flow Logs and CloudTrail to identify the source of abnormal traffic D) Resize the instance to a larger type
Answer: C
Explanation: A sudden CPU spike combined with network traffic surge could indicate a DDoS attack, cryptomining malware, or an application runaway loop. Before taking action, analyze VPC Flow Logs for unusual connection patterns and CloudTrail for recent API calls that may indicate account compromise.
Q62. A multi-Region application's Route 53 failover is not triggering despite the primary Region (us-east-1) being down. What is the most likely cause?
A) Route 53 is hosted in us-east-1 and failed along with it B) Health checks are configured to run only from inside us-east-1 C) The TTL value is too high and DNS responses are being cached by clients D) Failover records do not have proper weights configured
Answer: C
Explanation: Route 53 is a global service and is not affected by a single Region outage. However, if the DNS TTL is high (e.g., 300 seconds), clients continue using the cached DNS response pointing to the failed Region until the TTL expires. For failover routing, always use a low TTL (e.g., 60 seconds) on the primary record.
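A sketch of the primary/secondary failover pair with a low TTL, shaped like the `ChangeBatch` input to boto3's `route53.change_resource_record_sets`. The domain, IPs, and health check ID are hypothetical placeholders.

```python
change_batch = {
    "Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "A",
                "SetIdentifier": "primary",
                "Failover": "PRIMARY",
                "TTL": 60,  # low TTL so clients re-resolve quickly
                "HealthCheckId": "hypothetical-health-check-id",
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        },
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "A",
                "SetIdentifier": "secondary",
                "Failover": "SECONDARY",
                "TTL": 60,
                "ResourceRecords": [{"Value": "198.51.100.10"}],
            },
        },
    ]
}

ttls = [c["ResourceRecordSet"]["TTL"] for c in change_batch["Changes"]]
```

Note the primary record carries the health check; with a 60-second TTL, clients re-resolve and pick up the secondary within about a minute of the health check failing.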
Q63. A CloudFormation stack deployment enters ROLLBACK_IN_PROGRESS state. What is the most effective way to identify the root cause?
A) Check the AWS Health Dashboard for service outages B) Check the Events tab in the CloudFormation console for resources in FAILED state and their status reason C) Review recent API calls in CloudTrail D) Open an AWS Support case for root cause analysis
Answer: B
Explanation: The CloudFormation console's Events tab shows a chronological log of all resource state changes. Filtering for FAILED events reveals the specific resource that failed and the error message in the Status Reason column, which typically contains the root cause (e.g., IAM permission denied, resource limit exceeded).
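The same triage can be scripted: with boto3 the events would come from `cloudformation.describe_stack_events`, and a small filter pulls out just the failed resources and their status reasons. The sample events below are hypothetical.

```python
def failed_events(events):
    """Return (LogicalResourceId, StatusReason) for failed resources."""
    return [(e["LogicalResourceId"], e.get("ResourceStatusReason", ""))
            for e in events
            if e["ResourceStatus"].endswith("FAILED")]

# Sample events, abbreviated to the fields the filter needs.
sample_events = [
    {"LogicalResourceId": "MyBucket", "ResourceStatus": "CREATE_COMPLETE"},
    {"LogicalResourceId": "MyRole", "ResourceStatus": "CREATE_FAILED",
     "ResourceStatusReason": "API: iam:CreateRole User is not authorized"},
]
causes = failed_events(sample_events)
```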
Q64. A developer accidentally deleted objects from a production S3 bucket. What is the best recovery method?
A) Contact AWS Support to recover data from backend storage B) If S3 Versioning is enabled, remove the delete marker to restore the previous version C) Retrieve the files from the S3 Cross-Region Replication destination D) Extract the object content from CloudTrail logs
Answer: B
Explanation: When S3 Versioning is enabled, deleting an object places a delete marker rather than permanently removing the data. You can restore the object by deleting the delete marker, which makes the previous version current again. This is why enabling S3 Versioning (and optionally MFA Delete) is a critical data protection best practice.
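The recovery step is to find the current delete marker and delete *it* (by key and version ID). A sketch, assuming a `list_object_versions`-style response; the bucket contents below are hypothetical.

```python
def latest_delete_marker(versions_response, key):
    """Return the VersionId of the current delete marker for key, if any."""
    for marker in versions_response.get("DeleteMarkers", []):
        if marker["Key"] == key and marker["IsLatest"]:
            return marker["VersionId"]
    return None

# Abbreviated shape of an s3.list_object_versions response.
sample_response = {
    "DeleteMarkers": [
        {"Key": "report.csv", "VersionId": "mk1", "IsLatest": True},
    ],
    "Versions": [
        {"Key": "report.csv", "VersionId": "v1", "IsLatest": False},
    ],
}
marker_id = latest_delete_marker(sample_response, "report.csv")
# Deleting (Key="report.csv", VersionId=marker_id) makes v1 current again.
```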
Q65. A company's monthly AWS bill is 30% higher than expected. What is the best first step toward cost optimization?
A) Immediately shut down all development environment instances B) Use AWS Cost Explorer and AWS Trusted Advisor to analyze the cost anomaly and identify optimization opportunities C) Replace all On-Demand instances with Spot Instances D) Request a cost review from AWS Support
Answer: B
Explanation: Systematic cost optimization starts with analysis: use Cost Explorer to understand which services, accounts, or tags drove the increase, and use Trusted Advisor to identify idle resources, underutilized instances, and unattached EBS volumes. Data-driven decisions prevent unnecessary disruption while maximizing savings.
Answer Key
| # | Ans | # | Ans | # | Ans | # | Ans | # | Ans |
|---|---|---|---|---|---|---|---|---|---|
| Q1 | B | Q14 | B | Q27 | B | Q40 | C | Q53 | B |
| Q2 | B | Q15 | C | Q28 | B | Q41 | B | Q54 | B |
| Q3 | B | Q16 | D | Q29 | B | Q42 | A | Q55 | B |
| Q4 | B | Q17 | B | Q30 | B | Q43 | B | Q56 | B |
| Q5 | B | Q18 | B | Q31 | B | Q44 | B | Q57 | B |
| Q6 | B | Q19 | B | Q32 | B | Q45 | B | Q58 | B |
| Q7 | B | Q20 | B | Q33 | B | Q46 | B | Q59 | B |
| Q8 | B | Q21 | B | Q34 | B | Q47 | B | Q60 | B |
| Q9 | B | Q22 | B | Q35 | D | Q48 | B | Q61 | C |
| Q10 | B | Q23 | B | Q36 | B | Q49 | A | Q62 | C |
| Q11 | B | Q24 | B | Q37 | B | Q50 | B | Q63 | B |
| Q12 | A | Q25 | B | Q38 | B | Q51 | B | Q64 | B |
| Q13 | B | Q26 | A | Q39 | B | Q52 | B | Q65 | B |
Exam Preparation Tips
- Hands-on practice: Directly configure CloudWatch alarms, Systems Manager Patch Manager, and CloudFormation change sets in a sandbox account
- Understand the cost model: Know the pricing structures for EC2 (RI, Savings Plans, Spot), S3 storage classes, and data transfer
- Lab preparation: SOA-C02 includes hands-on labs — practice in the AWS Management Console, not just conceptually
- Read the FAQs: AWS service FAQs often contain exam-relevant edge cases
- Study the Well-Architected Framework: The operational excellence pillar aligns closely with SysOps domain content