Authors
- Youngju Kim (@fjvbn20031)
AWS Cloud Practitioner (CLF-C02) Practice Exam
Exam Overview
| Item | Details |
|---|---|
| Exam Code | CLF-C02 |
| Duration | 90 minutes |
| Questions | 65 (50 scored + 15 unscored) |
| Passing Score | 700 / 1000 |
| Format | Multiple choice and multiple response |
| Cost | USD 100 |
Domain Breakdown
| Domain | Content | Weight |
|---|---|---|
| Domain 1 | Cloud Concepts | 24% |
| Domain 2 | Security and Compliance | 30% |
| Domain 3 | Cloud Technology and Services | 34% |
| Domain 4 | Billing, Pricing, and Support | 12% |
Key Concepts Summary
Domain 1: Cloud Concepts
- CapEx vs OpEx: On-premises uses Capital Expenditure (CapEx); cloud uses Operational Expenditure (OpEx)
- Economies of Scale: AWS aggregates usage from millions of customers to lower costs, passing savings to customers
- Shared Responsibility Model: AWS secures the cloud infrastructure; customers secure what is in the cloud
- Six Advantages of Cloud: Trade CapEx for variable expense, benefit from massive economies of scale, stop guessing capacity, increase speed and agility, stop spending money running data centers, go global in minutes
Domain 2: Security and Compliance
- IAM: Control access with users, groups, roles, and policies
- MFA: Additional authentication layer to enhance account security
- AWS Shield: DDoS protection (Standard: free, Advanced: paid)
- WAF: Web Application Firewall defending against SQL Injection, XSS
- KMS: Key Management Service for data encryption
- CloudTrail: Records all API calls in your AWS account
Domain 3: Cloud Technology and Services
- EC2: Virtual servers (instance families: t, m, c, r, p)
- S3: Object storage with 99.999999999% durability
- RDS: Managed relational DB (MySQL, PostgreSQL, Oracle, etc.)
- Lambda: Serverless function execution
- VPC: Virtual Private Cloud networking
- CloudFront: CDN delivering content from edge locations
- Route 53: DNS service
- ELB: Elastic Load Balancing (ALB, NLB, GWLB, and the legacy CLB)
- Auto Scaling: Automatically adjusts EC2 instances based on demand
Domain 4: Billing and Pricing
- On-Demand: Pay per use, no commitment
- Reserved Instances: 1 or 3-year commitment, up to 72% savings
- Spot Instances: Unused EC2 capacity, up to 90% savings
- Savings Plans: Flexible pricing model with usage commitment
- AWS Cost Explorer: Visualize and analyze costs
- AWS Trusted Advisor: Provides best practice recommendations
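The pricing models above lend themselves to a quick back-of-the-envelope comparison. The sketch below uses a made-up on-demand hourly rate together with the maximum discount figures quoted above (up to 72% for Reserved Instances, up to 90% for Spot); real discounts depend on instance type, region, and commitment terms.

```python
# Illustrative monthly-cost comparison of EC2 pricing models.
# The on-demand rate is hypothetical, not a real AWS price.

ON_DEMAND_HOURLY = 0.10   # hypothetical on-demand $/hour
HOURS_PER_MONTH = 730     # average hours in a month

def monthly_cost(hourly_rate, discount=0.0):
    """Monthly cost after applying a fractional discount."""
    return HOURS_PER_MONTH * hourly_rate * (1 - discount)

on_demand = monthly_cost(ON_DEMAND_HOURLY)        # no commitment
reserved = monthly_cost(ON_DEMAND_HOURLY, 0.72)   # max RI discount
spot = monthly_cost(ON_DEMAND_HOURLY, 0.90)       # max Spot discount

print(f"On-Demand: ${on_demand:.2f}  "
      f"Reserved: ${reserved:.2f}  Spot: ${spot:.2f}")
```

The same function makes it easy to see why Spot suits interruptible workloads: the steep discount compounds over every running hour.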
Practice Questions: 60 Questions
Domain 1: Cloud Concepts (Q1-Q15)
Q1. Which AWS cloud benefit allows businesses to use infrastructure on demand without massive upfront investments in building data centers?
A) Capital Expenditure (CapEx) model B) Operational Expenditure (OpEx) model C) Depreciation model D) Fixed-cost model
Answer: B
Explanation: AWS Cloud uses the Operational Expenditure (OpEx) model. Companies do not need to purchase servers or data centers upfront (CapEx); they only pay for the resources they actually use. This eliminates the need for large upfront capital investment while providing the computing resources needed. Traditional on-premises environments require significant CapEx for server purchases, data center leases, and hardware maintenance, which AWS eliminates entirely.
Q2. In the AWS Shared Responsibility Model, which of the following is AWS responsible for?
A) Encrypting customer data B) Managing operating system patches C) Physical security of data centers D) Managing IAM user permissions
Answer: C
Explanation: In the AWS Shared Responsibility Model, AWS is responsible for "Security OF the Cloud," which includes physical data center security, hardware, network infrastructure, and the virtualization layer. Customers are responsible for "Security IN the Cloud," which includes OS patching, application security, data encryption, and IAM management. This shared model clearly delineates security responsibilities between AWS and customers.
Q3. How does AWS Cloud's "Economies of Scale" benefit customers when they choose not to purchase their own servers?
A) AWS services are always available at a fixed price B) AWS aggregates usage from millions of customers to achieve lower costs and passes those savings on to customers C) Service quality decreases as more customers use the platform D) Economies of scale only apply to large enterprises
Answer: B
Explanation: AWS achieves economies of scale by managing infrastructure for millions of customers worldwide. This allows AWS to reduce hardware, energy, and operational costs, and these savings are passed on to customers through lower pricing. This cost efficiency is impossible for individual companies managing their own data centers. AWS has historically reduced service prices over time due to these economies of scale.
Q4. What are the three main cloud deployment models?
A) Public, Private, Hybrid B) IaaS, PaaS, SaaS C) On-premises, Colocation, Cloud D) Development, Staging, Production
Answer: A
Explanation: The three cloud deployment models are Public (shared cloud like AWS), Private (dedicated cloud for one organization), and Hybrid (combination of on-premises and public cloud). IaaS, PaaS, and SaaS are cloud service models, which is a different concept from deployment models. Most enterprises today adopt a hybrid model that combines existing on-premises infrastructure with cloud services to meet their diverse needs.
Q5. What does the AWS cloud benefit "Stop Guessing Capacity" mean?
A) AWS pre-purchases all server capacity in advance B) Resources can be scaled up or down as needed, preventing over- or under-provisioning C) AWS always provides services at maximum capacity D) Customers must pay for unused resources
Answer: B
Explanation: AWS cloud's elasticity eliminates the need to guess server capacity in advance. When traffic increases, resources can immediately be scaled out, and when it decreases, they can be scaled back in. This solves the over-provisioning (cost waste) or under-provisioning (service outages) problems that commonly occur in traditional on-premises environments, enabling businesses to match capacity precisely to actual demand at any given moment.
Q6. What is the definition of an "Availability Zone (AZ)" in AWS global infrastructure?
A) AWS edge locations in major cities worldwide B) One or more physical data centers with independent power, cooling, and networking C) A geographic area where AWS services are provided D) Distribution points for the AWS CDN service
Answer: B
Explanation: An Availability Zone (AZ) is one or more discrete data centers with redundant power, networking, and connectivity in an AWS Region. Each AZ is isolated from other AZs to prevent the spread of failures. A single region typically has two to six AZs connected by high-speed private networking. Deploying applications across multiple AZs achieves high availability and fault tolerance.
Q7. Which AWS service is an example of "Infrastructure as a Service (IaaS)"?
A) Amazon RDS B) AWS Lambda C) Amazon EC2 D) AWS Elastic Beanstalk
Answer: C
Explanation: Amazon EC2 is the classic example of IaaS (Infrastructure as a Service). EC2 provides virtual servers (computing infrastructure), and the customer manages the operating system, middleware, runtime, data, and applications. RDS has more PaaS characteristics, Lambda is FaaS (Function as a Service), and Elastic Beanstalk is closer to PaaS. IaaS provides customers with the most control over their infrastructure configuration.
Q8. When a company uses AWS for disaster recovery, what is the primary benefit?
A) AWS automatically recovers all data when a disaster occurs B) Backup environments can be built across multiple regions at low cost C) Disaster recovery is not a supported feature in AWS D) Disaster recovery requires purchasing separate hardware
Answer: B
Explanation: Using AWS cloud, organizations can build robust DR (Disaster Recovery) environments at a fraction of the cost compared to traditional on-premises DR. Strategies like Pilot Light or Warm Standby allow replication of data across multiple regions and activation of resources only when needed. Traditional DR site setup required massive hardware investment, but in AWS you only pay for what you use, making enterprise-grade DR accessible to organizations of all sizes.
Q9. Which of the following is NOT one of the six pillars of the AWS Well-Architected Framework?
A) Operational Excellence B) Security C) Cost Optimization D) Customer Satisfaction
Answer: D
Explanation: The six pillars of the AWS Well-Architected Framework are: Operational Excellence, Security, Reliability, Performance Efficiency, Cost Optimization, and Sustainability. Customer Satisfaction is not an official pillar of the Well-Architected Framework. This framework is used to evaluate and improve cloud architectures against proven best practices, helping teams build secure, high-performing, resilient, and efficient infrastructure.
Q10. Which statement best describes the "Agility" benefit of AWS Cloud?
A) Always provides faster processing speeds than physical servers B) New resources can be provisioned in minutes, enabling rapid experimentation C) AWS services are automatically updated D) The global network eliminates latency entirely
Answer: B
Explanation: The agility benefit of AWS is the ability to provision IT resources in minutes rather than weeks or months. This allows development teams to quickly experiment with new ideas, scale successful experiments, and abandon failed ones without significant cost. In traditional on-premises environments, procuring new servers could take weeks, but in AWS a single API call can start an instance immediately, dramatically accelerating the innovation cycle.
Q11. What is the "High Availability" design principle in AWS Cloud?
A) Concentrate all resources on a single server to maximize performance B) Distribute resources across multiple Availability Zones to eliminate single points of failure C) Store data in only one region to enhance security D) Manually monitor servers to detect failures
Answer: B
Explanation: High availability means distributing resources across multiple Availability Zones (AZs) to eliminate single points of failure (SPOF). If one AZ experiences a failure, service continues in other AZs. Using AWS ELB (Elastic Load Balancer) and Auto Scaling, traffic can be distributed across multiple AZs and automatically routed to healthy instances when failures occur, ensuring continuous application availability.
Q12. What free benefit can a startup receive when starting with AWS Cloud services?
A) AWS does not provide free services to startups B) Certain services can be used free for 12 months through the AWS Free Tier C) All AWS services can be used free indefinitely D) Startups get a 50% discount on all AWS services
Answer: B
Explanation: The AWS Free Tier provides various free benefits for new AWS accounts. It includes 12-month free offerings (e.g., 750 hours/month of EC2 t2.micro, 5GB of S3 storage), always-free offers (e.g., 1 million Lambda requests/month), and short-term free trials (e.g., 60-day free trials for some services). This allows startups to experience AWS services and build prototypes without initial costs, lowering the barrier to cloud adoption.
Q13. Which is NOT a key factor when selecting an AWS Region?
A) Data regulation and governance requirements B) Service availability C) Number of AWS employees in the region D) Latency
Answer: C
Explanation: Key considerations when selecting an AWS Region include: compliance and data governance (e.g., GDPR, data localization requirements), service availability (not all services are available in all regions), latency (choosing a region close to users), and cost (pricing can differ by region). The number of AWS employees in a region is irrelevant to the selection decision, as it has no impact on service quality or availability.
Q14. What is the primary difference between "On-Premises" and "Cloud" infrastructure?
A) On-premises is always cheaper than cloud B) Cloud uses a pay-as-you-go model with no upfront investment; on-premises requires purchasing and managing hardware directly C) Cloud has poor security, so large enterprises avoid it D) On-premises allows unlimited scaling
Answer: B
Explanation: On-premises infrastructure requires companies to purchase, install, and maintain physical servers and data centers, requiring high upfront capital investment (CapEx). Cloud uses a provider's infrastructure and pays only for what is used (OpEx model). Cloud enables elastic scaling and eliminates upfront costs, but some regulated industries may choose on-premises or hybrid approaches due to data sovereignty requirements. The cloud model fundamentally changes how organizations think about and manage IT infrastructure.
Q15. Which correctly describes the components of AWS Global Infrastructure?
A) Regions, Availability Zones, Edge Locations B) Data centers, Server farms, Colocation C) VPC, Subnets, Routing tables D) EC2, S3, RDS
Answer: A
Explanation: AWS Global Infrastructure consists of Regions (collections of multiple AZs in a geographic location), Availability Zones (discrete data center facilities), and Edge Locations (distribution points for CloudFront CDN and Route 53 DNS globally). As of 2024, AWS operates more than 30 Regions and 100+ AZs worldwide. Edge Locations are separate from Regions and AZs, providing low-latency content delivery to end users in more geographic locations than full regions.
Domain 2: Security and Compliance (Q16-Q33)
Q16. What does the "Principle of Least Privilege" mean in AWS IAM?
A) Grant all users administrator permissions to maximize efficiency B) Grant users only the minimum permissions necessary to perform their job C) Set permissions once and never change them D) New users have all permissions by default
Answer: B
Explanation: The Principle of Least Privilege is a fundamental security principle stating that each IAM user, role, or service should have only the permissions necessary to perform their specific tasks. For example, a service that only reads files from an S3 bucket should be granted only s3:GetObject permission, not all S3 permissions. This limits the potential damage if an account is compromised, since the attacker can only access what that account has permission to access.
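The least-privilege policy described above can be written out concretely. This is a minimal sketch: the bucket name `example-reports` is made up, and a real policy would usually scope the `Resource` even more tightly.

```python
import json

# A minimal least-privilege IAM policy: only s3:GetObject on one
# (hypothetical) bucket, instead of blanket S3 permissions.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-reports/*",
        }
    ],
}

print(json.dumps(policy, indent=2))
```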
Q17. What is the best practice for protecting the AWS Root account?
A) Use the Root account for all daily tasks B) Enable MFA on the Root account and use IAM users for everyday tasks C) Share the Root account password with team members D) Security groups automatically protect the Root account
Answer: B
Explanation: The AWS Root account has complete access to all AWS services and resources. Security best practices include: enabling MFA (Multi-Factor Authentication) on the Root account, storing Root account credentials securely, and using IAM users or roles for daily operations. The Root account should only be used for specific tasks that require it, such as changing billing settings or closing the AWS account. Treating the root account with extreme care is fundamental to AWS security.
Q18. What is the role of MFA (Multi-Factor Authentication) in AWS?
A) Allows access to AWS without a password B) Requires an additional authentication factor beyond a password to strengthen account security C) Encrypts communications between AWS services D) Can be used instead of IAM policies
Answer: B
Explanation: MFA is a security feature that requires a second authentication factor (e.g., a 6-digit code from a smartphone app or a hardware token) in addition to a password. Even if a password is stolen, the account cannot be accessed without the second authentication factor. AWS supports various MFA methods including virtual MFA devices (like Google Authenticator), FIDO security keys, and hardware TOTP tokens. Enabling MFA is strongly recommended for all IAM users, especially those with privileged access.
Q19. What is the primary function of AWS CloudTrail?
A) Monitors the performance of AWS resources B) Records and audits all API calls made in an AWS account C) Analyzes network traffic to defend against DDoS attacks D) Tracks the costs of AWS services
Answer: B
Explanation: AWS CloudTrail supports governance, compliance, operational auditing, and risk auditing of your AWS account. It records all AWS API calls—who did what, when, and from where—and stores these records in S3. For example, you can track who terminated a specific EC2 instance or when a security group was modified. By default, CloudTrail retains events for 90 days; for longer retention, events can be exported to S3. CloudTrail is essential for security investigations and compliance audits.
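The "who did what, when, and from where" data model is easy to see in a CloudTrail record. The event below is a hand-written, heavily simplified example for illustration (real records carry many more fields), but `eventTime`, `eventName`, `eventSource`, `userIdentity`, and `sourceIPAddress` are genuine CloudTrail field names.

```python
import json

# Parse a simplified, hand-written CloudTrail-style record.
record = json.loads("""
{
  "eventTime": "2024-05-01T12:34:56Z",
  "eventName": "TerminateInstances",
  "eventSource": "ec2.amazonaws.com",
  "userIdentity": {"type": "IAMUser", "userName": "alice"},
  "sourceIPAddress": "203.0.113.5"
}
""")

# Who did what, when, and from where:
print(f'{record["userIdentity"]["userName"]} called '
      f'{record["eventName"]} at {record["eventTime"]} '
      f'from {record["sourceIPAddress"]}')
```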
Q20. What is the difference between AWS Shield Standard and AWS Shield Advanced?
A) Shield Standard is paid and Advanced is free B) Shield Standard is provided free to all AWS customers and protects against common DDoS attacks; Shield Advanced is a paid service offering protection against sophisticated DDoS attacks with 24/7 support C) Shield Standard only applies to EC2 and Advanced applies to all services D) Both services provide identical features with different names
Answer: B
Explanation: AWS Shield Standard is automatically provided to all AWS customers at no additional charge, protecting against common L3/L4 DDoS attacks (SYN floods, UDP reflections, etc.). AWS Shield Advanced is a paid service at $3,000/month, offering protection against more sophisticated DDoS attacks, real-time attack visibility, DDoS cost protection, and 24/7 support from the AWS DDoS Response Team (DRT). Shield Advanced protects EC2, ELB, CloudFront, Route 53, and AWS Global Accelerator resources.
Q21. What types of attacks does AWS WAF (Web Application Firewall) protect against?
A) Physical server breaches B) DDoS volumetric attacks C) Web application attacks like SQL Injection and Cross-Site Scripting (XSS) D) Network layer spoofing attacks
Answer: C
Explanation: AWS WAF is a web firewall that protects web applications from common web exploits. It detects and blocks SQL Injection (inserting malicious SQL code), Cross-Site Scripting (XSS), HTTP floods, and malicious bots. WAF can be deployed in front of CloudFront, Application Load Balancer (ALB), API Gateway, and AppSync. Protection can be configured through custom rules or AWS Managed Rules groups that cover OWASP Top 10 vulnerabilities.
Q22. What is the primary use of AWS KMS (Key Management Service)?
A) Managing AWS account passwords B) Creating, storing, and managing encryption keys C) Issuing SSL/TLS certificates D) Encrypting VPN connections
Answer: B
Explanation: AWS KMS is a service for creating and managing cryptographic keys used for data encryption. It integrates with many AWS services such as S3, EBS, RDS, and DynamoDB to encrypt data at rest. KMS uses hardware security modules (HSMs) validated under FIPS 140-2 and records key usage audit logs in CloudTrail. You can create Customer Managed Keys (CMK) to have full control over your encryption keys, including defining key policies and rotation schedules.
Q23. What is the primary difference between IAM Roles and IAM Users?
A) Roles can only be assigned to people; users are assigned to services B) Roles use temporary credentials and are used to delegate permissions to AWS services or other AWS accounts C) Users can only use programmatic access D) Roles require a password; users do not
Answer: B
Explanation: IAM Roles are a mechanism to delegate permissions to specific entities (EC2 instances, Lambda functions, other AWS accounts) through temporary credentials. Roles have no passwords or access keys; they are "assumed" when needed to receive temporary credentials via AWS STS. IAM Users represent individuals or applications with long-term credentials (passwords, access keys). Best practice is to use roles rather than users when granting AWS services access to other services, as temporary credentials are more secure.
Q24. What is the function of a "Security Group" in AWS?
A) A function to manage IAM users in groups B) A virtual firewall that controls inbound and outbound traffic for EC2 instances C) Provides encryption between AWS services D) Routes traffic between VPCs
Answer: B
Explanation: A Security Group acts as a virtual firewall that controls network traffic for EC2 instances. You can set inbound (incoming) and outbound (outgoing) traffic rules based on IP address, port, and protocol. Security Groups have only "Allow" rules and are stateful—if you allow inbound traffic, the corresponding response traffic is automatically allowed. Multiple security groups can be attached to a single EC2 instance, and changes take effect immediately.
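The stateful behavior described above can be illustrated with a toy model: if an inbound rule admits a connection, the matching response traffic passes automatically, with no outbound rule required. Real security groups track connections inside the VPC networking layer; this is only a sketch.

```python
# Toy model of a stateful security group with Allow-only inbound rules.
class SecurityGroup:
    def __init__(self, inbound_ports):
        self.inbound_ports = set(inbound_ports)  # Allow rules only
        self.connections = set()                 # tracked connections

    def allow_inbound(self, src, port):
        if port in self.inbound_ports:
            self.connections.add((src, port))    # remember the connection
            return True
        return False  # no Deny rules exist: traffic is simply not allowed

    def allow_outbound_response(self, dst, port):
        # Stateful: responses to tracked connections pass automatically.
        return (dst, port) in self.connections

sg = SecurityGroup(inbound_ports=[443])
print(sg.allow_inbound("203.0.113.5", 443))            # True: rule matches
print(sg.allow_outbound_response("203.0.113.5", 443))  # True: stateful
print(sg.allow_inbound("203.0.113.5", 22))             # False: no Allow rule
```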
Q25. What is the primary function of AWS Artifact?
A) A package management service for storing code artifacts B) Provides on-demand access to AWS compliance reports and AWS agreements C) A CI/CD pipeline automation tool D) An application log analysis service
Answer: B
Explanation: AWS Artifact is a free self-service portal that provides on-demand access to AWS security and compliance reports and select online agreements. You can download ISO certifications, PCI DSS reports, SOC reports, and other AWS compliance documents. It's useful when you need to demonstrate AWS security controls to auditors or regulators. It also provides contract management features such as Business Associate Agreements (BAAs) required for HIPAA compliance.
Q26. What is the difference between "Network ACL (Access Control List)" and "Security Group" in AWS?
A) Network ACL operates at the instance level; Security Group operates at the subnet level B) Network ACL operates at the subnet level and is stateless; Security Group operates at the instance level and is stateful C) Both services provide identical functionality D) Network ACL has only Allow rules; Security Group has only Deny rules
Answer: B
Explanation: Network ACLs are stateless firewalls operating at the VPC subnet level that can have both Allow and Deny rules. Being stateless means that if inbound traffic is allowed, you still need a separate outbound rule for the response traffic. Security Groups are stateful firewalls at the instance level with only Allow rules; when inbound traffic is allowed, the response is automatically allowed. Using both layers together implements Defense in Depth, providing multiple security barriers around your resources.
Q27. Which AWS service does NOT support "encryption"?
A) Amazon S3 B) Amazon EBS C) AWS KMS D) Amazon Route 53
Answer: D
Explanation: Amazon Route 53 is a DNS service focused on domain name resolution rather than data encryption. S3 supports server-side encryption (SSE-S3, SSE-KMS, SSE-C), EBS supports volume encryption, and KMS is the encryption key management service. AWS supports both Encryption at Rest and Encryption in Transit across many services. Route 53 does use HTTPS for its API, but data stored within Route 53 (DNS records) isn't subject to encryption in the same way data services are.
Q28. What are the possible values for the "Effect" element in an IAM policy?
A) Permit, Deny B) Allow, Deny C) Grant, Revoke D) True, False
Answer: B
Explanation: The Effect element in an IAM policy can only have two values: "Allow" or "Deny." AWS IAM uses an implicit deny model—everything is denied by default, and only explicitly allowed actions are permitted. An explicit Allow is effective only when there is no explicit Deny. An explicit Deny always takes precedence over Allow. For example, even if a group policy grants Allow, if a more specific policy explicitly Denies, access is denied, making it important to understand evaluation order.
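The evaluation order above (implicit deny by default, explicit Deny always wins, otherwise explicit Allow grants access) can be captured in a toy evaluator. Real IAM also evaluates conditions, resource ARNs, and permissions boundaries, all of which this sketch ignores.

```python
# Toy model of IAM policy evaluation: implicit deny, explicit Deny wins.
def evaluate(statements, action):
    """Return True if `action` is allowed under the given statements."""
    allowed = False
    for stmt in statements:
        if action in stmt["Action"]:
            if stmt["Effect"] == "Deny":
                return False          # explicit Deny always wins
            if stmt["Effect"] == "Allow":
                allowed = True
    return allowed                    # implicit deny if never allowed

stmts = [
    {"Effect": "Allow", "Action": ["s3:GetObject", "s3:PutObject"]},
    {"Effect": "Deny", "Action": ["s3:PutObject"]},
]
print(evaluate(stmts, "s3:GetObject"))     # True: Allow, no Deny
print(evaluate(stmts, "s3:PutObject"))     # False: explicit Deny wins
print(evaluate(stmts, "s3:DeleteObject"))  # False: implicit deny
```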
Q29. What is true about AWS Compliance Programs?
A) AWS takes all compliance responsibility on behalf of customers B) AWS supports many compliance programs like HIPAA, PCI DSS, SOC, and ISO, and customers can leverage these to achieve their own compliance C) Compliance programs are only available to Enterprise Support customers D) All AWS services meet all compliance standards
Answer: B
Explanation: AWS supports dozens of compliance programs worldwide (HIPAA, PCI DSS, SOC 1/2/3, ISO 27001, FedRAMP, etc.). AWS handles compliance for cloud infrastructure while customers are responsible for compliance of their workloads in the cloud (Shared Responsibility Model). This allows customers to accelerate their own audit and certification processes by building on AWS's compliance certifications. Relevant reports can be downloaded from AWS Artifact, and customers receive audit-ready documentation.
Q30. What is the primary function of Amazon GuardDuty?
A) Monitors database performance B) An intelligent threat detection service that monitors AWS accounts and workloads C) Encrypts network traffic D) Manages user authentication
Answer: B
Explanation: Amazon GuardDuty is a threat detection service that continuously monitors AWS accounts, workloads, and S3 data for malicious activity and unauthorized behavior using machine learning, anomaly detection, and integrated threat intelligence. It analyzes CloudTrail logs, VPC Flow Logs, and DNS queries to detect threats such as account compromise, malware infections, and cryptocurrency mining. It can be easily enabled from the AWS console without installing any agents.
Q31. Which service is NOT relevant for achieving PCI DSS compliance on AWS?
A) AWS CloudTrail B) AWS Config C) AWS Translate D) Amazon VPC
Answer: C
Explanation: AWS Translate is a language translation service with no direct relationship to PCI DSS compliance. Services required for PCI DSS (Payment Card Industry Data Security Standard) compliance include CloudTrail (API auditing), AWS Config (configuration management and compliance), VPC (network isolation), KMS (encryption), Shield (DDoS protection), and WAF (web security). AWS operates PCI DSS Level 1 certified infrastructure that customers can use as the foundation for card payment environments.
Q32. What is the primary function of Amazon Inspector?
A) Analyzes AWS costs and provides optimization recommendations B) Automatically scans EC2 instances, container images, and Lambda functions for software vulnerabilities C) Blocks network intrusions in real time D) Provides data loss prevention (DLP) capabilities
Answer: B
Explanation: Amazon Inspector is a vulnerability management service that automatically discovers software vulnerabilities and unintended network exposure in AWS workloads. It continuously scans EC2 instances, Amazon ECR container images, and Lambda functions, comparing findings against the CVE (Common Vulnerabilities and Exposures) database and assessing risk levels. Using the Systems Manager (SSM) Agent (with an agentless scanning option for EC2), it evaluates OS packages and network accessibility to identify security vulnerabilities early in the development lifecycle.
Q33. What is the primary function of Amazon Macie?
A) Analyzes network traffic B) Automatically discovers and protects sensitive data like PII in S3 C) Automates security patching for EC2 instances D) Simulates IAM policies
Answer: B
Explanation: Amazon Macie is a data security service that uses machine learning to automatically discover, classify, and protect sensitive data (personally identifiable information (PII), financial data, healthcare information, etc.) stored in S3. It continuously monitors S3 buckets to detect whether sensitive data is publicly accessible or unencrypted and generates alerts. Macie helps with compliance with data protection regulations like GDPR and HIPAA by providing automated data discovery and risk scoring.
Domain 3: Cloud Technology and Services (Q34-Q54)
Q34. What is the difference between "Instance Store" and "EBS (Elastic Block Store)" in Amazon EC2?
A) Instance Store is more expensive but provides better performance B) Instance Store is temporary storage that is deleted when an instance is stopped or terminated; EBS is persistent storage that is maintained independently of the instance C) EBS is always faster than Instance Store D) Instance Store is available on all EC2 instance types
Answer: B
Explanation: Instance Store is temporary storage physically attached to the EC2 host—all data is lost when the instance is stopped or terminated. It is suitable for caches, buffers, and temporary files. EBS is persistent block storage connected over the network that retains data even when an instance is terminated. EBS volumes can be detached from one instance and attached to another, and can be backed up to S3 via snapshots. For production workloads requiring data persistence, EBS is the recommended choice.
Q35. Which S3 storage class is designed for infrequently accessed data at a lower cost?
A) S3 Standard B) S3 Standard-IA (Infrequent Access) C) S3 Express One Zone D) S3 Intelligent-Tiering
Answer: B
Explanation: S3 Standard-IA is a storage class for data that is accessed less frequently but requires rapid access when needed. The storage cost is lower than S3 Standard, but there is an additional retrieval fee when accessing data. There is a minimum storage duration of 30 days. S3 Standard is for frequently accessed data, S3 Glacier is for archive data rarely accessed, and S3 Intelligent-Tiering automatically moves data to the appropriate tier based on access patterns, eliminating manual lifecycle management.
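The Standard-vs-Standard-IA trade-off described above is essentially a break-even calculation: IA stores data more cheaply but adds a per-GB retrieval fee. The prices below are approximate us-east-1 figures used purely for illustration; always check current pricing.

```python
# Rough monthly cost comparison: S3 Standard vs Standard-IA.
STANDARD_GB = 0.023      # $/GB-month, S3 Standard (approximate)
IA_GB = 0.0125           # $/GB-month, S3 Standard-IA (approximate)
IA_RETRIEVAL_GB = 0.01   # $/GB retrieved from Standard-IA (approximate)

def s3_monthly_cost(size_gb, gb_retrieved, storage_class):
    """Storage cost plus (for IA) the per-GB retrieval fee."""
    if storage_class == "STANDARD":
        return size_gb * STANDARD_GB
    return size_gb * IA_GB + gb_retrieved * IA_RETRIEVAL_GB

size = 1000  # 1 TB stored
print(f"{s3_monthly_cost(size, 100, 'STANDARD'):.2f}")  # few reads, Standard
print(f"{s3_monthly_cost(size, 100, 'IA'):.2f}")        # few reads, IA wins
print(f"{s3_monthly_cost(size, 2000, 'IA'):.2f}")       # heavy reads erase savings
```

The third line shows why access frequency, not just data size, decides the right storage class.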
Q36. What are the primary characteristics of AWS Lambda?
A) Servers must be provisioned and managed B) No server management is needed to run code, and you only pay for the time your code runs C) Billing is in minimum 1-hour increments D) Only runs on EC2 instances
Answer: B
Explanation: AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers. It is triggered by events (S3 uploads, API Gateway requests, DynamoDB streams, etc.) and charges are based on actual execution time (1ms increments) and request count. Lambda scales automatically and can run for up to 15 minutes. The free tier includes 1 million requests and 400,000 GB-seconds of compute time per month, making it extremely cost-effective for event-driven workloads.
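Lambda's pay-per-use billing can be sketched as a short calculation: charges scale with request count and GB-seconds of execution, after the monthly free tier (1 million requests, 400,000 GB-seconds) mentioned above. The per-unit prices are approximate us-east-1 figures used for illustration only.

```python
# Sketch of Lambda monthly billing after the free tier.
PRICE_PER_REQUEST = 0.20 / 1_000_000   # approx $/request
PRICE_PER_GB_SECOND = 0.0000166667     # approx $/GB-second
FREE_REQUESTS = 1_000_000
FREE_GB_SECONDS = 400_000

def lambda_monthly_cost(requests, avg_ms, memory_mb):
    """Cost from billable requests plus billable GB-seconds."""
    gb_seconds = requests * (avg_ms / 1000) * (memory_mb / 1024)
    billable_requests = max(0, requests - FREE_REQUESTS)
    billable_gb_seconds = max(0, gb_seconds - FREE_GB_SECONDS)
    return (billable_requests * PRICE_PER_REQUEST
            + billable_gb_seconds * PRICE_PER_GB_SECOND)

# 3M requests/month, 120 ms average duration, 512 MB memory:
# 180,000 GB-seconds stays inside the free tier, so only the
# 2M requests beyond the free tier are billed.
print(f"${lambda_monthly_cost(3_000_000, 120, 512):.2f}")
```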
Q37. What is the difference between a "Public Subnet" and a "Private Subnet" in Amazon VPC?
A) Public Subnet provides faster network speeds B) Public Subnet has a route to an Internet Gateway allowing direct internet access; Private Subnet has no direct internet access C) Private Subnet is less expensive D) Public Subnet only supports IPv6
Answer: B
Explanation: A Public Subnet has a route to an Internet Gateway (IGW) in its route table, allowing instances with public IPs to be directly accessible from the internet. A Private Subnet has no direct route to an IGW, making it inaccessible from the internet. Instances in private subnets can access the internet outbound through a NAT Gateway. The typical pattern is to place web servers in public subnets and databases or backend services in private subnets for security, creating a defense-in-depth network architecture.
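The public/private distinction above boils down to one route-table check: a subnet is public when its route table has a default route (0.0.0.0/0) targeting an Internet Gateway. The sketch below models route tables as plain dicts, not real AWS objects.

```python
# A subnet is "public" if 0.0.0.0/0 routes to an Internet Gateway (igw-...).
def is_public_subnet(route_table):
    """True if the default route points at an Internet Gateway."""
    target = route_table.get("0.0.0.0/0", "")
    return target.startswith("igw-")

public_routes = {"10.0.0.0/16": "local", "0.0.0.0/0": "igw-0abc1234"}
private_routes = {"10.0.0.0/16": "local", "0.0.0.0/0": "nat-0def5678"}

print(is_public_subnet(public_routes))   # True: default route hits an IGW
print(is_public_subnet(private_routes))  # False: outbound only via NAT
```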
Q38. What is the primary function of Amazon CloudFront?
A) Automatically provisions cloud infrastructure B) A CDN service that delivers content with low latency through edge locations worldwide C) Optimizes database queries D) Manages multi-region data replication
Answer: B
Explanation: Amazon CloudFront is a CDN (Content Delivery Network) service that delivers both static (HTML, CSS, JS, images) and dynamic content from over 400 edge locations worldwide, close to users. It caches content from origin servers (S3, EC2, ALB, etc.) to reduce latency and decrease origin load. CloudFront supports HTTPS by default and also provides DDoS protection, WAF integration, and edge computing through AWS Lambda@Edge, enabling sophisticated content delivery and transformation at the edge.
Q39. Which is NOT a primary function of Amazon Route 53?
A) Domain name registration B) DNS routing C) Translating domain names to IP addresses D) Managing database connections
Answer: D
Explanation: Amazon Route 53 is AWS's DNS (Domain Name System) service. Its primary functions include domain name registration, DNS routing (with various routing policies: simple, weighted, latency-based, failover, geolocation, multi-value), and health checks with failover capabilities. Database connection management is a function of Amazon RDS or AWS Secrets Manager. Route 53 offers a 100% availability SLA and provides high availability and scalability for DNS operations.
Q40. What is the difference between ALB (Application Load Balancer) and NLB (Network Load Balancer) in Elastic Load Balancing?
A) ALB operates at Layer 7 (HTTP/HTTPS) and NLB operates at Layer 4 (TCP/UDP) B) ALB is faster and NLB is cheaper C) NLB only supports HTTP and ALB supports all protocols D) Both load balancers provide identical functionality
Answer: A
Explanation: ALB (Application Load Balancer) operates at OSI Layer 7 (application layer) and handles HTTP/HTTPS traffic. It enables advanced routing based on URL path, host header, HTTP method, and more, making it suitable for microservices and container-based applications. NLB (Network Load Balancer) operates at Layer 4 (transport layer) and handles TCP/UDP traffic. It is an ultra-high-performance, ultra-low-latency load balancer capable of handling millions of requests per second, suitable for gaming, IoT, and real-time streaming applications.
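To illustrate what Layer 7 routing means in practice, here is a minimal sketch of ALB-style path-based rules (the rule table and target-group names are hypothetical, not the ELB API):

```python
# First matching path prefix wins; "/" acts like an ALB default rule.
# An NLB, operating at Layer 4, never sees the URL path at all.
RULES = [
    ("/api/",    "api-target-group"),
    ("/images/", "static-target-group"),
    ("/",        "default-target-group"),  # catch-all
]

def route(path):
    for prefix, target_group in RULES:
        if path.startswith(prefix):
            return target_group
    return None

print(route("/api/users"))        # api-target-group
print(route("/images/logo.png"))  # static-target-group
print(route("/index.html"))       # default-target-group
```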
Q41. In Amazon Auto Scaling, what is the correct relationship between "Desired Capacity," "Minimum," and "Maximum"?
A) All three values must be equal B) Desired Capacity is the current number of instances to run and must be between Min and Max C) Maximum can be less than Minimum D) Desired Capacity must always equal Maximum
Answer: B
Explanation: An Auto Scaling group has three capacity values: Minimum (min instances that must always run), Maximum (max instances during scale-out), and Desired (current number to run, must be between Min and Max). Auto Scaling adjusts the Desired Capacity based on CloudWatch alarms or scheduled scaling, maintaining instance count within the Min-Max range. This ensures you always have enough capacity while not over-provisioning, automatically adapting to changing workload demands.
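The Min ≤ Desired ≤ Max invariant amounts to a clamp, sketched below (illustrative only, not the Auto Scaling API):

```python
# Auto Scaling keeps the running instance count at Desired, and Desired
# itself is always clamped into the [minimum, maximum] range.
def set_desired(requested, minimum, maximum):
    if minimum > maximum:
        raise ValueError("Minimum cannot exceed Maximum")
    return max(minimum, min(requested, maximum))

print(set_desired(5, minimum=2, maximum=10))   # 5  -> within range
print(set_desired(20, minimum=2, maximum=10))  # 10 -> capped at Maximum
print(set_desired(0, minimum=2, maximum=10))   # 2  -> raised to Minimum
```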
Q42. What is the primary difference between Amazon SNS and Amazon SQS?
A) SNS stores data permanently and SQS stores it temporarily B) SNS is push-based publish/subscribe messaging; SQS is pull-based message queuing that buffers messages C) SQS only supports email delivery; SNS supports all message types D) Both services are identical with different names
Answer: B
Explanation: SNS is a publish/subscribe (Pub/Sub) messaging service that simultaneously pushes a single message to multiple subscribers (email, SMS, Lambda, SQS, HTTP, etc.). It's ideal for fan-out patterns. SQS is a message queue service that stores messages in a queue for consumers to pull and process. Message processing failures can be retried, enabling loose coupling between microservices. Combining SNS and SQS together enables both reliable message processing and fan-out patterns simultaneously.
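The SNS-to-SQS fan-out pattern can be simulated with plain Python objects (a toy model, not the AWS SDK; queue and field names are made up):

```python
from collections import deque

class Topic:
    """Toy SNS-style topic: pushes each published message to every subscriber."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, queue):
        self.subscribers.append(queue)

    def publish(self, message):
        for queue in self.subscribers:   # push model: one publish, N copies
            queue.append(message)

orders = Topic()
billing, shipping = deque(), deque()     # toy SQS-style queues (pull model)
orders.subscribe(billing)
orders.subscribe(shipping)
orders.publish({"order_id": 42})

print(len(billing), len(shipping))       # 1 1 -> each queue got its own copy
```

Each consumer then pulls from its own queue at its own pace, which is what decouples the services.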
Q43. What are the primary functions of Amazon CloudWatch?
A) Automates code deployment B) Monitoring AWS resources and applications, collecting logs, and setting alarms C) Optimizes DNS queries D) Automates database backups
Answer: B
Explanation: Amazon CloudWatch is a monitoring service for AWS resources and applications. Key functions include: metrics collection (EC2 CPU utilization, S3 bucket size, etc.), alarm setting (notifications or Auto Scaling triggers when thresholds are exceeded), log collection and analysis (CloudWatch Logs), dashboard creation, and event-based automation (EventBridge). Basic monitoring collects metrics at 5-minute intervals; detailed monitoring collects at 1-minute intervals for an additional cost.
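CloudWatch-style alarm evaluation boils down to "threshold breached for N consecutive periods", sketched here (a simplification, not the real service):

```python
# Returns the alarm state given a series of metric datapoints:
# ALARM when the last `periods` datapoints all breach the threshold.
def alarm_state(datapoints, threshold, periods):
    if len(datapoints) < periods:
        return "INSUFFICIENT_DATA"
    recent = datapoints[-periods:]
    return "ALARM" if all(v > threshold for v in recent) else "OK"

cpu = [45.0, 82.0, 91.0, 88.0]   # e.g. 5-minute average CPU utilization
print(alarm_state(cpu, threshold=80.0, periods=3))  # ALARM: 3 breaches in a row
```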
Q44. What is the primary characteristic of Amazon DynamoDB?
A) A relational database that uses SQL B) A fully managed NoSQL database that provides single-digit millisecond performance C) Can only be operated on-premises D) Can only store up to 10GB of data
Answer: B
Explanation: Amazon DynamoDB is a fully managed NoSQL database service for key-value and document data. It provides consistent performance in single-digit milliseconds at any scale and scales horizontally automatically. It operates serverlessly—provisioning, patching, and backups are managed by AWS. It supports both on-demand and provisioned modes, and Global Tables enable multi-region replication. It is ideal for applications requiring high scalability such as gaming, IoT, and mobile apps.
Q45. What is the primary purpose of "Multi-AZ" deployment in Amazon RDS?
A) Creates read replicas across multiple AZs to improve read performance B) Maintains a synchronous replica in another AZ for high availability, with automatic failover if a failure occurs C) Replicates data across multiple regions D) Reduces database costs
Answer: B
Explanation: RDS Multi-AZ is a high availability and automatic failover feature. It maintains a standby instance in a different AZ through synchronous replication. If the primary DB instance fails, automatic failover to the standby instance occurs in approximately 1-2 minutes. The standby instance does not handle read requests (Read Replicas are used for read scaling); it exists solely for high availability. Multi-AZ is strongly recommended for production databases where uptime is critical.
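The failover behavior can be pictured with a toy model (illustrative only; real RDS failover is handled by AWS and repoints the DNS endpoint):

```python
class MultiAZDatabase:
    """Toy Multi-AZ pair: one primary, one synchronous standby."""
    def __init__(self):
        self.primary, self.standby = "az-1a", "az-1b"

    def endpoint(self):
        return self.primary          # clients always use a single endpoint

    def fail_primary(self):
        # Automatic failover: promote the standby (~1-2 minutes in RDS);
        # the application keeps using the same endpoint.
        self.primary, self.standby = self.standby, self.primary

db = MultiAZDatabase()
print(db.endpoint())     # az-1a
db.fail_primary()
print(db.endpoint())     # az-1b -> standby promoted, endpoint unchanged
```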
Q46. What is the primary function of AWS Elastic Beanstalk?
A) A tool for manually managing EC2 instances B) A PaaS service for quickly deploying and managing applications that automates infrastructure management C) A container orchestration service D) A serverless function execution environment
Answer: B
Explanation: AWS Elastic Beanstalk is a PaaS (Platform as a Service) service that makes it easy to deploy and manage applications (web apps, APIs, etc.). Developers just upload their code and Elastic Beanstalk automatically handles EC2 instance provisioning, load balancing, Auto Scaling, monitoring, and patching. It supports Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker. It allows you to reduce infrastructure management complexity while retaining full control over the underlying resources.
Q47. What do Amazon ECS and Amazon EKS have in common?
A) Both services are Kubernetes-based B) Both are container orchestration services for running and managing containerized applications C) Both services only support serverless deployments D) Both services are used for managing relational databases
Answer: B
Explanation: Both ECS and EKS are container orchestration services for running and managing containerized applications at scale. ECS is AWS's proprietary orchestration service that is deeply integrated with AWS services and simpler to manage. EKS provides open-source Kubernetes as a fully managed service on AWS, suitable when migrating existing Kubernetes workloads to AWS. Both can run on EC2 or Fargate (serverless containers), giving you flexibility in how you manage the underlying infrastructure.
Q48. What is Amazon S3 Glacier (formerly Amazon Glacier) best suited for?
A) High-performance storage for frequently accessed data B) Long-term archival storage for rarely accessed data C) In-memory cache storage D) Real-time streaming data processing
Answer: B
Explanation: Amazon S3 Glacier is a very low-cost storage service for long-term archival of rarely used data. Retrieval time varies from minutes to hours depending on the retrieval option chosen, and it costs over 90% less than S3 Standard. Three storage classes are available: S3 Glacier Instant Retrieval (millisecond retrieval), S3 Glacier Flexible Retrieval (1 minute to 12 hours), and S3 Glacier Deep Archive (12 to 48 hours). Suitable for regulatory data retention, backups, and log archiving.
Q49. What is the primary use of Amazon ElastiCache?
A) A relational database for permanent data storage B) Reduces database load and improves application performance through in-memory caching C) An object storage service D) Caches DNS query results
Answer: B
Explanation: Amazon ElastiCache is a fully managed in-memory cache service providing Redis or Memcached on AWS. It caches frequently requested data in memory to reduce database lookups and shorten response times to sub-millisecond levels. It is used for session storage, database query result caching, leaderboards, and real-time analytics. Placing ElastiCache in front of RDS can dramatically reduce database load and improve application response times during peak traffic periods.
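The usual way ElastiCache sits in front of a database is the cache-aside pattern, sketched below (a dict stands in for Redis, and `slow_query` is a made-up stand-in for a database lookup):

```python
cache = {}
db_reads = {"count": 0}

def slow_query(key):
    """Stand-in for an expensive RDS lookup."""
    db_reads["count"] += 1
    return f"row-for-{key}"

def get(key):
    if key not in cache:          # cache miss -> hit the database once
        cache[key] = slow_query(key)
    return cache[key]             # cache hit -> served from memory

get("user:1"); get("user:1"); get("user:1")
print(db_reads["count"])   # 1 -> only the first read reached the database
```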
Q50. Which statement about Amazon S3's "Versioning" feature is correct?
A) Enabling versioning deletes all existing files B) Enabling versioning on a bucket preserves all previous versions of every object, allowing recovery of accidentally deleted or overwritten objects C) Versioning is only supported for the S3 Standard class D) Enabling versioning decreases storage costs
Answer: B
Explanation: When you enable versioning on an S3 bucket, multiple versions of every object uploaded with the same key are stored. Even if a file is overwritten or deleted, previous versions can be recovered. Versioning is enabled at the bucket level and once enabled cannot be disabled (only suspended). After enabling versioning, all versions are stored so storage costs may increase; lifecycle policies can automatically delete older versions or move them to cheaper storage classes to manage costs.
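The behavior of a versioned bucket can be modeled in a few lines (a toy model using list indices as version IDs, not the real S3 API):

```python
from collections import defaultdict

# key -> list of versions, oldest first; a PUT appends instead of overwriting
bucket = defaultdict(list)

def put(key, body):
    bucket[key].append(body)

def get(key, version=-1):
    return bucket[key][version]          # default: latest version

put("report.csv", b"v1 contents")
put("report.csv", b"v2 contents")        # "overwrite" -> new version, v1 kept

print(get("report.csv"))                 # b'v2 contents' (latest)
print(get("report.csv", version=0))      # b'v1 contents' (recoverable)
```

Note that keeping every version is also why storage costs grow until a lifecycle policy expires old versions.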
Q51. Which combination of services makes up a "serverless" architecture on AWS?
A) EC2 + RDS + VPC B) Lambda + API Gateway + DynamoDB + S3 C) EC2 + EBS + RDS D) EMR + Kinesis + Redshift
Answer: B
Explanation: A serverless architecture is one where the cloud provider manages server management on behalf of the developer. The classic AWS serverless combination is Lambda (serverless function execution), API Gateway (serverless API endpoint), DynamoDB (serverless NoSQL database), and S3 (serverless object storage). Serverless architectures allow developers to focus solely on code without infrastructure management burdens, paying only for what they use and scaling automatically in response to demand.
Q52. What is the primary function of Amazon Kinesis?
A) A CDN that caches static web content B) A service for collecting, processing, and analyzing real-time streaming data C) A non-relational database D) A container image registry
Answer: B
Explanation: Amazon Kinesis is a fully managed service for processing real-time streaming data. It consists of Kinesis Data Streams (real-time data stream ingestion and processing), Kinesis Data Firehose (delivering streaming data to S3, Redshift, or OpenSearch Service), Kinesis Data Analytics (analyzing streaming data with SQL or Apache Flink), and Kinesis Video Streams (processing video streams). It is used for IoT sensor data, clickstreams, financial transaction data, and other real-time processing needs.
Q53. What is the primary use of Amazon Redshift?
A) An OLTP database for transaction processing B) Petabyte-scale data warehousing and analytics C) Real-time streaming data processing D) An in-memory cache service
Answer: B
Explanation: Amazon Redshift is a fully managed, cloud-based data warehouse service optimized for OLAP (Online Analytical Processing) workloads that require fast analysis of petabyte-scale data. Columnar storage and massively parallel processing (MPP) enable fast processing of complex aggregation queries. Features include Redshift Spectrum (directly querying S3 data), ML-based query optimization, and a SQL interface. It is used for BI tools, dashboards, and business analytics.
Q54. What is the primary function of AWS CloudFormation?
A) A cloud cost analysis tool B) An IaC service that defines and provisions infrastructure as code using YAML/JSON templates C) Application performance monitoring D) Automates database migration
Answer: B
Explanation: AWS CloudFormation is a service that allows managing infrastructure as code (IaC). By defining AWS resources (EC2, VPC, RDS, etc.) in YAML or JSON templates, CloudFormation automatically provisions and configures those resources. Resources are managed in groups through Stacks, with version control, rollback, and Change Set features available. Development, staging, and production environments can be consistently provisioned using the same template, ensuring infrastructure consistency and repeatability.
Domain 4: Billing, Pricing, and Support (Q55-Q60)
Q55. Which statement about "Reserved Instances" in Amazon EC2 is correct?
A) They automatically stop when not in use to save costs B) Purchase a 1 or 3-year commitment to save up to 72% compared to On-Demand pricing C) Purchase unused EC2 capacity from other customers at a lower price D) Available for unlimited use at a fixed hourly rate
Answer: B
Explanation: Reserved Instances allow committing to 1 or 3 years of use with upfront or partial upfront payment, saving up to 72% compared to On-Demand prices. There are Standard RIs (locked to specific instance type) and Convertible RIs (can change instance type). Suitable for steady-state, predictable workloads (databases, web servers, etc.). Spot Instances purchase unused capacity at up to 90% discount but can be interrupted at any time with a 2-minute warning, making them suitable only for fault-tolerant workloads.
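A back-of-envelope comparison makes the discount figures concrete. The hourly rate below is made up (real prices vary by region and instance type); 72% and 90% are the maximum discounts cited above:

```python
on_demand_hourly = 0.10                       # hypothetical $/hour
hours_per_year = 24 * 365                     # 8760 hours

on_demand_cost = on_demand_hourly * hours_per_year
reserved_cost  = on_demand_cost * (1 - 0.72)  # up to 72% off, 1- or 3-yr term
spot_cost      = on_demand_cost * (1 - 0.90)  # up to 90% off, interruptible

print(f"On-Demand: ${on_demand_cost:.2f}/yr")
print(f"Reserved : ${reserved_cost:.2f}/yr")
print(f"Spot     : ${spot_cost:.2f}/yr")
```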
Q56. What is the primary function of AWS Cost Explorer?
A) Automatically optimizes AWS infrastructure B) Visualizes and analyzes AWS costs and usage, and forecasts future costs C) Automatically purchases Reserved Instances D) Automatically creates billing support tickets
Answer: B
Explanation: AWS Cost Explorer is a tool for visualizing and analyzing AWS cost and usage data. You can categorize costs by service, region, tag, and account to understand cost patterns. It provides analysis of historical cost data (up to the past 12 months) and forecasts of future spend (up to 12 months ahead). It also provides Reserved Instance recommendations to identify cost savings opportunities. Basic usage is free; API calls incur a small charge. Cost Explorer is essential for financial governance in AWS environments.
Q57. Which of the following is NOT one of the check categories in AWS Trusted Advisor?
A) Cost Optimization B) Security C) Application Code Quality D) Performance
Answer: C
Explanation: AWS Trusted Advisor is a service that checks your AWS environment against AWS best practices and provides recommendations. The five check categories are: Cost Optimization (unused resources, Reserved Instance recommendations), Security (open ports, MFA not enabled, etc.), Performance (EBS throughput, EC2 utilization, etc.), Fault Tolerance (RDS without backup, etc.), and Service Limits (approaching service quotas). Application code quality inspection is a function of Amazon CodeGuru Reviewer.
Q58. What is the main difference between "Business Support" and "Enterprise Support" AWS Support plans?
A) Enterprise provides 24/7 phone support; Business only supports email B) Enterprise provides a dedicated TAM (Technical Account Manager) and guarantees a 15-minute response SLA for business-critical cases C) Business is free and Enterprise is paid D) Enterprise customers receive 50% discounts on AWS services
Answer: B
Explanation: AWS Support plans include Basic (free), Developer, Business, Enterprise On-Ramp, and Enterprise. Business Support provides 24/7 phone and chat support with a 1-hour response SLA for system-down cases. Enterprise Support additionally provides a dedicated TAM (Technical Account Manager), Concierge support team, 15-minute response SLA for business-critical system downtime, Infrastructure Event Management, and Well-Architected reviews, making it suitable for mission-critical enterprise workloads.
Q59. What is the most important difference between "On-Demand Instances" and "Spot Instances" in AWS?
A) On-Demand instances are always faster than Spot instances B) Spot Instances use AWS's unused EC2 capacity at up to 90% cheaper pricing, but can be interrupted at any time when AWS needs the capacity back C) Spot Instances only support specific instance types D) On-Demand instances can be interrupted
Answer: B
Explanation: Spot Instances use AWS's unused EC2 capacity at prices up to 90% cheaper than On-Demand. However, AWS can interrupt the instance with a 2-minute warning when that capacity is needed. Therefore, they are suitable for interruption-tolerant workloads (big data processing, CI/CD, batch jobs, containerized workloads) but not for production web servers or other critical applications that cannot tolerate interruption. On-Demand instances pay as you go with no commitment and are never interrupted.
Q60. What is the primary function of AWS Budgets?
A) Monitors the technical performance of AWS services B) Set custom cost and usage budgets and send notifications when thresholds are exceeded C) Automatically provisions AWS infrastructure D) Automatically renews Reserved Instances
Answer: B
Explanation: AWS Budgets allows you to set custom budgets for AWS costs, usage, Reserved Instances, and Savings Plans utilization. You can receive email or SNS notifications when actual or forecasted costs exceed set thresholds (e.g., 80% or 100% of budget). AWS Budgets Actions can automatically apply IAM policies or stop EC2/RDS instances when a budget is exceeded. Up to two budgets are free per month, making it an accessible tool for cost governance and preventing bill surprises.
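The threshold-notification logic can be sketched as follows (illustrative only, not the Budgets API; the 80%/100% thresholds mirror the example in the explanation):

```python
# Returns which alert thresholds the current spend has crossed,
# e.g. for sending an email or SNS notification per threshold.
def crossed_thresholds(budget, actual, thresholds=(0.80, 1.00)):
    pct = actual / budget
    return [t for t in thresholds if pct >= t]

print(crossed_thresholds(budget=500.0, actual=420.0))  # [0.8]      -> 84% spent
print(crossed_thresholds(budget=500.0, actual=510.0))  # [0.8, 1.0] -> over budget
print(crossed_thresholds(budget=500.0, actual=100.0))  # []         -> no alert
```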
Additional Study Resources
- AWS Official Documentation: https://docs.aws.amazon.com
- AWS Skill Builder: AWS official online learning platform
- AWS Cloud Practitioner Essentials: Free digital course from AWS
- AWS Official Practice Exam: Available on AWS Training and Certification
- Exam Guide: Refer to the official AWS CLF-C02 exam guide
Key Points for Passing
- Thoroughly understand the Shared Responsibility Model
- Know the basic functions of core services (EC2, S3, RDS, Lambda, VPC)
- Clearly distinguish between pricing models (On-Demand, Reserved, Spot, Savings Plans)
- Memorize security services (IAM, Shield, WAF, KMS, CloudTrail) with their specific use cases
- Understand the concepts of global infrastructure (Regions, AZs, Edge Locations)