Authors
- Youngju Kim (@fjvbn20031)

Contents
- SCS-C02 Exam Overview
- Domain Breakdown
- AWS Security Core Services Summary
- Practice Exam — 65 Questions
- Study Resources
SCS-C02 Exam Overview
| Item | Details |
|---|---|
| Duration | 170 minutes |
| Questions | 65 |
| Passing Score | 750 / 1000 |
| Format | Multiple choice / multiple response |
| Cost | USD 300 |
Domain Breakdown
| Domain | Weight |
|---|---|
| Threat Detection and Incident Response | 14% |
| Security Logging and Monitoring | 18% |
| Infrastructure Security | 20% |
| Identity and Access Management | 16% |
| Data Protection | 18% |
| Management and Security Governance | 14% |
An AWS Security Specialist designs and implements security controls across cloud environments, detects threats, and ensures regulatory compliance.
AWS Security Core Services Summary
Threat Detection: GuardDuty (intelligent threat detection), Security Hub (unified dashboard), Detective (root cause analysis)
Logging: CloudTrail (API auditing), VPC Flow Logs (network traffic), CloudWatch Logs (operational logs)
Infrastructure Security: Network Firewall (stateful inspection), WAF (web application firewall), Shield (DDoS protection)
IAM: STS (temporary credentials), Identity Center (SSO), Organizations (SCP)
Data Protection: KMS (key management), CloudHSM (dedicated HSM), Macie (PII detection), Secrets Manager (secret management)
Governance: Config (compliance), Inspector (vulnerability scanning), Audit Manager (audit automation)
Practice Exam — 65 Questions
Domain 1: Threat Detection and Incident Response
Q1. A security team received a GuardDuty finding: UnauthorizedAccess:EC2/SSHBruteForce. What is the most effective way to automatically isolate the instance and create a forensic snapshot?
A) Send the GuardDuty finding to SNS and have an operator handle it manually B) Create an EventBridge rule to capture the GuardDuty finding and trigger a Lambda function that replaces the SG with an isolation SG and creates an EBS snapshot C) Analyze CloudTrail logs and manually block the IP address D) Use an AWS Config rule to detect non-compliant instances
Answer: B
Explanation: The EventBridge → Lambda pattern is the standard for automated response to GuardDuty findings. Lambda replaces the instance's security group with an isolation-only SG (all traffic denied) and calls CreateSnapshot for forensic EBS snapshots. SSM Run Command can also be used to capture memory dumps.
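The EventBridge → Lambda response pattern can be sketched as below. This is a minimal illustration, not a production runbook: the isolation security group ID is a placeholder, and the handler assumes the standard EventBridge envelope for GuardDuty findings.

```python
# Sketch of a Lambda responder for an SSHBruteForce finding. ISOLATION_SG is a
# hypothetical security group with no inbound or outbound rules.
ISOLATION_SG = "sg-0123456789abcdef0"

def extract_instance_id(event: dict) -> str:
    """Pull the affected instance ID out of an EventBridge-wrapped GuardDuty finding."""
    return event["detail"]["resource"]["instanceDetails"]["instanceId"]

def handler(event, context):
    import boto3  # imported lazily so the module loads without the SDK installed
    ec2 = boto3.client("ec2")
    instance_id = extract_instance_id(event)
    # Replace every SG on the instance with the isolation SG (blocks all traffic).
    ec2.modify_instance_attribute(InstanceId=instance_id, Groups=[ISOLATION_SG])
    # Snapshot each attached EBS volume for forensic analysis.
    volumes = ec2.describe_volumes(
        Filters=[{"Name": "attachment.instance-id", "Values": [instance_id]}]
    )
    for vol in volumes["Volumes"]:
        ec2.create_snapshot(
            VolumeId=vol["VolumeId"],
            Description=f"Forensic snapshot of {instance_id}",
        )
```

The EventBridge rule itself would match on `source: aws.guardduty` and the finding type, with this function as the target.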
Q2. A GuardDuty finding CryptoCurrency:EC2/BitcoinTool.B was generated. What does this finding indicate?
A) An EC2 instance has Bitcoin mining software installed B) The EC2 instance is communicating with IP addresses associated with Bitcoin mining pools C) Abnormal costs are occurring in the account D) An IAM user is calling cryptocurrency-related APIs
Answer: B
Explanation: CryptoCurrency:EC2/BitcoinTool.B indicates that an EC2 instance is communicating with IP addresses or domains associated with known Bitcoin-related activity. GuardDuty uses threat intelligence feeds to detect communication with known malicious IPs/domains.
Q3. A security team wants to centrally manage GuardDuty findings across multiple AWS accounts. What is the correct way to aggregate findings in a management account?
A) Configure GuardDuty S3 export in each account and replicate to a central bucket B) Designate a GuardDuty delegated administrator account and enable automatic enrollment for all Organizations member accounts C) Use Security Hub to aggregate GuardDuty findings D) Use CloudWatch Events to route findings to a central account
Answer: B
Explanation: When you designate a GuardDuty delegated administrator in AWS Organizations, findings from all member accounts are automatically aggregated in the delegated administrator account (commonly a dedicated security account). New accounts added to the organization automatically have GuardDuty enabled. Security Hub can also aggregate findings, but GuardDuty's native multi-account feature is the more complete solution for GuardDuty findings specifically.
Q4. You want to investigate a security incident using Amazon Detective. Which data source does Detective NOT use?
A) VPC Flow Logs B) CloudTrail logs C) GuardDuty findings D) AWS Config change history
Answer: D
Explanation: Amazon Detective automatically collects and analyzes VPC Flow Logs, CloudTrail logs, and GuardDuty findings. AWS Config change history is not a Detective data source. Detective uses graph-based analysis to visualize relationships between entities (IPs, AWS accounts, EC2 instances).
Q5. You receive a notification from the AWS Abuse team that an EC2 instance in your account is being used in external attacks. What is the correct first immediate response?
A) Delete the account immediately B) Terminate the instance immediately C) Move the instance to an isolation security group, create a snapshot, then investigate D) Contact AWS Support to request instance blocking
Answer: C
Explanation: Immediately terminating the instance destroys forensic evidence. The correct approach is to move the instance to an isolation SG that blocks all inbound/outbound traffic, create an EBS snapshot for forensic analysis, and preserve it. SSM Session Manager (via VPC Endpoint) can still be used to analyze the instance.
Q6. You enabled the CIS AWS Foundations Benchmark standard in Security Hub. Which of the following is NOT checked by this standard?
A) Whether MFA is enabled on the root account B) Whether CloudTrail multi-region is enabled C) Whether VPC Flow Logs are enabled D) Whether EC2 instances use the latest AMI
Answer: D
Explanation: The CIS AWS Foundations Benchmark checks basic security configurations for IAM, logging, monitoring, and networking. Root MFA, CloudTrail multi-region, and VPC Flow Logs are all CIS benchmark items. Checking for the latest AMI is within the scope of Amazon Inspector.
Q7. You enabled GuardDuty Malware Protection. What does this feature scan?
A) Files uploaded to S3 buckets B) EBS volumes attached to EC2 instances and container workloads C) Lambda function code D) RDS database files
Answer: B
Explanation: GuardDuty Malware Protection scans EBS volumes attached to EC2 instances and ECS/EKS container workloads to detect malware. It creates EBS volume snapshots and scans them in an isolated environment without requiring agents. Scanning S3 objects for malware is handled by the separate Malware Protection for S3 feature (GuardDuty's S3 Protection, by contrast, monitors S3 data-plane activity via CloudTrail data events).
Domain 2: Security Logging and Monitoring
Q8. You want to log GetObject events on an S3 bucket using CloudTrail. Which event type must you enable?
A) Management Events B) Data Events C) Insights Events D) Network Activity Events
Answer: B
Explanation: CloudTrail Data Events log object-level operations within S3 buckets (GetObject, PutObject, DeleteObject) and other direct resource operations such as Lambda function invocations and DynamoDB operations. Management Events log control plane operations for the account (instance creation, IAM policy changes, etc.).
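As an illustration, an event selector enabling S3 data events for one bucket might look like this (the trail and bucket names are placeholders):

```python
# CloudTrail event selector that turns on object-level (data) events for a
# single S3 bucket. The trailing slash on the ARN scopes logging to every
# object in the bucket.
data_event_selector = {
    "ReadWriteType": "All",
    "IncludeManagementEvents": True,
    "DataResources": [
        {
            "Type": "AWS::S3::Object",
            "Values": ["arn:aws:s3:::example-audit-bucket/"],
        }
    ],
}
# Applied with, e.g.:
# boto3.client("cloudtrail").put_event_selectors(
#     TrailName="my-trail", EventSelectors=[data_event_selector])
```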
Q9. You enabled CloudTrail Insights events. What does this feature detect?
A) API calls from malicious IPs B) IAM policy violations C) Unusual spikes or drops in API call frequency D) Failed authentication attempts
Answer: C
Explanation: CloudTrail Insights analyzes API usage patterns in the account to detect unusual activity (e.g., a sudden spike in DeleteSecurityGroup calls). It automatically learns a normal baseline and generates Insights events when deviations occur. GuardDuty handles malicious IP detection; IAM Access Analyzer handles policy analysis.
Q10. You examined a VPC Flow Log record: srcaddr=203.0.113.1, dstaddr=10.0.1.5, dstport=3389, action=ACCEPT. What security threat does this indicate?
A) Normal communication between internal systems B) RDP access from the internet to an EC2 instance is permitted — potential security vulnerability C) Remote access via VPN D) Outbound traffic through NAT Gateway
Answer: B
Explanation: 203.0.113.1 is a public IP address, dstport 3389 is RDP (Remote Desktop Protocol), and ACCEPT means this traffic was allowed. Allowing direct RDP access from the internet is a serious security vulnerability. RDP should only be accessible via VPN or Session Manager.
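A simple triage of such records can be expressed in a few lines; the port list and RFC 1918 check are an illustrative heuristic, not a complete detection rule:

```python
# Flag ACCEPTed traffic from outside RFC 1918 space to a sensitive admin port.
import ipaddress

RFC1918 = [ipaddress.ip_network(n) for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]
SENSITIVE_PORTS = {22, 3389}  # SSH, RDP

def is_internal(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in RFC1918)

def is_risky(record: dict) -> bool:
    return (
        record["action"] == "ACCEPT"
        and int(record["dstport"]) in SENSITIVE_PORTS
        and not is_internal(record["srcaddr"])
    )

print(is_risky({"srcaddr": "203.0.113.1", "dstaddr": "10.0.1.5",
                "dstport": "3389", "action": "ACCEPT"}))  # True
print(is_risky({"srcaddr": "10.0.2.9", "dstaddr": "10.0.1.5",
                "dstport": "3389", "action": "ACCEPT"}))  # False
```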
Q11. You need to audit whether sensitive files were deleted from a specific S3 bucket. What is the best log source for this task?
A) S3 Server Access Logging B) CloudTrail S3 data events C) CloudWatch metrics D) VPC Flow Logs
Answer: B
Explanation: CloudTrail S3 data events log object-level operations including DeleteObject, providing an accurate audit trail of who (IAM user/role), when, from where (IP), and which object was deleted. S3 server access logging also records deletions but does not provide as detailed IAM context as CloudTrail.
Q12. You enabled Route 53 Resolver DNS query logging. What security threat can be detected from these logs?
A) SQL injection attacks B) Data exfiltration via DNS tunneling C) Network port scanning D) Buffer overflow attacks
Answer: B
Explanation: DNS query log analysis can detect DNS tunneling (encoding data in DNS queries/responses), communication with C2 (Command and Control) servers, and access to known malicious domains. Abnormally long DNS query names or high query frequency are indicators of DNS tunneling.
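A toy version of the long-label indicator can be sketched as follows; the thresholds are illustrative, not tuned values from any real detector:

```python
# DNS tunneling often encodes data in long labels, so unusually long labels or
# total query names are a common (if crude) indicator.
def looks_like_tunneling(qname: str, max_label: int = 40, max_name: int = 120) -> bool:
    labels = qname.rstrip(".").split(".")
    return len(qname) > max_name or any(len(label) > max_label for label in labels)

print(looks_like_tunneling("www.example.com"))  # False
print(looks_like_tunneling(
    "aGVsbG8gd29ybGQgdGhpcyBpcyBleGZpbHRyYXRlZCBkYXRhMDEx.evil.example"))  # True
```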
Q13. You use CloudWatch Logs Insights to find root account usage in CloudTrail logs. Which approach is valid?
A) Only a query starting with filter userIdentity.type = "Root" works
B) Only CloudWatch metric filters work
C) Only Athena queries against CloudTrail logs in S3 work
D) All of the above approaches are valid
Answer: D
Explanation: Multiple methods can detect root account usage. CloudWatch Logs Insights is suited for real-time interactive queries; CloudWatch metric filters are suited for continuous monitoring and alarms; Athena is suited for large-scale long-term log analysis. CIS benchmarks recommend setting up alarms for root account usage.
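For reference, the interactive query and the continuous-monitoring filter might look like the following (the metric filter pattern matches the commonly recommended CIS-style root-usage filter):

```
# CloudWatch Logs Insights (interactive query):
fields @timestamp, userIdentity.arn, eventName
| filter userIdentity.type = "Root"
| sort @timestamp desc

# CloudWatch metric filter pattern (continuous monitoring + alarm):
{ $.userIdentity.type = "Root" && $.userIdentity.invokedBy NOT EXISTS && $.eventType != "AwsServiceEvent" }
```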
Q14. You blocked certain IP ranges in your WAF web ACL, but those IP requests still appear in ALB access logs. What is a possible cause?
A) WAF rule priority error B) No CloudFront in front of the ALB; WAF is only applied at the CloudFront level C) WAF is associated with API Gateway rather than the ALB D) IP set updates have not propagated yet
Answer: C
Explanation: A WAF web ACL protects only the resources it is associated with (ALB, CloudFront, API Gateway, AppSync). If the web ACL is associated only with API Gateway, the ALB is unprotected; to protect the ALB, the web ACL must be associated directly with it. Note also that web ACLs for ALB and API Gateway are regional resources, while web ACLs for CloudFront are global.
Domain 3: Infrastructure Security
Q15. In a multi-tier VPC architecture using both Security Groups (SG) and Network ACLs (NACL), which statement is correct?
A) NACLs are stateful and SGs are stateless B) SGs are stateful and NACLs are stateless; NACLs must explicitly allow outbound ephemeral ports (1024-65535) C) Both services operate in a stateful manner D) SGs apply at the subnet level; NACLs apply at the instance level
Answer: B
Explanation: Security groups are stateful — return traffic is automatically allowed. NACLs are stateless — inbound and outbound rules must be configured separately. When a client sends a request on port 80, the server's response goes back to the client's ephemeral port (1024-65535), so NACL outbound rules must allow that range.
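The stateless behavior can be demonstrated with a small rule evaluator; the rule tuples and numbering here are a simplified model of NACL semantics, not an AWS API:

```python
# Simplified NACL outbound evaluation: rules are (rule_no, port_from, port_to,
# action) and the lowest-numbered matching rule wins; no match = implicit deny.
def nacl_allows(rules, port):
    for _, lo, hi, action in sorted(rules):
        if lo <= port <= hi:
            return action == "ALLOW"
    return False  # the implicit '*' deny rule

outbound = [(100, 1024, 65535, "ALLOW")]  # ephemeral range for return traffic
print(nacl_allows(outbound, 50000))  # True: server's reply to a client ephemeral port
print(nacl_allows(outbound, 80))     # False: no outbound rule covers port 80
```

Because the NACL keeps no connection state, the reply to an inbound port-80 request would be silently dropped without that explicit ephemeral-range rule.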
Q16. You use Suricata-compatible rules in an AWS Network Firewall stateful rule group. Which is a valid Suricata rule format?
A) ALLOW tcp any any -> 10.0.0.0/8 80 B) alert http any any -> any any (msg:"Blocked User-Agent"; http.user_agent; content:"BadBot"; sid:1000001;) C) DENY UDP 0.0.0.0/0 53 any any D) block tcp external any to internal 443
Answer: B
Explanation: AWS Network Firewall's stateful rule engine supports Suricata-compatible rules. The correct Suricata rule format is: action protocol src_ip src_port direction dst_ip dst_port (options). Example B detects HTTP requests containing a specific User-Agent string.
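A rough shape-check for that header format can be written as a regex; this validates only the overall structure (action, protocol, addresses, ports, direction, options), not full Suricata semantics:

```python
import re

# Loose structural check for a Suricata rule:
#   action protocol src_ip src_port direction dst_ip dst_port (options)
RULE_RE = re.compile(
    r"^(alert|pass|drop|reject)\s+\S+\s+\S+\s+\S+\s+(->|<>)\s+\S+\s+\S+\s+\(.*\)$"
)

rule = ('alert http any any -> any any '
        '(msg:"Blocked User-Agent"; http.user_agent; content:"BadBot"; sid:1000001;)')
print(bool(RULE_RE.match(rule)))  # True
print(bool(RULE_RE.match("ALLOW tcp any any -> 10.0.0.0/8 80")))  # False: not Suricata syntax
```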
Q17. An AWS Shield Advanced subscriber wants billing protection for EC2 Auto Scaling costs that spike due to a DDoS attack. What must they do?
A) Report the DDoS attack to AWS Support B) Enable cost protection in the Shield Advanced console and request DRT (DDoS Response Team) engagement C) Enable anomaly detection in AWS Cost Explorer D) Check Trusted Advisor cost optimization recommendations
Answer: B
Explanation: Shield Advanced provides service credits for cost spikes on EC2, ELB, CloudFront, and Route 53 caused by DDoS attacks. Shield Advanced protection must be active for the affected resources, and the DRT must confirm the attack. The DRT is a specialized team that supports real-time attack mitigation.
Q18. What is the correct way to enforce IMDSv2 (Instance Metadata Service v2)?
A) Deny imds:GetToken in an IAM policy B) Set HttpTokens=required when launching EC2 instances, or run modify-instance-metadata-options on existing instances C) Restrict metadata service access via a VPC endpoint D) Block traffic to 169.254.169.254 in security groups
Answer: B
Explanation: IMDSv2 uses a session-oriented approach requiring a PUT request to first obtain a token, then a GET request including that token. This protects metadata from SSRF (Server-Side Request Forgery) attacks. Existing instances can enforce it with: aws ec2 modify-instance-metadata-options --http-tokens required.
Q19. You want to apply WAF policies organization-wide using AWS Firewall Manager. What are the prerequisites?
A) Enable AWS Config in all accounts B) Enable AWS Organizations, designate a Firewall Manager administrator account, and enable AWS Config in each account C) Enable Security Hub and apply CIS standards D) Enable GuardDuty and designate a delegated administrator
Answer: B
Explanation: Prerequisites for AWS Firewall Manager: 1) AWS Organizations enabled, 2) A Firewall Manager administrator account designated from the Organizations management account, 3) AWS Config enabled in all accounts where policies will be applied. AWS Config is required for Firewall Manager to evaluate resource configurations and apply policies.
Q20. What is the primary use case for Gateway Load Balancer (GWLB)?
A) Load balancing web applications B) Transparently inserting third-party network virtual appliances (IDS/IPS, firewalls) for traffic inspection C) Distributing API Gateway requests D) Database connection pooling
Answer: B
Explanation: Gateway Load Balancer uses the GENEVE protocol to send traffic to third-party network virtual appliances (Palo Alto, Fortinet, etc.) for inspection, then forwards it to the original destination. It enables transparent packet-level traffic inspection and supports horizontal scaling of appliances.
Q21. How do you ensure EC2 instances in a VPC access S3 without traversing the internet?
A) Use a NAT Gateway B) Create an S3 gateway endpoint and add it to the route table C) Set up a VPN connection D) Put a CloudFront caching layer in front of S3
Answer: B
Explanation: An S3 gateway endpoint is free and ensures traffic from the VPC to S3 stays entirely within the AWS network. Adding the S3 endpoint route to the route table bypasses the internet gateway and NAT Gateway. You can also add a bucket policy condition aws:SourceVpce to restrict access to a specific VPC endpoint.
Domain 4: Identity and Access Management
Q22. Which represents the correct IAM policy evaluation order?
A) Allow → Deny → implicit deny B) Explicit Deny → Organizations SCP → Permission Boundary → Session Policy → Identity-based Policy → Resource-based Policy → implicit deny C) Resource policy → IAM policy → SCP → implicit allow D) Permission Boundary → SCP → IAM policy → implicit deny
Answer: B
Explanation: IAM policy evaluation logic: 1) Explicit Deny immediately denies, 2) Organizations SCP must allow, 3) Permission Boundary must allow, 4) Session policy must allow, 5) Either identity-based or resource-based policy allows (same account), 6) Implicit deny if none of the above conditions are met.
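The same-account portion of this logic can be modeled with sets of allowed actions; this is a deliberately simplified sketch (no wildcards, conditions, or cross-account cases):

```python
# Simplified same-account policy evaluation in the order listed above.
# Each gate is a set of allowed actions, or None if that policy type is not in use.
def is_allowed(action, explicit_deny, scp, boundary, session, identity, resource):
    if action in explicit_deny:
        return False                          # 1. explicit Deny always wins
    for gate in (scp, boundary, session):     # 2-4. every applicable gate must allow
        if gate is not None and action not in gate:
            return False
    # 5. identity-based OR resource-based policy must allow (same account);
    # 6. otherwise the implicit deny applies.
    return action in identity or action in resource

print(is_allowed("s3:GetObject", set(), {"s3:GetObject"}, None, None,
                 {"s3:GetObject"}, set()))   # True
print(is_allowed("s3:GetObject", {"s3:GetObject"}, {"s3:GetObject"}, None, None,
                 {"s3:GetObject"}, set()))   # False: explicit deny wins
```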
Q23. A developer wants to assume an IAM role in another AWS account to access resources. What is the minimum configuration required for cross-account access?
A) Only set an IAM policy in the developer's account B) Specify the developer's account as a Principal in the target role's trust policy, and grant sts:AssumeRole on the developer's IAM user/role C) Set up VPC peering between the two accounts D) Connect the two accounts with AWS Direct Connect
Answer: B
Explanation: Cross-account access requires bilateral trust: 1) The target account's IAM role trust policy must specify the source account or specific IAM entity as a Principal, 2) The source account's IAM user/role must have sts:AssumeRole permission for the target role. Both conditions must be met.
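Both halves of that trust relationship look roughly like the following policy documents (account IDs and the role name are placeholders):

```python
# TARGET account (222222222222): trust policy on the role being assumed.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111111111111:root"},  # source account
        "Action": "sts:AssumeRole",
    }],
}

# SOURCE account (111111111111): identity policy on the developer's user/role.
identity_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Resource": "arn:aws:iam::222222222222:role/CrossAccountRole",
    }],
}
```

Access works only when both documents are in place: the trust policy alone lets no one in, and the identity policy alone is refused by the target role.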
Q24. Which statement about Permission Boundaries is correct?
A) They define the maximum permissions an IAM user can receive and automatically allow all actions within the boundary B) They are an advanced feature that defines the maximum permissions that can be granted to an IAM entity; effective permissions are the intersection of the boundary and identity-based policies C) They are an additional authorization mechanism to grant access to resources D) They are equivalent to Organizations SCPs
Answer: B
Explanation: A permission boundary sets the maximum permissions ceiling for an IAM user or role. The effective permissions are the intersection (AND) of the permission boundary and identity-based policies. A permission boundary alone does not grant permissions — it must be used together with identity-based policies. It is useful for delegating IAM management to developers while preventing excessive permission grants.
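The intersection behavior is easy to see with action sets (a simplified model that ignores resources and conditions):

```python
# Effective permissions = boundary AND identity-based policy.
boundary = {"s3:GetObject", "s3:PutObject", "dynamodb:GetItem"}
identity_policy = {"s3:GetObject", "iam:CreateUser"}

effective = boundary & identity_policy
print(effective)  # {'s3:GetObject'}: iam:CreateUser exceeds the boundary, so it is denied
```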
Q25. What is the difference between a Cognito User Pool and an Identity Pool?
A) User Pools provide temporary AWS credentials for resource access; Identity Pools manage user directories B) User Pools handle user authentication and directory management (login, registration, MFA); Identity Pools exchange authenticated tokens for temporary AWS credentials C) User Pools are for mobile apps only; Identity Pools are for web apps only D) The two features are identical; only the use case differs
Answer: B
Explanation: Cognito User Pools are user authentication services (JWT token issuance, user registration/login, MFA, social login federation). Cognito Identity Pools exchange User Pool tokens or social IdP tokens via AWS STS for temporary AWS credentials (Access Key, Secret Key, Session Token). The two services are typically used together.
Q26. An SCP in AWS Organizations blocks resource creation outside certain regions. Does this SCP apply to the root user of the management account?
A) Yes, SCPs apply to all entities B) No, SCPs do not apply to the management account C) Yes, but the root user has privileges to bypass SCPs D) It depends on the Organizations configuration
Answer: B
Explanation: SCPs apply to IAM users, roles, and root users of member accounts. However, SCPs do NOT apply to the management account (master account) of AWS Organizations. This is one reason why sensitive operations should not be performed from the management account — it should be used solely for Organizations management.
Q27. You enabled IAM Access Analyzer. What does this service analyze?
A) Syntax errors in IAM policies B) AWS resources shared with external entities (other accounts, internet) C) Unused IAM permissions D) Root account activity monitoring
Answer: B
Explanation: IAM Access Analyzer identifies resources accessible by entities outside the zone of trust (trusted account scope). Analyzed resources include S3 buckets, IAM roles, KMS keys, Lambda functions, SQS queues, and Secrets Manager secrets. It finds publicly accessible S3 buckets or IAM roles that can be assumed by other accounts.
Q28. You want to implement SSO access to the AWS Console using an enterprise IdP (ADFS) with SAML 2.0. What is the correct flow?
A) User → AWS → ADFS → credential validation → AWS Console B) User → ADFS authentication → SAML assertion → AWS STS AssumeRoleWithSAML → temporary credentials → AWS Console C) User → Cognito → ADFS → AWS Console D) User → AWS IAM → ADFS → Console
Answer: B
Explanation: SAML federation flow: 1) User authenticates with ADFS, 2) ADFS issues a SAML assertion (including role information), 3) The user's browser sends the SAML assertion to AWS STS's AssumeRoleWithSAML endpoint, 4) STS issues temporary credentials, 5) Access AWS Console. The IdP must be registered as a SAML provider in AWS IAM.
Domain 5: Data Protection
Q29. Which statement correctly describes the relationship between KMS CMK key policies and IAM policies?
A) IAM policies alone can grant CMK usage permissions B) Key policies take precedence; IAM policies are ignored C) The key policy must explicitly allow IAM usage before IAM policies can control access to the CMK D) Key policies and IAM policies operate independently
Answer: C
Explanation: KMS CMK access cannot be controlled by IAM policies alone without a key policy. The key policy must include "Principal": {"AWS": "arn:aws:iam::ACCOUNT-ID:root"} with kms:* permission so that IAM policies in the account can control the CMK. The default key policy allows full access to the root account, enabling IAM policy-based control.
Q30. Which statement correctly describes Envelope Encryption?
A) Data is double-encrypted with two different KMS keys B) A KMS CMK (data key encryption key) encrypts the data key; the data key encrypts the actual data; the encrypted data key is stored alongside the data C) Data is stored in S3 and then the bucket is encrypted with KMS D) Data in transit is protected with SSL/TLS
Answer: B
Explanation: Envelope encryption flow: 1) Call KMS GenerateDataKey API to get a plaintext data key and an encrypted data key, 2) Encrypt data with the plaintext data key, 3) Immediately delete the plaintext data key from memory, 4) Store the encrypted data key alongside the encrypted data. For decryption, call KMS Decrypt to recover the data key, then decrypt the data. Envelope encryption is used because directly encrypting large data with KMS has a 4KB limit.
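The flow can be simulated end to end with a toy XOR cipher standing in for both AES and the KMS GenerateDataKey/Decrypt calls; XOR is used here only so the example runs anywhere and must never be used as real encryption:

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    """Toy stand-in for a real cipher; NOT cryptographically secure."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

master_key = secrets.token_bytes(32)        # stands in for the KMS key; never leaves "KMS"

# 1. "GenerateDataKey": a plaintext data key plus the same key wrapped by the master key.
data_key = secrets.token_bytes(32)
encrypted_data_key = xor(data_key, master_key)

# 2. Encrypt the payload with the plaintext data key, then discard that key.
ciphertext = xor(b"customer record", data_key)
del data_key

# 3. Store ciphertext alongside encrypted_data_key. To decrypt, "KMS" unwraps the key.
recovered_key = xor(encrypted_data_key, master_key)  # stands in for kms:Decrypt
print(xor(ciphertext, recovered_key))  # b'customer record'
```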
Q31. When using SSE-KMS encryption on an S3 bucket, what IAM permissions does the object uploader need?
A) Only s3:PutObject B) Both s3:PutObject and kms:GenerateDataKey C) s3:PutObject, kms:GenerateDataKey, and kms:Decrypt D) Only kms:Encrypt
Answer: B
Explanation: When uploading objects to S3 with SSE-KMS encryption: 1) s3:PutObject — permission to write objects to S3, 2) kms:GenerateDataKey — permission to generate a data key from KMS. For downloads, s3:GetObject and kms:Decrypt are required.
Q32. Comparing CloudHSM and KMS, when should you choose CloudHSM?
A) When minimizing cost is the priority B) When FIPS 140-2 Level 3 certification, single-tenant HSM, full control of keys, or Oracle TDE/PKCS#11 support is required C) When simple S3 encryption is needed D) When automatic key rotation is needed
Answer: B
Explanation: Reasons to choose CloudHSM: 1) FIPS 140-2 Level 3 certification required (KMS was historically validated at Level 2; AWS KMS HSMs have since achieved Level 3, but KMS remains a multi-tenant service), 2) Single-tenant dedicated HSM required, 3) Regulatory requirement that AWS cannot access key material, 4) Oracle DB Transparent Data Encryption (TDE), 5) PKCS#11, JCE, or CNG API support needed. KMS excels in ease of management and AWS service integration.
Q33. Which of the following is NOT a sensitive data type detected by Amazon Macie in S3?
A) Credit card numbers B) AWS access keys C) Social Security Numbers (SSN) D) Encrypted password hashes
Answer: D
Explanation: Amazon Macie uses machine learning to automatically detect sensitive data in S3. Detectable types include PII (SSN, passport numbers, driver's license numbers), financial information (credit card numbers, bank account numbers), credentials (AWS access keys, API keys, OAuth tokens), and medical information. Encrypted hashes cannot identify the original data, so they are not classified as sensitive data.
Q34. What are the key differences between AWS Secrets Manager and SSM Parameter Store SecureString?
A) Only Secrets Manager supports KMS encryption B) Secrets Manager provides automatic rotation, cross-account access, and native RDS/Redshift/DocumentDB integration and has a cost; Parameter Store SecureString is free but has no automatic rotation C) Parameter Store integrates with more services D) The two services have identical features; only pricing differs
Answer: B
Explanation: Secrets Manager advantages: 1) Automatic secret rotation (using Lambda functions), 2) Native support for automatic rotation of RDS, Redshift, and DocumentDB passwords, 3) Cross-account access, 4) Secret version management. Cost: ~$0.40/secret/month. Parameter Store SecureString: free (standard parameters), KMS encryption supported, but no automatic rotation. Recommended: Secrets Manager for database passwords, Parameter Store for configuration values.
Q35. When is DSSE-KMS (Dual-layer Server-Side Encryption with KMS) used?
A) To simultaneously store data in two S3 buckets B) When dual encryption layers required by CNSSI 1253 and FIPS 200 are needed (protecting S3 objects with two independent encryption layers) C) To apply both client-side and server-side encryption simultaneously D) To replicate data across multiple AWS regions
Answer: B
Explanation: DSSE-KMS applies two independent KMS encryption layers to S3 objects. It is used when specific compliance requirements (CNSSI 1253, DoD Mission Assurance Category) mandate dual encryption, such as for certain US government agencies. Each layer uses an independent data encryption key (DEK).
Domain 6: Management and Security Governance
Q36. You configured AUTO_REMEDIATION in an AWS Config rule. How does automatic remediation work when a non-compliant resource is detected?
A) AWS Config directly modifies the resource B) The Config rule triggers an SSM Automation document to execute remediation actions C) A Lambda function is directly invoked D) CloudFormation StackSets perform the remediation
Answer: B
Explanation: AWS Config automatic remediation uses SSM (Systems Manager) Automation documents. A remediation action is linked to the Config rule, and when a non-compliant resource is detected, the specified SSM Automation document is executed. Example: when the s3-bucket-public-read-prohibited rule is violated, the AWS-DisableS3BucketPublicReadWrite automation document runs to block public access.
Q37. Which of the following is NOT a scan type supported by Amazon Inspector v2?
A) EC2 instance OS vulnerabilities (CVE) B) ECR container image vulnerabilities C) Lambda function code vulnerabilities D) S3 bucket configuration vulnerabilities
Answer: D
Explanation: Amazon Inspector v2 scan targets: 1) EC2 instances — OS package vulnerabilities (CVE) and network exposure, 2) ECR container images — OS package vulnerabilities in images, 3) Lambda functions — software vulnerabilities in function code and layers. S3 bucket configurations are checked by AWS Config or Security Hub.
Q38. What is the difference between a Preventive Guardrail and a Detective Guardrail in AWS Control Tower?
A) Preventive guardrails use AWS Config rules; detective guardrails use SCPs B) Preventive guardrails use SCPs to outright block non-compliant actions; detective guardrails use AWS Config rules to detect and report non-compliant states C) The two types are the same; only the naming differs D) Preventive guardrails use CloudWatch alarms; detective guardrails use EventBridge rules
Answer: B
Explanation: Control Tower guardrail types: 1) Preventive — use SCPs to block non-compliant actions themselves (e.g., prohibiting root account access key creation), 2) Detective — use AWS Config managed rules to detect non-compliant states and generate alerts (e.g., detecting public read access on S3 buckets). Preventive is more powerful but not all policies can be implemented preventively.
Q39. What is the primary purpose of AWS Audit Manager?
A) Automatic remediation of security vulnerabilities B) Continuously auto-collecting audit evidence to simplify compliance reporting for PCI-DSS, HIPAA, SOC 2, etc. C) Account cost analysis and optimization D) Network traffic analysis
Answer: B
Explanation: AWS Audit Manager automatically collects audit evidence from AWS Config, CloudTrail, Security Hub, and other services to automate compliance reporting. It provides pre-built assessment templates for major frameworks including PCI-DSS, HIPAA, GDPR, SOC 2, and ISO 27001. It significantly reduces the time audit teams spend manually gathering evidence.
Q40. Why would you use Resource Access Manager (RAM)?
A) To share IAM role permissions with other accounts B) To share AWS resources (subnets, Transit Gateway, License Manager configurations, etc.) with other accounts in an organization, preventing duplicate resource creation C) To replicate data between S3 buckets D) To create multi-account CloudWatch dashboards
Answer: B
Explanation: AWS RAM shares VPC subnets, Transit Gateways, Route 53 Resolver rules, License Manager configurations, AWS Glue catalogs, and more with other accounts in AWS Organizations or specific accounts. Example: sharing VPC subnets from a central networking account with application accounts allows each account to deploy resources into the same subnet, enabling centralized network management.
Advanced Scenario Questions
Q41. A company wants to centrally manage S3 public access blocking, WAF policy application, and security group policies across a multi-account AWS environment. What is the most efficient approach?
A) Configure settings manually in each account B) Use AWS Firewall Manager to define central security policies and apply them automatically C) Deploy configurations to each account using CloudFormation StackSets D) Use an AWS Config Aggregator to detect non-compliant accounts and fix them manually
Answer: B
Explanation: AWS Firewall Manager automatically applies consistent security policies across an entire Organization. Supported policy types: 1) WAF policies (ALB, API GW, CloudFront), 2) Shield Advanced, 3) VPC security group policies, 4) Network Firewall, 5) Route 53 DNS Firewall, 6) S3 bucket policies. When new accounts are added to Organizations, policies are automatically applied.
Q42. An attacker has stolen IAM role credentials from an EC2 instance and is attempting to access other resources. Which GuardDuty finding would likely be generated?
A) UnauthorizedAccess:IAMUser/ConsoleLoginSuccess.B B) UnauthorizedAccess:IAMUser/InstanceCredentialExfiltration.OutsideAWS C) Policy:IAMUser/RootCredentialUsage D) Recon:IAMUser/UserPermissions
Answer: B
Explanation: GuardDuty detects when temporary credentials issued via EC2 instance metadata are used from outside that instance (a different IP). UnauthorizedAccess:IAMUser/InstanceCredentialExfiltration.OutsideAWS is generated when EC2 role credentials are used from an IP address outside AWS. This is a strong indicator of credential theft (via SSRF, instance compromise, etc.).
Q43. Data stored in an S3 bucket is encrypted with a KMS customer-managed key. You accidentally scheduled key deletion with the 7-30 day waiting period. What happens to the data?
A) Data is immediately inaccessible during the waiting period B) During the waiting period, the key is disabled so new encryption is not possible, but existing data can still be decrypted; after key deletion, decryption is permanently impossible C) After key deletion, S3 automatically re-encrypts with a new key via automatic key rotation D) Before key deletion, data is automatically migrated to SSE-S3
Answer: B
Explanation: During the KMS key deletion waiting period (7-30 days), the key is disabled and cannot be used for new encryption operations, but data already encrypted with it can still be decrypted. The waiting period is a safety mechanism against accidental deletion. Once the key is fully deleted, data encrypted with it can never be decrypted again. Since disabling a key (unlike deleting it) is reversible at any time, prefer disabling the key first rather than scheduling deletion.
Q44. EC2 instances in a private subnet are managed via SSM Session Manager. What VPC endpoints are required for Session Manager to function?
A) Only the ssm endpoint is required B) Three interface endpoints are required: com.amazonaws.region.ssm, com.amazonaws.region.ssmmessages, and com.amazonaws.region.ec2messages C) Only an S3 gateway endpoint is required D) A NAT Gateway makes VPC endpoints unnecessary
Answer: B
Explanation: VPC endpoints needed for SSM Session Manager in private subnets: 1) com.amazonaws.region.ssm — primary SSM service endpoint, 2) com.amazonaws.region.ssmmessages — Session Manager channel creation/management, 3) com.amazonaws.region.ec2messages — communication between SSM Agent and SSM service. The com.amazonaws.region.s3 gateway endpoint is also recommended for patch management.
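As a sketch, the three required endpoint service names follow a fixed naming pattern, so they can be generated for any region (the helper below is illustrative, not an AWS API; each name would be passed to EC2's CreateVpcEndpoint as an Interface endpoint):

```python
# Illustrative helper: build the interface endpoint service names that
# SSM Session Manager requires for instances in a private subnet.
REQUIRED_SSM_SERVICES = ("ssm", "ssmmessages", "ec2messages")

def session_manager_endpoints(region: str) -> list[str]:
    """Return the VPC interface endpoint service names for Session Manager."""
    return [f"com.amazonaws.{region}.{svc}" for svc in REQUIRED_SSM_SERVICES]

print(session_manager_endpoints("us-east-1"))
```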
Q45. You want to configure GitHub Actions to access AWS via OIDC. What is the security benefit of this approach?
A) GitHub Actions automatically acquires administrator privileges B) No long-term AWS credentials (access keys) need to be stored in GitHub Secrets; short-term temporary credentials are obtained dynamically C) AWS account information is shared with GitHub D) All GitHub workflows are granted the same AWS permissions
Answer: B
Explanation: OIDC (OpenID Connect) integration for GitHub Actions: By registering GitHub as an IAM OIDC provider and configuring an IAM role, the workflow obtains a JWT token issued by GitHub and calls AssumeRoleWithWebIdentity to acquire short-term temporary credentials at runtime. No long-term access keys are stored in GitHub Secrets, eliminating credential leak risk. The IAM role trust policy can restrict access to specific repositories/branches.
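The repository and branch restriction mentioned above lives in the role's trust policy. A minimal sketch, assuming a placeholder account ID and repository:

```python
import json

# Hedged sketch of an IAM role trust policy that lets only one GitHub
# repository's main branch assume the role via OIDC. The account ID and
# repository below are hypothetical placeholders.
ACCOUNT_ID = "123456789012"   # hypothetical account
REPO = "my-org/my-repo"       # hypothetical repository

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {
            "Federated": (f"arn:aws:iam::{ACCOUNT_ID}:"
                          "oidc-provider/token.actions.githubusercontent.com")
        },
        "Action": "sts:AssumeRoleWithWebIdentity",
        "Condition": {
            "StringEquals": {
                # aud must match the audience the workflow requests
                "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
                # sub pins the role to one repository and branch
                "token.actions.githubusercontent.com:sub":
                    f"repo:{REPO}:ref:refs/heads/main",
            }
        },
    }],
}

print(json.dumps(trust_policy, indent=2))
```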
Q46. Which S3 bucket policy condition key allows access only through a VPC endpoint?
A) aws:SourceVpc B) aws:SourceVpce C) aws:VpcId D) s3:VpcEndpoint
Answer: B
Explanation: aws:SourceVpce restricts access to a specific VPC endpoint ID, while aws:SourceVpc restricts access to a specific VPC ID. A bucket policy that denies every request whose aws:SourceVpce does not match the endpoint ID makes the bucket reachable only through that VPC endpoint.
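The pattern from the explanation can be sketched as a bucket policy document; the bucket name and endpoint ID below are placeholders:

```python
import json

# Sketch of the "deny unless via VPC endpoint" bucket policy pattern.
# Bucket name and endpoint ID are hypothetical placeholders.
BUCKET = "my-internal-bucket"
VPCE_ID = "vpce-0123456789abcdef0"

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnlessThroughVpce",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            f"arn:aws:s3:::{BUCKET}",
            f"arn:aws:s3:::{BUCKET}/*",
        ],
        # The explicit Deny applies whenever the request did NOT arrive
        # through the named VPC endpoint.
        "Condition": {"StringNotEquals": {"aws:SourceVpce": VPCE_ID}},
    }],
}

print(json.dumps(bucket_policy, indent=2))
```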
Q47. When should you use KMS key grants instead of key policies?
A) When permanent access permissions are required B) When temporarily or programmatically delegating key usage permissions to specific AWS services (EBS, RDS, etc.) C) When managing keys from another account D) When preventing key deletion
Answer: B
Explanation: KMS grants are temporary permissions allowing a specific principal to perform only certain key operations (Decrypt, Encrypt, etc.). AWS services (EBS, RDS, Redshift) automatically create grants with their service-linked roles when using customer managed KMS keys. Grants can be created and revoked programmatically for flexible access control; they are ended with RetireGrant (typically called by the grantee once it no longer needs the grant) or RevokeGrant (called by a key administrator to cut off access).
Q48. You are investigating a large-scale data breach in which millions of S3 objects were deleted. What is the fastest way to identify the IAM entity that performed the deletions?
A) Analyze S3 server access logs B) Use Athena to query CloudTrail logs for DeleteObject/DeleteObjects events C) Check S3 deletion operations in CloudWatch metrics D) Analyze VPC Flow Logs
Answer: B
Explanation: The CloudTrail + Athena combination is effective for large-scale log analysis (CloudTrail S3 data events must be enabled for object-level deletions to be logged). Create an Athena table over the S3 bucket storing CloudTrail logs and query: `SELECT userIdentity.arn, eventTime, requestParameters.bucketName, requestParameters.key FROM cloudtrail_logs WHERE eventName IN ('DeleteObject', 'DeleteObjects') AND eventTime BETWEEN '...' AND '...'`
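The query can be assembled programmatically before being submitted through Athena's StartQueryExecution API; the table name, column layout, and time window below are illustrative assumptions about how the CloudTrail table was defined:

```python
# Sketch: build the investigation query as a string. The table
# cloudtrail_logs and the time window are hypothetical; the resulting
# string would be passed to Athena's StartQueryExecution API.
TABLE = "cloudtrail_logs"                                  # assumed table
START, END = "2024-01-01T00:00:00Z", "2024-01-02T00:00:00Z"  # placeholder window

query = f"""
SELECT userIdentity.arn, eventTime,
       requestParameters.bucketName, requestParameters.key
FROM {TABLE}
WHERE eventName IN ('DeleteObject', 'DeleteObjects')
  AND eventTime BETWEEN '{START}' AND '{END}'
ORDER BY eventTime
"""

print(query)
```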
Q49. What is an advantage of using certificates issued by ACM Private CA?
A) Internet trust chain is automatically included B) Enables mTLS between internal microservices, issuance of private certificates for internal applications, and management of a private CA hierarchy C) Free to use D) Can issue certificates for all public domains
Answer: B
Explanation: ACM Private CA (Certificate Authority) builds internal PKI infrastructure. Used for mTLS between internal services, internal API authentication, and IoT device certificate management. There is no public trust chain, so internet users do not trust it, but internal systems can be configured to trust the root CA.
Q50. Which of the following is NOT a design principle of the AWS Well-Architected Framework's Security pillar?
A) Implement a strong identity foundation B) Enable traceability C) Apply a single security layer to simplify management D) Prepare for security events
Answer: C
Explanation: The 7 design principles of the AWS Well-Architected Security pillar: 1) Implement a strong identity foundation, 2) Enable traceability, 3) Apply security at all layers (defense in depth), 4) Automate security best practices, 5) Protect data in transit and at rest, 6) Keep people away from data, 7) Prepare for security events. "Single security layer" contradicts the defense-in-depth principle.
Q51. A GuardDuty finding Trojan:EC2/DNSDataExfiltration was generated. What does this mean?
A) Traffic from EC2 instance to malicious DNS server B) An EC2 instance is exfiltrating data through DNS queries C) A DNS server is under DDoS attack D) Route 53 Resolver is receiving malicious queries
Answer: B
Explanation: DNS data exfiltration encodes data in the subdomain portion of DNS queries and sends it to an attacker-controlled DNS server, e.g. queries like `dGhpcyBpcyBzZWNyZXQ.malicious.com`. Because firewalls often allow DNS (port 53) even when blocking HTTP/HTTPS, this technique is frequently used. Route 53 DNS Firewall or Network Firewall can block known malicious domains.
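A toy illustration of the encoding (not an attack tool): data is encoded into DNS-safe characters and split into labels of at most 63 bytes, the DNS label limit. The domain is a placeholder:

```python
import base64

# Toy model of DNS exfiltration encoding. The domain is a placeholder.
EVIL_DOMAIN = "malicious.example"

def exfil_queries(secret: bytes, max_label: int = 63) -> list[str]:
    """Base32-encode data and split it into DNS-label-sized chunks."""
    # Base32 keeps the payload within the DNS-safe character set.
    encoded = base64.b32encode(secret).decode().rstrip("=").lower()
    chunks = [encoded[i:i + max_label] for i in range(0, len(encoded), max_label)]
    return [f"{chunk}.{EVIL_DOMAIN}" for chunk in chunks]

for q in exfil_queries(b"this is secret"):
    print(q)
```

Defensively, this is why unusual volumes of long, high-entropy subdomains in Route 53 Resolver query logs are a signal worth alerting on.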
Q52. What does an AWS Config Aggregator provide?
A) Automatic deployment of the same Config rules to all accounts B) Aggregating AWS Config data from multiple accounts/regions into a single account for centralized compliance visibility C) Automatic remediation of non-compliant resources D) Reduced Config rule execution costs
Answer: B
Explanation: An AWS Config Aggregator allows centralized viewing of Config resource configurations and compliance data from multiple Organizations member accounts or specific accounts in a single aggregation account. Useful for organization-wide compliance dashboards and resource inventory. The aggregator itself does not provide automatic remediation or rule deployment.
Q53. An IAM role attached to an EC2 instance has the AdministratorAccess policy. What is the best approach to reduce this risk?
A) Remove the IAM role and use hardcoded credentials B) Apply the principle of least privilege: create a custom policy allowing only the services and actions the application actually needs, and set permission boundaries C) Add MFA to the EC2 role D) Minimize the role session duration
Answer: B
Explanation: The principle of least privilege is a security fundamental. IAM Access Analyzer's "policy generation" feature can analyze CloudTrail logs and automatically create a minimum-privilege policy based on only the API calls that were actually used. Adding a permission boundary prevents excessive permissions from being granted to the role in the future.
Q54. What is the best practice for securely managing database passwords in a Lambda function?
A) Store the plaintext password in environment variables B) Store KMS-encrypted values in Lambda environment variables, or dynamically retrieve from Secrets Manager at runtime with a caching strategy C) Hardcode the password in the Lambda function code D) Store an encrypted file in an S3 bucket
Answer: B
Explanation: Recommended approach: store the password in AWS Secrets Manager and retrieve it with the SDK at runtime. Cache the value outside the handler so that warm invocations of the same execution environment reuse it, instead of calling the Secrets Manager API on every invocation. Parameter Store SecureString is an alternative. Lambda environment variables are encrypted at rest with KMS, but Secrets Manager provides additional security features like automatic rotation and an audit trail.
Q55. Which of the following is the customer's responsibility in the AWS Shared Responsibility Model?
A) Physical data center security B) Hypervisor patching C) Operating system patching, IAM configuration, security group settings, data encryption D) Global infrastructure availability
Answer: C
Explanation: AWS Shared Responsibility Model: AWS responsibility (Security OF the Cloud) — physical infrastructure, hypervisor, global network, patching of managed services. Customer responsibility (Security IN the Cloud) — guest OS patching, IAM configuration, security groups/NACLs, data encryption, network traffic protection, application security. IaaS (EC2) has a wider customer responsibility scope; SaaS (S3) has a wider AWS responsibility scope.
Q56. Find the problem in the following IAM policy: `Effect: "Allow", Action: "s3:*", Resource: "*"`.
A) There is a syntax error B) Grants all S3 actions on all S3 resources — violates the principle of least privilege C) It is too restrictive because it is limited to S3 only D) Resource wildcards are required for S3
Answer: B
Explanation: The `s3:*` wildcard allows all S3 actions, including s3:DeleteBucket and s3:DeleteObject, and `Resource: "*"` covers every S3 bucket in the account. Under least privilege, only specific bucket ARNs and the specific actions required should be allowed. Example: {"Action": ["s3:GetObject", "s3:PutObject"], "Resource": "arn:aws:s3:::my-bucket/*"}
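A hedged sketch of generating such a least-privilege policy document (the bucket name and action list are illustrative):

```python
import json

def least_privilege_s3_policy(bucket: str, actions: list[str]) -> dict:
    """Build a policy scoped to one bucket's objects with explicit actions."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": sorted(actions),
            # Object-level ARN: grants nothing on other buckets or on
            # bucket-level actions like s3:DeleteBucket.
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }],
    }

policy = least_privilege_s3_policy("my-bucket", ["s3:GetObject", "s3:PutObject"])
print(json.dumps(policy, indent=2))
```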
Q57. What is the most comprehensive layered approach to defend against ransomware attacks in an AWS environment?
A) Enable S3 bucket encryption B) S3 versioning + MFA Delete + immutable backups (S3 Object Lock) + least-privilege IAM + GuardDuty + offline backups C) Enable CloudTrail logging D) Enable VPC Flow Logs
Answer: B
Explanation: Layered defense for ransomware: 1) S3 versioning to recover deleted files, 2) MFA Delete to prevent version deletion, 3) S3 Object Lock (WORM) to make objects non-deletable for a specified period, 4) Least-privilege IAM to limit bulk deletion permissions, 5) GuardDuty to detect abnormal deletion patterns, 6) AWS Backup for regular backups + Vault Lock (immutable backup vault). Multi-region/multi-account backups are also recommended.
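One piece of this stack, the S3 Object Lock default retention, can be sketched as the configuration document passed to the PutObjectLockConfiguration API (the mode and retention period below are illustrative choices, not a recommendation):

```python
# Sketch: request body for PutObjectLockConfiguration enforcing WORM
# retention on new objects. Values are illustrative assumptions.
object_lock_config = {
    "ObjectLockEnabled": "Enabled",
    "Rule": {
        "DefaultRetention": {
            "Mode": "COMPLIANCE",  # COMPLIANCE mode cannot be shortened or
                                   # removed by any user, including root
            "Days": 30,
        }
    },
}

# A real call would look like (bucket name hypothetical):
# s3.put_object_lock_configuration(
#     Bucket="my-backup-bucket",
#     ObjectLockConfiguration=object_lock_config)
print(object_lock_config)
```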
Q58. How do you prevent outages caused by certificate expiration in an AWS environment?
A) Use self-signed certificates with a 10-year validity period B) Use ACM managed certificates (auto-renewal), configure EventBridge notifications for approaching expiration, set 90-day advance renewal alerts for third-party certificates C) Manage all certificates manually D) Use certificates only in CloudFront
Answer: B
Explanation: ACM managed certificates renew automatically before expiration (fully automatic for DNS-validated certificates; email-validated certificates require the validation email to be re-confirmed). ACM emits certificate expiration events via AWS Health and EventBridge starting 45 days before expiry. Imported (third-party) certificates in ACM do NOT auto-renew and require separate monitoring and re-import.
Q59. A security team needs to verify the integrity of AWS CloudTrail logs. What feature does CloudTrail provide for this?
A) Log file encryption B) Log File Validation — verifies log file integrity using SHA-256 hashes and RSA signatures C) Real-time log streaming D) Automatic log file deletion
Answer: B
Explanation: When CloudTrail Log File Validation is enabled, CloudTrail generates a digest file every hour. The digest file contains a list of hashes for the past hour's log files and the hash of the previous digest file. The aws cloudtrail validate-logs CLI command can verify that log files have not been deleted or tampered with.
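The hash-chaining idea behind digest files can be modeled in a few lines; this is a simplification, since real CloudTrail digest files are also RSA-signed and have a different format:

```python
import hashlib

# Toy model of CloudTrail's digest chain: each hourly digest records the
# hashes of that hour's log files plus the hash of the previous digest.
def digest(log_files: list[bytes], prev_digest_hash: str) -> dict:
    return {
        "logFileHashes": [hashlib.sha256(f).hexdigest() for f in log_files],
        "previousDigestHash": prev_digest_hash,
    }

def chain_hash(d: dict) -> str:
    """Deterministic hash of a digest, used to link the next digest."""
    return hashlib.sha256(repr(sorted(d.items())).encode()).hexdigest()

d1 = digest([b"log-hour-1"], prev_digest_hash="")
d2 = digest([b"log-hour-2"], prev_digest_hash=chain_hash(d1))

# Tampering with an hour-1 log changes its hash, which no longer matches
# the value recorded in d1; any edit to d1 in turn breaks d2's link.
print(d2["previousDigestHash"])
```

In practice, `aws cloudtrail validate-logs` performs this walk (plus signature checks) for you.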
Q60. A Well-Architected review found that "IAM users routinely use the root account." What is the immediate corrective action?
A) Delete the root account B) Create an IAM administrator user, enable MFA on the root account, delete root access keys, and use IAM roles/users for daily work C) Change the root account password D) Apply IP restrictions to the root account
Answer: B
Explanation: AWS root account security best practices: 1) Enable MFA on the root account (hardware MFA recommended), 2) Delete or deactivate root access keys, 3) Create IAM administrator users/roles for daily operations, 4) Use the root account only for specific tasks (Organizations management, services requiring root access), 5) Set CloudWatch alarms for root account usage.
Q61. Which of the following CANNOT be accomplished with a Service Control Policy (SCP)?
A) Prohibiting the use of specific AWS regions B) Restricting the use of specific EC2 instance types C) Allowing IAM role creation D) Granting permissions to IAM users in member accounts
Answer: D
Explanation: SCPs are guardrails that set the maximum permissions ceiling available to member accounts in an organization. SCPs restrict but do not grant permissions. Actual permission grants are made in each account's IAM policies. What SCPs can do: prohibit use of specific regions, prohibit specific services/actions, require specific resource tags, prohibit root account access.
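As an example of the "prohibit use of specific regions" guardrail, here is a hedged sketch of a region-restriction SCP; the exempted global services and the approved region list are illustrative, not exhaustive:

```python
import json

# Sketch of a region-restriction SCP. Global services (IAM, STS, etc.)
# must be exempted or the policy breaks them; this NotAction list and the
# approved regions are illustrative assumptions.
region_scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyOutsideApprovedRegions",
        "Effect": "Deny",
        "NotAction": ["iam:*", "sts:*", "organizations:*", "support:*"],
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {
                "aws:RequestedRegion": ["us-east-1", "eu-west-1"]
            }
        },
    }],
}

print(json.dumps(region_scp, indent=2))
```

Note the statement is a Deny: consistent with the explanation, the SCP removes permissions outside the approved regions but grants nothing by itself.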
Q62. What does Amazon Cognito Advanced Security Features provide?
A) Automatic IAM role creation B) Adaptive Authentication — MFA requirements or login blocking based on risk score, compromised credential protection C) Automatic S3 bucket encryption D) Automatic CloudTrail log analysis
Answer: B
Explanation: Cognito Advanced Security Features: 1) Risk-based adaptive authentication — detecting abnormal login patterns (new device, new location, unusual time) and requiring MFA or blocking, 2) Compromised credential protection — blocking use of known leaked passwords, 3) Security event logging and monitoring. This defends against Account Takeover (ATO) attacks.
Q63. A security engineer needs to run commands on EC2 instances without SSH access. What is the AWS-native replacement for SSH?
A) AWS Systems Manager Session Manager — agent-based, provides encrypted sessions without SSH port (22) B) AWS Direct Connect C) VPN connection D) CloudShell
Answer: A
Explanation: SSM Session Manager advantages: 1) No SSH key management needed, 2) No inbound port 22 required (only outbound HTTPS needed), 3) All sessions auto-logged to CloudTrail/S3, 4) IAM-based access control, 5) Instances in VPCs without internet access can be reached via VPC endpoints. Port forwarding and SSH tunneling are also supported.
Q64. Which combination of AWS services is used to implement a Zero Trust architecture in an AWS environment?
A) Traditional VPN + firewall B) IAM Identity Center (SSO) + conditional access + VPC endpoints + mTLS (ACM Private CA) + Network Firewall + GuardDuty C) VPC peering alone is sufficient D) Block public internet
Answer: B
Explanation: Implementing Zero Trust ("never trust, always verify") on AWS: 1) IAM Identity Center for centralized identity verification, 2) Conditional access policies (based on device posture, location), 3) VPC endpoints for private AWS service access, 4) ACM Private CA for mTLS service-to-service authentication, 5) Network Firewall/SGs for microsegmentation, 6) GuardDuty + Detective for continuous anomaly detection.
Q65. For compliance purposes, all S3 buckets must have public access blocked. What is the most effective way to enforce this across the entire organization?
A) Manually enable S3 public access block in each account B) Use an Organizations SCP to deny s3:PutBucketPublicAccessBlock removal + AWS Config rules for detection + Firewall Manager to automatically apply S3 public access block policies C) Deny all public access in S3 bucket policies D) Restrict S3 access via VPC endpoints
Answer: B
Explanation: Multi-layered approach: 1) SCP prevents member accounts from removing the public access block setting (preventive), 2) AWS Config rule s3-bucket-level-public-access-prohibited detects non-compliant buckets (detective), 3) AWS Firewall Manager S3 policy automatically applies public access block to all new and existing buckets in all accounts (automation). Combining these three satisfies prevention, detection, and automation requirements.
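The preventive layer can be sketched as an SCP document. The two action names below are the S3 control-plane actions for bucket-level and account-level public access block settings; a real policy might add a condition exempting a break-glass admin role, which is omitted here:

```python
import json

# Sketch: SCP preventing member accounts from changing (and therefore
# weakening) S3 public access block settings at either level.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyPublicAccessBlockChanges",
        "Effect": "Deny",
        "Action": [
            "s3:PutBucketPublicAccessBlock",
            "s3:PutAccountPublicAccessBlock",
        ],
        "Resource": "*",
    }],
}

print(json.dumps(scp, indent=2))
```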
Study Resources
- AWS Official SCS-C02 Exam Guide
- AWS Security Best Practices Whitepaper
- AWS re:Inforce Security Session Videos
- AWS Skill Builder SCS-C02 Official Practice Questions
Passing Tip: For scenario-based questions, the "most secure," "least privilege," and "automated" options tend to be correct. Precisely understanding the AWS Shared Responsibility Model and the unique purpose of each security service is essential.