AWS Advanced Networking Specialty (ANS-C01) Practice Exam — 65 Questions

ANS-C01 Exam Overview

Duration: 170 minutes
Questions: 65
Passing Score: 750 / 1000
Format: Single / Multiple Choice
Cost: USD 300

Domain Breakdown

Network Design: 30%
Network Implementation: 26%
Network Management and Operations: 20%
Network Security, Compliance, and Governance: 14%
Optimizing Network Performance: 10%

AWS Network Core Architecture Summary

VPC: Isolated virtual network, CIDR blocks, subnets, route tables, IGW, NAT Gateway

Connectivity: Transit Gateway (hub-and-spoke), VPC Peering, PrivateLink, VPN, Direct Connect

DNS: Route 53 (public/private hosted zones), Resolver (inbound/outbound endpoints)

Security: Security Groups, NACLs, Network Firewall, WAF, Shield, GWLB

Performance: Enhanced Networking (ENA/EFA), Placement Groups, Global Accelerator, CloudFront


Practice Exam — 65 Questions

Domain 1: Network Design

Q1. A company needs to connect 10 VPCs. Which should they choose between VPC Peering and Transit Gateway, and why?

A) VPC Peering — cheaper and simpler
B) Transit Gateway — centralized routing, supports transitive routing, O(1) management complexity for N VPC connections
C) Both are the same; only the cost differs
D) VPC Peering — lower latency

Answer: B

Explanation: VPC Peering does not support transitive routing. Even if A-B and B-C are peered, A cannot communicate directly with C. Connecting all 10 VPCs requires up to N*(N-1)/2 = 45 peering connections. Transit Gateway acts as a central hub — connecting all VPCs to one TGW enables transitive routing. Transit Gateway is far more efficient in environments with 10+ VPCs.
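The scaling argument above can be sketched numerically — a full peering mesh grows quadratically while Transit Gateway needs only one attachment per VPC:

```python
def full_mesh_peerings(n: int) -> int:
    """Peering connections needed for a full mesh of n VPCs: n*(n-1)/2."""
    return n * (n - 1) // 2

def tgw_attachments(n: int) -> int:
    """Transit Gateway attachments for the same n VPCs: one per VPC."""
    return n

for n in (3, 10, 50):
    print(n, full_mesh_peerings(n), tgw_attachments(n))
# 10 VPCs: 45 peerings vs 10 TGW attachments; 50 VPCs: 1225 vs 50
```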

Q2. Two VPCs have overlapping CIDR blocks (both 10.0.0.0/16). Applications in both VPCs need to communicate. What is a viable solution?

A) Create VPC peering — overlapping CIDRs are supported
B) VPC peering is not possible; use PrivateLink (endpoint service) to expose a specific service
C) Use NAT Gateway for address translation
D) Communicate via the internet

Answer: B

Explanation: VPC Peering and Transit Gateway do not allow overlapping CIDR blocks. AWS PrivateLink allows a service behind an NLB to be exposed as a VPC Endpoint Service so that another VPC can access it via an Interface Endpoint, even with CIDR conflicts. Long-term, a VPC CIDR redesign is recommended.
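The overlap check that peering and TGW attachments enforce can be reproduced locally with Python's standard `ipaddress` module:

```python
import ipaddress

def cidrs_overlap(a: str, b: str) -> bool:
    """True if two CIDR blocks share any addresses — the case in which
    VPC Peering and Transit Gateway attachments are rejected."""
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

print(cidrs_overlap("10.0.0.0/16", "10.0.0.0/16"))  # True — peering not possible
print(cidrs_overlap("10.0.0.0/16", "10.1.0.0/16"))  # False — peering OK
```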

Q3. You configured Transit Gateway inter-region peering in a multi-account environment. Which statement is correct?

A) TGW inter-region peering automatically supports route propagation
B) TGW inter-region peering supports only static routing; routes must be manually added to the peering attachment
C) Dynamic routing with BGP is supported
D) It behaves identically to a same-region TGW

Answer: B

Explanation: Transit Gateway inter-region peering does not support BGP dynamic routing. Only static routing is supported, so you must manually add the remote region's CIDR blocks to each TGW route table. Inter-region traffic is encrypted and transmitted over the AWS global network.

Q4. You are designing a dual-stack VPC with IPv4 and IPv6. EC2 instances need IPv6-only outbound internet access (inbound blocked). What configuration is required?

A) Use an IPv6 NAT Gateway
B) Use an Egress-Only Internet Gateway (EIGW) and add a ::/0 → EIGW route to the route table
C) A standard Internet Gateway (IGW) is sufficient
D) IPv6 requires no NAT, so no additional configuration is needed

Answer: B

Explanation: An Egress-Only Internet Gateway is IPv6-specific. Similar to an IPv4 NAT Gateway, it allows outbound IPv6 traffic but blocks inbound connections from the internet to instances. It is used when IPv6 instances in private subnets need to receive patches from the internet or call external APIs.

Q5. You want to use AWS PrivateLink to access a SaaS service from your VPC. What configuration is required on the SaaS provider side?

A) Set up VPC peering
B) Place the service behind an NLB (Network Load Balancer) and create a VPC Endpoint Service
C) Expose the service via a public IP
D) Set up Site-to-Site VPN

Answer: B

Explanation: In the AWS PrivateLink architecture, the service provider places the service behind an NLB (or GWLB) and creates a VPC Endpoint Service. Consumers create an Interface VPC Endpoint and access the service via a private IP. Traffic stays entirely within the AWS network without traversing the internet. NLB supports IP, instance, and Lambda targets.

Q6. You want to integrate on-premises DNS with VPC DNS using Route 53 Resolver. What configuration allows on-premises systems to resolve internal VPC domain names (e.g., app.internal.corp)?

A) Create a Route 53 public hosted zone
B) Create a Route 53 Resolver inbound endpoint in the VPC and configure a conditional forwarder on the on-premises DNS server pointing the VPC domain to that endpoint
C) Route 53 Resolver outbound endpoint alone is sufficient
D) Analyze DNS queries with VPC Flow Logs

Answer: B

Explanation: Route 53 Resolver inbound endpoints create ENIs within the VPC, allowing on-premises DNS servers to resolve VPC-internal domain names via Direct Connect or VPN. Configure a conditional forwarder on the on-premises DNS server for internal.corp pointing to the inbound endpoint's IP addresses.

Q7. In a centralized internet access pattern, you want to route all outbound internet traffic from multiple VPCs through a single VPC's (egress VPC) NAT Gateway. What is the correct configuration?

A) Create a separate NAT Gateway in every VPC
B) Attach all VPCs and the Egress VPC to the Transit Gateway, route default traffic (0.0.0.0/0) from spoke VPCs to TGW, then route to NAT Gateway in the Egress VPC
C) Connect to the Egress VPC via VPC Peering
D) Attach an internet gateway in each VPC

Answer: B

Explanation: Centralized internet outbound design: 1) Configure NAT Gateway and IGW in the Egress VPC, 2) Attach all spoke VPCs and the Egress VPC to the TGW, 3) Add a 0.0.0.0/0 → TGW route to spoke VPC route tables, 4) Set the TGW route table's default route toward the Egress VPC, 5) Route from the Egress VPC to the NAT Gateway. This pattern reduces NAT Gateway costs and enables centralized security controls.

Q8. You want to use Transit Gateway multicast. Which scenario is NOT supported?

A) Multicast between EC2 instances
B) Multicast between EC2 instances in different VPCs attached to the same TGW
C) Multicast across peered TGWs
D) Multiple groups within a multicast domain

Answer: C

Explanation: Transit Gateway multicast is not supported across peered TGWs (inter-region or intra-region TGW peering). Direct Connect Gateway and VPN attachments also do not support multicast. EC2 instances can be registered as members of a TGW multicast group.

Q9. What are the best practices for planning VPC CIDR blocks with future scalability in mind?

A) Start with a small CIDR like 10.0.0.0/24
B) Allocate a sufficiently large CIDR within RFC 1918 ranges (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16), avoid overlap with on-premises and other VPCs, recommend /16 to /20
C) Use public IP ranges as VPC CIDRs
D) Start small since CIDRs can be changed at any time

Answer: B

Explanation: VPC CIDR planning best practices: 1) Use RFC 1918 private addresses, 2) Allocate a sufficiently large range (/16 recommended — 65,536 IPs), 3) Create an organization-wide IP plan to avoid overlap with on-premises networks, 4) Prevent overlap between VPCs (considering peering/TGW connections), 5) Leave room for subnet planning. VPC CIDRs can be added to but not modified.
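The sizing math behind the /16 recommendation is easy to verify: AWS reserves five addresses in every subnet (network address, VPC router, DNS, one reserved for future use, and broadcast), so usable capacity is `num_addresses - 5`:

```python
import ipaddress

AWS_RESERVED_PER_SUBNET = 5  # network, VPC router, DNS, future use, broadcast

def usable_hosts(cidr: str) -> int:
    """Usable IPs in a VPC subnet after AWS's five reserved addresses."""
    return ipaddress.ip_network(cidr).num_addresses - AWS_RESERVED_PER_SUBNET

print(usable_hosts("10.0.0.0/24"))  # 251
print(usable_hosts("10.0.0.0/20"))  # 4091
print(usable_hosts("10.0.0.0/16"))  # 65531
```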

Q10. What characterizes a Distributed Deployment of AWS Network Firewall?

A) All VPC traffic passes through a single centralized Network Firewall VPC
B) Network Firewall endpoints are deployed in each VPC, inspecting traffic locally within the VPC
C) Deployed independently of Transit Gateway
D) Only inspects inbound traffic

Answer: B

Explanation: Network Firewall deployment models: 1) Distributed — firewall endpoints deployed in each VPC, traffic inspected at the VPC level, 2) Centralized — firewall deployed in a dedicated security VPC, all traffic routed through TGW for inspection, 3) Combined — centralized for internet edge, distributed for East-West. The distributed model enables fine-grained per-VPC policies but at higher cost.

Domain 2: Network Implementation

Q11. How should you choose the Virtual Interface (VIF) type for Direct Connect?

A) Always use Transit VIF
B) Public AWS services (S3, DynamoDB) → Public VIF; single VPC access → Private VIF; multi-VPC access via Transit Gateway → Transit VIF
C) Use only Private VIF for security
D) Use the same VIF regardless of purpose

Answer: B

Explanation: Direct Connect VIF types: 1) Public VIF — access AWS public services (S3, Glacier, SNS, etc.) and AWS public IP addresses; receives AWS public IPs via BGP, 2) Private VIF — access a single VPC or VPCs in the same region via Direct Connect Gateway, 3) Transit VIF — access multiple VPCs and regions via Direct Connect Gateway and Transit Gateway. Note that a single Direct Connect gateway can be associated with either virtual private gateways (private VIFs) or transit gateways (transit VIFs), not both at the same time.

Q12. Which statement about Direct Connect Link Aggregation Group (LAG) is correct?

A) LAG can be configured across multiple Direct Connect locations
B) LAG logically bundles multiple connections of the same speed at the same Direct Connect location to increase bandwidth and provide failover
C) LAG automatically provides Active/Passive failover
D) Connections within a LAG can be of different speeds

Answer: B

Explanation: A LAG (Link Aggregation Group) combines multiple physical connections of the same speed at the same Direct Connect location into a single logical connection using LACP (Link Aggregation Control Protocol). All connections must be the same speed and at the same Direct Connect location. A minimum number of operational connections can be configured; if fewer links are active, the whole LAG is treated as down so that traffic can fail over to a backup path.

Q13. Why would you configure both Direct Connect and VPN from on-premises to AWS?

A) For cost reduction
B) Direct Connect as the primary path, Site-to-Site VPN as backup for high availability
C) The two cannot be used simultaneously
D) VPN is faster than Direct Connect

Answer: B

Explanation: DX + VPN redundancy pattern: DX is used as the primary path with low latency and high bandwidth; VPN is the backup path for automatic failover when DX fails. When the same prefix is advertised over both paths, AWS prefers the Direct Connect route by default; AS PATH prepending or more-specific prefixes can be used to further tune path selection. When DX fails, traffic automatically switches to VPN.

Q14. What is the role of Dead Peer Detection (DPD) in Site-to-Site VPN?

A) VPN performance monitoring
B) Periodically checks peer liveness to detect inactive peers and reinitialize the tunnel or trigger failover
C) Determines VPN traffic encryption method
D) Sets the Maximum Transmission Unit (MTU)

Answer: B

Explanation: Dead Peer Detection (DPD) is an IKE (Internet Key Exchange) extension that periodically sends R-U-THERE messages to verify VPN peer liveness. On DPD timeout, the dead peer action determines whether to Clear (remove the tunnel), Restart (reinitialize), or Hold (retain the SA). AWS VPN supports DPD; BGP keepalives or DPD configuration is important for maintaining active tunnels.

Q15. How is Accelerated Site-to-Site VPN better than standard Site-to-Site VPN?

A) Provides stronger encryption
B) Uses AWS Global Accelerator Anycast IPs and edge locations to minimize internet segments and use optimal paths through the AWS backbone
C) Provided at no additional cost
D) Supports more tunnels

Answer: B

Explanation: Accelerated VPN uses AWS Global Accelerator to route VPN traffic to the nearest AWS edge location, then delivers it to the AWS region via the AWS global network. The internet-traversal segment is minimized, resulting in lower latency and improved connection stability. Only available for VPNs attached to Transit Gateway.

Q16. Why would you use MACsec on Direct Connect?

A) To replace VPN encryption
B) Provides Layer 2 security at the physical Direct Connect connection layer (Ethernet frame encryption), protecting against physical eavesdropping
C) To reduce latency
D) To increase bandwidth

Answer: B

Explanation: MACsec (IEEE 802.1AE) encrypts data at the Ethernet link level. Enabling MACsec on Direct Connect applies Layer 2 encryption on the physical link between on-premises equipment and the AWS Direct Connect location. It prevents physical eavesdropping especially in shared colocation environments. Supported on 10Gbps and 100Gbps dedicated connections.

Q17. What is the advantage of using BGP in AWS Site-to-Site VPN?

A) Slower than static routing but more secure
B) Dynamic routing updates, automatic failover, and ECMP (Equal Cost Multi-Path) support for load balancing across multiple VPN tunnels
C) Easier to configure
D) Encryption is automatically applied when BGP is used

Answer: B

Explanation: BGP VPN advantages: 1) Automatic routing updates when networks change, 2) Automatic failover on tunnel failure (works with DPD), 3) Bandwidth aggregation across multiple VPN tunnels when using ECMP with Transit Gateway, 4) BGP community attributes for route preference tuning. Each VPN connection provides 2 IPSec tunnels, and BGP can use both tunnels.

Q18. How do you aggregate VPN bandwidth using ECMP (Equal Cost Multi-Path) with Transit Gateway?

A) A single VPN connection is sufficient
B) Create multiple Site-to-Site VPN connections to TGW and enable ECMP; advertise the same BGP route from the on-premises router over all tunnels
C) ECMP is only possible when used with Direct Connect
D) ECMP is not supported for VPN

Answer: B

Explanation: Transit Gateway aggregates bandwidth from multiple VPN connections via ECMP. Each VPN connection provides two IPSec tunnels, and using multiple VPN connections increases maximum bandwidth (up to 50Gbps aggregate). The on-premises router must advertise the same BGP route over all tunnels. ECMP must be enabled on the VPN attachment in TGW.
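The bandwidth arithmetic above can be sketched directly. Each Site-to-Site VPN tunnel is capped at roughly 1.25 Gbps, and each connection provides two tunnels, so ECMP aggregate bandwidth scales with the number of connections (a single flow is still limited to one tunnel's capacity):

```python
TUNNEL_GBPS = 1.25          # approximate per-tunnel throughput limit
TUNNELS_PER_CONNECTION = 2  # each Site-to-Site VPN connection has two IPSec tunnels

def ecmp_aggregate_gbps(vpn_connections: int) -> float:
    """Approximate aggregate bandwidth with ECMP across all tunnels."""
    return vpn_connections * TUNNELS_PER_CONNECTION * TUNNEL_GBPS

print(ecmp_aggregate_gbps(1))  # 2.5
print(ecmp_aggregate_gbps(4))  # 10.0
```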

Domain 3: Network Management and Operations

Q19. What is the difference between AWS Reachability Analyzer and Network Access Analyzer?

A) Both services provide the same functionality
B) Reachability Analyzer analyzes the connectivity path between two specific endpoints and identifies blocking factors; Network Access Analyzer identifies unintended network access paths
C) Only Network Access Analyzer analyzes VPC Flow Logs
D) Reachability Analyzer performs real-time traffic analysis

Answer: B

Explanation: AWS Reachability Analyzer: analyzes reachability between two network endpoints (EC2, ENI, IGW, etc.), shows whether connectivity exists and the path, and identifies blockers (SG, NACL, routing). Network Access Analyzer: defines network security requirements and identifies unexpected access paths that violate them (e.g., internet → EC2 access). Both perform static analysis without sending actual traffic.

Q20. You configured VPC Traffic Mirroring. What is this feature used for?

A) Replicating and synchronizing traffic between VPCs
B) Copying network traffic from EC2 instances and sending it to an IDS (Intrusion Detection System) or packet analysis tool
C) Load balancer traffic distribution
D) VPN traffic encryption

Answer: B

Explanation: VPC Traffic Mirroring mirrors network traffic from EC2 instance ENIs (Elastic Network Interfaces) for security analysis, threat detection, and troubleshooting. Traffic is sent to another EC2 instance (IDS/IPS appliance) or NLB. Mirror filters (protocol, port, and CIDR rules) can be attached to the mirror session so that only specific traffic is copied. It operates agentlessly.

Q21. How do you use Amazon Athena to query VPC Flow Logs for the top 10 source IPs generating the most traffic to port 443 in a specific time window?

A) Use CloudWatch metrics
B) Create an external table in Athena pointing to the Flow Logs S3 path, then run an SQL query
C) Check the GuardDuty dashboard
D) Filter in the VPC Flow Logs console

Answer: B

Explanation: VPC Flow Logs + Athena analysis steps: 1) Create an external table in Athena pointing to the Flow Logs S3 bucket (define schema matching Flow Logs fields), 2) Run SQL query: SELECT srcaddr, SUM(bytes) as total_bytes FROM vpc_flow_logs WHERE dstport = 443 AND start BETWEEN timestamp1 AND timestamp2 GROUP BY srcaddr ORDER BY total_bytes DESC LIMIT 10. Use partitioning to optimize query performance and cost.
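In Athena the query runs over the Flow Logs files in S3, but the aggregation itself is plain SQL; as a local, self-contained sketch the same query shape can be exercised against an in-memory SQLite table (the table rows below are made-up sample data, and timestamps are Unix epoch seconds as in Flow Logs):

```python
import sqlite3

# Miniature stand-in for the Athena external table over the Flow Logs S3 data.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE vpc_flow_logs (srcaddr TEXT, dstport INTEGER, bytes INTEGER, start INTEGER)"
)
conn.executemany(
    "INSERT INTO vpc_flow_logs VALUES (?, ?, ?, ?)",
    [
        ("10.0.1.10", 443, 500_000, 1700000000),
        ("10.0.1.10", 443, 300_000, 1700000100),
        ("10.0.2.20", 443, 100_000, 1700000200),
        ("10.0.3.30", 22,  900_000, 1700000300),  # not port 443 — excluded
    ],
)

# Same shape as the Athena query in the explanation above.
rows = conn.execute(
    """
    SELECT srcaddr, SUM(bytes) AS total_bytes
    FROM vpc_flow_logs
    WHERE dstport = 443 AND start BETWEEN 1700000000 AND 1700001000
    GROUP BY srcaddr
    ORDER BY total_bytes DESC
    LIMIT 10
    """
).fetchall()
print(rows)  # [('10.0.1.10', 800000), ('10.0.2.20', 100000)]
```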

Q22. What is the primary purpose of CloudWatch Network Monitor?

A) Monitoring AWS service API errors
B) Continuously monitoring latency, packet loss, and jitter for hybrid networks (Direct Connect, VPN)
C) Monitoring CPU utilization of EC2 instances within a VPC
D) Monitoring S3 bucket access

Answer: B

Explanation: CloudWatch Network Monitor continuously monitors performance metrics for hybrid networks. It measures latency, packet loss rate, and jitter between on-premises locations and AWS resources. It detects Direct Connect or VPN performance degradation and notifies operations teams via alarms. Used for network performance SLA monitoring.

Q23. Why would you use Transit Gateway Network Manager?

A) Automatic configuration of TGW route tables
B) Visualize the global network (Transit Gateway, Direct Connect, VPN, SD-WAN) in a central dashboard and monitor events
C) TGW cost analysis
D) Centralized VPC Flow Logs aggregation

Answer: B

Explanation: Transit Gateway Network Manager is a centralized management service for visualizing global network topology. It displays TGWs, VPC connections, Direct Connect, VPN, and SD-WAN devices across AWS regions in a map-based view. It monitors events (routing changes, connection status) and integrates with CloudWatch. Route Analyzer is also available for path analysis.

Domain 4: Network Security, Compliance, and Governance

Q24. What is the advantage of using Managed Prefix Lists in security groups?

A) Reduces security group rule count and allows consistent IP list management shared across multiple security groups
B) Automatically blocks threat IPs
C) Replaces NACL rules
D) Reduces cost

Answer: A

Explanation: A Managed Prefix List is a set of CIDR blocks that can be referenced in security group rules or route tables. Example: managing 10 IP addresses as one prefix list lets multiple security groups reference this list, reducing the rule count. When IP addresses change, only the prefix list is updated, and all referencing security groups are automatically updated. AWS also provides managed prefix lists for AWS services (CloudFront, S3, etc.).

Q25. What is the AWS architecture for implementing mTLS (Mutual TLS)?

A) ALB + ACM public certificate
B) Enable mTLS on ALB + issue client/server certificates from ACM Private CA, or configure API Gateway with mTLS
C) CloudFront + WAF combination
D) NLB + TCP pass-through

Answer: B

Explanation: mTLS implementation options: 1) ALB — enable mTLS support to validate client certificates and pass certificate information as headers to the backend; issue client certificates from ACM Private CA, 2) API Gateway — implement mTLS with client certificate configuration and trust store upload, 3) App Mesh/Service Mesh — automated mTLS handling between services.

Q26. Why must ephemeral ports be allowed in NACLs, and what is the port range?

A) These are ports the server uses to send responses; no allowance needed
B) Because NACLs are stateless, the server's response going to the client's ephemeral port (Linux: 32768-60999, Windows: 49152-65535, AWS recommended: 1024-65535) must be explicitly allowed outbound
C) Ephemeral ports are managed only in security groups
D) The ephemeral port range is not fixed, so all ports must be allowed

Answer: B

Explanation: NACLs are stateless, so response traffic also requires explicit rules. When a client requests port 80 on the server, the server's response returns to the client's ephemeral port. AWS recommends allowing 1024-65535. SGs are stateful and automatically allow response traffic. In practice, SGs alone are often sufficient, but when using NACLs as an additional defense layer, ephemeral port rules are required.
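The stateless behavior can be illustrated with a small simulation (the function and port ranges below are illustrative, not an AWS API):

```python
# AWS-recommended ephemeral range to allow on the outbound NACL side.
EPHEMERAL = range(1024, 65536)

def outbound_response_allowed(client_port: int, allowed_ranges) -> bool:
    """A stateless NACL evaluates the server's response on its own:
    the client's ephemeral destination port must match an outbound rule."""
    return any(client_port in r for r in allowed_ranges)

# Only port 80 allowed outbound: the response toward an ephemeral port is dropped.
print(outbound_response_allowed(52044, [range(80, 81)]))             # False
# Adding the 1024-65535 outbound rule lets the response through.
print(outbound_response_allowed(52044, [range(80, 81), EPHEMERAL]))  # True
```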

Q27. What role does the GENEVE tunnel protocol play in a Gateway Load Balancer (GWLB) deployment?

A) Provides VPN encryption
B) Encapsulates the original packet and sends it to the virtual appliance; the appliance returns it to GWLB after inspection; preserves source/destination information of the original packet
C) Performs Layer 7 load balancing
D) Provides DNS-based routing

Answer: B

Explanation: GWLB uses GENEVE (Generic Network Virtualization Encapsulation) protocol to encapsulate original packets and deliver them to virtual appliances (IDS/IPS, firewalls). The appliance inspects the packet and returns it to GWLB, which forwards traffic to the original destination. GENEVE preserves the original IP/port information so the appliance can see the actual traffic details.

Q28. What are the best practices for ACM Private CA hierarchy design?

A) Issue all certificates from a single root CA
B) Root CA (kept offline) → Intermediate CA (Issuing CA) → Leaf certificates; separate Issuing CAs per environment/team
C) Keep all CAs in active state
D) Use only leaf certificates and outsource CA management

Answer: B

Explanation: PKI hierarchy design best practices: 1) Root CA — top-level trust anchor, kept offline (not used for actual signing), 2) Intermediate/Issuing CA — CA that actually issues leaf certificates; separate per environment (prod/dev) or team, 3) Leaf certificates — actual certificates used for TLS/mTLS. This hierarchy is important to minimize the impact of a root CA compromise.

Domain 5: Optimizing Network Performance

Q29. Why use Elastic Fabric Adapter (EFA) for HPC workloads?

A) Faster internet connectivity
B) OS-bypass communication for MPI-based HPC and ML distributed training, achieving ultra-low latency and high bandwidth
C) Improved VPN performance
D) Faster S3 transfer

Answer: B

Explanation: EFA (Elastic Fabric Adapter) is AWS's custom high-performance network interface. Its OS-bypass capability communicates directly with the NIC without going through the kernel, achieving microsecond-level ultra-low latency. Used for HPC cluster MPI applications (weather forecasting, fluid dynamics, molecular dynamics) and deep learning distributed training (PyTorch Distributed, Horovod). Maximum benefit when used with cluster placement groups.

Q30. What are the three types of Placement Groups and their use cases?

A) Active/Passive/Standby
B) Cluster (ultra-low latency HPC), Spread (high availability, single failure point prevention), Partition (large-scale distributed processing like Kafka, HDFS)
C) All placement groups provide identical functionality
D) Classified by On-Demand/Spot/Reserved instance type

Answer: B

Explanation: EC2 Placement Group types: 1) Cluster — places instances in the same rack or adjacent racks in one AZ; 10Gbps network, low latency, for HPC, 2) Spread — places each instance on separate hardware; max 7 instances per AZ; high availability for critical workloads, 3) Partition — places each partition on a different rack; up to 7 partitions per AZ; for large-scale distributed systems like Kafka, Cassandra, HDFS.

Q31. How is AWS Global Accelerator different from standard CloudFront?

A) Global Accelerator provides content caching
B) Global Accelerator uses Anycast IPs to receive TCP/UDP traffic at the optimal AWS edge and route it via the AWS backbone; supports non-HTTP protocols and static IP requirements
C) Slower than CloudFront
D) Supports only HTTP traffic

Answer: B

Explanation: Global Accelerator vs CloudFront: GA provides 2 Anycast IPs, receives traffic at edge locations worldwide, and routes via the AWS global network. Supports TCP and UDP protocols — suitable for gaming, IoT, and VoIP, and for cases requiring static IPs. Traffic dial supports weighted routing; health checks enable automatic failover. CloudFront specializes in HTTP/HTTPS content caching and dynamic acceleration.

Q32. What characterizes Route 53 Geoproximity routing policy?

A) DNS responses based on user location
B) Fine-grained control of traffic distribution by adjusting bias values for AWS resources or custom locations
C) Latency-based routing
D) IP address-based routing

Answer: B

Explanation: Route 53 Geoproximity routing routes traffic based on geographic distance between users and resources. Setting a positive bias for a resource directs more traffic to it; a negative value reduces traffic. AWS resources are specified by region; non-AWS resources use latitude/longitude coordinates. Can be configured visually using Traffic Flow policies.

Q33. What is the difference between CloudFront Lambda@Edge and CloudFront Functions?

A) Both features are identical; only the names differ
B) Lambda@Edge can handle 4 events (viewer/origin request/response), supports Node.js/Python, max execution 30s; CloudFront Functions support only viewer request/response, JavaScript only, millisecond execution, much cheaper
C) Only Lambda@Edge can modify HTTP headers
D) CloudFront Functions can modify origin requests

Answer: B

Explanation: Lambda@Edge: executes at 4 events (viewer request, viewer response, origin request, origin response), supports Node.js/Python, max 5s (viewer)/30s (origin), 1MB package size. CloudFront Functions: viewer request and viewer response only, JavaScript (ECMAScript 5.1), sub-millisecond execution, 10KB code size limit, roughly one-sixth the price of Lambda@Edge. CloudFront Functions are suitable for simple header manipulation, URL rewrites, and A/B testing.


Advanced Scenario Questions

Q34. A company is migrating from on-premises to AWS and needs to gradually shift DNS. How do you route some traffic on-premises and the rest to AWS?

A) Use two separate domains
B) Use Route 53 Weighted routing policy with weights assigned to the on-premises IP and AWS resource for gradual traffic migration
C) Distribute traffic with CloudFront
D) Distribute traffic with ALB

Answer: B

Explanation: Gradual migration with Route 53 Weighted routing: for example, initially set on-premises weight to 90 and AWS to 10, then gradually increase the AWS weight after testing. Combined with health checks, traffic automatically shifts to healthy resources on failure. Final cutover sets on-premises to 0 and AWS to 100.
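Weighted records distribute DNS responses in proportion to weight divided by the sum of all weights; the cutover stages above can be sketched as a simple calculation (record names are illustrative):

```python
def traffic_shares(weights: dict) -> dict:
    """Fraction of DNS responses each weighted record receives:
    weight / sum of all record weights."""
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# Hypothetical staged migration from on-premises to AWS.
print(traffic_shares({"onprem": 90, "aws": 10}))   # {'onprem': 0.9, 'aws': 0.1}
print(traffic_shares({"onprem": 50, "aws": 50}))   # 50/50 split
print(traffic_shares({"onprem": 0, "aws": 100}))   # final cutover: all to AWS
```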

Q35. In a multi-region Active-Active architecture, you need to route users to the nearest region with automatic failover on failure. What is the optimal Route 53 configuration?

A) Simple routing policy
B) Latency-based routing + health checks: create latency records for each regional ALB and associate health checks
C) Use only Geolocation routing
D) Use only Weighted routing

Answer: B

Explanation: Active-Active multi-region design: 1) Create Route 53 latency-based records for each region's ALB/NLB, 2) Associate health checks with each record, 3) In normal state, route to the lowest-latency region, 4) When a health check fails, that region's record is automatically excluded and traffic routes to the next lowest-latency region. Global Accelerator can implement the same pattern at the TCP/UDP layer.

Q36. What is the recommended configuration for high availability with AWS Direct Connect?

A) A single Direct Connect connection is sufficient
B) Two Direct Connect connections from different Direct Connect locations, each with a separate customer router and AWS device
C) Direct Connect + NAT Gateway redundancy
D) Two VPN connections to replace DX

Answer: B

Explanation: Direct Connect high availability architecture: Maximum resilience — two different DX locations, different AWS devices for each, different customer-side routers. This eliminates all single points of failure: device failure, location failure, cable cut. Minimum recommendation — two connections at the same DX location (different AWS devices). VPN alone is insufficient as a redundancy mechanism for DX; it is used as a backup when DX fails.

Q37. What VPC attributes must be enabled for DNS resolution to work within a VPC?

A) Only enableDnsSupport
B) Both enableDnsSupport (providing DNS server) and enableDnsHostnames (assigning DNS names to EC2) must be enabled
C) Only Route 53 Resolver endpoints are needed
D) Modify the DHCP options set

Answer: B

Explanation: VPC DNS attributes: 1) enableDnsSupport — makes the AWS-provided DNS server (169.254.169.253 or VPC CIDR +2) available to instances in the VPC. Default: enabled. 2) enableDnsHostnames — assigns public DNS names to EC2 instances with public IPs. Default VPC: enabled; new VPC: disabled. Both must be enabled to associate a private hosted zone with a VPC.

Q38. An on-premises application needs to access an ECS service (10.0.1.0/24) in an AWS region. The on-premises DNS server must resolve the ECS service's internal DNS name. What is the architecture?

A) Register hardcoded IPs on the on-premises DNS server
B) Direct Connect/VPN connection + create Route 53 Resolver inbound endpoint + configure conditional forwarder on on-premises DNS server (AWS internal domain → inbound endpoint IPs)
C) Use a public Route 53 hosted zone
D) Use CloudFront for DNS handling

Answer: B

Explanation: Hybrid DNS integration architecture: 1) Connect on-premises to AWS via Direct Connect or VPN, 2) Create Route 53 Resolver inbound endpoint in the VPC (2 ENIs, one per AZ), 3) Configure conditional forwarder on on-premises DNS server: *.ap-northeast-1.compute.internal or custom domain → IP addresses of the inbound endpoint, 4) On-premises application accesses AWS resources using internal DNS names.

Q39. Why use a VPC Endpoint Policy?

A) To optimize VPC endpoint performance
B) To restrict which AWS service resources and actions can be accessed through the VPC endpoint
C) To allow/deny connections between VPCs
D) To replace Network ACLs

Answer: B

Explanation: A VPC Endpoint Policy is a resource-based policy that restricts service access through the endpoint. Example: an S3 gateway endpoint policy that allows only a specific S3 bucket: "Resource": "arn:aws:s3:::my-company-bucket/*". Applicable to both gateway endpoints (S3, DynamoDB) and interface endpoints. This prevents data exfiltration by restricting S3 access from a VPC to specific buckets.

Q40. For security requirements, all S3 API traffic must stay within the VPC without traversing the internet, and S3 access from non-VPC IPs must be blocked. How is this implemented?

A) Set the S3 bucket to private
B) Create an S3 gateway endpoint + update route tables + add an aws:SourceVpce condition in the S3 bucket policy to deny access not coming through the endpoint
C) Disable S3 Transfer Acceleration
D) Use CloudFront + OAC (Origin Access Control)

Answer: B

Explanation: Implementation steps: 1) Create an S3 gateway endpoint in the VPC, 2) Add the S3 endpoint route to subnet route tables, 3) Add condition to S3 bucket policy: "Condition": {"StringNotEquals": {"aws:SourceVpce": "vpce-xxxxx"}} — deny access not from the specified VPC endpoint, 4) Grant S3 access permissions to the EC2 instance's IAM role. This ensures all S3 traffic traverses only the AWS internal network.
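The deny statement from step 3 can be sketched as generated JSON; the bucket name and endpoint ID below are placeholders, and the statement shape follows the standard S3 bucket-policy format:

```python
import json

def vpce_only_bucket_policy(bucket: str, vpce_id: str) -> dict:
    """Bucket policy denying any S3 access that does not arrive via the
    given VPC endpoint. Bucket and endpoint IDs here are placeholders."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyIfNotFromVpce",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
            # Deny everything whose request did not come through the endpoint.
            "Condition": {"StringNotEquals": {"aws:SourceVpce": vpce_id}},
        }],
    }

policy = vpce_only_bucket_policy("my-company-bucket", "vpce-0123456789abcdef0")
print(json.dumps(policy, indent=2))
```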

Q41. What characterizes the domain list rule group in AWS Network Firewall stateful rules?

A) Only supports IP address-based blocking
B) Inspects the HTTP Host header and TLS SNI to allow/block specific domains, supports wildcards (.example.com)
C) Blocks only DNS queries
D) Only supports regular expressions

Answer: B

Explanation: Network Firewall domain list rule groups inspect the HTTP Host header and HTTPS SNI (Server Name Indication) to allow or block traffic to specific domains. Using wildcards like .example.com covers all subdomains. Whitelist/blacklist policies can be implemented for HTTP and HTTPS traffic.

Q42. Why use multiple Transit Gateway Route Tables?

A) For performance improvement
B) To implement traffic separation and network segmentation (e.g., separating dev/prod VPCs, blocking Spoke-to-Spoke traffic, forcing routing through an inspection VPC)
C) For cost reduction
D) Due to AWS service limits

Answer: B

Explanation: TGW multiple route table use cases: 1) Environment isolation — connect Prod and Dev VPCs to separate route tables to prevent direct communication, 2) Security inspection — force all East-West traffic through a security VPC (Network Firewall, IDS), 3) Shared services — shared service VPCs (DNS, Active Directory) can communicate with all spokes, but direct spoke-to-spoke communication is blocked. Black-hole routing can completely block traffic for specific CIDRs.

Q43. When would you use a CloudFront Origin Group?

A) Load balancing across multiple origins B) Automatic failover to a secondary origin when the primary origin fails C) Merging content from multiple origins D) Improving origin server performance

Answer: B

Explanation: A CloudFront Origin Group provides automatic origin failover for high availability. Configure a primary and secondary origin in a group; when the primary origin returns specified HTTP status codes (4xx, 5xx) or the connection fails or times out, CloudFront automatically retries against the secondary. Example: an S3 bucket as primary and an S3 bucket in a different Region as secondary, to withstand a regional failure.

Q44. After configuring VPC Peering, you cannot reach instances in the peered VPC. What could be causing this?

A) VPC Peering automatically configures routing B) Route tables not updated, security groups not allowing, NACLs blocking, DNS resolution settings missing C) Peering connection not in Active state D) Two VPCs are in different regions

Answer: B

Explanation: VPC Peering post-configuration communication failure checklist: 1) Verify both VPCs' route tables have a route for the other VPC's CIDR → peering connection, 2) Verify security groups allow the other VPC's CIDR or SG as source, 3) Verify NACLs allow the traffic (including ephemeral ports), 4) Verify peering connection status is Active, 5) Verify DNS resolution option for peered VPC is enabled.
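Check 1 from the list above can be automated. A minimal sketch, run here against a mocked response shaped like EC2 `DescribeRouteTables` output (field names follow the AWS API; the IDs and CIDRs are invented):

```python
def has_peering_route(route_table: dict, peer_cidr: str, pcx_id: str) -> bool:
    """True if the route table sends the peer CIDR to the given peering connection."""
    for route in route_table.get("Routes", []):
        if (route.get("VpcPeeringConnectionId") == pcx_id
                and route.get("DestinationCidrBlock") == peer_cidr
                and route.get("State") == "active"):
            return True
    return False

# Mocked DescribeRouteTables entry for one route table.
mock_rt = {
    "RouteTableId": "rtb-0aa11",
    "Routes": [
        {"DestinationCidrBlock": "10.0.0.0/16", "GatewayId": "local",
         "State": "active"},
        {"DestinationCidrBlock": "10.50.0.0/16",
         "VpcPeeringConnectionId": "pcx-0bb22", "State": "active"},
    ],
}

assert has_peering_route(mock_rt, "10.50.0.0/16", "pcx-0bb22")
assert not has_peering_route(mock_rt, "10.60.0.0/16", "pcx-0bb22")
```

In a real diagnostic you would feed this the live output of `describe_route_tables` for both VPCs, then repeat analogous checks for security groups, NACLs, peering state, and DNS resolution options.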

Q45. What is the difference between AWS PrivateLink Interface Endpoints vs. S3/DynamoDB Gateway Endpoints?

A) Both types operate the same way B) Interface endpoints are ENI-based with private IP in the VPC, have a cost, support security groups, support many services; Gateway endpoints are route table entry-based, free, support only S3/DynamoDB C) Gateway endpoints support all services D) Interface endpoints include public traffic

Answer: B

Explanation: Endpoint type comparison: Interface endpoint — creates ENI in VPC, assigns private IP, security group can be attached, hourly cost per endpoint + data processing cost, supports most AWS services, accessed via DNS names. Gateway endpoint — adds a route to the route table, no additional cost, supports only S3 and DynamoDB, access controlled via policy.

Q46. How does ENA (Elastic Network Adapter) Enhanced Networking differ from a standard virtual NIC?

A) No noticeable difference B) ENA uses SR-IOV to bypass the hypervisor and communicate directly with hardware; up to 100Gbps bandwidth, lower latency, higher PPS (Packets Per Second) C) ENA supports only IPv6 D) ENA is only available in certain AZs

Answer: B

Explanation: Enhanced Networking (ENA): uses SR-IOV (Single Root I/O Virtualization) to communicate directly with the NIC without hypervisor overhead. Maximum 100Gbps network bandwidth, high PPS (packet processing rate), low jitter. Enabled by default on current-generation EC2 instances (C5, M5, R5, etc.). The Intel 82599 VF interface (up to 10Gbps) is the older, separate enhanced networking option — not an earlier version of ENA.

Q47. When using a private S3 bucket as a CloudFront origin, what is the latest recommended method for origin protection?

A) Add the CloudFront IP list to the bucket policy B) Use Origin Access Control (OAC) — CloudFront sends SigV4-signed requests to S3, and the S3 bucket policy allows only the CloudFront service principal C) Allow public S3 read access and protect with WAF at CloudFront D) Use Origin Access Identity (OAI) — simpler configuration

Answer: B

Explanation: OAC (Origin Access Control) is the successor to OAI (Origin Access Identity) and provides stronger security. OAC signs S3 requests with SigV4 for authentication. The S3 bucket policy uses "Principal": {"Service": "cloudfront.amazonaws.com"} with "Condition": {"StringEquals": {"AWS:SourceArn": "arn:aws:cloudfront::ACCOUNT:distribution/DIST_ID"}} to allow only the specific CloudFront distribution.
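The OAC bucket policy quoted in the explanation can be written out in full. The account and distribution IDs below are hypothetical placeholders; the principal and condition shape mirror the statement above.

```python
# Hypothetical identifiers -- substitute your own account and distribution.
ACCOUNT_ID = "111122223333"
DIST_ID = "EDFDVBD6EXAMPLE"
BUCKET = "example-origin-bucket"

oac_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontServicePrincipalReadOnly",
            "Effect": "Allow",
            "Principal": {"Service": "cloudfront.amazonaws.com"},
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
            "Condition": {
                "StringEquals": {
                    "AWS:SourceArn": (
                        f"arn:aws:cloudfront::{ACCOUNT_ID}:distribution/{DIST_ID}"
                    )
                }
            },
        }
    ],
}
```

The `AWS:SourceArn` condition is what pins the grant to one specific distribution — without it, any CloudFront distribution in any account could read the bucket through the service principal.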

Q48. What is the primary reason to use a Direct Connect Gateway (DXGW)?

A) Increase Direct Connect connection bandwidth B) Connect a single Private VIF or Transit VIF to VPCs or Transit Gateways in multiple AWS regions (cross-region connectivity) C) Reduce Direct Connect connection costs D) Support VPN connections

Answer: B

Explanation: Direct Connect Gateway allows a single DX connection from on-premises to access VPCs in multiple AWS regions. Connecting a Private VIF to DXGW enables access to multiple regional VPCs via a single VIF. Connecting a Transit VIF to DXGW accesses multiple VPCs via Transit Gateway. Note that DXGW does not route traffic between the VPCs or gateways attached to it — it is on-premises-to-AWS connectivity only, not a VPC-to-VPC transit point.

Q49. A company wants to serve dynamic content via CloudFront. How do you optimize the cache key to maximize cache hits and reduce origin load?

A) Include all headers, query strings, and cookies in the cache key B) Include only headers, query strings, and cookies that actually affect the cached result in the Cache Policy; deliver the rest to the origin only via Origin Request Policy C) Disable all caching D) Change CloudFront distribution settings

Answer: B

Explanation: CloudFront cache hit rate optimization: the more items included in the cache key, the more cache entries are created, lowering the hit rate. Example: including User-Agent in the cache key creates separate cache entries per browser. The Cache Policy should include only parameters that actually affect the response, while information needed by the origin but unnecessary for the cache key (auth headers, etc.) is separated into the Origin Request Policy.
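The User-Agent example can be made concrete with a toy count of cache entries. Requests from 50 distinct User-Agent strings to 20 URLs produce very different entry counts depending on what the cache key includes:

```python
from itertools import product

# Toy illustration: each distinct cache key value creates a separate entry.
urls = [f"/page/{i}" for i in range(20)]
user_agents = [f"UA-{i}" for i in range(50)]

# Cache key = URL only: one entry per URL, regardless of browser.
entries_url_only = {(url,) for url, ua in product(urls, user_agents)}

# Cache key = URL + User-Agent: one entry per (URL, UA) combination.
entries_with_ua = {(url, ua) for url, ua in product(urls, user_agents)}

assert len(entries_url_only) == 20
assert len(entries_with_ua) == 20 * 50  # 50x more entries, far fewer hits each
```

Every extra entry must be populated by its own origin fetch, which is exactly why headers that don't change the response belong in the Origin Request Policy rather than the Cache Policy.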

Q50. An on-premises server needs to access an RDS instance in a private subnet of an AWS VPC. What is the optimal architecture?

A) Assign a public IP to the RDS instance B) Direct Connect or Site-to-Site VPN for on-premises to VPC connectivity, allow the on-premises CIDR in the RDS instance's security group, resolve RDS DNS names via Route 53 Resolver C) Expose the RDS proxy to the internet D) Use Lambda as an intermediate tier

Answer: B

Explanation: On-premises to RDS private access architecture: 1) Network connection via DX or VPN, 2) Allow inbound DB port (3306, 5432, etc.) from the on-premises IP range in the security group attached to the RDS instance, 3) Resolve RDS DNS names (FQDN) from on-premises via a Route 53 Resolver inbound endpoint, 4) Disable public access on RDS. Using Secrets Manager for DB credential management is recommended.

Q51. In an environment using AWS Transit Gateway, how do you implement Network Firewall inspection for Spoke VPC-to-Spoke VPC traffic?

A) Deploy Network Firewall in each spoke VPC B) Centrally deploy Network Firewall in an Inspection VPC and force-route spoke VPC traffic through it via TGW (enable Appliance Mode) C) Control direct spoke-to-spoke communication with security groups D) Inspect East-West traffic with WAF

Answer: B

Explanation: Centralized East-West traffic inspection: 1) Deploy Network Firewall endpoint in the Inspection VPC, 2) Attach the Inspection VPC to TGW, 3) In the TGW route table, route traffic between spoke VPCs through the Inspection VPC, 4) Enable Appliance Mode on the TGW attachment in the Inspection VPC (prevent asymmetric routing), 5) Within the Inspection VPC, traffic passes through Network Firewall before being forwarded to the destination spoke VPC.

Q52. What is the difference between Route 53 Latency routing and Geolocation routing?

A) Both policies produce the same result B) Latency-based routes to the optimal region based on AWS-measured network latency (faster response); Geolocation routes based on user IP location for regional compliance/content delivery C) Geolocation always provides lower latency D) Latency-based routing is based on IP address

Answer: B

Explanation: Latency routing: routes to the region that provides the fastest response based on actual inter-region network latency measured by AWS. Ideal for performance optimization. Geolocation routing: determines country/continent/state from user IP and routes to the appropriate resource for that location. Suitable for GDPR (EU data sovereignty), region-specific content (language, pricing), and compliance. A user's physical location doesn't necessarily correspond to the lowest latency.

Q53. Your VPC CIDR is 10.0.0.0/16 and you need additional CIDR for private subnets. What are the restrictions when adding a secondary CIDR block to a VPC?

A) Secondary CIDRs cannot be added B) CIDRs that do not overlap with the VPC can be added, they must not overlap with existing peering/TGW connections, up to 5 CIDR blocks allowed (default) C) The secondary CIDR must be a subset of the primary CIDR D) Existing subnets must be recreated when adding a secondary CIDR

Answer: B

Explanation: VPC secondary CIDR block restrictions: 1) Must not overlap with the primary VPC CIDR, 2) Must not overlap with the CIDRs of VPCs reachable via existing peering connections or TGW attachments, 3) RFC 1918 address ranges recommended, 4) Default maximum of 5 CIDR blocks per VPC (the quota can be raised via AWS Support), 5) Certain CIDR ranges are reserved and cannot be used (e.g., 198.19.0.0/16 is reserved for AWS internal use).
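Overlap checks 1, 2, and 5 are exactly what the stdlib `ipaddress` module computes; a minimal sketch with made-up peer CIDRs:

```python
import ipaddress

primary = ipaddress.ip_network("10.0.0.0/16")
# CIDRs of VPCs reachable via peering/TGW (illustrative values).
peer_cidrs = [ipaddress.ip_network("172.16.0.0/16")]

def can_add_secondary(candidate: str) -> bool:
    net = ipaddress.ip_network(candidate)
    if net.overlaps(primary):
        return False                      # restriction 1: overlaps the primary
    if any(net.overlaps(p) for p in peer_cidrs):
        return False                      # restriction 2: overlaps a peered VPC
    if net.overlaps(ipaddress.ip_network("198.19.0.0/16")):
        return False                      # restriction 5: reserved range
    return True

assert can_add_secondary("10.1.0.0/16")          # clean new range
assert not can_add_secondary("10.0.128.0/17")    # inside the primary /16
assert not can_add_secondary("172.16.5.0/24")    # collides with a peer
assert not can_add_secondary("198.19.0.0/16")    # reserved
```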

Q54. What is the role of a Stateless rule group in an AWS Network Firewall policy?

A) Performs complex Deep Packet Inspection (DPI) B) Per-packet processing (no session tracking), simple 5-tuple (protocol, source/dest IP, source/dest port)-based allow/deny/forward decisions C) DNS query inspection D) TLS inspection

Answer: B

Explanation: Stateless rule groups process each packet independently, with no session state tracking (unlike security groups, which are stateful). Packets are allowed, denied, or forwarded to a Stateful rule group based on the 5-tuple. Used for high-performance basic filtering. Stateful rule groups track sessions for more sophisticated inspection (HTTP inspection, Suricata IDS rules, TLS SNI inspection).
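The per-packet 5-tuple evaluation can be sketched as a toy rule engine. Rule contents, priorities, and the action names are illustrative, not Network Firewall's actual rule syntax:

```python
from typing import NamedTuple

class Packet(NamedTuple):
    proto: str
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int

# Each rule: (priority, match-dict, action). An empty match-dict matches all.
rules = [
    (10, {"proto": "tcp", "dst_port": 22}, "drop"),
    (20, {"proto": "tcp"}, "forward_to_stateful"),
    (30, {}, "pass"),
]

def evaluate(pkt: Packet) -> str:
    """Match each packet independently -- no session state is consulted."""
    for _prio, match, action in sorted(rules, key=lambda r: r[0]):
        if all(getattr(pkt, field) == value for field, value in match.items()):
            return action
    return "drop"  # nothing matched: fall back to the policy's default action

assert evaluate(Packet("tcp", "10.0.0.5", "10.1.0.9", 40000, 22)) == "drop"
assert evaluate(Packet("tcp", "10.0.0.5", "10.1.0.9", 40000, 443)) == "forward_to_stateful"
assert evaluate(Packet("udp", "10.0.0.5", "10.1.0.9", 40000, 53)) == "pass"
```

Note that `evaluate` sees only the one packet — a return SYN-ACK for an established session would be re-evaluated from scratch, which is precisely why stateless rules must be written symmetrically (or hand off to a stateful group).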

Q55. How do you integrate SD-WAN solutions with AWS?

A) SD-WAN is not supported on AWS B) Deploy SD-WAN virtual appliances (VMware SD-WAN, Cisco Viptela, etc.) on EC2 or integrate with Transit Gateway Network Manager C) Only direct API integration is possible D) Only deployable from AWS Marketplace

Answer: B

Explanation: SD-WAN integration with AWS: 1) Deploy virtual SD-WAN appliance instances on EC2, connect to on-premises SD-WAN fabric via Site-to-Site VPN or Direct Connect, 2) Register SD-WAN devices in Transit Gateway Network Manager for global network visualization, 3) Use third-party SD-WAN solution AMIs from AWS Marketplace. SD-WAN centrally manages branch networking and optimizes multiple WAN links.

Q56. How do you implement a centralized DNS architecture in a multi-account environment?

A) Create separate Route 53 hosted zones in each account B) Centrally deploy Route 53 Resolver endpoints and private hosted zones in a shared services VPC, and share Resolver rules with other accounts using RAM (Resource Access Manager) C) Use only public DNS D) Configure independent DNS servers in each account

Answer: B

Explanation: Centralized DNS architecture: 1) Deploy Route 53 inbound/outbound endpoints in a central VPC in the shared services account, 2) Associate private hosted zones for all internal domains with the central VPC, 3) Share Resolver forwarding rules with other accounts via RAM, 4) In spoke account VPCs, associate the shared Resolver rules to forward DNS to central DNS. This enables consistent namespace and DNS policy management.

Q57. VPC Flow Logs detected a specific IP (e.g., 192.0.2.1) scanning all ports. What is the immediate response?

A) Manually block the IP in every instance's SG B) Add a block rule to WAF IP set or NACL, or block that IP in Network Firewall stateless rules C) Terminate EC2 instances D) Delete the internet gateway

Answer: B

Explanation: Port scan attack response: 1) Network Firewall (fastest block — at the network layer), 2) WAF IP set (block HTTP/HTTPS traffic, in front of ALB/CloudFront), 3) NACL (block at subnet layer, but manual and has rule count limits). If GuardDuty is enabled, you may receive a Recon:EC2/PortProbeUnprotectedPort finding. Automated response uses EventBridge + Lambda to automatically update NACL or WAF IP sets.
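The automated NACL update mentioned above boils down to building one API call. A sketch of the parameters a Lambda handler might construct, in the shape expected by EC2 `CreateNetworkAclEntry` (the NACL ID and rule number are hypothetical):

```python
# Hypothetical target NACL; 192.0.2.1 is the scanning IP from the question.
NACL_ID = "acl-0123456789abcdef0"
ATTACKER_IP = "192.0.2.1"

deny_entry = {
    "NetworkAclId": NACL_ID,
    "RuleNumber": 50,       # must be lower than any allow rule that would match
    "Protocol": "-1",       # all protocols
    "RuleAction": "deny",
    "Egress": False,        # inbound rule
    "CidrBlock": f"{ATTACKER_IP}/32",
}

# A Lambda triggered by the GuardDuty finding via EventBridge would then call:
#   ec2.create_network_acl_entry(**deny_entry)
```

Because NACL rules are evaluated in ascending rule-number order and processing stops at the first match, placing the deny at a low number guarantees it wins over broader allows; the rule-count limit is why WAF IP sets or Network Firewall scale better for long block lists.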

Q58. What is the difference between a Direct Connect Hosted Connection and a Dedicated Connection?

A) The two types are technically identical B) Dedicated Connection is a physical connection dedicated to a single customer (1Gbps, 10Gbps, 100Gbps); Hosted Connection provides smaller bandwidth (50Mbps-10Gbps) through a partner on shared infrastructure C) Hosted Connections are more expensive D) Dedicated Connections support more virtual interfaces

Answer: B

Explanation: DX connection types: Dedicated Connection — dedicated physical Ethernet connection between AWS and customer; 1/10/100Gbps; up to 50 VIFs can be created. Hosted Connection — provided by AWS Direct Connect partners; 50Mbps-10Gbps; only 1 VIF can be created (hosted VIF). Hosted Connections are suitable for smaller bandwidth requirements or when faster provisioning is needed.

Q59. Why store VPC Flow Logs directly in S3 instead of CloudWatch Logs?

A) For real-time analysis B) Cost reduction (S3 storage is cheaper than CW Logs), large-scale long-term analysis with Athena, S3 Lifecycle policies for data retention, third-party SIEM integration C) For faster collection D) CloudWatch Logs does not support Flow Logs

Answer: B

Explanation: Flow Logs destination selection: CloudWatch Logs — real-time monitoring, metric filters, alarms, subscription filters (stream to Lambda, Kinesis), higher cost. S3 — cost-effective storage for large data, direct query with Athena, third-party tool integration, Glacier archiving. Both destinations can be used simultaneously. In most large-scale environments, S3 + Athena is the default; real-time security alarms use CloudWatch subscription filters to trigger Lambda.
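The kind of Athena query this enables — top sources of rejected traffic — can be demonstrated locally. Athena itself can't run offline, so this sketch executes the same SQL aggregation with stdlib `sqlite3` over a few fabricated rows, using the default (version 2) flow-log field names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE vpc_flow_logs (srcaddr TEXT, dstport INT, bytes INT, action TEXT)"
)
# Fabricated sample rows standing in for S3-delivered flow log records.
con.executemany(
    "INSERT INTO vpc_flow_logs VALUES (?, ?, ?, ?)",
    [
        ("192.0.2.1", 22, 40, "REJECT"),
        ("192.0.2.1", 22, 40, "REJECT"),
        ("198.51.100.7", 443, 5000, "ACCEPT"),
    ],
)

# The same query shape works in Athena against an external table over S3.
rows = con.execute(
    """SELECT srcaddr, dstport, COUNT(*) AS rejected_flows, SUM(bytes) AS total_bytes
       FROM vpc_flow_logs
       WHERE action = 'REJECT'
       GROUP BY srcaddr, dstport
       ORDER BY rejected_flows DESC"""
).fetchall()

assert rows == [("192.0.2.1", 22, 2, 80)]
```

In Athena the only extra step is defining the external table (with partition projection by date, typically) over the S3 log prefix; the query itself is unchanged.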

Q60. What happens when an AWS Global Accelerator health check fails?

A) Service is interrupted B) Global Accelerator automatically routes traffic to other healthy endpoints (different region or AZ) C) Users must manually execute a failover D) Must wait for DNS cache to expire

Answer: B

Explanation: Global Accelerator health checks support TCP, HTTP, and HTTPS, and continuously check the status of each endpoint (ALB, NLB, EIP, EC2). When endpoint failure is detected, traffic is automatically redirected to a healthy endpoint within 1 minute. This provides much faster failover than Route 53 health checks (which depend on DNS TTL). Traffic Dial can also be used to set a specific endpoint group's traffic percentage to 0 for manual failover.

Q61. Which statement about AWS Network Firewall's TLS inspection feature is correct?

A) Can view encryption keys for TLS traffic B) Performs TLS decryption (SSL inspection) to inspect HTTPS payloads; Network Firewall acts as MITM and uses certificates issued by ACM Private CA C) Can only inspect TLS 1.0 and below D) TLS inspection has no performance impact

Answer: B

Explanation: Network Firewall TLS inspection: can decrypt outbound or bidirectional TLS traffic and inspect it with Suricata rules. Since Network Firewall acts as MITM (Man-In-The-Middle), a CA certificate issued by ACM Private CA is required, and clients must trust this CA. After inspection, a new TLS session is established with the original destination. Performance impact exists, so proper capacity planning is required.

Q62. Which of the following is NOT a Transit Gateway attachment type?

A) VPC attachment B) VPN attachment C) Direct Connect Gateway attachment D) S3 bucket attachment

Answer: D

Explanation: Transit Gateway supported attachment types: 1) VPC — connect TGW to VPC, 2) VPN — IPSec VPN connection with Customer Gateway, 3) Direct Connect Gateway — DX connection via DXGW, 4) Transit Gateway Peering — peer with another TGW (inter-region or intra-region), 5) Connect — GRE/BGP connection with SD-WAN devices. S3 buckets are not a TGW attachment type.

Q63. In which scenario should you choose a Network Load Balancer (NLB)?

A) When HTTP cookie-based session stickiness is needed B) When ultra-low latency, extremely high throughput (millions of requests per second), TCP/UDP/TLS pass-through, static IP addresses, or PrivateLink backend service exposure is needed C) When path-based routing is needed D) When WAF integration is needed

Answer: B

Explanation: NLB use cases: 1) Extremely high throughput and ultra-low latency (microsecond latency), 2) TCP/UDP/TLS traffic pass-through (Layer 4), 3) Static IP addresses (can attach EIP), 4) Used as an endpoint for PrivateLink services, 5) Source IP preservation (client IP passed directly to backend). ALB is suited for HTTP/HTTPS (Layer 7), path/header-based routing, and WAF integration.

Q64. What is the best practice for activating both IPSec tunnels in AWS Site-to-Site VPN?

A) Activate only one tunnel at a time to prevent conflicts B) Activate both tunnels and use both via BGP or static routing (Active/Active or Active/Standby) C) The number of tunnels has no performance impact D) AWS automatically manages the second tunnel

Answer: B

Explanation: AWS Site-to-Site VPN always provides two IPSec tunnels (high availability). Activating both tunnels is best practice. With BGP: maintain BGP sessions on both tunnels and use MED values or AS Path to designate primary/secondary tunnels. Using TGW with ECMP allows both tunnels in Active/Active configuration to aggregate bandwidth. Activating only one tunnel risks connection drops during AWS maintenance.

Q65. What are methods to reduce errors caused by origin response timeout in AWS CloudFront?

A) Set the CloudFront cache TTL to 0 B) Configure secondary origin failover with an origin group, increase origin response timeout value, DDoS protection with AWS Shield Advanced, WAF rate limiting to prevent origin overload C) Delete the CloudFront distribution and access the origin directly D) Monitor origin health with Route 53 health checks

Answer: B

Explanation: Strategies to reduce CloudFront origin errors: 1) Automatic failover with origin group (switch to secondary when primary fails), 2) Increase origin response timeout (default 30s, can increase to max 60s), 3) Increase origin connection attempts (default 3), 4) WAF rate limiting to prevent origin overload, 5) AWS Shield Advanced to prevent origin overload due to DDoS attacks, 6) Scaling automation to expand origin capacity.


Study Resources

  • AWS Official ANS-C01 Exam Guide
  • AWS Networking Whitepaper (AWS Global Networking Overview)
  • AWS re:Invent Networking Session Videos
  • AWS Skill Builder ANS-C01 Official Practice Questions

Passing Tip: The networking specialty exam tests real architectural design ability. Precisely understanding the capabilities and limitations of each service (Transit Gateway's support for transitive routing, VPC Peering's lack of it and its CIDR-overlap restriction, etc.) and the ability to design solutions that balance cost optimization, performance, and security are essential.