AWS Database Specialty (DBS-C01) Practice Exam — 65 Questions

Exam Overview

Duration: 180 minutes
Questions: 65
Passing Score: 750 / 1000
Question Types: Single answer, Multiple answer
Exam Cost: USD 300

Domain Breakdown

Domain 1: Workload-Specific Database Design (26%)
Domain 2: Deployment and Migration (20%)
Domain 3: Management and Operations (18%)
Domain 4: Monitoring and Troubleshooting (18%)
Domain 5: Database Security (18%)

AWS Database Service Selection Guide

[By Workload Type]

Relational OLTP
  ├── Small scale / easy migration: RDS (MySQL, PostgreSQL, MariaDB)
  ├── High performance / cloud-native: Aurora (MySQL/PostgreSQL compatible)
  └── Enterprise / license retention: RDS Oracle, RDS SQL Server

NoSQL
  ├── Key-value / Document / Serverless: DynamoDB (single-table design)
  ├── Document (MongoDB compatible): DocumentDB
  └── Caching: ElastiCache (Redis / Memcached)

Special Purpose
  ├── Graph: Neptune (Gremlin, SPARQL)
  ├── Ledger/Audit: QLDB
  ├── Time Series: Timestream
  └── Cassandra compatible: Keyspaces

[Decision Tree]
Need transactions?
  ├── YES → Need relational?
  │           ├── YES → Aurora or RDS
  │           └── NO  → DynamoDB
  └── NO  → Search?      → OpenSearch
            Cache?       → ElastiCache
            Graph?       → Neptune
            Time series? → Timestream

Practice Questions

Domain 1: Workload-Specific Database Design

Q1. You are building a product catalog and order management system for an online store. You need thousands of reads/writes per second, 99.99% availability, and automatic storage expansion. Which database is most appropriate?

A) RDS MySQL Multi-AZ B) Aurora MySQL (Multi-AZ cluster) C) DynamoDB D) RDS PostgreSQL

Answer: B

Explanation: Aurora MySQL delivers up to 5x the performance of RDS MySQL and the cluster volume automatically scales from 10 GB to 128 TB. A Multi-AZ cluster maintains replicas across three AZs for high availability. It is ideal for high-performance OLTP workloads.

Q2. Which scenario is most suitable for Aurora Serverless v2?

A) Consistent high-volume transaction processing 24 hours a day B) Irregular, unpredictable traffic where you want to pay only for what you use C) Migrating an on-premises Oracle database D) An analytical data warehouse

Answer: B

Explanation: Aurora Serverless v2 automatically scales in fine-grained increments from 0.5 ACU to 128 ACU within seconds. It is ideal for development/test environments, intermittent traffic, and SaaS applications with near-zero traffic at night. You are charged only for the capacity consumed.

Q3. You want to store user profiles (UserID), user orders (OrderID), and order items (ItemID) in a single DynamoDB table design. What is the correct approach?

A) Separate each entity into its own table B) Add an entity type prefix to the partition key and use the sort key to express hierarchy C) Store all attributes in a single JSON column D) Create a GSI per entity

Answer: B

Explanation: In DynamoDB single-table design, the partition key (PK) uses entity type prefixes like USER#UserID or ORDER#OrderID. The sort key (SK) uses values like PROFILE, ORDER#OrderID, or ITEM#ItemID to express hierarchy. This allows one table to efficiently serve multiple access patterns.
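
The key scheme described above can be sketched as plain key-builder functions. This is an illustrative sketch of one common layout; the helper names and the exact SK hierarchy are assumptions, not an AWS API.

```python
# Single-table key construction (sketch). Prefixes USER#/ORDER#/ITEM# follow
# the explanation above; the functions themselves are illustrative helpers.

def user_profile_key(user_id: str) -> dict:
    return {"PK": f"USER#{user_id}", "SK": "PROFILE"}

def order_key(user_id: str, order_id: str) -> dict:
    return {"PK": f"USER#{user_id}", "SK": f"ORDER#{order_id}"}

def order_item_key(user_id: str, order_id: str, item_id: str) -> dict:
    # Items sort directly under their order, so one Query with
    # SK begins_with "ORDER#o42" returns the order plus all its items.
    return {"PK": f"USER#{user_id}", "SK": f"ORDER#{order_id}#ITEM#{item_id}"}

print(order_item_key("u1", "o42", "i7"))
```

Because all three entities share the same PK, a single Query on `PK = USER#u1` can retrieve a user's profile, orders, and items together.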

Q4. What is the most accurate distinction between ElastiCache Redis Cluster Mode Enabled and Cluster Mode Disabled?

A) Cluster Mode Enabled supports only a single node B) Cluster Mode Enabled distributes data across multiple shards, supporting larger memory capacity and write throughput C) Cluster Mode Disabled delivers superior read performance D) Cluster Mode Enabled does not support Multi-AZ

Answer: B

Explanation: ElastiCache Redis Cluster Mode Enabled distributes data across multiple shards (each shard: one primary + replicas). Total memory and write throughput scale linearly with shard count. Cluster Mode Disabled uses a single shard (one primary + read replicas) and can only scale reads.

Q5. What is an appropriate scenario for choosing Amazon Neptune?

A) Processing tens of thousands of financial transactions per second B) Relationship analysis for social network friend recommendations or fraud detection C) Storing large volumes of time-series sensor data D) Storing an e-commerce product catalog

Answer: B

Explanation: Neptune is a graph database that efficiently models and queries relationships between entities. It is well-suited for social graph friend-of-friend traversal, financial fraud pattern detection, knowledge graphs, and recommendation engines. It supports Gremlin (Property Graph) and SPARQL (RDF) query languages.

Q6. When choosing between RDS for PostgreSQL and Aurora PostgreSQL, in which situation would you prefer RDS PostgreSQL?

A) When maximum performance is required B) When specific PostgreSQL extensions or plugins are not supported in Aurora C) When automatic scaling is needed D) When global distribution is required

Answer: B

Explanation: RDS PostgreSQL is preferred when specific extensions not supported by Aurora (e.g., certain advanced PostGIS features, specific fdw extensions) are required, or when immediate support for a particular major version is needed. Aurora's internal architecture differs from standard PostgreSQL and does not support every PostgreSQL feature.

Q7. What is the primary purpose of the DynamoDB GSI overloading technique?

A) Combine multiple GSIs into one to save costs B) Reuse a single GSI for multiple query patterns across different entity types C) Dynamically change the projected attributes of a GSI D) Automatically optimize GSI write capacity

Answer: B

Explanation: GSI overloading stores different semantic values for different entity types in the GSI's PK and SK. For example, storing STATUS#active for users, DATE#2026-03 for orders, and CATEGORY#electronics for products in gsi1pk allows a single GSI to serve querying users by status, orders by date, and products by category.
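
A minimal sketch of how the overloaded attributes might be populated per entity type. The attribute names gsi1pk/gsi1sk and the entity dict shape are assumptions for illustration.

```python
# GSI overloading (sketch): each entity type writes a different kind of value
# into the same gsi1pk attribute, so one index serves three query patterns.

def gsi1_attrs(entity: dict) -> dict:
    kind = entity["type"]
    if kind == "user":
        return {"gsi1pk": f"STATUS#{entity['status']}", "gsi1sk": f"USER#{entity['id']}"}
    if kind == "order":
        # Truncate the ISO date to YYYY-MM so orders group by month.
        return {"gsi1pk": f"DATE#{entity['date'][:7]}", "gsi1sk": f"ORDER#{entity['id']}"}
    if kind == "product":
        return {"gsi1pk": f"CATEGORY#{entity['category']}", "gsi1sk": f"PRODUCT#{entity['id']}"}
    raise ValueError(f"unknown entity type: {kind}")

print(gsi1_attrs({"type": "order", "id": "o1", "date": "2026-03-15"}))
```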

Q8. What is the key feature of Amazon QLDB (Quantum Ledger Database)?

A) Ultra-fast in-memory caching B) Maintains an immutable, cryptographically verifiable transaction log C) Storing graph relationship data D) Cassandra API compatibility

Answer: B

Explanation: QLDB maintains a complete and verifiable history of data changes in an immutable journal secured with SHA-256 hashes. It is suitable for regulated industries (finance, supply chain, HR) requiring audit trails. All changes are tracked with full history and integrity is verifiable cryptographically.

Q9. In which scenario would you choose ElastiCache Memcached over Redis?

A) When session storage, high availability, and persistence are needed B) When simple object caching is needed and maximum CPU core utilization via multi-threading is desired C) When Pub/Sub messaging is needed D) When geographically distributed caching is needed

Answer: B

Explanation: Memcached uses a multi-threaded architecture optimized for simple key-value caching. It scales linearly with CPU core count. When persistence, replication, Pub/Sub, snapshots, and rich data structures are not needed, Memcached provides excellent raw caching performance. Redis offers richer data structures and high availability.

Q10. What is the primary use case for Aurora Global Database?

A) Read scaling within a single region B) Disaster recovery (RPO in seconds, RTO under 1 minute) and reduced global read latency C) Optimizing Aurora Serverless v2 workloads D) Processing large-scale batch jobs

Answer: B

Explanation: Aurora Global Database consists of one primary region and up to five secondary regions. Inter-region replication lag is typically under 1 second. In a disaster, promoting a secondary region achieves RTO under 1 minute and RPO in seconds. Users in each region benefit from low-latency local reads.

Q11. Which workload is most suitable for Amazon Timestream?

A) Social media relationship data B) IoT sensor readings, application metrics, and operational events (time-series data) C) Financial transaction ledger management D) E-commerce product listings

Answer: B

Explanation: Timestream is a serverless database specialized for time-series data. It provides built-in functions for time-based queries (aggregation, interpolation, smoothing). Recent data is stored in memory; historical data is automatically moved to S3-backed magnetic storage. Time-series specific queries are far faster than on RDS.

Q12. Which statement correctly describes the standby instance in RDS Multi-AZ deployment?

A) The standby can serve read queries B) The standby maintains synchronous replication for automatic failover but is not directly accessible C) The standby is located in a different AWS Region D) Failover to standby must be triggered manually

Answer: B

Explanation: The RDS Multi-AZ standby maintains synchronous replication with the primary instance. The standby cannot serve read or write traffic directly (use Read Replicas for reads). When the primary fails, the DNS endpoint is automatically switched to the standby (approximately 60–120 seconds).

Domain 2: Deployment and Migration

Q13. You are migrating an on-premises Oracle database to AWS and want to avoid Oracle licensing costs. What is the best migration path?

A) RDS for Oracle (bring your own license — BYOL) B) Use SCT + DMS to convert to Aurora PostgreSQL or RDS PostgreSQL C) RDS for Oracle (License Included) D) Self-install Oracle on EC2

Answer: B

Explanation: Migrating from Oracle to Aurora PostgreSQL or RDS PostgreSQL completely eliminates Oracle licensing costs. AWS SCT (Schema Conversion Tool) converts Oracle schema to PostgreSQL, and DMS migrates the data. Assess Oracle-specific features (PL/SQL, Oracle-specific functions, packages) using SCT to understand conversion complexity.

Q14. What is the benefit of using RDS Blue/Green Deployments?

A) Replicates the database to another region B) Minimizes downtime by switching traffic from production (Blue) to a pre-validated staging environment (Green) C) Automatically performs minor version upgrades D) Creates a new database from a snapshot

Answer: B

Explanation: RDS Blue/Green Deployments create a complete copy (Green) of the current production (Blue) and keep it in sync. Changes (new versions, parameter changes, schema updates) are validated in the Green environment. After validation, traffic is switched from Blue to Green in seconds. Rollback is straightforward.

Q15. Is SCT (Schema Conversion Tool) required when migrating from MySQL to Aurora MySQL using DMS?

A) Yes, MySQL and Aurora MySQL are different engines, so SCT is needed B) No, homogeneous migration (MySQL to Aurora MySQL) can be done with DMS alone without SCT C) SCT must always be run first D) Aurora MySQL is not supported as a DMS target

Answer: B

Explanation: SCT is required for heterogeneous migrations (Oracle to PostgreSQL, SQL Server to MySQL, etc.) where schema conversion is needed. Homogeneous migrations (MySQL to Aurora MySQL, PostgreSQL to Aurora PostgreSQL) have compatible schemas, so DMS can perform the migration without SCT.

Q16. What is the recommended approach for performing an Aurora major version upgrade (e.g., Aurora MySQL 2.x to 3.x)?

A) Perform an in-place upgrade immediately B) Create a new cluster from a snapshot, upgrade it, validate, then switch traffic C) Upgrade a read replica to the major version first D) Use DMS to migrate data to the new cluster

Answer: B

Explanation: Aurora major version upgrades carry risk. The recommended procedure is: 1) Create a snapshot of the current cluster → 2) Create a new cluster from the snapshot and upgrade to the major version → 3) Validate application compatibility → 4) Switch DNS (CNAME) or use Blue/Green Deployments to cut over.

Q17. How do you migrate a DynamoDB table to another AWS account or region?

A) Use DMS for DynamoDB-to-DynamoDB migration B) Export the DynamoDB table to S3, then import from S3 in the target account/region C) Real-time replication via Kinesis Data Streams + Lambda D) Copy the CloudFormation stack

Answer: B

Explanation: DynamoDB Export to S3 writes table data in DynamoDB JSON or ION format to S3. The target account/region then uses DynamoDB Import from S3 to load the data into a new table. DynamoDB Global Tables enables continuous multi-region replication.

Q18. What is the most important consideration when migrating from on-premises SQL Server to Amazon RDS for SQL Server?

A) SQL Server licenses can be transferred to AWS B) Verify whether features like SQL Server Agent, SSIS, SSRS, and SSAS are supported C) RDS SQL Server does not support Multi-AZ D) RDS SQL Server supports a maximum of 1 TB storage

Answer: B

Explanation: RDS for SQL Server supports SQL Server Agent but does not support SSIS, SSRS, or SSAS. If these services are in use, you must host them on EC2 or evaluate alternative services. Also verify the specific SQL Server versions supported by RDS.

Q19. What configuration is required on the source MySQL database for DMS CDC (Change Data Capture)?

A) Create a replication slot B) Enable binary logging in ROW format: set binlog_format=ROW C) Create a read replica D) Processed automatically without any parameter group changes

Answer: B

Explanation: To use DMS CDC with MySQL, binary logging must be enabled in ROW format. Required settings: binlog_format=ROW and binlog_row_image=FULL, with binary logs retained long enough for DMS to read them (expire_logs_days on self-managed MySQL; on RDS MySQL, set binlog retention hours via the mysql.rds_set_configuration stored procedure). On RDS MySQL, binlog_format is configured via the parameter group. PostgreSQL uses replication slots instead.
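
The required settings can be checked before starting a DMS task. This is a hedged sketch: `current_settings` stands in for the result of a real `SHOW VARIABLES` query, and `cdc_preflight` is an illustrative helper, not a DMS API.

```python
# Pre-flight check for DMS CDC on MySQL (sketch). REQUIRED lists the
# variables named in the explanation above.

REQUIRED = {"binlog_format": "ROW", "binlog_row_image": "FULL"}

def cdc_preflight(current_settings: dict) -> list:
    """Return the variables that still need changing before CDC can run."""
    return [name for name, wanted in REQUIRED.items()
            if current_settings.get(name, "").upper() != wanted]

# A source still using statement-based logging fails on binlog_format only.
print(cdc_preflight({"binlog_format": "STATEMENT", "binlog_row_image": "FULL"}))
```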

Q20. When evaluating migration complexity, what is the difference between Lift-and-Shift and Re-architect?

A) Both approaches have the same migration complexity B) Lift-and-shift moves to AWS with minimal changes; re-architect redesigns using cloud-native services C) Re-architect is always cheaper D) Lift-and-shift guarantees the same performance as on-premises

Answer: B

Explanation: Lift-and-shift moves existing databases to EC2 or RDS with minimal changes. It is fast and low-risk but does not fully leverage cloud advantages. Re-architect redesigns with cloud-native services like DynamoDB or Aurora, gaining more cloud benefits but requiring greater complexity and cost.

Domain 3: Management and Operations

Q21. What is the difference between RDS Parameter Groups and Option Groups?

A) Parameter Groups configure networking; Option Groups configure security B) Parameter Groups configure database engine settings; Option Groups enable additional features (e.g., Oracle TDE, MSSQL SQL Server Audit) C) Both groups serve the same purpose D) Option Groups are only used with PostgreSQL

Answer: B

Explanation: RDS Parameter Groups configure database engine parameters such as max_connections and innodb_buffer_pool_size. Option Groups enable add-on features: Oracle TDE (Transparent Data Encryption), MSSQL SQL Server Audit, MySQL memcached plugin, and similar optional capabilities.

Q22. In RDS Performance Insights, you see many db/lock/row wait events. What does this indicate?

A) CPU bottleneck B) Row-level lock contention — queries waiting for other transactions to release row locks C) Network latency D) Storage I/O bottleneck

Answer: B

Explanation: The db/lock/row wait event indicates row-level lock contention. Transaction A holds a lock on specific rows while Transaction B waits to update or delete the same rows. Remediation includes reducing transaction size, optimizing indexes (to narrow lock scope), and analyzing deadlocks.

Q23. How does Aurora Auto Scaling work?

A) Automatically increases the compute size of the primary instance B) Automatically adds or removes read replicas based on CloudWatch metrics (CPU utilization, connection count) C) Automatically expands storage D) Automatically creates Aurora clusters across multiple regions

Answer: B

Explanation: Aurora Auto Scaling dynamically adds or removes Aurora read replicas in response to CloudWatch metrics (average CPU utilization or average connection count). During traffic surges, replicas are added automatically; when traffic drops, unnecessary replicas are removed to optimize cost.

Q24. Which statement correctly compares DynamoDB On-Demand and Provisioned capacity modes?

A) On-Demand requires manually specifying read/write capacity B) Provisioned mode is cost-effective for predictable traffic patterns; On-Demand is suitable for unpredictable traffic C) On-Demand does not include a free tier D) Provisioned mode cannot be scaled

Answer: B

Explanation: Provisioned mode is cost-effective for predictable workloads: specify RCU/WCU and enable Auto Scaling. On-Demand mode automatically scales to traffic but has a higher per-request cost. On-Demand suits new tables, unpredictable traffic, and development/test environments.
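
A back-of-envelope comparison makes the trade-off concrete. The rates below are illustrative placeholders, not current AWS pricing; the point is only that steady, fully utilized traffic favors provisioned capacity while the same request volume costs more on-demand.

```python
# Rough monthly cost comparison of DynamoDB capacity modes.
# All rates are ILLUSTRATIVE assumptions, not real pricing.

HOURS_PER_MONTH = 730

def provisioned_cost(rcu: int, wcu: int,
                     rcu_hr: float = 0.00013, wcu_hr: float = 0.00065) -> float:
    """Monthly cost of provisioned capacity held around the clock."""
    return (rcu * rcu_hr + wcu * wcu_hr) * HOURS_PER_MONTH

def on_demand_cost(reads: int, writes: int,
                   per_read: float = 0.25e-6, per_write: float = 1.25e-6) -> float:
    """Monthly cost of the same request volume billed per request."""
    return reads * per_read + writes * per_write

# Steady 100 reads/s and 100 writes/s, fully utilizing 100 RCU / 100 WCU:
steady_requests = 100 * 3600 * HOURS_PER_MONTH
print(provisioned_cost(100, 100), on_demand_cost(steady_requests, steady_requests))
```

For spiky traffic the comparison flips: provisioned capacity sized for the peak sits idle most of the month, while on-demand bills nothing during quiet periods.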

Q25. What is the key difference between RDS automated backups and manual snapshots?

A) Automated backups are not encrypted B) Automated backups are deleted after the retention period (1–35 days); manual snapshots are retained until explicitly deleted C) Manual snapshots support PITR (Point-in-Time Recovery) D) Automated backups cannot be copied to another region

Answer: B

Explanation: RDS automated backups are automatically deleted after the configured retention period (up to 35 days). Manual snapshots are retained permanently until explicitly deleted. Automated backups also retain transaction logs for Point-in-Time Recovery (PITR).

Q26. What is the primary benefit of the Aurora Backtrack feature?

A) Replicates data to another region B) Rewinds the database to a previous point in time (up to 72 hours in the past) without restoring from a snapshot C) Automatically performs major version upgrades D) Automatically distributes read traffic

Answer: B

Explanation: Aurora Backtrack quickly rewinds the database to a specific point in the past. It is much faster than restoring from a snapshot (completes in minutes) and is ideal for rapid recovery after an accidental DELETE or UPDATE. The backtrack window is configurable up to 72 hours, and the feature is available for Aurora MySQL only.

Q27. What happens when you enable DynamoDB Global Tables?

A) Multi-AZ replication within the same region B) Fully managed multi-master replication across multiple AWS Regions — each region can accept writes C) Creates read replicas in other regions D) Provides read-only access to another region's DynamoDB table

Answer: B

Explanation: DynamoDB Global Tables provides fully managed multi-master replication across multiple AWS Regions. Reads and writes are accepted in every region. Inter-region replication typically completes in under 1 second. If one region fails, other regions continue operating normally.

Domain 4: Monitoring and Troubleshooting

Q28. ProvisionedThroughputExceededExceptions are occurring frequently in DynamoDB. Which CloudWatch metrics should you examine first?

A) ConsumedReadCapacityUnits and ConsumedWriteCapacityUnits B) SystemErrors and UserErrors C) SuccessfulRequestLatency D) ReturnedItemCount

Answer: A

Explanation: ProvisionedThroughputExceededException occurs when read or write capacity exceeds the provisioned amount. Examine ConsumedReadCapacityUnits and ConsumedWriteCapacityUnits to determine which capacity type (read or write) is being exceeded. Also review the ThrottledRequests metric.

Q29. What is the correct way to enable and analyze the slow query log in RDS MySQL?

A) Track slow queries with CloudTrail B) Set slow_query_log=1 and configure long_query_time in the RDS parameter group, then analyze via CloudWatch Logs or the RDS console C) Analyze query patterns with VPC Flow Logs D) Detect slow queries with Enhanced Monitoring

Answer: B

Explanation: Enable slow query logging in the RDS MySQL parameter group by setting slow_query_log=1 and configuring the long_query_time threshold (in seconds). Logs can be viewed in the RDS console or published to CloudWatch Logs for analysis with CloudWatch Insights. Set log_queries_not_using_indexes=1 to also track queries that skip indexes.
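
The parameter changes above can be expressed as the `Parameters` payload that boto3's `rds.modify_db_parameter_group()` accepts. This is a sketch: the group name is a placeholder and the API call is left commented out since it requires credentials and an existing parameter group.

```python
# Slow-query-log settings as an RDS parameter-group update payload (sketch).
# long_query_time=2 is an example threshold; tune it for your workload.

slow_log_params = [
    {"ParameterName": "slow_query_log", "ParameterValue": "1",
     "ApplyMethod": "immediate"},
    {"ParameterName": "long_query_time", "ParameterValue": "2",
     "ApplyMethod": "immediate"},
    {"ParameterName": "log_queries_not_using_indexes", "ParameterValue": "1",
     "ApplyMethod": "immediate"},
]

# import boto3
# boto3.client("rds").modify_db_parameter_group(
#     DBParameterGroupName="my-mysql-params",   # placeholder group name
#     Parameters=slow_log_params)

print([p["ParameterName"] for p in slow_log_params])
```

These are dynamic parameters, so `ApplyMethod: immediate` takes effect without a reboot.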

Q30. The AuroraReplicaLag metric is high in Aurora PostgreSQL. What does this mean and how do you resolve it?

A) High CPU on primary instance → upgrade instance size B) Read replica is falling behind primary — upgrade replica instance size or optimize write operations C) Storage volume is full → expand storage D) Network latency → review VPC configuration

Answer: B

Explanation: AuroraReplicaLag measures the lag for a read replica to apply changes from the primary. High lag means reads from that replica may return stale data. Remediation: upgrade the read replica instance size, optimize write-intensive operations, or route write queries exclusively to the primary instance.

Q31. How does RDS Enhanced Monitoring differ from CloudWatch standard monitoring?

A) Enhanced Monitoring is free B) Enhanced Monitoring provides OS-level metrics (processes, memory usage) at 1–60 second granularity C) CloudWatch standard monitoring provides finer-grained metrics D) Enhanced Monitoring only tracks database connections

Answer: B

Explanation: CloudWatch standard monitoring collects hypervisor-level instance metrics at 1-minute or 5-minute intervals. Enhanced Monitoring uses an agent installed on the RDS instance to collect OS-level process lists, memory usage, CPU scheduling, and more at intervals from 1 to 60 seconds.

Q32. What is the primary purpose of enabling Database Activity Streams?

A) Database query caching B) Real-time streaming of all database activity to Kinesis Data Streams for compliance and security auditing C) Sending database metrics to CloudWatch D) Automating database backups

Answer: B

Explanation: Database Activity Streams streams all database activity (SQL statements, connections, authentication) in real time to Kinesis Data Streams encrypted with KMS. This enables integration with SIEM tools for real-time security monitoring and audit trails. Even DBAs cannot access the stream contents, preventing tampering.

Q33. The Aurora CommitLatency metric suddenly increases. What are the probable causes?

A) Too many read replicas B) Storage I/O bottleneck, excessive write workload, or network latency C) Connection pooling misconfiguration D) Parameter group change needed

Answer: B

Explanation: CommitLatency measures the time to complete a transaction commit. A sudden increase suggests Aurora storage layer I/O bottleneck, excessive write transactions, or log flush delays. Analyze together with WriteLatency, DiskQueueDepth, and FreeableMemory metrics to identify the root cause.

Q34. A Hot Partition problem occurs in DynamoDB where excessive traffic concentrates on a specific partition. How do you resolve it?

A) Increase the overall RCU/WCU of the table B) Change the partition key design to use a high-cardinality key or add a random suffix C) Add a GSI to distribute partitions D) Switch to On-Demand mode

Answer: B

Explanation: Hot partitions occur when the partition key has low cardinality or traffic concentrates on specific keys. Solutions: 1) Choose a high-cardinality partition key (UserID, UUID), 2) Add a random suffix to partition keys (Write Sharding), 3) Apply appropriate distribution for time-based keys. Switching to On-Demand hides the bottleneck but does not solve it fundamentally.
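
Write sharding (solution 2 above) can be sketched in a few lines. The shard count of 10 and the `DAY#` key layout are assumptions for illustration; a deterministic hash of the item ID is used so reads can find the shard again.

```python
import zlib

# Write-sharding sketch: fan a hot time-based partition key out across
# SHARDS sub-partitions by appending a deterministic suffix.

SHARDS = 10  # assumption: sized to the needed write throughput

def sharded_pk(base_key: str, item_id: str) -> str:
    # crc32 is stable across processes (unlike Python's str hash),
    # so the same item always lands on the same shard.
    suffix = zlib.crc32(item_id.encode()) % SHARDS
    return f"{base_key}#{suffix}"

# Writes for one day spread over DAY#2026-03-15#0 .. DAY#2026-03-15#9;
# reading the whole day means one Query per shard, merged client-side.
print(sharded_pk("DAY#2026-03-15", "order-123"))
```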

Q35. The FreeStorageSpace metric on an RDS MySQL instance is decreasing rapidly. What is the immediate resolution?

A) Restart the database B) Enable RDS Storage Autoscaling or manually increase the storage allocation C) Add a read replica D) Switch to Multi-AZ

Answer: B

Explanation: Enabling RDS Storage Autoscaling automatically increases storage when FreeStorageSpace falls below the specified threshold. If storage is already low, manually increase the allocated storage (note: reducing is not possible). Also consider reducing binary log retention and archiving old data.

Domain 5: Database Security

Q36. What is the benefit of enabling IAM authentication for RDS?

A) Faster authentication processing B) Connect using short-lived IAM-generated authentication tokens instead of a database password C) Automatic re-authentication during Multi-AZ failover D) Automatic data encryption activation

Answer: B

Explanation: With RDS IAM authentication enabled, you generate a 15-minute authentication token from AWS IAM credentials to connect without storing a database password in the application. Database access is controlled via IAM policies. It integrates cleanly with EC2 instance roles and Lambda execution roles.

Q37. What is the advantage of the Alternating Users strategy for automatic credential rotation in AWS Secrets Manager?

A) The old credentials remain valid during rotation, enabling zero-downtime rotation B) Security is stronger C) Credential storage costs are reduced D) Rotation frequency can be increased

Answer: A

Explanation: The Alternating Users strategy maintains two users, rotating between them. During rotation, only the inactive user's password is changed while the active user remains valid. This allows the application to continue connecting with the current password during rotation — no downtime. Single-user rotation can momentarily break connections.

Q38. You deployed an RDS database in a VPC private subnet and on-premises applications need access. What is the most secure connection method?

A) Assign a public IP to RDS and access over the internet B) Private network connection via Direct Connect or Site-to-Site VPN C) Access through an internet gateway D) Export RDS snapshots to on-premises

Answer: B

Explanation: To securely access RDS in a VPC private subnet from on-premises, use AWS Direct Connect (dedicated line) or AWS Site-to-Site VPN. Both bypass the internet and communicate via private networks, improving security. Explicitly allow the on-premises IP range in security group inbound rules.

Q39. You want to enable encryption at rest for an RDS for MySQL instance. What is the critical constraint?

A) Encryption can be enabled or disabled at any time B) You cannot directly convert an existing unencrypted RDS instance — you must create a snapshot, copy it as encrypted, then restore C) Encryption is only supported with Multi-AZ D) Enabling encryption reduces performance by 50%

Answer: B

Explanation: Encryption cannot be enabled on an already-created RDS instance. You must: create a snapshot of the unencrypted instance → copy the snapshot enabling encryption → restore a new instance from the encrypted snapshot. This is why it is important to enable encryption from the start when creating new databases.

Q40. You want to implement fine-grained access control on a DynamoDB table so that specific users can only access items matching their own UserID. How do you do this?

A) Apply column-level security in DynamoDB B) Use the dynamodb:LeadingKeys condition in IAM policies to restrict the partition key value to the user's ID C) Apply row-level security in DynamoDB via Lake Formation D) Network-level access control via VPC endpoint

Answer: B

Explanation: The dynamodb:LeadingKeys IAM condition key restricts which partition key values a user can access. If you constrain the partition key with the aws:userid policy variable for the current IAM identity, each user can only read and write items whose partition key matches their own UserID.
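
A policy of this shape might look as follows. The table ARN and action list are placeholders; the condition block is the part the question is about.

```python
import json

# IAM policy restricting partition-key access to the caller's own ID (sketch).
# Resource ARN and the action list are illustrative placeholders.

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:PutItem"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/MyTable",
        "Condition": {
            "ForAllValues:StringEquals": {
                # ${aws:userid} resolves to the caller's identity at
                # evaluation time, pinning access to matching partition keys.
                "dynamodb:LeadingKeys": ["${aws:userid}"]
            }
        }
    }]
}

print(json.dumps(policy, indent=2))
```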

Advanced Scenarios

Q41. A fintech startup is building a high-performance payment system. Requirements: 1) Tens of thousands of transactions per second, 2) 99.999% availability, 3) Recovery within 30 seconds of regional failure, 4) Complete audit trail. What architecture do you choose?

A) DynamoDB (single-table) + Lambda B) Aurora Global Database (multi-region) + Database Activity Streams + Secrets Manager C) RDS MySQL Multi-AZ + Read Replicas D) ElastiCache Redis + DynamoDB

Answer: B

Explanation: Aurora Global Database delivers RTO under 1 minute and RPO in seconds for regional failures. Database Activity Streams provides real-time audit trails of all transaction activity to support financial compliance. Secrets Manager automatically rotates credentials for enhanced security.

Q42. When migrating a legacy Oracle database to Aurora PostgreSQL, select two sources of migration complexity.

A) Converting Oracle SEQUENCE to PostgreSQL B) Converting PL/SQL stored procedures to PL/pgSQL C) Renaming tables D) Data type conversion (DATE, NUMBER) E) Dropping indexes

Answer: B, D

Explanation: Key complexity factors for Oracle-to-PostgreSQL migration: 1) PL/SQL to PL/pgSQL conversion requires translating Oracle-specific functions, packages, and cursors. 2) Data types: Oracle's NUMBER, DATE, VARCHAR2 map to NUMERIC, TIMESTAMP, VARCHAR in PostgreSQL, with some behavioral differences. Use SCT to assess what can be auto-converted.

Q43. An application uses ElastiCache Redis as a session cache. What configuration minimizes session data loss in the event of a Redis cluster node failure?

A) Redis Cluster Mode Disabled with no replicas B) Redis Cluster Mode Enabled + 2 replicas per shard + Multi-AZ enabled C) Replace with a Memcached cluster D) Create periodic Redis snapshots

Answer: B

Explanation: With Cluster Mode Enabled and 2 replicas per shard, when a node fails, a replica is automatically promoted to primary. Two replicas tolerate the simultaneous failure of up to two nodes. Session data is replicated in memory, surviving node failures.

Q44. What DynamoDB design pattern isolates tenant data in a multi-tenant SaaS application?

A) Create a separate DynamoDB table per tenant B) Include the tenant ID in the partition key within a single table for data isolation C) Create a separate AWS account per tenant D) Replicate per-tenant data via DynamoDB Streams

Answer: B

Explanation: In most cases, include the tenant ID in the partition key (e.g., TENANT#tenantId#ENTITY#entityId) in a single table. Use the dynamodb:LeadingKeys IAM condition to restrict each tenant to only their own partition key range. For strict isolation requirements with large tenant counts, per-table isolation may be considered.

Q45. An Aurora MySQL cluster has five read replicas serving read workloads. You want to direct analytical queries to specific replicas only. How do you do this?

A) Configure query routing with RDS Proxy B) Use individual replica endpoints or Aurora Custom Endpoints C) Route queries with CloudFront D) Use Route 53 weighted routing to concentrate traffic on specific replicas

Answer: B

Explanation: Aurora Custom Endpoints let you route connections to a specific set of replicas. Create a custom endpoint grouping large-instance replicas for analytical queries, and use the cluster reader endpoint for general reads. Individual instance endpoints for each replica are also available.

Q46. An RDS MySQL instance frequently exceeds max_connections. How do you resolve this without changing application code?

A) Upgrade to a larger RDS instance (max_connections increases automatically with memory) B) Deploy Amazon RDS Proxy to implement connection pooling C) Add read replicas to distribute connections D) Directly increase the max_connections parameter value

Answer: B

Explanation: Amazon RDS Proxy sits between the application and RDS and multiplexes thousands of application connections into fewer actual database connections. Without changing application code, you only need to point the endpoint to the Proxy endpoint. This is especially useful for Lambda functions that can create connection bursts.

Q47. What is the benefit of enabling the pg_stat_statements extension in Aurora PostgreSQL?

A) Automatically optimizes slow queries B) Tracks execution statistics (call count, average time, total time) for all executed SQL statements C) Automatically manages the database connection count D) Automatically recommends index creation

Answer: B

Explanation: pg_stat_statements is a PostgreSQL extension that tracks statistics for all executed SQL statements: call counts, total/average/max/min execution time, rows processed, and more. Use it with Performance Insights to identify the most frequently executed or slowest queries. Enable it in the RDS PostgreSQL parameter group.

Q48. In a large microservices architecture, each service uses its own database. Container auto-scaling causes connection surges to RDS. How do you resolve this?

A) Create a separate RDS instance per microservice B) Deploy RDS Proxy in front of each RDS database to implement connection pooling C) Upgrade to a larger RDS instance D) Replace all RDS instances with DynamoDB

Answer: B

Explanation: Container-based microservices can instantaneously create hundreds or thousands of database connections during auto-scaling, exceeding max_connections. RDS Proxy pools these connections so the actual database maintains only a manageable number. It also supports IAM authentication for enhanced security.

Q49. You need to upgrade Amazon DocumentDB compatibility from MongoDB 4.0 to 5.0. What should you consider?

A) DocumentDB is fully compatible with all MongoDB versions B) DocumentDB is compatible with the MongoDB wire protocol but does not support all MongoDB features — verify feature support C) DocumentDB must be migrated to MongoDB Atlas D) An upgrade causes data loss

Answer: B

Explanation: Amazon DocumentDB is compatible with the MongoDB wire protocol but does not fully implement every MongoDB feature. Before upgrading, verify: MongoDB driver versions in use, commands and aggregation pipeline stages supported by DocumentDB, and the support status of MongoDB-specific features (Change Streams, Transactions, etc.).

Q50. Secrets Manager automatic rotation is failing. When examining Lambda function errors in CloudWatch Logs, what is the most common cause?

A) Secrets Manager cannot access RDS directly B) The Lambda function is not in the same VPC as the RDS instance, preventing network access C) KMS key permission error D) Insufficient IAM role permissions

Answer: B

Explanation: The Secrets Manager rotation Lambda must connect to the database to change the password. If Lambda is in a different VPC than RDS, or if the security group does not allow Lambda's access, the connection fails. Deploy Lambda in the same VPC as RDS and configure security groups correctly.

Q51. You implement a pattern that caches Aurora MySQL query results in ElastiCache Redis. How do you handle cache invalidation on database updates?

A) Rely solely on ElastiCache TTL B) Explicitly delete the relevant cache key in the application when data changes (Cache-Aside pattern) C) Automatically update the cache via DynamoDB Streams D) Invoke Lambda via an Aurora trigger to invalidate the cache

Answer: B

Explanation: The Cache-Aside (Lazy Loading) pattern is the most common caching strategy. On reads, check the cache first; on a miss, read from the database and populate the cache. On writes, update the database and delete the corresponding cache key. TTL serves as an additional safety net.
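The read/write flow above can be sketched in a few lines. Plain dicts stand in for Redis and Aurora so the example runs anywhere; in production the cache operations would be Redis GET/SET/DEL calls.

```python
# Minimal cache-aside sketch: dicts stand in for ElastiCache Redis and Aurora.

cache = {}                                            # stand-in for Redis
db = {"product:1": {"name": "widget", "price": 10}}   # stand-in for Aurora

def get_product(key):
    if key in cache:                 # 1) check the cache first
        return cache[key]
    value = db.get(key)              # 2) miss: read the database
    if value is not None:
        cache[key] = value           # 3) populate the cache (SET with TTL in Redis)
    return value

def update_product(key, value):
    db[key] = value                  # 1) write the database first
    cache.pop(key, None)             # 2) invalidate: delete the cache key (DEL)

get_product("product:1")             # warms the cache
update_product("product:1", {"name": "widget", "price": 12})
print(get_product("product:1")["price"])  # → 12 (fresh value, re-cached)
```

Deleting rather than updating the cache key on writes avoids writing stale data during concurrent updates; the next read repopulates it.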

Q52. An RDS database in one AWS account must be accessed by a Lambda function in a different AWS account. What is the most secure approach?

A) Enable public access on RDS and restrict with IAM policies B) Connect the two VPCs privately via VPC Peering or Transit Gateway, then use IAM authentication C) Share the RDS snapshot with the target account D) Use a cross-account IAM role to access RDS directly

Answer: B

Explanation: The most secure approach: establish private network connectivity between the two account VPCs via VPC Peering or AWS Transit Gateway, grant the Lambda execution role RDS IAM authentication permissions, and connect using RDS IAM authentication. RDS is never exposed publicly, maintaining network-level security.
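The IAM-authentication half of this answer looks roughly like the sketch below. The RDS client is injected so the example runs without AWS credentials; with boto3 it would be `boto3.client("rds")`, whose `generate_db_auth_token()` signs a short-lived token locally. Host and user names are hypothetical.

```python
# Sketch: building connection kwargs for RDS IAM authentication.
# The token replaces the password; TLS is mandatory for IAM auth.

def iam_connect_kwargs(rds_client, host, port, user, dbname):
    token = rds_client.generate_db_auth_token(
        DBHostname=host, Port=port, DBUsername=user
    )
    return {
        "host": host, "port": port, "user": user,
        "password": token, "dbname": dbname, "sslmode": "require",
    }

class FakeRds:  # stand-in so the sketch is runnable offline
    def generate_db_auth_token(self, DBHostname, Port, DBUsername):
        return f"{DBHostname}:{Port}/?X-Amz-Credential={DBUsername}&signed=..."

kwargs = iam_connect_kwargs(
    FakeRds(), "db.example.rds.amazonaws.com", 5432, "app_user", "appdb"
)
print(kwargs["sslmode"])
```

The Lambda execution role needs an `rds-db:connect` permission on the database user for the token to be accepted.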

Q53. An Aurora Serverless v2 database has hit the MaxCapacity limit. How do you respond?

A) Switch from Aurora Serverless v2 to Provisioned B) Raise the MaxCapacity value and add Provisioned read replicas as needed C) Delete the cluster and recreate it D) Forcibly terminate application connections

Answer: B

Explanation: When the cluster is pinned at MaxCapacity it cannot scale any further, so queries queue and performance degrades. The immediate action is to increase the MaxCapacity value (up to 128 ACU). For the long term, add Provisioned read replicas to distribute read load. If write capacity is the bottleneck, consider query optimization or sharding.
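As a sketch, raising the ceiling is a single `modify_db_cluster` call with a new `ServerlessV2ScalingConfiguration`. The cluster identifier is a hypothetical example, and only the arguments are built so the snippet runs offline.

```python
# Sketch: arguments that raise Aurora Serverless v2 MaxCapacity.

def raise_max_capacity(cluster_id, min_acu, max_acu):
    # Serverless v2 ACUs range from 0.5 up to 128 (at the exam's time).
    assert 0.5 <= min_acu <= max_acu <= 128, "invalid ACU range"
    return {
        "DBClusterIdentifier": cluster_id,
        "ServerlessV2ScalingConfiguration": {
            "MinCapacity": min_acu,
            "MaxCapacity": max_acu,   # raise the ceiling, e.g. 64 -> 128
        },
        "ApplyImmediately": True,
    }

params = raise_max_capacity("orders-cluster", 2, 128)
# With boto3: boto3.client("rds").modify_db_cluster(**params)
print(params["ServerlessV2ScalingConfiguration"]["MaxCapacity"])
```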

Q54. A financial services company needs to implement audit trails for all database changes. How do you implement this while minimizing additional cost and complexity?

A) Log all changes at the application layer B) RDS/Aurora Database Activity Streams + Kinesis Data Firehose → S3 archive C) Add audit triggers to all database tables D) Record RDS API calls with CloudTrail

Answer: B

Explanation: Database Activity Streams pushes all database activity to a KMS-encrypted Kinesis data stream. Kinesis Data Firehose archives the data to S3 for long-term retention, where it is queryable with Athena. No application code changes are needed, and the stream provides tamper-resistant audit trails that even DBAs cannot modify or disable.

Q55. What happens when DynamoDB Time To Live (TTL) is enabled?

A) Items are deleted immediately after the specified time B) Items are automatically deleted after the Unix timestamp specified in the TTL attribute (within 48 hours, at no cost) C) Expired items are hidden but remain in storage D) TTL deletions consume write capacity units

Answer: B

Explanation: DynamoDB TTL sets a Unix timestamp on each item. After that timestamp, items are automatically deleted within 48 hours (exact timing is not guaranteed). TTL deletions incur no additional cost and consume no write capacity units. Useful for session data, temporary data, and expiry token management.
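A sketch of the item shape is shown below. The TTL attribute must hold a Number containing an epoch-seconds timestamp; the table, key, and attribute names here (`pk`, `expires_at`) are hypothetical. Because deletion can lag up to ~48 hours, reads should filter expired items client-side.

```python
# Sketch: writing a session item with a DynamoDB TTL attribute, plus the
# client-side expiry filter. Fixed timestamps keep the example deterministic.
import time

def session_item(session_id, ttl_seconds, now=None):
    now = int(now if now is not None else time.time())
    return {
        "pk": {"S": f"SESSION#{session_id}"},
        # The table's configured TTL attribute; DynamoDB deletes the item
        # some time after this epoch-seconds value passes.
        "expires_at": {"N": str(now + ttl_seconds)},
    }

def is_live(item, now):
    """Expired items may linger up to ~48h before deletion, so filter on read."""
    return int(item["expires_at"]["N"]) > now

item = session_item("abc123", ttl_seconds=3600, now=1_700_000_000)
print(is_live(item, now=1_700_000_000))         # → True (still live)
print(is_live(item, now=1_700_000_000 + 7200))  # → False (expired, maybe not yet deleted)
```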

Q56. In Neptune graph database, which query language should you choose between Gremlin and SPARQL?

A) Always use Gremlin B) Use Gremlin for Property Graphs (social networks, recommendations); use SPARQL for RDF/Ontology (knowledge graphs, semantic web) C) SPARQL is always faster D) You can only choose one and cannot change it

Answer: B

Explanation: Neptune supports two graph models. Property Graph (Gremlin) attaches properties to nodes and edges and is suited for social graphs, fraud detection, and recommendation engines. RDF (SPARQL) uses the Resource Description Framework and is suited for knowledge graphs, semantic web, and ontology modeling.

Q57. You need native backup/restore capability for RDS for SQL Server. What method should you use?

A) Use DMS for SQL Server-to-SQL Server migration B) Use RDS SQL Server native backup/restore integrated with S3 (enable SQLSERVER_BACKUP_RESTORE in the Option Group) C) Download a snapshot and restore on-premises D) Export data with the bcp utility

Answer: B

Explanation: For RDS for SQL Server native backup/restore, enable the SQLSERVER_BACKUP_RESTORE option in the Option Group and specify an S3 bucket. Use the msdb.dbo.rds_backup_database stored procedure to back up to a .bak file in S3, and msdb.dbo.rds_restore_database to restore from one.
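The backup call is just T-SQL executed over a normal SQL Server connection (for example with pyodbc). The sketch below only builds the statement string so it runs anywhere; the database and bucket names are hypothetical.

```python
# Sketch: building the T-SQL that invokes the RDS native backup procedure.

def backup_statement(db_name, s3_arn):
    return (
        "exec msdb.dbo.rds_backup_database "
        f"@source_db_name='{db_name}', "
        f"@s3_arn_to_backup_to='{s3_arn}'"
    )

stmt = backup_statement("salesdb", "arn:aws:s3:::my-backup-bucket/salesdb.bak")
print(stmt)
```

Progress can then be checked with the `msdb.dbo.rds_task_status` stored procedure.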

Q58. Query plans in Aurora PostgreSQL change frequently, causing unstable performance. How do you resolve this?

A) Increase max_connections B) Use Query Plan Management (QPM) or the pg_hint_plan extension to pin query plans C) Add read replicas D) Upgrade the instance size

Answer: B

Explanation: Query plan instability is caused by statistics changes or planner behavior changes. Aurora PostgreSQL's apg_plan_mgmt (Query Plan Management) extension allows pinning execution plans for specific SQL statements. pg_hint_plan provides SQL hints, and parameters like enable_seqscan and enable_indexscan can guide the planner.

Q59. When is it appropriate to use DynamoDB transactions (TransactWriteItems, TransactGetItems)?

A) Reading or writing a single item B) When atomic reads/writes across multiple items or tables are required (e.g., account transfers, inventory deductions) C) Large-scale batch data processing D) When TTL needs to be set

Answer: B

Explanation: DynamoDB transactions atomically read or write up to 100 items (or 4 MB) across multiple tables. They are well-suited for consistent updates spanning multiple tables (transfer from account A to B, create order + deduct inventory). However, transactions consume 2x WCU compared to standard API calls — use them only when necessary.
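The account-transfer example maps to a two-element `TransactItems` list. Table and attribute names below are hypothetical; the list is what would be passed to `dynamodb.transact_write_items(TransactItems=...)`, and the whole transaction fails atomically if the condition on the source account does not hold.

```python
# Sketch: atomic transfer between two account items via TransactWriteItems.

def transfer_items(table, from_id, to_id, amount):
    amt = {"N": str(amount)}
    return [
        {"Update": {
            "TableName": table,
            "Key": {"pk": {"S": f"ACCOUNT#{from_id}"}},
            "UpdateExpression": "SET balance = balance - :a",
            # Fails the whole transaction if funds are insufficient.
            "ConditionExpression": "balance >= :a",
            "ExpressionAttributeValues": {":a": amt},
        }},
        {"Update": {
            "TableName": table,
            "Key": {"pk": {"S": f"ACCOUNT#{to_id}"}},
            "UpdateExpression": "SET balance = balance + :a",
            "ExpressionAttributeValues": {":a": amt},
        }},
    ]

items = transfer_items("accounts", "A", "B", 100)
print(len(items))  # → 2
```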

Q60. Which pattern maintains data consistency when decomposing a monolithic database into multiple microservices?

A) All microservices continue to share a single database B) Manage distributed transactions with the Saga pattern using event/message-based coordination C) Implement distributed transactions with Two-Phase Commit (2PC) D) Operate each service independently without real-time data synchronization

Answer: B

Explanation: The Saga pattern manages distributed transactions as a sequence of local transactions where each service executes a local transaction and publishes an event. On failure, compensating transactions roll back previous steps. Implement as Choreography (event-based) or Orchestration (central orchestrator).
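The orchestration variant can be sketched in a few lines: each step pairs a local action with a compensating action, and on failure the completed steps are compensated in reverse order. The step names are illustrative only.

```python
# Minimal orchestration-style saga sketch with compensating transactions.

def run_saga(steps):
    """steps: list of (action, compensate) callables. Returns True on success."""
    done = []
    for action, compensate in steps:
        try:
            action()
            done.append(compensate)
        except Exception:
            for comp in reversed(done):   # roll back what already committed
                comp()
            return False
    return True

log = []

def charge_payment():
    raise RuntimeError("payment declined")   # simulated failure

ok = run_saga([
    (lambda: log.append("order created"), lambda: log.append("order cancelled")),
    (charge_payment, lambda: log.append("refund issued")),
])
print(ok, log)  # → False ['order created', 'order cancelled']
```

In a real system the steps would be events on a queue (Choreography) or Step Functions states (Orchestration), but the compensation order is the same.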

Q61. How is conflict resolution handled in DynamoDB Global Tables?

A) Conflicts must be resolved manually B) Last Writer Wins — the write with the most recent timestamp wins C) The first write to arrive wins D) Determined by region priority

Answer: B

Explanation: DynamoDB Global Tables uses Last Writer Wins conflict resolution. When concurrent writes to the same item occur in multiple regions, the write with the most recent timestamp wins as the final value. To avoid conflicts, design applications so each region writes to different partition key ranges.
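Last Writer Wins reduces to picking the version with the latest timestamp, as in this sketch (field names invented for illustration):

```python
# Sketch: last-writer-wins resolution across concurrent regional versions.

def resolve_lww(versions):
    """versions: dicts each carrying a 'ts' (epoch seconds) field."""
    return max(versions, key=lambda v: v["ts"])

winner = resolve_lww([
    {"region": "us-east-1", "ts": 1_700_000_005, "qty": 3},
    {"region": "eu-west-1", "ts": 1_700_000_009, "qty": 7},
])
print(winner["region"])  # → eu-west-1
```

Note the losing write's `qty` of 3 is silently discarded, which is why conflict avoidance (regional partition-key ownership) matters for non-idempotent updates.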

Q62. You want to stream data changes from RDS MySQL to other systems in real time, without upgrading to Aurora. How do you do this?

A) Use RDS event notifications B) Configure a DMS CDC task with a Kinesis Data Streams target to stream changes C) Record changes with CloudWatch Logs D) Periodically analyze RDS snapshots

Answer: B

Explanation: AWS DMS CDC reads changes from the RDS MySQL binlog and streams them to Kinesis Data Streams. Setting Kinesis as the DMS task target delivers each INSERT/UPDATE/DELETE event to the Kinesis stream. Lambda or Kinesis Analytics can then process those events downstream.
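The table selection for such a CDC task is expressed as table-mappings JSON. The schema and table names below are hypothetical; the resulting string would be passed as `TableMappings` when creating the task with `MigrationType="cdc"`.

```python
# Sketch: DMS table-mappings JSON selecting one table for CDC streaming.
import json

def table_mappings(schema, table):
    return json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-orders",
            "object-locator": {"schema-name": schema, "table-name": table},
            "rule-action": "include",
        }]
    })

mappings = table_mappings("shop", "orders")
print(mappings)
```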

Q63. What is an appropriate scenario for choosing Amazon Keyspaces (for Apache Cassandra)?

A) An OLTP system requiring complex multi-table joins B) Running existing Cassandra-compatible workloads on AWS without operational overhead C) Graph database queries D) SQL-based analytics

Answer: B

Explanation: Amazon Keyspaces is a fully managed Apache Cassandra-compatible service. Existing Cassandra CQL code and drivers work without modification. Choose it when you want to run Cassandra workloads serverlessly without the overhead of managing Cassandra cluster nodes, patching, and scaling.

Q64. Which application-level solution reduces connection count and minimizes connection latency in RDS PostgreSQL?

A) Execute all queries synchronously B) Use an external connection pooler like PgBouncer or RDS Proxy C) Create a new connection per request D) Set connection timeout to maximum

Answer: B

Explanation: PgBouncer is a PostgreSQL-specific connection pooler that multiplexes many application connection requests into fewer actual database connections. Transaction pooling mode allocates connections per transaction for maximum efficiency. RDS Proxy provides the same functionality as an AWS-managed service.
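What PgBouncer and RDS Proxy do can be illustrated with a toy pool: N real connections serve many borrowers, who block instead of opening new connections. The connection factory is injected so the sketch runs without a database; real code would open psycopg2 connections there.

```python
# Minimal connection-pool sketch illustrating the pooling principle.
import queue

class TinyPool:
    def __init__(self, factory, size):
        self._q = queue.Queue()
        for _ in range(size):          # open a fixed number of real connections
            self._q.put(factory())

    def acquire(self, timeout=5):      # borrowers block instead of opening new ones
        return self._q.get(timeout=timeout)

    def release(self, conn):
        self._q.put(conn)

opened = []
pool = TinyPool(lambda: opened.append(1) or f"conn{len(opened)}", size=2)

for _ in range(10):                    # 10 requests served by only 2 connections
    c = pool.acquire()
    pool.release(c)
print(len(opened))  # → 2
```

PgBouncer's transaction pooling mode goes further, returning the connection to the pool at each COMMIT rather than when the client disconnects.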

Q65. To protect PHI (Protected Health Information) in a medical database, which two elements are part of a multi-layered security strategy?

A) Deploy RDS in a VPC private subnet + use security groups to enforce least-privilege network access B) Encryption at rest with KMS customer-managed key (CMK) + enforce SSL/TLS for in-transit encryption C) Expose RDS publicly and restrict by IP D) Use the default encryption key E) Grant all users the same database access level

Answer: A, B

Explanation: For HIPAA compliance and PHI protection: 1) VPC private subnet + security groups minimize network-level exposure. 2) KMS CMK for encryption at rest, SSL/TLS for encryption in transit. Additionally apply IAM authentication, Secrets Manager credential management, Database Activity Streams auditing, and the principle of least privilege.
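The two correct layers correspond to a handful of `create_db_instance` flags, sketched below. Identifiers and ARNs are hypothetical placeholders, other required arguments (engine version, master credentials) are omitted for brevity, and `rds.force_ssl=1` would additionally be set in the associated parameter group to enforce TLS.

```python
# Sketch: provisioning flags behind private networking + CMK encryption.

def phi_db_params(instance_id, subnet_group, sg_id, kms_key_arn):
    return {
        "DBInstanceIdentifier": instance_id,
        "Engine": "postgres",
        "DBInstanceClass": "db.m6g.large",
        "AllocatedStorage": 100,
        "PubliclyAccessible": False,          # private subnets only
        "DBSubnetGroupName": subnet_group,
        "VpcSecurityGroupIds": [sg_id],       # least-privilege network access
        "StorageEncrypted": True,
        "KmsKeyId": kms_key_arn,              # customer-managed key, not the default
    }

params = phi_db_params(
    "phi-db", "private-subnets", "sg-0abc1234", "arn:aws:kms:us-east-1:111122223333:key/example"
)
print(params["PubliclyAccessible"], params["StorageEncrypted"])
```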


Study Resources

This practice exam is created for study purposes. Actual exam questions may differ.