AWS Certified Solutions Architect Associate (SAA-C03) Exam Questions
Comprehensive list of free AWS Certified Solutions Architect Associate (SAA-C03) practice questions, curated to help you crack the exam with confidence.
Disclaimer: AWS is a protected brand. These exam questions are neither endorsed by nor affiliated with AWS, and they are not official AWS exam questions or dumps. They were created from publicly available AWS web resources. They cover all the objectives and services of the official AWS SAA-C03 exam; once you work through these questions and the concepts behind them, you will be well prepared to pass the exam on your first attempt.
Overview
- Prepare well for the exam; it is the toughest exam I have cracked in recent years.
- It requires 2 to 3 months of preparation, depending on how much time you can commit per day.
- The exam code is SAA-C03 (third version), and each attempt costs 150 USD.
- You need to answer 65 questions in 130 minutes from your laptop under the supervision of an online proctor.
- The passing score is 720 out of 1,000, which means you should answer at least 47 of the 65 questions correctly. There is no negative scoring, so answer every question!
- You see the result (pass or fail) as soon as you submit the exam; however, you don't receive an email immediately. It generally takes 2-3 days. I received an email with my digital certificate, score card, and badge after two days. You can also log in to AWS training to download them later.
- You can schedule the exam with Pearson VUE or PSI. I had heard bad reviews about PSI, so I chose Pearson VUE; the exam went smoothly.
- You get discount vouchers under the Benefits tab of the AWS training portal once you pass at least one AWS exam. You can use these vouchers for subsequent exams.
- See the Exam Guide for more details.
Practice Questions
A solutions architect needs to optimize a large data analytics job that runs on an Amazon EMR cluster. The job takes 13 hours to finish. The cluster has multiple core nodes and worker nodes deployed on large, compute-optimized instances.
After reviewing EMR logs, the solutions architect discovers that several nodes are idle for more than 5 hours while the job is running. The solutions architect needs to optimize cluster performance.
Which solution will meet this requirement MOST cost-effectively?
⬜ A. Increase the number of core nodes to ensure there is enough processing power to handle the analytics job without any idle time.
✅ B. Use the EMR managed scaling feature to automatically resize the cluster based on workload.
⬜ C. Migrate the analytics job to a set of AWS Lambda functions. Configure reserved concurrency for the functions.
⬜ D. Migrate the analytics job core nodes to a memory-optimized instance type to reduce the total job runtime.
Explanation:
EMR managed scaling dynamically resizes the cluster by adding or removing nodes based on the workload. This feature helps minimize idle time and reduces costs by scaling the cluster to meet processing demands efficiently.
Incorrect Options:
Option A: Increasing the number of core nodes might increase idle time further, as it does not address the root cause of underutilization.
Option C: Migrating the job to Lambda is infeasible for large analytics jobs due to resource and runtime constraints.
Option D: Changing to memory-optimized instances may not necessarily reduce idle time or optimize costs.
Source: Using managed scaling in Amazon EMR
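For illustration, a minimal boto3 sketch of attaching a managed scaling policy to an existing cluster; the cluster ID and capacity limits below are placeholder assumptions.

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")

# Attach a managed scaling policy so EMR resizes the cluster with the workload.
emr.put_managed_scaling_policy(
    ClusterId="j-XXXXXXXXXXXXX",  # placeholder cluster ID
    ManagedScalingPolicy={
        "ComputeLimits": {
            "UnitType": "Instances",         # scale by instance count
            "MinimumCapacityUnits": 2,       # keep a small baseline
            "MaximumCapacityUnits": 20,      # cap growth during peak processing
            "MaximumCoreCapacityUnits": 10,  # limit how many can be core nodes
        }
    },
)
```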
A developer used the AWS SDK to create an application that aggregates and produces log records for 10 services. The application delivers data to an Amazon Kinesis Data Streams stream.
Each record contains a log message with a service name, creation timestamp, and other log information. The stream has 15 shards in provisioned capacity mode. The stream uses service name as the partition key.
The developer notices that when all the services are producing logs, ProvisionedThroughputExceededException errors occur during PutRecord requests. The stream metrics show that the write capacity the applications use is below the provisioned capacity.
How should the developer resolve this issue?
⬜ A. Change the capacity mode from provisioned to on-demand.
⬜ B. Double the number of shards until the throttling errors stop occurring.
✅ C. Change the partition key from service name to creation timestamp.
⬜ D. Use a separate Kinesis stream for each service to generate the logs.
Explanation:
Partition Key Issue: Using ‘service name’ as the partition key results in uneven data distribution. Some shards may become hot due to excessive logs from certain services, leading to throttling errors. Changing the partition key to ‘creation timestamp’ ensures a more even distribution of records across shards.
Incorrect Options:
Option A: On-demand capacity mode eliminates throughput management but is more expensive and does not address the root cause.
Option B: Adding more shards does not solve the issue if the partition key still creates hot shards.
Option D: Using separate streams increases complexity and is unnecessary.
Source: Amazon Kinesis Data Streams Terminology and concepts
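As a sketch of the fix, the snippet below writes a log record with a high-cardinality partition key; the stream name and record fields are assumptions.

```python
import json
import time

import boto3

kinesis = boto3.client("kinesis")

def put_log_record(service_name: str, message: str) -> None:
    record = {
        "service": service_name,
        "created_at": time.time(),
        "message": message,
    }
    # Using the creation timestamp (high cardinality) as the partition key
    # spreads records across shards instead of concentrating them per service.
    kinesis.put_record(
        StreamName="service-logs",  # placeholder stream name
        Data=json.dumps(record).encode("utf-8"),
        PartitionKey=str(record["created_at"]),
    )
```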
An e-commerce company has an application that uses Amazon DynamoDB tables configured with provisioned capacity. Order data is stored in a table named Orders. The Orders table has a primary key of order-ID and a sort key of product-ID. The company configured an AWS Lambda function to receive DynamoDB streams from the Orders table and update a table named Inventory. The company has noticed that during peak sales periods, updates to the Inventory table take longer than the company can tolerate.
Which solutions will resolve the slow table updates? (Select TWO.)
⬜ A. Add a global secondary index to the Orders table. Include the product-ID attribute.
✅ B. Set the batch size attribute of the DynamoDB streams to be based on the size of items in the Orders table.
✅ C. Increase the DynamoDB table provisioned capacity by 1,000 write capacity units (WCUs).
⬜ D. Increase the DynamoDB table provisioned capacity by 1,000 read capacity units (RCUs).
⬜ E. Increase the timeout of the Lambda function to 15 minutes.
Explanation:
Key Problem:
Delayed Inventory table updates during peak sales.
DynamoDB Streams and Lambda processing require optimization.
Analysis of Options:
Option A: Adding a GSI is unrelated to the issue. It does not address stream processing delays or capacity issues.
Option B: Optimizing batch size reduces latency and allows the Lambda function to process larger chunks of data at once, improving performance during peak load.
Option C: Increasing write capacity for the Inventory table ensures that it can handle the increased volume of updates during peak times.
Option D: Increasing read capacity for the Orders table does not directly resolve the issue since the problem is with updates to the Inventory table.
Option E: Increasing Lambda timeout only addresses longer processing times but does not solve the underlying throughput problem.
Source: Best practices for designing and architecting with DynamoDB
Source: DynamoDB provisioned capacity mode
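A minimal boto3 sketch of the two changes, assuming placeholder identifiers for the event source mapping and an assumed baseline of 100 WCUs on the Inventory table.

```python
import boto3

lambda_client = boto3.client("lambda")
dynamodb = boto3.client("dynamodb")

# Tune how many stream records each Lambda invocation receives.
lambda_client.update_event_source_mapping(
    UUID="00000000-0000-0000-0000-000000000000",  # placeholder mapping UUID
    BatchSize=500,  # larger batches reduce per-invocation overhead
)

# Give the Inventory table more headroom for peak-time writes.
dynamodb.update_table(
    TableName="Inventory",
    ProvisionedThroughput={
        "ReadCapacityUnits": 100,    # keep existing read capacity (assumed)
        "WriteCapacityUnits": 1100,  # assumed baseline of 100 + 1,000 extra WCUs
    },
)
```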
A developer is creating an ecommerce workflow in an AWS Step Functions state machine that includes an HTTP Task state. The task passes shipping information and order details to an endpoint.
The developer needs to test the workflow to confirm that the HTTP headers and body are correct and that the responses meet expectations.
Which solution will meet these requirements?
✅ A. Use the TestState API to invoke only the HTTP Task. Set the inspection level to TRACE.
⬜ B. Use the TestState API to invoke the state machine. Set the inspection level to DEBUG.
⬜ C. Use the data flow simulator to invoke only the HTTP Task. View the request and response data.
⬜ D. Change the log level of the state machine to ALL. Run the state machine.
Explanation:
TestState API with TRACE inspection:
The TestState API tests a single state, including an HTTP Task, without running the whole state machine. Setting the inspection level to TRACE returns the raw HTTP request and response, so the developer can confirm the headers, body, and response behavior.
Incorrect Options:
Option B: The TestState API tests one state at a time, not an entire state machine, and the DEBUG inspection level does not return the HTTP request and response details.
Option C: The data flow simulator is for testing input and output data processing (paths, filters, and transforms); it cannot invoke an HTTP Task or show real request and response data.
Option D: Setting the log level to ALL requires running the full workflow and only produces execution history logs; it is not a direct way to inspect the HTTP request and response for a single task.
Source: What is Step Functions?
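A hedged sketch of calling the TestState API for a single HTTP Task with TRACE inspection; the state definition shape follows the HTTP Task pattern, but the endpoint, connection ARN, role ARN, and input are placeholders, and a recent boto3 version is assumed.

```python
import json

import boto3

sfn = boto3.client("stepfunctions")

http_task_definition = {
    "Type": "Task",
    "Resource": "arn:aws:states:::http:invoke",
    "Parameters": {
        "ApiEndpoint": "https://example.com/orders",  # placeholder endpoint
        "Method": "POST",
        "Authentication": {
            "ConnectionArn": "arn:aws:events:us-east-1:111122223333:connection/example/abc"
        },
        "RequestBody.$": "$.order",
    },
    "End": True,
}

response = sfn.test_state(
    definition=json.dumps(http_task_definition),
    roleArn="arn:aws:iam::111122223333:role/StepFunctionsHttpTaskRole",  # placeholder
    input=json.dumps({"order": {"id": "123", "shipping": "express"}}),
    inspectionLevel="TRACE",  # returns the raw HTTP request and response
)
print(response.get("inspectionData"))
```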
A company is deploying a critical application by using Amazon RDS for MySQL. The application must be highly available and must recover automatically. The company needs to support interactive users (transactional queries) and batch reporting (analytical queries) with no more than a 4-hour lag. The analytical queries must not affect the performance of the transactional queries.
Which solution will meet these requirements?
⬜ A. Configure Amazon RDS for MySQL in a Multi-AZ DB instance deployment with one standby instance. Point the transactional queries to the primary DB instance. Point the analytical queries to a secondary DB instance that runs in a different Availability Zone.
✅ B. Configure Amazon RDS for MySQL in a Multi-AZ DB cluster deployment with two standby instances. Point the transactional queries to the primary DB instance. Point the analytical queries to the reader endpoint.
⬜ C. Configure Amazon RDS for MySQL to use multiple read replicas across multiple Availability Zones. Point the transactional queries to the primary DB instance. Point the analytical queries to one of the replicas in a different Availability Zone.
⬜ D. Configure Amazon RDS for MySQL as the primary database for the transactional queries with automated backups enabled. Configure automated backups. Each night, create a read-only database from the most recent snapshot to support the analytical queries. Terminate the previously created database.
Explanation:
Key Requirements:
High availability and automatic recovery.
Separate transactional and analytical queries with minimal performance impact.
Allow up to a 4-hour lag for analytical queries.
Analysis of Options:
Option A: A Multi-AZ DB instance deployment provides high availability, but its single standby instance is not readable, so analytical queries cannot be pointed to it. Does not meet the requirement of query separation.
Option B: A Multi-AZ DB cluster deployment of RDS for MySQL provides a writer and two readable standby instances in different Availability Zones, with automatic failover. Pointing analytical queries to the reader endpoint keeps them off the writer, and replication lag is typically seconds, well within the 4-hour limit. Meets all requirements.
Option C: Read replicas can separate the analytical workload, but read replicas alone do not provide automatic failover for the primary DB instance, so the high availability and automatic recovery requirement is not met.
Option D: Creating nightly snapshots and read-only databases adds significant operational overhead and provides no high availability for the primary database. Not practical for dynamic query separation.
Source: Working with DB instance read replicas
Source: Configuring and managing a Multi-AZ deployment for Amazon RDS
A company manages AWS accounts in AWS Organizations. AWS IAM Identity Center (AWS Single Sign-On) and AWS Control Tower are configured for the accounts. The company wants to manage multiple user permissions across all the accounts.
The permissions will be used by multiple IAM users and must be split between the developer and administrator teams. Each team requires different permissions. The company wants a solution that includes new users that are hired on both teams.
Which solution will meet these requirements with the LEAST operational overhead?
⬜ A. Create individual users in IAM Identity Center for each account. Create separate developer and administrator groups in IAM Identity Center. Assign the users to the appropriate groups. Create a custom IAM policy for each group to set fine-grained permissions.
⬜ B. Create individual users in IAM Identity Center for each account. Create separate developer and administrator groups in IAM Identity Center. Assign the users to the appropriate groups. Attach AWS managed IAM policies to each user as needed for fine-grained permissions.
✅ C. Create individual users in IAM Identity Center. Create new developer and administrator groups in IAM Identity Center. Create new permission sets that include the appropriate IAM policies for each group. Assign the new groups to the appropriate accounts. Assign the new permission sets to the new groups. When new users are hired, add them to the appropriate group.
⬜ D. Create individual users in IAM Identity Center. Create new permission sets that include the appropriate IAM policies for each user. Assign the users to the appropriate accounts. Grant additional IAM permissions to the users from within specific accounts. When new users are hired, add them to IAM Identity Center and assign them to the accounts.
Explanation:
The best approach for least operational overhead is to use groups in IAM Identity Center and permission sets:
● IAM Identity Center users are managed centrally (not per account individually).
● Groups (e.g., “Developers” and “Administrators”) make managing large numbers of users easier.
● Permission sets define reusable permission templates (fine-grained) and are assigned once per group.
● New users can simply be added to the appropriate group — no need to reconfigure policies or individual user permissions manually.
This is a standard, scalable design for AWS Organizations environments using IAM Identity Center and AWS Control Tower.
Why other options are wrong:
A: Creates users and custom IAM policies in each account individually instead of using reusable permission sets, which is high operational overhead.
B: Suggests attaching managed policies directly to users — poor practice at scale and harder to maintain.
D: Creates permission sets per user — not scalable, and increases management complexity when users change roles.
Source: https://docs.aws.amazon.com/singlesignon/latest/userguide/howtocreategroups.html
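A hedged boto3 sketch of the group-plus-permission-set pattern; the instance ARN, identity store ID, account ID, and managed policy are placeholder assumptions.

```python
import boto3

sso_admin = boto3.client("sso-admin")
identity_store = boto3.client("identitystore")

INSTANCE_ARN = "arn:aws:sso:::instance/ssoins-example"  # placeholder
IDENTITY_STORE_ID = "d-example1234"                     # placeholder

# 1. Create a group for the developer team.
group = identity_store.create_group(
    IdentityStoreId=IDENTITY_STORE_ID,
    DisplayName="Developers",
)

# 2. Create a reusable permission set and attach a managed policy to it.
permission_set = sso_admin.create_permission_set(
    InstanceArn=INSTANCE_ARN,
    Name="DeveloperAccess",
)["PermissionSet"]
sso_admin.attach_managed_policy_to_permission_set(
    InstanceArn=INSTANCE_ARN,
    PermissionSetArn=permission_set["PermissionSetArn"],
    ManagedPolicyArn="arn:aws:iam::aws:policy/PowerUserAccess",  # example policy
)

# 3. Assign the group and permission set to a member account.
#    New hires only need to be added to the group afterward.
sso_admin.create_account_assignment(
    InstanceArn=INSTANCE_ARN,
    TargetId="111122223333",  # placeholder account ID
    TargetType="AWS_ACCOUNT",
    PermissionSetArn=permission_set["PermissionSetArn"],
    PrincipalType="GROUP",
    PrincipalId=group["GroupId"],
)
```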
How can a company detect and notify security teams about PII in S3 buckets?
✅ A. Use Amazon Macie. Create an EventBridge rule for SensitiveData findings and send an SNS notification.
⬜ B. Use Amazon GuardDuty. Create an EventBridge rule for CRITICAL findings and send an SNS notification.
⬜ C. Use Amazon Macie. Create an EventBridge rule for SensitiveData:S3Object/Personal findings and send an SQS notification.
⬜ D. Use Amazon GuardDuty. Create an EventBridge rule for CRITICAL findings and send an SQS notification.
Explanation:
Amazon Macie is purpose-built for detecting PII in S3.
Option A uses EventBridge to filter SensitiveData findings and notify via SNS, meeting the requirements.
Options B and D involve GuardDuty, which is not designed for PII detection.
Option C uses Amazon SQS, which is a message queue rather than a notification service; the security team would need an additional consumer to receive the alerts.
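A hedged sketch of the EventBridge rule and SNS target, assuming the SNS topic already exists (its ARN is a placeholder).

```python
import json

import boto3

events = boto3.client("events")

# Match Macie sensitive data findings and forward them to an SNS topic.
events.put_rule(
    Name="macie-sensitive-data-findings",
    EventPattern=json.dumps({
        "source": ["aws.macie"],
        "detail-type": ["Macie Finding"],
        "detail": {"type": [{"prefix": "SensitiveData"}]},
    }),
)
events.put_targets(
    Rule="macie-sensitive-data-findings",
    Targets=[{
        "Id": "notify-security-team",
        "Arn": "arn:aws:sns:us-east-1:111122223333:security-alerts",  # placeholder topic
    }],
)
```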
An ecommerce company is migrating its on-premises workload to the AWS Cloud. The workload currently consists of a web application and a backend Microsoft SQL database for storage.
The company expects a high volume of customers during a promotional event. The new infrastructure in the AWS Cloud must be highly available and scalable.
Which solution will meet these requirements with the LEAST administrative overhead?
⬜ A. Migrate the web application to two Amazon EC2 instances across two Availability Zones behind an Application Load Balancer. Migrate the database to Amazon RDS for Microsoft SQL Server with read replicas in both Availability Zones.
⬜ B. Migrate the web application to an Amazon EC2 instance that runs in an Auto Scaling group across two Availability Zones behind an Application Load Balancer. Migrate the database to two EC2 instances across separate AWS Regions with database replication.
✅ C. Migrate the web application to Amazon EC2 instances that run in an Auto Scaling group across two Availability Zones behind an Application Load Balancer. Migrate the database to Amazon RDS with Multi-AZ deployment.
⬜ D. Migrate the web application to three Amazon EC2 instances across three Availability Zones behind an Application Load Balancer. Migrate the database to three EC2 instances across three Availability Zones.
Explanation:
The correct solution with high availability, scalability, and minimal administrative overhead is:
● C. Deploy the web application on EC2 instances in an Auto Scaling group across multiple AZs for scalability and high availability,
● Use Amazon RDS with Multi-AZ deployment for automatic failover, managed backups, and minimal downtime without the need to manually manage replication or databases.
This combination offers server scaling, database resilience, and AWS-managed maintenance with lowest operational complexity.
Why other options are wrong:
A. Two standalone EC2 instances cannot scale automatically for the promotional traffic, and RDS read replicas are intended for read scaling rather than automatic failover, so this option does not provide the required high availability with minimal overhead.
B. Replication across separate AWS Regions manually on EC2 adds heavy operational overhead and complexity.
D. Managing databases manually across EC2 instances and AZs requires high operational effort compared to using managed RDS.
Source: Auto Scaling groups
Source: Configuring and managing a Multi-AZ deployment for Amazon RDS
A company is developing a new application that uses a relational database to store user data and application configurations. The company expects the application to have steady user growth. The company expects the database usage to be variable and read-heavy, with occasional writes.
The company wants to cost-optimize the database solution. The company wants to use an AWS managed database solution that will provide the necessary performance.
Which solution will meet these requirements MOST cost-effectively?
⬜ A. Deploy the database on Amazon RDS. Use Provisioned IOPS SSD storage to ensure consistent performance for read and write operations.
✅ B. Deploy the database on Amazon Aurora Serverless to automatically scale the database capacity based on actual usage to accommodate the workload.
⬜ C. Deploy the database on Amazon DynamoDB. Use on-demand capacity mode to automatically scale throughput to accommodate the workload.
⬜ D. Deploy the database on Amazon RDS. Use magnetic storage and use read replicas to accommodate the workload.
Explanation:
Amazon Aurora Serverless is a cost-effective, on-demand, autoscaling configuration for Amazon Aurora. It automatically adjusts the database’s capacity based on the current demand, which is ideal for workloads with variable and unpredictable usage patterns. Since the application is expected to be read-heavy with occasional writes and steady growth, Aurora Serverless can provide the necessary performance without requiring the management of database instances.
Cost-Optimization: Aurora Serverless only charges for the database capacity you use, making it a more cost-effective solution compared to always running provisioned database instances, especially for workloads with fluctuating demand.
Scalability: It automatically scales database capacity up or down based on actual usage, ensuring that you always have the right amount of resources available.
Performance: Aurora Serverless is built on the same underlying storage as Amazon Aurora, providing high performance and availability.
Why other options are wrong:
Option A (RDS with Provisioned IOPS SSD): While Provisioned IOPS SSD ensures consistent performance, it is generally more expensive and less flexible compared to the autoscaling nature of Aurora Serverless.
Option C (DynamoDB with On-Demand Capacity): DynamoDB is a NoSQL database and may not be the best fit for applications requiring relational database features.
Option D (RDS with Magnetic Storage and Read Replicas): Magnetic storage is outdated and generally slower. While read replicas help with read-heavy workloads, the overall performance might not be optimal, and magnetic storage doesn’t provide the necessary performance.
Source: Using Amazon Aurora Serverless v1
Source: Amazon Aurora Pricing
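A hedged sketch of creating an Aurora Serverless v2 (the current generation) cluster with boto3; the identifiers, credentials, and capacity limits are placeholder assumptions.

```python
import boto3

rds = boto3.client("rds")

# Aurora MySQL cluster that scales between 0.5 and 8 ACUs with demand.
rds.create_db_cluster(
    DBClusterIdentifier="app-config-cluster",  # placeholder
    Engine="aurora-mysql",
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",  # use Secrets Manager in practice
    ServerlessV2ScalingConfiguration={
        "MinCapacity": 0.5,
        "MaxCapacity": 8,
    },
)

# Serverless v2 still uses DB instances; the special instance class selects
# serverless capacity instead of a fixed size.
rds.create_db_instance(
    DBInstanceIdentifier="app-config-writer",
    DBClusterIdentifier="app-config-cluster",
    DBInstanceClass="db.serverless",
    Engine="aurora-mysql",
)
```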
An ecommerce company wants a disaster recovery solution for its Amazon RDS DB instances that run Microsoft SQL Server Enterprise Edition. The company’s current recovery point objective (RPO) and recovery time objective (RTO) are 24 hours.
Which solution will meet these requirements MOST cost-effectively?
⬜ A. Create a cross-Region read replica and promote the read replica to the primary instance
⬜ B. Use AWS Database Migration Service (AWS DMS) to create RDS cross-Region replication.
⬜ C. Use cross-Region replication every 24 hours to copy native backups to an Amazon S3 bucket
✅ D. Copy automatic snapshots to another Region every 24 hours.
Explanation:
This solution is the most cost-effective and meets the RPO and RTO requirements of 24 hours.
● Automatic Snapshots: Amazon RDS automatically creates snapshots of your DB instance at regular intervals. By copying these snapshots to another AWS Region every 24 hours, you ensure that you have a backup available in a different geographic location, providing disaster recovery capability.
● RPO and RTO: Since the company’s RPO and RTO are both 24 hours, copying snapshots daily to another Region is sufficient. In the event of a disaster, you can restore the DB instance from the most recent snapshot in the target Region.
Why other options are wrong:
Option A (Cross-Region Read Replica): This could provide a faster recovery time but is more costly due to the ongoing replication and resource usage in another Region.
Option B (DMS Cross-Region Replication): While effective for continuous replication, it introduces complexity and cost that isn’t necessary given the 24-hour RPO/RTO.
Option C (Cross-Region Native Backup Copy): This involves more manual steps and doesn’t offer as straightforward a solution as automated snapshot copying.
Source: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithAutomatedBackups.html
Source: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CopySnapshot.html
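A hedged sketch of copying the latest automated snapshot to a DR Region; the snapshot ARN, identifiers, and Regions are placeholders, and in practice the daily copy would be automated (for example with AWS Backup).

```python
import boto3

# Run the copy in the destination (DR) Region.
rds_dr = boto3.client("rds", region_name="us-west-2")

rds_dr.copy_db_snapshot(
    SourceDBSnapshotIdentifier=(
        "arn:aws:rds:us-east-1:111122223333:snapshot:rds:mydb-2024-01-01-00-00"
    ),  # placeholder ARN of an automated snapshot in the primary Region
    TargetDBSnapshotIdentifier="mydb-dr-copy-2024-01-01",
    SourceRegion="us-east-1",  # boto3 uses this to presign the cross-Region request
)
```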
A company is building an application in the AWS Cloud. The application is hosted on Amazon EC2 instances behind an Application Load Balancer (ALB). The company uses Amazon Route 53 for the DNS.
The company needs a managed solution with proactive engagement to detect against DDoS attacks.
Which solution will meet these requirements?
⬜ A. Enable AWS Config. Configure an AWS Config managed rule that detects DDoS attacks.
⬜ B. Enable AWS WAF on the ALB. Create an AWS WAF web ACL with rules to detect and prevent DDoS attacks. Associate the web ACL with the ALB.
⬜ C. Store the ALB access logs in an Amazon S3 bucket. Configure Amazon GuardDuty to detect and take automated preventative actions for DDoS attacks.
✅ D. Subscribe to AWS Shield Advanced. Configure hosted zones in Route 53. Add ALB resources as protected resources.
Explanation:
AWS Shield Advanced is designed to provide enhanced protection against DDoS attacks with proactive engagement and response capabilities, making it the best solution for this scenario.
● AWS Shield Advanced: This service provides advanced protection against DDoS attacks. It includes detailed attack diagnostics, 24/7 access to the AWS DDoS Response Team (DRT), and financial protection against DDoS-related scaling charges. Shield Advanced also integrates with Route 53 and the Application Load Balancer (ALB) to ensure comprehensive protection for your web applications.
● Route 53 and ALB Protection: By adding your Route 53 hosted zones and ALB resources to AWS Shield Advanced, you ensure that these components are covered under the enhanced protection plan. Shield Advanced actively monitors traffic and provides real-time attack mitigation, minimizing the impact of DDoS attacks on your application.
Why other options are wrong:
Option A (AWS Config): AWS Config is a configuration management service and does not provide DDoS protection or detection capabilities.
Option B (AWS WAF): While AWS WAF can help mitigate some types of attacks, it does not provide the comprehensive DDoS protection and proactive engagement offered by Shield Advanced.
Option C (GuardDuty): GuardDuty is a threat detection service that identifies potentially malicious activity within your AWS environment, but it is not specifically designed to provide DDoS protection.
Source: https://docs.aws.amazon.com/waf/latest/developerguide/what-is-aws-waf.html
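A hedged sketch of adding the ALB as a Shield Advanced protected resource; it assumes the account is already subscribed to Shield Advanced, and the ARN is a placeholder.

```python
import boto3

shield = boto3.client("shield")

# Add the Application Load Balancer as a protected resource.
shield.create_protection(
    Name="web-alb-protection",
    ResourceArn=(
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
        "loadbalancer/app/web-alb/1234567890abcdef"
    ),  # placeholder ALB ARN
)
```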
A company is migrating its on-premises Oracle database to an Amazon RDS for Oracle database. The company needs to retain data for 90 days to meet regulatory requirements. The company must also be able to restore the database to a specific point in time for up to 14 days.
Which solution will meet these requirements with the LEAST operational overhead?
⬜ A. Create Amazon RDS automated backups. Set the retention period to 90 days.
⬜ B. Create an Amazon RDS manual snapshot every day. Delete manual snapshots that are older than 90 days.
⬜ C. Use the Amazon Aurora Clone feature for Oracle to create a point-in-time restore. Delete clones that are older than 90 days
✅ D. Create a backup plan that has a retention period of 90 days by using AWS Backup for Amazon RDS.
Explanation:
AWS Backup is the most appropriate solution for managing backups with minimal operational overhead while meeting the regulatory requirement to retain data for 90 days and enabling point-in-time restore for up to 14 days.
● AWS Backup: AWS Backup provides a centralized backup management solution that supports automated backup scheduling, retention management, and compliance reporting across AWS services, including Amazon RDS. By creating a backup plan, you can define a retention period (in this case, 90 days) and automate the backup process.
● Point-in-Time Restore (PITR): AWS Backup supports continuous backups for Amazon RDS with point-in-time restore for up to 35 days. Combined with the 90-day backup plan, this covers the requirement to restore the database to a specific point in time within the last 14 days.
Why other options are wrong:
Option A (RDS Automated Backups): While RDS automated backups support PITR, they do not directly support retention beyond 35 days without manual intervention.
Option B (Manual Snapshots): Manually creating and managing snapshots is operationally intensive and less automated compared to AWS Backup.
Option C (Aurora Clones): Aurora Clone is a feature specific to Amazon Aurora and is not applicable to Amazon RDS for Oracle.
Source: https://docs.aws.amazon.com/aws-backup/latest/devguide/whatisbackup.html
Source: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithAutomatedBackups.html
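A hedged boto3 sketch of a backup plan with a 90-day retention rule assigned to the RDS instance; the vault name, role ARN, schedule, and resource ARN are placeholder assumptions.

```python
import boto3

backup = boto3.client("backup")

plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "rds-oracle-90-day-retention",
        "Rules": [{
            "RuleName": "monthly-90-day-retention",
            "TargetBackupVaultName": "Default",         # placeholder vault
            "ScheduleExpression": "cron(0 5 1 * ? *)",  # first day of each month
            "Lifecycle": {"DeleteAfterDays": 90},
        }],
    }
)

# Assign the RDS for Oracle instance to the plan.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "oracle-db",
        "IamRoleArn": "arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole",
        "Resources": ["arn:aws:rds:us-east-1:111122223333:db:oracle-prod"],  # placeholder
    },
)
```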
A company has an employee web portal. Employees log in to the portal to view payroll details. The company is developing a new system to give employees the ability to upload scanned documents for reimbursement. The company runs a program to extract text-based data from the documents and attach the extracted information to each employee’s reimbursement IDs for processing.
The employee web portal requires 100% uptime. The document extract program runs infrequently throughout the day on an on-demand basis. The company wants to build a scalable and cost-effective new system that will require minimal changes to the existing web portal. The company does not want to make any code changes.
Which solution will meet these requirements with the LEAST implementation effort?
✅ A. Run Amazon EC2 On-Demand Instances in an Auto Scaling group for the web portal. Use an AWS Lambda function to run the document extract program. Invoke the Lambda function when an employee uploads a new reimbursement document.
⬜ B. Run Amazon EC2 Spot Instances in an Auto Scaling group for the web portal. Run the document extract program on EC2 Spot Instances Start document extract program instances when an employee uploads a new reimbursement document.
⬜ C. Purchase a Savings Plan to run the web portal and the document extract program. Run the web portal and the document extract program in an Auto Scaling group.
⬜ D. Create an Amazon S3 bucket to host the web portal. Use Amazon API Gateway and an AWS Lambda function for the existing functionalities. Use the Lambda function to run the document extract program. Invoke the Lambda function when the API that is associated with a new document upload is called.
Explanation:
This solution offers the most scalable and cost-effective approach with minimal changes to the existing web portal and no code modifications.
Amazon EC2 On-Demand Instances in an Auto Scaling Group: Running the web portal on EC2 On-Demand instances ensures 100% uptime and scalability. The Auto Scaling group will maintain the desired number of instances, automatically scaling up or down as needed, ensuring high availability for the employee web portal.
AWS Lambda for Document Extraction: Lambda is a serverless compute service that allows you to run code in response to events without provisioning or managing servers. By using Lambda to run the document extraction program, you can trigger the function whenever an employee uploads a document. This approach is cost-effective since you only pay for the compute time used by the Lambda function.
No Code Changes Required: This solution integrates with the existing infrastructure with minimal implementation effort and does not require any modifications to the web portal’s code.
Why other options are wrong:
Option B (Spot Instances): Spot Instances are not suitable for workloads requiring 100% uptime, as they can be terminated by AWS with short notice.
Option C (Savings Plan): A Savings Plan could reduce costs but does not address the requirement for running the document extraction program efficiently or without code changes.
Option D (S3 with API Gateway and Lambda): This would require significant changes to the existing web portal setup, including moving the portal to S3 and reconfiguring its architecture, which contradicts the requirement of minimal implementation effort and no code changes.
Source: https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html
Source: https://docs.aws.amazon.com/lambda/latest/dg/welcome.html
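A hedged sketch of wiring the document upload bucket to the extraction Lambda function; the bucket name and function ARN are placeholders, and the function first needs a resource-based permission that allows S3 to invoke it.

```python
import boto3

s3 = boto3.client("s3")
lambda_client = boto3.client("lambda")

BUCKET = "reimbursement-uploads"  # placeholder
FUNCTION_ARN = "arn:aws:lambda:us-east-1:111122223333:function:extract-documents"

# Allow S3 to invoke the function.
lambda_client.add_permission(
    FunctionName=FUNCTION_ARN,
    StatementId="allow-s3-invoke",
    Action="lambda:InvokeFunction",
    Principal="s3.amazonaws.com",
    SourceArn=f"arn:aws:s3:::{BUCKET}",
)

# Invoke the function whenever a new document is uploaded.
s3.put_bucket_notification_configuration(
    Bucket=BUCKET,
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [{
            "LambdaFunctionArn": FUNCTION_ARN,
            "Events": ["s3:ObjectCreated:*"],
        }],
    },
)
```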
A company has one million users that use its mobile app. The company must analyze the data usage in near-real time. The company also must encrypt the data in near-real time and must store the data in a centralized location in Apache Parquet format for further processing.
Which solution will meet these requirements with the LEAST operational overhead?
⬜ A. Create an Amazon Kinesis data stream to store the data in Amazon S3. Create an Amazon Kinesis Data Analytics application to analyze the data. Invoke an AWS Lambda function to send the data to the Kinesis Data Analytics application.
⬜ B. Create an Amazon Kinesis data stream to store the data in Amazon S3. Create an Amazon EMR cluster to analyze the data. Invoke an AWS Lambda function to send the data to the EMR cluster.
⬜ C. Create an Amazon Kinesis Data Firehose delivery stream to store the data in Amazon S3. Create an Amazon EMR cluster to analyze the data.
✅ D. Create an Amazon Kinesis Data Firehose delivery stream to store the data in Amazon S3. Create an Amazon Kinesis Data Analytics application to analyze the data.
Explanation:
The solution must deliver near-real-time ingestion, encryption, storage in Parquet format, and centralized storage — with least operational overhead:
● Kinesis Data Firehose automatically ingests, encrypts, buffers, and delivers streaming data to S3 (with built-in support for Parquet format and encryption via KMS).
● Kinesis Data Analytics can analyze streaming data directly in real-time without needing manual cluster management like EMR.
● Both Kinesis services are fully managed (no servers or manual scaling), meeting the least operational overhead requirement perfectly.
Thus, Kinesis Data Firehose + Kinesis Data Analytics is the optimal serverless, efficient solution.
Why other options are wrong:
A: Kinesis Data Stream + manual Lambda invocation + Kinesis Data Analytics = more components to manage (Lambda triggers, stream consumers).
B: Kinesis Data Stream + EMR cluster = EMR adds high operational overhead (you need to manage cluster lifecycle, scaling, costs).
C: Firehose to S3 is good, but EMR again requires cluster management — not as low-overhead as using fully managed Kinesis Data Analytics.
Source: https://docs.aws.amazon.com/firehose/latest/dev/what-is-this-service.html
Source: https://docs.aws.amazon.com/kinesisanalytics/latest/dev/what-is.html
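A hedged sketch of a Firehose delivery stream that converts incoming JSON records to Parquet and encrypts the objects it writes to S3; the ARNs, bucket, and the Glue table that defines the schema are placeholder assumptions.

```python
import boto3

firehose = boto3.client("firehose")

firehose.create_delivery_stream(
    DeliveryStreamName="mobile-usage-stream",  # placeholder
    DeliveryStreamType="DirectPut",
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::111122223333:role/firehose-delivery-role",
        "BucketARN": "arn:aws:s3:::central-usage-data",
        "BufferingHints": {"IntervalInSeconds": 60, "SizeInMBs": 128},
        "EncryptionConfiguration": {
            "KMSEncryptionConfig": {
                "AWSKMSKeyARN": "arn:aws:kms:us-east-1:111122223333:key/example"
            }
        },
        # Convert JSON records to Parquet using a Glue table as the schema.
        "DataFormatConversionConfiguration": {
            "Enabled": True,
            "InputFormatConfiguration": {"Deserializer": {"OpenXJsonSerDe": {}}},
            "OutputFormatConfiguration": {"Serializer": {"ParquetSerDe": {}}},
            "SchemaConfiguration": {
                "DatabaseName": "analytics",
                "TableName": "mobile_usage",
                "RoleARN": "arn:aws:iam::111122223333:role/firehose-delivery-role",
                "Region": "us-east-1",
            },
        },
    },
)
```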
A company has 15 employees. The company stores employee start dates in an Amazon DynamoDB table. The company wants to send an email message to each employee on the day of the employee’s work anniversary.
Which solution will meet these requirements with the MOST operational efficiency?
⬜ A. Create a script that scans the DynamoDB table and uses Amazon Simple Notification Service (Amazon SNS) to send email messages to employees when necessary. Use a cron job to run this script every day on an Amazon EC2 instance.
⬜ B. Create a script that scans the DynamoDB table and uses Amazon Simple Queue Service (Amazon SQS) to send email messages to employees when necessary. Use a cron job to run this script every day on an Amazon EC2 instance.
✅ C. Create an AWS Lambda function that scans the DynamoDB table and uses Amazon Simple Notification Service (Amazon SNS) to send email messages to employees when necessary. Schedule this Lambda function to run every day.
⬜ D. Create an AWS Lambda function that scans the DynamoDB table and uses Amazon Simple Queue Service (Amazon SQS) to send email messages to employees when necessary. Schedule this Lambda function to run every day.
Explanation:
For most operational efficiency, we want a serverless solution with minimal management overhead:
● AWS Lambda allows you to run code without managing servers (no EC2 needed).
● Amazon SNS is a simple, scalable service for sending emails (directly integrated with email endpoints).
● Scheduling Lambda functions (using EventBridge or CloudWatch Events) is a fully managed and reliable way to run daily jobs.
Thus, Lambda + DynamoDB + SNS is the most efficient and cost-effective solution for small-scale tasks like sending anniversary emails daily.
Why other options are wrong:
A: Requires managing an EC2 instance and cron job — adds operational overhead (servers to patch, secure, etc.).
B: Same problem as A — plus SQS is a queue system, not an email service. You would still need another consumer to send emails.
D: Using Lambda is good, but SQS is unnecessary here. SNS is purpose-built for sending notifications/emails directly.
Source: https://docs.aws.amazon.com/lambda/latest/dg/services-cwe-tutorial.html
Source: https://docs.aws.amazon.com/sns/latest/dg/sns-email-notifications.html
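A hedged sketch of the daily Lambda handler, scheduled for example with an EventBridge rate(1 day) rule; the table name, attribute names, date format, and topic ARN are assumptions.

```python
import datetime
import os

import boto3

dynamodb = boto3.resource("dynamodb")
sns = boto3.client("sns")

TABLE_NAME = os.environ.get("TABLE_NAME", "Employees")  # assumed table name
TOPIC_ARN = os.environ.get(
    "TOPIC_ARN", "arn:aws:sns:us-east-1:111122223333:anniversaries"
)

def handler(event, context):
    today = datetime.date.today()
    table = dynamodb.Table(TABLE_NAME)
    # With only 15 employees, a full table scan is cheap and simple.
    for item in table.scan()["Items"]:
        start = datetime.date.fromisoformat(item["start_date"])  # assumed ISO date string
        if (start.month, start.day) == (today.month, today.day):
            sns.publish(
                TopicArn=TOPIC_ARN,
                Subject="Happy work anniversary!",
                Message=f"Congratulations {item['name']} on {today.year - start.year} years!",
            )
```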
A company hosts a three-tier web application in the AWS Cloud. A Multi-AZ Amazon RDS for MySQL server forms the database layer. Amazon ElastiCache forms the cache layer. The company wants a caching strategy that adds or updates data in the cache when a customer adds an item to the database. The data in the cache must always match the data in the database.
Which solution will meet these requirements?
⬜ A. Implement the lazy loading caching strategy
✅ B. Implement the write-through caching strategy
⬜ C. Implement the adding TTL caching strategy
⬜ D. Implement the AWS AppConfig caching strategy
Explanation:
The write-through caching strategy ensures that every time new data is written to the database, it is also written to the cache immediately.
● This keeps the cache and the database perfectly in sync, meeting the requirement that the cache must always reflect the database accurately.
● It eliminates the chances of a cache miss or stale data after a database update.
Why other options are wrong:
A. Lazy loading: Updates the cache only on read, not when data is written — causing potential inconsistency.
C. Adding TTL: Only sets expiration times for cache entries, but does not update cache when database changes.
D. AWS AppConfig caching strategy: AWS AppConfig is for application configuration management, not for database caching.
Source: https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/Strategies.html
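A minimal write-through sketch, assuming an ElastiCache for Redis endpoint and a hypothetical save_order_to_database helper that performs the RDS write.

```python
import json

import redis  # redis-py client for ElastiCache for Redis

cache = redis.Redis(host="my-cache.xxxxxx.use1.cache.amazonaws.com", port=6379)  # placeholder

def save_order_to_database(order: dict) -> None:
    """Hypothetical helper that writes the order to the RDS for MySQL database."""
    ...

def add_order(order: dict) -> None:
    # Write-through: update the database and the cache in the same code path,
    # so reads from the cache always match the database.
    save_order_to_database(order)
    cache.set(f"order:{order['id']}", json.dumps(order))
```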
A company has two AWS accounts: Production and Development. The company needs to push code changes in the Development account to the Production account. In the alpha phase, only two senior developers on the development team need access to the Production account. In the beta phase, more developers will need access to perform testing.
Which solution will meet these requirements?
⬜ A. Create two policy documents by using the AWS Management Console in each account. Assign the policy to developers who need access.
⬜ B. Create an IAM role in the Development account. Grant the IAM role access to the Production account. Allow developers to assume the role.
✅ C. Create an IAM role in the Production account. Define a trust policy that specifies the Development account. Allow developers to assume the role.
⬜ D. Create an IAM group in the Production account. Add the group as a principal in a trust policy that specifies the Production account. Add developers to the group.
Explanation:
The correct solution is to create an IAM role in the Production account and configure a trust policy that allows users from the Development account to assume the role.
● This is the standard cross-account access pattern in AWS: the resource account (Production) owns the role, and the trusted account (Development) users assume it.
● As the team grows (from two senior developers in the alpha phase to more developers in the beta phase), you only need to allow additional Development account users to assume the role; no change is needed in the Production account.
Why other options are wrong:
A. Create two policy documents: Policy documents alone do not allow cross-account access. You need a trust relationship.
B. Create an IAM role in Development account: The role must exist in the Production account, not Development, because you are accessing Production resources.
D. Create an IAM group in Production: IAM groups cannot be a principal in a trust policy. Trust policies are only for IAM roles.
Source: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html#roles-creatingrole-trust-policy
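A hedged sketch of creating the role in the Production account with a trust policy for the Development account; the account ID and role name are placeholders, and a permissions policy for deployments would still be attached separately.

```python
import json

import boto3

iam = boto3.client("iam")  # run with credentials in the Production account

DEVELOPMENT_ACCOUNT_ID = "222233334444"  # placeholder

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{DEVELOPMENT_ACCOUNT_ID}:root"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="DeploymentAccessFromDev",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Developers in the Development account then call sts:AssumeRole on
# arn:aws:iam::<production-account-id>:role/DeploymentAccessFromDev.
```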
A company uses NFS to store large video files in on-premises network attached storage. Each video file ranges in size from 1MB to 500 GB. The total storage is 70 TB and is no longer growing. The company decides to migrate the video files to Amazon S3. The company must migrate the video files as soon as possible while using the least possible network bandwidth.
Which solution will meet these requirements?
⬜ A. Create an S3 bucket. Create an IAM role that has permissions to write to the S3 bucket. Use the AWS CLI to copy all files locally to the S3 bucket.
✅ B. Create an AWS Snowball Edge job. Receive a Snowball Edge device on premises. Use the Snowball Edge client to transfer data to the device. Return the device so that AWS can import the data into Amazon S3.
⬜ C. Deploy an S3 File Gateway on premises. Create a public service endpoint to connect to the S3 File Gateway. Create an S3 bucket. Create a new NFS file share on the S3 File Gateway. Point the new file share to the S3 bucket. Transfer the data from the existing NFS file share to the S3 File Gateway.
⬜ D. Set up an AWS Direct Connect connection between the on-premises network and AWS. Deploy an S3 File Gateway on premises. Create a public virtual interface (VIF) to connect to the S3 File Gateway. Create an S3 bucket. Create a new NFS file share on the S3 File Gateway. Point the new file share to the S3 bucket. Transfer the data from the existing NFS file share to the S3 File Gateway.
Explanation:
The best solution for migrating 70 TB of data quickly with least possible network bandwidth usage is to use AWS Snowball Edge.
● Snowball Edge is a physical device that AWS ships to you.
● You load data locally, and ship the device back to AWS.
● AWS then imports the data directly into S3 from their side.
● This approach avoids overloading the network and speeds up large-scale data transfers.
Why other options are wrong:
A. AWS CLI upload: Copying 70 TB over the network using CLI would be very slow and consume heavy bandwidth.
C. S3 File Gateway: Suitable for ongoing access to S3 as NFS, not ideal for a one-time bulk migration.
D. AWS Direct Connect: Setting up Direct Connect is time-consuming and expensive, unnecessary for a one-time migration.
Source: https://docs.aws.amazon.com/snowball/latest/developer-guide/whatis.html
A company collects data for temperature, humidity, and atmospheric pressure in cities across multiple continents. The average volume of data that the company collects from each site daily is 500 GB. Each site has a high-speed Internet connection.
The company wants to aggregate the data from all these global sites as quickly as possible in a single Amazon S3 bucket. The solution must minimize operational complexity.
Which solution meets these requirements?
✅ A. Turn on S3 Transfer Acceleration on the destination S3 bucket. Use multipart uploads to directly upload site data to the destination S3 bucket.
⬜ B. Upload the data from each site to an S3 bucket in the closest Region. Use S3 Cross-Region Replication to copy objects to the destination S3 bucket. Then remove the data from the origin S3 bucket.
⬜ C. Schedule AWS Snowball Edge Storage Optimized device jobs daily to transfer data from each site to the closest Region. Use S3 Cross-Region Replication to copy objects to the destination S3 bucket.
⬜ D. Upload the data from each site to an Amazon EC2 instance in the closest Region. Store the data in an Amazon Elastic Block Store (Amazon EBS) volume. At regular intervals, take an EBS snapshot and copy it to the Region that contains the destination S3 bucket. Restore the EBS volume in that Region.
Explanation:
The correct and most efficient solution for fast, global data uploads with minimal complexity is:
● A. Enable S3 Transfer Acceleration on the destination bucket.
● S3 Transfer Acceleration uses Amazon CloudFront’s globally distributed edge locations to accelerate uploads from geographically dispersed clients directly to S3, minimizing latency and maximizing speed.
● Using multipart uploads further speeds up large file transfers and improves reliability.
Why other options are wrong:
B. Cross-Region Replication adds extra steps, higher costs, and additional management complexity.
C. Snowball devices are best for offline, bulk transfers, not for daily, fast uploads over high-speed internet.
D. Using EC2 and EBS snapshots is overly complex, costly, and unnecessary for simple file transfers.
Source: https://docs.aws.amazon.com/AmazonS3/latest/userguide/transfer-acceleration.html
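A hedged sketch of enabling Transfer Acceleration and uploading through the accelerated endpoint with automatic multipart uploads; the bucket, key, and file names are placeholders.

```python
import boto3
from boto3.s3.transfer import TransferConfig
from botocore.config import Config

BUCKET = "global-weather-data"  # placeholder destination bucket

# One-time: enable Transfer Acceleration on the destination bucket.
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket=BUCKET,
    AccelerateConfiguration={"Status": "Enabled"},
)

# Per-site uploads: use the accelerate endpoint and multipart uploads.
s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3.upload_file(
    Filename="site-data-2024-01-01.csv",  # placeholder local file
    Bucket=BUCKET,
    Key="raw/site-eu-1/2024-01-01.csv",
    Config=TransferConfig(multipart_threshold=64 * 1024 * 1024),  # multipart above 64 MB
)
```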
A company is deploying an application in three AWS Regions using an Application Load Balancer. Amazon Route 53 will be used to distribute traffic between these Regions.
Which Route 53 configuration should a solutions architect use to provide the MOST high-performing experience?
✅ A. Create an A record with a latency policy.
⬜ B. Create an A record with a geolocation policy.
⬜ C. Create a CNAME record with a failover policy.
⬜ D. Create a CNAME record with a geoproximity policy.
Explanation:
The latency-based routing policy in Route 53 helps direct users to the AWS Region that provides the lowest network latency, thus providing the most high-performing experience.
● It improves end-user experience by routing requests to the Region that responds the fastest.
● An A record with a latency policy ensures that users get routed to the closest, best-performing Region automatically based on real-time measurements.
Why other options are wrong:
B. Geolocation policy: Routes traffic based on the user’s location (e.g., country or continent), not on network latency or performance.
C. Failover policy: Focuses on availability and recovery, not on performance optimization.
D. Geoproximity policy: Routes based on physical distance (with traffic bias), but latency is a better metric for performance compared to pure distance.
Source: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html#routing-policy-latency
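A hedged sketch of one latency-based alias A record for one Region; the hosted zone IDs, ALB DNS name, and domain are placeholders, and the same change is repeated for each Region with its own SetIdentifier and Region value.

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",  # placeholder hosted zone for example.com
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "A",
                "SetIdentifier": "us-east-1-alb",  # unique per Region
                "Region": "us-east-1",             # latency-based routing
                "AliasTarget": {
                    "HostedZoneId": "Z35SXDOTRQ7X7K",  # the ALB's canonical hosted zone ID
                    "DNSName": "my-alb-123456.us-east-1.elb.amazonaws.com",
                    "EvaluateTargetHealth": True,
                },
            },
        }],
    },
)
```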
A company runs containers in a Kubernetes environment in the company’s local data center. The company wants to use Amazon Elastic Kubernetes Service (Amazon EKS) and other AWS managed services. Data must remain locally in the company’s data center and cannot be stored in any remote site or cloud to maintain compliance.
Which solution will meet these requirements?
⬜ A. Deploy AWS Local Zones in the company’s data center
⬜ B. Use an AWS Snowmobile in the company’s data center
✅ C. Install an AWS Outposts rack in the company’s data center
⬜ D. Install an AWS Snowball Edge Storage Optimized node in the data center
Explanation:
AWS Outposts is the correct solution — it brings AWS services, APIs, and managed infrastructure to the company’s on-premises data center, while keeping the data local to meet compliance requirements.
● Outposts supports Amazon EKS, Amazon RDS, S3 APIs, and many AWS managed services locally.
● It is specifically designed for use cases where data must remain on-premises while still leveraging AWS-native capabilities.
Why other options are wrong:
A. Local Zones: Local Zones are owned and operated by AWS in nearby AWS-managed sites, not inside your own data center.
B. Snowmobile: AWS Snowmobile is for bulk data transfer (exabytes) to AWS — not for running services on-premises.
D. Snowball Edge: A Snowball Edge node is designed for edge computing and limited storage/compute tasks, but not suitable for fully running managed services like EKS.
Source: https://docs.aws.amazon.com/outposts/latest/userguide/what-is-outposts.html
A company’s near-real-time streaming application is running on AWS. As the data is ingested, a job runs on the data and takes 30 minutes to complete. The workload frequently experiences high latency due to large amounts of incoming data. A solutions architect needs to design a scalable and serverless solution to enhance performance.
Which combination of steps should the solutions architect take? (Select TWO.)
✅ A. Use Amazon Kinesis Data Firehose to ingest the data.
⬜ B. Use AWS Lambda with AWS Step Functions to process the data.
⬜ C. Use AWS Database Migration Service (AWS DMS) to ingest the data.
⬜ D. Use Amazon EC2 instances in an Auto Scaling group to process the data.
✅ E. Use AWS Fargate with Amazon Elastic Container Service (Amazon ECS) to process the data.
Explanation:
A. Kinesis Data Firehose is a fully managed, serverless service that ingests streaming data and can automatically deliver it to destinations like S3, Redshift, or Elasticsearch, handling large volumes with minimal setup.
E. AWS Fargate with Amazon ECS allows running containerized jobs serverlessly without managing any EC2 instances.
Since the job takes 30 minutes, AWS Lambda is not a good fit (Lambda has a 15-minute timeout), but Fargate can easily handle longer-running tasks.
Thus, combining Kinesis Data Firehose for ingestion and Fargate/ECS for processing offers a scalable, serverless architecture.
Why other options are wrong:
B. Lambda with Step Functions: Lambda cannot handle 30-minute jobs due to its 15-minute max timeout.
C. AWS DMS: DMS is for database migration, not for general-purpose streaming data ingestion.
D. EC2 Auto Scaling group: EC2 instances are not serverless — you have to manage and scale instances manually.
Source: https://docs.aws.amazon.com/firehose/latest/dev/what-is-this-service.html
Source: https://docs.aws.amazon.com/AmazonECS/latest/userguide/what-is-fargate.html
A company uses Amazon EC2 instances and Amazon Elastic Block Store (Amazon EBS) to run its self-managed database. The company has 350 TB of data spread across all EBS volumes. The company takes daily EBS snapshots and keeps the snapshots for 1 month. The daily change rate is 5% of the EBS volumes.
Because of new regulations, the company needs to keep the monthly snapshots for 7 years. The company needs to change its backup strategy to comply with the new regulations and to ensure that data is available with minimal administrative effort.
Which solution will meet these requirements MOST cost-effectively?
⬜ A. Keep the daily snapshot in the EBS snapshot standard tier for 1 month. Copy the monthly snapshot to Amazon S3 Glacier Deep Archive with a 7-year retention period.
✅ B. Continue with the current EBS snapshot policy. Add a new policy to move the monthly snapshot to Amazon EBS Snapshots Archive with a 7-year retention period.
⬜ C. Keep the daily snapshot in the EBS snapshot standard tier for 1 month. Keep the monthly snapshot in the standard tier for 7 years. Use incremental snapshots.
⬜ D. Keep the daily snapshot in the EBS snapshot standard tier. Use EBS direct APIs to take snapshots of all the EBS volumes every month. Store the snapshots in an Amazon S3 bucket in the Infrequent Access tier for 7 years.
Explanation:
The most cost-effective solution is to use Amazon EBS Snapshots Archive, which provides a lower-cost storage tier designed for long-term retention of snapshots.
● You can keep daily snapshots in the standard tier for 1 month for fast restores.
● Then archive monthly snapshots for 7 years at much lower cost without needing manual movement or conversions.
● AWS natively supports snapshot lifecycle policies to automatically manage transitions between standard and archive tiers, ensuring minimal administrative effort.
Why other options are wrong:
A. Copying to Glacier Deep Archive: EBS snapshots are not directly copied to Glacier — they stay within EBS snapshot management. Manual copy to S3/Glacier increases operational overhead.
C. Keeping snapshots in the standard tier for 7 years: Much higher cost compared to moving to Archive tier.
D. Using direct APIs and S3 Infrequent Access: EBS snapshots cannot be directly stored in S3 without custom export/import logic, adding unnecessary complexity.
Source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-archive.html
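A hedged sketch of moving a monthly snapshot to the archive tier; the snapshot ID is a placeholder, and in practice an Amazon Data Lifecycle Manager policy can automate the schedule and the 7-year retention.

```python
import boto3

ec2 = boto3.client("ec2")

# Move a monthly snapshot from the standard tier to the lower-cost archive tier.
ec2.modify_snapshot_tier(
    SnapshotId="snap-0123456789abcdef0",  # placeholder monthly snapshot
    StorageTier="archive",
)

# Restoring later (can take up to 72 hours) before the snapshot can be used:
# ec2.restore_snapshot_tier(SnapshotId="snap-0123456789abcdef0", PermanentRestore=True)
```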
A company wants to use NAT gateways in its AWS environment. The company’s Amazon EC2 instances in private subnets must be able to connect to the public internet through the NAT gateways.
Which solution will meet these requirements?
⬜ A. Create public NAT gateways in the same private subnets as the EC2 instances
⬜ B. Create private NAT gateways in the same private subnets as the EC2 instances
✅ C. Create public NAT gateways in public subnets in the same VPCs as the EC2 instances
⬜ D. Create private NAT gateways in public subnets in the same VPCs as the EC2 instances
Explanation:
To allow EC2 instances in private subnets to access the public internet:
● You must deploy public NAT gateways in a public subnet of the same VPC.
● A public subnet means the subnet has a route to an internet gateway.
● The private subnet EC2 instances route their internet-bound traffic to the NAT gateway, which forwards it to the internet gateway.
This setup ensures private instances are not directly exposed to the internet while still allowing them to initiate outbound connections.
Why other options are wrong:
A. Public NAT in private subnet: You cannot create a public NAT gateway in a private subnet; it needs a public IP and internet route.
B. Private NAT in private subnet: Private NAT is used for VPC-to-VPC communication, not internet access.
D. Private NAT in public subnet: Private NAT gateways are for private communications only, not for internet-bound traffic.
Source: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html
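A hedged sketch of the NAT gateway setup; the subnet, allocation, and route table IDs are placeholders, and the NAT gateway must reach the available state before traffic flows.

```python
import boto3

ec2 = boto3.client("ec2")

# 1. Create a public NAT gateway in a public subnet with an Elastic IP.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-public-1234",  # a subnet routed to an internet gateway
    AllocationId=eip["AllocationId"],
    ConnectivityType="public",
)

# 2. Route the private subnets' internet-bound traffic through the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-private-5678",  # route table of the private subnets
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGateway"]["NatGatewayId"],
)
```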
A social media company has workloads that collect and process data. The workloads store the data in on-premises NFS storage. The data store cannot scale fast enough to meet the company’s expanding business needs. The company wants to migrate the current data store to AWS.
Which solution will meet these requirements MOST cost-effectively?
⬜ A. Set up an AWS Storage Gateway Volume Gateway. Use an Amazon S3 Lifecycle policy to transition the data to the appropriate storage class.
✅ B. Set up an AWS Storage Gateway Amazon S3 File Gateway. Use an Amazon S3 Lifecycle policy to transition the data to the appropriate storage class.
⬜ C. Use the Amazon Elastic File System (Amazon EFS) Standard-Infrequent Access (Standard-IA) storage class. Activate the infrequent access lifecycle policy.
⬜ D. Use the Amazon Elastic File System (Amazon EFS) One Zone-Infrequent Access (One Zone-IA) storage class. Activate the infrequent access lifecycle policy.
Explanation:
The most cost-effective solution to migrate NFS-based on-premises storage to AWS is to use AWS Storage Gateway - S3 File Gateway:
● It allows your applications to access Amazon S3 objects as NFS file shares, appearing like a local file system.
● You can apply S3 Lifecycle policies to automatically move older data to lower-cost storage classes like S3 Standard-IA, Glacier Instant Retrieval, or Deep Archive.
● This solution provides scalability, durability, and low cost, while preserving NFS compatibility.
Why other options are wrong:
A. Volume Gateway: Volume Gateway presents block storage (iSCSI) — not NFS file system — so it does not match NFS needs.
C. EFS Standard-IA: EFS is great but much more expensive compared to S3-based storage for massive, less frequently accessed data.
D. EFS One Zone-IA: Slightly cheaper than Standard-IA but still more costly than an S3-backed Storage Gateway for large, cold datasets.
Source: https://docs.aws.amazon.com/filegateway/latest/files3/WhatIsFileGateway.html
A company runs a container application on a Kubernetes cluster in the company’s data center. The application uses Advanced Message Queuing Protocol (AMQP) to communicate with a message queue. The data center cannot scale fast enough to meet the company’s expanding business needs. The company wants to migrate the workloads to AWS.
Which solution will meet these requirements with the LEAST operational overhead?
⬜ A. Migrate the container application to Amazon Elastic Container Service (Amazon ECS). Use Amazon Simple Queue Service (Amazon SQS) to retrieve the messages.
✅ B. Migrate the container application to Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon MQ to retrieve the messages.
⬜ C. Use highly available Amazon EC2 instances to run the application. Use Amazon MQ to retrieve the messages.
⬜ D. Use AWS Lambda functions to run the application. Use Amazon Simple Queue Service (Amazon SQS) to retrieve the messages.
Explanation:
● The application is already containerized on Kubernetes and uses AMQP, a protocol not natively supported by Amazon SQS.
● Amazon MQ fully supports AMQP and provides a managed message broker service (e.g., ActiveMQ, RabbitMQ) with minimal operational overhead.
● Amazon EKS allows for a seamless migration from on-prem Kubernetes to AWS-managed Kubernetes with scaling, patching, and upgrades managed by AWS.
Thus, migrating to Amazon EKS + Amazon MQ is the solution that matches the existing architecture and minimizes operational burden.
Why other options are wrong:
A. ECS + SQS: SQS does not support AMQP protocol.
C. EC2 instances: Running Kubernetes on EC2 manually would increase operational overhead (managing nodes, patching, scaling manually).
D. Lambda + SQS: Lambda functions are not suitable for long-running containerized applications, and again SQS does not support AMQP.
Source: https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/what-is-amazon-mq.html
A company website hosted on Amazon EC2 instances processes classified data stored in the application. The application writes data to Amazon Elastic Block Store (Amazon EBS) volumes. The company needs to ensure that all data that is written to the EBS volumes is encrypted at rest.
Which solution will meet this requirement?
⬜ A. Create an IAM role that specifies EBS encryption. Attach the role to the EC2 instances.
✅ B. Create the EBS volumes as encrypted volumes. Attach the EBS volumes to the EC2 instances.
⬜ C. Create an EC2 instance tag that has a key of Encrypt and a value of True. Tag all instances that require encryption at the EBS level.
⬜ D. Create an AWS Key Management Service (AWS KMS) key policy that enforces EBS encryption in the account. Ensure that the key policy is active.
Explanation:
● When you create EBS volumes, you can specify encryption at creation time.
● AWS handles encryption at rest transparently using AWS-managed or customer-managed KMS keys.
● Encrypted volumes ensure that all data written to disk is encrypted automatically without needing manual intervention later.
Why other options are wrong:
A. IAM role for encryption: IAM roles control permissions, but they do not encrypt EBS volumes automatically.
C. Instance tags: Tags are metadata for resources; they have no effect on encryption settings.
D. KMS key policy: A key policy controls access to encryption keys, but by itself does not enforce EBS volume encryption.
Source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html
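A hedged sketch of creating an encrypted volume and, optionally, enforcing encryption by default for the account and Region; the Availability Zone, key, and instance ID are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Optionally make every new EBS volume in this Region encrypted by default.
ec2.enable_ebs_encryption_by_default()

# Create an encrypted volume explicitly, then attach it to the instance.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,                  # GiB
    VolumeType="gp3",
    Encrypted=True,
    KmsKeyId="alias/aws/ebs",  # or a customer managed key
)
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # placeholder instance
    Device="/dev/sdf",
)
```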
A company wants to analyze and troubleshoot Access Denied errors and Unauthorized errors that are related to IAM permissions. The company has AWS CloudTrail turned on.
Which solution will meet these requirements with the LEAST effort?
⬜ A. Use AWS Glue and write custom scripts to query CloudTrail logs for the errors.
⬜ B. Use AWS Batch and write custom scripts to query CloudTrail logs for the errors.
✅ C. Search CloudTrail logs with Amazon Athena queries to identify the errors.
⬜ D. Search CloudTrail logs with Amazon QuickSight. Create a dashboard to identify the errors.
Explanation:
The easiest and most efficient way to analyze CloudTrail logs for Access Denied and Unauthorized errors is to use Amazon Athena.
● Athena allows you to run SQL queries directly against CloudTrail logs stored in S3 without setting up infrastructure or writing custom scripts.
● This provides fast, serverless, ad-hoc querying at a low cost with minimal effort compared to building custom ETL pipelines or dashboards.
Why other options are wrong:
A. AWS Glue requires building ETL jobs and custom scripts, which adds complexity.
B. AWS Batch also needs custom scripting and compute management, making it higher effort.
D. Amazon QuickSight is good for visualization, not for quick and easy log searching; it requires Athena or another data source under the hood.
Source: https://docs.aws.amazon.com/athena/latest/ug/cloudtrail-logs.html
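As an illustration, a hedged boto3 sketch that runs an Athena query for the two error codes against a CloudTrail table; the database, table, and results bucket names are placeholders and assume the CloudTrail table has already been created in Athena:
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Find recent Access Denied / Unauthorized events recorded by CloudTrail
query = """
SELECT eventtime, useridentity.arn, eventsource, eventname, errorcode, errormessage
FROM cloudtrail_logs
WHERE errorcode IN ('AccessDenied', 'Client.UnauthorizedOperation')
ORDER BY eventtime DESC
LIMIT 50
"""

response = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "default"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results-bucket/cloudtrail/"},
)
print("Query execution ID:", response["QueryExecutionId"])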
A company is building a new web-based customer relationship management application. The application will use several Amazon EC2 instances that are backed by Amazon Elastic Block Store (Amazon EBS) volumes behind an Application Load Balancer (ALB). The application will also use an Amazon Aurora database. All data for the application must be encrypted at rest and in transit.
Which solution will meet these requirements?
⬜ A. Use AWS Key Management Service (AWS KMS) certificates on the ALB to encrypt data in transit. Use AWS Certificate Manager (ACM) to encrypt the EBS volumes and Aurora database storage at rest.
⬜ B. Use the AWS root account to log in to the AWS Management Console. Upload the company’s encryption certificates. While in the root account, select the option to turn on encryption for all data at rest and in transit for the account.
✅ C. Use AWS Key Management Service (AWS KMS) to encrypt the EBS volumes and Aurora database storage at rest. Attach an AWS Certificate Manager (ACM) certificate to the ALB to encrypt data in transit.
⬜ D. Use BitLocker to encrypt all data at rest. Import the company’s TLS certificate keys to AWS Key Management Service (AWS KMS). Attach the KMS keys to the ALB to encrypt data in transit.
Explanation:
The correct way to meet encryption requirements is:
● Use AWS KMS to encrypt EBS volumes and Aurora storage at rest.
● Use AWS Certificate Manager (ACM) to issue and attach SSL/TLS certificates to the ALB for data in transit encryption.
● This is a standard, AWS-native, secure, and low-effort approach that fully complies with encryption best practices.
Why other options are wrong:
A. KMS is not used to issue certificates for ALB; ACM manages certificates for in-transit encryption.
B. The AWS root account should not be used for day-to-day operations, and there is no single toggle for account-wide encryption.
D. BitLocker is a Windows disk-encryption feature; it is not a mechanism for encrypting EBS volumes or Aurora storage, and KMS keys cannot be attached to an ALB to provide TLS.
Source: https://docs.aws.amazon.com/kms/latest/developerguide/services-ebs.html
Source: https://docs.aws.amazon.com/acm/latest/userguide/acm-overview.html
A company uses AWS Organizations to run workloads within multiple AWS accounts. A tagging policy adds department tags to AWS resources when the company creates tags.
An accounting team needs to determine spending on Amazon EC2 consumption. The accounting team must determine which departments are responsible for the costs regardless of AWS account. The accounting team has access to AWS Cost Explorer for all AWS accounts within the organization and needs to access all reports from Cost Explorer.
Which solution meets these requirements in the MOST operationally efficient way?
✅ A. From the Organizations management account billing console, activate a user-defined cost allocation tag named department. Create one cost report in Cost Explorer grouping by tag name, and filter by EC2.
⬜ B. From the Organizations management account billing console, activate an AWS-defined cost allocation tag named department. Create one cost report in Cost Explorer grouping by tag name, and filter by EC2.
⬜ C. From the Organizations member account billing console, activate a user-defined cost allocation tag named department. Create one cost report in Cost Explorer grouping by the tag name, and filter by EC2.
⬜ D. From the Organizations member account billing console, activate an AWS-defined cost allocation tag named department. Create one cost report in Cost Explorer grouping by tag name and filter by EC2.
Explanation:
The correct and most efficient solution is:
● In the management account’s billing console, you must activate the user-defined cost allocation tag (department).
● After activation, you can group by the department tag and filter by Amazon EC2 usage inside Cost Explorer.
● Only the management account has the ability to activate tags for consolidated billing across all accounts.
Why other options are wrong:
B. AWS-defined tags are automatically created by AWS for resources (e.g., createdBy), not custom department tags created by users.
C. Member accounts cannot activate tags for consolidated billing across all accounts; activation must happen from the management account.
D. Same problem as C, plus it wrongly assumes department is an AWS-defined tag.
Source: https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html
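For illustration, a boto3 sketch of the kind of Cost Explorer query the accounting team could run once the department tag is activated; the date range is a placeholder, and the SERVICE value matches how Cost Explorer labels EC2 compute usage:
import boto3

ce = boto3.client("ce", region_name="us-east-1")  # the Cost Explorer API endpoint is in us-east-1

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "department"}],          # group costs by the department tag
    Filter={"Dimensions": {"Key": "SERVICE",
                           "Values": ["Amazon Elastic Compute Cloud - Compute"]}},
)

for group in response["ResultsByTime"][0]["Groups"]:
    print(group["Keys"], group["Metrics"]["UnblendedCost"]["Amount"])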
A company wants to run its payment application on AWS. The application receives payment notifications from mobile devices. Payment notifications require a basic validation before they are sent for further processing.
The backend processing application is long running and requires compute and memory to be adjusted. The company does not want to manage the infrastructure.
Which solution will meet these requirements with the LEAST operational overhead?
⬜ A. Create an Amazon Simple Queue Service (Amazon SQS) queue. Integrate the queue with an Amazon EventBridge rule to receive payment notifications from mobile devices. Configure the rule to validate payment notifications and send the notifications to the backend application. Deploy the backend application on Amazon Elastic Kubernetes Service (Amazon EKS) Anywhere. Create a standalone cluster.
⬜ B. Create an Amazon API Gateway API. Integrate the API with an AWS Step Functions state machine to receive payment notifications from mobile devices. Invoke the state machine to validate payment notifications and send the notifications to the backend application. Deploy the backend application on Amazon Elastic Kubernetes Service (Amazon EKS). Configure an EKS cluster with self-managed nodes.
⬜ C. Create an Amazon Simple Queue Service (Amazon SQS) queue. Integrate the queue with an Amazon EventBridge rule to receive payment notifications from mobile devices. Configure the rule to validate payment notifications and send the notifications to the backend application. Deploy the backend application on Amazon EC2 Spot Instances. Configure a Spot Fleet with a default allocation strategy.
✅ D. Create an Amazon API Gateway API. Integrate the API with AWS Lambda to receive payment notifications from mobile devices. Invoke a Lambda function to validate payment notifications and send the notifications to the backend application. Deploy the backend application on Amazon Elastic Container Service (Amazon ECS). Configure Amazon ECS with an AWS Fargate launch type.
Explanation:
The solution with the least operational overhead is:
● Use API Gateway to receive mobile notifications and Lambda to do lightweight validation (serverless, fully managed, minimal operations).
● Deploy the backend application using Amazon ECS with AWS Fargate, where AWS handles the underlying server provisioning, scaling, and maintenance automatically.
● This setup ensures that both validation and long-running backend processing require minimal infrastructure management while supporting auto-scaling compute and memory adjustments.
Why other options are wrong:
A. EKS Anywhere requires managing on-premises or own cluster infrastructure, increasing operational overhead.
B. EKS with self-managed nodes again requires managing EC2 instances, patching, scaling, which is not “least operational overhead.”
C. EC2 Spot Instances introduce potential interruptions and infrastructure management, not ideal for critical payment processing.
Source: https://docs.aws.amazon.com/ecs/latest/developerguide/what-is-fargate.html
A company stores multiple Amazon Machine Images (AMIs) in an AWS account to launch its Amazon EC2 instances. The AMIs contain critical data and configurations that are necessary for the company’s operations. The company wants to implement a solution that will recover accidentally deleted AMIs quickly and efficiently.
Which solution will meet these requirements with the LEAST operational overhead?
⬜ A. Create Amazon Elastic Block Store (Amazon EBS) snapshots of the AMIs. Store the snapshots in a separate AWS account.
⬜ B. Copy all AMIs to another AWS account periodically.
✅ C. Create a retention rule in Recycle Bin.
⬜ D. Upload the AMIs to an Amazon S3 bucket that has Cross-Region Replication.
Explanation:
The Recycle Bin for EBS-backed AMIs allows you to set retention rules so that if an AMI is accidentally deleted, it is retained for a specified period and can be recovered easily.
● This solution requires very little operational overhead because AWS automatically retains and manages the deleted AMIs according to the retention policy.
● No need for manual copying, snapshot management, or external backup scripts.
Why other options are wrong:
A. EBS snapshots store the volume data but do not store the full AMI metadata (like launch permissions and block device mappings).
B. Copying AMIs periodically to another account adds manual effort and scripting to manage versions.
D. Uploading AMIs to S3 is not a supported process for AMI management — AMIs are not simple files and cannot be directly stored in S3 buckets.
Source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recycle-bin.html
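A minimal boto3 sketch of a Recycle Bin retention rule for EBS-backed AMIs; the 14-day retention period and description are placeholder choices:
import boto3

rbin = boto3.client("rbin", region_name="us-east-1")

# Retain accidentally deregistered AMIs for 14 days so they can be restored
rule = rbin.create_rule(
    ResourceType="EC2_IMAGE",
    RetentionPeriod={"RetentionPeriodValue": 14, "RetentionPeriodUnit": "DAYS"},
    Description="Recover accidentally deleted AMIs",
)
print("Retention rule ID:", rule["Identifier"])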
A company’s developers want a secure way to gain SSH access on the company’s Amazon EC2 instances that run the latest version of Amazon Linux. The developers work remotely and in the corporate office.
The company wants to use AWS services as a part of the solution. The EC2 instances are hosted in a VPC private subnet and access the internet through a NAT gateway that is deployed in a public subnet.
What should a solutions architect do to meet these requirements MOST cost-effectively?
⬜ A. Create a bastion host in the same subnet as the EC2 instances. Grant the ec2:CreateVpnConnection IAM permission to the developers. Install EC2 Instance Connect so that the developers can connect to the EC2 instances.
⬜ B. Create an AWS Site-to-Site VPN connection between the corporate network and the VPC. Instruct the developers to use the Site-to-Site VPN connection to access the EC2 instances when the developers are on the corporate network. Instruct the developers to set up another VPN connection for access when they work remotely.
⬜ C. Create a bastion host in the public subnet of the VPC. Configure the security groups and SSH keys of the bastion host to only allow connections and SSH authentication from the developers’ corporate and remote networks. Instruct the developers to connect through the bastion host by using SSH to reach the EC2 instances.
✅ D. Attach the AmazonSSMManagedInstanceCore IAM policy to an IAM role that is associated with the EC2 instances. Instruct the developers to use AWS Systems Manager Session Manager to access the EC2 instances.
Explanation:
The most cost-effective, secure, and low-operational overhead solution is to use AWS Systems Manager Session Manager:
● It allows developers to access EC2 instances without needing SSH keys, bastion hosts, or open inbound ports.
● It uses IAM policies and secure tunneling through the Systems Manager service, even for instances in private subnets.
● This solution eliminates the need to maintain bastion hosts, VPNs, or manage SSH key distributions.
Why other options are wrong:
A. A bastion host adds infrastructure to manage and pay for, the ec2:CreateVpnConnection permission is unrelated to SSH access, and EC2 Instance Connect still requires network reachability to the instances.
B. Site-to-Site VPN is complex and expensive, especially requiring two different VPN solutions for remote and corporate users.
C. Bastion hosts require ongoing patching, hardening, and expose additional risk with open inbound ports.
Source: https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html
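For context, a boto3 sketch of the main prerequisite from the accepted answer: attaching the AmazonSSMManagedInstanceCore managed policy to the instance role; the role name is a placeholder, and developers then connect with the Session Manager console or the AWS CLI:
import boto3

iam = boto3.client("iam")

# Allow the instances (via their instance profile role) to register with Systems Manager
iam.attach_role_policy(
    RoleName="app-server-role",
    PolicyArn="arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore",
)

# Developers then start sessions without SSH keys or open inbound ports, for example:
#   aws ssm start-session --target i-0123456789abcdef0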
A solutions architect wants to use the following JSON text as an identity-based policy to grant specific permissions:
{
  "Statement": [
    {
      "Action": [
        "ssm:ListDocuments",
        "ssm:GetDocument"
      ],
      "Effect": "Allow",
      "Resource": "*",
      "Sid": ""
    }
  ],
  "Version": "2012-10-17"
}
Which IAM principals can the solutions architect attach this policy to? (Select TWO.)
✅ A. Role
✅ B. Group
⬜ C. Organization
⬜ D. Amazon Elastic Container Service (Amazon ECS) resource
⬜ E. Amazon EC2 resource
Explanation:
This JSON represents an identity-based policy, which can be attached to IAM principals such as:
● IAM Roles (A) — to grant permissions to assume and perform actions.
● IAM Groups (B) — to group users and assign permissions at the group level.
Identity-based policies cannot be directly attached to:
● AWS Organizations entities (C),
● ECS resources like tasks/services (D),
● EC2 instances (E).
For ECS tasks and EC2 instances, permissions are granted through IAM roles (task roles and instance profiles) rather than by attaching identity-based policies to the resources themselves.
Why other options are wrong:
C. Organization: You attach service control policies (SCPs) at the organization or organizational unit (OU) level, not identity policies.
D. Amazon ECS resource: ECS tasks or services assume IAM roles but do not have identity policies directly attached.
E. Amazon EC2 resource: EC2 instances assume IAM roles (instance profiles) rather than being assigned identity policies directly.
Source: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_identity-vs-resource.html
A company is preparing to store confidential data in Amazon S3. For compliance reasons, the data must be encrypted at rest. Encryption key usage must be logged for auditing purposes. Keys must be rotated every year.
Which solution meets these requirements and is the MOST operationally efficient?
⬜ A. Server-side encryption with customer-provided keys (SSE-C)
⬜ B. Server-side encryption with Amazon S3 managed keys (SSE-S3)
⬜ C. Server-side encryption with AWS KMS (SSE-KMS) customer master keys (CMKs) with manual rotation
✅ D. Server-side encryption with AWS KMS (SSE-KMS) customer master keys (CMKs) with automatic rotation
Explanation:
The most operationally efficient solution is to use SSE-KMS with automatic key rotation:
● AWS KMS automatically logs key usage through AWS CloudTrail, meeting auditing requirements.
● Automatic key rotation happens yearly without needing manual intervention, ensuring compliance with rotation policies.
● Encryption at rest is fully managed, with minimal operational overhead.
Why other options are wrong:
A. SSE-C requires clients to manage encryption keys themselves for each request — high operational overhead.
B. SSE-S3 encrypts data but does not log individual key usage for auditing.
C. Manual rotation of KMS keys introduces unnecessary operational burden compared to automated rotation.
Source: https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingKMSEncryption.html
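A hedged boto3 sketch of the accepted answer: create a KMS key with automatic rotation and make it the bucket's default encryption; the bucket name and key description are placeholders:
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Customer managed key with yearly automatic rotation; key usage is logged in CloudTrail
key = kms.create_key(Description="Confidential data at rest")
key_id = key["KeyMetadata"]["KeyId"]
kms.enable_key_rotation(KeyId=key_id)

# Make SSE-KMS with this key the default encryption for new objects in the bucket
s3.put_bucket_encryption(
    Bucket="confidential-data-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms",
                                                   "KMSMasterKeyID": key_id},
            "BucketKeyEnabled": True,
        }]
    },
)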
A company wants to use an AWS CloudFormation stack for its application in a test environment. The company stores the CloudFormation template in an Amazon S3 bucket that blocks public access. The company wants to grant CloudFormation access to the template in the S3 bucket based on specific user requests to create the test environment. The solution must follow security best practices.
Which solution will meet these requirements?
⬜ A. Create a gateway VPC endpoint for Amazon S3. Configure the CloudFormation stack to use the S3 object URL.
⬜ B. Create an Amazon API Gateway REST API that has the S3 bucket as the target. Configure the CloudFormation stack to use the API Gateway URL.
✅ C. Create a presigned URL for the template object. Configure the CloudFormation stack to use the presigned URL.
⬜ D. Allow public access to the template object in the S3 bucket. Block the public access after the test environment is created.
Explanation:
The most secure and best-practice approach is to create a presigned URL for the S3 object.
● A presigned URL grants temporary, limited access to the private S3 object without exposing it publicly.
● CloudFormation can use this presigned URL to access the template securely during stack creation, satisfying both access control and security best practices.
Why other options are wrong:
A. A gateway VPC endpoint provides private S3 access for resources inside the VPC; it does not, by itself, give CloudFormation request-based access to the private template object.
B. Creating an API Gateway to proxy S3 access is unnecessarily complex and not cost-effective for a simple use case.
D. Allowing public access, even temporarily, violates security best practices.
Source: https://docs.aws.amazon.com/AmazonS3/latest/userguide/ShareObjectPreSignedURL.html
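For illustration, a boto3 sketch of generating a short-lived presigned URL for the template and launching the stack with it; the bucket, key, and stack names are placeholders:
import boto3

s3 = boto3.client("s3")
cfn = boto3.client("cloudformation")

# Presigned URL valid for 15 minutes; the bucket itself stays private
template_url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "private-templates-bucket", "Key": "test-env/template.yaml"},
    ExpiresIn=900,
)

# CloudFormation fetches the template through the temporary URL
cfn.create_stack(StackName="test-environment", TemplateURL=template_url)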
A company runs a website that stores images of historical events. Website users need the ability to search and view images based on the year that the event in the image occurred. On average, users request each image only once or twice a year. The company wants a highly available solution to store and deliver the images to users.
Which solution will meet these requirements MOST cost-effectively?
⬜ A. Store images in Amazon Elastic Block Store (Amazon EBS). Use a web server that runs on Amazon EC2.
⬜ B. Store images in Amazon Elastic File System (Amazon EFS). Use a web server that runs on Amazon EC2.
⬜ C. Store images in Amazon S3 Standard. Use S3 Standard to directly deliver images by using a static website.
✅ D. Store images in Amazon S3 Standard-Infrequent Access (S3 Standard-IA). Use S3 Standard-IA to directly deliver images by using a static website.
Explanation:
The most cost-effective solution for infrequently accessed images is Amazon S3 Standard-IA.
● S3 Standard-IA offers lower storage costs compared to S3 Standard, with slightly higher retrieval costs — ideal for objects accessed only a few times per year.
● Hosting the images using S3 static website hosting provides high availability without the need for managing servers.
Why other options are wrong:
A. EBS volumes are attached to EC2 instances and are not highly available across multiple Availability Zones without complex setups.
B. EFS is designed for frequent access, and is more expensive for infrequent access scenarios.
C. S3 Standard is highly available but more expensive than S3 Standard-IA when access frequency is low.
Source: https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html#sc-diff-ia
A company is planning to use an Amazon DynamoDB table for data storage. The company is concerned about cost optimization. The table will not be used on most mornings. In the evenings, the read and write traffic will often be unpredictable. When traffic spikes occur, they will happen very quickly.
What should a solutions architect recommend?
✅ A. Create a DynamoDB table in on-demand capacity mode.
⬜ B. Create a DynamoDB table with a global secondary index.
⬜ C. Create a DynamoDB table with provisioned capacity and auto scaling.
⬜ D. Create a DynamoDB table in provisioned capacity mode, and configure it as a global table.
Explanation:
The on-demand capacity mode for DynamoDB is ideal for:
● Unpredictable workloads with irregular traffic patterns.
● Automatic scaling to handle sudden traffic spikes without manual capacity planning.
● Cost optimization because you only pay for the reads and writes you use, without overprovisioning.
This makes on-demand mode the best fit for unpredictable, spike-prone, and cost-sensitive scenarios.
Why other options are wrong:
B. A global secondary index (GSI) helps in querying but does not address cost optimization or capacity scaling.
C. Provisioned capacity with auto scaling helps with some variability but may not scale fast enough for sudden spikes.
D. Global tables are for multi-Region replication, not for handling unpredictable traffic or cost optimization within a single Region.
Source: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html
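A minimal boto3 sketch of creating the table in on-demand capacity mode (BillingMode PAY_PER_REQUEST); the table and key names are placeholders:
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="evening-traffic-data",
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",  # no provisioned capacity to plan, scale, or overprovision
)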
A social media company wants to allow its users to upload images in an application that is hosted in the AWS Cloud. The company needs a solution that automatically resizes the images so that the images can be displayed on multiple device types. The application experiences unpredictable traffic patterns throughout the day. The company is seeking a highly available solution that maximizes scalability.
What should a solutions architect do to meet these requirements?
✅ A. Create a static website hosted in Amazon S3 that invokes AWS Lambda functions to resize the images and store the images in an Amazon S3 bucket.
⬜ B. Create a static website hosted in Amazon CloudFront that invokes AWS Step Functions to resize the images and store the images in an Amazon RDS database.
⬜ C. Create a dynamic website hosted on a web server that runs on an Amazon EC2 instance. Configure a process that runs on the EC2 instance to resize the images and store the images in an Amazon S3 bucket.
⬜ D. Create a dynamic website hosted on an automatically scaling Amazon Elastic Container Service (Amazon ECS) cluster that creates a resize job in Amazon Simple Queue Service (Amazon SQS). Set up an image-resizing program that runs on an Amazon EC2 instance to process the resize jobs.
Explanation:
The best solution is to use AWS Lambda with Amazon S3:
● Lambda is serverless, automatically scales to handle unpredictable traffic, and minimizes operational overhead.
● S3 provides highly durable, highly available storage for both the original and resized images.
● Triggering Lambda from S3 upload events ensures real-time, scalable, and efficient image processing without the need to manage servers.
Why other options are wrong:
B. Step Functions are meant for orchestrating workflows, not for directly processing image resizing tasks; storing images in RDS is not cost-effective or scalable for large objects.
C. A single EC2 instance does not automatically scale and introduces operational management overhead.
D. Using ECS and EC2 adds complexity and infrastructure management, and is unnecessary for a simple image resize task.
Source: https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html
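As a hedged sketch of the accepted pattern, an S3-triggered Lambda handler that resizes an uploaded image and writes it to a destination bucket; it assumes the Pillow library is packaged as a Lambda layer, and the destination bucket name is a placeholder:
import io
import boto3
from PIL import Image  # assumes Pillow is provided via a Lambda layer

s3 = boto3.client("s3")
DEST_BUCKET = "resized-images-bucket"  # placeholder

def handler(event, context):
    # The S3 upload event identifies the source bucket and object key
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    original = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    image = Image.open(io.BytesIO(original))
    image.thumbnail((800, 800))  # resize in place, preserving aspect ratio

    buffer = io.BytesIO()
    image.save(buffer, format="JPEG")
    buffer.seek(0)
    s3.put_object(Bucket=DEST_BUCKET, Key=f"800x800/{key}", Body=buffer)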
A company has hired a solutions architect to design a reliable architecture for its application. The application consists of one Amazon RDS DB instance and two manually provisioned Amazon EC2 instances that run web servers. The EC2 instances are located in a single Availability Zone.
An employee recently deleted the DB instance, and the application was unavailable for 24 hours as a result. The company is concerned with the overall reliability of its environment.
What should the solutions architect do to maximize reliability of the application’s infrastructure?
⬜ A. Delete one EC2 instance and enable termination protection on the other EC2 instance. Update the DB instance to be Multi-AZ, and enable deletion protection.
✅ B. Update the DB instance to be Multi-AZ, and enable deletion protection. Place the EC2 instances behind an Application Load Balancer, and run them in an EC2 Auto Scaling group across multiple Availability Zones.
⬜ C. Create an additional DB instance along with an Amazon API Gateway and an AWS Lambda function. Configure the application to invoke the Lambda function through API Gateway. Have the Lambda function write the data to the two DB instances.
⬜ D. Place the EC2 instances in an EC2 Auto Scaling group that has multiple subnets located in multiple Availability Zones. Use Spot Instances instead of On-Demand Instances. Set up Amazon CloudWatch alarms to monitor the health of the instances. Update the DB instance to be Multi-AZ, and enable deletion protection.
Explanation:
The most reliable solution is to:
● Enable Multi-AZ on the RDS DB instance to provide automatic failover and high availability.
● Enable deletion protection to prevent accidental deletion of the RDS DB instance.
● Deploy EC2 instances across multiple Availability Zones within an Auto Scaling group behind an Application Load Balancer to ensure web server high availability and fault tolerance.
This setup maximizes reliability across both compute and database layers.
Why other options are wrong:
A. Deleting one EC2 instance reduces redundancy; keeping only one instance does not improve reliability.
C. Adding Lambda/API Gateway is unnecessary and overcomplicates the architecture for a traditional web application.
D. Using Spot Instances is not reliable for steady workloads because Spot Instances can be interrupted at any time.
Source: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html
Source: https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html
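For context, a boto3 sketch of the database side of the accepted answer: converting the DB instance to Multi-AZ and turning on deletion protection; the DB instance identifier is a placeholder:
import boto3

rds = boto3.client("rds")

rds.modify_db_instance(
    DBInstanceIdentifier="crm-database",
    MultiAZ=True,              # synchronous standby in another Availability Zone with automatic failover
    DeletionProtection=True,   # blocks accidental deletion of the DB instance
    ApplyImmediately=True,
)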
A company has an application that processes customer orders. The company hosts the application on an Amazon EC2 instance that saves the orders to an Amazon Aurora database. Occasionally when traffic is high, the workload does not process orders fast enough.
What should a solutions architect do to write the orders reliably to the database as quickly as possible?
⬜ A. Increase the instance size of the EC2 instance when traffic is high. Write orders to Amazon Simple Notification Service (Amazon SNS). Subscribe the database endpoint to the SNS topic.
✅ B. Write orders to an Amazon Simple Queue Service (Amazon SQS) queue. Use EC2 instances in an Auto Scaling group behind an Application Load Balancer to read from the SQS queue and process orders into the database.
⬜ C. Write orders to Amazon Simple Notification Service (Amazon SNS). Subscribe the database endpoint to the SNS topic. Use EC2 instances in an Auto Scaling group behind an Application Load Balancer to read from the SNS topic.
⬜ D. Write orders to an Amazon Simple Queue Service (Amazon SQS) queue when the EC2 instance reaches CPU threshold limits. Use scheduled scaling of EC2 instances in an Auto Scaling group behind an Application Load Balancer to read from the SQS queue and process orders into the database.
Explanation:
The best approach is to decouple the application by writing orders into an SQS queue, then having EC2 instances in an Auto Scaling group behind an Application Load Balancer read from the queue and process the orders into Aurora.
● SQS allows for reliable buffering during traffic spikes.
● Auto Scaling allows the application tier to scale dynamically based on demand, ensuring fast and reliable processing.
Why other options are wrong:
A. SNS is a pub/sub service, not a queuing service; subscribing a database endpoint directly is not valid.
C. Similarly, SNS is designed for notification delivery, not buffered task processing.
D. Writing to SQS only when CPU thresholds are hit delays queuing and can cause orders to be lost or delayed during rapid spikes.
Source: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/welcome.html
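A minimal sketch of the decoupling pattern in the accepted answer: the front end writes each order to SQS, and worker instances poll the queue and write to the database; the queue URL and the process_order callback are placeholders:
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # placeholder

def enqueue_order(order: dict) -> None:
    # Producer: buffer the order durably instead of writing to the database directly
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(order))

def worker_loop(process_order) -> None:
    # Consumer: instances in the Auto Scaling group run this loop and scale with demand
    while True:
        messages = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10,
                                       WaitTimeSeconds=20)
        for message in messages.get("Messages", []):
            process_order(json.loads(message["Body"]))   # write the order to Aurora
            sqs.delete_message(QueueUrl=QUEUE_URL,
                               ReceiptHandle=message["ReceiptHandle"])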
A social media company runs its application on Amazon EC2 instances behind an Application Load Balancer (ALB). The ALB is the origin for an Amazon CloudFront distribution. The application has more than a billion images stored in an Amazon S3 bucket and processes thousands of images each second. The company wants to resize the images dynamically and serve appropriate formats to clients.
Which solution will meet these requirements with the LEAST operational overhead?
⬜ A. Install an external image management library on an EC2 instance. Use the image management library to process the images.
⬜ B. Create a CloudFront origin request policy. Use the policy to automatically resize images and to serve the appropriate format based on the User-Agent HTTP header in the request.
✅ C. Use a Lambda@Edge function with an external image management library. Associate the Lambda@Edge function with the CloudFront behaviors that serve the images.
⬜ D. Create a CloudFront response headers policy. Use the policy to automatically resize images and to serve the appropriate format based on the User-Agent HTTP header in the request.
Explanation:
The best solution with the least operational overhead is to use a Lambda@Edge function with CloudFront:
● Lambda@Edge allows serverless image processing very close to the client, reducing latency.
● You can dynamically resize and transform images based on the incoming request, such as adjusting based on the User-Agent header.
● No need to manage EC2 instances, load balancers, or manual backend processing, significantly reducing operational overhead.
Why other options are wrong:
A. Managing external libraries on EC2 instances requires infrastructure maintenance and scaling concerns.
B. A CloudFront origin request policy only controls which headers, cookies, and query strings are forwarded to the origin; it cannot resize or transform images.
D. A CloudFront response headers policy manages HTTP response headers, but it does not modify or resize images.
Source: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-at-the-edge.html
A company that uses AWS needs a solution to predict the resources needed for manufacturing processes each month. The solution must use historical values that are currently stored in an Amazon S3 bucket. The company has no machine learning (ML) experience and wants to use a managed service for the training and predictions.
Which combination of steps will meet these requirements? (Select TWO.)
⬜ A. Deploy an Amazon SageMaker model. Create a SageMaker endpoint for inference.
⬜ B. Use Amazon SageMaker to train a model by using the historical data in the S3 bucket.
⬜ C. Configure an AWS Lambda function with a function URL that uses Amazon SageMaker endpoints to create predictions based on the inputs.
✅ D. Configure an AWS Lambda function with a function URL that uses an Amazon Forecast predictor to create a prediction based on the inputs.
✅ E. Train an Amazon Forecast predictor by using the historical data in the S3 bucket.
Explanation:
The best solution, given no machine learning experience and a need for managed services, is to use Amazon Forecast:
● Amazon Forecast is a fully managed service that automates model building, training, and deploying forecasts based on time series data without requiring ML expertise.
● E: Train an Amazon Forecast predictor using historical data stored in S3.
● D: Use an AWS Lambda function that calls the Forecast predictor to generate forecasts dynamically based on new inputs.
Why other options are wrong:
A. SageMaker requires ML expertise for model development and tuning, which the company lacks.
B. As with option A, training a model in SageMaker still requires ML expertise to prepare data, choose algorithms, and tune the model.
C. Calling SageMaker endpoints from Lambda still depends on a custom-built ML model, which is unsuitable for a team without ML experience.
Source: https://docs.aws.amazon.com/forecast/latest/dg/what-is-forecast.html
A company offers a food delivery service that is growing rapidly. Because of the growth, the company’s order processing system is experiencing scaling problems during peak traffic hours. The current architecture includes the following:
● A group of Amazon EC2 instances that run in an Amazon EC2 Auto Scaling group to collect orders from the application.
● Another group of EC2 instances that run in an Amazon EC2 Auto Scaling group to fulfill orders.
The order collection process occurs quickly, but the order fulfillment process can take longer. Data must not be lost because of a scaling event.
A solutions architect must ensure that the order collection process and the order fulfillment process can both scale properly during peak traffic hours. The solution must optimize utilization of the company’s AWS resources.
Which solution meets these requirements?
⬜ A. Use Amazon CloudWatch metrics to monitor the CPU of each instance in the Auto Scaling groups. Configure each Auto Scaling group’s minimum capacity according to peak workload values.
⬜ B. Use Amazon CloudWatch metrics to monitor the CPU of each instance in the Auto Scaling groups. Configure a CloudWatch alarm to invoke an Amazon Simple Notification Service (Amazon SNS) topic that creates additional Auto Scaling groups on demand.
⬜ C. Provision two Amazon Simple Queue Service (Amazon SQS) queues: one for order collection and another for order fulfillment. Configure the EC2 instances to poll their respective queue. Scale the Auto Scaling groups based on notifications that the queues send.
✅ D. Provision two Amazon Simple Queue Service (Amazon SQS) queues: one for order collection and another for order fulfillment. Configure the EC2 instances to poll their respective queue. Create a metric based on a backlog per instance calculation. Scale the Auto Scaling groups based on this metric.
Explanation:
The most efficient solution is to:
● Use Amazon SQS queues to decouple order collection and order fulfillment, ensuring no data is lost if instances are scaled in or out.
● Use a backlog per instance metric (number of queued messages divided by the number of running instances) to dynamically scale EC2 instances in the Auto Scaling groups.
This approach optimizes resource utilization by scaling based on the actual workload rather than CPU utilization or manual capacity settings.
Why other options are wrong:
A. Setting minimum capacity based on peak workloads leads to overprovisioning and inefficient resource use during normal hours.
B. Creating new Auto Scaling groups dynamically is complex and unnecessary when scaling can be done within existing groups.
C. Scaling based only on SQS notifications is less precise compared to using a calculated backlog per instance metric.
Source: https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-using-sqs-queue.html
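To make the backlog-per-instance idea concrete, a hedged boto3 sketch that divides queued messages by running instances and publishes the result as a custom CloudWatch metric for a target tracking policy; the queue URL, Auto Scaling group name, and metric namespace are placeholders:
import boto3

sqs = boto3.client("sqs")
autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/order-fulfillment"  # placeholder
ASG_NAME = "order-fulfillment-asg"                                                # placeholder

# Messages waiting in the fulfillment queue
attrs = sqs.get_queue_attributes(QueueUrl=QUEUE_URL,
                                 AttributeNames=["ApproximateNumberOfMessages"])
backlog = int(attrs["Attributes"]["ApproximateNumberOfMessages"])

# Instances currently in the Auto Scaling group
asg = autoscaling.describe_auto_scaling_groups(AutoScalingGroupNames=[ASG_NAME])
instances = max(len(asg["AutoScalingGroups"][0]["Instances"]), 1)

# Publish backlog per instance; a target tracking scaling policy scales on this metric
cloudwatch.put_metric_data(
    Namespace="OrderProcessing",
    MetricData=[{"MetricName": "BacklogPerInstance",
                 "Dimensions": [{"Name": "AutoScalingGroupName", "Value": ASG_NAME}],
                 "Value": backlog / instances, "Unit": "Count"}],
)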
A company has a three-tier environment on AWS that ingests sensor data from its users’ devices. The traffic flows through a Network Load Balancer (NLB), then to Amazon EC2 instances for the web tier, and finally to EC2 instances for the application tier that makes database calls.
What should a solutions architect do to improve the security of data in transit to the web tier?
✅ A. Configure a TLS listener and add the server certificate on the NLB.
⬜ B. Configure AWS Shield Advanced and enable AWS WAF on the NLB.
⬜ C. Change the load balancer to an Application Load Balancer and attach AWS WAF to it.
⬜ D. Encrypt the Amazon Elastic Block Store (Amazon EBS) volume on the EC2 instances using AWS Key Management Service (AWS KMS).
Explanation:
The best way to improve security of data in transit is to encrypt traffic between clients and the web tier using TLS.
● Network Load Balancers support TLS termination by configuring a TLS listener and attaching a server certificate to handle secure connections.
● This ensures that all data transmitted over the network to the web servers is encrypted in transit.
Why other options are wrong:
B. AWS Shield Advanced and AWS WAF protect against DDoS and application-layer attacks, not specifically encryption of data in transit.
C. Switching to an Application Load Balancer is unnecessary just for encryption; NLB can also handle TLS termination.
D. Encrypting EBS volumes protects data at rest, not data in transit.
Source: https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-tls-listener.html
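A boto3 sketch of adding a TLS listener with a server certificate to the NLB, as in the accepted answer; the load balancer ARN, certificate ARN, and target group ARN are placeholders:
import boto3

elbv2 = boto3.client("elbv2")

elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/sensor-nlb/1234567890abcdef",
    Protocol="TLS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/abcd-1234"}],
    DefaultActions=[{"Type": "forward",
                     "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web-tier/abcdef1234567890"}],
)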
A company plans to use Amazon ElastiCache for its multi-tier web application. A solutions architect creates a Cache VPC for the ElastiCache cluster and an App VPC for the application’s Amazon EC2 instances. Both VPCs are in the us-east-1 Region.
The solutions architect must implement a solution to provide the application’s EC2 instances with access to the ElastiCache cluster.
Which solution will meet these requirements MOST cost-effectively?
✅ A. Create a peering connection between the VPCs. Add a route table entry for the peering connection in both VPCs. Configure an inbound rule for the ElastiCache cluster’s security group to allow inbound connection from the application’s security group.
⬜ B. Create a Transit VPC. Update the VPC route tables in the Cache VPC and the App VPC to route traffic through the Transit VPC. Configure an inbound rule for the ElastiCache cluster’s security group to allow inbound connection from the application’s security group.
⬜ C. Create a peering connection between the VPCs. Add a route table entry for the peering connection in both VPCs. Configure an inbound rule for the peering connection’s security group to allow inbound connection from the application’s security group.
⬜ D. Create a Transit VPC. Update the VPC route tables in the Cache VPC and the App VPC to route traffic through the Transit VPC. Configure an inbound rule for the Transit VPC’s security group to allow inbound connection from the application’s security group.
Explanation:
The most cost-effective and simple solution is to use a VPC peering connection:
● VPC peering allows direct network connectivity between two VPCs in the same Region without the need for additional network appliances like Transit VPCs.
● Update the route tables for both VPCs to route traffic through the peering connection.
● Adjust the ElastiCache security group to allow inbound traffic from the application’s security group.
This approach avoids the complexity and cost associated with Transit VPC architectures.
Why other options are wrong:
B. Transit VPCs are more expensive and unnecessarily complex for two VPCs within the same Region.
C. Security groups are attached to resources like EC2 or ElastiCache, not to peering connections themselves.
D. Same as B — Transit VPCs are not needed here and introduce avoidable operational and cost overhead.
Source: https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html
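For illustration, a boto3 sketch of the accepted answer: create and accept the peering connection, add routes in both VPCs, and allow the application security group into the cache security group; all IDs and CIDR ranges are placeholders:
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Peer the Cache VPC and the App VPC (both in us-east-1)
peering = ec2.create_vpc_peering_connection(VpcId="vpc-cache1111", PeerVpcId="vpc-app2222")
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# 2. Route each VPC's traffic for the other VPC's CIDR through the peering connection
ec2.create_route(RouteTableId="rtb-cache1111", DestinationCidrBlock="10.20.0.0/16",
                 VpcPeeringConnectionId=pcx_id)
ec2.create_route(RouteTableId="rtb-app2222", DestinationCidrBlock="10.10.0.0/16",
                 VpcPeeringConnectionId=pcx_id)

# 3. Allow the app tier's security group to reach the ElastiCache (Redis) port
ec2.authorize_security_group_ingress(
    GroupId="sg-cache1111",
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 6379, "ToPort": 6379,
                    "UserIdGroupPairs": [{"GroupId": "sg-app2222"}]}],
)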
A company has deployed its application on Amazon EC2 instances with an Amazon RDS database. The company used the principle of least privilege to configure the database access credentials. The company’s security team wants to protect the application and the database from SQL injection and other web-based attacks.
Which solution will meet these requirements with the LEAST operational overhead?
⬜ A. Use security groups and network ACLs to secure the database and application servers.
✅ B. Use AWS WAF to protect the application. Use RDS parameter groups to configure the security settings.
⬜ C. Use AWS Network Firewall to protect the application and the database.
⬜ D. Use different database accounts in the application code for different functions. Avoid granting excessive privileges to the database users.
Explanation:
The best solution with the least operational overhead is to use AWS WAF to protect the application:
● AWS WAF can automatically detect and block SQL injection and other common web-based attacks without needing to manage infrastructure manually.
● Additionally, RDS parameter groups help fine-tune the database’s security settings, such as enforcing SSL connections.
This approach leverages managed services to improve security while keeping operational burden low.
Why other options are wrong:
A. Security groups and network ACLs control network traffic, not application-layer attacks like SQL injection.
C. AWS Network Firewall is mainly used for network-level traffic filtering, not specifically web application-level attacks.
D. Using different database accounts improves database security but does not protect against web-based threats like SQL injection.
Source: https://docs.aws.amazon.com/waf/latest/developerguide/waf-chapter.html
A solutions architect is designing the architecture of a new application being deployed to the AWS Cloud. The application will run on Amazon EC2 On-Demand Instances and will automatically scale across multiple Availability Zones. The EC2 instances will scale up and down frequently throughout the day. An Application Load Balancer (ALB) will handle the load distribution. The architecture needs to support distributed session data management. The company is willing to make changes to code if needed.
What should the solutions architect do to ensure that the architecture supports distributed session data management?
✅ A. Use Amazon ElastiCache to manage and store session data.
⬜ B. Use session affinity (sticky sessions) of the ALB to manage session data.
⬜ C. Use Session Manager from AWS Systems Manager to manage the session.
⬜ D. Use the GetSessionToken API operation in AWS Security Token Service (AWS STS) to manage the session.
Explanation:
The most scalable and reliable way to handle distributed session data is to use Amazon ElastiCache (such as Redis or Memcached) to store session state centrally:
● ElastiCache allows any EC2 instance to read/write session data regardless of which instance handled the original request.
● This approach supports scaling across Availability Zones and frequent instance scale-in/scale-out without losing session data.
Why other options are wrong:
B. Sticky sessions tie users to specific instances, which does not scale well when instances are replaced frequently.
C. Session Manager is for secure shell access to EC2 instances, not for application session management.
D. AWS STS GetSessionToken provides temporary security credentials, not web application session data management.
Source: https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/ManagingSessions.Redis.html
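A minimal sketch of storing session state in ElastiCache for Redis so that any instance can serve any request; it assumes the redis-py client, a placeholder cluster endpoint, and a one-hour session TTL:
import json
import uuid
import redis  # assumes the redis-py client is installed on the instances

# Placeholder ElastiCache for Redis endpoint; TLS enabled for in-transit encryption
sessions = redis.Redis(host="my-cache.abc123.use1.cache.amazonaws.com", port=6379, ssl=True)

def create_session(user_id: str) -> str:
    session_id = str(uuid.uuid4())
    # Any EC2 instance behind the ALB can read or refresh this entry
    sessions.setex(f"session:{session_id}", 3600, json.dumps({"user_id": user_id}))
    return session_id

def get_session(session_id: str):
    data = sessions.get(f"session:{session_id}")
    return json.loads(data) if data else None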
A company runs a high performance computing (HPC) workload on AWS. The workload requires low-latency network performance and high network throughput with tightly coupled node-to-node communication. The Amazon EC2 instances are properly sized for compute and storage capacity, and are launched using default options.
What should a solutions architect propose to improve the performance of the workload?
✅ A. Choose a cluster placement group while launching Amazon EC2 instances.
⬜ B. Choose dedicated instance tenancy while launching Amazon EC2 instances.
⬜ C. Choose an Elastic Inference accelerator while launching Amazon EC2 instances.
⬜ D. Choose the required capacity reservation while launching Amazon EC2 instances.
Explanation:
For high performance computing (HPC) workloads requiring low-latency, high-throughput, tightly coupled node-to-node communication, the best solution is to launch EC2 instances in a cluster placement group:
● Cluster placement groups place instances physically close together inside a single Availability Zone, optimizing for high network performance.
● This reduces network latency and increases network throughput between instances.
Why other options are wrong:
B. Dedicated tenancy provides isolated hardware but does not optimize network latency or throughput.
C. Elastic Inference accelerators are used to boost machine learning inference performance, not general HPC networking performance.
D. Capacity reservations ensure instance availability but do not affect network performance.
Source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
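A boto3 sketch of the accepted answer: create a cluster placement group and launch the HPC nodes into it; the instance type, AMI ID, and instance count are placeholders:
import boto3

ec2 = boto3.client("ec2")

ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

# Launch the tightly coupled nodes into the placement group for low-latency, high-throughput networking
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5n.18xlarge",
    MinCount=8,
    MaxCount=8,
    Placement={"GroupName": "hpc-cluster"},
)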
A company has a Microsoft .NET application that runs on an on-premises Windows Server. The application stores data by using an Oracle Database Standard Edition server. The company is planning a migration to AWS and wants to minimize development changes while moving the application. The AWS application environment should be highly available.
Which combination of actions should the company take to meet these requirements? (Select TWO.)
⬜ A. Refactor the application as serverless with AWS Lambda functions running .NET Core.
✅ B. Rehost the application in AWS Elastic Beanstalk with the .NET platform in a Multi-AZ deployment.
⬜ C. Replatform the application to run on Amazon EC2 with the Amazon Linux Amazon Machine Image (AMI).
⬜ D. Use AWS Database Migration Service (AWS DMS) to migrate from the Oracle database to Amazon DynamoDB in a Multi-AZ deployment.
✅ E. Use AWS Database Migration Service (AWS DMS) to migrate from the Oracle database to Oracle on Amazon RDS in a Multi-AZ deployment.
Explanation:
The company wants to minimize development changes, so the best approach is:
● B. Rehost the application on Elastic Beanstalk using the .NET platform. Elastic Beanstalk supports Windows-based applications and automatically handles Multi-AZ deployments for high availability.
● E. Use AWS DMS to migrate from Oracle to Oracle on Amazon RDS, allowing the company to continue using Oracle without needing to change the application’s database logic. RDS also offers Multi-AZ deployments for high availability.
Why other options are wrong:
A. Refactoring into serverless with Lambda would require major development changes (contrary to the requirement).
C. Replatforming to EC2 with Amazon Linux would require OS and platform changes, increasing development and migration efforts.
D. Migrating from Oracle to DynamoDB would require significant schema and application logic changes, not minimal changes.
Source: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_NET.html
Source: https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html
A company’s application integrates with multiple software-as-a-service (SaaS) sources for data collection. The company runs Amazon EC2 instances to receive the data and to upload the data to an Amazon S3 bucket for analysis. The same EC2 instance that receives and uploads the data also sends a notification to the user when an upload is complete. The company has noticed slow application performance and wants to improve the performance as much as possible.
Which solution will meet these requirements with the LEAST operational overhead?
⬜ A. Create an Auto Scaling group so that EC2 instances can scale out. Configure an S3 event notification to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.
✅ B. Create an Amazon AppFlow flow to transfer data between each SaaS source and the S3 bucket. Configure an S3 event notification to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.
⬜ C. Create an Amazon EventBridge (Amazon CloudWatch Events) rule for each SaaS source to send output data. Configure the S3 bucket as the rule’s target. Create a second EventBridge (CloudWatch Events) rule to send events when the upload to the S3 bucket is complete. Configure an Amazon Simple Notification Service (Amazon SNS) topic as the second rule’s target.
⬜ D. Create a Docker container to use instead of an EC2 instance. Host the containerized application on Amazon Elastic Container Service (Amazon ECS). Configure Amazon CloudWatch Container Insights to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.
Explanation:
The best solution with the least operational overhead is to use Amazon AppFlow:
● Amazon AppFlow is a fully managed integration service that allows you to transfer data securely and directly from SaaS applications into AWS services like S3 without running any servers.
● Combined with S3 event notifications to SNS for user notifications after uploads, this completely removes the need for EC2 instance management, improving performance and reliability.
Why other options are wrong:
A. Using Auto Scaling EC2 instances still requires managing servers, which increases operational overhead.
C. EventBridge rules for direct data ingestion from SaaS sources are complex to set up and not the intended use case.
D. Containerizing with ECS reduces EC2 overhead but still requires managing containers and scaling policies, making it more complex than a managed service like AppFlow.
Source: https://docs.aws.amazon.com/appflow/latest/userguide/what-is-appflow.html
A rapidly growing ecommerce company is running its workloads in a single AWS Region. A solutions architect must create a disaster recovery (DR) strategy that includes a different AWS Region. The company wants its database to be up to date in the DR Region with the least possible latency. The remaining infrastructure in the DR Region needs to run at reduced capacity and must be able to scale up if necessary.
Which solution will meet these requirements with the LOWEST recovery time objective (RTO)?
⬜ A. Use an Amazon Aurora global database with a pilot light deployment.
✅ B. Use an Amazon Aurora global database with a warm standby deployment.
⬜ C. Use an Amazon RDS Multi-AZ DB instance with a pilot light deployment.
⬜ D. Use an Amazon RDS Multi-AZ DB instance with a warm standby deployment.
Explanation:
The best solution for lowest RTO and up-to-date data with minimal latency is to use an Amazon Aurora global database with a warm standby deployment:
● Aurora global databases are designed for low-latency cross-Region replication (typically less than 1 second lag).
● A warm standby means that the DR environment is running at reduced capacity but can scale up quickly when needed, providing a much faster recovery compared to cold or pilot light strategies.
Why other options are wrong:
A. A pilot light approach keeps minimal services running, but warm standby provides faster recovery (lower RTO).
C. RDS Multi-AZ operates only within a single Region, not across multiple Regions.
D. RDS Multi-AZ still remains in one Region; it does not support cross-Region disaster recovery natively.
Source: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-databases.html
A company hosts its application on AWS. The company uses Amazon Cognito to manage users. When users log in to the application, the application fetches required data from Amazon DynamoDB by using a REST API that is hosted in Amazon API Gateway. The company wants an AWS managed solution that will control access to the REST API to reduce development efforts.
Which solution will meet these requirements with the LEAST operational overhead?
⬜ A. Configure an AWS Lambda function to be an authorizer in API Gateway to validate which user made the request.
⬜ B. For each user, create and assign an API key that must be sent with each request. Validate the key by using an AWS Lambda function.
⬜ C. Send the user’s email address in the header with every request. Invoke an AWS Lambda function to validate that the user with that email address has proper access.
✅ D. Configure an Amazon Cognito user pool authorizer in API Gateway to allow Amazon Cognito to validate each request.
Explanation:
The most efficient, AWS managed solution with least operational overhead is to use an Amazon Cognito user pool authorizer directly in API Gateway:
● API Gateway can natively integrate with Cognito to authenticate and authorize API requests.
● No need to write custom Lambda functions for validation, reducing development and operational burden.
● Cognito will automatically handle token validation and user authentication with minimal configuration.
Why other options are wrong:
A. Lambda authorizers require custom coding and management, increasing operational overhead.
B. API keys are primarily used for usage tracking and throttling, not authentication or fine-grained access control.
C. Sending user information in headers and validating manually requires custom logic, making it less secure and more complex.
Source: https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-integrate-with-cognito.html
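For context, a hedged boto3 sketch of attaching a Cognito user pool authorizer to the REST API; the API ID and user pool ARN are placeholders, and each method is then configured to use this authorizer:
import boto3

apigateway = boto3.client("apigateway")

authorizer = apigateway.create_authorizer(
    restApiId="a1b2c3d4e5",  # placeholder REST API ID
    name="cognito-user-pool-authorizer",
    type="COGNITO_USER_POOLS",
    providerARNs=["arn:aws:cognito-idp:us-east-1:123456789012:userpool/us-east-1_EXAMPLE"],
    identitySource="method.request.header.Authorization",
)
print("Authorizer ID:", authorizer["id"])
# Each API method is then set to authorizationType COGNITO_USER_POOLS with this
# authorizer ID, so API Gateway validates the Cognito tokens without custom code.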
A new employee has joined a company as a deployment engineer. The deployment engineer will be using AWS CloudFormation templates to create multiple AWS resources. A solutions architect wants the deployment engineer to perform job activities while following the principle of least privilege.
Which combination of steps should the solutions architect take to reach this goal? (Select TWO.)
⬜ A. Have the deployment engineer use AWS account root user credentials for performing AWS CloudFormation stack operations.
⬜ B. Create a new IAM user for the deployment engineer and add the IAM user to a group that has the PowerUsers IAM policy attached.
⬜ C. Create a new IAM user for the deployment engineer and add the IAM user to a group that has the AdministratorAccess IAM policy attached.
✅ D. Create a new IAM user for the deployment engineer and add the IAM user to a group that has an IAM policy that allows AWS CloudFormation actions only.
✅ E. Create an IAM role for the deployment engineer to explicitly define the permissions specific to the AWS CloudFormation stack and launch stacks using that IAM role.
Explanation:
To follow the principle of least privilege:
● D. Create an IAM user and assign permissions only for CloudFormation actions needed for their role, not broad permissions like Administrator or PowerUser.
● E. Creating an IAM role with specific permissions to launch CloudFormation stacks provides controlled and auditable access, ensuring tight privilege boundaries.
Why other options are wrong:
A. Never use the AWS root account for daily activities; it’s reserved for account and security administration only.
B. The PowerUserAccess policy grants nearly administrative access (everything except IAM and account management), far more than the deployment role requires.
C. AdministratorAccess grants full access to all AWS services and resources, which violates least privilege principles.
Source: https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html
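To illustrate options D and E, a hedged sketch of a narrowly scoped policy that allows only CloudFormation stack operations, created and attached to a deployment group; the group name, policy name, and action list are illustrative, and a real policy would also scope Resource ARNs and the permissions the stacks themselves need:
import json
import boto3

iam = boto3.client("iam")

deploy_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "cloudformation:CreateStack",
            "cloudformation:UpdateStack",
            "cloudformation:DeleteStack",
            "cloudformation:DescribeStacks",
            "cloudformation:DescribeStackEvents",
        ],
        "Resource": "*",
    }],
}

policy = iam.create_policy(PolicyName="cloudformation-deploy-only",
                           PolicyDocument=json.dumps(deploy_policy))
iam.attach_group_policy(GroupName="deployment-engineers",
                        PolicyArn=policy["Policy"]["Arn"])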
A telemarketing company is designing its customer call center functionality on AWS. The company needs a solution that provides multiple speaker recognition and generates transcript files. The company wants to query the transcript files to analyze the business patterns. The transcript files must be stored for 7 years for auditing purposes.
Which solution will meet these requirements?
⬜ A. Use Amazon Rekognition for multiple speaker recognition. Store the transcript files in Amazon S3. Use machine learning models for transcript file analysis.
✅ B. Use Amazon Transcribe for multiple speaker recognition. Use Amazon Athena for transcript file analysis.
⬜ C. Use Amazon Translate for multiple speaker recognition. Store the transcript files in Amazon Redshift. Use SQL queries for transcript file analysis.
⬜ D. Use Amazon Rekognition for multiple speaker recognition. Store the transcript files in Amazon S3. Use Amazon Textract for transcript file analysis.
Explanation:
The best solution is to use Amazon Transcribe:
● Amazon Transcribe is specifically designed for audio transcription and supports multiple speaker recognition (speaker diarization).
● Amazon Athena can query the transcript files stored in Amazon S3 directly using SQL queries, without the need for complex ETL processes.
● This combination meets the need for easy analysis, scalability, and 7 years of cost-effective S3 storage for compliance.
Why other options are wrong:
A. Amazon Rekognition is used for image and video analysis, not audio transcription.
C. Amazon Translate is for language translation, not speaker recognition or audio transcription.
D. Amazon Textract is used for extracting text from documents, not analyzing audio transcriptions.
Source: https://docs.aws.amazon.com/transcribe/latest/dg/what-is-transcribe.html
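A boto3 sketch of starting a transcription job with speaker diarization enabled and the output written to S3, where Athena can query it; the job name, media location, output bucket, and speaker count are placeholders:
import boto3

transcribe = boto3.client("transcribe")

transcribe.start_transcription_job(
    TranscriptionJobName="call-2024-01-15-0001",
    Media={"MediaFileUri": "s3://call-recordings-bucket/2024/01/15/call-0001.wav"},
    MediaFormat="wav",
    LanguageCode="en-US",
    OutputBucketName="call-transcripts-bucket",  # transcripts land in S3 for Athena to query
    Settings={"ShowSpeakerLabels": True, "MaxSpeakerLabels": 2},  # multiple speaker recognition
)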
An ecommerce company has an order-processing application that uses Amazon API Gateway and an AWS Lambda function. The application stores data in an Amazon Aurora PostgreSQL database. During a recent sales event, a sudden surge in customer orders occurred. Some customers experienced timeouts and the application did not process the orders of those customers. A solutions architect determined that the CPU utilization and memory utilization were high on the database because of a large number of open connections. The solutions architect needs to prevent the timeout errors while making the least possible changes to the application.
Which solution will meet these requirements?
⬜ A. Configure provisioned concurrency for the Lambda function. Modify the database to be a global database in multiple AWS Regions.
✅ B. Use Amazon RDS Proxy to create a proxy for the database. Modify the Lambda function to use the RDS Proxy endpoint instead of the database endpoint.
⬜ C. Create a read replica for the database in a different AWS Region. Use query string parameters in API Gateway to route traffic to the read replica.
⬜ D. Migrate the data from Aurora PostgreSQL to Amazon DynamoDB by using AWS Database Migration Service (AWS DMS). Modify the Lambda function to use the DynamoDB table.
Explanation:
The best solution with the least changes to the application is to use Amazon RDS Proxy:
● RDS Proxy manages database connections efficiently, pooling and reusing connections instead of opening a new one for every Lambda invocation.
● This reduces CPU and memory load on the database, preventing timeout errors during sudden traffic spikes.
● Updating the Lambda function to connect to the RDS Proxy endpoint instead of the direct database endpoint is a minimal code change.
Why other options are wrong:
A. Provisioned concurrency improves Lambda cold starts but does not address database connection overload.
C. Read replicas are for read scaling only, and routing traffic manually via query strings is not practical for this use case.
D. Migrating to DynamoDB involves major application changes, which contradicts the requirement for minimal changes.
Source: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/rds-proxy.html
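A minimal sketch of the Lambda-side change is shown below. It assumes the function already uses a PostgreSQL driver (psycopg2 via a Lambda layer in this example) and reads its settings from environment variables; the proxy endpoint, table, and field names are placeholders. The only change from the original code is that the host is now the RDS Proxy endpoint instead of the Aurora cluster endpoint.

import os
import psycopg2  # provided to the function via a Lambda layer in this sketch

# The only change from the original function: RDS_PROXY_ENDPOINT replaces the
# Aurora cluster endpoint. Connection pooling now happens inside RDS Proxy.
conn_params = {
    "host": os.environ["RDS_PROXY_ENDPOINT"],   # e.g. orders-proxy.proxy-abc123.us-east-1.rds.amazonaws.com (hypothetical)
    "port": 5432,
    "dbname": os.environ["DB_NAME"],
    "user": os.environ["DB_USER"],
    "password": os.environ["DB_PASSWORD"],
}

def handler(event, context):
    # Open a short-lived connection per invocation; RDS Proxy multiplexes these
    # onto a small pool of database connections, protecting Aurora during spikes.
    with psycopg2.connect(**conn_params) as conn:
        with conn.cursor() as cur:
            cur.execute(
                "INSERT INTO orders (order_id, payload) VALUES (%s, %s)",
                (event["orderId"], event["payload"]),
            )
    return {"status": "accepted"}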
A company’s application is having performance issues. The application is stateful and needs to complete in-memory tasks on Amazon EC2 instances. The company used AWS CloudFormation to deploy infrastructure and used the M5 EC2 instance family. As traffic increased, the application performance degraded. Users are reporting delays when the users attempt to access the application.
Which solution will resolve these issues in the MOST operationally efficient way?
⬜ A. Replace the EC2 instances with T3 EC2 instances that run in an Auto Scaling group. Make the changes by using the AWS Management Console.
⬜ B. Modify the CloudFormation templates to run the EC2 instances in an Auto Scaling group. Increase the desired capacity and the maximum capacity of the Auto Scaling group manually when an increase is necessary.
✅ C. Modify the CloudFormation templates. Replace the EC2 instances with R5 EC2 instances. Use Amazon CloudWatch built-in EC2 memory metrics to track the application performance for future capacity planning.
⬜ D. Modify the CloudFormation templates. Replace the EC2 instances with R5 EC2 instances. Deploy the Amazon CloudWatch agent on the EC2 instances to generate custom application latency metrics for future capacity planning.
Explanation:
The application is stateful and performs in-memory operations, meaning memory-optimized instances are a better fit.
● C. Replacing M5 instances with R5 memory-optimized instances improves performance for memory-intensive workloads.
● Using built-in CloudWatch memory metrics ensures operational efficiency without needing extra agents for monitoring, making future capacity planning easier.
Why other options are wrong:
A. T3 instances are burstable and not designed for consistent high memory workloads.
B. Auto Scaling groups are helpful for stateless applications; scaling stateful apps is complex and not addressed just by adding instances.
D. Deploying the CloudWatch agent introduces additional management overhead, whereas built-in metrics are easier and faster to use.
Source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html#AvailableInstanceTypes
A company is running several business applications in three separate VPCs within the us-east-1 Region. The applications must be able to communicate between VPCs. The applications also must be able to consistently send hundreds of gigabytes of data each day to a latency-sensitive application that runs in a single on-premises data center.
A solutions architect needs to design a network connectivity solution that maximizes cost-effectiveness.
Which solution meets these requirements?
⬜ A. Configure three AWS Site-to-Site VPN connections from the data center to AWS. Establish connectivity by configuring one VPN connection for each VPC.
⬜ B. Launch a third-party virtual network appliance in each VPC. Establish an IPsec VPN tunnel between the data center and each virtual appliance.
⬜ C. Set up three AWS Direct Connect connections from the data center to a Direct Connect gateway in us-east-1. Establish connectivity by configuring each VPC to use one of the Direct Connect connections.
✅ D. Set up one AWS Direct Connect connection from the data center to AWS. Create a transit gateway, and attach each VPC to the transit gateway. Establish connectivity between the Direct Connect connection and the transit gateway.
Explanation:
The best solution for high throughput, low latency, and cost-effectiveness is to:
● D. Use a single AWS Direct Connect connection (cost-effective for large data volumes) and a Transit Gateway to connect multiple VPCs and the on-premises data center.
● Transit Gateway allows easy and scalable VPC-to-VPC communication and efficient on-premises connectivity through the Direct Connect link.
Why other options are wrong:
A. Site-to-Site VPNs are cheaper but introduce higher latency and lower throughput compared to Direct Connect — not ideal for latency-sensitive, high-data-volume applications.
B. Using third-party appliances adds complexity and extra cost without solving the primary throughput/latency requirement efficiently.
C. Setting up three separate Direct Connect connections is unnecessary and expensive when one connection and a Transit Gateway can achieve the same result.
Source: https://docs.aws.amazon.com/directconnect/latest/UserGuide/direct-connect-gateways.html
Source: https://docs.aws.amazon.com/vpc/latest/tgw/what-is-transit-gateway.html
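As a sketch of the transit gateway side of this design, the boto3 calls below create one transit gateway and attach the three VPCs to it; the VPC and subnet IDs are placeholders. The Direct Connect side (a transit virtual interface associated with a Direct Connect gateway, which is then associated with the transit gateway) is provisioned separately.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create one transit gateway to act as the regional hub.
tgw = ec2.create_transit_gateway(Description="hub for app VPCs and on-premises traffic")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Attach each of the three application VPCs (IDs and subnet IDs are placeholders).
vpc_attachments = {
    "vpc-0aaa1111": ["subnet-0aaa1111"],
    "vpc-0bbb2222": ["subnet-0bbb2222"],
    "vpc-0ccc3333": ["subnet-0ccc3333"],
}
for vpc_id, subnet_ids in vpc_attachments.items():
    ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw_id,
        VpcId=vpc_id,
        SubnetIds=subnet_ids,
    )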
A company wants to measure the effectiveness of its recent marketing campaigns. The company performs batch processing on CSV files of sales data and stores the results in an Amazon S3 bucket once every hour. The S3 bucket contains petabytes of objects. The company runs one-time queries in Amazon Athena to determine which products are most popular on a particular date for a particular region. Queries sometimes fail or take longer than expected to finish.
Which actions should a solutions architect take to improve the query performance and reliability? (Select TWO.)
⬜ A. Reduce the S3 object sizes to less than 126 MB.
✅ B. Partition the data by date and region in Amazon S3.
⬜ C. Store the files as large, single objects in Amazon S3.
⬜ D. Use Amazon Kinesis Data Analytics to run the queries as part of the batch processing operation.
✅ E. Use an AWS Glue extract, transform, and load (ETL) process to convert the CSV files into Apache Parquet format.
Explanation:
To improve Athena query performance and reliability:
● B. Partitioning the data by date and region reduces the amount of data scanned per query, significantly improving speed and reducing costs.
● E. Converting CSV files to Parquet (a columnar format) reduces storage size and increases query efficiency because Athena can read only the relevant columns needed for a query.
Why other options are wrong:
A. Reducing object size below 126 MB could cause too many small files, leading to performance degradation in Athena.
C. Large, unpartitioned files can slow down queries and increase scan costs.
D. Amazon Kinesis Data Analytics is for real-time stream processing, not for optimizing batch query performance on static S3 datasets.
Source: https://docs.aws.amazon.com/athena/latest/ug/optimizing-queries.html
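One way to apply both improvements is an Athena CTAS statement that rewrites the CSV data as Parquet partitioned by date and region (an AWS Glue ETL job is the other common approach). The sketch below assumes a source table named sales_csv already exists in the Glue Data Catalog; the bucket, database, and column names are placeholders.

import boto3

athena = boto3.client("athena")

# CTAS: rewrite the raw CSV table as Parquet, partitioned by date and region.
# In a CTAS statement the partition columns must be the last columns selected.
ctas = """
CREATE TABLE sales_parquet
WITH (
    format = 'PARQUET',
    external_location = 's3://example-sales-curated/parquet/',
    partitioned_by = ARRAY['sale_date', 'region']
) AS
SELECT product_id, quantity, revenue, sale_date, region
FROM sales_csv
"""

athena.start_query_execution(
    QueryString=ctas,
    QueryExecutionContext={"Database": "marketing"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)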
A company runs a global web application on Amazon EC2 instances behind an Application Load Balancer. The application stores data in Amazon Aurora. The company needs to create a disaster recovery solution and can tolerate up to 30 minutes of downtime and potential data loss. The solution does not need to handle the load when the primary infrastructure is healthy.
What should a solutions architect do to meet these requirements?
✅ A. Deploy the application with the required infrastructure elements in place. Use Amazon Route 53 to configure active-passive failover. Create an Aurora Replica in a second AWS Region.
⬜ B. Host a scaled-down deployment of the application in a second AWS Region. Use Amazon Route 53 to configure active-active failover. Create an Aurora Replica in the second Region.
⬜ C. Replicate the primary infrastructure in a second AWS Region. Use Amazon Route 53 to configure active-active failover. Create an Aurora database that is restored from the latest snapshot.
⬜ D. Back up data with AWS Backup. Use the backup to create the required infrastructure in a second AWS Region. Use Amazon Route 53 to configure active-passive failover. Create an Aurora second primary instance in the second Region.
Explanation:
The best solution for 30 minutes of acceptable downtime and some data loss tolerance is an active-passive disaster recovery setup:
● A. Set up the necessary infrastructure in a second Region but keep it passive (inactive) unless a failover occurs.
● Use Route 53 active-passive failover to switch DNS to the secondary Region if the primary fails.
● An Aurora cross-Region replica provides near real-time replication and faster recovery in case of disaster.
This approach is cost-effective because the secondary infrastructure is not actively serving traffic.
Why other options are wrong:
B. Active-active setups are more expensive and are intended for high availability, not just disaster recovery with some downtime tolerance.
C. Restoring from a snapshot introduces more downtime than allowed (snapshot restores are slower than promoting a replica).
D. AWS Backup is suitable for backups but not ideal for minimizing RTO/RPO compared to Aurora replicas for DR scenarios.
Source: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database.html
A company wants to direct its users to a backup static error page if the company’s primary website is unavailable. The primary website’s DNS records are hosted in Amazon Route 53. The domain is pointing to an Application Load Balancer (ALB). The company needs a solution that minimizes changes and infrastructure overhead.
Which solution will meet these requirements?
⬜ A. Update the Route 53 records to use a latency routing policy. Add a static error page that is hosted in an Amazon S3 bucket to the records so that the traffic is sent to the most responsive endpoints.
✅ B. Set up a Route 53 active-passive failover configuration. Direct traffic to a static error page that is hosted in an Amazon S3 bucket when Route 53 health checks determine that the ALB endpoint is unhealthy.
⬜ C. Set up a Route 53 active-active configuration with the ALB and an Amazon EC2 instance that hosts a static error page as endpoints. Configure Route 53 to send requests to the instance only if the health checks fail for the ALB.
⬜ D. Update the Route 53 records to use a multivalue answer routing policy. Create a health check. Direct traffic to the website if the health check passes. Direct traffic to a static error page that is hosted in Amazon S3 if the health check does not pass.
Explanation:
The most efficient solution with minimal infrastructure changes and low overhead is:
● B. Use Route 53 active-passive failover.
● Primary traffic is directed to the ALB while healthy, and when the ALB health check fails, traffic is automatically routed to the static S3 bucket hosting the error page.
● This setup is simple, serverless, and cost-effective.
Why other options are wrong:
A. Latency routing is for choosing the lowest latency endpoint, not for failover in case of unavailability.
C. Active-active setup adds unnecessary complexity; the company only needs a backup page, not a second active website.
D. Multivalue answer routing is not as reliable for precise failover behavior compared to a dedicated active-passive failover policy.
Source: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover.html
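A sketch of the two failover records is shown below. It assumes the error-page bucket is named after the record (a requirement for S3 website alias records), that a Route 53 health check for the ALB already exists, and that all zone IDs, the ALB DNS name, and the health check ID are placeholders.

import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z123EXAMPLE"          # the domain's hosted zone (placeholder)
ALB_ZONE_ID = "Z35SXDOTRQ7X7K"          # the ALB's canonical hosted zone ID (placeholder)
S3_WEBSITE_ZONE_ID = "Z3AQBSTGFYJSTF"   # S3 website endpoint zone ID for the bucket's Region (placeholder)

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [
            {   # Primary record: the ALB, evaluated by a Route 53 health check.
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "A",
                    "SetIdentifier": "primary-alb",
                    "Failover": "PRIMARY",
                    "HealthCheckId": "11111111-2222-3333-4444-555555555555",
                    "AliasTarget": {
                        "HostedZoneId": ALB_ZONE_ID,
                        "DNSName": "my-alb-1234567890.us-east-1.elb.amazonaws.com",
                        "EvaluateTargetHealth": True,
                    },
                },
            },
            {   # Secondary record: the S3 static error page, used only on failover.
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "A",
                    "SetIdentifier": "secondary-error-page",
                    "Failover": "SECONDARY",
                    "AliasTarget": {
                        "HostedZoneId": S3_WEBSITE_ZONE_ID,
                        "DNSName": "s3-website-us-east-1.amazonaws.com",
                        "EvaluateTargetHealth": False,
                    },
                },
            },
        ]
    },
)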
A company has a data ingestion workflow that includes the following components:
● An Amazon Simple Notification Service (Amazon SNS) topic that receives notifications about new data deliveries
● An AWS Lambda function that processes and stores the data
The ingestion workflow occasionally fails because of network connectivity issues. When failure occurs, the corresponding data is not ingested unless the company manually reruns the job. What should a solutions architect do to ensure that all notifications are eventually processed?
⬜ A. Configure the Lambda function for deployment across multiple Availability Zones.
⬜ B. Modify the Lambda function’s configuration to increase the CPU and memory allocations for the function.
⬜ C. Configure the SNS topic’s retry strategy to increase both the number of retries and the wait time between retries.
✅ D. Configure an Amazon Simple Queue Service (Amazon SQS) queue as the on-failure destination. Modify the Lambda function to process messages in the queue.
Explanation:
The most reliable way to ensure that all notifications are eventually processed is to:
● D. Use an Amazon SQS queue as a dead-letter queue (DLQ) or on-failure destination for the SNS topic.
● This ensures that if the Lambda function invocation fails, the notification is stored in SQS and can be retried later.
● Lambda can then process messages from the SQS queue, making the ingestion more fault-tolerant without manual intervention.
Why other options are wrong:
A. Lambda is already highly available across multiple AZs by design — no extra configuration needed.
B. Increasing CPU and memory might improve performance, but does not solve network failure issues.
C. SNS retry strategies are helpful but do not guarantee delivery beyond a few retries if persistent failures occur — using SQS ensures durable message storage.
Source: https://docs.aws.amazon.com/lambda/latest/dg/invocation-async.html#invocation-async-destinations
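A sketch of wiring the on-failure destination with boto3 is shown below. The function and queue names are placeholders, and the function's execution role is assumed to allow sqs:SendMessage on the queue.

import boto3

lambda_client = boto3.client("lambda")
sqs = boto3.client("sqs")

# Create the queue that will capture notifications the function failed to process.
queue_url = sqs.create_queue(QueueName="ingestion-failures")["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Route failed asynchronous invocations (for example, SNS-triggered ones) to SQS.
lambda_client.put_function_event_invoke_config(
    FunctionName="ingest-data",                      # hypothetical function name
    MaximumRetryAttempts=2,
    DestinationConfig={"OnFailure": {"Destination": queue_arn}},
)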
A business’s backup data totals 700 terabytes (TB) and is kept in network-attached storage (NAS) at its data center. This backup data must be available in the event of occasional regulatory inquiries and preserved for a period of seven years. The organization has chosen to relocate its backup data from its on-premises data center to Amazon Web Services (AWS). Within one month, the migration must be completed. The company’s public internet connection provides 500 Mbps of dedicated capacity for data transport.
What should a solutions architect do to ensure that data is migrated and stored at the LOWEST possible cost?
✅ A. Order AWS Snowball devices to transfer the data. Use a lifecycle policy to transition the files to Amazon S3 Glacier Deep Archive.
⬜ B. Deploy a VPN connection between the data center and Amazon VPC. Use the AWS CLI to copy the data from on premises to Amazon S3 Glacier.
⬜ C. Provision a 500 Mbps AWS Direct Connect connection and transfer the data to Amazon S3. Use a lifecycle policy to transition the files to Amazon S3 Glacier Deep Archive.
⬜ D. Use AWS DataSync to transfer the data and deploy a DataSync agent on premises. Use the DataSync task to copy files from the on-premises NAS storage to Amazon S3 Glacier.
Explanation:
The best option for migrating 700 TB of data within one month and at the lowest cost is to:
● A. Use AWS Snowball devices, which are designed for large-scale data migrations without depending on network bandwidth.
● After ingestion into S3, S3 lifecycle policies can automatically transition data to S3 Glacier Deep Archive for low-cost long-term storage.
Transferring 700 TB over a 500 Mbps link would take roughly 130 days even at full line rate (700 TB is about 5.6 × 10^15 bits, divided by 5 × 10^8 bits per second is about 1.1 × 10^7 seconds), so an online transfer cannot meet the one-month deadline.
Why other options are wrong:
B. VPN connection over public internet would be too slow and is not intended for large bulk data migrations.
C. Provisioning Direct Connect might improve speed but requires weeks to set up, costs more, and may not meet the one-month deadline.
D. AWS DataSync uses network bandwidth, and 700 TB over a 500 Mbps line would not complete in time.
Source: https://docs.aws.amazon.com/snowball/latest/ug/whatissnowball.html
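The lifecycle portion of this answer can be expressed with a single bucket lifecycle rule, sketched below. It assumes the Snowball import has landed in the bucket shown (a placeholder name), transitions objects to S3 Glacier Deep Archive shortly after ingest, and expires them after roughly 7 years (2,555 days).

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-backup-archive",   # bucket the Snowball import landed in (placeholder)
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "to-deep-archive-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},          # apply to the whole bucket
                "Transitions": [
                    {"Days": 1, "StorageClass": "DEEP_ARCHIVE"}
                ],
                "Expiration": {"Days": 2555},      # roughly 7 years of retention
            }
        ]
    },
)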
A gaming company has a web application that displays scores. The application runs on Amazon EC2 instances behind an Application Load Balancer. The application stores data in an Amazon RDS for MySQL database. Users are starting to experience long delays and interruptions that are caused by database read performance. The company wants to improve the user experience while minimizing changes to the application’s architecture.
What should a solutions architect do to meet these requirements?
✅ A. Use Amazon ElastiCache in front of the database.
⬜ B. Use RDS Proxy between the application and the database.
⬜ C. Migrate the application from EC2 instances to AWS Lambda.
⬜ D. Migrate the database from Amazon RDS for MySQL to Amazon DynamoDB.
Explanation:
The most effective way to improve read performance while minimizing changes to the current architecture is to:
● A. Implement Amazon ElastiCache (like Redis or Memcached) in front of the RDS database.
● ElastiCache will cache frequent read queries, reducing the load on RDS and significantly improving application responsiveness without major changes to the application code.
Why other options are wrong:
B. RDS Proxy helps with connection pooling and scaling connections, but does not significantly improve read performance for heavy query loads.
C. Migrating to Lambda would require major refactoring of the application, which goes against the requirement to minimize changes.
D. Migrating to DynamoDB would require significant application redesign, as DynamoDB is a NoSQL database, very different from MySQL.
Source: https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/WhatIs.html
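A minimal cache-aside sketch is shown below. It assumes an ElastiCache for Redis endpoint, the redis-py client, and an existing (unchanged) function that runs the MySQL query; the endpoint, key format, and TTL are placeholders.

import json
import redis   # redis-py client, assumed to be available to the application

cache = redis.Redis(host="scores-cache.abc123.ng.0001.use1.cache.amazonaws.com", port=6379)

def get_leaderboard(game_id, fetch_from_mysql):
    """Cache-aside read: try Redis first, fall back to RDS, then populate the cache."""
    key = f"leaderboard:{game_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)

    # Cache miss: run the existing MySQL query, then cache the result for 30 seconds
    # so repeated reads no longer hit the database.
    scores = fetch_from_mysql(game_id)
    cache.setex(key, 30, json.dumps(scores))
    return scores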
A company is planning to build a high performance computing (HPC) workload as a service solution that is hosted on AWS. A group of 16 Amazon EC2 Linux instances requires the lowest possible latency for node-to-node communication. The instances also need a shared block device volume for high-performing storage.
Which solution will meet these requirements?
✅ A. Use a cluster placement group. Attach a single Provisioned IOPS SSD Amazon Elastic Block Store (Amazon EBS) volume to all the instances by using Amazon EBS Multi-Attach.
⬜ B. Use a cluster placement group. Create shared file systems across the instances by using Amazon Elastic File System (Amazon EFS).
⬜ C. Use a partition placement group. Create shared file systems across the instances by using Amazon Elastic File System (Amazon EFS).
⬜ D. Use a spread placement group. Attach a single Provisioned IOPS SSD Amazon Elastic Block Store (Amazon EBS) volume to all the instances by using Amazon EBS Multi-Attach.
Explanation:
For HPC workloads requiring lowest possible network latency and high-performance shared storage:
● A. Use a cluster placement group to place all instances physically close together inside a single AZ, optimizing network performance (low latency and high throughput).
● Attach a Provisioned IOPS SSD EBS volume with EBS Multi-Attach, allowing multiple instances to access the same block storage device simultaneously for high-performance shared storage.
Why other options are wrong:
B. EFS is a file system, not a block device, and introduces higher latency — not ideal for HPC requiring block-level storage and low latency.
C. Partition placement groups isolate groups of instances for large-scale distributed systems but do not optimize for low-latency communication like cluster placement groups do.
D. Spread placement groups maximize instance separation for fault tolerance, not for minimizing network latency.
Source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
Source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volumes-multi.html
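A sketch of the placement group, instances, and Multi-Attach volume is shown below. The AMI ID, instance type, volume size, and IOPS are placeholders; Multi-Attach requires io1/io2 volumes and works only within a single Availability Zone, which the cluster placement group already implies.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Cluster placement group keeps the 16 nodes physically close for low latency.
ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

# 2. Launch the 16 instances into the placement group (AMI and type are placeholders).
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5n.18xlarge",
    MinCount=16,
    MaxCount=16,
    Placement={"GroupName": "hpc-cluster", "AvailabilityZone": "us-east-1a"},
)

# 3. One Provisioned IOPS volume with Multi-Attach enabled, shared by the nodes.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=1024,
    VolumeType="io2",
    Iops=64000,
    MultiAttachEnabled=True,
)

# 4. Attach the same volume to each instance (instance IDs come from the run_instances
#    response); a cluster-aware file system must coordinate the concurrent writes.
# ec2.attach_volume(VolumeId=volume["VolumeId"], InstanceId=instance_id, Device="/dev/sdf")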
A solutions architect is performing a security review of a recently migrated workload. The workload is a web application that consists of Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer. The solutions architect must improve the security posture and minimize the impact of a DDoS attack on resources.
Which solution is MOST effective?
✅ A. Configure an AWS WAF ACL with rate-based rules. Create an Amazon CloudFront distribution that points to the Application Load Balancer. Enable the WAF ACL on the CloudFront distribution.
⬜ B. Create a custom AWS Lambda function that adds identified attacks into a common vulnerability pool to capture a potential DDoS attack. Use the identified information to modify a network ACL to block access.
⬜ C. Enable VPC Flow Logs and store them in Amazon S3. Create a custom AWS Lambda function that parses the logs looking for a DDoS attack. Modify a network ACL to block identified source IP addresses.
⬜ D. Enable Amazon GuardDuty and configure findings written to Amazon CloudWatch. Create an event with CloudWatch Events for DDoS alerts that triggers Amazon Simple Notification Service (Amazon SNS). Have Amazon SNS invoke a custom AWS Lambda function that parses the logs looking for a DDoS attack. Modify a network ACL to block identified source IP addresses.
Explanation:
The most effective way to improve the security posture and protect against DDoS attacks is to:
● A. Use AWS WAF with rate-based rules to automatically block IP addresses that exceed request thresholds.
● Associate the WAF web ACL with an Amazon CloudFront distribution whose origin is the ALB; CloudFront absorbs and mitigates DDoS traffic at the edge before it reaches the backend.
This method provides automated protection, reduces load on backend systems, and minimizes operational overhead.
Why other options are wrong:
B. Building a custom Lambda function and managing NACLs manually is complex, reactive, and error-prone compared to using managed services.
C. VPC Flow Logs are useful for monitoring but do not actively block DDoS traffic.
D. GuardDuty provides threat detection, not immediate traffic filtering; mitigation through custom workflows increases latency and operational overhead.
Source: https://docs.aws.amazon.com/waf/latest/developerguide/waf-chapter.html
Source: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web.html
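A sketch of the rate-based rule is shown below. The web ACL name, metric names, and request limit are placeholders; a CloudFront-scoped web ACL must be created through the us-east-1 endpoint, and the returned ACL is then referenced from the CloudFront distribution that fronts the ALB.

import boto3

# For a CloudFront-scoped web ACL, the WAFv2 client must use the us-east-1 Region.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.create_web_acl(
    Name="edge-protection",
    Scope="CLOUDFRONT",
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "rate-limit-per-ip",
            "Priority": 0,
            # Block any source IP that exceeds 2,000 requests in a 5-minute window.
            "Statement": {"RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}},
            "Action": {"Block": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "rate-limit-per-ip",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "edge-protection",
    },
)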
A company is migrating its on-premises PostgreSQL database to Amazon Aurora PostgreSQL. The on-premises database must remain online and accessible during the migration. The Aurora database must remain synchronized with the on-premises database.
Which combination of actions must a solutions architect take to meet these requirements? (Select TWO.)
✅ A. Create an ongoing replication task.
⬜ B. Create a database backup of the on-premises database.
✅ C. Create an AWS Database Migration Service (AWS DMS) replication server.
⬜ D. Convert the database schema by using the AWS Schema Conversion Tool (AWS SCT).
⬜ E. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to monitor the database synchronization.
Explanation:
To migrate a live, online database while keeping it synchronized:
● A. An ongoing replication task with AWS DMS captures ongoing changes (CDC - Change Data Capture) after the initial load.
● C. An AWS DMS replication server is needed to manage and run the replication process from the source (on-premises PostgreSQL) to the target (Aurora PostgreSQL).
Why other options are wrong:
B. Creating a backup is helpful but does not support ongoing synchronization — backups are one-time, not continuous replication.
D. The AWS Schema Conversion Tool (AWS SCT) is only needed for heterogeneous migrations (different engines). Since both are PostgreSQL, no schema conversion is needed.
E. EventBridge can monitor events but does not synchronize databases; monitoring alone is not sufficient to ensure migration success.
Source: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Overview.html
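The two selected actions map to the two boto3 calls sketched below. The instance class, identifiers, and endpoint ARNs are placeholders, and it assumes DMS source and target endpoints have already been defined; in practice the replication instance must be available before the task is created.

import json
import boto3

dms = boto3.client("dms")

# Replication server (option C): a DMS replication instance runs the migration.
instance = dms.create_replication_instance(
    ReplicationInstanceIdentifier="pg-to-aurora",
    ReplicationInstanceClass="dms.r5.large",
)

# Ongoing replication task (option A): full load of existing data, then CDC to keep
# Aurora synchronized while the on-premises database stays online.
dms.create_replication_task(
    ReplicationTaskIdentifier="pg-to-aurora-cdc",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SOURCE",   # placeholder
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TARGET",   # placeholder
    ReplicationInstanceArn=instance["ReplicationInstance"]["ReplicationInstanceArn"],
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "all-tables",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)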
A company stores data in an Amazon Aurora PostgreSQL DB cluster. The company must store all the data for 5 years and must delete all the data after 5 years. The company also must indefinitely keep audit logs of actions that are performed within the database. Currently, the company has automated backups configured for Aurora.
Which combination of steps should a solutions architect take to meet these requirements? (Select TWO.)
⬜ A. Take a manual snapshot of the DB cluster.
⬜ B. Create a lifecycle policy for the automated backups.
⬜ C. Configure automated backup retention for 5 years.
✅ D. Configure an Amazon CloudWatch Logs export for the DB cluster.
✅ E. Use AWS Backup to take the backups and to keep the backups for 5 years.
Explanation:
To meet the requirements:
● D. Export the Aurora audit logs to Amazon CloudWatch Logs to retain audit activities indefinitely and meet compliance requirements.
● E. Use AWS Backup to manage database backups and set a retention policy to keep backups for 5 years and delete them after that.
Why other options are wrong:
A. Manual snapshots do not expire automatically, requiring manual deletion and management, which increases operational overhead.
B. There is no lifecycle policy for automated Aurora backups (Aurora automatically manages backups within a limited retention window).
C. Aurora automated backup retention supports up to 35 days only, not 5 years — AWS Backup is required for longer retention periods.
Source: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Backups.html
Source: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-postgresql-logs.html
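A sketch of the two steps is shown below, assuming the cluster name, backup vault, role ARN, and cluster ARN are placeholders and that the database logging itself (for example, pgaudit) is already configured. The log export sends the PostgreSQL logs to CloudWatch Logs, where the default retention is indefinite, and the AWS Backup rule deletes backups after 5 years (1,825 days).

import boto3

rds = boto3.client("rds")
backup = boto3.client("backup")

# Keep database activity logs indefinitely by exporting them to CloudWatch Logs.
rds.modify_db_cluster(
    DBClusterIdentifier="orders-cluster",                     # placeholder cluster name
    CloudwatchLogsExportConfiguration={"EnableLogTypes": ["postgresql"]},
    ApplyImmediately=True,
)

# Back up the cluster with AWS Backup and delete the backups after 5 years.
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "aurora-5-year-retention",
        "Rules": [{
            "RuleName": "daily",
            "TargetBackupVaultName": "Default",
            "ScheduleExpression": "cron(0 3 * * ? *)",
            "Lifecycle": {"DeleteAfterDays": 1825},
        }],
    }
)

# Assign the Aurora cluster to the plan (role ARN and cluster ARN are placeholders).
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "orders-cluster",
        "IamRoleArn": "arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole",
        "Resources": ["arn:aws:rds:us-east-1:111122223333:cluster:orders-cluster"],
    },
)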
A company has migrated a fleet of hundreds of on-premises virtual machines (VMs) to Amazon EC2 instances. The instances run a diverse fleet of Windows Server versions along with several Linux distributions. The company wants a solution that will automate inventory and updates of the operating systems. The company also needs a summary of common vulnerabilities of each instance for regular monthly reviews.
What should a solutions architect recommend to meet these requirements?
⬜ A. Set up AWS Systems Manager Patch Manager to manage all the EC2 instances. Configure AWS Security Hub to produce monthly reports.
✅ B. Set up AWS Systems Manager Patch Manager to manage all the EC2 instances. Deploy Amazon Inspector, and configure monthly reports.
⬜ C. Set up AWS Shield Advanced, and configure monthly reports. Deploy AWS Config to automate patch installations on the EC2 instances.
⬜ D. Set up Amazon GuardDuty in the account to monitor all EC2 instances. Deploy AWS Config to automate patch installations on the EC2 instances.
Explanation:
The best solution is:
● B. Use AWS Systems Manager Patch Manager to automate patching and updating across diverse EC2 instances.
● Deploy Amazon Inspector, which automatically assesses instances for common vulnerabilities and exposures (CVEs) and provides monthly reports on findings.
This setup addresses both patch automation and vulnerability assessment in a scalable, managed way.
Why other options are wrong:
A. AWS Security Hub aggregates findings from services like Inspector but does not perform vulnerability scans itself.
C. AWS Shield Advanced is for DDoS protection, not patching or vulnerability management.
D. Amazon GuardDuty is for threat detection, not managing patches or generating CVE vulnerability reports.
Source: https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-patch.html
Source: https://docs.aws.amazon.com/inspector/latest/user/inspector_introduction.html
A company wants to build an online marketplace application on AWS as a set of loosely coupled microservices. For this application, when a customer submits a new order, two microservices should handle the event simultaneously:
● The Email microservice will send a confirmation email.
● The OrderProcessing microservice will start the order delivery process.
If a customer cancels an order, the OrderCancellation and Email microservices should handle the event simultaneously.
A solutions architect wants to use Amazon Simple Queue Service (Amazon SQS) and Amazon Simple Notification Service (Amazon SNS) to design the messaging between the microservices.
How should the solutions architect design the solution?
⬜ A. Create a single SQS queue and publish order events to it. The Email, OrderProcessing, and OrderCancellation microservices can then consume messages off the queue.
⬜ B. Create three SNS topics for each microservice. Publish order events to the three topics. Subscribe each of the Email, OrderProcessing, and OrderCancellation microservices to its own topic.
✅ C. Create an SNS topic and publish order events to it. Create three SQS queues for the Email, OrderProcessing, and OrderCancellation microservices. Subscribe all SQS queues to the SNS topic with message filtering.
⬜ D. Create two SQS queues and publish order events to both queues simultaneously. One queue is for the Email and OrderProcessing microservices. The second queue is for the Email and OrderCancellation microservices.
Explanation:
The most efficient and scalable solution is:
● C. Create an SNS topic for publishing order events.
● Create three SQS queues (one for each microservice: Email, OrderProcessing, OrderCancellation).
● Subscribe the queues to the SNS topic and use message filtering to route only relevant events (e.g., new order or cancel order) to the appropriate microservices.
This allows loose coupling, fan-out architecture, and efficient event distribution with minimal duplication.
Why other options are wrong:
A. A single SQS queue does not allow multiple services to independently consume the same event.
B. Creating a separate SNS topic for each microservice unnecessarily complicates the design; a single topic with filtered subscriptions is simpler.
D. Publishing to multiple SQS queues directly increases complexity and breaks event-driven best practices.
Source: https://docs.aws.amazon.com/sns/latest/dg/sns-message-filtering.html
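The fan-out with filter policies described above can be sketched with boto3 as follows. Topic, queue, and attribute names are placeholders, and the SQS queue policies that allow SNS to deliver messages are omitted for brevity.

import json
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

topic_arn = sns.create_topic(Name="order-events")["TopicArn"]

# One queue per microservice, each subscribed to the same topic with a filter
# policy, so only the relevant event types are delivered to each consumer.
subscriptions = {
    "email-service": ["order_placed", "order_cancelled"],
    "order-processing-service": ["order_placed"],
    "order-cancellation-service": ["order_cancelled"],
}

for queue_name, event_types in subscriptions.items():
    queue_url = sqs.create_queue(QueueName=queue_name)["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]
    sns.subscribe(
        TopicArn=topic_arn,
        Protocol="sqs",
        Endpoint=queue_arn,
        Attributes={"FilterPolicy": json.dumps({"event_type": event_types})},
    )

# Publishing a new-order event with a matching message attribute fans it out to
# the Email and OrderProcessing queues only.
sns.publish(
    TopicArn=topic_arn,
    Message=json.dumps({"orderId": "123"}),
    MessageAttributes={"event_type": {"DataType": "String", "StringValue": "order_placed"}},
)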
A company has hundreds of Amazon EC2 Linux-based instances in the AWS Cloud. Systems administrators have used shared SSH keys to manage the instances. After a recent audit, the company’s security team is mandating the removal of all shared keys. A solutions architect must design a solution that provides secure access to the EC2 instances.
Which solution will meet this requirement with the LEAST amount of administrative overhead?
✅ A. Use AWS Systems Manager Session Manager to connect to the EC2 instances.
⬜ B. Use AWS Security Token Service (AWS STS) to generate one-time SSH keys on demand.
⬜ C. Allow shared SSH access to a set of bastion instances. Configure all other instances to allow only SSH access from the bastion instances.
⬜ D. Use an Amazon Cognito custom authorizer to authenticate users. Invoke an AWS Lambda function to generate a temporary SSH key.
Explanation:
The best solution with the least administrative overhead is to:
● A. Use AWS Systems Manager Session Manager to securely connect to EC2 instances without the need for SSH keys.
● It eliminates the need to manage SSH keys, supports fine-grained IAM access control, provides audit logging to Amazon CloudWatch Logs or Amazon S3, and works over the AWS network without needing to open inbound ports.
Why other options are wrong:
B. AWS STS is mainly for temporary credentials for AWS APIs, not for managing SSH access.
C. Bastion hosts still require managing SSH keys and expose security risks and operational overhead.
D. Cognito + Lambda adds unnecessary complexity for simple EC2 access management.
Source: https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html
A company hosts a marketing website in an on-premises data center. The website consists of static documents and runs on a single server. An administrator updates the website content infrequently and uses an SFTP client to upload new documents.
The company decides to host its website on AWS and to use Amazon CloudFront. The company’s solutions architect creates a CloudFront distribution. The solutions architect must design the most cost-effective and resilient architecture for website hosting to serve as the CloudFront origin.
Which solution will meet these requirements?
⬜ A. Create a virtual server by using Amazon Lightsail. Configure the web server in the Lightsail instance. Upload website content by using an SFTP client.
⬜ B. Create an AWS Auto Scaling group for Amazon EC2 instances. Use an Application Load Balancer. Upload website content by using an SFTP client.
✅ C. Create a private Amazon S3 bucket. Use an S3 bucket policy to allow access from a CloudFront origin access identity (OAI). Upload website content by using the AWS CLI.
⬜ D. Create a public Amazon S3 bucket. Configure AWS Transfer for SFTP. Configure the S3 bucket for website hosting. Upload website content by using the SFTP client.
Explanation:
The most cost-effective and resilient solution is:
● C. Store the static website content in a private Amazon S3 bucket and configure access through a CloudFront origin access identity (OAI).
● Uploading via the AWS CLI is straightforward and secure.
● This approach provides high availability, automatic scaling, and low cost without running servers.
Why other options are wrong:
A. Amazon Lightsail adds unnecessary server management and higher costs compared to S3 for static content.
B. An Auto Scaling group and ALB are overkill for hosting static documents, and much more expensive.
D. Making the S3 bucket public is not recommended due to security risks; best practice is using private buckets with CloudFront OAI.
Source: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html
A company deploys Amazon EC2 instances that run in a VPC. The EC2 instances load source data into Amazon S3 buckets so that the data can be processed in the future. According to compliance laws, the data must not be transmitted over the public internet. Servers in the company’s on-premises data center will consume the output from an application that runs on the EC2 instances.
Which solution will meet these requirements?
⬜ A. Deploy an interface VPC endpoint for Amazon EC2. Create an AWS Site-to-Site VPN connection between the company and the VPC.
✅ B. Deploy a gateway VPC endpoint for Amazon S3. Set up an AWS Direct Connect connection between the on-premises network and the VPC.
⬜ C. Set up an AWS Transit Gateway connection from the VPC to the S3 buckets. Create an AWS Site-to-Site VPN connection between the company and the VPC.
⬜ D. Set up proxy EC2 instances that have routes to NAT gateways. Configure the proxy EC2 instances to fetch S3 data and feed the application instances.
Explanation:
The correct and most compliant solution is:
● B. Use a gateway VPC endpoint for S3 to allow EC2 instances to access S3 privately without using the public internet.
● Use AWS Direct Connect to create a private, dedicated connection between the on-premises data center and AWS for secure data transfer without touching the internet.
This ensures that all traffic stays private and meets compliance requirements.
Why other options are wrong:
A. An interface VPC endpoint is for AWS services like EC2 APIs, not for S3 data access.
C. Transit Gateway is for connecting VPCs and on-premises networks, not for direct S3 access.
D. Using proxy EC2 instances with NAT gateways would still route traffic through public IPs, violating the compliance requirement.
Source: https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html
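The gateway endpoint half of this answer is a single API call, sketched below with placeholder VPC and route table IDs. The Direct Connect private virtual interface to the on-premises data center is provisioned separately.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint for S3: adds a route so the EC2 instances reach S3 privately,
# without traversing the public internet.
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0123456789abcdef0"],
)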
A company runs multiple Amazon EC2 Linux instances in a VPC across two Availability Zones. The instances host applications that use a hierarchical directory structure. The applications need to read and write rapidly and concurrently to shared storage. What should a solutions architect do to meet these requirements?
⬜ A. Create an Amazon S3 bucket. Allow access from all the EC2 instances in the VPC.
✅ B. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system from each EC2 instance.
⬜ C. Create a file system on a Provisioned IOPS SSD (io2) Amazon Elastic Block Store (Amazon EBS) volume. Attach the EBS volume to all the EC2 instances.
⬜ D. Create file systems on Amazon Elastic Block Store (Amazon EBS) volumes that are attached to each EC2 instance. Synchronize the EBS volumes across the different EC2 instances.
Explanation:
The best solution is:
● B. Use Amazon Elastic File System (Amazon EFS), which is a fully managed, scalable, NFS-based shared file storage service.
● EFS allows multiple EC2 instances across multiple AZs to simultaneously mount and read/write to the same shared file system.
● This setup is perfect for applications requiring hierarchical directory structures and concurrent access with high throughput and low latency.
Why other options are wrong:
A. Amazon S3 is object storage, not a shared file system — not suitable for hierarchical file system requirements or concurrent file operations.
C. EBS volumes attach to a single instance unless EBS Multi-Attach is used, which is still block-level storage (not shared file system semantics) and works only within one Availability Zone, so it cannot span the two AZs in this scenario.
D. Synchronizing EBS volumes manually across instances is complex, error-prone, and inefficient.
Source: https://docs.aws.amazon.com/efs/latest/ug/whatisefs.html
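A sketch of the EFS setup is shown below, with one mount target per Availability Zone so instances in both AZs can mount the same file system. The subnet and security group IDs are placeholders.

import boto3

efs = boto3.client("efs")

# One regional EFS file system shared by all instances.
fs = efs.create_file_system(CreationToken="shared-app-data", Encrypted=True)

# A mount target in each AZ's subnet (IDs are placeholders).
for subnet_id in ["subnet-0aaa1111", "subnet-0bbb2222"]:
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],
    )

# Each instance then mounts the same file system, for example with:
#   sudo mount -t nfs4 <file-system-id>.efs.us-east-1.amazonaws.com:/ /mnt/shared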
A company wants to implement a disaster recovery plan for its primary on-premises file storage volume. The file storage volume is mounted from an Internet Small Computer Systems Interface (iSCSI) device on a local storage server. The file storage volume holds hundreds of terabytes (TB) of data.
The company wants to ensure that end users retain immediate access to all file types from the on-premises systems without experiencing latency.
Which solution will meet these requirements with the LEAST amount of change to the company’s existing infrastructure?
⬜ A. Provision an Amazon S3 File Gateway as a virtual machine (VM) that is hosted on premises. Set the local cache to 10 TB. Modify existing applications to access the files through the NFS protocol. To recover from a disaster, provision an Amazon EC2 instance and mount the S3 bucket that contains the files.
⬜ B. Provision an AWS Storage Gateway tape gateway. Use a data backup solution to back up all existing data to a virtual tape library. Configure the data backup solution to run nightly after the initial backup is complete. To recover from a disaster, provision an Amazon EC2 instance and restore the data to an Amazon Elastic Block Store (Amazon EBS) volume from the volumes in the virtual tape library.
⬜ C. Provision an AWS Storage Gateway Volume Gateway cached volume. Set the local cache to 10 TB. Mount the Volume Gateway cached volume to the existing file server by using iSCSI, and copy all files to the storage volume. Configure scheduled snapshots of the storage volume. To recover from a disaster, restore a snapshot to an Amazon Elastic Block Store (Amazon EBS) volume and attach the EBS volume to an Amazon EC2 instance.
✅ D. Provision an AWS Storage Gateway Volume Gateway stored volume with the same amount of disk space as the existing file storage volume. Mount the Volume Gateway stored volume to the existing file server by using iSCSI, and copy all files to the storage volume. Configure scheduled snapshots of the storage volume. To recover from a disaster, restore a snapshot to an Amazon Elastic Block Store (Amazon EBS) volume and attach the EBS volume to an Amazon EC2 instance.
Explanation:
The best solution with the least change to the existing on-premises iSCSI setup and low-latency access is:
● D. Use a Storage Gateway Volume Gateway stored volume.
● Stored volumes keep a complete copy of your data on-premises for low-latency access while asynchronously backing up data to AWS.
● This meets the requirement for immediate access without latency and provides an easy disaster recovery mechanism via snapshots to Amazon EBS.
Why other options are wrong:
A. S3 File Gateway would require changing protocols (to NFS), which means modifying applications — more change than desired.
B. Tape Gateway is primarily used for archival backup workflows, not live access to files.
C. Cached volumes only cache frequently used data locally, so access to less-frequent files would experience latency when pulling from AWS.
Source: https://docs.aws.amazon.com/storagegateway/latest/userguide/StorageGatewayConcepts.html#volume-gateway-concepts
A recent analysis of a company’s IT expenses highlights the need to reduce backup costs. The company’s chief information officer wants to simplify the on-premises backup infrastructure and reduce costs by eliminating the use of physical backup tapes. The company must preserve the existing investment in the on-premises backup applications and workflows.
What should a solutions architect recommend?
⬜ A. Set up AWS Storage Gateway to connect with the backup applications using the NFS interface.
⬜ B. Set up an Amazon EFS file system that connects with the backup applications using the NFS interface.
⬜ C. Set up an Amazon EFS file system that connects with the backup applications using the iSCSI interface.
✅ D. Set up AWS Storage Gateway to connect with the backup applications using the iSCSI-virtual tape library (VTL) interface.
Explanation:
The correct solution is:
● D. Use AWS Storage Gateway (Tape Gateway) with an iSCSI virtual tape library (VTL) interface.
● This allows the company to replace physical tape infrastructure with a cloud-based virtual tape solution without changing existing backup applications or workflows.
● It emulates tape libraries that backup software can recognize, thereby preserving current investments and simplifying backup management.
Why other options are wrong:
A. NFS Storage Gateway is used for file storage, not a tape replacement.
B. Amazon EFS is a file system, not a backup tape replacement or VTL solution.
C. Amazon EFS does not support iSCSI; EFS uses the NFS protocol, so it cannot emulate tape backup systems.
Source: https://docs.aws.amazon.com/storagegateway/latest/userguide/StorageGatewayConcepts.html#tape-gateway-concepts
A company has a web application that runs on Amazon EC2 instances. The company wants end users to authenticate themselves before they use the web application. The web application accesses AWS resources, such as Amazon S3 buckets, on behalf of users who are logged on.
Which combination of actions must a solutions architect take to meet these requirements? (Select TWO.)
⬜ A. Configure AWS App Mesh to log on users.
⬜ B. Enable and configure AWS Single Sign-On in AWS Identity and Access Management (IAM).
✅ C. Define a default IAM role for authenticated users.
⬜ D. Use AWS Identity and Access Management (IAM) for user authentication.
✅ E. Use Amazon Cognito for user authentication.
Explanation:
The correct solution to handle user authentication and accessing AWS resources on behalf of authenticated users is:
● E. Use Amazon Cognito to handle user authentication (sign-up, sign-in, and user management) securely.
● C. Define a default IAM role for authenticated users through Amazon Cognito identity pools to allow temporary access to AWS resources like S3.
This ensures that users can authenticate and the application can safely assume roles to access AWS services on their behalf.
Why other options are wrong:
A. AWS App Mesh is for service-to-service communication inside microservices, not user authentication.
B. AWS Single Sign-On is used for centralized access management to AWS accounts, not for external application users.
D. IAM is intended for controlling access for AWS resources and users — it is not meant for direct end-user authentication at the application level.
Source: https://docs.aws.amazon.com/cognito/latest/developerguide/what-is-amazon-cognito.html
An ecommerce company is building a distributed application that involves several serverless functions and AWS services to complete order-processing tasks. These tasks require manual approvals as part of the workflow. A solutions architect needs to design an architecture for the order-processing application. The solution must be able to combine multiple AWS Lambda functions into responsive serverless applications. The solution also must orchestrate data and services that run on Amazon EC2 instances, containers, or on-premises servers.
Which solution will meet these requirements with the LEAST operational overhead?
✅ A. Use AWS Step Functions to build the application.
⬜ B. Integrate all the application components in an AWS Glue job.
⬜ C. Use Amazon Simple Queue Service (Amazon SQS) to build the application.
⬜ D. Use AWS Lambda functions and Amazon EventBridge (Amazon CloudWatch Events) events to build the application.
Explanation:
The best solution for orchestrating multiple serverless functions, services, and manual approval steps with low operational overhead is:
● A. AWS Step Functions.
● Step Functions can sequence Lambda functions, integrate with EC2, containers, and on-premises systems, and add manual approval steps into a workflow with visual monitoring and error handling.
● It provides a serverless, fully managed orchestration service, significantly simplifying complex workflows.
Why other options are wrong:
B. AWS Glue is for ETL jobs and data transformation, not workflow orchestration.
C. Amazon SQS is good for decoupling components but does not manage complex workflows or manual approvals.
D. EventBridge helps with event-driven architectures, but does not handle workflow orchestration and manual approval steps out of the box.
Source: https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html
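A small sketch of such a workflow is shown below: process the order, pause for a manual approval using the Lambda "wait for task token" integration, then fulfill the order. The Lambda ARNs, function names, and IAM role are placeholders; the approval system later resumes the workflow by calling SendTaskSuccess with the task token it received.

import json
import boto3

sfn = boto3.client("stepfunctions")

definition = {
    "StartAt": "ProcessOrder",
    "States": {
        "ProcessOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:process-order",
            "Next": "WaitForApproval",
        },
        "WaitForApproval": {
            # Pauses the workflow until SendTaskSuccess is called with the task token.
            "Type": "Task",
            "Resource": "arn:aws:states:::lambda:invoke.waitForTaskToken",
            "Parameters": {
                "FunctionName": "notify-approver",
                "Payload": {"order.$": "$", "taskToken.$": "$$.Task.Token"},
            },
            "Next": "FulfillOrder",
        },
        "FulfillOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:fulfill-order",
            "End": True,
        },
    },
}

sfn.create_state_machine(
    name="order-processing",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111122223333:role/order-workflow-role",   # placeholder role
)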
A company maintains a searchable repository of items on its website. The data is stored in an Amazon RDS for MySQL database table that contains more than 10 million rows. The database has 2 TB of General Purpose SSD storage. There are millions of updates against this data every day through the company’s website.
The company has noticed that some insert operations are taking 10 seconds or longer. The company has determined that the database storage performance is the problem.
Which solution addresses this performance issue?
✅ A. Change the storage type to Provisioned IOPS SSD.
⬜ B. Change the DB instance to a memory optimized instance class.
⬜ C. Change the DB instance to a burstable performance instance class.
⬜ D. Enable Multi-AZ RDS read replicas with MySQL native asynchronous replication.
Explanation:
The correct solution is:
● A. Change the RDS storage type from General Purpose SSD (gp2) to Provisioned IOPS SSD (io1/io2), which provides consistent, high-performance I/O required for high-throughput, low-latency workloads like this one.
● Provisioned IOPS storage is specifically designed for transaction-intensive workloads with millions of inserts/updates, eliminating I/O bottlenecks.
Why other options are wrong:
B. A memory-optimized instance improves query caching and read performance, but the issue here is storage I/O performance.
C. Burstable instances are designed for low to moderate baseline workloads, not high-volume, high-update workloads.
D. Read replicas help scale read operations, but insert and update operations would still be bottlenecked at the primary database storage level.
Source: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html#gp2-storage
A company uses Amazon EC2 instances to host its internal systems. As part of a deployment operation, an administrator tries to use the AWS CLI to terminate an EC2 instance. However, the administrator receives a 403 (Access Denied) error message.
The administrator is using an IAM role that has the following IAM policy attached:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ec2:TerminateInstances"],
      "Resource": ["*"]
    },
    {
      "Effect": "Deny",
      "Action": ["ec2:TerminateInstances"],
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": [
            "192.0.2.0/24",
            "203.0.113.0/24"
          ]
        }
      },
      "Resource": ["*"]
    }
  ]
}
What is the cause of the unsuccessful request?
⬜ A. The EC2 instance has a resource-based policy with a Deny statement.
⬜ B. The principal has not been specified in the policy statement.
⬜ C. The ‘Action’ field does not grant the actions that are required to terminate the EC2 instance.
✅ D. The request to terminate the EC2 instance does not originate from the CIDR blocks 192.0.2.0/24 or 203.0.113.0/24.
Explanation:
The policy attached has:
● An Allow to terminate instances generally.
● A Deny conditionally applied if the request does NOT come from the IP addresses 192.0.2.0/24 or 203.0.113.0/24.
Since the administrator is receiving a 403 Access Denied, it indicates that the request did not originate from the allowed CIDR blocks, triggering the Deny condition.
In AWS IAM policies, explicit Deny overrides any Allow.
Why other options are wrong:
A. Resource-based policies are not relevant to EC2 for actions like termination — this is IAM identity policy based.
B. The Principal element is not used in IAM identity-based policies (it applies only to resource-based policies), so its absence is not the cause of the error.
C. The Action field is correctly set to ec2:TerminateInstances.
Source: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html
A company is building a media sharing application and decides to use Amazon S3 for storage. When a media file is uploaded, the company starts a multi-step process to create thumbnails, identify objects in the images, transcode videos into standard formats and resolutions, and extract and store the metadata to an Amazon DynamoDB table. The metadata is used for searching and navigation.
The amount of traffic is variable. The solution must be able to scale to handle spikes in load without unnecessary expenses.
What should a solutions architect recommend to support this workload?
⬜ A. Build the processing into the website or mobile app used to upload the content to Amazon S3. Save the required data to the DynamoDB table when the objects are uploaded.
✅ B. Trigger AWS Step Functions when an object is stored in the S3 bucket. Have the Step Functions perform the steps needed to process the object and then write the metadata to the DynamoDB table.
⬜ C. Trigger an AWS Lambda function when an object is stored in the S3 bucket. Have the Lambda function start AWS Batch to perform the steps to process the object. Place the object data in the DynamoDB table when complete.
⬜ D. Trigger an AWS Lambda function to store an initial entry in the DynamoDB table when an object is uploaded to Amazon S3. Use a program running on an Amazon EC2 instance in an Auto Scaling group to poll the index for unprocessed items, and use the program to perform the processing.
Explanation:
The best solution for handling variable load, multiple processing steps, and scaling efficiently is:
● B. Use AWS Step Functions to orchestrate the multi-step process triggered by S3 uploads.
● Step Functions can coordinate multiple AWS services (e.g., Lambda, Batch) easily with serverless scaling, error handling, and pay-per-use cost model.
● This approach ensures low operational overhead and automatic scaling for spikes in load without running unnecessary compute resources.
Why other options are wrong:
A. Pushing processing into the mobile/web app adds complexity and ties processing to the client, which is not scalable.
C. Using Lambda to start AWS Batch adds unnecessary complexity — Step Functions can directly manage workflows efficiently.
D. Using EC2 instances to poll and process adds management overhead and fixed costs, making it less cost-efficient and less scalable compared to serverless solutions.
Source: https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html
A company used an Amazon RDS for MySQL DB instance during application testing. Before terminating the DB instance at the end of the test cycle, a solutions architect created two backups. The solutions architect created the first backup by using the mysqldump utility to create a database dump. The solutions architect created the second backup by enabling the final DB snapshot option on RDS termination.
The company is now planning for a new test cycle and wants to create a new DB instance from the most recent backup. The company has chosen a MySQL-compatible edition of Amazon Aurora to host the DB instance.
Which solutions will create the new DB instance? (Select TWO.)
✅ A. Import the RDS snapshot directly into Aurora.
⬜ B. Upload the RDS snapshot to Amazon S3, then import the RDS snapshot into Aurora.
✅ C. Upload the database dump to Amazon S3. Then import the database dump into Aurora.
⬜ D. Use AWS Database Migration Service (AWS DMS) to import the RDS snapshot into Aurora.
⬜ E. Upload the database dump to Amazon S3. Then use AWS Database Migration Service (AWS DMS) to import the database dump into Aurora.
Explanation:
The correct options are:
● A. You can directly restore an RDS snapshot into Amazon Aurora MySQL-Compatible Edition by using the Snapshot Import feature.
● C. mysqldump creates a database dump (SQL file), which can be uploaded to S3 and imported into Aurora using the Aurora MySQL import feature (mysqlimport or LOAD DATA FROM S3).
Why other options are wrong:
B. You cannot manually upload a snapshot to S3 and import it — RDS snapshots are managed internally by AWS.
D. AWS DMS is used for live replication and ongoing migrations, not for direct snapshot imports.
E. Although DMS could be used to migrate from a running database, using DMS to import a mysqldump file is unnecessary and complicated compared to native Aurora import tools.
Source: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Migrating.RDSMySQL.Replicate.html
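Option A can be sketched in two calls: restore the final RDS for MySQL snapshot as a new Aurora MySQL-compatible cluster, then add at least one DB instance to the cluster so it can serve connections. The identifiers, snapshot name, and instance class are placeholders.

import boto3

rds = boto3.client("rds")

# Restore the final RDS for MySQL snapshot as a new Aurora MySQL-compatible cluster.
rds.restore_db_cluster_from_snapshot(
    DBClusterIdentifier="test-cycle-2",
    SnapshotIdentifier="final-db-snapshot",          # placeholder snapshot name
    Engine="aurora-mysql",
)

# An Aurora cluster needs at least one DB instance to serve connections.
rds.create_db_instance(
    DBInstanceIdentifier="test-cycle-2-writer",
    DBClusterIdentifier="test-cycle-2",
    Engine="aurora-mysql",
    DBInstanceClass="db.r6g.large",
)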
A company uses a simple static website and wants to host it on AWS. The company already has a domain that it uses for email. The company needs a hosting solution that supports HTTPS. Which solution will meet these requirements MOST cost-effectively?
⬜ A. Create an Amazon S3 bucket with a name to match the website. Upload the website to the S3 bucket. Set up website hosting for the S3 bucket. Set up the DNS to point to the S3 website endpoint.
⬜ B. Create an Amazon S3 bucket, upload the website to the S3 bucket. Set up an HTTPS certificate by using AWS Certificate Manager (ACM). Create an Amazon CloudFront distribution for the S3 bucket and choose Price Class All.
⬜ C. Set up an open-source content management system (CMS) from AWS Marketplace. Deploy the CMS across two Availability Zones. Copy the website onto the CMS. Set up the DNS to point to the CMS.
✅ D. Create an Amazon S3 bucket. Upload the website to the S3 bucket. Set up an HTTPS certificate by using AWS Certificate Manager (ACM). Create an Amazon CloudFront distribution for the S3 bucket and choose Price Class 100. Point to the CloudFront distribution.
Explanation:
The most cost-effective way to host a static website with HTTPS support on AWS is:
● D. Host the static content in an Amazon S3 bucket,
● Use AWS Certificate Manager (ACM) to provision an SSL/TLS certificate for HTTPS,
● Distribute the content using Amazon CloudFront (choosing Price Class 100 keeps costs lower by using edge locations mostly in the US, Canada, and Europe).
S3 website hosting endpoints do not support HTTPS natively without CloudFront.
Why other options are wrong:
A. S3 static website hosting supports HTTP but not HTTPS directly — CloudFront is needed for HTTPS.
B. Price Class All includes global edge locations, which is more expensive than necessary if the website’s audience is mostly regional.
C. Deploying a CMS is overkill for a simple static website and costlier than using S3 + CloudFront.
Source: https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html
Source: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PriceClass.html
A solutions architect at an ecommerce company wants to back up application log data to Amazon S3. The solutions architect is unsure how frequently the logs will be accessed or which logs will be accessed the most. The company wants to keep costs as low as possible by using the appropriate S3 storage class.
Which S3 storage class should be implemented to meet these requirements?
⬜ A. S3 Glacier
✅ B. S3 Intelligent-Tiering
⬜ C. S3 Standard-Infrequent Access (S3 Standard-IA)
⬜ D. S3 One Zone-Infrequent Access (S3 One Zone-IA)
Explanation:
The best choice when access patterns are unknown and cost savings are needed is:
● B. S3 Intelligent-Tiering automatically moves objects between access tiers (Frequent Access, Infrequent Access, and Archive Instant Access) based on changing access patterns, with no performance impact and no operational overhead.
● It is cost-optimized for data with unknown or unpredictable access patterns, and avoids unnecessary retrieval or transition fees.
Why other options are wrong:
A. S3 Glacier is for archival storage — not suitable for data that may need quick or unpredictable access.
C. S3 Standard-IA requires objects to be accessed infrequently, and early retrieval or minimum storage charges apply if accessed too soon.
D. S3 One Zone-IA stores data in a single AZ — it’s cheaper but less resilient, and still best for rarely accessed data, not uncertain access.
Source: https://docs.aws.amazon.com/AmazonS3/latest/userguide/intelligent-tiering-overview.html
A company runs an application on a group of Amazon Linux EC2 instances. For compliance reasons, the company must retain all application log files for 7 years. The log files will be analyzed by a reporting tool that must be able to access all the files concurrently.
Which storage solution meets these requirements MOST cost-effectively?
⬜ A. Amazon Elastic Block Store (Amazon EBS)
⬜ B. Amazon Elastic File System (Amazon EFS)
⬜ C. Amazon EC2 instance store
✅ D. Amazon S3
Explanation:
The most cost-effective and scalable solution for long-term storage with concurrent access is:
● D. Amazon S3.
● S3 provides durable object storage with virtually unlimited capacity, and multiple clients (such as EC2 instances or reporting tools) can access objects concurrently.
● S3 is far cheaper for long-term storage compared to EBS or EFS, and it supports storage class tiers (e.g., S3 Glacier for archival) for additional cost optimization.
Why other options are wrong:
A. EBS volumes are block storage attached to a single EC2 instance and are not shared across instances without complex setup (and costlier for 7 years of storage).
B. EFS is shared file storage but is more expensive than S3 and better suited for frequent access.
C. Instance store is ephemeral and data is lost if the instance stops or terminates — unsuitable for compliance and long-term retention.
Source: https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html
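To keep 7 years of retention cheap, a lifecycle rule can transition older log objects to an archive tier and expire them after roughly 7 years (about 2,555 days). A boto3 sketch under stated assumptions; the bucket name, prefix, and 30-day transition window are placeholders, not values from the question.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="company-app-logs",                 # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-and-expire-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                # Move logs to Glacier-class storage once the reporting tool
                # no longer reads them (30 days is an assumption).
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                # Delete after ~7 years to satisfy the retention requirement.
                "Expiration": {"Days": 2555},
            }
        ]
    },
)
```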
An online retail company needs to run near-real-time analytics on website traffic to analyze top-selling products across different locations. The product purchase data and the user location details are sent to a third-party application that runs on premises. The application processes the data and moves the data into the company’s analytics engine.
The company needs to implement a cloud-based solution to make the data available for near-real-time analytics.
Which solution will meet these requirements with the LEAST operational overhead?
⬜ A. Use Amazon Kinesis Data Streams to ingest the data. Use AWS Lambda to transform the data. Configure Lambda to write the data to Amazon OpenSearch Service (Amazon Elasticsearch Service).
⬜ B. Configure Amazon Kinesis Data Streams to write the data to an Amazon S3 bucket. Schedule an AWS Glue crawler job to enrich the data and update the AWS Glue Data Catalog. Use Amazon Athena for analytics.
⬜ C. Configure Amazon Kinesis Data Streams to write the data to an Amazon S3 bucket. Add an Apache Spark job on Amazon EMR to enrich the data in the S3 bucket and write the data to Amazon OpenSearch Service (Amazon Elasticsearch Service).
✅ D. Use Amazon Kinesis Data Firehose to ingest the data. Enable Kinesis Data Firehose data transformation with AWS Lambda. Configure Kinesis Data Firehose to write the data to Amazon OpenSearch Service (Amazon Elasticsearch Service).
Explanation:
The best solution for near-real-time analytics with minimal operational overhead is:
● D. Use Amazon Kinesis Data Firehose, a fully managed service that can ingest, transform (using AWS Lambda), and deliver data directly to Amazon OpenSearch Service.
● Kinesis Data Firehose handles scaling, retries, and delivery automatically, reducing the need for operational management.
Why other options are wrong:
A. Kinesis Data Streams + Lambda would require more setup and management (manual provisioning, scaling, and error handling).
B. Writing to S3 and analyzing with Athena is batch-based, not near-real-time.
C. Using EMR and Spark involves more operational overhead (cluster management and tuning) — not the simplest option.
Source: https://docs.aws.amazon.com/firehose/latest/dev/what-is-this-service.html
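For orientation, the Firehose pipeline in option D can be created with a single API call. The sketch below is illustrative only: the role ARNs, Lambda ARN, OpenSearch domain ARN, and backup bucket are placeholders, and a production setup would also configure buffering hints, retries, and logging.

```python
import boto3

firehose = boto3.client("firehose")

firehose.create_delivery_stream(
    DeliveryStreamName="orders-to-opensearch",
    DeliveryStreamType="DirectPut",
    AmazonopensearchserviceDestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-delivery-role",
        "DomainARN": "arn:aws:es:us-east-1:123456789012:domain/analytics",
        "IndexName": "orders",
        # Transform each record with a Lambda function before delivery.
        "ProcessingConfiguration": {
            "Enabled": True,
            "Processors": [
                {
                    "Type": "Lambda",
                    "Parameters": [
                        {
                            "ParameterName": "LambdaArn",
                            "ParameterValue": "arn:aws:lambda:us-east-1:123456789012:function:enrich-orders",
                        }
                    ],
                }
            ],
        },
        # Firehose requires an S3 destination for failed (or all) records.
        "S3Configuration": {
            "RoleARN": "arn:aws:iam::123456789012:role/firehose-delivery-role",
            "BucketARN": "arn:aws:s3:::orders-delivery-backup",
        },
    },
)
```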
A company is building a web application that serves a content management system. The content management system runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The EC2 instances run in an Auto Scaling group across multiple Availability Zones. Users are constantly adding and updating files, blogs, and other website assets in the content management system.
A solutions architect must implement a solution in which all the EC2 instances share up-to-date website content with the least possible lag time.
Which solution meets these requirements?
⬜ A. Update the EC2 user data in the Auto Scaling group lifecycle policy to copy the website assets from the EC2 instance that was launched most recently. Configure the ALB to make changes to the website assets only in the newest EC2 instance.
✅ B. Copy the website assets to an Amazon Elastic File System (Amazon EFS) file system. Configure each EC2 instance to mount the EFS file system locally. Configure the website hosting application to reference the website assets that are stored in the EFS file system.
⬜ C. Copy the website assets to an Amazon S3 bucket. Ensure that each EC2 instance downloads the website assets from the S3 bucket to the attached Amazon Elastic Block Store (Amazon EBS) volume. Run the S3 sync command once each hour to keep files up to date.
⬜ D. Restore an Amazon Elastic Block Store (Amazon EBS) snapshot with the website assets. Attach the EBS snapshot as a secondary EBS volume when a new EC2 instance is launched. Configure the website hosting application to reference the website assets that are stored in the secondary EBS volume.
Explanation:
The correct and most efficient solution is:
● B. Use Amazon EFS (Elastic File System) to provide a shared file system that is accessible simultaneously by all EC2 instances across multiple Availability Zones.
● EFS ensures that all EC2 instances have up-to-date access to the same files with minimal lag time and no need for syncing or manual copying.
Why other options are wrong:
A. Copying from the newest instance is manual, error-prone, and not scalable. It doesn’t ensure low-latency or real-time updates across multiple instances.
C. S3 sync every hour creates a delay of up to an hour, which does not meet near-real-time update requirements.
D. EBS snapshots are static; they do not update in real time. Also, an EBS volume cannot be shared across instances in different Availability Zones (Multi-Attach is limited to io1/io2 volumes within a single AZ).
Source: https://docs.aws.amazon.com/efs/latest/ug/whatisefs.html
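Provisioning the shared file system for the answer above is a two-step API call: create the file system, then create a mount target in each subnet the Auto Scaling group uses. A minimal boto3 sketch with placeholder subnet and security group IDs; the instances would then mount it with the standard NFS/EFS mount helper.

```python
import boto3

efs = boto3.client("efs")

fs = efs.create_file_system(
    CreationToken="cms-shared-assets",       # idempotency token
    PerformanceMode="generalPurpose",
    Encrypted=True,
    Tags=[{"Key": "Name", "Value": "cms-assets"}],
)

# One mount target per Availability Zone used by the Auto Scaling group.
for subnet_id in ["subnet-aaa111", "subnet-bbb222"]:        # placeholders
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],            # must allow NFS (TCP 2049)
    )
```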
An application running on an Amazon EC2 instance needs to access an Amazon DynamoDB table. Both the EC2 instance and the DynamoDB table are in the same AWS account. A solutions architect must configure the necessary permissions.
Which solution will allow least privilege access to the DynamoDB table from the EC2 instance?
✅ A. Create an IAM role with the appropriate policy to allow access to the DynamoDB table. Create an instance profile to assign this IAM role to the EC2 instance.
⬜ B. Create an IAM role with the appropriate policy to allow access to the DynamoDB table. Add the EC2 instance to the trust relationship policy document to allow it to assume the role.
⬜ C. Create an IAM user with the appropriate policy to allow access to the DynamoDB table. Store the credentials in an Amazon S3 bucket and read them from within the application code directly.
⬜ D. Create an IAM user with the appropriate policy to allow access to the DynamoDB table. Ensure that the application stores the IAM credentials securely on local storage and uses them to make the DynamoDB calls.
Explanation:
The best and most secure solution for least privilege access is:
● A. Create an IAM role with the minimal permissions needed for the DynamoDB table, and assign the role to the EC2 instance through an instance profile.
● This allows the EC2 instance to assume the role automatically without hardcoding or manually managing credentials, ensuring secure and temporary access.
Why other options are wrong:
B. An EC2 instance cannot be listed as a principal in a role's trust policy; the trust policy must trust the EC2 service (ec2.amazonaws.com), and the role is delivered to the instance through an instance profile.
C. Storing IAM user credentials in an S3 bucket and fetching them is insecure and unnecessary.
D. Storing IAM user credentials locally on the EC2 instance violates security best practices and increases operational risk.
Source: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html
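The role-plus-instance-profile wiring from option A looks roughly like this in boto3. The policy shown is a least-privilege example scoped to a single table; the role, profile, table ARN, and instance ID are placeholders.

```python
import json
import boto3

iam = boto3.client("iam")
ec2 = boto3.client("ec2")

# Trust policy: only the EC2 service may assume this role.
trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(RoleName="app-dynamodb-role", AssumeRolePolicyDocument=json.dumps(trust))

# Least-privilege inline policy limited to the one table the app needs.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Orders",
    }],
}
iam.put_role_policy(
    RoleName="app-dynamodb-role",
    PolicyName="orders-table-access",
    PolicyDocument=json.dumps(policy),
)

# The instance profile is the container that attaches the role to EC2.
iam.create_instance_profile(InstanceProfileName="app-dynamodb-profile")
iam.add_role_to_instance_profile(
    InstanceProfileName="app-dynamodb-profile", RoleName="app-dynamodb-role"
)
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "app-dynamodb-profile"},
    InstanceId="i-0123456789abcdef0",                        # placeholder instance
)
```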
A solutions architect is designing a new hybrid architecture to extend a company’s on-premises infrastructure to AWS. The company requires a highly available connection with consistent low latency to an AWS Region. The company needs to minimize costs and is willing to accept slower traffic if the primary connection fails.
What should the solutions architect do to meet these requirements?
✅ A. Provision an AWS Direct Connect connection to a Region. Provision a VPN connection as a backup if the primary Direct Connect connection fails.
⬜ B. Provision a VPN tunnel connection to a Region for private connectivity. Provision a second VPN tunnel for private connectivity and as a backup if the primary VPN connection fails.
⬜ C. Provision an AWS Direct Connect connection to a Region. Provision a second Direct Connect connection to the same Region as a backup if the primary Direct Connect connection fails.
⬜ D. Provision an AWS Direct Connect connection to a Region. Use the Direct Connect failover attribute from the AWS CLI to automatically create a backup connection if the primary Direct Connect connection fails.
Explanation:
The best solution that meets high availability, low latency for primary, and cost efficiency is:
● A. Use AWS Direct Connect for primary, low-latency, high-performance private connectivity.
● Provision a VPN connection as a lower-cost, slower backup in case the Direct Connect link fails.
● This setup ensures primary fast access and fallback resilience without the expense of provisioning a second Direct Connect link.
Why other options are wrong:
B. VPNs are cheaper but have higher latency and are less reliable than Direct Connect.
C. A second Direct Connect link would meet latency goals but greatly increases cost, which the company wants to avoid.
D. There is no Direct Connect failover attribute that automatically creates a backup connection; resiliency has to be designed in by provisioning a second connection or a VPN backup.
Source: https://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html
A company is building a disaster recovery (DR) solution. The company wants to rotate its primary systems between AWS Regions on a regular basis. The company’s application is geographically distributed and includes a serverless web tier. The application’s database tier runs on Amazon Aurora.
A solutions architect needs to build an architecture for the database layer to implement managed, planned failover.
Which combination of actions will meet these requirements with the LEAST downtime? (Select TWO.)
⬜ A. Create an Aurora DB cluster. Configure Aurora Replicas.
✅ B. Fail over to one of the secondary DB clusters from another Region.
⬜ C. Create an Aurora DB cluster snapshot. Restore from the snapshot.
✅ D. Configure an Aurora global database. Set up a secondary DB cluster.
⬜ E. Promote one of the read replicas as a writer from the Amazon RDS console.
Explanation:
The correct combination for rotating primary systems between Regions with managed, planned failover and minimal downtime is:
● D. Configure an Aurora global database and add a secondary DB cluster in another Region. Storage-level replication keeps the secondary typically less than a second behind the primary.
● B. Use the global database's managed planned failover to fail over to the secondary DB cluster in the other Region. The secondary is promoted to primary with no data loss, and downtime is usually under a minute.
Why other options are wrong:
A. Aurora Replicas provide read scaling and high availability within a single Region only; they do not provide the cross-Region managed planned failover the company needs.
C. Restoring from snapshots introduces significant downtime and data staleness — not suitable for low downtime DR.
E. Promoting a read replica is a manual process with longer downtime and potential replication lag issues.
Source: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database.html
A company needs the ability to analyze the log files of its proprietary application. The logs are stored in JSON format in an Amazon S3 bucket. Queries will be simple and will run on-demand. A solutions architect needs to perform the analysis with minimal changes to the existing architecture.
What should the solutions architect do to meet these requirements with the LEAST amount of operational overhead?
⬜ A. Use Amazon Redshift to load all the content into one place and run the SQL queries as needed.
⬜ B. Use Amazon CloudWatch Logs to store the logs. Run SQL queries as needed from the Amazon CloudWatch console.
✅ C. Use Amazon Athena directly with Amazon S3 to run the queries as needed.
⬜ D. Use AWS Glue to catalog the logs. Use a transient Apache Spark cluster on Amazon EMR to run the SQL queries as needed.
Explanation:
The best solution with minimal operational overhead is:
● C. Use Amazon Athena directly to query the JSON logs in Amazon S3 using standard SQL syntax.
● Athena is serverless, so there is no infrastructure to manage, and you pay per query. It can directly query structured, semi-structured (like JSON), and unstructured data stored in S3 without moving the data.
Why other options are wrong:
A. Amazon Redshift would require loading data into Redshift, which introduces extra steps and management overhead.
B. CloudWatch Logs would first require ingesting the logs from S3, and CloudWatch Logs Insights uses its own query language rather than standard SQL; this adds steps and cost compared with querying the data in place.
D. AWS Glue + EMR introduces more complexity and cost, and is overkill for simple, on-demand SQL queries.
Source: https://docs.aws.amazon.com/athena/latest/ug/what-is.html
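Running an ad-hoc query against the JSON logs needs only a table definition (created once, for example with a CREATE EXTERNAL TABLE statement using the JSON SerDe) and a call to Athena. A hedged boto3 sketch; the database, table, and results bucket are placeholders.

```python
import time
import boto3

athena = boto3.client("athena")

query = "SELECT status, COUNT(*) AS hits FROM app_logs GROUP BY status"

execution = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "logs_db"},                      # placeholder
    ResultConfiguration={"OutputLocation": "s3://query-results-bucket/athena/"},
)
query_id = execution["QueryExecutionId"]

# Poll until the query finishes, then print the first page of results.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    for row in athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])
```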
A company uses AWS Organizations to manage multiple AWS accounts for different departments. The management account has an Amazon S3 bucket that contains project reports. The company wants to limit access to this S3 bucket to only users of accounts within the organization in AWS Organizations.
Which solution meets these requirements with the LEAST amount of operational overhead?
✅ A. Add the aws:PrincipalOrgID global condition key with a reference to the organization ID to the S3 bucket policy.
⬜ B. Create an organizational unit (OU) for each department. Add the aws:PrincipalOrgPaths global condition key to the S3 bucket policy.
⬜ C. Use AWS CloudTrail to monitor the CreateAccount, InviteAccountToOrganization, LeaveOrganization, and RemoveAccountFromOrganization events. Update the S3 bucket policy accordingly.
⬜ D. Tag each user that needs access to the S3 bucket. Add the aws:PrincipalTag global condition key to the S3 bucket policy.
Explanation:
The best solution with minimal operational overhead is:
● A. Use the aws:PrincipalOrgID condition key in the S3 bucket policy to automatically allow access only to identities that belong to accounts within a specific AWS Organization.
● This is simple, scalable, and automatically covers all accounts in the organization without needing to track or manually update policies as accounts are added or removed.
Why other options are wrong:
B. aws:PrincipalOrgPaths is more specific to organizational unit (OU) paths, but not needed just to restrict access to the entire organization.
C. Monitoring CloudTrail events and updating policies manually would be high operational overhead and error-prone.
D. Tagging individual users introduces manual tagging effort and requires managing tag consistency, which is unnecessary for this use case.
Source: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-principalorgid
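The entire control from option A is one statement in the bucket policy. A minimal sketch; the bucket name, actions, and organization ID are placeholders you would replace with your own values.

```python
import json
import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowOrgMembersOnly",
        "Effect": "Allow",
        "Principal": "*",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::project-reports",
            "arn:aws:s3:::project-reports/*",
        ],
        # Only principals from accounts inside this organization match.
        "Condition": {"StringEquals": {"aws:PrincipalOrgID": "o-a1b2c3d4e5"}},
    }],
}

s3.put_bucket_policy(Bucket="project-reports", Policy=json.dumps(policy))
```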
An application runs on an Amazon EC2 instance in a VPC. The application processes logs that are stored in an Amazon S3 bucket. The EC2 instance needs to access the S3 bucket without connectivity to the internet.
Which solution will provide private network connectivity to Amazon S3?
✅ A. Create a gateway VPC endpoint to the S3 bucket.
⬜ B. Stream the logs to Amazon CloudWatch Logs. Export the logs to the S3 bucket.
⬜ C. Create an instance profile on Amazon EC2 to allow S3 access.
⬜ D. Create an Amazon API Gateway API with a private link to access the S3 endpoint.
Explanation:
The correct solution for private access to Amazon S3 from within a VPC is:
● A. Create a gateway VPC endpoint for S3.
● A VPC gateway endpoint enables private connections between your VPC and S3 without requiring an internet gateway, NAT device, or VPN connection, ensuring that traffic does not leave the AWS network.
Why other options are wrong:
B. Streaming logs to CloudWatch and then exporting to S3 is not direct access and adds unnecessary complexity.
C. An instance profile provides authorization (IAM permissions) to access S3, but it does not change the network path — the EC2 instance would still need internet access unless using a VPC endpoint.
D. API Gateway is unnecessary and overcomplicates a simple S3 access requirement.
Source: https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html
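Creating the gateway endpoint is a single call; unlike interface endpoints, it attaches to route tables rather than subnets. A sketch with placeholder VPC and route table IDs.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoints for S3 are associated with route tables, not subnets.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",                        # placeholder VPC
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],              # private subnet route table
)
```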
A company is hosting a web application on AWS using a single Amazon EC2 instance that stores user-uploaded documents in an Amazon EBS volume. For better scalability and availability, the company duplicated the architecture and created a second EC2 instance and EBS volume in another Availability Zone, placing both behind an Application Load Balancer. After completing this change, users reported that, each time they refreshed the website, they could see one subset of their documents or the other, but never all of the documents at the same time.
What should a solutions architect propose to ensure users see all of their documents at once?
⬜ A. Copy the data so both EBS volumes contain all the documents.
⬜ B. Configure the Application Load Balancer to direct a user to the server with the documents.
✅ C. Copy the data from both EBS volumes to Amazon EFS. Modify the application to save new documents to Amazon EFS.
⬜ D. Configure the Application Load Balancer to send the request to both servers. Return each document from the correct server.
Explanation:
The correct solution is:
● C. Use Amazon EFS (Elastic File System), which provides a shared file system accessible from multiple EC2 instances across Availability Zones.
● By storing all documents centrally in EFS, both EC2 instances can simultaneously access and update the same files, ensuring users always see the complete set of documents, regardless of which instance serves the request.
Why other options are wrong:
A. Manually copying data is error-prone, not scalable, and does not ensure real-time consistency.
B. Routing users to the “correct server” does not solve the problem for new uploads or failover situations.
D. Sending requests to both servers would double network overhead, complicate application logic, and still not guarantee consistency.
Source: https://docs.aws.amazon.com/efs/latest/ug/whatisefs.html
A company has an application that ingests incoming messages. Dozens of other applications and microservices then quickly consume these messages. The number of messages varies drastically and sometimes increases suddenly to 100,000 each second. The company wants to decouple the solution and increase scalability.
Which solution meets these requirements?
⬜ A. Persist the messages to Amazon Kinesis Data Analytics. Configure the consumer applications to read and process the messages.
⬜ B. Deploy the ingestion application on Amazon EC2 instances in an Auto Scaling group to scale the number of EC2 instances based on CPU metrics.
⬜ C. Write the messages to Amazon Kinesis Data Streams with a single shard. Use an AWS Lambda function to preprocess messages and store them in Amazon DynamoDB. Configure the consumer applications to read from DynamoDB to process the messages.
✅ D. Publish the messages to an Amazon Simple Notification Service (Amazon SNS) topic with multiple Amazon Simple Queue Service (Amazon SQS) subscriptions. Configure the consumer applications to process the messages from the queues.
Explanation:
The correct solution for highly scalable, decoupled, fan-out messaging architecture is:
● D. Use Amazon SNS to publish messages once and fan them out to multiple SQS queues.
● Each consumer application can process messages independently from its own SQS queue, allowing scalability and decoupling.
● SQS standard queues support virtually unlimited throughput, so the fan-out can absorb sudden surges such as 100,000 messages each second.
Why other options are wrong:
A. Kinesis Data Analytics is for real-time analytics, not for fan-out message distribution.
B. Scaling EC2 instances on CPU metrics reacts too slowly to sudden spikes and does not decouple the ingestion application from its consumers.
C. A single shard in Kinesis Data Streams would not handle 100,000 messages/sec (each shard supports only about 1 MB/sec or 1,000 records/sec).
Source: https://docs.aws.amazon.com/sns/latest/dg/sns-sqs-as-subscriber.html
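A minimal boto3 sketch of the SNS-to-SQS fan-out, assuming two example consumer queues; queue names are placeholders. The queue policy step is what authorizes the topic to deliver into each queue.

```python
import json
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

topic_arn = sns.create_topic(Name="incoming-messages")["TopicArn"]

# One queue per consumer application (two shown here as examples).
for name in ["billing-consumer", "inventory-consumer"]:
    queue_url = sqs.create_queue(QueueName=name)["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]

    # Allow the topic to deliver messages into this queue.
    sqs.set_queue_attributes(
        QueueUrl=queue_url,
        Attributes={"Policy": json.dumps({
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Principal": {"Service": "sns.amazonaws.com"},
                "Action": "sqs:SendMessage",
                "Resource": queue_arn,
                "Condition": {"ArnEquals": {"aws:SourceArn": topic_arn}},
            }],
        })},
    )

    sns.subscribe(
        TopicArn=topic_arn,
        Protocol="sqs",
        Endpoint=queue_arn,
        Attributes={"RawMessageDelivery": "true"},
    )
```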
A company is migrating a distributed application to AWS. The application serves variable workloads. The legacy platform consists of a primary server that coordinates jobs across multiple compute nodes. The company wants to modernize the application with a solution that maximizes resiliency and scalability.
How should a solutions architect design the architecture to meet these requirements?
⬜ A. Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling to use scheduled scaling.
✅ B. Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling based on the size of the queue.
⬜ C. Implement the primary server and the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure AWS CloudTrail as a destination for the jobs. Configure EC2 Auto Scaling based on the load on the primary server.
⬜ D. Implement the primary server and the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure Amazon EventBridge (Amazon CloudWatch Events) as a destination for the jobs. Configure EC2 Auto Scaling based on the load on the compute nodes.
Explanation:
The correct and most resilient and scalable solution is:
● B. Use Amazon SQS to decouple job coordination and Auto Scaling based on queue size.
● When the number of pending messages grows, EC2 Auto Scaling can launch more compute instances to process the jobs, ensuring the system automatically adapts to workload variability without depending on a single primary server.
Why other options are wrong:
A. Scheduled scaling is static and cannot adapt dynamically to sudden changes in workload.
C. CloudTrail is for logging API events, not job coordination.
D. EventBridge is for event-driven architecture, but not a queue service designed for handling large volumes of jobs in a decoupled manner.
Source: https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-using-sqs-queue.html
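AWS's recommended pattern is a target-tracking policy on a custom "backlog per instance" metric; the simplified sketch below tracks raw queue depth instead, which is enough to show the mechanism. The Auto Scaling group name, queue name, and target value are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="job-workers",                 # placeholder ASG name
    PolicyName="scale-on-queue-depth",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        # Aim for ~100 visible messages; production setups usually track
        # backlog-per-instance rather than the raw queue depth.
        "TargetValue": 100.0,
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Namespace": "AWS/SQS",
            "Dimensions": [{"Name": "QueueName", "Value": "job-queue"}],
            "Statistic": "Average",
        },
    },
)
```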
A company is running an SMB file server in its data center. The file server stores large files that are accessed frequently for the first few days after the files are created. After 7 days, the files are rarely accessed.
The total data size is increasing and is close to the company’s total storage capacity. A solutions architect must increase the company’s available storage space without losing low-latency access to the most recently accessed files. The solutions architect must also provide file lifecycle management to avoid future storage issues.
Which solution will meet these requirements?
⬜ A. Use AWS DataSync to copy data that is older than 7 days from the SMB file server to AWS.
✅ B. Create an Amazon S3 File Gateway to extend the company’s storage space. Create an S3 Lifecycle policy to transition the data to S3 Glacier Deep Archive after 7 days.
⬜ C. Create an Amazon FSx for Windows File Server file system to extend the company’s storage space.
⬜ D. Install a utility on each user’s computer to access Amazon S3. Create an S3 Lifecycle policy to transition the data to S3 Glacier Flexible Retrieval after 7 days.
Explanation:
The correct solution that extends storage, preserves low-latency access for new files, and automates lifecycle management is:
● B. Use an Amazon S3 File Gateway to provide on-premises SMB access to files stored in Amazon S3, while using local caching for recently accessed files.
● Additionally, an S3 Lifecycle policy can automatically transition older files to S3 Glacier Deep Archive to save costs on infrequently accessed data.
Why other options are wrong:
A. AWS DataSync copies data but does not extend the local file system or provide ongoing, seamless access.
C. Amazon FSx for Windows File Server is fully managed, but it runs in AWS; it offers no local cache for recently created files and no built-in lifecycle tiering to archival storage, so it does not seamlessly extend the on-premises file server.
D. Installing a utility on each user’s computer adds complexity and breaks the SMB user experience — not a seamless extension.
Source: https://docs.aws.amazon.com/filegateway/latest/files3/WhatIsStorageGateway.html
A company is implementing a shared storage solution for a media application that is hosted in the AWS Cloud. The company needs the ability to use SMB clients to access data. The solution must be fully managed.
Which AWS solution meets these requirements?
⬜ A. Create an AWS Storage Gateway volume gateway. Create a file share that uses the required client protocol. Connect the application server to the file share.
⬜ B. Create an AWS Storage Gateway tape gateway. Configure tapes to use Amazon S3. Connect the application server to the tape gateway.
⬜ C. Create an Amazon EC2 Windows instance. Install and configure a Windows file share role on the instance. Connect the application server to the file share.
✅ D. Create an Amazon FSx for Windows File Server file system. Attach the file system to the origin server. Connect the application server to the file system.
Explanation:
The best solution for SMB (Server Message Block) access with a fully managed AWS service is:
● D. Amazon FSx for Windows File Server.
● It provides a fully managed, native Windows file system with SMB protocol support, Windows ACLs, Active Directory integration, and high availability across multiple AZs.
Why other options are wrong:
A. Storage Gateway volume gateway provides iSCSI block storage, not SMB file storage.
B. Storage Gateway tape gateway is for virtual tape libraries, not file sharing.
C. Setting up a Windows file server on EC2 requires server management, which goes against the requirement for a fully managed solution.
Source: https://docs.aws.amazon.com/fsx/latest/WindowsGuide/what-is.html
A solutions architect must migrate a Windows Internet Information Services (IIS) web application to AWS. The application currently relies on a file share hosted in the user’s on-premises network-attached storage (NAS). The solutions architect has proposed migrating the IIS web servers to Amazon EC2 instances in multiple Availability Zones that are connected to the storage solution, and configuring an Elastic Load Balancer attached to the instances.
Which replacement to the on-premises file share is MOST resilient and durable?
⬜ A. Migrate the file share to Amazon RDS.
⬜ B. Migrate the file share to AWS Storage Gateway.
✅ C. Migrate the file share to Amazon FSx for Windows File Server.
⬜ D. Migrate the file share to Amazon Elastic File System (Amazon EFS).
Explanation:
The best replacement for an on-premises Windows-based file share that supports IIS web applications is:
● C. Use Amazon FSx for Windows File Server, which provides a fully managed, highly available, and durable Windows-native SMB file system.
● It natively supports Windows file system features (such as NTFS, Active Directory integration, and DFS namespaces), making it the ideal solution for a seamless migration with high resilience and durability across Availability Zones.
Why other options are wrong:
A. Amazon RDS is for relational databases, not file shares.
B. AWS Storage Gateway is typically used to extend on-premises storage to AWS, not as a full native AWS file system.
D. Amazon EFS is primarily for Linux-based file systems (NFS protocol), not Windows IIS environments.
Source: https://docs.aws.amazon.com/fsx/latest/WindowsGuide/what-is.html
Your team is developing a high-performance computing (HPC) application. The application resolves complex, compute-intensive problems and needs a high-performance and low-latency Lustre file system. You need to configure this file system in AWS at a low cost. Which method is the most suitable?
✅ A. Create a Lustre file system through Amazon FSx.
⬜ B. Launch a high-performance Lustre file system in Amazon EBS.
⬜ C. Create a high-speed volume cluster in an EC2 placement group.
⬜ D. Launch the Lustre file system from AWS Marketplace.
Explanation:
● Amazon FSx for Lustre is a fully managed service that provides a high-performance file system optimized for fast processing of workloads like machine learning, HPC, and analytics.
● It offers low-latency and high-throughput performance while being cost-effective by integrating with Amazon S3 and using cost-optimized storage.
● FSx for Lustre simplifies deployment without requiring manual configuration and maintenance, making it the best option for HPC needs.
Why other options are wrong:
B. Launch a high-performance Lustre file system in Amazon EBS is incorrect because Amazon EBS provides block storage, not a distributed file system like Lustre.
C. Create a high-speed volume cluster in an EC2 placement group focuses on optimizing networking between instances, not on providing a Lustre file system.
D. Launch the Lustre file system from AWS Marketplace would require manual setup and management, increasing operational complexity and costs compared to using Amazon FSx.
Source: https://docs.aws.amazon.com/fsx/latest/LustreGuide/what-is.html
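A sketch of provisioning the Lustre file system with boto3, assuming a short-lived scratch deployment to keep costs low; the subnet ID, capacity, and optional S3 import path are placeholders.

```python
import boto3

fsx = boto3.client("fsx")

fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,                     # GiB; smallest scratch size
    SubnetIds=["subnet-aaa111"],              # placeholder subnet
    LustreConfiguration={
        # Scratch deployment keeps cost low for short-lived HPC runs;
        # a persistent deployment type would suit longer-lived data.
        "DeploymentType": "SCRATCH_2",
        # Optionally lazy-load input data from an S3 bucket (placeholder).
        "ImportPath": "s3://hpc-input-data",
    },
)
```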
A company is implementing a shared storage solution for a gaming application that is hosted in the AWS Cloud. The company needs the ability to use Lustre clients to access data. The solution must be fully managed. Which solution meets these requirements?
⬜ A. Create an AWS DataSync task that shares the data as a mountable file system. Mount the file system to the application server.
⬜ B. Create an AWS Storage Gateway file gateway. Create a file share that uses the required client protocol. Connect the application server to the file share.
⬜ C. Create an Amazon Elastic File System (Amazon EFS) file system, and configure it to support Lustre. Attach the file system to the origin server. Connect the application server to the file system.
✅ D. Create an Amazon FSx for Lustre file system. Attach the file system to the origin server. Connect the application server to the file system.
Explanation:
● Amazon FSx for Lustre is a fully managed service that provides a high-performance file system optimized for fast processing of workloads like gaming applications, big data analytics, and machine learning that use Lustre clients. It is the correct choice when Lustre compatibility is required.
Why other options are wrong:
A. AWS DataSync is used for transferring data between storage systems but does not provide a mountable Lustre file system.
B. AWS Storage Gateway (file gateway) provides SMB or NFS access, not Lustre protocol support.
C. Amazon EFS supports NFS protocol but not Lustre; it cannot fulfill the requirement for Lustre client access.
Source: https://docs.aws.amazon.com/fsx/latest/LustreGuide/what-is.html
A company is building an ecommerce web application on AWS. The application sends information about new orders to an Amazon API Gateway REST API to process. The company wants to ensure that orders are processed in the order that they are received.
Which solution will meet these requirements?
⬜ A. Use an API Gateway integration to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic when the application receives an order. Subscribe an AWS Lambda function to the topic to perform processing.
✅ B. Use an API Gateway integration to send a message to an Amazon Simple Queue Service (Amazon SQS) FIFO queue when the application receives an order. Configure the SQS FIFO queue to invoke an AWS Lambda function for processing.
⬜ C. Use an API Gateway authorizer to block any requests while the application processes an order.
⬜ D. Use an API Gateway integration to send a message to an Amazon Simple Queue Service (Amazon SQS) standard queue when the application receives an order. Configure the SQS standard queue to invoke an AWS Lambda function for processing.
Explanation:
The correct solution for preserving the order of message processing is:
● B. Use an Amazon SQS FIFO (First-In-First-Out) queue, which guarantees that messages are delivered and processed in the order they are sent within a message group.
● API Gateway sends each order message to the FIFO queue, and Lambda then processes the messages one by one in the exact order received.
Why other options are wrong:
A. SNS is a pub/sub system and does not guarantee ordering of message delivery.
C. An API Gateway authorizer is used for authentication/authorization, not for sequencing or managing request flow.
D. An SQS standard queue provides best-effort ordering but does not guarantee strict ordering, which is required here.
Source: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues.html
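Creating and using the FIFO queue is straightforward; the queue name, message body, and message group ID below are placeholders. Ordering is guaranteed per MessageGroupId, so orders that must stay in sequence should share a group ID.

```python
import boto3

sqs = boto3.client("sqs")

# FIFO queue names must end in ".fifo".
queue_url = sqs.create_queue(
    QueueName="orders.fifo",
    Attributes={
        "FifoQueue": "true",
        "ContentBasedDeduplication": "true",   # dedupe identical order payloads
    },
)["QueueUrl"]

# Messages that share a MessageGroupId are delivered strictly in order.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody='{"orderId": "1001", "item": "book"}',
    MessageGroupId="orders",
)
```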
A company has an application that runs on Amazon EC2 instances and uses an Amazon Aurora database. The EC2 instances connect to the database by using user names and passwords that are stored locally in a file. The company wants to minimize the operational overhead of credential management.
What should a solutions architect do to accomplish this goal?
✅ A. Use AWS Secrets Manager. Turn on automatic rotation.
⬜ B. Use AWS Systems Manager Parameter Store. Turn on automatic rotation.
⬜ C. Create an Amazon S3 bucket to store objects that are encrypted with an AWS Key Management Service (AWS KMS) encryption key. Migrate the credential file to the S3 bucket. Point the application to the S3 bucket.
⬜ D. Create an encrypted Amazon Elastic Block Store (Amazon EBS) volume for each EC2 instance. Attach the new EBS volume to each EC2 instance. Migrate the credential file to the new EBS volume. Point the application to the new EBS volume.
Explanation:
The correct and most efficient solution to minimize operational overhead for secure credential management is:
● A. Use AWS Secrets Manager to store and automatically rotate database credentials securely.
● Secrets Manager integrates easily with Amazon Aurora, and you can configure automatic rotation without changing application code significantly, reducing manual credential maintenance and increasing security.
Why other options are wrong:
B. AWS Systems Manager Parameter Store can store secrets, but it has no built-in automatic rotation; scheduled rotation is a native feature of Secrets Manager.
C. Storing credentials in an S3 bucket, even encrypted, requires custom access logic and manual rotation — more operational overhead.
D. Storing credentials on encrypted EBS volumes does not solve the credential management or rotation challenge — it’s still manual and less secure.
Source: https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html
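A sketch of storing the Aurora credentials and turning on rotation, assuming a rotation Lambda already exists (for example, one built from AWS's published rotation templates). The secret name, credential values, endpoint, and Lambda ARN are placeholders.

```python
import json
import boto3

sm = boto3.client("secretsmanager")

# Store the Aurora credentials as a structured secret.
secret = sm.create_secret(
    Name="prod/aurora/app-user",
    SecretString=json.dumps({
        "username": "app_user",
        "password": "initial-password-set-once",   # rotated automatically afterwards
        "host": "my-aurora-cluster.cluster-xxxx.us-east-1.rds.amazonaws.com",
        "port": 3306,
    }),
)

# Turn on scheduled rotation; the ARN must point at an existing rotation
# function (placeholder shown here).
sm.rotate_secret(
    SecretId=secret["ARN"],
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:SecretsManagerMySQLRotation",
    RotationRules={"AutomaticallyAfterDays": 30},
)
```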
A global company hosts its web application on Amazon EC2 instances behind an Application Load Balancer (ALB). The web application has static data and dynamic data. The company stores its static data in an Amazon S3 bucket. The company wants to improve performance and reduce latency for the static data and dynamic data. The company is using its own domain name registered with Amazon Route 53.
What should a solutions architect do to meet these requirements?
✅ A. Create an Amazon CloudFront distribution that has the S3 bucket and the ALB as origins. Configure Route 53 to route traffic to the CloudFront distribution.
⬜ B. Create an Amazon CloudFront distribution that has the ALB as an origin. Create an AWS Global Accelerator standard accelerator that has the S3 bucket as an endpoint. Configure Route 53 to route traffic to the CloudFront distribution.
⬜ C. Create an Amazon CloudFront distribution that has the S3 bucket as an origin. Create an AWS Global Accelerator standard accelerator that has the ALB and the CloudFront distribution as endpoints. Create a custom domain name that points to the accelerator DNS name. Use the custom domain name as an endpoint for the web application.
⬜ D. Create an Amazon CloudFront distribution that has the ALB as an origin. Create an AWS Global Accelerator standard accelerator that has the S3 bucket as an endpoint. Create two domain names. Point one domain name to the CloudFront DNS name for dynamic content. Point the other domain name to the accelerator DNS name for static content. Use the domain names as endpoints for the web application.
Explanation:
The correct and most efficient solution is:
● A. Use Amazon CloudFront as a single distribution with multiple origins (S3 for static content and ALB for dynamic content).
● This provides global caching to reduce latency, improve performance, and simplify routing with one unified domain via Route 53.
Why other options are wrong:
● B, C, and D involve using AWS Global Accelerator, which primarily optimizes TCP/UDP traffic rather than providing caching and content distribution like CloudFront.
● Additionally, creating multiple domain names and accelerators unnecessarily increases complexity without providing better performance for static content caching compared to CloudFront.
Source: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html
A company performs monthly maintenance on its AWS infrastructure. During these maintenance activities, the company needs to rotate the credentials for its Amazon RDS for MySQL databases across multiple AWS Regions.
Which solution will meet these requirements with the LEAST operational overhead?
✅ A. Store the credentials as secrets in AWS Secrets Manager. Use multi-Region secret replication for the required Regions. Configure Secrets Manager to rotate the secrets on a schedule.
⬜ B. Store the credentials as secrets in AWS Systems Manager by creating a secure string parameter. Use multi-Region secret replication for the required Regions. Configure Systems Manager to rotate the secrets on a schedule.
⬜ C. Store the credentials in an Amazon S3 bucket that has server-side encryption (SSE) enabled. Use Amazon EventBridge (Amazon CloudWatch Events) to invoke an AWS Lambda function to rotate the credentials.
⬜ D. Encrypt the credentials as secrets by using AWS Key Management Service (AWS KMS) multi-Region customer managed keys. Store the secrets in an Amazon DynamoDB global table. Use an AWS Lambda function to retrieve the secrets from DynamoDB. Use the RDS API to rotate the secrets.
Explanation:
The correct solution that provides least operational overhead with automatic rotation and multi-Region replication is:
A. Use AWS Secrets Manager, which natively supports secret rotation, multi-Region replication, and integration with Amazon RDS for managed credentials without needing to build custom Lambda functions or scripts.
Why other options are wrong:
B. AWS Systems Manager Parameter Store does not natively support automatic rotation or seamless multi-Region replication like Secrets Manager.
C. Managing credentials in S3 with custom EventBridge and Lambda adds high operational complexity.
D. Using DynamoDB and KMS with custom Lambda logic is overly complicated and manual compared to Secrets Manager’s managed features.
Source: https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotate-secrets.html
A company runs an ecommerce application on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. The Auto Scaling group scales based on CPU utilization metrics. The ecommerce application stores the transaction data in a MySQL 8.0 database that is hosted on a large EC2 instance.
The database’s performance degrades quickly as application load increases. The application handles more read requests than write transactions. The company wants a solution that will automatically scale the database to meet the demand of unpredictable read workloads while maintaining high availability.
Which solution will meet these requirements?
⬜ A. Use Amazon Redshift with a single node for leader and compute functionality.
⬜ B. Use Amazon RDS with a Single-AZ deployment. Configure Amazon RDS to add reader instances in a different Availability Zone.
✅ C. Use Amazon Aurora with a Multi-AZ deployment. Configure Aurora Auto Scaling with Aurora Replicas.
⬜ D. Use Amazon ElastiCache for Memcached with EC2 Spot Instances.
Explanation:
The correct solution to handle unpredictable read workloads with automatic scaling and high availability is:
● C. Use Amazon Aurora with a Multi-AZ deployment and Aurora Auto Scaling for Aurora Replicas.
● Aurora Replicas can scale out read workloads automatically based on demand, while the primary instance handles writes.
● Aurora’s architecture provides high availability, low replication lag, and fault tolerance across multiple Availability Zones.
Why other options are wrong:
A. Amazon Redshift is for analytical (OLAP) workloads, not transactional (OLTP) ecommerce applications.
B. Amazon RDS Single-AZ deployment does not provide high availability. Also, manually adding readers is not as seamless as Aurora Auto Scaling.
D. ElastiCache (e.g., Memcached) is for caching — it cannot replace a relational database for transaction persistence.
Source: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-replica-autoscaling.html
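Aurora replica auto scaling is configured through Application Auto Scaling. A hedged sketch; the cluster identifier, capacity limits, and CPU target are placeholders chosen for illustration.

```python
import boto3

aas = boto3.client("application-autoscaling")

# Register the Aurora cluster's replica count as a scalable target.
aas.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:ecommerce-aurora-cluster",       # placeholder cluster ID
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=15,
)

# Add or remove Aurora Replicas to keep average reader CPU near 60%.
aas.put_scaling_policy(
    PolicyName="aurora-reader-cpu-target",
    ServiceNamespace="rds",
    ResourceId="cluster:ecommerce-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
    },
)
```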
A company recently migrated to AWS and wants to implement a solution to protect the traffic that flows in and out of the production VPC. The company had an inspection server in its on-premises data center. The inspection server performed specific operations such as traffic flow inspection and traffic filtering. The company wants to have the same functionalities in the AWS Cloud.
Which solution will meet these requirements?
⬜ A. Use Amazon GuardDuty for traffic inspection and traffic filtering in the production VPC.
⬜ B. Use Traffic Mirroring to mirror traffic from the production VPC for traffic inspection and filtering.
✅ C. Use AWS Network Firewall to create the required rules for traffic inspection and traffic filtering for the production VPC.
⬜ D. Use AWS Firewall Manager to create the required rules for traffic inspection and traffic filtering for the production VPC.
Explanation:
The correct solution for inspecting, filtering, and protecting traffic in and out of a VPC is:
● C. Use AWS Network Firewall, which provides stateful traffic inspection, intrusion prevention, and fine-grained traffic filtering capabilities inside the VPC.
● Network Firewall is managed, scalable, and natively integrated with VPCs, making it ideal for production-level traffic control and threat protection.
Why other options are wrong:
A. Amazon GuardDuty detects threats but does not actively filter or inspect traffic in real time.
B. Traffic Mirroring captures packets for analysis only — it does not block or filter traffic.
D. AWS Firewall Manager manages firewall policies across multiple accounts, but it uses Network Firewall or WAF — by itself it does not perform direct traffic inspection.
Source: https://docs.aws.amazon.com/network-firewall/latest/developerguide/what-is-aws-network-firewall.html
A company hosts a data lake on AWS. The data lake consists of data in Amazon S3 and Amazon RDS for PostgreSQL. The company needs a reporting solution that provides data visualization and includes all the data sources within the data lake. Only the company’s management team should have full access to all the visualizations. The rest of the company should have only limited access.
Which solution will meet these requirements?
⬜ A. Create an analysis in Amazon QuickSight. Connect all the data sources and create new datasets. Publish dashboards to visualize the data. Share the dashboards with the appropriate IAM roles.
✅ B. Create an analysis in Amazon QuickSight. Connect all the data sources and create new datasets. Publish dashboards to visualize the data. Share the dashboards with the appropriate users and groups.
⬜ C. Create an AWS Glue table and crawler for the data in Amazon S3. Create an AWS Glue extract, transform, and load (ETL) job to produce reports. Publish the reports to Amazon S3. Use S3 bucket policies to limit access to the reports.
⬜ D. Create an AWS Glue table and crawler for the data in Amazon S3. Use Amazon Athena Federated Query to access data within Amazon RDS for PostgreSQL. Generate reports by using Amazon Athena. Publish the reports to Amazon S3. Use S3 bucket policies to limit access to the reports.
Explanation:
The correct solution for data visualization across multiple data sources with fine-grained user access control is:
● B. Use Amazon QuickSight, which can connect to multiple data sources (like S3 and RDS) and allows sharing dashboards with specific users and groups for precise access control.
● QuickSight allows easy management of user-based access with different levels of visibility based on groups and users.
Why other options are wrong:
A. IAM roles are not the correct method for managing QuickSight user access to dashboards. QuickSight natively manages user and group sharing.
C. AWS Glue ETL jobs and S3 reports do not provide interactive visualizations — they just produce static files.
D. Athena with S3 reports similarly lacks live dashboards and interactive visualization, and S3 access policies are less flexible for user-specific visual controls.
Source: https://docs.aws.amazon.com/quicksight/latest/user/welcome.html
A company is implementing a new business application. The application runs on two Amazon EC2 instances and uses an Amazon S3 bucket for document storage. A solutions architect needs to ensure that the EC2 instances can access the S3 bucket.
What should the solutions architect do to meet this requirement?
✅ A. Create an IAM role that grants access to the S3 bucket. Attach the role to the EC2 instances.
⬜ B. Create an IAM policy that grants access to the S3 bucket. Attach the policy to the EC2 instances.
⬜ C. Create an IAM group that grants access to the S3 bucket. Attach the group to the EC2 instances.
⬜ D. Create an IAM user that grants access to the S3 bucket. Attach the user account to the EC2 instances.
Explanation:
The correct and secure way for EC2 instances to access AWS services like S3 is:
● A. Create an IAM role with the appropriate permissions to access the S3 bucket, and attach the role to the EC2 instances.
● The instances then automatically obtain temporary security credentials via the role, which is the best practice for secure, automatic credential management in AWS.
Why other options are wrong:
B. You cannot directly attach an IAM policy to an EC2 instance; it must be attached to a role.
C. IAM groups are for grouping users, not for attaching to EC2 instances.
D. IAM users are meant for people, not for EC2 instances — attaching user credentials manually is not recommended and less secure.
Source: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html
An application development team is designing a microservice that will convert large images to smaller, compressed images. When a user uploads an image through the web interface, the microservice should store the image in an Amazon S3 bucket, process and compress the image with an AWS Lambda function, and store the image in its compressed form in a different S3 bucket.
A solutions architect needs to design a solution that uses durable, stateless components to process the images automatically.
Which combination of actions will meet these requirements? (Choose two.)
✅ A. Create an Amazon Simple Queue Service (Amazon SQS) queue. Configure the S3 bucket to send a notification to the SQS queue when an image is uploaded to the S3 bucket.
✅ B. Configure the Lambda function to use the Amazon Simple Queue Service (Amazon SQS) queue as the invocation source. When the SQS message is successfully processed, delete the message in the queue.
⬜ C. Configure the Lambda function to monitor the S3 bucket for new uploads. When an uploaded image is detected, write the file name to a text file in memory and use the text file to keep track of the images that were processed.
⬜ D. Launch an Amazon EC2 instance to monitor an Amazon Simple Queue Service (Amazon SQS) queue. When items are added to the queue, log the file name in a text file on the EC2 instance and invoke the Lambda function.
⬜ E. Configure an Amazon EventBridge (Amazon CloudWatch Events) event to monitor the S3 bucket. When an image is uploaded, send an alert to an Amazon Simple Notification Service (Amazon SNS) topic with the application owner’s email address for further processing.
Explanation:
The best solution that uses durable, stateless, and serverless components is:
A. Use Amazon SQS to queue notifications from the S3 bucket, ensuring durability and resiliency even if Lambda is temporarily unavailable.
B. Configure the Lambda function to poll the SQS queue. Lambda can automatically scale to process messages and delete messages after successful processing.
This decouples upload events from processing, ensuring reliable and automatic image compression without manual tracking or state maintenance.
Why other options are wrong:
C. Writing filenames manually to memory is not durable or stateless — memory is lost when the Lambda function execution ends.
D. Launching EC2 to monitor the queue adds unnecessary operational overhead — Lambda can natively poll SQS.
E. EventBridge and SNS only alert — they do not trigger processing automatically for the uploaded images.
Source: https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html
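The S3-to-SQS-to-Lambda wiring from answers A and B can be set up with two calls, assuming the queue policy already allows s3.amazonaws.com to send messages. Bucket, queue ARN, and function names are placeholders.

```python
import boto3

s3 = boto3.client("s3")
lambda_client = boto3.client("lambda")

# 1) S3 pushes an event to the SQS queue whenever a new image is uploaded.
s3.put_bucket_notification_configuration(
    Bucket="raw-images-bucket",                                   # placeholder
    NotificationConfiguration={
        "QueueConfigurations": [{
            "QueueArn": "arn:aws:sqs:us-east-1:123456789012:image-uploads",
            "Events": ["s3:ObjectCreated:*"],
        }]
    },
)

# 2) Lambda polls the queue; successfully processed messages are deleted
#    automatically by the event source mapping.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:123456789012:image-uploads",
    FunctionName="compress-image",
    BatchSize=5,
)
```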
A company has a three-tier web application that is deployed on AWS. The web servers are deployed in a public subnet in a VPC. The application servers and database servers are deployed in private subnets in the same VPC. The company has deployed a third-party virtual firewall appliance from AWS Marketplace in an inspection VPC. The appliance is configured with an IP interface that can accept IP packets.
A solutions architect needs to integrate the web application with the appliance to inspect all traffic to the application before the traffic reaches the web server.
Which solution will meet these requirements with the LEAST operational overhead?
⬜ A. Create a Network Load Balancer in the public subnet of the application’s VPC to route the traffic to the appliance for packet inspection.
⬜ B. Create an Application Load Balancer in the public subnet of the application’s VPC to route the traffic to the appliance for packet inspection.
⬜ C. Deploy a transit gateway in the inspection VPC. Configure route tables to route the incoming packets through the transit gateway.
✅ D. Deploy a Gateway Load Balancer in the inspection VPC. Create a Gateway Load Balancer endpoint to receive the incoming packets and forward the packets to the appliance.
Explanation:
The best solution that provides automatic traffic inspection with minimal operational overhead is:
● D. Use an AWS Gateway Load Balancer (GWLB), which is specifically designed to deploy and scale third-party virtual appliances like firewalls, intrusion detection systems, etc.
● The Gateway Load Balancer endpoint makes it easy to redirect and inspect traffic without modifying the application architecture manually.
Why other options are wrong:
A. A Network Load Balancer (NLB) does not natively integrate with firewall appliances for packet-level inspection.
B. An Application Load Balancer (ALB) works at the HTTP/HTTPS layer (Layer 7), not packet inspection (Layer 3/4).
C. Transit Gateway provides routing between VPCs but does not perform automatic packet inspection and would require complex route configuration.
Source: https://docs.aws.amazon.com/elasticloadbalancing/latest/gateway/introduction.html
A company wants to improve its ability to clone large amounts of production data into a test environment in the same AWS Region. The data is stored in Amazon EC2 instances on Amazon Elastic Block Store (Amazon EBS) volumes. Modifications to the cloned data must not affect the production environment. The software that accesses this data requires consistently high I/O performance.
A solutions architect needs to minimize the time that is required to clone the production data into the test environment.
Which solution will meet these requirements?
⬜ A. Take EBS snapshots of the production EBS volumes. Restore the snapshots onto EC2 instance store volumes in the test environment.
⬜ B. Configure the production EBS volumes to use the EBS Multi-Attach feature. Take EBS snapshots of the production EBS volumes. Attach the production EBS volumes to the EC2 instances in the test environment.
⬜ C. Take EBS snapshots of the production EBS volumes. Create and initialize new EBS volumes. Attach the new EBS volumes to EC2 instances in the test environment before restoring the volumes from the production EBS snapshots.
✅ D. Take EBS snapshots of the production EBS volumes. Turn on the EBS fast snapshot restore feature on the EBS snapshots. Restore the snapshots into new EBS volumes. Attach the new EBS volumes to EC2 instances in the test environment.
Explanation:
The correct solution that minimizes cloning time while ensuring high I/O performance is:
● D. Use EBS fast snapshot restore, which allows you to quickly create fully-initialized EBS volumes from snapshots without the typical performance penalties of restoring data on-demand.
● This guarantees immediate, high performance from the moment the new EBS volumes are attached to the EC2 instances, which is essential for test environments needing production-like I/O performance.
Why other options are wrong:
A. EC2 instance store volumes are ephemeral, not persistent, and EBS snapshots cannot be restored onto them.
B. EBS Multi-Attach is only supported for specific EBS volume types (io1/io2) and does not support cloning or isolation between production and test environments.
C. Attaching volumes before restoring snapshots would not speed up cloning — restoration still needs to complete, and performance would be slow initially.
Source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-fast-snapshot-restore.html
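Enabling fast snapshot restore and creating a test volume from the snapshot looks like this in boto3; the Availability Zone and snapshot ID are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Enable fast snapshot restore so new volumes are fully initialized at creation.
ec2.enable_fast_snapshot_restores(
    AvailabilityZones=["us-east-1a"],                       # AZ of the test instances
    SourceSnapshotIds=["snap-0123456789abcdef0"],           # placeholder snapshot
)

# Volumes created from the snapshot in that AZ now deliver full performance
# immediately; attach them to the test EC2 instances as usual.
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    SnapshotId="snap-0123456789abcdef0",
    VolumeType="gp3",
)
```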
A company runs a public-facing three-tier web application in a VPC across multiple Availability Zones.
Amazon EC2 instances for the application tier running in private subnets need to download software patches from the internet. However, the EC2 instances cannot be directly accessible from the internet.
Which actions should be taken to allow the EC2 instances to download the needed patches? (Select TWO.)
✅ A. Configure a NAT gateway in a public subnet.
✅ B. Define a custom route table with a route to the NAT gateway for internet traffic and associate it with the private subnets for the application tier.
⬜ C. Assign Elastic IP addresses to the EC2 instances.
⬜ D. Define a custom route table with a route to the internet gateway for internet traffic and associate it with the private subnets for the application tier.
⬜ E. Configure a NAT instance in a private subnet.
Explanation:
The correct approach for allowing private EC2 instances to access the internet securely is:
● A. Deploy a NAT gateway in a public subnet. NAT gateways allow instances in private subnets to initiate outbound traffic to the internet but block inbound connections initiated from the internet.
● B. Modify the route table for the private subnets to send internet-bound traffic (0.0.0.0/0) to the NAT gateway.
This setup ensures that EC2 instances remain private (no direct internet access) while being able to download patches and updates.
Why other options are wrong:
C. Assigning Elastic IP addresses would make instances publicly accessible, which violates the requirement.
D. Associating a private subnet with an internet gateway would expose instances to the internet, which is not secure.
E. A NAT instance placed in a private subnet has no route to an internet gateway, so it cannot provide outbound internet access; even in a public subnet, a NAT instance requires more management than a fully managed NAT gateway.
Source: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat.html
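A sketch of the two chosen actions in boto3; the public subnet and private route table IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# NAT gateways need an Elastic IP and must live in a public subnet.
allocation = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-public-1",                    # placeholder public subnet
    AllocationId=allocation["AllocationId"],
)
nat_id = nat["NatGateway"]["NatGatewayId"]

ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# Send the private subnets' internet-bound traffic through the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-private-app-tier",           # placeholder private route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)
```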
A solutions architect wants to design a solution to save costs for Amazon EC2 instances that do not need to run during a 2-week company shutdown. The applications running on the EC2 instances store data in instance memory that must be present when the instances resume operation.
Which approach should the solutions architect recommend to shut down and resume the EC2 instances?
⬜ A. Modify the application to store the data on instance store volumes. Reattach the volumes while restarting them.
⬜ B. Snapshot the EC2 instances before stopping them. Restore the snapshot after restarting the instances.
✅ C. Run the applications on EC2 instances enabled for hibernation. Hibernate the instances before the 2-week company shutdown.
⬜ D. Note the Availability Zone for each EC2 instance before stopping it. Restart the instances in the same Availability Zones after the 2-week company shutdown.
Explanation:
The correct solution to preserve instance memory (RAM) and save costs during the shutdown is:
● C. Use EC2 Hibernation, which saves the instance’s in-memory state (RAM) to the root EBS volume and stops the instance, allowing it to resume with memory contents intact after the shutdown.
● During hibernation, you only pay for storage (EBS), not compute (EC2 instance running cost).
Why other options are wrong:
A. Instance store volumes are ephemeral and data is lost when the instance stops or terminates.
B. Snapshots capture the disk state, not the in-memory state (RAM).
D. Restarting in the same Availability Zone does not preserve RAM contents — only hibernation can do that.
Source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Hibernate.html
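A hedged boto3 sketch: hibernation must be enabled at launch (with an encrypted root volume large enough to hold the instance's RAM), after which the instance can be hibernated before the shutdown. The AMI ID, instance type, and volume size are placeholders.
```python
import boto3

ec2 = boto3.client("ec2")

# Launch with hibernation enabled; the root EBS volume must be encrypted
# and sized to hold the instance's RAM (AMI ID is a placeholder).
run = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    HibernationOptions={"Configured": True},
    BlockDeviceMappings=[
        {"DeviceName": "/dev/xvda", "Ebs": {"VolumeSize": 30, "Encrypted": True}}
    ],
)
instance_id = run["Instances"][0]["InstanceId"]

# Before the 2-week shutdown: hibernate instead of a plain stop,
# so the in-memory state is written to the root volume.
ec2.stop_instances(InstanceIds=[instance_id], Hibernate=True)

# After the shutdown, a normal start restores the saved RAM contents.
ec2.start_instances(InstanceIds=[instance_id])
```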
A company plans to run a monitoring application on an Amazon EC2 instance in a VPC. Connections are made to the EC2 instance using the instance’s private IPv4 address. A solutions architect needs to design a solution that will allow traffic to be quickly directed to a standby EC2 instance if the application fails and becomes unreachable.
Which approach will meet these requirements?
⬜ A. Deploy an Application Load Balancer configured with a listener for the private IP address and register the primary EC2 instance with the load balancer. Upon failure, de-register the instance and register the standby EC2 instance.
⬜ B. Configure a custom DHCP option set. Configure DHCP to assign the same private IP address to the standby EC2 instance when the primary EC2 instance fails.
✅ C. Attach a secondary elastic network interface to the EC2 instance configured with the private IP address. Move the network interface to the standby EC2 instance if the primary EC2 instance becomes unreachable.
⬜ D. Associate an Elastic IP address with the network interface of the primary EC2 instance. Disassociate the Elastic IP from the primary instance upon failure and associate it with a standby EC2 instance.
Explanation:
The best solution to quickly fail over to a standby EC2 instance using a private IP address is:
● C. Use a secondary Elastic Network Interface (ENI) that is attached to the primary instance.
● Upon failure, detach the ENI and reattach it to the standby instance.
● The private IP address moves along with the ENI, allowing traffic redirection with minimal downtime and without changing DNS records.
Why other options are wrong:
A. Registering and de-registering targets on an Application Load Balancer is a manual, slower failover path, and clients would have to connect to the load balancer's DNS name rather than the instance's existing private IP address, which the scenario requires.
B. DHCP option sets configure VPC-wide settings such as DNS servers and domain names; they cannot assign or move a specific private IP address to a standby instance on failure.
D. Elastic IP addresses are public IP addresses — the question explicitly mentions private IP connectivity, not public.
Source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html
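A minimal failover sketch with boto3, moving the secondary ENI (placeholder IDs) from the failed primary to the standby instance:
```python
import boto3

ec2 = boto3.client("ec2")
ENI_ID = "eni-0123456789abcdef0"        # secondary ENI holding the service's private IP
STANDBY_INSTANCE = "i-0standby1234567"  # placeholder standby instance ID

# Find the current attachment of the ENI and detach it from the failed instance.
eni = ec2.describe_network_interfaces(NetworkInterfaceIds=[ENI_ID])["NetworkInterfaces"][0]
attachment = eni.get("Attachment")
if attachment:
    ec2.detach_network_interface(AttachmentId=attachment["AttachmentId"], Force=True)
    ec2.get_waiter("network_interface_available").wait(NetworkInterfaceIds=[ENI_ID])

# Attach the same ENI (and therefore the same private IP) to the standby instance.
ec2.attach_network_interface(
    NetworkInterfaceId=ENI_ID,
    InstanceId=STANDBY_INSTANCE,
    DeviceIndex=1,
)
```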
An analytics company is planning to offer a web analytics service to its users. The service will require that the users’ webpages include a JavaScript script that makes authenticated GET requests to the company’s Amazon S3 bucket.
What must a solutions architect do to ensure that the script will successfully execute?
✅ A. Enable cross-origin resource sharing (CORS) on the S3 bucket.
⬜ B. Enable S3 Versioning on the S3 bucket.
⬜ C. Provide the users with a signed URL for the script.
⬜ D. Configure an S3 bucket policy to allow public execute privileges.
Explanation:
The correct action to allow a script on a user’s web page to make cross-origin HTTP requests to an S3 bucket is:
● A. Enable Cross-Origin Resource Sharing (CORS) on the S3 bucket.
● Without CORS enabled, browsers block cross-origin JavaScript calls for security reasons.
● CORS policies define who can access resources, what methods (GET, POST, etc.) are allowed, and under what conditions.
Why other options are wrong:
B. S3 Versioning is about object versions, not cross-origin access.
C. Signed URLs are used for temporary, secure access, but do not solve cross-origin browser restrictions.
D. Public execute privileges are not a valid permission type for S3 buckets — and public access might be unnecessary or risky here.
Source: https://docs.aws.amazon.com/AmazonS3/latest/userguide/cors.html
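A minimal boto3 sketch of a CORS configuration that allows authenticated GET requests from customer pages (the bucket name and allowed origin are placeholders):
```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_cors(
    Bucket="analytics-assets-bucket",  # placeholder bucket name
    CORSConfiguration={
        "CORSRules": [
            {
                # In practice this list would contain the customers' site origins,
                # or "*" if any origin must be allowed.
                "AllowedOrigins": ["https://example-customer-site.com"],
                "AllowedMethods": ["GET"],
                "AllowedHeaders": ["Authorization"],
                "MaxAgeSeconds": 3000,
            }
        ]
    },
)
```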
A company’s security team requires that all data stored in the cloud be encrypted at rest at all times using encryption keys stored on premises.
Which encryption options meet these requirements? (Select TWO.)
⬜ A. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3).
⬜ B. Use server-side encryption with AWS KMS managed encryption keys (SSE-KMS).
✅ C. Use server-side encryption with customer-provided encryption keys (SSE-C).
✅ D. Use client-side encryption to provide at-rest encryption.
⬜ E. Use an AWS Lambda function invoked by Amazon S3 events to encrypt the data using the customer’s keys.
Explanation:
The correct approaches to ensure encryption with keys stored on premises are:
C. Server-side encryption with customer-provided keys (SSE-C) allows the client to provide encryption keys for S3, ensuring that AWS never stores the encryption keys.
D. Client-side encryption encrypts data before sending it to AWS using encryption keys that are managed and stored locally (on-premises), ensuring compliance with the requirement.
Why other options are wrong:
A. SSE-S3 uses S3-managed keys, not customer-managed keys stored on premises.
B. SSE-KMS uses AWS Key Management Service (KMS) keys — even if customer-managed, they are stored within AWS.
E. Using a Lambda function would still require S3 to temporarily store unencrypted data before the Lambda function encrypts it, violating strict “always encrypted at rest” requirements.
Source: https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingEncryption.html
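For option C, a short boto3 sketch of SSE-C: the caller supplies a 256-bit key with every request, and the same key must be supplied again to read the object. The bucket, object key, and key handling are illustrative only.
```python
import os
import boto3

s3 = boto3.client("s3")

# In practice this key comes from the on-premises key store; generating and
# discarding it inline like this is only for illustration.
customer_key = os.urandom(32)  # 256-bit key kept on premises

s3.put_object(
    Bucket="compliance-data-bucket",  # placeholder bucket
    Key="records/2024/report.csv",
    Body=b"sensitive,data\n",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,      # boto3 sends the key and its MD5 over TLS
)

# Reads must present the same key; AWS does not store it.
obj = s3.get_object(
    Bucket="compliance-data-bucket",
    Key="records/2024/report.csv",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)
```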
A company uses Amazon EC2 Reserved Instances to run its data processing workload. The nightly job typically takes 7 hours to run and must finish within a 10-hour time window. The company anticipates temporary increases in demand at the end of each month that will cause the job to run over the time limit with the capacity of the current resources. Once started, the processing job cannot be interrupted before completion. The company wants to implement a solution that would provide increased resource capacity as cost-effectively as possible.
What should a solutions architect do to accomplish this?
✅ A. Deploy On-Demand Instances during periods of high demand.
⬜ B. Create a second EC2 reservation for additional instances.
⬜ C. Deploy Spot Instances during periods of high demand.
⬜ D. Increase the EC2 instance size in the EC2 reservation to support the increased workload.
Explanation:
The best approach to cost-effectively and reliably scale temporarily without risking interruption is:
● A. Use On-Demand Instances during periods of temporary high demand.
● On-Demand Instances are not interruptible (unlike Spot Instances) and provide flexible scaling without needing long-term commitments (like new Reserved Instances).
● Since the job cannot be interrupted once started, Spot Instances are not suitable despite being cheaper.
Why other options are wrong:
B. Purchasing another Reserved Instance is not cost-effective for temporary needs — Reserved Instances require a long-term commitment (1 or 3 years).
C. Spot Instances are interruptible at any time by AWS, which risks job failure.
D. Increasing the instance size would increase costs permanently — and the workload spike is temporary.
Source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-purchasing-options.html
A company runs an online voting system for a weekly live television program. During broadcasts, users submit hundreds of thousands of votes within minutes to a front-end fleet of Amazon EC2 instances that run in an Auto Scaling group. The EC2 instances write the votes to an Amazon RDS database. However, the database is unable to keep up with the requests that come from the EC2 instances. A solutions architect must design a solution that processes the votes in the most efficient manner and without downtime.
Which solution meets these requirements?
⬜ A. Migrate the front-end application to AWS Lambda. Use Amazon API Gateway to route user requests to the Lambda functions.
⬜ B. Scale the database horizontally by converting it to a Multi-AZ deployment. Configure the front-end application to write to both the primary and secondary DB instances.
✅ C. Configure the front-end application to send votes to an Amazon Simple Queue Service (Amazon SQS) queue. Provision worker instances to read the SQS queue and write the vote information to the database.
⬜ D. Use Amazon EventBridge (Amazon CloudWatch Events) to create a scheduled event to re-provision the database with larger, memory optimized instances during voting periods. When voting ends, re-provision the database to use smaller instances.
Explanation:
The correct solution to smooth sudden traffic spikes without downtime is:
● C. Use Amazon SQS to decouple the vote submissions from the database writes.
● Incoming votes are queued reliably and then processed asynchronously by a fleet of worker instances that write to the database at a sustainable rate.
● This buffers the traffic spikes, preventing the RDS database from being overwhelmed.
Why other options are wrong:
A. Migrating to Lambda is a major re-architecture, unnecessary for the immediate scaling problem.
B. A Multi-AZ deployment improves availability, not write throughput. In a standard RDS Multi-AZ deployment the standby instance is not accessible to the application, so it cannot be written to.
D. Dynamically resizing the DB with EventBridge introduces downtime and complexity — not ideal for real-time, high-throughput voting systems.
Source: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/welcome.html
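A simplified sketch of the decoupled design: the front end enqueues votes and a worker fleet drains the queue at a rate the database can sustain. The queue URL and the `save_vote_to_rds` helper are hypothetical.
```python
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/votes-queue"  # placeholder


def submit_vote(user_id: str, candidate: str) -> None:
    """Front-end path: enqueue the vote instead of writing to RDS directly."""
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"user_id": user_id, "candidate": candidate}),
    )


def worker_loop() -> None:
    """Worker instances: read in batches and write to the database at a steady pace."""
    while True:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
        )
        for msg in resp.get("Messages", []):
            vote = json.loads(msg["Body"])
            save_vote_to_rds(vote)  # hypothetical function that inserts into RDS
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```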
A company has a two-tier application architecture that runs in public and private subnets. Amazon EC2 instances running the web application are in the public subnet and an EC2 instance for the database runs on the private subnet. The web application instances and the database are running in a single Availability Zone (AZ).
Which combination of steps should a solutions architect take to provide high availability for this architecture? (Select TWO.)
⬜ A. Create new public and private subnets in the same AZ.
✅ B. Create an Amazon EC2 Auto Scaling group and Application Load Balancer spanning multiple AZs for the web application instances.
⬜ C. Add the existing web application instances to an Auto Scaling group behind an Application Load Balancer.
⬜ D. Create new public and private subnets in a new AZ. Create a database using an EC2 instance in the public subnet in the new AZ. Migrate the old database contents to the new database.
✅ E. Create new public and private subnets in the same VPC, each in a new AZ. Create an Amazon RDS Multi-AZ DB instance in the private subnets. Migrate the old database contents to the new DB instance.
Explanation:
To provide high availability across multiple AZs for both application and database layers:
B. Create an Auto Scaling group with instances spread across multiple AZs and an Application Load Balancer (ALB) to distribute traffic between healthy instances.
E. Create new public and private subnets in a second AZ and move the database to Amazon RDS with Multi-AZ deployment, which automatically provides failover and resiliency across AZs.
Why other options are wrong:
A. Creating new subnets in the same AZ does not provide high availability — need multiple AZs.
C. Adding existing instances to an Auto Scaling group without spanning multiple AZs does not meet HA requirements.
D. Running a database instance in a public subnet is against best practices for security — databases should always reside in private subnets.
Source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-auto-scaling-groups.html
Source: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html
A website runs a custom web application that receives a burst of traffic each day at noon. The users upload new pictures and content daily, but have been complaining of timeouts. The architecture uses Amazon EC2 Auto Scaling groups, and the application consistently takes 1 minute to initiate upon boot up before responding to user requests.
How should a solutions architect redesign the architecture to better respond to changing traffic?
⬜ A. Configure a Network Load Balancer with a slow start configuration.
⬜ B. Configure Amazon ElastiCache for Redis to offload direct requests from the EC2 instances.
✅ C. Configure an Auto Scaling step scaling policy with an EC2 instance warmup condition.
⬜ D. Configure Amazon CloudFront to use an Application Load Balancer as the origin.
Explanation:
The best solution to address instance initialization delays during a predictable traffic burst is:
● C. Configure a step scaling policy and use an EC2 instance warmup setting in the Auto Scaling group.
● The warmup period tells the Auto Scaling group that a newly launched instance needs about 1 minute (the application's initialization time) before it should count toward the group's metrics, so the policy adds enough capacity for the noon burst without over- or under-scaling while instances boot.
Why other options are wrong:
A. Slow start is a feature of Application Load Balancer target groups; a Network Load Balancer operates at Layer 4 (TCP/UDP), does not support slow start, and does not address Auto Scaling warmup.
B. ElastiCache improves read performance, but does not solve application initialization delays after scaling events.
D. CloudFront helps with content delivery and caching but does not fix instance warm-up or Auto Scaling responsiveness issues.
Source: https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-instance-warmup.html
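A hedged boto3 sketch of option C: a step scaling policy with a 60-second instance warmup matching the 1-minute initialization time. The group name, step boundaries, and the CloudWatch alarm that triggers the policy are assumptions.
```python
import boto3

autoscaling = boto3.client("autoscaling")

# The warmup tells the Auto Scaling group how long a new instance needs
# before it is counted toward the group's metrics, avoiding over-scaling.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="photo-upload-asg",  # placeholder ASG name
    PolicyName="noon-burst-step-scale-out",
    PolicyType="StepScaling",
    AdjustmentType="ChangeInCapacity",
    EstimatedInstanceWarmup=60,               # matches the ~1 minute boot time
    StepAdjustments=[
        # Small alarm breaches add 1 instance, larger breaches add 3.
        {"MetricIntervalLowerBound": 0, "MetricIntervalUpperBound": 20, "ScalingAdjustment": 1},
        {"MetricIntervalLowerBound": 20, "ScalingAdjustment": 3},
    ],
)
```
The returned policy ARN would then be set as the action of a CloudWatch alarm (for example, on request count or CPU) so the steps fire as the metric breaches the threshold.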
An application running on AWS uses an Amazon Aurora Multi-AZ DB cluster deployment for its database. When evaluating performance metrics, a solutions architect discovered that the database reads are causing high I/O and adding latency to the write requests against the database.
What should the solutions architect do to separate the read requests from the write requests?
⬜ A. Enable read-through caching on the Aurora database.
⬜ B. Update the application to read from the Multi-AZ standby instance.
✅ C. Create an Aurora replica and modify the application to use the appropriate endpoints.
⬜ D. Create a second Aurora database and link it to the primary database as a read replica.
Explanation:
The best way to separate reads and writes in Aurora is:
● C. Create Aurora Replicas and update the application to use the read endpoint for read queries and the writer endpoint for writes.
● Aurora Replicas automatically scale read traffic and offload reads from the primary writer instance, improving write performance by reducing read contention.
Why other options are wrong:
A. Aurora does not support read-through caching natively — you would use services like ElastiCache separately, and it wouldn’t solve write pressure directly.
B. Aurora does not expose a separate passive standby instance to read from; failover targets are Aurora Replicas, and directing read traffic to them through the cluster reader endpoint is exactly what option C describes.
D. Creating a separate Aurora cluster as a replica is not the right solution — Aurora supports built-in replicas within the same cluster for read scaling.
Source: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Replication.html
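A minimal boto3 sketch of option C: add an Aurora Replica to the existing cluster and point reads at the cluster's reader endpoint (the cluster identifier, engine, and instance class are placeholders):
```python
import boto3

rds = boto3.client("rds")

# Add a reader instance to the existing Aurora cluster.
rds.create_db_instance(
    DBInstanceIdentifier="app-aurora-reader-1",
    DBClusterIdentifier="app-aurora-cluster",  # placeholder cluster ID
    Engine="aurora-mysql",
    DBInstanceClass="db.r6g.large",
)

# The application uses the writer endpoint for writes and the reader
# endpoint for read-only queries.
cluster = rds.describe_db_clusters(DBClusterIdentifier="app-aurora-cluster")["DBClusters"][0]
writer_endpoint = cluster["Endpoint"]
reader_endpoint = cluster["ReaderEndpoint"]
print(writer_endpoint, reader_endpoint)
```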
A tech company has a CRM application hosted on an Auto Scaling group of On-Demand EC2 instances with different instance types and sizes. The application is extensively used during office hours from 9 in the morning to 5 in the afternoon. Their users are complaining that the performance of the application is slow during the start of the day but then works normally after a couple of hours.
Which of the following is the MOST operationally efficient solution to implement to ensure the application works properly at the beginning of the day?
⬜ A. Configure a Dynamic scaling policy for the Auto Scaling group to launch new instances based on the CPU utilization.
⬜ B. Configure a Dynamic scaling policy for the Auto Scaling group to launch new instances based on the Memory utilization.
✅ C. Configure a Scheduled scaling policy for the Auto Scaling group to launch new instances before the start of the day.
⬜ D. Configure a Predictive scaling policy for the Auto Scaling group to automatically adjust the number of Amazon EC2 instances.
Explanation:
The best solution for a predictable, scheduled traffic pattern (like office hours) is:
● C. Use a Scheduled Scaling policy to proactively launch new EC2 instances before 9 AM.
● Scheduled scaling ensures that the required capacity is already available when users log in, preventing delays caused by dynamic scaling.
Why other options are wrong:
A. CPU-based dynamic scaling reacts after utilization is high — not proactive.
B. Memory-based dynamic scaling is also reactive, and memory metrics are not available natively; they require the CloudWatch agent to be installed and configured.
D. Predictive scaling needs time and historical data to learn patterns; Scheduled scaling is simpler and faster to implement for a known schedule.
Source: https://docs.aws.amazon.com/autoscaling/ec2/userguide/schedule_time.html
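A minimal boto3 sketch of option C: scheduled actions that add capacity shortly before 9 AM and scale back down after office hours. The group name, capacities, and the UTC recurrence expressions are assumptions.
```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale out before the workday starts (cron expressions are evaluated in UTC by default).
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="crm-asg",               # placeholder ASG name
    ScheduledActionName="scale-out-before-9am",
    Recurrence="30 8 * * MON-FRI",
    MinSize=4,
    MaxSize=12,
    DesiredCapacity=8,
)

# Scale back in after office hours.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="crm-asg",
    ScheduledActionName="scale-in-after-5pm",
    Recurrence="0 18 * * MON-FRI",
    MinSize=2,
    MaxSize=12,
    DesiredCapacity=2,
)
```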
A financial application consists of an Auto Scaling group of Amazon EC2 instances, an Application Load Balancer, and a MySQL RDS instance set up in a Multi-AZ Deployment configuration. To protect customers’ confidential data, it must be ensured that the Amazon RDS database is only accessible using an authentication token specific to the profile credentials of EC2 instances.
Which of the following actions should be taken to meet this requirement?
✅ A. Enable the IAM DB Authentication.
⬜ B. Configure SSL in your application to encrypt the database connection to RDS.
⬜ C. Create an IAM Role and assign it to your EC2 instances which will grant exclusive access to your RDS instance.
⬜ D. Use a combination of IAM and STS to enforce restricted access to your RDS instance using a temporary authentication token.
Explanation:
The correct method is:
● A. Enable IAM DB Authentication for Amazon RDS.
● This allows database access to be authenticated using AWS IAM tokens instead of traditional username/passwords. The tokens are generated based on IAM role credentials that EC2 instances use, ensuring that only authorized instances can access the database securely.
Why other options are wrong:
B. SSL encrypts the connection but does not authenticate access based on IAM credentials.
C. Assigning an IAM role to EC2 is necessary, but alone it does not enable token-based RDS authentication without IAM DB Authentication enabled.
D. IAM and STS provide temporary credentials for AWS services generally, but IAM DB Authentication is the specific solution for RDS access control.
Source: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.html
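A hedged sketch of option A: enable IAM database authentication on the instance, then have the application generate a short-lived token from its instance-profile credentials and use it as the MySQL password. The PyMySQL driver, hostnames, and the CA bundle path are assumptions.
```python
import boto3
import pymysql  # third-party MySQL driver, assumed to be installed

rds = boto3.client("rds", region_name="us-east-1")

# One-time setup: turn on IAM DB authentication for the instance.
rds.modify_db_instance(
    DBInstanceIdentifier="finance-mysql",  # placeholder instance ID
    EnableIAMDatabaseAuthentication=True,
    ApplyImmediately=True,
)

# On the EC2 instance (using its instance-profile credentials): generate a
# short-lived authentication token instead of a static password.
token = rds.generate_db_auth_token(
    DBHostname="finance-mysql.abc123.us-east-1.rds.amazonaws.com",
    Port=3306,
    DBUsername="app_user",
    Region="us-east-1",
)

conn = pymysql.connect(
    host="finance-mysql.abc123.us-east-1.rds.amazonaws.com",
    user="app_user",
    password=token,
    ssl={"ca": "/opt/rds-ca-bundle.pem"},  # TLS is required for IAM auth; path is a placeholder
)
```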
A company hosted a web application in an Auto Scaling group of EC2 instances. The IT manager is concerned about the over-provisioning of the resources that can cause higher operating costs. A Solutions Architect has been instructed to create a cost-effective solution without affecting the performance of the application.
Which dynamic scaling policy should be used to satisfy this requirement?
⬜ A. Use simple scaling.
⬜ B. Use scheduled scaling.
⬜ C. Use suspend and resume scaling.
✅ D. Use target tracking scaling.
Explanation:
The best solution to automatically maintain optimal capacity and avoid over-provisioning is:
● D. Use target tracking scaling.
● Target tracking scaling adjusts the number of EC2 instances dynamically based on a defined metric target, such as maintaining CPU utilization at 50%. It automatically adds or removes capacity to match the actual demand, preventing both under- and over-provisioning without manual intervention.
Why other options are wrong:
A. Simple scaling is reactive and slower — it requires alarms to trigger scaling and may not proactively prevent over-provisioning.
B. Scheduled scaling is based on time, not real-time usage — not ideal when load patterns vary unpredictably.
C. Suspend and resume scaling is used for pausing scaling activities during maintenance or troubleshooting — not for cost optimization.
Source: https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-simple-step.html#as-scaling-target-tracking
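A minimal boto3 sketch of option D: a target tracking policy that keeps average CPU utilization around 50% (the group name and target value are assumptions):
```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-app-asg",  # placeholder ASG name
    PolicyName="keep-cpu-at-50-percent",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
        # Optionally prevent automatic scale-in:
        # "DisableScaleIn": True,
    },
)
```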
An online medical system hosted in AWS stores sensitive Personally Identifiable Information (PII) of the users in an Amazon S3 bucket. Both the master keys and the unencrypted data should never be sent to AWS to comply with the strict compliance and regulatory requirements of the company.
Which S3 encryption technique should the Architect use?
⬜ A. Use S3 client-side encryption with an AWS KMS key.
✅ B. Use S3 client-side encryption with a client-side master key.
⬜ C. Use S3 server-side encryption with an AWS KMS key.
⬜ D. Use S3 server-side encryption with customer provided key.
Explanation:
The correct choice to ensure that neither the encryption keys nor unencrypted data ever leave the client-side is:
● B. Use S3 client-side encryption with a client-side master key.
● With client-side encryption, encryption happens before the data is sent to AWS, and decryption happens after it is retrieved.
● The master key is managed entirely on the client-side, meeting strict compliance requirements where AWS should never see the key or the unencrypted data.
Why other options are wrong:
A. Using AWS KMS still sends the encryption request to AWS, meaning AWS has some involvement with key management.
C. Server-side encryption with KMS encrypts after the object reaches S3, violating the “never send unencrypted data to AWS” rule.
D. Server-side encryption with customer-provided keys (SSE-C) still sends the key to AWS with each request (even though it is discarded after use), and the object data reaches S3 before it is encrypted, so the requirement is not met.
Source: https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingClientSideEncryption.html
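A simplified illustration of option B, not the official AWS Encryption SDK: the data is encrypted locally with a master key that never leaves the client, and only ciphertext is uploaded. The `cryptography` package, bucket name, and key handling are assumptions.
```python
import boto3
from cryptography.fernet import Fernet  # third-party package, assumed installed

# The master key is generated and kept on premises (e.g., in an HSM or key store);
# generating it inline here is only for illustration.
master_key = Fernet.generate_key()
cipher = Fernet(master_key)

plaintext = b'{"patient_id": "12345", "diagnosis": "..."}'
ciphertext = cipher.encrypt(plaintext)

# Only ciphertext is ever sent to AWS.
s3 = boto3.client("s3")
s3.put_object(Bucket="medical-records-bucket", Key="records/12345.json.enc", Body=ciphertext)

# Decryption also happens on the client after download.
obj = s3.get_object(Bucket="medical-records-bucket", Key="records/12345.json.enc")
restored = cipher.decrypt(obj["Body"].read())
```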
A Solutions Architect is hosting a website in an Amazon S3 bucket named codingnconcepts. The users load the website using the following URL: http://codingnconcepts.s3-website-us-east-1.amazonaws.com. A new requirement has been introduced to add JavaScript on the webpages to make authenticated HTTP GET requests against the same bucket using the S3 API endpoint (codingnconcepts.s3.amazonaws.com). However, upon testing, the web browser blocks JavaScript from allowing those requests.
Which of the following options is the MOST suitable solution to implement for this scenario?
⬜ A. Enable cross-account access.
⬜ B. Enable Cross-Zone Load Balancing.
✅ C. Enable Cross-origin resource sharing (CORS) configuration in the bucket.
⬜ D. Enable Cross-Region Replication (CRR).
Explanation:
The best solution to allow the browser to permit JavaScript requests from a different origin is:
● C. Enable Cross-origin resource sharing (CORS) configuration in the bucket.
● CORS allows a browser to access resources in an S3 bucket from a different domain or endpoint, thus enabling cross-origin JavaScript HTTP requests without being blocked by the browser’s same-origin policy.
Why other options are wrong:
A. Cross-account access allows different AWS accounts to access a bucket, not related to browser-origin restrictions.
B. Cross-Zone Load Balancing is about distributing traffic across Availability Zones, not related to CORS or S3 access.
D. Cross-Region Replication (CRR) is for copying objects to another region, and does not solve browser cross-origin issues.
Source: https://docs.aws.amazon.com/AmazonS3/latest/userguide/cors.html
A company is designing a banking portal that uses Amazon ElastiCache for Redis as its distributed session management component. To secure session data and ensure that Cloud Engineers must authenticate before executing Redis commands, specifically MULTI EXEC commands, the system should enforce strong authentication by requiring users to enter a password. Additionally, access should be managed with long-lived credentials while supporting robust security practices.
Which of the following actions should be taken to meet the above requirement?
⬜ A. Generate an IAM authentication token using AWS credentials and provide this token as a password.
⬜ B. Set up a Redis replication group and enable the AtRestEncryptionEnabled parameter.
✅ C. Authenticate the users using Redis AUTH by creating a new Redis Cluster with both the --transit-encryption-enabled and --auth-token parameters enabled.
⬜ D. Enable the in-transit encryption for Redis replication groups.
Explanation:
The correct approach to enforce strong password authentication and secure in-transit communication for ElastiCache Redis is:
● C. Authenticate the users using Redis AUTH by creating a new Redis Cluster with both the --transit-encryption-enabled and --auth-token parameters enabled.
● --auth-token enables password-based authentication for Redis, and --transit-encryption-enabled ensures TLS encryption for all Redis traffic, meeting both strong authentication and encryption requirements.
Why other options are wrong:
A. IAM authentication tokens are used for RDS databases, not for ElastiCache Redis.
B. AtRestEncryptionEnabled encrypts data on disk, but does not handle authentication of users or in-transit encryption.
D. Enabling only in-transit encryption secures traffic but does not enforce authentication without --auth-token.
Source: https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/auth.html
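A hedged boto3 sketch of option C: creating a Redis replication group with both in-transit encryption and an AUTH token enabled. The group ID, node type, and token value are placeholders.
```python
import boto3

elasticache = boto3.client("elasticache")

elasticache.create_replication_group(
    ReplicationGroupId="banking-sessions",
    ReplicationGroupDescription="Session store for the banking portal",
    Engine="redis",
    CacheNodeType="cache.r6g.large",               # placeholder node type
    NumCacheClusters=2,
    AutomaticFailoverEnabled=True,
    TransitEncryptionEnabled=True,                 # TLS for all Redis traffic
    AuthToken="REPLACE-WITH-A-LONG-RANDOM-TOKEN",  # Redis AUTH password
)
```
Clients must then connect over TLS and authenticate with the token (AUTH command or the client library's password option) before running commands such as MULTI and EXEC.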
A company runs an online payments application in an Auto Scaling group of Amazon EC2 instances in multiple Availability Zones. The EC2 instances are all launched in private subnets. An internet-facing Application Load Balancer (ALB) has been provisioned and points to the existing EC2 instances as the target group. The team noticed that the internet traffic was not reaching the Amazon EC2 instances.
What is the MOST operationally efficient solution that meets these requirements?
⬜ A. Set up a NAT gateway in a public subnet to allow incoming Internet traffic. Use a Gateway Load Balancer instead of an Application Load Balancer.
⬜ B. Move the existing Amazon EC2 instances that are running from the private subnets to public subnets. Allow outbound traffic to 0.0.0.0/0 in the security groups of the EC2 instances.
⬜ C. Add a rule to allow outbound traffic to 0.0.0.0/0 in the security groups of the EC2 instances. Update the route tables of the existing subnets to send all 0.0.0.0/0 traffic through the internet gateway route.
✅ D. Launch public subnets in each Availability Zone and associate them with the Application Load Balancer. Modify the route tables for the public subnets with a route to the private subnets of the EC2 instances.
Explanation:
D. An internet-facing Application Load Balancer (ALB) must be placed in public subnets that have a route to an internet gateway. The backend Amazon EC2 instances must remain in private subnets for security. The ALB forwards incoming internet traffic to EC2 instances across Availability Zones without requiring the EC2 instances to have public IP addresses. This architecture minimizes exposure of backend instances while maintaining internet accessibility through the ALB. It is the most secure and operationally efficient design for this scenario.
Why other options are wrong:
A. A NAT gateway enables outbound internet access for private instances but does not allow inbound traffic from the internet; NAT is for egress only. A Gateway Load Balancer is designed for inserting third-party network appliances into the traffic path, not for serving web application traffic.
B. Moving EC2 instances to public subnets and assigning public IPs would expose them directly to the internet, which violates best practices for secure backend instances behind an ALB.
C. Modifying the security groups and routing outbound traffic would not fix the inbound traffic problem. Inbound requests from the internet must still flow through the ALB, which requires proper public subnet configuration.
Source: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-subnets.html
The company that you are working for has a highly available architecture consisting of an elastic load balancer and several EC2 instances configured with auto-scaling in three Availability Zones. You want to monitor your EC2 instances based on a particular metric, which is not readily available in CloudWatch.
Which of the following is a custom metric in CloudWatch which you have to manually set up?
✅ A. Memory Utilization of an EC2 instance
⬜ B. CPU Utilization of an EC2 instance
⬜ C. Disk Reads activity of an EC2 instance
⬜ D. Network packets out of an EC2 instance
Explanation:
A. Memory Utilization is not available by default in Amazon CloudWatch for EC2 instances. It requires installing and configuring the CloudWatch agent on the instance to collect and publish this custom metric manually. AWS does not automatically push memory usage statistics to CloudWatch, unlike CPU, disk, and network metrics.
Why other options are wrong:
B. CPU Utilization is a default metric provided by CloudWatch for EC2 instances without needing any custom setup.
C. Disk Read (and Write) metrics are available by default in CloudWatch under EC2 instance metrics.
D. Network packets out (and in) are default metrics collected automatically by CloudWatch for EC2 instances.
Source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-instances-cloudwatch.html
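The CloudWatch agent is the standard way to publish memory metrics; as a bare-bones alternative, a script like the following could push a custom MemoryUtilization metric from a Linux instance. The namespace, instance ID dimension, and the /proc/meminfo parsing are assumptions.
```python
import boto3


def memory_utilization_percent() -> float:
    """Compute used-memory percentage from /proc/meminfo (Linux only)."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":")
            info[key] = int(value.split()[0])  # values are reported in kB
    used = info["MemTotal"] - info["MemAvailable"]
    return 100.0 * used / info["MemTotal"]


cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_data(
    Namespace="Custom/EC2",  # custom namespace (placeholder)
    MetricData=[{
        "MetricName": "MemoryUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        "Unit": "Percent",
        "Value": memory_utilization_percent(),
    }],
)
```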
A software development company is using serverless computing with AWS Lambda to build and run applications without having to set up or manage servers. The company has a Lambda function that connects to a MongoDB Atlas, which is a popular Database as a Service (DBaaS) platform, and also uses a third-party API to fetch certain data for its application. One of the developers was instructed to create the environment variables for the MongoDB database hostname, username, and password, as well as the API credentials that will be used by the Lambda function for DEV, SIT, UAT, and PROD environments.
Considering that the Lambda function is storing sensitive database and API credentials, how can this information be secured to prevent other developers on the team, or anyone, from seeing these credentials in plain text? Select the best option that provides maximum security.
⬜ A. There is no need to do anything because, by default, Lambda already encrypts the environment variables using the AWS Key Management Service.
⬜ B. Enable SSL encryption that leverages on AWS CloudHSM to store and encrypt the sensitive information.
⬜ C. Lambda does not provide encryption for the environment variables. Deploy your code to an Amazon EC2 instance instead.
✅ D. Create a new AWS KMS key and use it to enable encryption helpers that leverage on AWS Key Management Service to store and encrypt the sensitive information.
Explanation:
D. For maximum security, you should create a new AWS KMS key and configure your Lambda function to use KMS encryption helpers. This ensures that the sensitive environment variables (such as database credentials and API keys) are encrypted at rest and only decrypted at runtime by Lambda functions with the appropriate permissions. While Lambda provides encryption for environment variables by default using a service-managed KMS key, using a customer-managed KMS key gives you full control over key rotation, key policies, and auditing access.
Why other options are wrong:
A. Although AWS Lambda encrypts environment variables by default using a service-managed KMS key, maximum security requires creating and managing your own KMS key to have full control over encryption and access.
B. CloudHSM is generally used for very high-security use cases that require dedicated hardware security modules. It is not necessary for basic Lambda environment variable encryption and adds unnecessary complexity.
C. Lambda does provide encryption for environment variables. Moving the code to EC2 would increase operational overhead without solving the problem.
Source: https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html#configuration-envvars-encryption
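A hedged sketch of option D: associate a customer-managed KMS key with the function, and decrypt variables that were individually encrypted with the console's encryption helpers at runtime. The function name, key ARN, and variable names are placeholders.
```python
import base64
import os
import boto3

# Deployment side: attach the customer-managed KMS key to the function so its
# environment variables are encrypted at rest with that key.
lambda_client = boto3.client("lambda")
lambda_client.update_function_configuration(
    FunctionName="mongo-sync-prod",                                        # placeholder
    KMSKeyArn="arn:aws:kms:us-east-1:123456789012:key/1111-2222-3333",     # placeholder key ARN
)

# Inside the Lambda handler: a variable encrypted with the console's encryption
# helpers arrives base64-encoded and must be decrypted explicitly.
kms = boto3.client("kms")


def get_secret(name: str) -> str:
    ciphertext = base64.b64decode(os.environ[name])
    # Depending on how the variable was encrypted, an EncryptionContext such as
    # {"LambdaFunctionName": "<function name>"} may also be required.
    return kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"].decode("utf-8")


# Example usage inside the handler:
# db_password = get_secret("DB_PASSWORD")
```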
There was an incident in a production environment where user data stored in an Amazon S3 bucket was accidentally deleted by a Junior DevOps Engineer. The issue was escalated to management, and after a few days, an instruction was given to improve the security and protection of AWS resources.
What combination of the following options will protect the S3 objects in the bucket from both accidental deletion and overwriting? (Select TWO.)
✅ A. Enable Versioning
⬜ B. Provide access to S3 data strictly through pre-signed URL only
⬜ C. Disallow S3 Delete using an IAM bucket policy
⬜ D. Enable S3 Intelligent-Tiering
✅ E. Enable Multi-Factor Authentication Delete
Explanation:
A. Enabling Versioning preserves every version of every object. If an object is deleted or overwritten, the previous version is still available for recovery. This protects against both accidental deletions and overwrites.
E. Multi-Factor Authentication (MFA) Delete adds an additional security layer by requiring MFA authentication for permanently deleting objects or permanently deleting object versions. This minimizes the risk of accidental or malicious deletions.
Why other options are wrong:
B. Pre-signed URLs control who can access an object temporarily, but they do not prevent overwrites or deletions once access is granted.
C. While blocking deletes in an IAM policy can prevent deletion, it is too restrictive and does not provide the flexibility and auditability that Versioning + MFA Delete provide. IAM deny policies would also prevent authorized deletions.
D. S3 Intelligent-Tiering only manages object storage class transitions for cost optimization based on access patterns. It does not protect against overwrites or deletions.
Source: https://docs.aws.amazon.com/AmazonS3/latest/userguide/Versioning.html
Source: https://docs.aws.amazon.com/AmazonS3/latest/userguide/MultiFactorAuthenticationDelete.html
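A minimal boto3 sketch of the two protections; note that MFA Delete can only be enabled with the bucket owner's root credentials, and the bucket name, MFA serial number, and code shown are placeholders.
```python
import boto3

s3 = boto3.client("s3")

# Enable versioning so overwritten or deleted objects keep prior versions.
s3.put_bucket_versioning(
    Bucket="user-data-prod",  # placeholder bucket
    VersioningConfiguration={"Status": "Enabled"},
)

# Enable MFA Delete (must be called with root credentials and a current MFA code).
s3.put_bucket_versioning(
    Bucket="user-data-prod",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
)
```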
You are an AWS Solutions Architect. Your company has a successful web application deployed in an AWS Auto Scaling group. The application attracts more and more global customers. However, the application’s performance is impacted. Your manager asks you how to improve the performance and availability of the application. Which of the following AWS services would you recommend?
⬜ A. AWS DataSync
⬜ B. Amazon DynamoDB Accelerator
⬜ C. AWS Lake Formation
✅ D. AWS Global Accelerator
Explanation:
● AWS Global Accelerator improves the availability and performance of your application by routing traffic through the AWS global network infrastructure. It uses anycast IP addresses to direct users to the optimal endpoint based on health, geography, and routing policies, thereby reducing latency and increasing reliability.
● It is specifically designed to benefit global customers who experience performance issues due to long internet travel paths, making it a perfect match for this scenario.
Why other options are wrong:
A. AWS DataSync is a service for data transfer between on-premises storage and AWS. It does not improve application performance or availability.
B. Amazon DynamoDB Accelerator (DAX) is a caching service for DynamoDB, useful only if the application’s bottleneck was DynamoDB access, which is not indicated here.
C. AWS Lake Formation is for building secure data lakes, and is unrelated to improving the performance or availability of a web application.
Source: https://docs.aws.amazon.com/global-accelerator/latest/dg/what-is-global-accelerator.html
You host a static website in an S3 bucket and there are global clients from multiple regions. You want to use an AWS service to store cache for frequently accessed content so that the latency is reduced and the data transfer rate is increased. Which of the following options would you choose?
⬜ A. Use AWS SDKs to horizontally scale parallel requests to the Amazon S3 service endpoints.
⬜ B. Create multiple Amazon S3 buckets and put Amazon EC2 and S3 in the same AWS Region.
⬜ C. Enable Cross-Region Replication to several AWS Regions to serve customers from different locations.
✅ D. Configure CloudFront to deliver the content in the S3 bucket.
Explanation:
● Amazon CloudFront is a content delivery network (CDN) that caches copies of your content at edge locations around the world, helping to reduce latency and improve download speeds for your global clients.
● By configuring CloudFront to deliver the static website content stored in your S3 bucket, users will get the cached content from the nearest edge location, leading to faster access and reduced data transfer costs.
● This approach is highly efficient, cost-effective, and requires minimal changes to your existing S3 setup.
Why other options are wrong:
A. Use AWS SDKs to horizontally scale parallel requests to the Amazon S3 service endpoints does not solve the latency problem; it only tries to increase throughput to S3, but the data still comes from the original S3 location.
B. Create multiple Amazon S3 buckets and put Amazon EC2 and S3 in the same AWS Region would add complexity and management overhead without providing caching benefits.
C. Enable Cross-Region Replication to several AWS Regions replicates the data but does not cache or optimize user access the way a CDN like CloudFront would. It increases storage cost instead of reducing access latency.
Source: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html
Your company has an online game application deployed in an Auto Scaling group. The traffic of the application is predictable. Every Friday, the traffic starts to increase, remains high on weekends and then drops on Monday. You need to plan the scaling actions for the Auto Scaling group. Which method is the most suitable for the scaling policy?
⬜ A. Configure a scheduled CloudWatch event rule to launch/terminate instances at the specified time every week.
⬜ B. Create a predefined target tracking scaling policy based on the average CPU metric and the ASG will scale automatically.
⬜ C. Select the ASG and on the Automatic Scaling tab, add a step scaling policy to automatically scale-out/in at fixed time every week.
✅ D. Configure a scheduled action in the Auto Scaling group by specifying the recurrence, start/end time, capacities, etc.
Explanation:
● Scheduled actions in Auto Scaling groups allow you to increase or decrease the number of instances at predictable times based on recurring schedules.
● Since the application’s traffic pattern is predictable (every Friday to Monday), configuring scheduled actions to add capacity before the peak and reduce capacity afterward is the most operationally efficient and cost-effective solution.
● You can specify recurrence patterns (like cron expressions), start/end times, and the desired, minimum, and maximum capacities in the scheduled action.
Why other options are wrong:
A. Configure a scheduled CloudWatch event rule would require additional setup and scripting to manually launch and terminate instances, increasing operational overhead compared to using built-in scheduled actions in the Auto Scaling group.
B. Create a predefined target tracking scaling policy based on the average CPU metric is better for unpredictable or dynamic traffic patterns, not for known, scheduled traffic increases.
C. Select the ASG and add a step scaling policy is based on metrics crossing thresholds (e.g., CPU usage) rather than predictable, time-based scaling, which is needed here.
Source: https://docs.aws.amazon.com/autoscaling/ec2/userguide/scheduled-scaling.html
You are creating several EC2 instances for a new application. For better performance of the application, both low network latency and high network throughput are required for the EC2 instances. All instances should be launched in a single availability zone. How would you configure this?
✅ A. Launch all EC2 instances in a placement group using a Cluster placement strategy.
⬜ B. Auto-assign a public IP when launching the EC2 instances.
⬜ C. Launch EC2 instances in an EC2 placement group and select the Spread placement strategy.
⬜ D. When launching the EC2 instances, select an instance type that supports enhanced networking.
Explanation:
● Cluster placement group is specifically designed to provide low-latency and high-throughput networking between instances by placing them physically close together in the same Availability Zone.
● This setup is ideal for applications that require tightly coupled node-to-node communication, such as high-performance computing (HPC), big data workloads, or low-latency workloads.
● Instances launched into a Cluster placement group can take full advantage of their high network performance capabilities.
Why other options are wrong:
B. Auto-assign a public IP when launching the EC2 instances only provides internet access but does not impact network latency or throughput between instances.
C. Launch EC2 instances in an EC2 placement group and select the Spread placement strategy is intended to spread instances across different underlying hardware for high availability, not for achieving low latency and high throughput.
D. Select an instance type that supports enhanced networking does improve network performance, but without grouping instances together closely (as in a Cluster placement group), it does not guarantee low latency and high throughput between them.
Source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
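A minimal boto3 sketch: create a cluster placement group and launch the instances into it within a single Availability Zone. The group name, AMI, and instance type are placeholders.
```python
import boto3

ec2 = boto3.client("ec2")

# The cluster strategy packs instances close together for low latency and high throughput.
ec2.create_placement_group(GroupName="hpc-cluster-pg", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="c5n.9xlarge",       # instance type that supports enhanced networking
    MinCount=4,
    MaxCount=4,
    Placement={
        "GroupName": "hpc-cluster-pg",
        "AvailabilityZone": "us-east-1a",
    },
)
```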
You need to deploy a machine learning application in AWS EC2. The performance of inter-instance communication is very critical for the application and you want to attach a network device to the instance so that the performance can be greatly improved. Which option is the most appropriate to improve the performance?
⬜ A. Enable enhanced networking features in the EC2 instance.
✅ B. Configure Elastic Fabric Adapter (EFA) in the instance.
⬜ C. Attach high-speed Elastic Network Interface (ENI) in the instance.
⬜ D. Create an Elastic File System (EFS) and mount the file system in the instance.
Explanation:
● Elastic Fabric Adapter (EFA) is a network device that you can attach to an EC2 instance to achieve the low latency, high throughput, and highly scalable inter-instance communication necessary for tightly coupled node applications, such as machine learning and HPC workloads.
● EFA provides the capability for applications to communicate directly with other instances with very low latency, bypassing the operating system kernel, which is essential for performance-critical applications.
Why other options are wrong:
A. Enable enhanced networking features in the EC2 instance improves network performance in general but does not match the extremely low-latency communication performance that EFA provides.
C. Attach high-speed Elastic Network Interface (ENI) in the instance provides network connectivity, but it is not specifically optimized for the low-latency, high-throughput communication that machine learning workloads require.
D. Create an Elastic File System (EFS) and mount the file system in the instance is used for shared file storage, not for improving inter-instance network communication.
Source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/efa.html
You have an S3 bucket that receives photos uploaded by customers. When an object is uploaded, an event notification is sent to an SQS queue with the object details. You also have an ECS cluster that gets messages from the queue to do the batch processing. The queue size may change greatly depending on the number of incoming messages and backend processing speed. Which metric would you use to scale up/down the ECS cluster capacity?
✅ A. The number of messages in the SQS queue.
⬜ B. Memory usage of the ECS cluster.
⬜ C. Number of objects in the S3 bucket.
⬜ D. Number of containers in the ECS cluster.
Explanation:
● The number of messages in the SQS queue is the most relevant metric because it directly reflects the backlog of work that needs to be processed. If the number of messages increases, it indicates the need for more ECS tasks (containers) to handle the processing. Similarly, when the number decreases, you can scale down.
● This method allows dynamic scaling based on workload and ensures you are not overprovisioning or underprovisioning your ECS resources.
Why other options are wrong:
B. Memory usage of the ECS cluster measures resource usage but does not directly correlate with the amount of unprocessed workload (messages waiting in the queue).
C. Number of objects in the S3 bucket is irrelevant because the backend only processes what gets pushed to the SQS queue, not necessarily all objects stored in S3.
D. Number of containers in the ECS cluster is an output of scaling decisions, not an input metric that triggers scaling.
Source: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-auto-scaling.html
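A hedged sketch of scaling the ECS service on queue depth with Application Auto Scaling and a customized SQS metric. The cluster, service, and queue names are placeholders, and the target value is an assumption.
```python
import boto3

aas = boto3.client("application-autoscaling")
RESOURCE_ID = "service/photo-cluster/batch-processor"  # placeholder cluster/service

aas.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=RESOURCE_ID,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=1,
    MaxCapacity=20,
)

aas.put_scaling_policy(
    PolicyName="scale-on-sqs-backlog",
    ServiceNamespace="ecs",
    ResourceId=RESOURCE_ID,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        # Add or remove tasks to keep the visible message count near 100.
        "TargetValue": 100.0,
        "CustomizedMetricSpecification": {
            "Namespace": "AWS/SQS",
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Dimensions": [{"Name": "QueueName", "Value": "photo-uploads"}],
            "Statistic": "Average",
        },
        "ScaleInCooldown": 60,
        "ScaleOutCooldown": 60,
    },
)
```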
You are planning to build a fleet of EBS-optimized EC2 instances for your new application. Due to security compliance, your organization wants you to encrypt root volume which is used to boot the instances. How can this be achieved?
⬜ A. Select the Encryption option for the root EBS volume while launching the EC2 instance.
⬜ B. Once the EC2 instances are launched, encrypt the root volume using AWS KMS Master Key.
⬜ C. Root volumes cannot be encrypted. Add another EBS volume with an encryption option selected during launch. Once EC2 instances are launched, make encrypted EBS volume as root volume through the console.
✅ D. Launch an unencrypted EC2 instance and create a snapshot of the root volume. Make a copy of the snapshot with the encryption option selected and CreateImage using the encrypted snapshot. Use this image to launch EC2 instances.
Explanation:
● The correct approach is to first create a snapshot of the unencrypted root volume, then copy the snapshot with encryption enabled. After that, you can create a new AMI from the encrypted snapshot and use that AMI to launch new EC2 instances with an encrypted root volume.
● AWS does not allow you to directly encrypt an existing root EBS volume after instance launch without this snapshot and re-image process.
Why other options are wrong:
A. Select the Encryption option for the root EBS volume while launching the EC2 instance — You cannot directly encrypt a root volume at launch time if using an unencrypted AMI; you must first create an encrypted AMI.
B. Once the EC2 instances are launched, encrypt the root volume using AWS KMS Master Key — EBS volumes cannot be retroactively encrypted after the instance has launched; you must encrypt at the snapshot or AMI creation stage.
C. Root volumes cannot be encrypted. Add another EBS volume with an encryption option selected during launch — This is incorrect. Root volumes can be encrypted, just not by adding a secondary volume; you need to recreate the AMI using an encrypted snapshot.
Source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html
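A hedged boto3 sketch of option D: snapshot the unencrypted root volume, copy the snapshot with encryption, register an AMI from the encrypted copy, and launch from it. All IDs, the device name, and the region are placeholders.
```python
import boto3

REGION = "us-east-1"
ec2 = boto3.client("ec2", region_name=REGION)

# 1. Snapshot the unencrypted root volume of the source instance.
snap = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0", Description="root volume")
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# 2. Copy the snapshot with encryption enabled (uses the default EBS KMS key here).
copy = ec2.copy_snapshot(
    SourceRegion=REGION,
    SourceSnapshotId=snap["SnapshotId"],
    Encrypted=True,
)
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[copy["SnapshotId"]])

# 3. Register an AMI whose root device maps to the encrypted snapshot.
image = ec2.register_image(
    Name="app-ami-encrypted-root",
    Architecture="x86_64",
    VirtualizationType="hvm",
    RootDeviceName="/dev/xvda",
    BlockDeviceMappings=[{
        "DeviceName": "/dev/xvda",
        "Ebs": {"SnapshotId": copy["SnapshotId"], "DeleteOnTermination": True},
    }],
)

# 4. Launch the fleet from the encrypted AMI.
ec2.run_instances(ImageId=image["ImageId"], InstanceType="m5.large", MinCount=1, MaxCount=1)
```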
Organization XYZ is planning to build an online chat application for their enterprise level collaboration for their employees across the world. They are looking for a single digit latency fully managed database to store and retrieve conversations. What would AWS Database service you recommend?
✅ A. AWS DynamoDB
⬜ B. AWS RDS
⬜ C. AWS Redshift
⬜ D. AWS Aurora
Explanation:
● AWS DynamoDB is a fully managed NoSQL database service designed for high performance applications needing single-digit millisecond latency at any scale. It is ideal for use cases like real-time chat applications where quick read and write operations are critical.
● DynamoDB also provides seamless scaling, durability, and built-in security, which makes it highly suitable for enterprise-grade collaboration tools.
Why other options are wrong:
B. AWS RDS is a managed relational database service better suited for structured transactional workloads, but it does not guarantee the ultra-low latency needed for real-time chat.
C. AWS Redshift is designed for data warehousing and analytics, not for real-time transactional workloads like chat applications.
D. AWS Aurora is a high-performance relational database but is still not optimized for the single-digit millisecond latency typically required by real-time messaging applications.
Source: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html
When creating an AWS CloudFront distribution, which of the following is not an origin?
⬜ A. Elastic Load Balancer
⬜ B. AWS S3 bucket
⬜ C. AWS MediaPackage channel endpoint
✅ D. AWS Lambda
Explanation:
● AWS Lambda is not used as an origin for a CloudFront distribution. CloudFront origins are locations where CloudFront fetches content that it delivers to viewers. Lambda is a compute service, not a content source.
● Valid CloudFront origins include Amazon S3 buckets, HTTP servers, Elastic Load Balancers, and AWS MediaPackage endpoints.
Why other options are wrong:
A. Elastic Load Balancer can serve as a CloudFront origin, allowing CloudFront to distribute content from backend services running behind the load balancer.
B. AWS S3 bucket is a common origin for CloudFront, especially for static website hosting and file distribution.
C. AWS MediaPackage channel endpoint is supported as an origin for streaming media workflows, enabling CloudFront to cache live video streams.
Source: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web-values-specify.html#DownloadDistValuesOrigin
Which of the following statements are true with respect to VPC? (Choose Two.)
⬜ A. A subnet can have multiple route tables associated with it.
✅ B. A network ACL can be associated with multiple subnets.
⬜ C. A route with target “local” on the route table can be edited to restrict traffic within VPC.
✅ D. Subnet’s IP CIDR block can be same as the VPC CIDR block.
Explanation:
● B is correct because a single network ACL (NACL) can indeed be associated with multiple subnets, but a subnet can only be associated with one NACL at a time.
● D is correct because the subnet CIDR block can be the same as the VPC CIDR block if only one subnet is required in the VPC (though usually, subnets are smaller).
Why other options are wrong:
A. A subnet can have multiple route tables associated with it — Incorrect. A subnet can be associated with only one route table at a time.
C. A route with target “local” on the route table can be edited to restrict traffic within VPC — Incorrect. The “local” route enabling internal communication within the VPC is automatically created and cannot be modified or removed.
Source: https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html
Organization ABC has a customer base in the US and Australia that would be downloading 10s of GBs files from your application. For them to have a better download experience, they decided to use the AWS S3 bucket with cross-region replication with the US as the source and Australia as the destination. They are using existing unused S3 buckets and had set up cross-region replication successfully. However, when files uploaded to the US bucket, they are not being replicated to Australia bucket. What could be the reason?
⬜ A. Versioning is not enabled on the source and destination buckets.
⬜ B. Encryption is not enabled on the source and destination buckets.
✅ C. Source bucket has a policy with DENY and the role used for replication is not excluded from DENY.
⬜ D. Destination bucket’s default CORS policy does not have source bucket added as the origin.
Explanation:
● C is correct because if the source bucket has a DENY policy that blocks replication actions and the IAM role for replication is not excluded from that DENY, the replication operation will fail even if CRR is successfully configured.
Why other options are wrong:
A. Versioning is not enabled on the source and destination buckets — Incorrect. Cross-Region Replication cannot even be enabled without versioning. Since the question clearly says “cross-region replication set up successfully,” versioning must already be enabled.
B. Encryption is not enabled on the source and destination buckets — Incorrect. Encryption is not mandatory for cross-region replication to work.
D. Destination bucket’s default CORS policy does not have source bucket added as the origin — Incorrect. CORS settings control browser-based requests, not S3 replication between buckets.
Source: https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication.html
Which of the following is not a category in AWS Trusted Advisor service checks?
⬜ A. Cost Optimization
⬜ B. Fault Tolerance
⬜ C. Service Limits
✅ D. Network Optimization
Explanation:
● D is correct because Network Optimization is not a category in AWS Trusted Advisor. The main categories that Trusted Advisor covers are Cost Optimization, Performance, Security, Fault Tolerance, and Service Limits.
● Trusted Advisor helps customers reduce costs, increase performance, improve security, and monitor service limits, but Network Optimization is not a specific category.
Why other options are wrong:
A. Cost Optimization — This is a core category in Trusted Advisor, helping to identify unused or underutilized resources.
B. Fault Tolerance — Trusted Advisor checks for things like backup, redundancy, and disaster recovery, making this an important category.
C. Service Limits — Trusted Advisor monitors service usage against AWS service limits to prevent disruptions.
Source: https://docs.aws.amazon.com/awssupport/latest/user/trusted-advisor.html
Your organization is building a collaboration platform for which they chose AWS EC2 for web and application servers and MySQL RDS instance as the database. Due to the nature of the traffic to the application, they would like to increase the number of connections to RDS instances. How can this be achieved?
⬜ A. Login to RDS instance and modify database config file under /etc/mysql/my.cnf
✅ B. Create a new parameter group, attach it to the DB instance and change the setting.
⬜ C. Create a new option group, attach it to the DB instance and change the setting.
⬜ D. Modify setting in the default options group attached to the DB instance.
Explanation:
● B is correct because in Amazon RDS, you cannot directly access the operating system or the database’s configuration files. To modify database settings such as the maximum number of connections, you must create a custom DB parameter group, modify the required parameters, and attach the parameter group to the RDS instance.
Why other options are wrong:
A. Login to RDS instance and modify database config file under /etc/mysql/my.cnf — Incorrect. You cannot log in to the underlying host or modify OS-level files in RDS managed services.
C. Create a new option group, attach it to the DB instance and change the setting — Incorrect. Option groups are used for enabling features like Oracle TDE, SQL Server Audit, not for adjusting configuration parameters.
D. Modify setting in the default options group attached to the DB instance — Incorrect. Default groups cannot be modified, and the connection limit is a DB parameter rather than an option; a custom parameter group must be created and attached instead.
Source: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithParamGroups.html
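A minimal boto3 sketch of option B (the parameter group name, engine family, connection limit, and instance identifier are placeholders):
```python
import boto3

rds = boto3.client("rds")

# 1. Create a custom parameter group for the engine family in use.
rds.create_db_parameter_group(
    DBParameterGroupName="collab-mysql8-params",
    DBParameterGroupFamily="mysql8.0",
    Description="Raises the connection limit for the collaboration platform",
)

# 2. Raise max_connections (a dynamic parameter, so it can apply immediately).
rds.modify_db_parameter_group(
    DBParameterGroupName="collab-mysql8-params",
    Parameters=[{
        "ParameterName": "max_connections",
        "ParameterValue": "500",
        "ApplyMethod": "immediate",
    }],
)

# 3. Attach the parameter group to the DB instance.
rds.modify_db_instance(
    DBInstanceIdentifier="collab-db",
    DBParameterGroupName="collab-mysql8-params",
    ApplyImmediately=True,
)
```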
You will be launching and terminating EC2 instances on a need basis for your workloads. You need to run some shell scripts and perform certain checks connecting to the AWS S3 bucket when the instance is getting launched. Which of the following options will allow performing any tasks during launch? (Choose Two.)
✅ A. Use Instance user data for shell scripts.
⬜ B. Use Instance metadata for shell scripts.
✅ C. Use AutoScaling Group lifecycle hooks and trigger AWS Lambda function through CloudWatch events.
⬜ D. Use Placement Groups and set “InstanceLaunch” state to trigger AWS Lambda functions.
Explanation:
● A is correct because instance user data is specifically designed to run shell scripts and perform configurations during the instance launch time automatically.
● C is correct because Auto Scaling Group lifecycle hooks allow you to pause the instance launch or termination process and run custom actions (like Lambda functions) before the instance transitions to its next state.
Why other options are wrong:
B. Use Instance metadata for shell scripts — Incorrect. Instance metadata provides information about the instance (such as instance ID or security groups) but it is read-only and cannot be used to run scripts.
D. Use Placement Groups and set “InstanceLaunch” state to trigger AWS Lambda functions — Incorrect. Placement groups are for controlling instance placement for low-latency and high-throughput applications; they do not provide lifecycle hooks or event triggers.
Source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
Source: https://docs.aws.amazon.com/autoscaling/ec2/userguide/lifecycle-hooks.html
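A short sketch of option A: passing a shell script as instance user data so it runs at first boot. The AMI, instance profile, and bucket name are placeholders; boto3 base64-encodes the script automatically for RunInstances.
```python
import boto3

ec2 = boto3.client("ec2")

# Shell script executed by cloud-init at first boot.
user_data = """#!/bin/bash
set -e
# Check connectivity to the S3 bucket needed by the workload (placeholder bucket).
aws s3 ls s3://my-workload-config-bucket/ > /var/log/s3-check.log 2>&1
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,
    # The instance profile grants the S3 permissions the script needs (placeholder name).
    IamInstanceProfile={"Name": "workload-s3-access-profile"},
)
```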
Your organization has an AWS setup and planning to build Single Sign-On for users to authenticate with on-premise Microsoft Active Directory Federation Services (ADFS) and let users log in to the AWS console using AWS STS Enterprise Identity Federation. Which of the following services do you need to call from AWS STS service after you authenticate with your on-premise?
✅ A. AssumeRoleWithSAML
⬜ B. GetFederationToken
⬜ C. AssumeRoleWithWebIdentity
⬜ D. GetCallerIdentity
Explanation:
● A is correct because when federating with on-premises Active Directory via ADFS and SAML, the correct AWS STS API call is AssumeRoleWithSAML, which exchanges the SAML assertion for temporary AWS credentials.
Why other options are wrong:
B. GetFederationToken — Incorrect. This is used when federating with custom identity brokers, not for SAML-based authentication like ADFS.
C. AssumeRoleWithWebIdentity — Incorrect. This is for web identity federation (like Google, Facebook, Amazon Cognito), not SAML-based enterprise identity federation.
D. GetCallerIdentity — Incorrect. This simply returns details about the IAM user or role making the request; it is not related to authentication or federation.
Source: https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRoleWithSAML.html
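A minimal sketch of the AssumeRoleWithSAML call, assuming hypothetical role and identity-provider ARNs; the SAML assertion is the base64-encoded response returned by ADFS after the user authenticates.

    import boto3

    sts = boto3.client("sts")

    saml_assertion = "<base64-encoded SAMLResponse returned by ADFS>"   # placeholder

    response = sts.assume_role_with_saml(
        RoleArn="arn:aws:iam::123456789012:role/ADFS-PowerUsers",       # hypothetical
        PrincipalArn="arn:aws:iam::123456789012:saml-provider/ADFS",    # hypothetical
        SAMLAssertion=saml_assertion,
        DurationSeconds=3600,
    )

    # Temporary credentials used to sign AWS console/API requests.
    credentials = response["Credentials"]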
Your organization was planning to develop a web application on AWS EC2. An application admin was tasked with performing the AWS setup required to spin up an EC2 instance inside an existing private VPC. The admin created a subnet and wants to ensure that no other subnets in the VPC can communicate with it except for a specific IP address, so the admin created a new route table and associated it with the new subnet. When the admin tried to delete the route with the target "local", there was no option to delete the route. What could have caused this behavior?
⬜ A. Policy attached to IAM user does not have access to remove routes.
✅ B. A route with the target as local cannot be deleted.
⬜ C. You cannot add/delete routes when associated with the subnet. Remove associated, add/delete routes and associate again with the subnet.
⬜ D. There must be at least one route on the route table. Add a new route to enable delete option on existing routes.
Explanation:
● B is correct because the “local” route in a VPC route table enables communication within the VPC and is automatically created by AWS. It cannot be deleted or modified manually. It ensures that resources in the VPC can reach each other.
Why other options are wrong:
A. Policy attached to IAM user does not have access to remove routes — Incorrect. Even with full permissions, AWS does not allow deletion of the local route.
C. You cannot add/delete routes when associated with the subnet — Incorrect. You can freely modify route tables whether or not they are associated with a subnet, except for the local route.
D. There must be at least one route on the route table. Add a new route to enable delete option on existing routes — Incorrect. The inability to delete the “local” route is not because of the number of routes, it’s a system-enforced rule.
Source: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Route_Tables.html#RouteTableRules
You have configured an AWS S3 event notification to send a message to AWS Simple Queue Service whenever an object is deleted. You perform a ReceiveMessage API operation on the SQS queue to receive the S3 delete-object messages on an AWS EC2 instance. You delete successfully processed messages from the queue; for failed operations, you do not delete the messages. You have developed a retry mechanism that reruns the application every 5 minutes for failed ReceiveMessage operations. However, you are not receiving the messages again during the rerun. What could have caused this?
⬜ A. AWS SQS deletes the message after it has been read through ReceiveMessage API
⬜ B. You are using Long Polling which does not guarantee message delivery.
⬜ C. Failed ReceiveMessage queue messages are automatically sent to Dead Letter Queues. You need to ReceiveMessage from Dead Letter Queue for failed retries.
✅ D. Visibility Timeout on the SQS queue is set to 10 minutes.
Explanation:
● D is correct because when a message is received from SQS, it becomes invisible for the duration of the visibility timeout (default is 30 seconds but can be configured). If the visibility timeout is set to 10 minutes, the message will not be available for reprocessing for 10 minutes, even if the application retries in 5 minutes. The message will only reappear in the queue after the visibility timeout expires if not deleted.
Why other options are wrong:
A. AWS SQS deletes the message after it has been read through ReceiveMessage API — Incorrect. SQS does not delete a message just because it was read; the application must explicitly call DeleteMessage API to remove it.
B. You are using Long Polling which does not guarantee message delivery — Incorrect. Long polling reduces the number of empty ReceiveMessage responses and can improve efficiency; it does not impact delivery guarantees.
C. Failed ReceiveMessage queue messages are automatically sent to Dead Letter Queues — Incorrect. Messages only move to Dead Letter Queues after exceeding the configured maxReceiveCount, not immediately after a failed receive.
Source: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html
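The following boto3 sketch (queue URL hypothetical) shows why the 5-minute retry sees nothing: a received but undeleted message stays invisible for the full visibility timeout, here 10 minutes set on the ReceiveMessage call.

    import boto3

    sqs = boto3.client("sqs")
    queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/s3-delete-events"  # hypothetical

    messages = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=10,
        VisibilityTimeout=600,   # message hidden for 10 minutes after this receive
        WaitTimeSeconds=20,      # long polling
    ).get("Messages", [])

    for msg in messages:
        # ... process the S3 delete-object notification ...
        # Only successful processing deletes the message; otherwise it reappears
        # in the queue after the visibility timeout expires.
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])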
Your organization has an existing VPC setup and a requirement to route any traffic going from the VPC to AWS S3 buckets through the AWS internal network, so it created a VPC endpoint for S3 and configured it to allow traffic for its S3 buckets. The application you are developing also sends traffic to an AWS S3 bucket from the VPC, for which you planned to use a similar approach. You have created a new route table, added a route to the VPC endpoint, and associated the route table with your new subnet. However, when you send a request from EC2 to the S3 bucket using the AWS CLI, the request fails with a 403 Access Denied error. What could be causing the failure?
⬜ A. AWS S3 bucket is in a different region than your VPC.
⬜ B. EC2 security group outbound rules not allowing traffic to S3 prefix list.
✅ C. VPC endpoint might have a restrictive policy and does not contain the new S3 bucket.
⬜ D. S3 bucket CORS configuration does not have EC2 instances as the origin.
Explanation:
● C is correct because if your VPC endpoint policy is restrictive and does not allow access to the specific S3 bucket you are trying to access, AWS will return a 403 Access Denied error. You must ensure that the VPC endpoint policy allows the necessary S3 actions on the relevant bucket.
Why other options are wrong:
A. AWS S3 bucket is in a different region than your VPC — Incorrect. Gateway endpoints for S3 serve only buckets in the same Region, so a cross-Region request would fail to route through the endpoint rather than return a 403 Access Denied error, which is a permissions error.
B. EC2 security group outbound rules not allowing traffic to S3 prefix list — Incorrect. Security groups do not restrict outbound traffic by default. Outbound rules are open unless explicitly denied, and a 403 is not a networking error but a permissions error.
D. S3 bucket CORS configuration does not have EC2 instances as the origin — Incorrect. CORS configuration matters only for cross-origin browser-based requests, not for CLI/API traffic from EC2 instances.
Source: https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html
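A sketch of how the endpoint policy could be widened to include the new bucket (the endpoint ID and bucket names are hypothetical); without the new bucket in the policy's Resource list, requests through the endpoint return 403.

    import json
    import boto3

    ec2 = boto3.client("ec2")

    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::existing-bucket",
                "arn:aws:s3:::existing-bucket/*",
                "arn:aws:s3:::new-app-bucket",       # the bucket the new application uses
                "arn:aws:s3:::new-app-bucket/*",
            ],
        }],
    }

    ec2.modify_vpc_endpoint(
        VpcEndpointId="vpce-0123456789abcdef0",
        PolicyDocument=json.dumps(policy),
    )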
You have launched an RDS instance with MySQL database with default configuration for your file sharing application to store all the transactional information. Due to security compliance, your organization wants to encrypt all the databases and storage on the cloud. They approached you to perform this activity on your MySQL RDS database. How can you achieve this?
✅ A. Copy snapshot from the latest snapshot of your RDS instance, select encryption during copy and restore a new DB instance from the newly encrypted snapshot.
⬜ B. Stop the RDS instance, modify and select the encryption option. Start the RDS instance, it may take a while to start an RDS instance as existing data is getting encrypted.
⬜ C. Create a case with AWS support to enable encryption for your RDS instance.
⬜ D. AWS RDS is a managed service and the data at rest in all RDS instances are encrypted by default.
Explanation:
● A is correct because once an Amazon RDS database instance is created without encryption, you cannot enable encryption directly on it.
● Instead, you must take a snapshot, copy the snapshot while selecting encryption, and then restore a new DB instance from this encrypted snapshot.
Why other options are wrong:
B. Stop the RDS instance, modify and select the encryption option — Incorrect. Encryption cannot be enabled on an existing RDS instance by simply modifying it. It must be done through creating an encrypted snapshot and restoring from it.
C. Create a case with AWS support to enable encryption for your RDS instance — Incorrect. AWS support cannot retroactively encrypt an existing RDS instance.
D. AWS RDS is a managed service and the data at rest in all RDS instances are encrypted by default — Incorrect. Encryption is optional at creation. It is not enabled by default unless explicitly selected.
Source: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html
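A minimal sketch of option A with hypothetical identifiers: copy the latest snapshot with a KMS key to produce an encrypted snapshot, then restore a new, encrypted instance from it.

    import boto3

    rds = boto3.client("rds")

    # Encryption is applied during the snapshot copy.
    rds.copy_db_snapshot(
        SourceDBSnapshotIdentifier="filesharing-db-snap",
        TargetDBSnapshotIdentifier="filesharing-db-snap-encrypted",
        KmsKeyId="alias/aws/rds",   # AWS managed RDS key, or a customer-managed key
    )

    # Restore a new (encrypted) instance from the encrypted snapshot.
    rds.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier="filesharing-db-encrypted",
        DBSnapshotIdentifier="filesharing-db-snap-encrypted",
    )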
You have successfully set up a VPC peering connection in your account between two VPCs – VPC A and VPC B, each in a different region. When you are trying to make a request from VPC A to VPC B, the request fails. Which of the following could be a reason?
⬜ A. Cross-region peering is not supported in AWS.
⬜ B. CIDR blocks of both VPCs might be overlapping.
✅ C. Routes not configured in route tables for peering connections.
⬜ D. VPC A security group default outbound rules not allowing traffic to VPC B IP range.
Explanation:
● C is correct because even after successfully establishing a VPC peering connection, you must manually configure the route tables in both VPCs to allow communication through the peering connection. Without appropriate routes, the traffic will not flow.
Why other options are wrong:
A. Cross-region peering is not supported in AWS — Incorrect. AWS fully supports cross-region VPC peering.
B. CIDR blocks of both VPCs might be overlapping — Incorrect. If the CIDR blocks overlap, AWS will not allow the creation of the peering connection in the first place.
D. VPC A security group default outbound rules not allowing traffic — Incorrect. By default, a security group allows all outbound traffic, so this will not cause the issue.
Source: https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html
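A sketch of the missing piece, assuming hypothetical CIDRs, route table IDs, and a peering connection ID: each VPC's route table needs a route to the other VPC's CIDR that targets the peering connection.

    import boto3

    peering_id = "pcx-0123456789abcdef0"                        # hypothetical

    ec2_a = boto3.client("ec2", region_name="us-east-1")        # VPC A's Region
    ec2_b = boto3.client("ec2", region_name="ap-southeast-2")   # VPC B's Region

    # In VPC A: route traffic destined for VPC B's CIDR over the peering connection.
    ec2_a.create_route(
        RouteTableId="rtb-0aaa1111bbbb2222c",
        DestinationCidrBlock="10.1.0.0/16",       # VPC B CIDR
        VpcPeeringConnectionId=peering_id,
    )

    # In VPC B: the reverse route back to VPC A's CIDR.
    ec2_b.create_route(
        RouteTableId="rtb-0ccc3333dddd4444e",
        DestinationCidrBlock="10.0.0.0/16",       # VPC A CIDR
        VpcPeeringConnectionId=peering_id,
    )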
Which of the following statements are true in terms of allowing/denying traffic from/to VPC assuming the default rules are not in effect? (Choose Two.)
⬜ A. In a Network ACL, for a successful HTTPS connection, add an inbound rule with HTTPS type, IP range in source and ALLOW traffic.
✅ B. In a Network ACL, for a successful HTTPS connection, you must add an inbound rule and outbound rule with HTTPS type, IP range in source and destination respectively and ALLOW traffic.
✅ C. In a Security Group, for a successful HTTPS connection, add an inbound rule with HTTPS type and IP range in the source.
⬜ D. In a Security Group, for a successful HTTPS connection, you must add an inbound rule and outbound rule with HTTPS type, IP range in source and destination respectively.
Explanation:
● B is correct because Network ACLs are stateless, so traffic must be explicitly allowed both inbound and outbound. You must create both rules for an HTTPS connection to succeed (in practice, the outbound rule permits the return traffic, which uses ephemeral ports).
● C is correct because a Security Group is stateful, and you only need to allow inbound traffic for HTTPS (TCP port 443). The response traffic is automatically allowed.
Why other options are wrong:
A. In a Network ACL, adding only an inbound rule — Incorrect. NACLs are stateless and require both inbound and outbound rules for the connection to succeed.
D. In a Security Group, adding both inbound and outbound rules — Incorrect as a requirement. Security Groups are stateful; once inbound traffic is allowed, the return traffic is automatically permitted.
Source: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html
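A sketch of the stateless NACL rules (ACL ID and client CIDR are hypothetical): inbound HTTPS on port 443 plus an explicit outbound rule for the return traffic on ephemeral ports.

    import boto3

    ec2 = boto3.client("ec2")
    acl_id = "acl-0123456789abcdef0"   # hypothetical

    # Inbound: allow HTTPS from the client range.
    ec2.create_network_acl_entry(
        NetworkAclId=acl_id, RuleNumber=100, Protocol="6", RuleAction="allow",
        Egress=False, CidrBlock="203.0.113.0/24",
        PortRange={"From": 443, "To": 443},
    )

    # Outbound: allow the return traffic (ephemeral ports) back to the client range.
    ec2.create_network_acl_entry(
        NetworkAclId=acl_id, RuleNumber=100, Protocol="6", RuleAction="allow",
        Egress=True, CidrBlock="203.0.113.0/24",
        PortRange={"From": 1024, "To": 65535},
    )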
A company needs to ensure that its AWS cloud environment is protected from DDoS attacks. Which AWS service would be the most appropriate for this requirement?
⬜ A. AWS WAF
⬜ B. Amazon GuardDuty
⬜ C. Amazon Inspector
✅ D. AWS Shield
Explanation:
D. AWS Shield provides DDoS protection and is specifically designed to safeguard AWS applications against DDoS attacks.
Why other options are wrong:
A. AWS WAF is primarily used for filtering traffic to applications but does not specifically provide DDoS protection.
B. Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity but does not specifically provide DDoS mitigation.
C. Amazon Inspector is used for security assessment and compliance of applications but does not offer DDoS protection.
Source: https://docs.aws.amazon.com/waf/latest/developerguide/ddos-overview.html
A company has deployed a multi-tier web application in AWS. The application’s web servers are in an Auto Scaling group distributed across multiple Availability Zones. They need a resilient data store that can handle the loss of an Availability Zone. Which AWS services should be used together to meet these requirements? (Select TWO.)
✅ A. Amazon RDS with Multi-AZ deployment
✅ B. Amazon DynamoDB with global tables
⬜ C. Amazon EC2 with Elastic Load Balancing
⬜ D. Amazon S3 with cross-region replication
⬜ E. Amazon EBS with snapshot copies to another AZ
Explanation:
A. RDS Multi-AZ deployment ensures high availability and automatic failover capability.
B. DynamoDB with global tables offers replication across multiple regions, enhancing resilience.
Why other options are wrong:
C. EC2 with Elastic Load Balancing distributes load but does not address data store resilience.
D. S3 with cross-region replication is great for durability but not essential for database resilience in this context.
E. EBS snapshots provide backup solutions but do not offer real-time failover capability.
Source: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html
Source: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/globaltables.html
An application requires a database solution that can automatically scale to meet sudden increases in demand while maintaining consistent performance. Which combination of AWS services would best meet these requirements? (Select TWO.)
⬜ A. Amazon RDS with provisioned IOPS
✅ B. Amazon DynamoDB with on-demand capacity
⬜ C. Amazon EC2 Auto Scaling
✅ D. Amazon ElastiCache
⬜ E. AWS Lambda
Explanation:
B. DynamoDB with on-demand capacity can automatically adjust for varying loads.
D. ElastiCache enhances database performance through caching, handling spikes efficiently.
Why other options are wrong:
A. RDS with provisioned IOPS offers predictable performance but may not auto-scale rapidly.
C. EC2 Auto Scaling adjusts compute but not the database layer.
E. Lambda provides serverless compute, not directly related to database scalability.
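A minimal sketch of option B with a hypothetical table: on-demand (PAY_PER_REQUEST) capacity removes the need to provision read/write units and absorbs sudden spikes automatically.

    import boto3

    dynamodb = boto3.client("dynamodb")

    dynamodb.create_table(
        TableName="orders",                          # hypothetical
        AttributeDefinitions=[{"AttributeName": "order_id", "AttributeType": "S"}],
        KeySchema=[{"AttributeName": "order_id", "KeyType": "HASH"}],
        BillingMode="PAY_PER_REQUEST",               # on-demand capacity mode
    )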
Which of the following AWS services offers the most cost-effective solution for storing infrequently accessed data over a long period?
⬜ A. Amazon EBS
⬜ B. Amazon S3 Standard
✅ C. Amazon S3 Glacier
⬜ D. Amazon RDS
Explanation:
C. Amazon S3 Glacier is designed for long-term, infrequently accessed data storage at a very low cost. It is ideal for archival and backup use cases where retrieval times of minutes to hours are acceptable.
Why other options are wrong:
A. Amazon EBS is block storage optimized for low-latency access and is relatively expensive for infrequent access scenarios.
B. Amazon S3 Standard is optimized for frequent access to data, making it costlier than S3 Glacier for long-term storage.
D. Amazon RDS is a managed relational database service, intended for active databases and not suitable or cost-effective for simple, infrequently accessed data storage.
Source: https://aws.amazon.com/s3/storage-classes/
To ensure business continuity, a company requires an RTO (Recovery Time Objective) of 2 hours and an RPO (Recovery Point Objective) of 15 minutes for its critical application. Which AWS disaster recovery strategy best fits these requirements?
⬜ A. Backup and restore
⬜ B. Pilot light
✅ C. Multi-site active-active
⬜ D. Warm standby
Explanation:
C. Multi-site active-active provides near-zero RTO and very low RPO, making it the best choice for critical applications needing high availability and minimal data loss. Both environments (primary and secondary) are fully functional, and traffic can be shifted immediately in case of failure.
Why other options are wrong:
A. Backup and restore typically has high RTO and RPO because restoring systems from backup takes significant time.
B. Pilot light maintains a minimal version of the environment and requires time to scale up, thus RTO could be longer than 2 hours.
D. Warm standby is close but generally results in a slightly longer RPO/RTO compared to active-active, making it less ideal for a strict 15-minute RPO.
Source: https://aws.amazon.com/disaster-recovery/
Source: https://docs.aws.amazon.com/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-options-in-the-cloud.html
A Solutions Architect is designing a web application that experiences seasonal traffic. To optimize costs, which combination of AWS services should be used for compute resources? (Select TWO.)
✅ A. Amazon EC2 On-Demand Instances
⬜ B. Amazon EC2 Reserved Instances
✅ C. AWS Lambda
⬜ D. Amazon EC2 Spot Instances
⬜ E. Amazon EC2 Dedicated Hosts
Explanation:
● Amazon EC2 On-Demand Instances are flexible for fluctuating seasonal workloads without long-term commitment, ideal for handling unexpected spikes.
● AWS Lambda is a serverless option that automatically scales with demand, and you only pay per invocation, making it highly cost-efficient for unpredictable or seasonal traffic.
Why other options are wrong:
B. Amazon EC2 Reserved Instances are cost-effective for steady, predictable workloads, but not ideal for seasonal traffic patterns.
D. Amazon EC2 Spot Instances are cheaper but can be terminated with short notice, which might not be acceptable for a web application needing reliable availability.
E. Amazon EC2 Dedicated Hosts are intended for licensing and compliance needs, not cost optimization for seasonal workloads.
Source: https://aws.amazon.com/ec2/pricing/
An organization wants to secure its AWS infrastructure by ensuring that access keys are not embedded in code or left in accessible locations. Which AWS service or feature should be used to achieve this best practice?
✅ A. AWS Identity and Access Management (IAM)
⬜ B. AWS Key Management Service (KMS)
⬜ C. AWS Config
⬜ D. Amazon CloudWatch
Explanation:
A. AWS Identity and Access Management (IAM) provides roles and temporary security credentials so that applications can securely access AWS services without hardcoding access keys in code or storing them in insecure locations. This is the recommended best practice.
Why other options are wrong:
B. AWS Key Management Service (KMS) is used for encryption and key management, not for access key management in applications.
C. AWS Config is a service for tracking resource configurations and compliance but does not directly manage application access credentials.
D. Amazon CloudWatch is used for monitoring AWS resources and applications but is unrelated to access key management.
Source: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html
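A short sketch of the best practice: with an IAM role attached to the EC2 instance (or assumed via STS), the SDK resolves temporary credentials automatically, so nothing is hardcoded.

    import boto3

    # No access keys in code or config files: boto3 obtains temporary credentials
    # from the instance profile (role) through the default credential chain.
    s3 = boto3.client("s3")
    print([b["Name"] for b in s3.list_buckets()["Buckets"]])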
A Solutions Architect is designing a cost-optimized architecture for a workload that has predictable, consistent performance requirements. Which EC2 purchasing option would be the most cost-effective?
⬜ A. On-Demand Instances
⬜ B. Spot Instances
✅ C. Reserved Instances
⬜ D. Dedicated Hosts
Explanation:
C. Reserved Instances provide a significant discount compared to On-Demand pricing when you commit to using Amazon EC2 over a one-year or three-year term. They are ideal for workloads with predictable, steady usage patterns, making them the most cost-effective choice in this scenario.
Why other options are wrong:
A. On-Demand Instances are better suited for short-term, irregular workloads, and are more expensive for steady, predictable usage.
B. Spot Instances offer the lowest cost but are designed for flexible workloads that can handle interruptions, not predictable workloads requiring consistent performance.
D. Dedicated Hosts provide physical servers for licensing or compliance needs, but are usually more costly and are not the best option just for cost-optimization.
Source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ri-overview.html
A Solutions Architect needs to design a secure communication channel for transferring sensitive data between an Amazon EC2 instance and an S3 bucket. Which combination of actions should be taken to ensure data security? (Select TWO.)
✅ A. Enable SSL/TLS on the S3 bucket
⬜ B. Use Amazon Macie for data classification
✅ C. Implement server-side encryption with AWS KMS
⬜ D. Restrict S3 bucket access with IAM roles
⬜ E. Configure a VPC endpoint for S3
Explanation:
A. Enabling SSL/TLS for connections to the S3 bucket ensures that all data in transit between the EC2 instance and the bucket is encrypted, protecting sensitive information from interception.
C. Implementing server-side encryption with AWS KMS provides encryption at rest, ensuring that data stored in the S3 bucket is protected with customer-managed encryption keys.
Why other options are wrong:
B. Use Amazon Macie is for data classification and identifying sensitive information, but it does not create a secure communication channel.
D. Restrict S3 bucket access with IAM roles controls who can access the bucket but does not inherently secure the communication channel itself.
E. Configure a VPC endpoint for S3 improves private connectivity but does not inherently encrypt data; SSL/TLS is still needed for encrypted transport.
Source: https://docs.aws.amazon.com/AmazonS3/latest/userguide/serv-side-encryption.html
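One common way to implement both controls, sketched with a hypothetical bucket and KMS key alias: a bucket policy that rejects non-TLS requests, plus SSE-KMS as the bucket's default encryption.

    import json
    import boto3

    s3 = boto3.client("s3")
    bucket = "sensitive-reports-bucket"   # hypothetical

    # Deny any request that does not use TLS.
    s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }],
    }))

    # Encrypt new objects at rest with a KMS key by default.
    s3.put_bucket_encryption(Bucket=bucket, ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {
            "SSEAlgorithm": "aws:kms",
            "KMSMasterKeyID": "alias/reports-key",   # hypothetical customer-managed key
        }}],
    })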
A Solutions Architect needs to design a system that can handle sudden, unpredictable spikes in web traffic. Which AWS service combination is the BEST for this requirement?
⬜ A. Amazon EC2 with Elastic Load Balancing
⬜ B. Amazon RDS with Read Replicas
⬜ C. Amazon S3 with AWS CloudFront
✅ D. Auto Scaling with Elastic Load Balancing
Explanation:
● Auto Scaling with Elastic Load Balancing is the best choice because Auto Scaling automatically adjusts the number of Amazon EC2 instances to handle sudden traffic spikes, while Elastic Load Balancing distributes incoming traffic across healthy instances to maintain availability and performance.
Why other options are wrong:
A. Amazon EC2 with Elastic Load Balancing can distribute traffic but without Auto Scaling, it cannot automatically add more instances during traffic spikes.
B. Amazon RDS with Read Replicas is used to scale database read operations, not web server traffic handling.
C. Amazon S3 with AWS CloudFront is excellent for static content delivery but not suitable for dynamically scaling compute resources needed for web traffic spikes.
Source: https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html
A Solutions Architect needs to improve the data processing speed of a large-scale analytics application. The current setup uses multiple EC2 instances accessing data stored in Amazon S3. Which service should be implemented to enhance the processing speed without significant changes to the existing infrastructure?
⬜ A. Amazon EBS
⬜ B. AWS Lambda
✅ C. Amazon ElastiCache
⬜ D. Amazon Elastic File System (EFS)
Explanation:
● Amazon ElastiCache is the best option because it provides in-memory caching, which significantly improves data retrieval speeds for frequently accessed data. This enhancement can be integrated with minimal changes to the existing architecture, boosting the application’s overall processing speed.
Why other options are wrong:
A. Amazon EBS is a block storage service for individual EC2 instances, not suitable for shared or distributed caching across multiple instances.
B. AWS Lambda is ideal for serverless compute tasks, not for accelerating data retrieval in an EC2-based analytics setup.
D. Amazon Elastic File System (EFS) provides shared file storage for EC2 but does not offer the low-latency in-memory performance that ElastiCache provides.
Source: https://docs.aws.amazon.com/elasticache/latest/mem-ug/WhatIs.html
To comply with data security standards, an organization needs to rotate its AWS IAM user access keys regularly. Which AWS service simplifies this process?
✅ A. AWS Secrets Manager
⬜ B. AWS Config
⬜ C. Amazon Inspector
⬜ D. AWS CloudTrail
Explanation:
● AWS Secrets Manager simplifies the management, rotation, and secure storage of credentials, including IAM access keys when paired with a custom rotation Lambda function. It can rotate secrets automatically on a defined schedule, helping organizations meet security compliance requirements.
Why other options are wrong:
B. AWS Config tracks configuration changes of AWS resources but does not perform rotation of credentials.
C. Amazon Inspector assesses the security vulnerabilities of EC2 instances and containers but does not manage IAM keys.
D. AWS CloudTrail records API calls and activities for auditing purposes but does not handle key rotation.
Source: https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html
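A sketch of scheduled rotation with hypothetical ARNs: Secrets Manager invokes the rotation Lambda function on the configured schedule to generate and store the new credential version.

    import boto3

    sm = boto3.client("secretsmanager")

    sm.rotate_secret(
        SecretId="prod/reporting-user/access-key",                                             # hypothetical secret
        RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:rotate-access-key",  # hypothetical function
        RotationRules={"AutomaticallyAfterDays": 90},
    )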
A company has a financial application that produces reports. The reports average 50 KB in size and are stored in Amazon S3. The reports are frequently accessed during the first week after production and must be stored for several years. The reports must be retrievable within 6 hours. Which solution meets these requirements MOST cost-effectively?
✅ A. Use S3 Standard. Use an S3 Lifecycle rule to transition the reports to S3 Glacier after 7 days.
⬜ B. Use S3 Standard. Use an S3 Lifecycle rule to transition the reports to S3 Standard-Infrequent Access (S3 Standard-IA) after 7 days.
⬜ C. Use S3 Intelligent-Tiering. Configure S3 Intelligent-Tiering to transition the reports to S3 Standard-Infrequent Access (S3 Standard-IA) and S3 Glacier.
⬜ D. Use S3 Standard. Use an S3 Lifecycle rule to transition the reports to S3 Glacier Deep Archive after 7 days.
Explanation:
● S3 Glacier is a cost-effective storage class for data that is infrequently accessed but needs to be retrieved within hours. Since the requirement specifies a retrieval time of within 6 hours, S3 Glacier is ideal. Transitioning after 7 days when access drops is also a good optimization.
Why other options are wrong:
B. S3 Standard-IA is less costly than S3 Standard but is more expensive compared to Glacier for long-term storage needs.
C. S3 Intelligent-Tiering charges monitoring and automation fees; it’s better for unpredictable access patterns, not for known predictable patterns like in this case.
D. S3 Glacier Deep Archive has a retrieval time of up to 12 hours, which exceeds the 6-hour requirement.
Source: https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-transition-general-considerations.html
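A minimal sketch of the lifecycle rule (bucket name and prefix are hypothetical) that transitions reports to S3 Glacier 7 days after creation.

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="financial-reports-bucket",
        LifecycleConfiguration={"Rules": [{
            "ID": "archive-reports-after-7-days",
            "Status": "Enabled",
            "Filter": {"Prefix": "reports/"},
            "Transitions": [{"Days": 7, "StorageClass": "GLACIER"}],
        }]},
    )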
A company needs to review its AWS Cloud deployment to ensure that its Amazon S3 buckets do not have unauthorized configuration changes. What should a solutions architect do to accomplish this goal?
✅ A. Turn on AWS Config with the appropriate rules.
⬜ B. Turn on AWS Trusted Advisor with the appropriate checks.
⬜ C. Turn on Amazon Inspector with the appropriate assessment template.
⬜ D. Turn on Amazon S3 server access logging. Configure Amazon EventBridge (Amazon CloudWatch Events).
Explanation:
A. AWS Config continuously monitors and records AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired baselines. This is the most suitable service for detecting unauthorized changes to S3 buckets.
Why other options are wrong:
B. AWS Trusted Advisor provides real-time best practice guidance but does not continuously monitor for configuration changes like AWS Config does.
C. Amazon Inspector is designed for assessing security vulnerabilities and not for monitoring S3 configuration changes.
D. Amazon S3 server access logging provides detailed records of the requests made to a bucket but does not track configuration changes or help automate compliance checks.
Source: https://docs.aws.amazon.com/config/latest/developerguide/managed-rules-by-aws-config.html
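As one example of "the appropriate rules", the sketch below enables an AWS managed Config rule that flags S3 buckets allowing public read access; comparable managed rules exist for bucket logging, encryption, and versioning.

    import boto3

    config = boto3.client("config")

    config.put_config_rule(ConfigRule={
        "ConfigRuleName": "s3-bucket-public-read-prohibited",
        "Source": {"Owner": "AWS", "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED"},
        "Scope": {"ComplianceResourceTypes": ["AWS::S3::Bucket"]},
    })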
A company has an automobile sales website that stores its listings in a database on Amazon RDS. When an automobile is sold, the listing needs to be removed from the website and the data must be sent to multiple target systems. Which design should a solutions architect recommend?
✅ A. Create an AWS Lambda function triggered when the database on Amazon RDS is updated to send the information to an Amazon Simple Queue Service (Amazon SQS) queue for the targets to consume.
⬜ B. Create an AWS Lambda function triggered when the database on Amazon RDS is updated to send the information to an Amazon Simple Queue Service (Amazon SQS) FIFO queue for the targets to consume.
⬜ C. Subscribe to an RDS event notification and send an Amazon Simple Queue Service (Amazon SQS) queue fanned out to multiple Amazon Simple Notification Service (Amazon SNS) topics. Use AWS Lambda functions to update the targets.
⬜ D. Subscribe to an RDS event notification and send an Amazon Simple Notification Service (Amazon SNS) topic fanned out to multiple Amazon Simple Queue Service (Amazon SQS) queues. Use AWS Lambda functions to update the targets.
Explanation:
A. Using an AWS Lambda function triggered by an update to the RDS database is a highly efficient way to decouple the application logic. The Lambda function can push the information to an Amazon SQS queue, allowing multiple consumers to process the update asynchronously and reliably.
Why other options are wrong:
B. FIFO queues are used when strict message order is required, which is unnecessary for this use case. A standard SQS queue is more cost-effective and sufficient.
C. RDS event notifications primarily alert about changes in the RDS instance state, not fine-grained database record changes (like a listing being sold). Hence, they are not suitable for this use case.
D. Similar to option C, RDS event notifications do not detect individual row updates, so using SNS with RDS event notifications will not satisfy the requirement.
Source: https://docs.aws.amazon.com/lambda/latest/dg/services-rds.html
A company uses AWS Organizations with all features enabled and runs multiple Amazon EC2 workloads in the ap-southeast-2 Region. The company has a service control policy (SCP) that prevents any resources from being created in any other Region. A security policy requires the company to encrypt all data at rest.
An audit discovers that employees have created Amazon Elastic Block Store (Amazon EBS) volumes for EC2 instances without encrypting the volumes. The company wants any new EC2 instances that any IAM user or root user launches in ap-southeast-2 to use encrypted EBS volumes. The company wants a solution that will have minimal effect on employees who create EBS volumes.
Which combination of steps will meet these requirements? (Choose two.)
⬜ A. In the Amazon EC2 console, select the EBS encryption account attribute and define a default encryption key.
⬜ B. Create an IAM permission boundary. Attach the permission boundary to the root organizational unit (OU). Define the boundary to deny the ec2:CreateVolume action when the ec2:Encrypted condition equals false.
✅ C. Create an SCP. Attach the SCP to the root organizational unit (OU). Define the SCP to deny the ec2:CreateVolume action when the ec2:Encrypted condition equals false.
⬜ D. Update the IAM policies for each account to deny the ec2:CreateVolume action when the ec2:Encrypted condition equals false.
✅ E. In the Organizations management account, specify the Default EBS volume encryption setting.
Explanation:
● C ensures that users cannot create unencrypted EBS volumes because the SCP will deny the ec2:CreateVolume action unless the ec2:Encrypted=true condition is met. This enforces encryption at the organization level.
● E ensures that all new EBS volumes are encrypted by default, without users needing to manually specify encryption. This meets the “minimal effect” requirement for employees.
Why other options are wrong:
A. Setting the default encryption at the EC2 console level only configures it for a single account, and not centrally across the organization.
B. IAM permission boundaries manage what roles/users can do, but do not centrally enforce default encryption across accounts like SCPs.
D. Updating IAM policies individually in each account is labor-intensive, error-prone, and operationally inefficient compared to SCP enforcement.
Source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html
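A sketch of the two steps with hypothetical IDs: an SCP attached to the root OU that denies creation of unencrypted volumes, and the account-level "encrypt EBS by default" setting in ap-southeast-2.

    import json
    import boto3

    org = boto3.client("organizations")

    scp = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyUnencryptedVolumes",
            "Effect": "Deny",
            "Action": "ec2:CreateVolume",
            "Resource": "*",
            "Condition": {"Bool": {"ec2:Encrypted": "false"}},
        }],
    }

    policy = org.create_policy(
        Name="require-ebs-encryption",
        Description="Deny creation of unencrypted EBS volumes",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(scp),
    )
    org.attach_policy(
        PolicyId=policy["Policy"]["PolicySummary"]["Id"],
        TargetId="r-exmp",   # hypothetical root OU ID
    )

    # Per account, make encryption the default so employees change nothing.
    boto3.client("ec2", region_name="ap-southeast-2").enable_ebs_encryption_by_default()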
A company has a three-tier web application that is in a single server. The company wants to migrate the application to the AWS Cloud. The company also wants the application to align with the AWS Well-Architected Framework and to be consistent with AWS recommended best practices for security, scalability, and resiliency.
Which combination of solutions will meet these requirements? (Choose three.)
⬜ A. Create a VPC across two Availability Zones with the application’s existing architecture. Host the application with existing architecture on an Amazon EC2 instance in a private subnet in each Availability Zone with EC2 Auto Scaling groups. Secure the EC2 instance with security groups and network access control lists (network ACLs).
⬜ B. Set up security groups and network access control lists (network ACLs) to control access to the database layer. Set up a single Amazon RDS database in a private subnet.
✅ C. Create a VPC across two Availability Zones. Refactor the application to host the web tier, application tier, and database tier. Host each tier on its own private subnet with Auto Scaling groups for the web tier and application tier.
⬜ D. Use a single Amazon RDS database. Allow database access only from the application tier security group.
✅ E. Use Elastic Load Balancers in front of the web tier. Control access by using security groups containing references to each layer’s security groups.
✅ F. Use an Amazon RDS database Multi-AZ cluster deployment in private subnets. Allow database access only from application tier security groups.
Explanation:
C. is correct because splitting the web, application, and database tiers into separate subnets and scaling them independently follows the AWS Well-Architected Framework’s best practices for scalability and high availability.
E. is correct because placing an Elastic Load Balancer (ELB) in front of the web tier improves scalability and availability. Using security groups with reference-based rules enhances security.
F. is correct because an Amazon RDS Multi-AZ deployment ensures resiliency, durability, and high availability of the database.
Why other options are wrong:
A. Retaining the monolithic architecture does not align with AWS best practices of scalability and decoupling the tiers.
B. A single RDS instance without Multi-AZ would not meet the resiliency requirement.
D. Using a single RDS database without Multi-AZ does not provide fault tolerance if the database Availability Zone fails.
Source: https://docs.aws.amazon.com/wellarchitected/latest/framework/welcome.html
A company needs to move data from an Amazon EC2 instance to an Amazon S3 bucket. The company must ensure that no API calls and no data are routed through public internet routes. Only the EC2 instance can have access to upload data to the S3 bucket. Which solution will meet these requirements?
✅ A. Create an interface VPC endpoint for Amazon S3 in the subnet where the EC2 instance is located. Attach a resource policy to the S3 bucket to only allow the EC2 instance’s IAM role for access.
⬜ B. Create a gateway VPC endpoint for Amazon S3 in the Availability Zone where the EC2 instance is located. Attach appropriate security groups to the endpoint. Attach a resource policy to the S3 bucket to only allow the EC2 instance’s IAM role for access.
⬜ C. Run the nslookup tool from inside the EC2 instance to obtain the private IP address of the S3 bucket’s service API endpoint. Create a route in the VPC route table to provide the EC2 instance with access to the S3 bucket. Attach a resource policy to the S3 bucket to only allow the EC2 instance’s IAM role for access.
⬜ D. Use the AWS provided, publicly available ip-ranges.json file to obtain the private IP address of the S3 bucket’s service API endpoint. Create a route in the VPC route table to provide the EC2 instance with access to the S3 bucket. Attach a resource policy to the S3 bucket to only allow the EC2 instance’s IAM role for access.
Explanation:
A. is correct because an interface VPC endpoint allows private connections from EC2 to S3 without going through the public internet. Resource policies can tightly control access at the S3 bucket level based on IAM roles.
Why other options are wrong:
B. Gateway VPC endpoints are correct for S3 access, but they operate at the route table level — they do not use security groups, so attaching security groups to a gateway endpoint is not possible.
C. Using nslookup to find S3 IP addresses is unreliable and not an AWS best practice. S3 endpoints are managed by AWS and IPs can change.
D. The ip-ranges.json file provides ranges of public IPs and does not guarantee private communication; it’s not suitable for private, secure routing to S3.
Source: https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints.html
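A sketch of option A with hypothetical IDs and names: create an interface endpoint for S3 in the instance's subnet, then restrict the bucket so only the instance's IAM role can upload.

    import json
    import boto3

    ec2 = boto3.client("ec2")
    s3 = boto3.client("s3")

    # Interface endpoint: an ENI in the subnet keeps S3 API calls on the AWS network.
    ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId="vpc-0123456789abcdef0",
        ServiceName="com.amazonaws.us-east-1.s3",
        SubnetIds=["subnet-0123456789abcdef0"],
        SecurityGroupIds=["sg-0123456789abcdef0"],
    )

    # Bucket policy: deny uploads from any principal other than the instance's role.
    s3.put_bucket_policy(Bucket="upload-target-bucket", Policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::upload-target-bucket/*",
            "Condition": {"StringNotEquals": {
                "aws:PrincipalArn": "arn:aws:iam::123456789012:role/uploader-role"
            }},
        }],
    }))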
A company is launching a new application and will display application metrics on an Amazon CloudWatch dashboard. The company’s product manager needs to access this dashboard periodically. The product manager does not have an AWS account. A solutions architect must provide access to the product manager by following the principle of least privilege. Which solution will meet these requirements?
✅ A. Share the dashboard from the CloudWatch console. Enter the product manager’s email address, and complete the sharing steps. Provide a shareable link for the dashboard to the product manager.
⬜ B. Create an IAM user specifically for the product manager. Attach the CloudWatchReadOnlyAccess AWS managed policy to the user. Share the new login credentials with the product manager. Share the browser URL of the correct dashboard with the product manager.
⬜ C. Create an IAM user for the company’s employees. Attach the ViewOnlyAccess AWS managed policy to the IAM user. Share the new login credentials with the product manager. Ask the product manager to navigate to the CloudWatch console and locate the dashboard by name in the Dashboards section.
⬜ D. Deploy a bastion server in a public subnet. When the product manager requires access to the dashboard, start the server and share the RDP credentials. On the bastion server, ensure that the browser is configured to open the dashboard URL with cached AWS credentials that have appropriate permissions to view the dashboard.
Explanation:
A. is correct because CloudWatch dashboard sharing lets you grant access to specific email addresses: the recipient receives a link and signs in with a password, without needing an AWS account or IAM user. This gives the product manager read-only access to just the dashboard, aligning with the principle of least privilege.
Why other options are wrong:
B. Creating an IAM user just for dashboard access is unnecessary and violates least privilege principles for users without AWS accounts.
C. Sharing IAM credentials is a security risk and is not best practice, even with limited permissions.
D. Setting up a bastion host with cached credentials adds unnecessary complexity, operational overhead, and security risks, which violates the principle of least privilege and simplicity.
Source: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Dashboard_Sharing.html
A company provides a Voice over Internet Protocol (VoIP) service that uses UDP connections. The service consists of Amazon EC2 instances that run in an Auto Scaling group. The company has deployments across multiple AWS Regions.
The company needs to route users to the Region with the lowest latency. The company also needs automated failover between Regions.
Which solution will meet these requirements?
✅ A. Deploy a Network Load Balancer (NLB) and an associated target group. Associate the target group with the Auto Scaling group. Use the NLB as an AWS Global Accelerator endpoint in each Region.
⬜ B. Deploy an Application Load Balancer (ALB) and an associated target group. Associate the target group with the Auto Scaling group. Use the ALB as an AWS Global Accelerator endpoint in each Region.
⬜ C. Deploy a Network Load Balancer (NLB) and an associated target group. Associate the target group with the Auto Scaling group. Create an Amazon Route 53 latency record that points to aliases for each NLB. Create an Amazon CloudFront distribution that uses the latency record as an origin.
⬜ D. Deploy an Application Load Balancer (ALB) and an associated target group. Associate the target group with the Auto Scaling group. Create an Amazon Route 53 weighted record that points to aliases for each ALB. Deploy an Amazon CloudFront distribution that uses the weighted record as an origin.
Explanation:
A. is correct because AWS Global Accelerator provides static IP addresses that intelligently route traffic to the closest healthy endpoint based on geographic location and performance. Global Accelerator supports UDP traffic (which is important for VoIP) and provides automatic failover between Regions.
Why other options are wrong:
B. ALBs do not support UDP traffic, which is a key requirement for a VoIP service.
C. Route 53 latency routing helps with regional selection but does not provide the same dynamic performance improvements or automatic health-based failover as AWS Global Accelerator. CloudFront is optimized for HTTP/HTTPS, not UDP.
D. ALBs do not support UDP traffic, and weighted routing with CloudFront is not appropriate for latency-sensitive, real-time UDP applications.
Source: https://docs.aws.amazon.com/global-accelerator/latest/dg/introduction-what-is-global-accelerator.html
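A sketch of the Global Accelerator setup (the NLB ARN, names, and ports are hypothetical; the Global Accelerator control-plane API is served from us-west-2 regardless of where the endpoints run): a UDP listener with an endpoint group per Region pointing at that Region's NLB.

    import boto3

    ga = boto3.client("globalaccelerator", region_name="us-west-2")

    accelerator = ga.create_accelerator(Name="voip-accelerator", IpAddressType="IPV4", Enabled=True)

    listener = ga.create_listener(
        AcceleratorArn=accelerator["Accelerator"]["AcceleratorArn"],
        Protocol="UDP",
        PortRanges=[{"FromPort": 5060, "ToPort": 5061}],
    )

    # One endpoint group per Region; traffic is routed to the closest healthy Region.
    ga.create_endpoint_group(
        ListenerArn=listener["Listener"]["ListenerArn"],
        EndpointGroupRegion="us-east-1",
        EndpointConfigurations=[{
            "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/voip/0123456789abcdef",
            "Weight": 128,
        }],
    )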
A company is migrating applications to AWS. The applications are deployed in different accounts. The company manages the accounts centrally by using AWS Organizations. The company’s security team needs a single sign-on (SSO) solution across all the company’s accounts. The company must continue managing the users and groups in its on-premises self-managed Microsoft Active Directory. Which solution will meet these requirements?
⬜ A. Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console. Create a one-way forest trust or a one-way domain trust to connect the company’s self-managed Microsoft Active Directory with AWS SSO by using AWS Directory Service for Microsoft Active Directory.
✅ B. Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console. Create a two-way forest trust to connect the company’s self-managed Microsoft Active Directory with AWS SSO by using AWS Directory Service for Microsoft Active Directory.
⬜ C. Use AWS Directory Service. Create a two-way trust relationship with the company’s self-managed Microsoft Active Directory.
⬜ D. Deploy an identity provider (IdP) on premises. Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console.
Explanation:
B. is correct because AWS SSO can integrate with an on-premises Active Directory using AWS Directory Service for Microsoft Active Directory by establishing a two-way trust relationship. This allows users to continue using their existing AD credentials for SSO across AWS accounts managed by AWS Organizations.
Why other options are wrong:
A. A one-way trust would not allow AWS SSO to validate user credentials against the on-premises Active Directory properly.
C. While creating a two-way trust with AWS Directory Service is part of the solution, AWS SSO must also be explicitly enabled and configured to work across AWS Organizations accounts.
D. Although deploying an IdP on premises is possible, it is more complex and unnecessary because AWS SSO natively supports integration with an on-premises Active Directory using Directory Service.
Source: https://docs.aws.amazon.com/singlesignon/latest/userguide/integrating-aws-sso-with-ad.html
A company runs several Amazon RDS for Oracle On-Demand DB instances that have high utilization. The RDS DB instances run in member accounts that are in an organization in AWS Organizations.
The company’s finance team has access to the organization’s management account and member accounts. The finance team wants to find ways to optimize costs by using AWS Trusted Advisor.
Which combination of steps will meet these requirements? (Choose two.)
✅ A. Use the Trusted Advisor recommendations in the management account.
⬜ B. Use the Trusted Advisor recommendations in the member accounts where the RDS DB instances are running.
✅ C. Review the Trusted Advisor checks for Amazon RDS Reserved Instance Optimization.
⬜ D. Review the Trusted Advisor checks for Amazon RDS Idle DB Instances.
⬜ E. Review the Trusted Advisor checks for compute optimization. Crosscheck the results by using AWS Compute Optimizer.
Explanation:
● A is correct because the management account can aggregate and view Trusted Advisor recommendations for all member accounts when using AWS Organizations.
● C is correct because the Amazon RDS Reserved Instance Optimization check in Trusted Advisor suggests opportunities to purchase Reserved Instances based on historical usage patterns, helping optimize costs.
Why other options are wrong:
B. Using Trusted Advisor individually in each member account is not necessary when centralized management through the management account is available.
D. “Idle DB Instances” checks are useful for unused resources, but the question mentions high utilization, making Reserved Instance recommendations more relevant.
E. Compute Optimizer primarily analyzes EC2, Auto Scaling, and Lambda, not directly RDS. Hence, RDS optimization is not covered by Compute Optimizer.
Source: https://docs.aws.amazon.com/awssupport/latest/user/ta-best-practices.html
A company runs an ecommerce application on AWS. Amazon EC2 instances process purchases and store the purchase details in an Amazon Aurora PostgreSQL DB cluster.
Customers are experiencing application timeouts during times of peak usage. A solutions architect needs to rearchitect the application so that the application can scale to meet peak usage demands.
Which combination of actions will meet these requirements MOST cost-effectively? (Choose two.)
✅ A. Configure an Auto Scaling group of new EC2 instances to retry the purchases until the processing is complete. Update the applications to connect to the DB cluster by using Amazon RDS Proxy.
⬜ B. Configure the application to use an Amazon ElastiCache cluster in front of the Aurora PostgreSQL DB cluster.
✅ C. Update the application to send the purchase requests to an Amazon Simple Queue Service (Amazon SQS) queue. Configure an Auto Scaling group of new EC2 instances that read from the SQS queue.
⬜ D. Configure an AWS Lambda function to retry the ticket purchases until the processing is complete.
⬜ E. Configure an Amazon API Gateway REST API with a usage plan.
Explanation:
● A is correct because Amazon RDS Proxy manages a pool of database connections and helps reduce database load during high concurrency, and using Auto Scaling ensures that enough EC2 instances are available to handle retries.
● C is correct because using Amazon SQS decouples the ingestion and processing of purchase requests, allowing EC2 instances to scale out and process messages at their own pace without overwhelming the database.
Why other options are wrong:
B. ElastiCache is useful for caching read-heavy workloads but does not solve issues with write-intensive operations like purchase processing.
D. AWS Lambda is not ideal for long-running or retry-intensive operations that involve complex processing or database transactions.
E. API Gateway manages and throttles API calls but does not directly address the database or processing scalability issues faced during peak loads.
Source: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/rds-proxy.html
A company runs a highly available web application on Amazon EC2 instances behind an Application Load Balancer. The company uses Amazon CloudWatch metrics.
As the traffic to the web application increases, some EC2 instances become overloaded with many outstanding requests. The CloudWatch metrics show that the number of requests processed and the time to receive the responses from some EC2 instances are both higher compared to other EC2 instances. The company does not want new requests to be forwarded to the EC2 instances that are already overloaded.
Which solution will meet these requirements?
⬜ A. Use the round robin routing algorithm based on the RequestCountPerTarget and ActiveConnectionCount CloudWatch metrics.
✅ B. Use the least outstanding requests algorithm based on the RequestCountPerTarget and ActiveConnectionCount CloudWatch metrics.
⬜ C. Use the round robin routing algorithm based on the RequestCount and TargetResponseTime CloudWatch metrics.
⬜ D. Use the least outstanding requests algorithm based on the RequestCount and TargetResponseTime CloudWatch metrics.
Explanation:
● B is correct because the least outstanding requests algorithm combined with RequestCountPerTarget and ActiveConnectionCount metrics ensures that the load balancer sends new requests to targets (EC2 instances) with the fewest active connections and outstanding requests, preventing overloaded instances.
Why other options are wrong:
A. Round robin routing evenly distributes traffic without considering current load or outstanding requests, so overloaded instances can still receive traffic.
C. Round robin routing based on RequestCount and TargetResponseTime still does not dynamically avoid overloaded instances.
D. TargetResponseTime indicates latency but does not directly manage outstanding requests; it does not ensure the least-loaded instance is selected.
Source: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-target-groups.html#target-group-metrics-request-count-per-target
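A sketch of the target group change (ARN hypothetical): switch the routing algorithm attribute so new requests go to the target with the fewest in-flight requests.

    import boto3

    elbv2 = boto3.client("elbv2")

    elbv2.modify_target_group_attributes(
        TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/0123456789abcdef",
        Attributes=[{"Key": "load_balancing.algorithm.type", "Value": "least_outstanding_requests"}],
    )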
A company runs its customer-facing web application on containers. The workload uses Amazon Elastic Container Service (Amazon ECS) on AWS Fargate. The web application is resource intensive.
The web application needs to be available 24 hours a day, 7 days a week for customers. The company expects the application to experience short bursts of high traffic. The workload must be highly available.
Which solution will meet these requirements MOST cost-effectively?
⬜ A. Configure an ECS capacity provider with Fargate. Conduct load testing by using a third-party tool. Rightsize the Fargate tasks in Amazon CloudWatch.
✅ B. Configure an ECS capacity provider with Fargate for steady state and Fargate Spot for burst traffic.
⬜ C. Configure an ECS capacity provider with Fargate Spot for steady state and Fargate for burst traffic.
⬜ D. Configure an ECS capacity provider with Fargate. Use AWS Compute Optimizer to rightsize the Fargate task.
Explanation:
● B is correct because steady workloads should use regular Fargate to ensure high availability and reliability, while burst traffic can be handled by Fargate Spot, which is cheaper but can be interrupted. This provides cost optimization without sacrificing availability.
Why other options are wrong:
A. Load testing and rightsizing are good practices but do not address handling burst traffic cost-effectively with Spot capacity.
C. Using Fargate Spot for steady-state workloads risks interruptions and availability issues, which is not acceptable for a 24/7 application.
D. AWS Compute Optimizer helps with rightsizing, but it doesn’t solve cost optimization for burst traffic like Fargate Spot does.
Source: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using-fargate-capacity-providers.html
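A sketch of option B with hypothetical cluster, service, and network values: regular Fargate covers the always-on base capacity while FARGATE_SPOT absorbs bursts at a lower price.

    import boto3

    ecs = boto3.client("ecs")

    ecs.create_service(
        cluster="web-cluster",
        serviceName="storefront",
        taskDefinition="storefront:1",
        desiredCount=4,
        capacityProviderStrategy=[
            {"capacityProvider": "FARGATE", "base": 2, "weight": 1},   # steady-state capacity
            {"capacityProvider": "FARGATE_SPOT", "weight": 3},         # burst capacity
        ],
        networkConfiguration={"awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "assignPublicIp": "DISABLED",
        }},
    )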
A development team runs monthly resource-intensive tests on its general purpose Amazon RDS for MySQL DB instance with Performance Insights enabled. The testing lasts for 48 hours once a month and is the only process that uses the database. The team wants to reduce the cost of running the tests without reducing the compute and memory attributes of the DB instance. Which solution meets these requirements MOST cost-effectively?
⬜ A. Stop the DB instance when tests are completed. Restart the DB instance when required.
⬜ B. Use an Auto Scaling policy with the DB instance to automatically scale when tests are completed.
✅ C. Create a snapshot when tests are completed. Terminate the DB instance and restore the snapshot when required.
⬜ D. Modify the DB instance to a low-capacity instance when tests are completed. Modify the DB instance again when required.
Explanation:
● C is correct because taking a snapshot and terminating the DB instance saves ongoing compute costs. When needed, the team can restore from the snapshot without paying for unused resources in the meantime. This is the most cost-effective method when the database is only used once a month.
Why other options are wrong:
A. Stopping RDS instances is only supported for up to 7 days. After that, they are automatically started, leading to unintended costs.
B. RDS instances do not support Auto Scaling for compute and memory resources automatically.
D. Manually modifying instance types requires downtime and administrative effort every time, and could still incur costs during idle periods.
Source: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Snapshots.html
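A sketch of the monthly cycle with hypothetical identifiers: snapshot after the 48-hour test window, delete the instance, and restore it with the same compute and memory before the next run.

    import boto3

    rds = boto3.client("rds")

    # After the tests finish.
    rds.create_db_snapshot(
        DBInstanceIdentifier="perf-test-db",
        DBSnapshotIdentifier="perf-test-db-monthly",
    )
    rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="perf-test-db-monthly")
    rds.delete_db_instance(
        DBInstanceIdentifier="perf-test-db",
        SkipFinalSnapshot=True,   # the snapshot above already captured the data
    )

    # About a month later, before the next test run.
    rds.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier="perf-test-db",
        DBSnapshotIdentifier="perf-test-db-monthly",
        DBInstanceClass="db.m5.2xlarge",   # unchanged compute and memory
    )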
A company hosts a monolithic web application on an Amazon EC2 instance. Application users have recently reported poor performance at specific times. Analysis of Amazon CloudWatch metrics shows that CPU utilization is 100% during the periods of poor performance.
The company wants to resolve this performance issue and improve application availability.
Which combination of steps will meet these requirements MOST cost-effectively? (Choose two.)
✅ A. Use AWS Compute Optimizer to obtain a recommendation for an instance type to scale vertically.
⬜ B. Create an Amazon Machine Image (AMI) from the web server. Reference the AMI in a new launch template.
⬜ C. Create an Auto Scaling group and an Application Load Balancer to scale vertically.
⬜ D. Use AWS Compute Optimizer to obtain a recommendation for an instance type to scale horizontally.
✅ E. Create an Auto Scaling group and an Application Load Balancer to scale horizontally.
Explanation:
● A is correct because AWS Compute Optimizer can suggest a better EC2 instance type with more CPU or memory capacity, helping to scale vertically and reduce CPU bottlenecks.
● E is correct because an Auto Scaling group combined with an Application Load Balancer enables the system to scale horizontally, improving performance and availability by distributing the load across multiple instances.
Why other options are wrong:
B. Creating an AMI and launch template is a step toward automation, but alone it does not address performance issues or scalability.
C. Auto Scaling scales horizontally, not vertically. Vertical scaling requires resizing an instance, not using Auto Scaling groups.
D. AWS Compute Optimizer provides instance type recommendations, not strategies for horizontal scaling directly.
Source: https://docs.aws.amazon.com/compute-optimizer/latest/ug/what-is-compute-optimizer.html
A company uses an Amazon RDS for MySQL instance. To prepare for end-of-year processing, the company added a read replica to accommodate extra read-only queries from the company’s reporting tool. The read replica CPU usage was 60% and the primary instance CPU usage was 60%.
After end-of-year activities are complete, the read replica has a constant 25% CPU usage. The primary instance still has a constant 60% CPU usage. The company wants to rightsize the database and still provide enough performance for future growth.
Which solution will meet these requirements?
⬜ A. Delete the read replica. Do not make changes to the primary instance.
✅ B. Resize the read replica to a smaller instance size. Do not make changes to the primary instance.
⬜ C. Resize the read replica to a larger instance size. Resize the primary instance to a smaller instance size.
⬜ D. Delete the read replica. Resize the primary instance to a larger instance.
Explanation:
● B is correct because the read replica is underutilized (only 25% CPU). It is cost-effective to downsize the replica to a smaller instance without touching the primary instance, which is already at 60% CPU usage and thus needs to maintain performance for future growth.
Why other options are wrong:
A. Deleting the read replica would remove the redundancy and read scalability, which may be needed for future growth.
C. Enlarging the read replica while shrinking the primary instance is backwards: the replica is underutilized, and the primary is already operating at a relatively high CPU load.
D. Deleting the read replica and resizing the primary larger adds unnecessary cost and removes read scalability.
Source: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html
A company is moving its data and applications to AWS during a multiyear migration project. The company wants to securely access data on Amazon S3 from the company’s AWS Region and from the company’s on-premises location. The data must not traverse the internet. The company has established an AWS Direct Connect connection between its Region and its on-premises location. Which solution will meet these requirements?
⬜ A. Create gateway endpoints for Amazon S3. Use the gateway endpoints to securely access the data from the Region and the on-premises location.
⬜ B. Create a gateway in AWS Transit Gateway to access Amazon S3 securely from the Region and the on-premises location.
✅ C. Create interface endpoints for Amazon S3. Use the interface endpoints to securely access the data from the Region and the on-premises location.
⬜ D. Use an AWS Key Management Service (AWS KMS) key to access the data securely from the Region and the on-premises location.
Explanation:
● C is correct because interface endpoints for Amazon S3 (powered by AWS PrivateLink) place private IP addresses for S3 inside the VPC. Those addresses are reachable from the VPC and from the on-premises network over AWS Direct Connect, so the data never traverses the internet.
Why other options are wrong:
A. Gateway endpoints keep in-VPC traffic to S3 private, but they cannot be reached from on-premises networks over Direct Connect, so they do not satisfy the on-premises requirement.
B. AWS Transit Gateway interconnects VPCs and on-premises networks; it does not by itself provide private access to S3.
D. AWS KMS encrypts data at rest but does not control the network path (public versus private access).
Source: https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html
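For illustration, a minimal boto3 sketch of the interface-endpoint approach described above. All of the IDs are placeholders, not values from the question; on-premises routing over the Direct Connect private VIF still has to resolve to the endpoint's private IP addresses.
```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical IDs -- replace with values from your own VPC.
response = ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    VpcEndpointType="Interface",
    ServiceName="com.amazonaws.us-east-1.s3",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=False,  # S3 interface endpoints are addressed by endpoint-specific DNS names
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```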
A company is running a batch application on Amazon EC2 instances. The application consists of a backend with multiple Amazon RDS databases. The application is causing a high number of reads on the databases. A solutions architect must reduce the number of database reads while ensuring high availability.
✅ A. Add Amazon RDS read replicas.
⬜ B. Use Amazon ElastiCache for Redis.
⬜ C. Use Amazon Route 53 DNS caching.
⬜ D. Use Amazon ElastiCache for Memcached.
Explanation:
● Amazon RDS read replicas are designed to offload read traffic from the primary database instance, reducing load while maintaining high availability.
● Read replicas support asynchronous replication and can be deployed across multiple Availability Zones for additional fault tolerance.
● This is the most straightforward and managed solution that requires minimal application changes.
Why other options are wrong:
B. ElastiCache for Redis can reduce reads through caching, but it requires manual integration into the application logic.
C. Route 53 caches DNS lookups, not database queries—it does not reduce read load on RDS.
D. ElastiCache for Memcached offers caching but lacks features like persistence and replication, making it less suitable for high availability scenarios.
Source: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html
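As a rough sketch of the accepted option, the boto3 call below adds a read replica to an existing instance; the identifiers, instance class, and Availability Zone are illustrative assumptions.
```python
import boto3

rds = boto3.client("rds")

# Hypothetical identifiers -- substitute your own instance names.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="batch-app-db-replica-1",
    SourceDBInstanceIdentifier="batch-app-db",
    DBInstanceClass="db.r6g.large",
    AvailabilityZone="us-east-1b",  # a different AZ from the source adds fault tolerance
)
```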
A meteorological startup company has a custom web application to sell weather data to its users online. The company uses Amazon DynamoDB to store its data and wants to build a new service that sends an alert to the managers of four internal teams every time a new weather event is recorded. The company does not want this new service to affect the performance of the current application.
⬜ A. Use DynamoDB transactions to write new event data to the table. Configure the transactions to notify internal teams.
⬜ B. Have the current application publish a message to four Amazon Simple Notification Service (Amazon SNS) topics. Have each team subscribe to one topic.
✅ C. Enable Amazon DynamoDB Streams on the table. Use triggers to write to a single Amazon Simple Notification Service (Amazon SNS) topic to which the teams can subscribe.
⬜ D. Add a custom attribute to each record to flag new items. Write a cron job that scans the table every minute for items that are new and notifies an Amazon Simple Queue Service (Amazon SQS) queue to which the teams can subscribe.
Explanation:
● DynamoDB Streams captures table activity (e.g., inserts, updates, deletes) in near real-time.
● By enabling streams and attaching an AWS Lambda trigger, new weather event records can be processed without modifying the current application.
● The Lambda function can publish to a single SNS topic, allowing all four internal teams to subscribe with minimal effort and no performance impact on the primary application.
● This is a serverless and highly decoupled architecture, which minimizes operational overhead.
Why other options are wrong:
A. DynamoDB transactions are for atomic operations across multiple items and do not inherently support notification logic.
B. Modifying the current application to publish SNS messages contradicts the requirement to not impact its performance.
D. Scanning the table with a cron job introduces latency, unnecessary costs, and operational burden. It’s inefficient for real-time alerting.
Source: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html
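A minimal Lambda handler sketch for the pattern above: it is triggered by the DynamoDB stream, filters for INSERT records, and publishes each new weather event to one SNS topic. The environment variable name and record attributes are assumptions for illustration.
```python
import json
import os

import boto3

sns = boto3.client("sns")
TOPIC_ARN = os.environ["ALERT_TOPIC_ARN"]  # hypothetical environment variable


def handler(event, context):
    """Invoked by the DynamoDB stream; notifies the four teams about new weather events."""
    for record in event["Records"]:
        if record["eventName"] != "INSERT":
            continue  # only alert on newly recorded weather events
        new_image = record["dynamodb"]["NewImage"]
        sns.publish(
            TopicArn=TOPIC_ARN,
            Subject="New weather event recorded",
            Message=json.dumps(new_image),
        )
```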
An IAM user made several configuration changes to AWS resources in their company’s account during a production deployment last week. A solutions architect learned that a couple of security group rules are not configured as desired. The solutions architect wants to confirm which IAM user was responsible for making changes.
⬜ A. Amazon GuardDuty
⬜ B. Amazon Inspector
✅ C. AWS CloudTrail
⬜ D. AWS Config
Explanation:
● AWS CloudTrail provides a history of AWS API calls and related events for account activity, including who made a change, when it was made, and from where.
● It allows you to audit IAM user activity, making it the best tool to determine which IAM user made configuration changes, such as modifying security group rules.
● CloudTrail logs can be viewed in the AWS Management Console or exported to Amazon S3 for analysis.
Why other options are wrong:
A. Amazon GuardDuty is a threat detection service and does not track user configuration changes.
B. Amazon Inspector is used to analyze EC2 instances for security vulnerabilities, not to audit user actions.
D. AWS Config records configuration changes and evaluates compliance, but it does not show who made the change.
Source: https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html
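For example, the CloudTrail event history can be queried with boto3 to see which principal changed a security group; the event name below is just one of several security-group actions you might filter on.
```python
import boto3

cloudtrail = boto3.client("cloudtrail")

events = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "AuthorizeSecurityGroupIngress"}
    ],
    MaxResults=50,
)
for event in events["Events"]:
    # Username identifies the IAM principal that made the API call.
    print(event["EventTime"], event.get("Username"), event["EventName"])
```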
An ecommerce company is running a multi-tier application on AWS. The front-end and backend tiers both run on Amazon EC2, and the database runs on Amazon RDS for MySQL. The backend tier communicates with the RDS instance. There are frequent calls to return identical datasets from the database that are causing performance slowdowns.
⬜ A. Implement Amazon SNS to store the database calls.
✅ B. Implement Amazon ElastiCache to cache the large datasets.
⬜ C. Implement an RDS for MySQL read replica to cache database calls.
⬜ D. Implement Amazon Kinesis Data Firehose to stream the calls to the database.
Explanation:
● Amazon ElastiCache is a managed in-memory caching service (supporting Redis and Memcached) that can significantly reduce database load by caching frequently accessed data.
● Since the application is experiencing frequent identical queries, caching these results in ElastiCache improves response time and reduces the number of read operations to RDS.
● This results in improved backend performance and scalability.
Why other options are wrong:
A. Amazon SNS is a pub/sub messaging service and is not used for caching or storing database calls.
C. RDS Read Replicas help distribute read traffic but do not cache query results and are still dependent on RDS query performance.
D. Amazon Kinesis Data Firehose is used for streaming data into analytics platforms and is not relevant for reducing database read load.
Source: https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/WhatIs.html
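A sketch of the cache-aside pattern the explanation refers to, using the redis-py client against an ElastiCache for Redis endpoint. The endpoint, key name, TTL, and query function are illustrative assumptions.
```python
import json

import redis  # redis-py client

# Hypothetical ElastiCache for Redis endpoint.
cache = redis.Redis(host="my-cache.abc123.use1.cache.amazonaws.com", port=6379)


def get_product_catalog(fetch_from_mysql):
    """Return the dataset from the cache when possible; fall back to RDS and cache the result."""
    cached = cache.get("catalog:all")
    if cached is not None:
        return json.loads(cached)

    rows = fetch_from_mysql()  # the identical query that previously hit RDS on every request
    cache.setex("catalog:all", 300, json.dumps(rows))  # cache for 5 minutes
    return rows
```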
A company is developing a real-time multiplayer game that uses UDP for communications between the client and servers in an Auto Scaling group. Spikes in demand are anticipated during the day, so the game server platform must adapt accordingly. Developers want to store gamer scores and other non-relational data in a database solution that will scale without intervention.
⬜ A. Use Amazon Route 53 for traffic distribution and Amazon Aurora Serverless for data storage.
✅ B. Use a Network Load Balancer for traffic distribution and Amazon DynamoDB on-demand for data storage.
⬜ C. Use a Network Load Balancer for traffic distribution and Amazon Aurora Global Database for data storage.
⬜ D. Use an Application Load Balancer for traffic distribution and Amazon DynamoDB global tables for data storage.
Explanation:
● Network Load Balancer (NLB) supports UDP traffic and is designed to handle millions of requests per second with ultra-low latencies, making it ideal for real-time multiplayer games.
● Amazon DynamoDB (on-demand capacity mode) is a serverless, NoSQL database that automatically scales to handle unpredictable workloads without manual intervention, making it suitable for storing game scores and metadata.
● This combination satisfies the performance, scalability, and UDP support requirements of the use case.
Why other options are wrong:
A. Amazon Route 53 provides DNS-based routing and cannot load balance real-time UDP traffic. Aurora Serverless is a relational database and does not fit the non-relational data requirement.
C. Aurora Global Database is designed for relational, multi-region use, not for NoSQL or highly dynamic non-relational game data.
D. An Application Load Balancer (ALB) operates at Layer 7 and supports only HTTP/HTTPS listeners; it does not support UDP, making it unsuitable for this game traffic.
Source: https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html
Source: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html
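On-demand capacity mode is selected through the table's billing mode. A hedged boto3 sketch with an assumed table name and key schema:
```python
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="GameScores",  # hypothetical table name
    AttributeDefinitions=[
        {"AttributeName": "PlayerId", "AttributeType": "S"},
        {"AttributeName": "GameId", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "PlayerId", "KeyType": "HASH"},
        {"AttributeName": "GameId", "KeyType": "RANGE"},
    ],
    BillingMode="PAY_PER_REQUEST",  # on-demand mode: scales without capacity planning
)
```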
A company has migrated an application to Amazon EC2 Linux instances. One of these EC2 instances runs several 1-hour tasks on a schedule. These tasks were written by different teams and have no common programming language. The company is concerned about performance and scalability while these tasks run on a single instance. A solutions architect needs to implement a solution to resolve these concerns.
✅ A. Use AWS Batch to run the tasks as jobs. Schedule the jobs by using Amazon EventBridge (Amazon CloudWatch Events).
⬜ B. Convert the EC2 instance to a container. Use AWS App Runner to create the container on demand to run the tasks as jobs.
⬜ C. Copy the tasks into AWS Lambda functions. Schedule the Lambda functions by using Amazon EventBridge (Amazon CloudWatch Events).
⬜ D. Create an Amazon Machine Image (AMI) of the EC2 instance that runs the tasks. Create an Auto Scaling group with the AMI to run multiple copies of the instance.
Explanation:
● AWS Batch is purpose-built for running batch computing jobs and supports diverse runtimes and long-duration tasks without requiring a rewrite.
● Amazon EventBridge (formerly CloudWatch Events) can be used to schedule these jobs, maintaining the existing schedule.
● This solution automatically handles compute provisioning, enabling performance and scalability improvements while minimizing operational overhead.
Why other options are wrong:
B. AWS App Runner is for web applications and services. It doesn’t directly support scheduling or varied runtimes for batch jobs.
C. AWS Lambda is unsuitable for 1-hour tasks due to its maximum 15-minute timeout. Also, tasks written in different languages may not all be supported or easy to convert.
D. Using AMIs and Auto Scaling groups introduces unnecessary complexity and operational overhead. It also doesn’t inherently resolve performance or scheduling challenges.
Source: https://docs.aws.amazon.com/batch/latest/userguide/what-is-batch.html
Source: https://docs.aws.amazon.com/eventbridge/latest/userguide/what-is-amazon-eventbridge.html
A company uses a payment processing system that requires messages for a particular payment ID to be received in the same order that they were sent. Otherwise, the payments might be processed incorrectly. Which actions should a solutions architect take to meet this requirement? (Choose two.)
⬜ A. Write the messages to an Amazon DynamoDB table with the payment ID as the partition key.
✅ B. Write the messages to an Amazon Kinesis data stream with the payment ID as the partition key.
⬜ C. Write the messages to an Amazon ElastiCache for Memcached cluster with the payment ID as the key.
⬜ D. Write the messages to an Amazon Simple Queue Service (Amazon SQS) queue. Set the message attribute to use the payment ID.
✅ E. Write the messages to an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Set the message group to use the payment ID.
Explanation:
● Amazon Kinesis ensures ordered message processing when the same partition key (e.g., payment ID) is used. All messages with the same key go to the same shard, maintaining strict order.
● Amazon SQS FIFO queues maintain the order of messages within a message group. By setting the message group ID to the payment ID, you ensure that all messages for a payment are processed in order.
Why other options are wrong:
A. DynamoDB provides consistent reads but is not designed for ordered message processing or queuing.
C. ElastiCache for Memcached is a key-value store and does not guarantee message ordering.
D. Standard SQS queues do not preserve order. Setting a message attribute does not enforce sequencing.
Source: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues.html
Source: https://docs.aws.amazon.com/streams/latest/dev/key-concepts.html
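Both correct options hinge on keying messages by payment ID. A short boto3 sketch, with assumed queue and stream names:
```python
import json

import boto3

sqs = boto3.client("sqs")
kinesis = boto3.client("kinesis")

payment = {"payment_id": "pay-1234", "action": "authorize", "amount": 42.50}

# SQS FIFO: the message group ID preserves ordering for each payment ID.
sqs.send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/payments.fifo",  # hypothetical
    MessageBody=json.dumps(payment),
    MessageGroupId=payment["payment_id"],
    MessageDeduplicationId="pay-1234-authorize-1",  # or enable content-based deduplication
)

# Kinesis: the partition key routes all records for a payment ID to the same shard, in order.
kinesis.put_record(
    StreamName="payment-events",  # hypothetical
    Data=json.dumps(payment).encode("utf-8"),
    PartitionKey=payment["payment_id"],
)
```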
A company is running a custom application on Amazon EC2 On-Demand Instances. The application has frontend nodes that need to run 24 hours a day, 7 days a week, and backend nodes that need to run only for a short time based on workload. The number of backend nodes varies during the day.
The company needs to scale out and scale in more instances based on workload.
Which solution will meet these requirements MOST cost-effectively?
⬜ A. Use Reserved Instances for the frontend nodes. Use AWS Fargate for the backend nodes.
✅ B. Use Reserved Instances for the frontend nodes. Use Spot Instances for the backend nodes.
⬜ C. Use Spot Instances for the frontend nodes. Use Reserved Instances for the backend nodes.
⬜ D. Use Spot Instances for the frontend nodes. Use AWS Fargate for the backend nodes.
Explanation:
● Reserved Instances provide cost savings for predictable, 24/7 workloads such as frontend nodes.
● Spot Instances are ideal for backend nodes that are short-lived and scalable based on fluctuating demand, making them highly cost-effective.
● Auto Scaling can be used with Spot Instances to dynamically scale based on demand while minimizing cost.
Why other options are wrong:
A. AWS Fargate is intended for containerized workloads, not EC2-based ones. It’s generally more expensive than Spot Instances for dynamic workloads.
C. Spot Instances are unsuitable for the always-on frontend nodes due to possible interruptions.
D. Similar to A and C—using Spot for frontend is unreliable, and Fargate adds unnecessary cost for backend EC2-based processing.
Source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-instances-reserved.html
Source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-instances.html
A bicycle sharing company is developing a multi-tier architecture to track the location of its bicycles during peak operating hours. The company wants to use these data points in its existing analytics platform. A solutions architect must determine the most viable multi-tier option to support this architecture. The data points must be accessible from the REST API.
Which action meets these requirements for storing and retrieving location data?
⬜ A. Use Amazon Athena with Amazon S3.
⬜ B. Use Amazon API Gateway with AWS Lambda.
⬜ C. Use Amazon QuickSight with Amazon Redshift.
✅ D. Use Amazon API Gateway with Amazon Kinesis Data Analytics.
Explanation:
● Amazon API Gateway can expose RESTful endpoints to ingest and retrieve real-time data.
● Amazon Kinesis Data Analytics processes streaming data and integrates easily with analytics platforms.
● This combination supports both real-time tracking and analytics, making it ideal for location tracking during peak operating hours.
Why other options are wrong:
A. Amazon Athena with S3 is suitable for querying static data, not real-time streaming data.
B. API Gateway with Lambda is good for request-response workloads but not optimized for continuous data streaming or analytics.
C. Amazon QuickSight with Redshift is for visualization and reporting, not real-time data ingestion and REST API-based access.
Source: https://docs.aws.amazon.com/kinesisanalytics/latest/dev/what-is.html
Source: https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-rest-api.html
A company runs a production application on a fleet of Amazon EC2 instances. The application reads the data from an Amazon SQS queue and processes the messages in parallel. The message volume is unpredictable and often has intermittent traffic. This application should continually process messages without any downtime.
Which solution meets these requirements MOST cost-effectively?
⬜ A. Use Spot Instances exclusively to handle the maximum capacity required.
⬜ B. Use Reserved Instances exclusively to handle the maximum capacity required.
✅ C. Use Reserved Instances for the baseline capacity and use Spot Instances to handle additional capacity.
⬜ D. Use Reserved Instances for the baseline capacity and use On-Demand Instances to handle additional capacity.
Explanation:
● Reserved Instances (RIs) are cost-effective for baseline, predictable usage, providing significant savings over On-Demand.
● Spot Instances are ideal for handling intermittent or burst workloads, like processing spikes from an SQS queue, at a fraction of the cost.
● Combining both allows the application to run continuously with cost efficiency and scale during unpredictable spikes.
Why other options are wrong:
A. Spot Instances can be interrupted, which risks downtime in a production environment.
B. Reserved Instances for the full capacity would lead to overprovisioning and increased cost due to unpredictable traffic.
D. Using On-Demand Instances for spikes ensures availability but is less cost-effective compared to Spot Instances.
Source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-instances-spot.html
Source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-reserved-instances.html
A company recently launched a new application for its customers. The application runs on multiple Amazon EC2 instances across two Availability Zones. End users use TCP to communicate with the application.
The application must be highly available and must automatically scale as the number of users increases.
Which combination of steps will meet these requirements MOST cost-effectively? (Choose two.)
✅ A. Add a Network Load Balancer in front of the EC2 instances.
✅ B. Configure an Auto Scaling group for the EC2 instances.
⬜ C. Add an Application Load Balancer in front of the EC2 instances.
⬜ D. Manually add more EC2 instances for the application.
⬜ E. Add a Gateway Load Balancer in front of the EC2 instances.
Explanation:
● A. A Network Load Balancer (NLB) is best suited for TCP traffic and provides high performance and low latency at a lower cost, making it appropriate for this use case.
● B. Auto Scaling groups allow EC2 instances to scale automatically in and out based on demand, ensuring cost-efficiency and high availability across multiple Availability Zones.
Why other options are wrong:
C. Application Load Balancers are ideal for HTTP/HTTPS traffic at Layer 7, not for TCP-based applications.
D. Manually adding instances is not scalable or cost-efficient and does not meet automatic scaling requirements.
E. Gateway Load Balancers are typically used for third-party virtual appliances such as firewalls, not for balancing TCP application traffic.
Source: https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html
Source: https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html
A company hosts a website analytics application on a single Amazon EC2 On-Demand Instance. The analytics software is written in PHP and uses a MySQL database. The analytics software, the web server that provides PHP, and the database server are all hosted on the EC2 instance. The application is showing signs of performance degradation during busy times and is presenting 5xx errors. The company needs to make the application scale seamlessly.
Which solution will meet these requirements MOST cost-effectively?
⬜ A. Migrate the database to an Amazon RDS for MySQL DB instance. Create an AMI of the web application. Use the AMI to launch a second EC2 On-Demand Instance. Use an Application Load Balancer to distribute the load to each EC2 instance.
⬜ B. Migrate the database to an Amazon RDS for MySQL DB instance. Create an AMI of the web application. Use the AMI to launch a second EC2 On-Demand Instance. Use Amazon Route 53 weighted routing to distribute the load across the two EC2 instances.
⬜ C. Migrate the database to an Amazon Aurora MySQL DB instance. Create an AWS Lambda function to stop the EC2 instance and change the instance type. Create an Amazon CloudWatch alarm to invoke the Lambda function when CPU utilization surpasses 75%.
✅ D. Migrate the database to an Amazon Aurora MySQL DB instance. Create an AMI of the web application. Apply the AMI to a launch template. Create an Auto Scaling group with the launch template. Configure the launch template to use a Spot Fleet. Attach an Application Load Balancer to the Auto Scaling group.
Explanation:
D. This option offers high availability and scalability with the use of Amazon Aurora (a fully managed, scalable, and fault-tolerant database) and EC2 Auto Scaling. Using a Spot Fleet further optimizes cost, while the Application Load Balancer ensures that traffic is evenly distributed among available instances, making it the most cost-effective and scalable solution.
Why other options are wrong:
A. This solution uses only two fixed On-Demand instances, which does not scale automatically based on demand and is more expensive than Spot.
B. Similar to A, it lacks automatic scaling, and Route 53 weighted routing does not provide health checks and failover capabilities as efficiently as an ALB.
C. Changing the EC2 instance type via Lambda and CloudWatch does not support horizontal scaling and could cause downtime or delays during resizing.
Source: https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html
Source: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_AuroraOverview.html
A company runs a web-based portal that provides users with global breaking news, local alerts, and weather updates. The portal delivers each user a personalized view by using a mixture of static and dynamic content. Content is served over HTTPS through an API server running on an Amazon EC2 instance behind an Application Load Balancer (ALB). The company wants the portal to provide this content to its users across the world as quickly as possible.
How should a solutions architect design the application to ensure the LEAST amount of latency for all users?
✅ A. Deploy the application stack in a single AWS Region. Use Amazon CloudFront to serve all static and dynamic content by specifying the ALB as an origin.
⬜ B. Deploy the application stack in two AWS Regions. Use an Amazon Route 53 latency routing policy to serve all content from the ALB in the closest Region.
⬜ C. Deploy the application stack in a single AWS Region. Use Amazon CloudFront to serve the static content. Serve the dynamic content directly from the ALB.
⬜ D. Deploy the application stack in two AWS Regions. Use an Amazon Route 53 geolocation routing policy to serve all content from the ALB in the closest Region.
Explanation:
● A. Using Amazon CloudFront as a global content delivery network (CDN) reduces latency by caching content at edge locations. CloudFront also supports dynamic content by forwarding requests to the origin (in this case, the ALB). Serving both static and dynamic content through CloudFront provides global performance optimization and low latency from a single AWS Region, minimizing infrastructure complexity and cost.
Why other options are wrong:
B. While latency-based routing across multiple Regions can reduce latency, it introduces higher operational overhead and cost, and still may not outperform CloudFront’s edge caching for static and dynamic content.
C. This setup improves latency for static content only. Dynamic content will still be served directly from the Region, causing higher latency for global users.
D. Geolocation routing may not route users to the lowest-latency Region and does not optimize for actual network latency, only geographical location.
Source: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html
Source: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html
A solutions architect is designing a multi-tier application for a company. The application’s users upload images from a mobile device. The application generates a thumbnail of each image and returns a message to the user to confirm that the image was uploaded successfully.
The thumbnail generation can take up to 60 seconds, but the company wants to provide a faster response time to its users to notify them that the original image was received. The solutions architect must design the application to asynchronously dispatch requests to the different application tiers.
What should the solutions architect do to meet these requirements?
⬜ A. Write a custom AWS Lambda function to generate the thumbnail and alert the user. Use the image upload process as an event source to invoke the Lambda function.
⬜ B. Create an AWS Step Functions workflow. Configure Step Functions to handle the orchestration between the application tiers and alert the user when thumbnail generation is complete.
✅ C. Create an Amazon Simple Queue Service (Amazon SQS) message queue. As images are uploaded, place a message on the SQS queue for thumbnail generation. Alert the user through an application message that the image was received.
⬜ D. Create Amazon Simple Notification Service (Amazon SNS) notification topics and subscriptions. Use one subscription with the application to generate the thumbnail after the image upload is complete. Use a second subscription to message the user’s mobile app by way of a push notification after thumbnail generation is complete.
Explanation:
● C. Placing a message in an Amazon SQS queue after uploading the image allows the image processing (thumbnail generation) to be handled asynchronously. The application can immediately acknowledge image receipt to the user without waiting for the thumbnail to be generated. This decouples the upload and processing tiers, improving responsiveness and scalability.
Why other options are wrong:
A. A Lambda function can be used for thumbnail generation, but invoking it synchronously from the image upload event would not satisfy the need for faster user response. Thumbnail generation may take up to 60 seconds, which delays the user notification.
B. AWS Step Functions are better suited for complex workflows, but they add unnecessary complexity and overhead for this relatively simple decoupled task.
D. Using SNS to fan out messages is useful, but it lacks the durability and retry mechanisms provided by SQS. Also, the app needs to notify users immediately after upload, not after thumbnail generation.
Source: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/welcome.html
Source: https://docs.aws.amazon.com/whitepapers/latest/serverless-multi-tier-architectures-api-gateway-lambda/sqs.html
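A minimal sketch of the decoupling described in option C: the upload tier enqueues a job and confirms receipt immediately, while a separate worker polls the queue and generates thumbnails at its own pace. The queue URL and function names are assumptions.
```python
import json

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/thumbnail-jobs"  # hypothetical


def handle_upload(image_key):
    """Upload tier: enqueue the thumbnail job, then acknowledge the user right away."""
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps({"image_key": image_key}))
    return {"status": "received", "image": image_key}


def worker_loop(generate_thumbnail):
    """Processing tier: long-poll the queue and handle jobs asynchronously."""
    while True:
        resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
        for message in resp.get("Messages", []):
            generate_thumbnail(json.loads(message["Body"])["image_key"])
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])
```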
A company has deployed a database in Amazon RDS for MySQL. Due to increased transactions, the database support team is reporting slow reads against the DB instance and recommends adding a read replica.
Which combination of actions should a solutions architect take before implementing this change? (Choose two.)
⬜ A. Enable binlog replication on the RDS primary node.
⬜ B. Choose a failover priority for the source DB instance.
✅ C. Allow long-running transactions to complete on the source DB instance.
⬜ D. Create a global table and specify the AWS Regions where the table will be available.
✅ E. Enable automatic backups on the source instance by setting the backup retention period to a value other than 0.
Explanation:
● C. An active, long-running transaction can slow the process of creating the read replica, so AWS recommends letting long-running transactions complete on the source DB instance before adding a replica.
● E. Automatic backups must be enabled (backup retention period > 0) to create read replicas. This is a prerequisite for RDS to support point-in-time recovery and replication.
Why other options are wrong:
A. On RDS for MySQL, binary logging is not enabled as a separate action; it is turned on automatically when automatic backups are enabled (option E), so there is no separate binlog setting to configure on the primary node.
B. Failover priority is relevant for Multi-AZ deployments, not read replicas. Read replicas are designed for scaling read traffic, not automatic failover.
D. Global tables apply to Amazon DynamoDB, not RDS for MySQL, and are unrelated to the read replica setup.
Source: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html
Source: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.MySQL.html
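Enabling automatic backups (option E) is a single modification call; the identifier and retention period below are illustrative.
```python
import boto3

rds = boto3.client("rds")

# A non-zero retention period turns on automatic backups (and binary logging) on the source.
rds.modify_db_instance(
    DBInstanceIdentifier="orders-mysql-primary",  # hypothetical
    BackupRetentionPeriod=7,
    ApplyImmediately=True,
)
```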
A company is launching an application on AWS. The application uses an Application Load Balancer (ALB) to direct traffic to at least two Amazon EC2 instances in a single target group. The instances are in an Auto Scaling group for each environment. The company requires a development environment and a production environment. The production environment will have periods of high traffic.
Which solution will configure the development environment MOST cost-effectively?
⬜ A. Reconfigure the target group in the development environment to have only one EC2 instance as a target.
⬜ B. Change the ALB balancing algorithm to least outstanding requests.
⬜ C. Reduce the size of the EC2 instances in both environments.
✅ D. Reduce the maximum number of EC2 instances in the development environment’s Auto Scaling group.
Explanation:
● D. Reducing the maximum number of instances in the development Auto Scaling group allows the system to scale only as needed within a controlled cost boundary. Since development environments typically require fewer resources, this is a cost-effective and scalable solution.
Why other options are wrong:
A. Reconfiguring to only one instance removes high availability and is not aligned with AWS best practices.
B. Changing the ALB algorithm does not impact cost — it only affects how requests are distributed.
C. Reducing instance size in both environments may degrade production performance, which is not acceptable given high traffic expectations.
Source: https://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-scaling-options.html
Source: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html
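A sketch of capping the development Auto Scaling group, assuming an example group name and sizes:
```python
import boto3

autoscaling = boto3.client("autoscaling")

# Keep the development environment small: at most two instances, idling at one.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="webapp-dev-asg",  # hypothetical
    MinSize=1,
    DesiredCapacity=1,
    MaxSize=2,
)
```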
A survey company has gathered data for several years from areas in the United States. The company hosts the data in an Amazon S3 bucket that is 3 TB in size and growing. The company has started to share the data with a European marketing firm that has S3 buckets. The company wants to ensure that its data transfer costs remain as low as possible.
Which solution will meet these requirements?
✅ A. Configure the Requester Pays feature on the company’s S3 bucket.
⬜ B. Configure S3 Cross-Region Replication from the company’s S3 bucket to one of the marketing firm’s S3 buckets.
⬜ C. Configure cross-account access for the marketing firm so that the marketing firm has access to the company’s S3 bucket.
⬜ D. Configure the company’s S3 bucket to use S3 Intelligent-Tiering. Sync the S3 bucket to one of the marketing firm’s S3 buckets.
Explanation:
● A. The Requester Pays feature ensures that the party making the request (in this case, the European marketing firm) is responsible for data transfer and request costs. This helps the company minimize its own expenses while still sharing the data.
Why other options are wrong:
B. Cross-Region Replication incurs significant costs, especially when replicating 3 TB+ of data to another region. It also doesn’t shift the cost burden.
C. While cross-account access enables sharing, it doesn’t shift the cost of data transfer—the bucket owner pays unless Requester Pays is used.
D. S3 Intelligent-Tiering optimizes storage costs, not data transfer costs. Also, syncing data involves transfer and replication charges.
Source: https://docs.aws.amazon.com/AmazonS3/latest/userguide/RequesterPaysBuckets.html
Source: https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication.html
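Requester Pays is a bucket-level setting, and requesters must opt in on every call. A boto3 sketch with an assumed bucket name:
```python
import boto3

s3 = boto3.client("s3")

# Owner side: shift request and data transfer charges to the requester.
s3.put_bucket_request_payment(
    Bucket="survey-data-archive",  # hypothetical
    RequestPaymentConfiguration={"Payer": "Requester"},
)

# Requester side (the marketing firm): acknowledge the charges explicitly on each request.
s3.get_object(Bucket="survey-data-archive", Key="reports/2023.csv", RequestPayer="requester")
```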
A company hosts its web applications in the AWS Cloud. The company configures Elastic Load Balancers to use certificates that are imported into AWS Certificate Manager (ACM). The company’s security team must be notified 30 days before the expiration of each certificate.
What should a solutions architect recommend to meet this requirement?
⬜ A. Add a rule in ACM to publish a custom message to an Amazon Simple Notification Service (Amazon SNS) topic every day, beginning 30 days before any certificate will expire.
⬜ B. Create an AWS Config rule that checks for certificates that will expire within 30 days. Configure Amazon EventBridge (Amazon CloudWatch Events) to invoke a custom alert by way of Amazon Simple Notification Service (Amazon SNS) when AWS Config reports a noncompliant resource.
⬜ C. Use AWS Trusted Advisor to check for certificates that will expire within 30 days. Create an Amazon CloudWatch alarm that is based on Trusted Advisor metrics for check status changes. Configure the alarm to send a custom alert by way of Amazon Simple Notification Service (Amazon SNS).
✅ D. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to detect any certificates that will expire within 30 days. Configure the rule to invoke an AWS Lambda function. Configure the Lambda function to send a custom alert by way of Amazon Simple Notification Service (Amazon SNS).
Explanation:
● D. ACM automatically sends “ACM Certificate Approaching Expiration” events to EventBridge daily, starting 45 days before a certificate expires. An EventBridge rule can match these events and invoke a Lambda function that sends a custom alert through SNS, allowing automation and customization of notifications.
Why other options are wrong:
A. ACM does not support adding custom rules for publishing messages directly.
B. AWS Config does not natively support checking for certificate expiration in ACM; this is not an optimal approach.
C. Trusted Advisor only supports certificate checks for certain account types and may not offer timely notifications or flexibility for custom alerting.
Source: https://docs.aws.amazon.com/acm/latest/userguide/monitoring-certs-eventbridge.html
Source: https://docs.aws.amazon.com/eventbridge/latest/userguide/acm-event-types.html
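A minimal handler sketch for the Lambda function in option D. The topic ARN environment variable is hypothetical, and the event detail fields (DaysToExpiry, CommonName) are read defensively in case they differ in your account.
```python
import os

import boto3

sns = boto3.client("sns")
TOPIC_ARN = os.environ["SECURITY_TOPIC_ARN"]  # hypothetical environment variable


def handler(event, context):
    """Forward ACM certificate-expiration events to the security team's SNS topic."""
    detail = event.get("detail", {})
    certificate_arn = event.get("resources", ["unknown"])[0]
    message = (
        f"Certificate {detail.get('CommonName', certificate_arn)} "
        f"expires in {detail.get('DaysToExpiry', '?')} days."
    )
    sns.publish(TopicArn=TOPIC_ARN, Subject="ACM certificate expiring soon", Message=message)
```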
A solutions architect needs to design a new microservice for a company’s application. Clients must be able to call an HTTPS endpoint to reach the microservice. The microservice also must use AWS Identity and Access Management (IAM) to authenticate calls. The solutions architect will write the logic for this microservice by using a single AWS Lambda function that is written in Go 1.x.
Which solution will deploy the function in the MOST operationally efficient way?
⬜ A. Create an Amazon API Gateway REST API. Configure the method to use the Lambda function. Enable IAM authentication on the API.
✅ B. Create a Lambda function URL for the function. Specify AWS_IAM as the authentication type.
⬜ C. Create an Amazon CloudFront distribution. Deploy the function to Lambda@Edge. Integrate IAM authentication logic into the Lambda@Edge function.
⬜ D. Create an Amazon CloudFront distribution. Deploy the function to CloudFront Functions. Specify AWS_IAM as the authentication type.
Explanation:
● B is the most operationally efficient option. AWS Lambda function URLs provide a built-in HTTPS endpoint with native support for IAM authentication (AWS_IAM). This eliminates the need for provisioning and managing an API Gateway, making deployment and maintenance simpler for small or single-function microservices.
Why other options are wrong:
A. While valid, using API Gateway adds additional operational overhead in setup and management compared to Lambda function URLs.
C. Lambda@Edge is more complex, involves regional restrictions, and does not natively support IAM authentication.
D. CloudFront Functions do not support IAM authentication and are only suitable for lightweight processing at the edge.
Source: https://docs.aws.amazon.com/lambda/latest/dg/urls-auth.html
Source: https://docs.aws.amazon.com/lambda/latest/dg/lambda-urls.html
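Creating the function URL with IAM authentication is a single API call; the function name is a placeholder.
```python
import boto3

lambda_client = boto3.client("lambda")

response = lambda_client.create_function_url_config(
    FunctionName="orders-microservice",  # hypothetical function
    AuthType="AWS_IAM",  # callers must sign requests (SigV4) and hold lambda:InvokeFunctionUrl
)
print(response["FunctionUrl"])  # built-in HTTPS endpoint, no API Gateway to manage
```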
A company runs its application by using Amazon EC2 instances and AWS Lambda functions. The EC2 instances run in private subnets of a VPC. The Lambda functions need direct network access to the EC2 instances for the application to work.
The application will run for 1 year. The number of Lambda functions that the application uses will increase during the 1-year period. The company must minimize costs on all application resources.
Which solution will meet these requirements?
⬜ A. Purchase an EC2 Instance Savings Plan. Connect the Lambda functions to the private subnets that contain the EC2 instances.
⬜ B. Purchase an EC2 Instance Savings Plan. Connect the Lambda functions to new public subnets in the same VPC where the EC2 instances run.
✅ C. Purchase a Compute Savings Plan. Connect the Lambda functions to the private subnets that contain the EC2 instances.
⬜ D. Purchase a Compute Savings Plan. Keep the Lambda functions in the Lambda service VPC.
Explanation:
● C is correct because Compute Savings Plans apply to both EC2 instances and AWS Lambda, providing the most cost-effective option when both services are in use.
● Connecting Lambda functions to the private subnets that host the EC2 instances allows for direct VPC access, fulfilling the requirement for Lambda-to-EC2 communication.
Why other options are wrong:
A. EC2 Instance Savings Plans only apply to specific EC2 instance types and do not apply to Lambda, missing out on Lambda cost savings.
B. Placing Lambda functions in public subnets is unnecessary and incorrect for accessing EC2 instances in private subnets.
D. Lambda functions in the default service VPC cannot access private EC2 subnets directly, which violates the connectivity requirement.
Source: https://docs.aws.amazon.com/savingsplans/latest/userguide/compute-savings-plans.html
Source: https://docs.aws.amazon.com/lambda/latest/dg/configuration-vpc.html
A company is expanding a secure on-premises network to the AWS Cloud by using an AWS Direct Connect connection. The on-premises network has no direct internet access. An application that runs on the on-premises network needs to use an Amazon S3 bucket.
Which solution will meet these requirements MOST cost-effectively?
✅ A. Create a public virtual interface (VIF). Route the AWS traffic over the public VIF.
⬜ B. Create a VPC and a NAT gateway. Route the AWS traffic from the on-premises network to the NAT gateway.
⬜ C. Create a VPC and an Amazon S3 interface endpoint. Route the AWS traffic from the on-premises network to the S3 interface endpoint.
⬜ D. Create a VPC peering connection between the on-premises network and Direct Connect. Route the AWS traffic over the peering connection.
Explanation:
● A is correct because a public virtual interface (public VIF) allows on-premises resources to access AWS public services such as Amazon S3 over a private Direct Connect connection without traversing the internet. It is the most cost-effective and direct method for enabling access to S3 in this setup.
Why other options are wrong:
B. NAT gateways are designed for internet access from VPC-based resources, not for on-premises to AWS public services. This also incurs unnecessary NAT data processing costs.
C. S3 interface endpoints can be reached from on premises over Direct Connect, but they require a VPC plus a private virtual interface and add hourly and per-GB PrivateLink charges, making them less cost-effective than a public VIF for this use case.
D. There is no such thing as a VPC peering connection between on-premises and Direct Connect. VPC peering is only between VPCs, not between on-prem and AWS.
Source: https://docs.aws.amazon.com/directconnect/latest/UserGuide/WorkingWithVirtualInterfaces.html
Source: https://docs.aws.amazon.com/directconnect/latest/UserGuide/direct-connect-gateway.html
A company hosts a three-tier ecommerce application on a fleet of Amazon EC2 instances. The instances run in an Auto Scaling group behind an Application Load Balancer (ALB). All ecommerce data is stored in an Amazon RDS for MariaDB Multi-AZ DB instance.
The company wants to optimize customer session management during transactions. The application must store session data durably.
Which solutions will meet these requirements? (Choose two.)
⬜ A. Turn on the sticky sessions feature (session affinity) on the ALB.
✅ B. Use an Amazon DynamoDB table to store customer session information.
⬜ C. Deploy an Amazon Cognito user pool to manage user session information.
✅ D. Deploy an Amazon ElastiCache for Redis cluster to store customer session information.
⬜ E. Use AWS Systems Manager Application Manager in the application to manage user session information.
Explanation:
● B is correct because Amazon DynamoDB provides durable, highly available, and scalable storage for session data and is a common choice for storing session state in stateless web applications.
● D is correct because Amazon ElastiCache for Redis is a high-performance, in-memory data store that supports session management and can be made durable with AOF (Append-Only File) or snapshotting features.
Why other options are wrong:
A. Sticky sessions keep users routed to the same EC2 instance but do not provide durability. Session data is lost if the instance is terminated.
C. Amazon Cognito is for user authentication and authorization, not for storing application session data.
E. AWS Systems Manager Application Manager is used for monitoring and managing applications, not for storing or managing user session data.
Source: https://docs.aws.amazon.com/elasticache/latest/mem-ug/ManagingSessions.html
Source: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SessionHandling.html
A company is running a multi-tier web application on premises. The web application is containerized and runs on a number of Linux hosts connected to a PostgreSQL database that contains user records. The operational overhead of maintaining the infrastructure and capacity planning is limiting the company’s growth. A solutions architect must improve the application’s infrastructure.
Which combination of actions should the solutions architect take to accomplish this? (Choose two.)
✅ A. Migrate the PostgreSQL database to Amazon Aurora.
⬜ B. Migrate the web application to be hosted on Amazon EC2 instances.
⬜ C. Set up an Amazon CloudFront distribution for the web application content.
⬜ D. Set up Amazon ElastiCache between the web application and the PostgreSQL database.
✅ E. Migrate the web application to be hosted on AWS Fargate with Amazon Elastic Container Service (Amazon ECS).
Explanation:
● A is correct because Amazon Aurora is a fully managed relational database that is compatible with PostgreSQL. It removes the burden of maintenance, scaling, and backups, thereby reducing operational overhead.
● E is correct because AWS Fargate allows you to run containers without managing servers or clusters, which minimizes the operational burden and simplifies capacity planning.
Why other options are wrong:
B. Migrating to Amazon EC2 still requires managing instances, patching, and capacity, so it does not significantly reduce operational overhead.
C. Amazon CloudFront improves content delivery performance, but it doesn’t reduce the infrastructure management burden or help with backend containerized compute.
D. ElastiCache can improve performance but does not directly address infrastructure overhead or capacity management.
Source: https://docs.aws.amazon.com/fargate/latest/userguide/what-is-fargate.html
Source: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_AuroraOverview.html
A company is hosting a three-tier ecommerce application in the AWS Cloud. The company hosts the website on Amazon S3 and integrates the website with an API that handles sales requests. The company hosts the API on three Amazon EC2 instances behind an Application Load Balancer (ALB). The API consists of static and dynamic front-end content along with backend workers that process sales requests asynchronously.
The company is expecting a significant and sudden increase in the number of sales requests during events for the launch of new products.
What should a solutions architect recommend to ensure that all the requests are processed successfully?
⬜ A. Add an Amazon CloudFront distribution for the dynamic content. Increase the number of EC2 instances to handle the increase in traffic.
⬜ B. Add an Amazon CloudFront distribution for the static content. Place the EC2 instances in an Auto Scaling group to launch new instances based on network traffic.
⬜ C. Add an Amazon CloudFront distribution for the dynamic content. Add an Amazon ElastiCache instance in front of the ALB to reduce traffic for the API to handle.
✅ D. Add an Amazon CloudFront distribution for the static content. Add an Amazon Simple Queue Service (Amazon SQS) queue to receive requests from the website for later processing by the EC2 instances.
Explanation:
● D is correct because adding CloudFront for static content improves performance and reduces load on the origin (S3), while Amazon SQS decouples incoming requests from processing, allowing backend EC2 workers to scale and process asynchronously, ensuring reliability and success even during traffic spikes.
Why other options are wrong:
A. A CloudFront distribution provides little benefit for the dynamic sales API, and adding EC2 instances alone may not absorb a sudden burst reliably because new instances take time to launch and nothing buffers the requests.
B. Auto Scaling helps, but by itself it doesn’t decouple the API requests from backend processing; the system may still drop requests under heavy load.
C. ElastiCache improves read performance but is not suitable for queuing or handling asynchronous sales processing requests.
Source: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/welcome.html
Source: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html
A company needs to store contract documents. A contract lasts for 5 years. During the 5-year period, the company must ensure that the documents cannot be overwritten or deleted. The company needs to encrypt the documents at rest and rotate the encryption keys automatically every year.
Which combination of steps should a solutions architect take to meet these requirements with the LEAST operational overhead? (Choose two.)
⬜ A. Store the documents in Amazon S3. Use S3 Object Lock in governance mode.
✅ B. Store the documents in Amazon S3. Use S3 Object Lock in compliance mode.
⬜ C. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Configure key rotation.
✅ D. Use server-side encryption with AWS Key Management Service (AWS KMS) customer managed keys. Configure key rotation.
⬜ E. Use server-side encryption with AWS Key Management Service (AWS KMS) customer provided (imported) keys. Configure key rotation.
Explanation:
● B is correct because S3 Object Lock in compliance mode enforces WORM (Write Once, Read Many) protection and prevents deletion or overwriting for the duration of the retention period—even by administrators. This is essential to meet regulatory retention requirements.
● D is correct because KMS customer managed keys (CMKs) support automatic annual key rotation and provide better control and auditing capabilities compared to AWS-managed keys or imported keys.
Why other options are wrong:
A. Governance mode can be bypassed by users with special permissions. It does not provide the same level of protection as compliance mode.
C. SSE-S3 does not support user-configurable key rotation; it uses AWS-managed keys and handles rotation without visibility or control.
E. Imported keys (customer-provided) require manual key rotation, which increases operational overhead and is contrary to the requirement of low overhead.
Source: https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock.html
Source: https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html
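A sketch of the Object Lock piece (option B): the bucket must be created with Object Lock enabled, and a default compliance-mode retention of 5 years then protects every new document. The bucket name is a placeholder.
```python
import boto3

s3 = boto3.client("s3")
BUCKET = "contract-documents-archive"  # hypothetical

# Object Lock must be enabled at bucket creation time.
s3.create_bucket(Bucket=BUCKET, ObjectLockEnabledForBucket=True)

# Default retention in compliance mode: no user, including the root user, can shorten or remove it.
s3.put_object_lock_configuration(
    Bucket=BUCKET,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 5}},
    },
)
```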
A company’s website provides users with downloadable historical performance reports. The website needs a solution that will scale to meet the company’s website demands globally. The solution should be cost-effective, limit the provisioning of infrastructure resources, and provide the fastest possible response time.
Which combination should a solutions architect recommend to meet these requirements?
✅ A. Amazon CloudFront and Amazon S3
⬜ B. AWS Lambda and Amazon DynamoDB
⬜ C. Application Load Balancer with Amazon EC2 Auto Scaling
⬜ D. Amazon Route 53 with internal Application Load Balancers
Explanation:
● Amazon S3 is a cost-effective and highly durable solution for hosting static content such as downloadable reports.
● Amazon CloudFront is a global content delivery network (CDN) that caches content at edge locations, thereby reducing latency and providing fast response times globally.
● This combination avoids provisioning and managing servers, minimizing operational overhead and infrastructure provisioning.
Why other options are wrong:
B. Lambda and DynamoDB are ideal for dynamic, compute-based workloads—not for serving static downloadable files.
C. EC2 Auto Scaling and ALB are more expensive and require managing infrastructure, which goes against the requirement to limit provisioning.
D. Route 53 with internal ALBs is typically used for internal routing and microservices, not global content delivery.
Source: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html
Source: https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html
A company is launching a new application that requires a structured database to store user profiles, application settings, and transactional data. The database must be scalable with application traffic and must offer backups.
Which solution will meet these requirements MOST cost-effectively?
⬜ A. Deploy a self-managed database on Amazon EC2 instances by using open source software. Use Spot Instances for cost optimization. Configure automated backups to Amazon S3.
⬜ B. Use Amazon RDS. Use on-demand capacity mode for the database with General Purpose SSD storage. Configure automatic backups with a retention period of 7 days.
✅ C. Use Amazon Aurora Serverless for the database. Use serverless capacity scaling. Configure automated backups to Amazon S3.
⬜ D. Deploy a self-managed NoSQL database on Amazon EC2 instances. Use Reserved Instances for cost optimization. Configure automated backups directly to Amazon S3 Glacier Flexible Retrieval.
Explanation:
● Amazon Aurora Serverless is a fully managed, auto-scaling, on-demand relational database option that automatically starts up, shuts down, and scales based on application needs.
● It supports structured relational data, is cost-effective for variable workloads, and automatically backs up to Amazon S3, meeting both scalability and backup requirements with minimal operational overhead.
Why other options are wrong:
A. Running a self-managed database on EC2, even with Spot Instances, increases operational burden and risks due to Spot interruptions.
B. RDS with on-demand mode provides scalability but is less cost-effective than Aurora Serverless for unpredictable workloads.
D. A self-managed NoSQL solution doesn’t meet the requirement for a structured database and adds unnecessary complexity.
Source: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless.html
A company needs to implement a new data retention policy for regulatory compliance. As part of this policy, sensitive documents that are stored in an Amazon S3 bucket must be protected from deletion or modification for a fixed period of time.
Which solution will meet these requirements?
⬜ A. Activate S3 Object Lock on the required objects and enable governance mode.
✅ B. Activate S3 Object Lock on the required objects and enable compliance mode.
⬜ C. Enable versioning on the S3 bucket. Set a lifecycle policy to delete the objects after a specified period.
⬜ D. Configure an S3 Lifecycle policy to transition objects to S3 Glacier Flexible Retrieval for the retention duration.
Explanation:
● S3 Object Lock in compliance mode ensures that objects cannot be modified or deleted by any user (including root users) during the defined retention period, satisfying strict regulatory compliance requirements.
● This mode provides WORM (Write Once, Read Many) protection, ideal for financial, legal, or healthcare data under regulatory mandates.
Why other options are wrong:
A. Governance mode allows certain users with special permissions to override retention settings, which is not acceptable for strict regulatory compliance.
C. Versioning with a lifecycle policy does not prevent deletion or modification of current versions before the expiration period.
D. Lifecycle policies control storage class transitions and deletion timing but do not prevent deletion or modification of objects before that period.
Source: https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html
A company is migrating an application from an on-premises location to Amazon Elastic Kubernetes Service (Amazon EKS). The company must use a custom subnet for pods that are in the company’s VPC to comply with requirements. The company also needs to ensure that the pods can communicate securely within the pods’ VPC.
Which solution will meet these requirements?
⬜ A. Configure AWS Transit Gateway to directly manage custom subnet configurations for the pods in Amazon EKS.
⬜ B. Create an AWS Direct Connect connection from the company’s on-premises IP address ranges to the EKS pods.
✅ C. Use the Amazon VPC CNI plugin for Kubernetes. Define custom subnets in the VPC cluster for the pods to use.
⬜ D. Implement a Kubernetes network policy that has pod anti-affinity rules to restrict pod placement to specific nodes that are within custom subnets.
Explanation:
● Amazon VPC CNI plugin for Kubernetes allows each pod to be assigned an IP address from the VPC, enabling it to use the custom subnets defined in the VPC.
● This approach allows pods to securely communicate within the VPC using private IPs and complies with custom networking requirements.
Why other options are wrong:
A. AWS Transit Gateway connects VPCs and on-premises networks but does not manage pod subnetting within EKS.
B. AWS Direct Connect is used for hybrid connectivity, but it does not configure custom subnets for pods.
D. Kubernetes pod anti-affinity rules control pod scheduling, not network configurations or custom subnet allocation.
Source: https://docs.aws.amazon.com/eks/latest/userguide/pod-networking.html
A company is planning to move its data to an Amazon S3 bucket. The data must be encrypted when it is stored in the S3 bucket. Additionally, the encryption key must be automatically rotated every year.
Which solution will meet these requirements with the LEAST operational overhead?
⬜ A. Move the data to the S3 bucket. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Use the built-in key rotation behavior of SSE-S3 encryption keys.
✅ B. Create an AWS Key Management Service (AWS KMS) customer managed key. Enable automatic key rotation. Set the S3 bucket’s default encryption behavior to use the customer managed KMS key. Move the data to the S3 bucket.
⬜ C. Create an AWS Key Management Service (AWS KMS) customer managed key. Set the S3 bucket’s default encryption behavior to use the customer managed KMS key. Move the data to the S3 bucket. Manually rotate the KMS key every year.
⬜ D. Encrypt the data with customer key material before moving the data to the S3 bucket. Create an AWS Key Management Service (AWS KMS) key without key material. Import the customer key material into the KMS key. Enable automatic key rotation.
Explanation:
● Option B is correct because automatic annual rotation can be enabled on an AWS KMS customer managed key, which keeps operational overhead low.
● Setting the S3 bucket’s default encryption to that customer managed key ensures all uploaded objects are encrypted without any additional effort (a short configuration sketch follows this question).
Why other options are wrong:
A. SSE-S3 uses S3-managed keys whose rotation is handled entirely by AWS; it is not user-visible or configurable, so an annual rotation schedule cannot be enforced or demonstrated.
C. Manually rotating the key increases operational burden, violating the “least operational overhead” requirement.
D. KMS keys with imported key material cannot be rotated automatically; automatic rotation is available only for customer managed keys whose key material is generated by AWS KMS. Encrypting the data client-side also adds unnecessary operational work.
Source: https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingKMSEncryption.html
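A minimal boto3 sketch of option B, assuming a hypothetical bucket name: the key is created, annual rotation is switched on, and the bucket’s default encryption is pointed at the key.

```python
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Create a customer managed key and enable automatic annual rotation.
key = kms.create_key(Description="S3 default encryption key")
key_arn = key["KeyMetadata"]["Arn"]
kms.enable_key_rotation(KeyId=key_arn)

# Point the bucket's default encryption at the key (bucket name is a placeholder).
s3.put_bucket_encryption(
    Bucket="example-data-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": key_arn,
                },
                "BucketKeyEnabled": True,  # reduces the number of KMS requests
            }
        ]
    },
)
```

Enabling the S3 Bucket Key is optional but usually worthwhile because it cuts the volume of KMS requests that S3 generates.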
A software company needs to upgrade a critical web application. The application currently runs on a single Amazon EC2 instance that the company hosts in a public subnet. The EC2 instance runs a MySQL database. The application’s DNS records are published in an Amazon Route 53 zone.
A solutions architect must reconfigure the application to be scalable and highly available. The solutions architect must also reduce MySQL read latency.
Which combination of solutions will meet these requirements? (Choose two.)
⬜ A. Launch a second EC2 instance in a second AWS Region. Use a Route 53 failover routing policy to redirect the traffic to the second EC2 instance.
✅ B. Create and configure an Auto Scaling group to launch private EC2 instances in multiple Availability Zones. Add the instances to a target group behind a new Application Load Balancer.
✅ C. Migrate the database to an Amazon Aurora MySQL cluster. Create the primary DB instance and reader DB instance in separate Availability Zones.
⬜ D. Create and configure an Auto Scaling group to launch private EC2 instances in multiple AWS Regions. Add the instances to a target group behind a new Application Load Balancer.
⬜ E. Migrate the database to an Amazon Aurora MySQL cluster with cross-Region read replicas.
Explanation:
● Option B provides horizontal scalability and high availability by running the EC2 instances in an Auto Scaling group that spans multiple Availability Zones behind an Application Load Balancer.
● Option C makes the database tier highly available and reduces read latency by using an Aurora MySQL cluster with a reader instance in a separate Availability Zone (a provisioning sketch follows this question).
Why other options are wrong:
A. Multi-Region deployment with Route 53 failover is more appropriate for disaster recovery, not for scalability and low latency.
D. Deploying across Regions with an ALB is not feasible; ALBs are Regional, not global.
E. Cross-Region replicas add unnecessary complexity and are not required for in-Region scalability and availability.
Source: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-introduction.html
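A hedged boto3 sketch of the Aurora part of the answer, with placeholder identifiers, credentials, Availability Zones, and instance class; the first instance created in the cluster acts as the writer, and the second serves reads.

```python
import boto3

rds = boto3.client("rds")

# Aurora MySQL cluster with a writer and a reader in different AZs.
rds.create_db_cluster(
    DBClusterIdentifier="app-aurora-cluster",
    Engine="aurora-mysql",
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",
)

for name, az in [("app-aurora-writer", "us-east-1a"),
                 ("app-aurora-reader", "us-east-1b")]:
    rds.create_db_instance(
        DBInstanceIdentifier=name,
        DBClusterIdentifier="app-aurora-cluster",
        Engine="aurora-mysql",
        DBInstanceClass="db.r6g.large",
        AvailabilityZone=az,
    )
```

The application then sends writes to the cluster (writer) endpoint and reads to the reader endpoint, which is what lowers MySQL read latency.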
A company runs thousands of AWS Lambda functions. The company needs a solution to securely store sensitive information that all the Lambda functions use. The solution must also manage the automatic rotation of the sensitive information.
Which combination of steps will meet these requirements with the LEAST operational overhead? (Choose two.)
⬜ A. Create HTTP security headers by using Lambda@Edge to retrieve and create sensitive information
✅ B. Create a Lambda layer that retrieves sensitive information
✅ C. Store sensitive information in AWS Secrets Manager
⬜ D. Store sensitive information in AWS Systems Manager Parameter Store
⬜ E. Create a Lambda consumer with dedicated throughput to retrieve sensitive information and create environment variables
Explanation:
● C. AWS Secrets Manager is purpose-built for securely storing and managing sensitive information such as API keys and database credentials. It also supports automatic rotation of secrets, which minimizes operational overhead.
● B. A Lambda layer can package the secret-retrieval code once and share it across all of the Lambda functions, reducing duplication and improving maintainability (a sketch of such a helper follows this question).
Why other options are wrong:
A. Lambda@Edge is designed for CDN request/response customization—not for securely retrieving and rotating secrets.
D. AWS Systems Manager Parameter Store supports storing secure strings, but automatic rotation is not natively supported, which makes Secrets Manager the better fit.
E. Creating a dedicated Lambda consumer adds unnecessary complexity and operational overhead when Secrets Manager already handles secret retrieval efficiently.
Source: https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html
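To illustrate options B and C together, a minimal helper module that could be packaged as a Lambda layer is sketched below; the module name, the secret identifier, and the assumption that the secret is stored as JSON are all placeholders.

```python
# secrets_helper.py -- packaged in a Lambda layer and imported by every function.
import json

import boto3

_client = boto3.client("secretsmanager")
_cache = {}


def get_secret(secret_id: str) -> dict:
    """Fetch a secret once per execution environment and cache it across invocations."""
    if secret_id not in _cache:
        response = _client.get_secret_value(SecretId=secret_id)
        _cache[secret_id] = json.loads(response["SecretString"])
    return _cache[secret_id]
```

Each function would then call, for example, get_secret("prod/app/credentials") instead of bundling its own retrieval logic, while Secrets Manager handles rotation centrally.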
A company has an internal application that runs on Amazon EC2 instances in an Auto Scaling group. The EC2 instances are compute optimized and use Amazon Elastic Block Store (Amazon EBS) volumes.
The company wants to identify cost optimizations across the EC2 instances, the Auto Scaling group, and the EBS volumes.
Which solution will meet these requirements with the MOST operational efficiency?
⬜ A. Create a new AWS Cost and Usage Report. Search the report for cost recommendations for the EC2 instances, the Auto Scaling group, and the EBS volumes.
⬜ B. Create new Amazon CloudWatch billing alerts. Check the alert statuses for cost recommendations for the EC2 instances, the Auto Scaling group, and the EBS volumes.
✅ C. Configure AWS Compute Optimizer for cost recommendations for the EC2 instances, the Auto Scaling group, and the EBS volumes.
⬜ D. Configure AWS Compute Optimizer for cost recommendations for the EC2 instances. Create a new AWS Cost and Usage Report. Search the report for cost recommendations for the Auto Scaling group and the EBS volumes.
Explanation:
● AWS Compute Optimizer provides automated cost and performance optimization recommendations for Amazon EC2 instances, Auto Scaling groups, and Amazon EBS volumes. It analyzes historical utilization metrics and generates actionable insights with minimal operational overhead (the API calls that surface these findings are sketched after this question).
Why other options are wrong:
A. AWS Cost and Usage Reports provide detailed billing info but do not generate specific optimization recommendations automatically.
B. CloudWatch billing alerts notify about spending thresholds but do not provide optimization advice.
D. This approach unnecessarily combines multiple tools when Compute Optimizer alone already supports all three resource types (EC2, Auto Scaling groups, and EBS).
Source: https://docs.aws.amazon.com/compute-optimizer/latest/ug/what-is.html
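Assuming the account has already opted in to Compute Optimizer, a small boto3 sketch to pull findings for all three resource types might look like this (pagination is omitted, and the response field names should be verified against the current API documentation):

```python
import boto3

optimizer = boto3.client("compute-optimizer")

# Retrieve findings for EC2 instances, Auto Scaling groups, and EBS volumes.
ec2 = optimizer.get_ec2_instance_recommendations()
asg = optimizer.get_auto_scaling_group_recommendations()
ebs = optimizer.get_ebs_volume_recommendations()

for rec in ec2.get("instanceRecommendations", []):
    print(rec["instanceArn"], rec["finding"])
for rec in asg.get("autoScalingGroupRecommendations", []):
    print(rec["autoScalingGroupArn"], rec["finding"])
for rec in ebs.get("volumeRecommendations", []):
    print(rec["volumeArn"], rec["finding"])
```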
A company is running a media store across multiple Amazon EC2 instances distributed across multiple Availability Zones in a single VPC. The company wants a high-performing solution to share data between all the EC2 instances, and prefers to keep the data within the VPC only.
What should a solutions architect recommend?
⬜ A. Create an Amazon S3 bucket and call the service APIs from each instance’s application
⬜ B. Create an Amazon S3 bucket and configure all instances to access it as a mounted volume
⬜ C. Configure an Amazon Elastic Block Store (Amazon EBS) volume and mount it across all instances
✅ D. Configure an Amazon Elastic File System (Amazon EFS) file system and mount it across all instances
Explanation:
● Amazon EFS is a fully managed, elastic, and scalable shared file system that can be mounted on many EC2 instances at the same time and works across Availability Zones within a VPC. It meets the requirement for high-performance sharing with VPC-only access (a provisioning sketch follows this question).
Why other options are wrong:
A. Amazon S3 is object storage, not suitable for high-performance file sharing between instances.
B. S3 cannot be mounted natively as a file system; doing so requires additional tooling (for example, s3fs), which does not provide the performance or POSIX semantics of a shared file system.
C. Amazon EBS volumes cannot be shared across multiple EC2 instances simultaneously unless using special configurations like EBS Multi-Attach, which has limitations and is not designed for shared file systems.
Source: https://docs.aws.amazon.com/efs/latest/ug/whatisefs.html
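A minimal boto3 sketch of option D, with placeholder subnet and security group IDs; waiting for the file system to become available and other error handling are omitted.

```python
import boto3

efs = boto3.client("efs")

# One file system, then one mount target per AZ the instances run in.
fs = efs.create_file_system(
    CreationToken="media-store-shared",
    PerformanceMode="generalPurpose",
    Encrypted=True,
)

for subnet_id in ["subnet-0aaa1111", "subnet-0bbb2222", "subnet-0ccc3333"]:
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId=subnet_id,
        SecurityGroups=["sg-0ddd4444"],  # must allow NFS (TCP 2049) from the instances
    )

# On each instance, mount the file system with the EFS mount helper, e.g.:
#   sudo mount -t efs <file-system-id>:/ /mnt/media
```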
A digital image processing company wants to migrate its on-premises monolithic application to the AWS Cloud. The company processes thousands of images and generates large files as part of the processing workflow.
The company needs a solution to manage the growing number of image processing jobs. The solution must also reduce the manual tasks in the image processing workflow. The company does not want to manage the underlying infrastructure of the solution.
Which solution will meet these requirements with the LEAST operational overhead?
⬜ A. Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 Spot Instances to process the images. Configure Amazon Simple Queue Service (Amazon SQS) to orchestrate the workflow. Store the processed files in Amazon Elastic File System (Amazon EFS).
✅ B. Use AWS Batch jobs to process the images. Use AWS Step Functions to orchestrate the workflow. Store the processed files in an Amazon S3 bucket.
⬜ C. Use AWS Lambda functions and Amazon EC2 Spot Instances to process the images. Store the processed files in Amazon FSx.
⬜ D. Deploy a group of Amazon EC2 instances to process the images. Use AWS Step Functions to orchestrate the workflow. Store the processed files in an Amazon Elastic Block Store (Amazon EBS) volume.
Explanation:
● AWS Batch is ideal for managing batch workloads at scale without needing to manage servers. It integrates well with Step Functions, which can automate and orchestrate complex workflows.
● Amazon S3 provides a durable, cost-effective solution for storing large output files from image processing.
● This combination minimizes infrastructure management because AWS Batch and Step Functions are both fully managed services (a minimal state machine sketch follows this question).
Why other options are wrong:
A. Requires managing ECS tasks and Spot instance handling, increasing operational overhead.
C. AWS Lambda’s 15-minute execution limit and resource constraints make it a poor fit for long-running, compute-heavy image processing, and pairing it with EC2 Spot Instances reintroduces infrastructure to manage.
D. EC2-based architectures require manual provisioning, scaling, and maintenance, which violates the “least operational overhead” requirement.
Source: https://docs.aws.amazon.com/batch/latest/userguide/what-is-batch.html
Source: https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html
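As an illustration of option B, the sketch below creates a one-state Step Functions workflow that submits an AWS Batch job and waits for it to complete; the job queue, job definition, state machine name, and IAM role ARN are placeholders.

```python
import json

import boto3

sfn = boto3.client("stepfunctions")

# One task state that submits a Batch job and waits for completion (.sync).
definition = {
    "StartAt": "ProcessImage",
    "States": {
        "ProcessImage": {
            "Type": "Task",
            "Resource": "arn:aws:states:::batch:submitJob.sync",
            "Parameters": {
                "JobName": "image-processing",
                "JobQueue": "image-job-queue",
                "JobDefinition": "image-job-definition",
            },
            "End": True,
        }
    },
}

sfn.create_state_machine(
    name="image-processing-workflow",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsBatchRole",
)
```

In practice the workflow would usually add states for downstream steps and for writing results to the S3 bucket, but the pattern of delegating each job to AWS Batch stays the same.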
A company wants to migrate its existing on-premises monolithic application to AWS. The company wants to keep as much of the front-end code and the backend code as possible. However, the company wants to break the application into smaller applications. A different team will manage each application. The company needs a highly scalable solution that minimizes operational overhead.
Which solution will meet these requirements?
⬜ A. Host the application on AWS Lambda. Integrate the application with Amazon API Gateway.
⬜ B. Host the application with AWS Amplify. Connect the application to an Amazon API Gateway API that is integrated with AWS Lambda.
⬜ C. Host the application on Amazon EC2 instances. Set up an Application Load Balancer with EC2 instances in an Auto Scaling group as targets.
✅ D. Host the application on Amazon Elastic Container Service (Amazon ECS). Set up an Application Load Balancer with Amazon ECS as the target.
Explanation:
● Amazon ECS allows a monolithic application to be refactored into smaller, containerized microservices, each of which can be managed independently by different teams.
● It also provides scalability and reduces operational overhead, especially with the Fargate launch type, which offers serverless compute for containers.
● An Application Load Balancer can route traffic to the different services based on request path or host header, which is ideal for exposing multiple team-owned microservices behind one entry point (a routing sketch follows this question).
Why other options are wrong:
A. AWS Lambda is designed for event-driven, short-lived functions; moving the monolith to Lambda would require major refactoring, which conflicts with keeping as much of the existing code as possible.
B. AWS Amplify targets modern web and mobile front ends and is not suitable for hosting a large legacy backend.
C. Amazon EC2 offers flexibility but requires significant operational management and does not natively provide the per-service separation and scaling benefits that ECS offers.
Source: https://aws.amazon.com/ecs/
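For illustration, path-based listener rules can send each URL prefix to the target group of a different ECS service; the ARNs and paths below are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")

# One rule per team-owned service on a shared ALB listener. Each target
# group is registered with its own ECS service.
rules = [
    ("/orders/*", "arn:aws:elasticloadbalancing:...:targetgroup/orders/..."),
    ("/catalog/*", "arn:aws:elasticloadbalancing:...:targetgroup/catalog/..."),
]

for priority, (path, target_group_arn) in enumerate(rules, start=10):
    elbv2.create_rule(
        ListenerArn="arn:aws:elasticloadbalancing:...:listener/app/web/...",
        Priority=priority,
        Conditions=[{"Field": "path-pattern", "Values": [path]}],
        Actions=[{"Type": "forward", "TargetGroupArn": target_group_arn}],
    )
```

Each team can then deploy and scale its own ECS service independently while the shared ALB remains the single entry point for the application.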