AWS Certified Solutions Architect Associate (SAA-C03) Exam Questions


A comprehensive list of free AWS Certified Solutions Architect Associate (SAA-C03) exam questions, curated to help you crack the exam with confidence.

Disclaimer: AWS is a protected brand. These exam questions are neither endorsed by nor affiliated with AWS, and they are not official AWS exam questions or dumps. They were created from publicly available AWS web resources. The questions cover all the objectives and services of the official AWS SAA-C03 exam; once you work through them and understand the underlying concepts, you are well prepared to pass the exam on the first attempt.

Overview


  1. Prepare well for the exam; it is the toughest exam I have cracked in recent years.
  2. It requires 2 to 3 months of preparation, depending on your daily commitment.
  3. The exam code is SAA-C03 (third version), and it costs 150 USD per attempt.
  4. You need to solve 65 questions in 130 minutes from your laptop under the supervision of an online proctor.
  5. The passing score is 720 out of 1000, which means you should answer at least 47 of the 65 questions correctly. There is no negative scoring, so answer all the questions!
  6. You get the result (Pass or Fail) once you submit the exam, but you do not receive an email immediately; it generally takes 2-3 days. I received an email with the digital certificate, score card, and badge after two days. You can also log in to AWS Training to get them later.
  7. You can schedule the exam with Pearson VUE or PSI. I had heard bad reviews about PSI and chose Pearson VUE; the exam went smoothly.
  8. You get discount vouchers under the Benefits tab of the AWS Training portal once you pass at least one AWS exam. You can use these vouchers for subsequent exams.
  9. See the Exam Guide for more details.

Practice Questions


A solutions architect needs to optimize a large data analytics job that runs on an Amazon EMR cluster. The job takes 13 hours to finish. The cluster has multiple core nodes and worker nodes deployed on large, compute-optimized instances.

After reviewing EMR logs, the solutions architect discovers that several nodes are idle for more than 5 hours while the job is running. The solutions architect needs to optimize cluster performance.

Which solution will meet this requirement MOST cost-effectively?

⬜ A. Increase the number of core nodes to ensure there is enough processing power to handle the analytics job without any idle time.
✅ B. Use the EMR managed scaling feature to automatically resize the cluster based on workload.
⬜ C. Migrate the analytics job to a set of AWS Lambda functions. Configure reserved concurrency for the functions.
⬜ D. Migrate the analytics job core nodes to a memory-optimized instance type to reduce the total job runtime.

Explanation:
EMR managed scaling dynamically resizes the cluster by adding or removing nodes based on the workload. This feature helps minimize idle time and reduces costs by scaling the cluster to meet processing demands efficiently.
Incorrect Options:
Option A: Increasing the number of core nodes might increase idle time further, as it does not address the root cause of underutilization.
Option C: Migrating the job to Lambda is infeasible for large analytics jobs due to resource and runtime constraints.
Option D: Changing to memory-optimized instances may not necessarily reduce idle time or optimize costs.
Source: Using managed scaling in Amazon EMR
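
For illustration, managed scaling can be attached to an existing cluster with a single API call. The boto3 sketch below is not part of the question; the cluster ID and capacity limits are placeholders.

```python
import boto3

emr = boto3.client("emr")

# Attach a managed scaling policy so EMR can add and remove nodes with the workload.
emr.put_managed_scaling_policy(
    ClusterId="j-EXAMPLECLUSTER",                 # placeholder cluster ID
    ManagedScalingPolicy={
        "ComputeLimits": {
            "UnitType": "Instances",
            "MinimumCapacityUnits": 3,            # keep enough core capacity for HDFS
            "MaximumCapacityUnits": 20,           # cap spend during peak processing
            "MaximumCoreCapacityUnits": 5,
        }
    },
)
```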


A developer used the AWS SDK to create an application that aggregates and produces log records for 10 services. The application delivers data to an Amazon Kinesis Data Streams stream.

Each record contains a log message with a service name, creation timestamp, and other log information. The stream has 15 shards in provisioned capacity mode. The stream uses service name as the partition key.

The developer notices that when all the services are producing logs, ProvisionedThroughputExceededException errors occur during PutRecord requests. The stream metrics show that the write capacity the applications use is below the provisioned capacity.

How should the developer resolve this issue?

⬜ A. Change the capacity mode from provisioned to on-demand.
⬜ B. Double the number of shards until the throttling errors stop occurring.
✅ C. Change the partition key from service name to creation timestamp.
⬜ D. Use a separate Kinesis stream for each service to generate the logs.

Explanation:
Partition Key Issue: Using ‘service name’ as the partition key results in uneven data distribution. Some shards may become hot due to excessive logs from certain services, leading to throttling errors. Changing the partition key to ‘creation timestamp’ ensures a more even distribution of records across shards.
Incorrect Options:
Option A: On-demand capacity mode eliminates throughput management but is more expensive and does not address the root cause.
Option B: Adding more shards does not solve the issue if the partition key still creates hot shards.
Option D: Using separate streams increases complexity and is unnecessary.
Source: Amazon Kinesis Data Streams Terminology and concepts
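
A quick sketch of the fix, assuming a hypothetical stream name and log record: the only change on the producer side is the value passed as the partition key.

```python
import json
import time

import boto3

kinesis = boto3.client("kinesis")

log_record = {
    "service": "payments",
    "created_at": time.time(),
    "message": "order processed",
}

# A high-cardinality partition key (here, the creation timestamp) spreads records
# across all 15 shards instead of concentrating them on the shards that the
# 10 service names happen to hash to.
kinesis.put_record(
    StreamName="service-logs",                        # placeholder stream name
    Data=json.dumps(log_record).encode("utf-8"),
    PartitionKey=str(log_record["created_at"]),
)
```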


An e-commerce company has an application that uses Amazon DynamoDB tables configured with provisioned capacity. Order data is stored in a table named Orders. The Orders table has a primary key of order-ID and a sort key of product-ID. The company configured an AWS Lambda function to receive DynamoDB streams from the Orders table and update a table named Inventory. The company has noticed that during peak sales periods, updates to the Inventory table take longer than the company can tolerate.

Which solutions will resolve the slow table updates? (Select TWO.)

⬜ A. Add a global secondary index to the Orders table. Include the product-ID attribute.
✅ B. Set the batch size attribute of the DynamoDB streams to be based on the size of items in the Orders table.
✅ C. Increase the DynamoDB table provisioned capacity by 1,000 write capacity units (WCUs).
⬜ D. Increase the DynamoDB table provisioned capacity by 1,000 read capacity units (RCUs).
⬜ E. Increase the timeout of the Lambda function to 15 minutes.

Explanation:
Key Problem:
Delayed Inventory table updates during peak sales.
DynamoDB Streams and Lambda processing require optimization.
Analysis of Options:
Option A: Adding a GSI is unrelated to the issue. It does not address stream processing delays or capacity issues.
Option B: Optimizing batch size reduces latency and allows the Lambda function to process larger chunks of data at once, improving performance during peak load.
Option C: Increasing write capacity for the Inventory table ensures that it can handle the increased volume of updates during peak times.
Option D: Increasing read capacity for the Orders table does not directly resolve the issue since the problem is with updates to the Inventory table.
Option E: Increasing Lambda timeout only addresses longer processing times but does not solve the underlying throughput problem.
Source: Best practices for designing and architecting with DynamoDB
Source: DynamoDB provisioned capacity mode
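
The two changes map to two API calls. The boto3 sketch below assumes a hypothetical event source mapping UUID and illustrative capacity numbers.

```python
import boto3

lambda_client = boto3.client("lambda")
dynamodb = boto3.client("dynamodb")

# Option B: tune how many stream records the Lambda function receives per invocation.
lambda_client.update_event_source_mapping(
    UUID="11111111-2222-3333-4444-555555555555",   # placeholder: existing Orders-stream mapping
    BatchSize=500,
    MaximumBatchingWindowInSeconds=5,
)

# Option C: give the Inventory table enough write capacity for peak-sale update volume.
dynamodb.update_table(
    TableName="Inventory",
    ProvisionedThroughput={
        "ReadCapacityUnits": 100,      # existing read capacity (illustrative)
        "WriteCapacityUnits": 1100,    # previous WCUs + 1,000 (illustrative)
    },
)
```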


A developer is creating an ecommerce workflow in an AWS Step Functions state machine that includes an HTTP Task state. The task passes shipping information and order details to an endpoint.

The developer needs to test the workflow to confirm that the HTTP headers and body are correct and that the responses meet expectations.

Which solution will meet these requirements?

✅ A. Use the TestState API to invoke only the HTTP Task. Set the inspection level to TRACE.
⬜ B. Use the TestState API to invoke the state machine. Set the inspection level to DEBUG.
⬜ C. Use the data flow simulator to invoke only the HTTP Task. View the request and response data.
⬜ D. Change the log level of the state machine to ALL. Run the state machine.

Explanation:
Testing a Single State with the TestState API:
The TestState API tests one state in isolation without running the whole state machine. For an HTTP Task, setting the inspection level to TRACE returns the raw HTTP request and response, so the developer can verify the headers, body, and response behavior before deploying the workflow.
Incorrect Options:
Option B: The TestState API tests an individual state, not an entire state machine, and the DEBUG inspection level does not expose the raw HTTP request and response.
Option C: The data flow simulator is for designing input and output processing (paths, parameters, result selectors); it cannot invoke an HTTP endpoint or show real request and response data.
Option D: Raising the log level to ALL and running the state machine executes the entire workflow and sends real requests to the endpoint; it is not a targeted way to inspect a single HTTP Task during development.
Source: What is Step Functions?
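
A minimal boto3 sketch of calling TestState against a single HTTP Task; the state definition, role ARN, and input are illustrative, not taken from the question.

```python
import json

import boto3

sfn = boto3.client("stepfunctions")

http_task = {
    "Type": "Task",
    "Resource": "arn:aws:states:::http:invoke",
    "Parameters": {
        "ApiEndpoint": "https://example.com/orders",
        "Method": "POST",
        "Authentication": {
            "ConnectionArn": "arn:aws:events:us-east-1:111122223333:connection/shipping/abc123"
        },
    },
    "End": True,
}

response = sfn.test_state(
    definition=json.dumps(http_task),
    roleArn="arn:aws:iam::111122223333:role/StepFunctionsHttpTaskRole",  # placeholder
    input=json.dumps({"orderId": "123", "shipping": {"method": "express"}}),
    inspectionLevel="TRACE",   # TRACE includes the raw HTTP request and response
)

# inspectionData contains the request headers/body and the response for review.
print(response["inspectionData"])
```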


A company is deploying a critical application by using Amazon RDS for MySQL. The application must be highly available and must recover automatically. The company needs to support interactive users (transactional queries) and batch reporting (analytical queries) with no more than a 4-hour lag. The analytical queries must not affect the performance of the transactional queries.

Which solution will meet these requirements?

⬜ A. Configure Amazon RDS for MySQL in a Multi-AZ DB instance deployment with one standby instance. Point the transactional queries to the primary DB instance. Point the analytical queries to a secondary DB instance that runs in a different Availability Zone.
✅ B. Configure Amazon RDS for MySQL in a Multi-AZ DB cluster deployment with two standby instances. Point the transactional queries to the primary DB instance. Point the analytical queries to the reader endpoint.
⬜ C. Configure Amazon RDS for MySQL to use multiple read replicas across multiple Availability Zones. Point the transactional queries to the primary DB instance. Point the analytical queries to one of the replicas in a different Availability Zone.
⬜ D. Configure Amazon RDS for MySQL as the primary database for the transactional queries with automated backups enabled. Each night, create a read-only database from the most recent snapshot to support the analytical queries. Terminate the previously created database.

Explanation:
Key Requirements:
High availability and automatic recovery.
Separate transactional and analytical queries with minimal performance impact.
Allow up to a 4-hour lag for analytical queries.
Analysis of Options:
Option A: A Multi-AZ DB instance deployment keeps the standby passive; the standby cannot serve read traffic, so it cannot host the analytical queries.
Option B: A Multi-AZ DB cluster deployment (supported for RDS for MySQL) provides automatic failover plus two readable standby instances behind a reader endpoint. Transactional queries go to the primary, analytical queries go to the reader endpoint, and replication lag stays far below the 4-hour limit. Meets all requirements.
Option C: Read replicas can offload the analytical queries, but read replicas alone do not provide automatic failover for the primary instance, so the high-availability and automatic-recovery requirement is not met.
Option D: Creating a nightly read-only database from a snapshot adds significant operational overhead and does not reliably satisfy the 4-hour lag requirement.
Source: Working with DB instance read replicas
Source: Configuring and managing a Multi-AZ deployment for Amazon RDS
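
Once the Multi-AZ DB cluster exists, the two endpoints can be read programmatically. A small boto3 sketch, assuming a hypothetical cluster identifier:

```python
import boto3

rds = boto3.client("rds")

cluster = rds.describe_db_clusters(
    DBClusterIdentifier="orders-mysql-cluster"    # placeholder identifier
)["DBClusters"][0]

writer_endpoint = cluster["Endpoint"]          # transactional (read/write) traffic
reader_endpoint = cluster["ReaderEndpoint"]    # batch reporting / analytical traffic

print(writer_endpoint, reader_endpoint)
```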


A company manages AWS accounts in AWS Organizations. AWS IAM Identity Center (AWS Single Sign-On) and AWS Control Tower are configured for the accounts. The company wants to manage multiple user permissions across all the accounts.

The permissions will be used by multiple IAM users and must be split between the developer and administrator teams. Each team requires different permissions. The company wants a solution that includes new users that are hired on both teams.

Which solution will meet these requirements with the LEAST operational overhead?

⬜ A. Create individual users in IAM Identity Center for each account. Create separate developer and administrator groups in IAM Identity Center. Assign the users to the appropriate groups. Create a custom IAM policy for each group to set fine-grained permissions.
⬜ B. Create individual users in IAM Identity Center for each account. Create separate developer and administrator groups in IAM Identity Center. Assign the users to the appropriate groups. Attach AWS managed IAM policies to each user as needed for fine-grained permissions.
✅ C. Create individual users in IAM Identity Center. Create new developer and administrator groups in IAM Identity Center. Create new permission sets that include the appropriate IAM policies for each group. Assign the new groups to the appropriate accounts. Assign the new permission sets to the new groups. When new users are hired, add them to the appropriate group.
⬜ D. Create individual users in IAM Identity Center. Create new permission sets that include the appropriate IAM policies for each user. Assign the users to the appropriate accounts. Grant additional IAM permissions to the users from within specific accounts. When new users are hired, add them to IAM Identity Center and assign them to the accounts.

Explanation:
The best approach for least operational overhead is to use groups in IAM Identity Center and permission sets:
● IAM Identity Center users are managed centrally (not per account individually).
● Groups (e.g., “Developers” and “Administrators”) make managing large numbers of users easier.
● Permission sets define reusable permission templates (fine-grained) and are assigned once per group.
● New users can simply be added to the appropriate group — no need to reconfigure policies or individual user permissions manually.
This is a standard, scalable design for AWS Organizations environments using IAM Identity Center and AWS Control Tower.
Why other options are wrong:
A: Creates custom IAM policies per group — but manually per user/account, which is high overhead.
B: Suggests attaching managed policies directly to users — poor practice at scale and harder to maintain.
D: Creates permission sets per user — not scalable, and increases management complexity when users change roles.
Source: https://docs.aws.amazon.com/singlesignon/latest/userguide/howtocreategroups.html
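
A sketch of the moving parts (permission set, group, account assignment) using boto3; the instance ARN, identity store ID, account ID, and managed policy are placeholders chosen for illustration.

```python
import boto3

sso_admin = boto3.client("sso-admin")
identitystore = boto3.client("identitystore")

instance_arn = "arn:aws:sso:::instance/ssoins-EXAMPLE"   # placeholder
identity_store_id = "d-EXAMPLE"                          # placeholder
target_account_id = "111122223333"                       # placeholder

# One reusable permission set for the whole developer team.
permission_set_arn = sso_admin.create_permission_set(
    Name="DeveloperAccess",
    InstanceArn=instance_arn,
)["PermissionSet"]["PermissionSetArn"]

sso_admin.attach_managed_policy_to_permission_set(
    InstanceArn=instance_arn,
    PermissionSetArn=permission_set_arn,
    ManagedPolicyArn="arn:aws:iam::aws:policy/PowerUserAccess",   # illustrative policy
)

# One group; new hires are simply added to this group later.
group_id = identitystore.create_group(
    IdentityStoreId=identity_store_id,
    DisplayName="Developers",
)["GroupId"]

# Assign the group and permission set to an account once.
sso_admin.create_account_assignment(
    InstanceArn=instance_arn,
    TargetId=target_account_id,
    TargetType="AWS_ACCOUNT",
    PermissionSetArn=permission_set_arn,
    PrincipalType="GROUP",
    PrincipalId=group_id,
)
```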


How can a company detect and notify security teams about PII in S3 buckets?

✅ A. Use Amazon Macie. Create an EventBridge rule for SensitiveData findings and send an SNS notification.
⬜ B. Use Amazon GuardDuty. Create an EventBridge rule for CRITICAL findings and send an SNS notification.
⬜ C. Use Amazon Macie. Create an EventBridge rule for SensitiveData:S3Object/Personal findings and send an SQS notification.
⬜ D. Use Amazon GuardDuty. Create an EventBridge rule for CRITICAL findings and send an SQS notification.

Explanation:
Amazon Macie is purpose-built for detecting PII in S3.
Option A uses EventBridge to filter SensitiveData findings and notify via SNS, meeting the requirements.
Options B and D involve GuardDuty, which is not designed for PII detection.
Option C uses SQS, which is less suitable for immediate notifications.
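
For reference, the rule in option A looks roughly like the boto3 sketch below; the topic ARN is a placeholder, and the event pattern assumes Macie's standard "Macie Finding" detail type.

```python
import json

import boto3

events = boto3.client("events")

sns_topic_arn = "arn:aws:sns:us-east-1:111122223333:security-notifications"  # placeholder

# Match Macie sensitive-data findings and forward them to the security team's topic.
events.put_rule(
    Name="macie-sensitive-data-findings",
    EventPattern=json.dumps({
        "source": ["aws.macie"],
        "detail-type": ["Macie Finding"],
        "detail": {"type": [{"prefix": "SensitiveData"}]},
    }),
    State="ENABLED",
)

events.put_targets(
    Rule="macie-sensitive-data-findings",
    Targets=[{"Id": "notify-security-team", "Arn": sns_topic_arn}],
)
```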


An ecommerce company is migrating its on-premises workload to the AWS Cloud. The workload currently consists of a web application and a backend Microsoft SQL database for storage.

The company expects a high volume of customers during a promotional event. The new infrastructure in the AWS Cloud must be highly available and scalable.

Which solution will meet these requirements with the LEAST administrative overhead?

⬜ A. Migrate the web application to two Amazon EC2 instances across two Availability Zones behind an Application Load Balancer. Migrate the database to Amazon RDS for Microsoft SQL Server with read replicas in both Availability Zones.
⬜ B. Migrate the web application to an Amazon EC2 instance that runs in an Auto Scaling group across two Availability Zones behind an Application Load Balancer. Migrate the database to two EC2 instances across separate AWS Regions with database replication.
✅ C. Migrate the web application to Amazon EC2 instances that run in an Auto Scaling group across two Availability Zones behind an Application Load Balancer. Migrate the database to Amazon RDS with Multi-AZ deployment.
⬜ D. Migrate the web application to three Amazon EC2 instances across three Availability Zones behind an Application Load Balancer. Migrate the database to three EC2 instances across three Availability Zones.

Explanation:
The correct solution with high availability, scalability, and minimal administrative overhead is:
● C. Deploy the web application on EC2 instances in an Auto Scaling group across multiple AZs for scalability and high availability,
● Use Amazon RDS with Multi-AZ deployment for automatic failover, managed backups, and minimal downtime without the need to manually manage replication or databases.
This combination offers server scaling, database resilience, and AWS-managed maintenance with lowest operational complexity.
Why other options are wrong:
A. Read replicas are intended for read scaling, not automatic failover, and two fixed EC2 instances without an Auto Scaling group cannot scale for the promotional-event traffic.
B. Replication across separate AWS Regions manually on EC2 adds heavy operational overhead and complexity.
D. Managing databases manually across EC2 instances and AZs requires high operational effort compared to using managed RDS.
Source: Auto Scaling groups
Source: Configuring and managing a Multi-AZ deployment for Amazon RDS


A company is developing a new application that uses a relational database to store user data and application configurations. The company expects the application to have steady user growth. The company expects the database usage to be variable and read-heavy, with occasional writes.

The company wants to cost-optimize the database solution. The company wants to use an AWS managed database solution that will provide the necessary performance.

Which solution will meet these requirements MOST cost-effectively?

⬜ A. Deploy the database on Amazon RDS. Use Provisioned IOPS SSD storage to ensure consistent performance for read and write operations.
✅ B. Deploy the database on Amazon Aurora Serverless to automatically scale the database capacity based on actual usage to accommodate the workload.
⬜ C. Deploy the database on Amazon DynamoDB. Use on-demand capacity mode to automatically scale throughput to accommodate the workload.
⬜ D. Deploy the database on Amazon RDS. Use magnetic storage and read replicas to accommodate the workload.

Explanation:
Amazon Aurora Serverless is a cost-effective, on-demand, autoscaling configuration for Amazon Aurora. It automatically adjusts the database’s capacity based on the current demand, which is ideal for workloads with variable and unpredictable usage patterns. Since the application is expected to be read-heavy with occasional writes and steady growth, Aurora Serverless can provide the necessary performance without requiring the management of database instances.
Cost-Optimization: Aurora Serverless only charges for the database capacity you use, making it a more cost-effective solution compared to always running provisioned database instances, especially for workloads with fluctuating demand.
Scalability: It automatically scales database capacity up or down based on actual usage, ensuring that you always have the right amount of resources available.
Performance: Aurora Serverless is built on the same underlying storage as Amazon Aurora, providing high performance and availability.
Why other options are wrong:
Option A (RDS with Provisioned IOPS SSD): While Provisioned IOPS SSD ensures consistent performance, it is generally more expensive and less flexible compared to the autoscaling nature of Aurora Serverless.
Option C (DynamoDB with On-Demand Capacity): DynamoDB is a NoSQL database and may not be the best fit for applications requiring relational database features.
Option D (RDS with Magnetic Storage and Read Replicas): Magnetic storage is outdated and generally slower. While read replicas help with read-heavy workloads, the overall performance might not be optimal, and magnetic storage doesn’t provide the necessary performance.
Source: Using Amazon Aurora Serverless v1
Source: Amazon Aurora Pricing
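
A minimal sketch of standing up an Aurora Serverless (v2) cluster with boto3, assuming the aurora-mysql engine and illustrative capacity limits; identifiers are placeholders.

```python
import boto3

rds = boto3.client("rds")

# The cluster scales between the configured ACU limits based on actual load.
rds.create_db_cluster(
    DBClusterIdentifier="app-config-cluster",        # placeholder
    Engine="aurora-mysql",
    MasterUsername="admin",
    ManageMasterUserPassword=True,                   # store the password in Secrets Manager
    ServerlessV2ScalingConfiguration={"MinCapacity": 0.5, "MaxCapacity": 8},
)

# Serverless v2 instances use the db.serverless instance class.
rds.create_db_instance(
    DBInstanceIdentifier="app-config-instance-1",
    DBClusterIdentifier="app-config-cluster",
    DBInstanceClass="db.serverless",
    Engine="aurora-mysql",
)
```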


An ecommerce company wants a disaster recovery solution for its Amazon RDS DB instances that run Microsoft SQL Server Enterprise Edition. The company’s current recovery point objective (RPO) and recovery time objective (RTO) are 24 hours.

Which solution will meet these requirements MOST cost-effectively?

⬜ A. Create a cross-Region read replica and promote the read replica to the primary instance
⬜ B. Use AWS Database Migration Service (AWS DMS) to create RDS cross-Region replication.
⬜ C. Use cross-Region replication every 24 hours to copy native backups to an Amazon S3 bucket
✅ D. Copy automatic snapshots to another Region every 24 hours.

Explanation:
This solution is the most cost-effective and meets the RPO and RTO requirements of 24 hours.
● Automatic Snapshots: Amazon RDS automatically creates snapshots of your DB instance at regular intervals. By copying these snapshots to another AWS Region every 24 hours, you ensure that you have a backup available in a different geographic location, providing disaster recovery capability.
● RPO and RTO: Since the company’s RPO and RTO are both 24 hours, copying snapshots daily to another Region is sufficient. In the event of a disaster, you can restore the DB instance from the most recent snapshot in the target Region.
Why other options are wrong:
Option A (Cross-Region Read Replica): This could provide a faster recovery time but is more costly due to the ongoing replication and resource usage in another Region.
Option B (DMS Cross-Region Replication): While effective for continuous replication, it introduces complexity and cost that isn’t necessary given the 24-hour RPO/RTO.
Option C (Cross-Region Native Backup Copy): This involves more manual steps and doesn’t offer as straightforward a solution as automated snapshot copying.
Source: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithAutomatedBackups.html
Source: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CopySnapshot.html
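
A sketch of the daily copy, run from (or scheduled in) the DR Region; the snapshot ARN, target name, and KMS key are placeholders.

```python
import boto3

rds_dr = boto3.client("rds", region_name="us-west-2")   # destination (DR) Region

rds_dr.copy_db_snapshot(
    SourceDBSnapshotIdentifier=(
        "arn:aws:rds:us-east-1:111122223333:snapshot:rds:sqlserver-prod-2024-01-01-00-05"
    ),
    TargetDBSnapshotIdentifier="sqlserver-prod-dr-copy-2024-01-01",
    KmsKeyId="arn:aws:kms:us-west-2:111122223333:key/EXAMPLE",  # required for encrypted snapshots
    SourceRegion="us-east-1",
)
```

A daily EventBridge schedule invoking a small Lambda function with this call (or an AWS Backup copy rule) keeps the process hands-off.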


A company is building an application in the AWS Cloud. The application is hosted on Amazon EC2 instances behind an Application Load Balancer (ALB). The company uses Amazon Route 53 for the DNS.

The company needs a managed solution with proactive engagement to detect against DDoS attacks.

Which solution will meet these requirements?

⬜ A. Enable AWS Config. Configure an AWS Config managed rule that detects DDoS attacks.
⬜ B. Enable AWS WAF on the ALB. Create an AWS WAF web ACL with rules to detect and prevent DDoS attacks. Associate the web ACL with the ALB.
⬜ C. Store the ALB access logs in an Amazon S3 bucket. Configure Amazon GuardDuty to detect and take automated preventative actions for DDoS attacks.
✅ D. Subscribe to AWS Shield Advanced. Configure hosted zones in Route 53. Add ALB resources as protected resources.

Explanation:
AWS Shield Advanced is designed to provide enhanced protection against DDoS attacks with proactive engagement and response capabilities, making it the best solution for this scenario.
● AWS Shield Advanced: This service provides advanced protection against DDoS attacks. It includes detailed attack diagnostics, 24/7 access to the AWS DDoS Response Team (DRT), and financial protection against DDoS-related scaling charges. Shield Advanced also integrates with Route 53 and the Application Load Balancer (ALB) to ensure comprehensive protection for your web applications.
● Route 53 and ALB Protection: By adding your Route 53 hosted zones and ALB resources to AWS Shield Advanced, you ensure that these components are covered under the enhanced protection plan. Shield Advanced actively monitors traffic and provides real-time attack mitigation, minimizing the impact of DDoS attacks on your application.
Why other options are wrong:
Option A (AWS Config): AWS Config is a configuration management service and does not provide DDoS protection or detection capabilities.
Option B (AWS WAF): While AWS WAF can help mitigate some types of attacks, it does not provide the comprehensive DDoS protection and proactive engagement offered by Shield Advanced.
Option C (GuardDuty): GuardDuty is a threat detection service that identifies potentially malicious activity within your AWS environment, but it is not specifically designed to provide DDoS protection.
Source: https://docs.aws.amazon.com/waf/latest/developerguide/what-is-aws-waf.html
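
A sketch of the Shield Advanced pieces in boto3, assuming an active Shield Advanced subscription; the resource ARN and contact details are placeholders.

```python
import boto3

shield = boto3.client("shield")

# Register the ALB as a protected resource.
shield.create_protection(
    Name="web-app-alb",
    ResourceArn=(
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
        "loadbalancer/app/web-app/50dc6c495c0c9188"
    ),
)

# Proactive engagement requires emergency contacts that the DDoS Response Team can reach.
shield.associate_proactive_engagement_details(
    EmergencyContactList=[
        {"EmailAddress": "secops@example.com", "PhoneNumber": "+15555550100"}
    ]
)
```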


A company is migrating its on-premises Oracle database to an Amazon RDS for Oracle database. The company needs to retain data for 90 days to meet regulatory requirements. The company must also be able to restore the database to a specific point in time for up to 14 days.

Which solution will meet these requirements with the LEAST operational overhead?

⬜ A. Create Amazon RDS automated backups. Set the retention period to 90 days.
⬜ B. Create an Amazon RDS manual snapshot every day. Delete manual snapshots that are older than 90 days.
⬜ C. Use the Amazon Aurora Clone feature for Oracle to create a point-in-time restore. Delete clones that are older than 90 days
✅ D. Create a backup plan that has a retention period of 90 days by using AWS Backup for Amazon RDS.

Explanation:
AWS Backup is the most appropriate solution for managing backups with minimal operational overhead while meeting the regulatory requirement to retain data for 90 days and enabling point-in-time restore for up to 14 days.
● AWS Backup: AWS Backup provides a centralized backup management solution that supports automated backup scheduling, retention management, and compliance reporting across AWS services, including Amazon RDS. By creating a backup plan, you can define a retention period (in this case, 90 days) and automate the backup process.
● Point-in-Time Restore (PITR): Amazon RDS supports point-in-time restore for up to 35 days with automated backups. By using AWS Backup in conjunction with RDS, you ensure that your backup strategy meets the requirement for restoring data to a specific point in time within the last 14 days.
Why other options are wrong:
Option A (RDS Automated Backups): While RDS automated backups support PITR, they do not directly support retention beyond 35 days without manual intervention.
Option B (Manual Snapshots): Manually creating and managing snapshots is operationally intensive and less automated compared to AWS Backup.
Option C (Aurora Clones): Aurora Clone is a feature specific to Amazon Aurora and is not applicable to Amazon RDS for Oracle.
Source: https://docs.aws.amazon.com/aws-backup/latest/devguide/whatisbackup.html
Source: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithAutomatedBackups.html
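
A sketch of the backup plan and resource assignment in boto3; the vault, schedule, role, and database ARN are placeholders, and point-in-time restore within 14 days continues to come from the RDS automated backups.

```python
import boto3

backup = boto3.client("backup")

plan_id = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "rds-oracle-90-day-retention",
        "Rules": [
            {
                "RuleName": "daily-backups",
                "TargetBackupVaultName": "Default",
                "ScheduleExpression": "cron(0 5 * * ? *)",   # daily at 05:00 UTC
                "Lifecycle": {"DeleteAfterDays": 90},        # regulatory retention
            }
        ],
    }
)["BackupPlanId"]

# Assign the RDS for Oracle instance to the plan.
backup.create_backup_selection(
    BackupPlanId=plan_id,
    BackupSelection={
        "SelectionName": "oracle-db",
        "IamRoleArn": "arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole",
        "Resources": ["arn:aws:rds:us-east-1:111122223333:db:oracle-prod"],
    },
)
```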


A company has an employee web portal. Employees log in to the portal to view payroll details. The company is developing a new system to give employees the ability to upload scanned documents for reimbursement. The company runs a program to extract text-based data from the documents and attach the extracted information to each employee’s reimbursement IDs for processing.

The employee web portal requires 100% uptime. The document extract program runs infrequently throughout the day on an on-demand basis. The company wants to build a scalable and cost-effective new system that will require minimal changes to the existing web portal. The company does not want to make any code changes.

Which solution will meet these requirements with the LEAST implementation effort?

✅ A. Run Amazon EC2 On-Demand Instances in an Auto Scaling group for the web portal. Use an AWS Lambda function to run the document extract program. Invoke the Lambda function when an employee uploads a new reimbursement document.
⬜ B. Run Amazon EC2 Spot Instances in an Auto Scaling group for the web portal. Run the document extract program on EC2 Spot Instances Start document extract program instances when an employee uploads a new reimbursement document.
⬜ C. Purchase a Savings Plan to run the web portal and the document extract program. Run the web portal and the document extract program in an Auto Scaling group.
⬜ D. Create an Amazon S3 bucket to host the web portal. Use Amazon API Gateway and an AWS Lambda function for the existing functionalities. Use the Lambda function to run the document extract program. Invoke the Lambda function when the API that is associated with a new document upload is called.

Explanation:
This solution offers the most scalable and cost-effective approach with minimal changes to the existing web portal and no code modifications.
Amazon EC2 On-Demand Instances in an Auto Scaling Group: Running the web portal on EC2 On-Demand instances ensures 100% uptime and scalability. The Auto Scaling group will maintain the desired number of instances, automatically scaling up or down as needed, ensuring high availability for the employee web portal.
AWS Lambda for Document Extraction: Lambda is a serverless compute service that allows you to run code in response to events without provisioning or managing servers. By using Lambda to run the document extraction program, you can trigger the function whenever an employee uploads a document. This approach is cost-effective since you only pay for the compute time used by the Lambda function.
No Code Changes Required: This solution integrates with the existing infrastructure with minimal implementation effort and does not require any modifications to the web portal’s code.
Why other options are wrong:
Option B (Spot Instances): Spot Instances are not suitable for workloads requiring 100% uptime, as they can be terminated by AWS with short notice.
Option C (Savings Plan): A Savings Plan could reduce costs but does not address the requirement for running the document extraction program efficiently or without code changes.
Option D (S3 with API Gateway and Lambda): This would require significant changes to the existing web portal setup, including moving the portal to S3 and reconfiguring its architecture, which contradicts the requirement of minimal implementation effort and no code changes.
Source: https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html
Source: https://docs.aws.amazon.com/lambda/latest/dg/welcome.html


A company has one million users that use its mobile app. The company must analyze the data usage in near-real time. The company also must encrypt the data in near-real time and must store the data in a centralized location in Apache Parquet format for further processing.

Which solution will meet these requirements with the LEAST operational overhead?

⬜ A. Create an Amazon Kinesis data stream to store the data in Amazon S3. Create an Amazon Kinesis Data Analytics application to analyze the data. Invoke an AWS Lambda function to send the data to the Kinesis Data Analytics application.
⬜ B. Create an Amazon Kinesis data stream to store the data in Amazon S3. Create an Amazon EMR cluster to analyze the data. Invoke an AWS Lambda function to send the data to the EMR cluster.
⬜ C. Create an Amazon Kinesis Data Firehose delivery stream to store the data in Amazon S3. Create an Amazon EMR cluster to analyze the data.
✅ D. Create an Amazon Kinesis Data Firehose delivery stream to store the data in Amazon S3. Create an Amazon Kinesis Data Analytics application to analyze the data.

Explanation:
The solution must deliver near-real-time ingestion, encryption, storage in Parquet format, and centralized storage — with least operational overhead:
● Kinesis Data Firehose automatically ingests, encrypts, buffers, and delivers streaming data to S3 (with built-in support for Parquet format and encryption via KMS).
● Kinesis Data Analytics can analyze streaming data directly in real-time without needing manual cluster management like EMR.
● Both Kinesis services are fully managed (no servers or manual scaling), meeting the least operational overhead requirement perfectly.
Thus, Kinesis Data Firehose + Kinesis Data Analytics is the optimal serverless, efficient solution.
Why other options are wrong:
A: Kinesis Data Stream + manual Lambda invocation + Kinesis Data Analytics = more components to manage (Lambda triggers, stream consumers).
B: Kinesis Data Stream + EMR cluster = EMR adds high operational overhead (you need to manage cluster lifecycle, scaling, costs).
C: Firehose to S3 is good, but EMR again requires cluster management — not as low-overhead as using fully managed Kinesis Data Analytics.
Source: https://docs.aws.amazon.com/firehose/latest/dev/what-is-this-service.html
Source: https://docs.aws.amazon.com/kinesisanalytics/latest/dev/what-is.html


A company has 15 employees. The company stores employee start dates in an Amazon DynamoDB table. The company wants to send an email message to each employee on the day of the employee’s work anniversary.

Which solution will meet these requirements with the MOST operational efficiency?

⬜ A. Create a script that scans the DynamoDB table and uses Amazon Simple Notification Service (Amazon SNS) to send email messages to employees when necessary. Use a cron job to run this script every day on an Amazon EC2 instance.
⬜ B. Create a script that scans the DynamoDB table and uses Amazon Simple Queue Service (Amazon SQS) to send email messages to employees when necessary. Use a cron job to run this script every day on an Amazon EC2 instance.
✅ C. Create an AWS Lambda function that scans the DynamoDB table and uses Amazon Simple Notification Service (Amazon SNS) to send email messages to employees when necessary. Schedule this Lambda function to run every day.
⬜ D. Create an AWS Lambda function that scans the DynamoDB table and uses Amazon Simple Queue Service (Amazon SQS) to send email messages to employees when necessary. Schedule this Lambda function to run every day.

Explanation:
For most operational efficiency, we want a serverless solution with minimal management overhead:
● AWS Lambda allows you to run code without managing servers (no EC2 needed).
● Amazon SNS is a simple, scalable service for sending emails (directly integrated with email endpoints).
● Scheduling Lambda functions (using EventBridge or CloudWatch Events) is a fully managed and reliable way to run daily jobs.
Thus, Lambda + DynamoDB + SNS is the most efficient and cost-effective solution for small-scale tasks like sending anniversary emails daily.
Why other options are wrong:
A: Requires managing an EC2 instance and cron job — adds operational overhead (servers to patch, secure, etc.).
B: Same problem as A — plus SQS is a queue system, not an email service. You would still need another consumer to send emails.
D: Using Lambda is good, but SQS is unnecessary here. SNS is purpose-built for sending notifications/emails directly.
Source: https://docs.aws.amazon.com/lambda/latest/dg/services-cwe-tutorial.html
Source: https://docs.aws.amazon.com/sns/latest/dg/sns-email-notifications.html
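
The Lambda function itself stays small. A sketch of a possible handler, assuming illustrative table and attribute names and a placeholder SNS topic; a daily EventBridge schedule invokes it.

```python
import datetime
import os

import boto3

dynamodb = boto3.resource("dynamodb")
sns = boto3.client("sns")

TABLE_NAME = os.environ.get("EMPLOYEE_TABLE", "Employees")                    # assumed name
TOPIC_ARN = os.environ.get(
    "ANNIVERSARY_TOPIC_ARN",
    "arn:aws:sns:us-east-1:111122223333:work-anniversaries",                  # placeholder
)


def handler(event, context):
    """Invoked once per day by an EventBridge schedule."""
    today = datetime.date.today()
    table = dynamodb.Table(TABLE_NAME)

    # A full scan is fine for a 15-item table.
    for item in table.scan()["Items"]:
        start_date = datetime.date.fromisoformat(item["start_date"])
        if (start_date.month, start_date.day) == (today.month, today.day):
            sns.publish(
                TopicArn=TOPIC_ARN,
                Subject="Happy work anniversary!",
                Message=f"Congratulations {item['name']} on your work anniversary today.",
            )
```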


A company hosts a three-tier web application in the AWS Cloud. A Multi-AZ Amazon RDS for MySQL server forms the database layer. Amazon ElastiCache forms the cache layer. The company wants a caching strategy that adds or updates data in the cache when a customer adds an item to the database. The data in the cache must always match the data in the database.

Which solution will meet these requirements?

⬜ A. Implement the lazy loading caching strategy
✅ B. Implement the write-through caching strategy
⬜ C. Implement the adding TTL caching strategy
⬜ D. Implement the AWS AppConfig caching strategy

Explanation:
The write-through caching strategy ensures that every time new data is written to the database, it is also written to the cache immediately.
● This keeps the cache and the database perfectly in sync, meeting the requirement that the cache must always reflect the database accurately.
● It eliminates the chances of a cache miss or stale data after a database update.
Why other options are wrong:
A. Lazy loading: Updates the cache only on read, not when data is written — causing potential inconsistency.
C. Adding TTL: Only sets expiration times for cache entries, but does not update cache when database changes.
D. AWS AppConfig caching strategy: AWS AppConfig is for application configuration management, not for database caching.
Source: https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/Strategies.html
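
A generic write-through sketch (not tied to the company's schema): the database write and the cache write happen in the same code path. It assumes the pymysql and redis client libraries and placeholder endpoints; the same pattern applies to Memcached.

```python
import json

import pymysql   # MySQL client; connection details are placeholders
import redis     # redis-py client for ElastiCache (Redis)

db = pymysql.connect(
    host="orders-db.example.internal", user="app", password="example", database="shop"
)
cache = redis.Redis(host="orders-cache.example.internal", port=6379)


def add_item(item_id: str, item: dict) -> None:
    """Write-through: every database write immediately updates the cache."""
    # 1. Write to the source of truth first.
    with db.cursor() as cursor:
        cursor.execute(
            "INSERT INTO items (id, payload) VALUES (%s, %s)",
            (item_id, json.dumps(item)),
        )
    db.commit()

    # 2. Update the cache with the same data so reads always match the database.
    cache.set(f"item:{item_id}", json.dumps(item))
```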


A company has two AWS accounts: Production and Development. The company needs to push code changes in the Development account to the Production account. In the alpha phase, only two senior developers on the development team need access to the Production account. In the beta phase, more developers will need access to perform testing.

Which solution will meet these requirements?

⬜ A. Create two policy documents by using the AWS Management Console in each account. Assign the policy to developers who need access.
⬜ B. Create an IAM role in the Development account. Grant the IAM role access to the Production account. Allow developers to assume the role.
✅ C. Create an IAM role in the Production account. Define a trust policy that specifies the Development account. Allow developers to assume the role.
⬜ D. Create an IAM group in the Production account. Add the group as a principal in a trust policy that specifies the Production account. Add developers to the group.

Explanation:
The correct solution is to create an IAM role in the Production account and configure a trust policy that allows users from the Development account to assume the role.
● This is the standard cross-account access pattern in AWS: the resource account (Production) owns the role, and the trusted account (Development) users assume it.
● As team size grows (from two senior developers to more developers later), you just need to manage permissions in the Development account — no change needed in Production.
Why other options are wrong:
A. Create two policy documents: Policy documents alone do not allow cross-account access. You need a trust relationship.
B. Create an IAM role in Development account: The role must exist in the Production account, not Development, because you are accessing Production resources.
D. Create an IAM group in Production: IAM groups cannot be a principal in a trust policy. Trust policies are only for IAM roles.
Source: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html#roles-creatingrole-trust-policy
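
A sketch of the trust relationship and the assume-role call; account IDs, role name, and session name are placeholders.

```python
import json

import boto3

# Trust policy for the role in the Production account: it trusts the Development account.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::222222222222:root"},  # Development account
            "Action": "sts:AssumeRole",
        }
    ],
}

iam = boto3.client("iam")   # run with Production-account credentials
iam.create_role(
    RoleName="DevTeamDeploymentRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# A developer (whose IAM identity in Development is allowed to call sts:AssumeRole
# on this role) then assumes it:
sts = boto3.client("sts")   # run with Development-account credentials
credentials = sts.assume_role(
    RoleArn="arn:aws:iam::111111111111:role/DevTeamDeploymentRole",  # Production account
    RoleSessionName="push-code-change",
)["Credentials"]
```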


A company uses NFS to store large video files in on-premises network attached storage. Each video file ranges in size from 1MB to 500 GB. The total storage is 70 TB and is no longer growing. The company decides to migrate the video files to Amazon S3. The company must migrate the video files as soon as possible while using the least possible network bandwidth.

Which solution will meet these requirements?

⬜ A. Create an S3 bucket. Create an IAM role that has permissions to write to the S3 bucket. Use the AWS CLI to copy all files locally to the S3 bucket.
✅ B. Create an AWS Snowball Edge job. Receive a Snowball Edge device on premises. Use the Snowball Edge client to transfer data to the device. Return the device so that AWS can import the data into Amazon S3.
⬜ C. Deploy an S3 File Gateway on premises. Create a public service endpoint to connect to the S3 File Gateway. Create an S3 bucket. Create a new NFS file share on the S3 File Gateway. Point the new file share to the S3 bucket. Transfer the data from the existing NFS file share to the S3 File Gateway.
⬜ D. Set up an AWS Direct Connect connection between the on-premises network and AWS. Deploy an S3 File Gateway on premises. Create a public virtual interface (VIF) to connect to the S3 File Gateway. Create an S3 bucket. Create a new NFS file share on the S3 File Gateway. Point the new file share to the S3 bucket. Transfer the data from the existing NFS file share to the S3 File Gateway.

Explanation:
The best solution for migrating 70 TB of data quickly with least possible network bandwidth usage is to use AWS Snowball Edge.
● Snowball Edge is a physical device that AWS ships to you.
● You load data locally, and ship the device back to AWS.
● AWS then imports the data directly into S3 from their side.
● This approach avoids overloading the network and speeds up large-scale data transfers.
Why other options are wrong:
A. AWS CLI upload: Copying 70 TB over the network using CLI would be very slow and consume heavy bandwidth.
C. S3 File Gateway: Suitable for ongoing access to S3 as NFS, not ideal for a one-time bulk migration.
D. AWS Direct Connect: Setting up Direct Connect is time-consuming and expensive, unnecessary for a one-time migration.
Source: https://docs.aws.amazon.com/snowball/latest/developer-guide/whatis.html


A company collects data for temperature, humidity, and atmospheric pressure in cities across multiple continents. The average volume of data that the company collects from each site daily is 500 GB. Each site has a high-speed Internet connection.

The company wants to aggregate the data from all these global sites as quickly as possible in a single Amazon S3 bucket. The solution must minimize operational complexity.

Which solution meets these requirements?

✅ A. Turn on S3 Transfer Acceleration on the destination S3 bucket. Use multipart uploads to directly upload site data to the destination S3 bucket.
⬜ B. Upload the data from each site to an S3 bucket in the closest Region. Use S3 Cross-Region Replication to copy objects to the destination S3 bucket. Then remove the data from the origin S3 bucket.
⬜ C. Schedule AWS Snowball Edge Storage Optimized device jobs daily to transfer data from each site to the closest Region. Use S3 Cross-Region Replication to copy objects to the destination S3 bucket.
⬜ D. Upload the data from each site to an Amazon EC2 instance in the closest Region. Store the data in an Amazon Elastic Block Store (Amazon EBS) volume. At regular intervals, take an EBS snapshot and copy it to the Region that contains the destination S3 bucket. Restore the EBS volume in that Region.

Explanation:
The correct and most efficient solution for fast, global data uploads with minimal complexity is:
● A. Enable S3 Transfer Acceleration on the destination bucket.
● S3 Transfer Acceleration uses Amazon CloudFront’s globally distributed edge locations to accelerate uploads from geographically dispersed clients directly to S3, minimizing latency and maximizing speed.
● Using multipart uploads further speeds up large file transfers and improves reliability.
Why other options are wrong:
B. Cross-Region Replication adds extra steps, higher costs, and additional management complexity.
C. Snowball devices are best for offline, bulk transfers, not for daily, fast uploads over high-speed internet.
D. Using EC2 and EBS snapshots is overly complex, costly, and unnecessary for simple file transfers.
Source: https://docs.aws.amazon.com/AmazonS3/latest/userguide/transfer-acceleration.html


A company is deploying an application in three AWS Regions using an Application Load Balancer. Amazon Route 53 will be used to distribute traffic between these Regions.

Which Route 53 configuration should a solutions architect use to provide the MOST high-performing experience?

✅ A. Create an A record with a latency policy.
⬜ B. Create an A record with a geolocation policy.
⬜ C. Create a CNAME record with a failover policy.
⬜ D. Create a CNAME record with a geoproximity policy.

Explanation:
The latency-based routing policy in Route 53 helps direct users to the AWS Region that provides the lowest network latency, thus providing the most high-performing experience.
● It improves end-user experience by routing requests to the Region that responds the fastest.
● An A record with a latency policy ensures that users get routed to the closest, best-performing Region automatically based on real-time measurements.
Why other options are wrong:
B. Geolocation policy: Routes traffic based on the user’s location (e.g., country or continent), not on network latency or performance.
C. Failover policy: Focuses on availability and recovery, not on performance optimization.
D. Geoproximity policy: Routes based on physical distance (with traffic bias), but latency is a better metric for performance compared to pure distance.
Source: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html#routing-policy-latency


A company runs containers in a Kubernetes environment in the company’s local data center. The company wants to use Amazon Elastic Kubernetes Service (Amazon EKS) and other AWS managed services. Data must remain locally in the company’s data center and cannot be stored in any remote site or cloud to maintain compliance.

Which solution will meet these requirements?

⬜ A. Deploy AWS Local Zones in the company’s data center
⬜ B. Use an AWS Snowmobile in the company’s data center
✅ C. Install an AWS Outposts rack in the company’s data center
⬜ D. Install an AWS Snowball Edge Storage Optimized node in the data center

Explanation:
AWS Outposts is the correct solution — it brings AWS services, APIs, and managed infrastructure to the company’s on-premises data center, while keeping the data local to meet compliance requirements.
● Outposts supports Amazon EKS, Amazon RDS, S3 APIs, and many AWS managed services locally.
● It is specifically designed for use cases where data must remain on-premises while still leveraging AWS-native capabilities.
Why other options are wrong:
A. Local Zones: Local Zones are owned and operated by AWS in nearby AWS-managed sites, not inside your own data center.
B. Snowmobile: AWS Snowmobile is for bulk data transfer (exabytes) to AWS — not for running services on-premises.
D. Snowball Edge: A Snowball Edge node is designed for edge computing and limited storage/compute tasks, but not suitable for fully running managed services like EKS.
Source: https://docs.aws.amazon.com/outposts/latest/userguide/what-is-outposts.html


A company’s near-real-time streaming application is running on AWS. As the data is ingested, a job runs on the data and takes 30 minutes to complete. The workload frequently experiences high latency due to large amounts of incoming data. A solutions architect needs to design a scalable and serverless solution to enhance performance.

Which combination of steps should the solutions architect take? (Select TWO.)

✅ A. Use Amazon Kinesis Data Firehose to ingest the data.
⬜ B. Use AWS Lambda with AWS Step Functions to process the data.
⬜ C. Use AWS Database Migration Service (AWS DMS) to ingest the data.
⬜ D. Use Amazon EC2 instances in an Auto Scaling group to process the data.
✅ E. Use AWS Fargate with Amazon Elastic Container Service (Amazon ECS) to process the data.

Explanation:
A. Kinesis Data Firehose is a fully managed, serverless service that ingests streaming data and can automatically deliver it to destinations like S3, Redshift, or Elasticsearch, handling large volumes with minimal setup.
E. AWS Fargate with Amazon ECS allows running containerized jobs serverlessly without managing any EC2 instances.
Since the job takes 30 minutes, AWS Lambda is not a good fit (Lambda has a 15-minute timeout), but Fargate can easily handle longer-running tasks.
Thus, combining Kinesis Data Firehose for ingestion and Fargate/ECS for processing offers a scalable, serverless architecture.
Why other options are wrong:
B. Lambda with Step Functions: Lambda cannot handle 30-minute jobs due to its 15-minute max timeout.
C. AWS DMS: DMS is for database migration, not for general-purpose streaming data ingestion.
D. EC2 Auto Scaling group: EC2 instances are not serverless — you have to manage and scale instances manually.
Source: https://docs.aws.amazon.com/firehose/latest/dev/what-is-this-service.html
Source: https://docs.aws.amazon.com/AmazonECS/latest/userguide/what-is-fargate.html


A company uses Amazon EC2 instances and Amazon Elastic Block Store (Amazon EBS) to run its self-managed database. The company has 350 TB of data spread across all EBS volumes. The company takes daily EBS snapshots and keeps the snapshots for 1 month. The daily change rate is 5% of the EBS volumes.

Because of new regulations, the company needs to keep the monthly snapshots for 7 years. The company needs to change its backup strategy to comply with the new regulations and to ensure that data is available with minimal administrative effort.

Which solution will meet these requirements MOST cost-effectively?

⬜ A. Keep the daily snapshot in the EBS snapshot standard tier for 1 month. Copy the monthly snapshot to Amazon S3 Glacier Deep Archive with a 7-year retention period.
✅ B. Continue with the current EBS snapshot policy. Add a new policy to move the monthly snapshot to Amazon EBS Snapshots Archive with a 7-year retention period.
⬜ C. Keep the daily snapshot in the EBS snapshot standard tier for 1 month. Keep the monthly snapshot in the standard tier for 7 years. Use incremental snapshots.
⬜ D. Keep the daily snapshot in the EBS snapshot standard tier. Use EBS direct APIs to take snapshots of all the EBS volumes every month. Store the snapshots in an Amazon S3 bucket in the Infrequent Access tier for 7 years.

Explanation:
The most cost-effective solution is to use Amazon EBS Snapshots Archive, which provides a lower-cost storage tier designed for long-term retention of snapshots.
● You can keep daily snapshots in the standard tier for 1 month for fast restores.
● Then archive monthly snapshots for 7 years at much lower cost without needing manual movement or conversions.
● AWS natively supports snapshot lifecycle policies to automatically manage transitions between standard and archive tiers, ensuring minimal administrative effort.
Why other options are wrong:
A. Copying to Glacier Deep Archive: EBS snapshots are not directly copied to Glacier — they stay within EBS snapshot management. Manual copy to S3/Glacier increases operational overhead.
C. Keeping snapshots in the standard tier for 7 years: Much higher cost compared to moving to Archive tier.
D. Using direct APIs and S3 Infrequent Access: EBS snapshots cannot be directly stored in S3 without custom export/import logic, adding unnecessary complexity.
Source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-archive.html
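
Archiving an individual snapshot is a single API call; Amazon Data Lifecycle Manager can apply the same transition automatically as part of a policy. The snapshot ID below is a placeholder.

```python
import boto3

ec2 = boto3.client("ec2")

# Move a monthly snapshot to the lower-cost archive tier; it stays there until
# it is restored or deleted under the 7-year retention policy.
ec2.modify_snapshot_tier(
    SnapshotId="snap-0123456789abcdef0",   # placeholder
    StorageTier="archive",
)
```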


A company wants to use NAT gateways in its AWS environment. The company’s Amazon EC2 instances in private subnets must be able to connect to the public internet through the NAT gateways.

Which solution will meet these requirements?

⬜ A. Create public NAT gateways in the same private subnets as the EC2 instances
⬜ B. Create private NAT gateways in the same private subnets as the EC2 instances
✅ C. Create public NAT gateways in public subnets in the same VPCs as the EC2 instances
⬜ D. Create private NAT gateways in public subnets in the same VPCs as the EC2 instances

Explanation:
To allow EC2 instances in private subnets to access the public internet:
● You must deploy public NAT gateways in a public subnet of the same VPC.
● A public subnet means the subnet has a route to an internet gateway.
● The private subnet EC2 instances route their internet-bound traffic to the NAT gateway, which forwards it to the internet gateway.
This setup ensures private instances are not directly exposed to the internet while still allowing them to initiate outbound connections.
Why other options are wrong:
A. Public NAT in private subnet: You cannot create a public NAT gateway in a private subnet; it needs a public IP and internet route.
B. Private NAT in private subnet: Private NAT is used for VPC-to-VPC communication, not internet access.
D. Private NAT in public subnet: Private NAT gateways are for private communications only, not for internet-bound traffic.
Source: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html
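
The wiring looks like the boto3 sketch below; the subnet and route table IDs are placeholders. The NAT gateway lives in the public subnet, and the private subnets' route table points 0.0.0.0/0 at it.

```python
import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP and create the NAT gateway in a public subnet.
eip = ec2.allocate_address(Domain="vpc")
nat_gw_id = ec2.create_nat_gateway(
    SubnetId="subnet-0publicEXAMPLE",        # public subnet (routes to an internet gateway)
    AllocationId=eip["AllocationId"],
    ConnectivityType="public",
)["NatGateway"]["NatGatewayId"]

ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_gw_id])

# Route the private subnets' internet-bound traffic through the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-0privateEXAMPLE",      # route table associated with the private subnets
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_gw_id,
)
```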


A social media company has workloads that collect and process data. The workloads store the data in on-premises NFS storage. The data store cannot scale fast enough to meet the company’s expanding business needs. The company wants to migrate the current data store to AWS.

Which solution will meet these requirements MOST cost-effectively?

⬜ A. Set up an AWS Storage Gateway Volume Gateway. Use an Amazon S3 Lifecycle policy to transition the data to the appropriate storage class.
✅ B. Set up an AWS Storage Gateway Amazon S3 File Gateway. Use an Amazon S3 Lifecycle policy to transition the data to the appropriate storage class.
⬜ C. Use the Amazon Elastic File System (Amazon EFS) Standard-Infrequent Access (Standard-IA) storage class. Activate the infrequent access lifecycle policy.
⬜ D. Use the Amazon Elastic File System (Amazon EFS) One Zone-Infrequent Access (One Zone-IA) storage class. Activate the infrequent access lifecycle policy.

Explanation:
The most cost-effective solution to migrate NFS-based on-premises storage to AWS is to use AWS Storage Gateway - S3 File Gateway:
● It allows your applications to access Amazon S3 objects as NFS file shares, appearing like a local file system.
● You can apply S3 Lifecycle policies to automatically move older data to lower-cost storage classes like S3 Standard-IA, Glacier Instant Retrieval, or Deep Archive.
● This solution provides scalability, durability, and low cost, while preserving NFS compatibility.
Why other options are wrong:
A. Volume Gateway: Volume Gateway presents block storage (iSCSI) — not NFS file system — so it does not match NFS needs.
C. EFS Standard-IA: EFS is great but much more expensive compared to S3-based storage for massive, less frequently accessed data.
D. EFS One Zone-IA: Slightly cheaper than Standard-IA but still more costly than an S3-backed Storage Gateway for large, cold datasets.
Source: https://docs.aws.amazon.com/filegateway/latest/files3/WhatIsFileGateway.html


A company runs a container application on a Kubernetes cluster in the company’s data center. The application uses Advanced Message Queuing Protocol (AMQP) to communicate with a message queue. The data center cannot scale fast enough to meet the company’s expanding business needs. The company wants to migrate the workloads to AWS.

Which solution will meet these requirements with the LEAST operational overhead?

⬜ A. Migrate the container application to Amazon Elastic Container Service (Amazon ECS). Use Amazon Simple Queue Service (Amazon SQS) to retrieve the messages.
✅ B. Migrate the container application to Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon MQ to retrieve the messages.
⬜ C. Use highly available Amazon EC2 instances to run the application. Use Amazon MQ to retrieve the messages.
⬜ D. Use AWS Lambda functions to run the application. Use Amazon Simple Queue Service (Amazon SQS) to retrieve the messages.

Explanation:
● The application is already containerized on Kubernetes and uses AMQP, a protocol not natively supported by Amazon SQS.
● Amazon MQ fully supports AMQP and provides a managed message broker service (e.g., ActiveMQ, RabbitMQ) with minimal operational overhead.
● Amazon EKS allows for a seamless migration from on-prem Kubernetes to AWS-managed Kubernetes with scaling, patching, and upgrades managed by AWS.
Thus, migrating to Amazon EKS + Amazon MQ is the solution that matches the existing architecture and minimizes operational burden.
Why other options are wrong:
A. ECS + SQS: SQS does not support AMQP protocol.
C. EC2 instances: Running Kubernetes on EC2 manually would increase operational overhead (managing nodes, patching, scaling manually).
D. Lambda + SQS: Lambda functions are not suitable for long-running containerized applications, and again SQS does not support AMQP.
Source: https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/what-is-amazon-mq.html


A company website hosted on Amazon EC2 instances processes classified data stored in the application. The application writes data to Amazon Elastic Block Store (Amazon EBS) volumes. The company needs to ensure that all data that is written to the EBS volumes is encrypted at rest.

Which solution will meet this requirement?

⬜ A. Create an IAM role that specifies EBS encryption. Attach the role to the EC2 instances.
✅ B. Create the EBS volumes as encrypted volumes. Attach the EBS volumes to the EC2 instances.
⬜ C. Create an EC2 instance tag that has a key of Encrypt and a value of True. Tag all instances that require encryption at the EBS level.
⬜ D. Create an AWS Key Management Service (AWS KMS) key policy that enforces EBS encryption in the account. Ensure that the key policy is active.

Explanation:
● When you create EBS volumes, you can specify encryption at creation time.
● AWS handles encryption at rest transparently using AWS-managed or customer-managed KMS keys.
● Encrypted volumes ensure that all data written to disk is encrypted automatically without needing manual intervention later.
Why other options are wrong:
A. IAM role for encryption: IAM roles control permissions, but they do not encrypt EBS volumes automatically.
C. Instance tags: Tags are metadata for resources; they have no effect on encryption settings.
D. KMS key policy: A key policy controls access to encryption keys, but by itself does not enforce EBS volume encryption.
Source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html
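
As a concrete illustration of option B, the boto3 sketch below creates an encrypted volume and attaches it to an instance. The Availability Zone, size, key alias, and instance ID are placeholders; if no KmsKeyId is supplied, AWS uses the AWS managed aws/ebs key.

import boto3

ec2 = boto3.client('ec2')

# Create the volume as encrypted from the start (placeholder AZ, size, and key alias)
volume = ec2.create_volume(
    AvailabilityZone='us-east-1a',
    Size=100,
    VolumeType='gp3',
    Encrypted=True,
    KmsKeyId='alias/ebs-app-key',  # omit to use the AWS managed aws/ebs key
)

# Attach the encrypted volume to the application instance (placeholder instance ID)
ec2.attach_volume(
    VolumeId=volume['VolumeId'],
    InstanceId='i-0123456789abcdef0',
    Device='/dev/xvdf',
)

Accounts can also call ec2.enable_ebs_encryption_by_default() so that every new volume in the Region is encrypted without per-volume settings.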


A company wants to analyze its AWS CloudTrail logs to identify all Access Denied and Unauthorized errors.

Which solution will meet this requirement with the LEAST amount of effort?

⬜ A. Use AWS Glue and write custom scripts to query CloudTrail logs for the errors.
⬜ B. Use AWS Batch and write custom scripts to query CloudTrail logs for the errors.
✅ C. Search CloudTrail logs with Amazon Athena queries to identify the errors.
⬜ D. Search CloudTrail logs with Amazon QuickSight. Create a dashboard to identify the errors.

Explanation:
The easiest and most efficient way to analyze CloudTrail logs for Access Denied and Unauthorized errors is to use Amazon Athena.
● Athena allows you to run SQL queries directly against CloudTrail logs stored in S3 without setting up infrastructure or writing custom scripts.
● This provides fast, serverless, ad-hoc querying at a low cost with minimal effort compared to building custom ETL pipelines or dashboards.
Why other options are wrong:
A. AWS Glue requires building ETL jobs and custom scripts, which adds complexity.
B. AWS Batch also needs custom scripting and compute management, making it higher effort.
D. Amazon QuickSight is good for visualization, not for quick and easy log searching; it requires Athena or another data source under the hood.
Source: https://docs.aws.amazon.com/athena/latest/ug/cloudtrail-logs.html
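
A rough sketch of the Athena approach, assuming a CloudTrail table named cloudtrail_logs has already been created in the default database as described in the linked documentation; the results bucket and the specific error codes are placeholders for illustration.

import boto3

athena = boto3.client('athena')

# Find Access Denied and Unauthorized errors recorded by CloudTrail
query = """
SELECT eventtime, eventsource, eventname, errorcode, errormessage
FROM cloudtrail_logs
WHERE errorcode IN ('AccessDenied', 'Client.UnauthorizedOperation')
ORDER BY eventtime DESC
LIMIT 100
"""

athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={'Database': 'default'},
    ResultConfiguration={'OutputLocation': 's3://athena-query-results-bucket/'},
)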


A company is building a new web-based customer relationship management application. The application will use several Amazon EC2 instances that are backed by Amazon Elastic Block Store (Amazon EBS) volumes behind an Application Load Balancer (ALB). The application will also use an Amazon Aurora database. All data for the application must be encrypted at rest and in transit.

Which solution will meet these requirements?

⬜ A. Use AWS Key Management Service (AWS KMS) certificates on the ALB to encrypt data in transit. Use AWS Certificate Manager (ACM) to encrypt the EBS volumes and Aurora database storage at rest.
⬜ B. Use the AWS root account to log in to the AWS Management Console. Upload the company’s encryption certificates. While in the root account, select the option to turn on encryption for all data at rest and in transit for the account.
✅ C. Use AWS Key Management Service (AWS KMS) to encrypt the EBS volumes and Aurora database storage at rest. Attach an AWS Certificate Manager (ACM) certificate to the ALB to encrypt data in transit.
⬜ D. Use BitLocker to encrypt all data at rest. Import the company’s TLS certificate keys to AWS Key Management Service (AWS KMS). Attach the KMS keys to the ALB to encrypt data in transit.

Explanation:
The correct way to meet encryption requirements is:
● Use AWS KMS to encrypt EBS volumes and Aurora storage at rest.
● Use AWS Certificate Manager (ACM) to issue SSL/TLS certificates and attach them to the ALB to encrypt data in transit.
● This is a standard, AWS-native, secure, and low-effort approach that fully complies with encryption best practices.
Why other options are wrong:
A. KMS is not used to issue certificates for ALB; ACM manages certificates for in-transit encryption.
B. The AWS root account should not be used for day-to-day operations, and there is no single toggle for account-wide encryption.
D. BitLocker is a Microsoft on-premises solution, not applicable to AWS services like EBS or Aurora.
Source: https://docs.aws.amazon.com/kms/latest/developerguide/services-ebs.html
Source: https://docs.aws.amazon.com/acm/latest/userguide/acm-overview.html


A company uses AWS Organizations to run workloads within multiple AWS accounts. A tagging policy adds department tags to AWS resources when the company creates tags.

An accounting team needs to determine spending on Amazon EC2 consumption. The accounting team must determine which departments are responsible for the costs regardless of AWS account. The accounting team has access to AWS Cost Explorer for all AWS accounts within the organization and needs to access all reports from Cost Explorer.

Which solution meets these requirements in the MOST operationally efficient way?

✅ A. From the Organizations management account billing console, activate a user-defined cost allocation tag named department. Create one cost report in Cost Explorer grouping by tag name, and filter by EC2.
⬜ B. From the Organizations management account billing console, activate an AWS-defined cost allocation tag named department. Create one cost report in Cost Explorer grouping by tag name, and filter by EC2.
⬜ C. From the Organizations member account billing console, activate a user-defined cost allocation tag named department. Create one cost report in Cost Explorer grouping by the tag name, and filter by EC2.
⬜ D. From the Organizations member account billing console, activate an AWS-defined cost allocation tag named department. Create one cost report in Cost Explorer grouping by tag name and filter by EC2.

Explanation:
The correct and most efficient solution is:
● In the management account’s billing console, you must activate the user-defined cost allocation tag (department).
● After activation, you can group by the department tag and filter by Amazon EC2 usage inside Cost Explorer.
● Only the management account has the ability to activate tags for consolidated billing across all accounts.
Why other options are wrong:
B. AWS-defined tags are automatically created by AWS for resources (e.g., createdBy), not custom department tags created by users.
C. Member accounts cannot activate tags for consolidated billing across all accounts; activation must happen from the management account.
D. Same problem as C, plus it wrongly assumes department is an AWS-defined tag.
Source: https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html
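
Once the department tag is activated in the management account, the same report can be pulled with the Cost Explorer API. The sketch below is only illustrative; the date range is a placeholder, and the printed output is a minimal example.

import boto3

ce = boto3.client('ce')  # Cost Explorer API, called from the management account

response = ce.get_cost_and_usage(
    TimePeriod={'Start': '2024-01-01', 'End': '2024-02-01'},
    Granularity='MONTHLY',
    Metrics=['UnblendedCost'],
    # Group costs by the activated user-defined cost allocation tag
    GroupBy=[{'Type': 'TAG', 'Key': 'department'}],
    # Restrict the report to EC2 compute usage
    Filter={'Dimensions': {'Key': 'SERVICE',
                           'Values': ['Amazon Elastic Compute Cloud - Compute']}},
)

for group in response['ResultsByTime'][0]['Groups']:
    print(group['Keys'], group['Metrics']['UnblendedCost']['Amount'])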


A company wants to run its payment application on AWS. The application receives payment notifications from mobile devices. Payment notifications require a basic validation before they are sent for further processing.

The backend processing application is long running and requires compute and memory to be adjusted. The company does not want to manage the infrastructure.

Which solution will meet these requirements with the LEAST operational overhead?

⬜ A. Create an Amazon Simple Queue Service (Amazon SQS) queue. Integrate the queue with an Amazon EventBridge rule to receive payment notifications from mobile devices. Configure the rule to validate payment notifications and send the notifications to the backend application. Deploy the backend application on Amazon Elastic Kubernetes Service (Amazon EKS) Anywhere. Create a standalone cluster.
⬜ B. Create an Amazon API Gateway API. Integrate the API with an AWS Step Functions state machine to receive payment notifications from mobile devices. Invoke the state machine to validate payment notifications and send the notifications to the backend application. Deploy the backend application on Amazon Elastic Kubernetes Service (Amazon EKS). Configure an EKS cluster with self-managed nodes.
⬜ C. Create an Amazon Simple Queue Service (Amazon SQS) queue. Integrate the queue with an Amazon EventBridge rule to receive payment notifications from mobile devices. Configure the rule to validate payment notifications and send the notifications to the backend application. Deploy the backend application on Amazon EC2 Spot Instances. Configure a Spot Fleet with a default allocation strategy.
✅ D. Create an Amazon API Gateway API. Integrate the API with AWS Lambda to receive payment notifications from mobile devices. Invoke a Lambda function to validate payment notifications and send the notifications to the backend application. Deploy the backend application on Amazon Elastic Container Service (Amazon ECS). Configure Amazon ECS with an AWS Fargate launch type.

Explanation:
The solution with the least operational overhead is:
● Use API Gateway to receive mobile notifications and Lambda to do lightweight validation (serverless, fully managed, minimal operations).
● Deploy the backend application using Amazon ECS with AWS Fargate, where AWS handles the underlying server provisioning, scaling, and maintenance automatically.
● This setup ensures that both validation and long-running backend processing require minimal infrastructure management while supporting auto-scaling compute and memory adjustments.
Why other options are wrong:
A. EKS Anywhere requires managing on-premises or own cluster infrastructure, increasing operational overhead.
B. EKS with self-managed nodes still requires managing, patching, and scaling the EC2 worker nodes, which is not the least operational overhead.
C. EC2 Spot Instances introduce potential interruptions and infrastructure management, not ideal for critical payment processing.
Source: https://docs.aws.amazon.com/ecs/latest/developerguide/what-is-fargate.html
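
A minimal sketch of the validation Lambda function in option D. The required fields and the hand-off mechanism (an SQS queue in front of the Fargate service) are assumptions for illustration, since the question only says the notification is validated and then passed to the backend.

import json
import boto3

sqs = boto3.client('sqs')
QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/payment-notifications'  # placeholder

def lambda_handler(event, context):
    # API Gateway (proxy integration) delivers the request body as a JSON string
    notification = json.loads(event.get('body') or '{}')

    # Basic validation before the long-running backend on ECS Fargate takes over
    required_fields = ('payment_id', 'amount', 'currency')
    if not all(field in notification for field in required_fields):
        return {'statusCode': 400, 'body': json.dumps({'error': 'invalid notification'})}

    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(notification))
    return {'statusCode': 202, 'body': json.dumps({'status': 'accepted'})}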


A company stores multiple Amazon Machine Images (AMIs) in an AWS account to launch its Amazon EC2 instances. The AMIs contain critical data and configurations that are necessary for the company’s operations. The company wants to implement a solution that will recover accidentally deleted AMIs quickly and efficiently.

Which solution will meet these requirements with the LEAST operational overhead?

⬜ A. Create Amazon Elastic Block Store (Amazon EBS) snapshots of the AMIs. Store the snapshots in a separate AWS account.
⬜ B. Copy all AMIs to another AWS account periodically.
✅ C. Create a retention rule in Recycle Bin.
⬜ D. Upload the AMIs to an Amazon S3 bucket that has Cross-Region Replication.

Explanation:
The Recycle Bin for EBS-backed AMIs allows you to set retention rules so that if an AMI is accidentally deleted, it is retained for a specified period and can be recovered easily.
● This solution requires very little operational overhead because AWS automatically retains and manages the deleted AMIs according to the retention policy.
● No need for manual copying, snapshot management, or external backup scripts.
Why other options are wrong:
A. EBS snapshots store the volume data but do not store the full AMI metadata (like launch permissions and block device mappings).
B. Copying AMIs periodically to another account adds manual effort and scripting to manage versions.
D. Uploading AMIs to S3 is not a supported process for AMI management — AMIs are not simple files and cannot be directly stored in S3 buckets.
Source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recycle-bin.html
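
A retention rule for EBS-backed AMIs is a single API call. In this sketch the 14-day retention period and the description are arbitrary examples.

import boto3

rbin = boto3.client('rbin')  # Recycle Bin API

rbin.create_rule(
    Description='Retain accidentally deleted AMIs for 14 days',
    ResourceType='EC2_IMAGE',  # applies to Amazon EBS-backed AMIs
    RetentionPeriod={
        'RetentionPeriodValue': 14,
        'RetentionPeriodUnit': 'DAYS',
    },
)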


A company’s developers want a secure way to gain SSH access on the company’s Amazon EC2 instances that run the latest version of Amazon Linux. The developers work remotely and in the corporate office.

The company wants to use AWS services as a part of the solution. The EC2 instances are hosted in a VPC private subnet and access the internet through a NAT gateway that is deployed in a public subnet.

What should a solutions architect do to meet these requirements MOST cost-effectively?

⬜ A. Create a bastion host in the same subnet as the EC2 instances. Grant the ec2:CreateVpnConnection IAM permission to the developers. Install EC2 Instance Connect so that the developers can connect to the EC2 instances.
⬜ B. Create an AWS Site-to-Site VPN connection between the corporate network and the VPC. Instruct the developers to use the Site-to-Site VPN connection to access the EC2 instances when the developers are on the corporate network. Instruct the developers to set up another VPN connection for access when they work remotely.
⬜ C. Create a bastion host in the public subnet of the VPC. Configure the security groups and SSH keys of the bastion host to only allow connections and SSH authentication from the developers’ corporate and remote networks. Instruct the developers to connect through the bastion host by using SSH to reach the EC2 instances.
✅ D. Attach the AmazonSSMManagedInstanceCore IAM policy to an IAM role that is associated with the EC2 instances. Instruct the developers to use AWS Systems Manager Session Manager to access the EC2 instances.

Explanation:
The most cost-effective, secure, and low-operational overhead solution is to use AWS Systems Manager Session Manager:
● It allows developers to access EC2 instances without needing SSH keys, bastion hosts, or open inbound ports.
● It uses IAM policies and secure tunneling through the Systems Manager service, even for instances in private subnets.
● This solution eliminates the need to maintain bastion hosts, VPNs, or manage SSH key distributions.
Why other options are wrong:
A. A bastion host adds infrastructure to manage and pay for, and EC2 Instance Connect is designed for instances with public IP addresses.
B. Site-to-Site VPN is complex and expensive, especially requiring two different VPN solutions for remote and corporate users.
C. Bastion hosts require ongoing patching, hardening, and expose additional risk with open inbound ports.
Source: https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html
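
On the AWS side, the main setup is the IAM permission on the instance role, sketched below with a placeholder role name. Developers then start sessions from the console or with the AWS CLI command aws ssm start-session --target <instance-id>, with their own access controlled by IAM.

import boto3

iam = boto3.client('iam')

# Role already attached to the EC2 instances through an instance profile (placeholder name)
iam.attach_role_policy(
    RoleName='app-ec2-role',
    PolicyArn='arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore',
)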


A solutions architect wants to use the following JSON text as an identity-based policy to grant specific permissions:

{
  "Statement": [
    {
      "Action": [
        "ssm:ListDocuments",
        "ssm:GetDocument"
      ],
      "Effect": "Allow",
      "Resource": "*",
      "Sid": ""
    }
  ],
  "Version": "2012-10-17"
}

Which IAM principals can the solutions architect attach this policy to? (Select TWO.)

✅ A. Role
✅ B. Group
⬜ C. Organization
⬜ D. Amazon Elastic Container Service (Amazon ECS) resource
⬜ E. Amazon EC2 resource

Explanation:
This JSON represents an identity-based policy, which can be attached to IAM principals such as:
● IAM Roles (A) — to grant permissions to assume and perform actions.
● IAM Groups (B) — to group users and assign permissions at the group level.
Identity-based policies cannot be directly attached to:
● AWS Organizations entities (C),
● ECS resources like tasks/services (D),
● EC2 instances (E).
For ECS and EC2 workloads, permissions are granted through IAM roles (task roles and instance profiles), not by attaching identity policies to the resources themselves.
Why other options are wrong:
C. Organization: You attach service control policies (SCPs) at the organization or organizational unit (OU) level, not identity policies.
D. Amazon ECS resource: ECS tasks or services assume IAM roles but do not have identity policies directly attached.
E. Amazon EC2 resource: EC2 instances assume IAM roles (instance profiles) rather than being assigned identity policies directly.
Source: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_identity-vs-resource.html
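
To make the identity-based distinction concrete, the sketch below creates a customer managed policy from the JSON above and attaches it to a role and a group; the policy, role, and group names are placeholders.

import json
import boto3

iam = boto3.client('iam')

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "", "Effect": "Allow",
         "Action": ["ssm:ListDocuments", "ssm:GetDocument"],
         "Resource": "*"}
    ],
}

policy = iam.create_policy(
    PolicyName='ssm-read-documents',
    PolicyDocument=json.dumps(policy_document),
)
policy_arn = policy['Policy']['Arn']

# Identity-based policies attach to IAM principals: roles, groups, and users
iam.attach_role_policy(RoleName='deployment-role', PolicyArn=policy_arn)
iam.attach_group_policy(GroupName='operations-group', PolicyArn=policy_arn)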


A company is preparing to store confidential data in Amazon S3. For compliance reasons, the data must be encrypted at rest. Encryption key usage must be logged for auditing purposes. Keys must be rotated every year.

Which solution meets these requirements and is the MOST operationally efficient?

⬜ A. Server-side encryption with customer-provided keys (SSE-C)
⬜ B. Server-side encryption with Amazon S3 managed keys (SSE-S3)
⬜ C. Server-side encryption with AWS KMS (SSE-KMS) customer master keys (CMKs) with manual rotation
✅ D. Server-side encryption with AWS KMS (SSE-KMS) customer master keys (CMKs) with automatic rotation

Explanation:
The most operationally efficient solution is to use SSE-KMS with automatic key rotation:
● AWS KMS automatically logs key usage through AWS CloudTrail, meeting auditing requirements.
● Automatic key rotation happens yearly without needing manual intervention, ensuring compliance with rotation policies.
● Encryption at rest is fully managed, with minimal operational overhead.
Why other options are wrong:
A. SSE-C requires clients to manage encryption keys themselves for each request — high operational overhead.
B. SSE-S3 encrypts data but does not log individual key usage for auditing.
C. Manual rotation of KMS keys introduces unnecessary operational burden compared to automated rotation.
Source: https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingKMSEncryption.html
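
A sketch of the two settings this answer relies on: yearly automatic rotation on the KMS key and SSE-KMS as the bucket default. The key ID and bucket name are placeholders.

import boto3

kms = boto3.client('kms')
s3 = boto3.client('s3')

KEY_ID = '1234abcd-12ab-34cd-56ef-1234567890ab'  # placeholder customer managed key

# Rotate the key material automatically every year
kms.enable_key_rotation(KeyId=KEY_ID)

# Encrypt every new object in the bucket with that key by default
s3.put_bucket_encryption(
    Bucket='confidential-data-bucket',
    ServerSideEncryptionConfiguration={
        'Rules': [{
            'ApplyServerSideEncryptionByDefault': {
                'SSEAlgorithm': 'aws:kms',
                'KMSMasterKeyID': KEY_ID,
            },
            'BucketKeyEnabled': True,  # reduces the number of KMS requests
        }]
    },
)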


A company wants to use an AWS CloudFormation stack for its application in a test environment. The company stores the CloudFormation template in an Amazon S3 bucket that blocks public access. The company wants to grant CloudFormation access to the template in the S3 bucket based on specific user requests to create the test environment. The solution must follow security best practices.

Which solution will meet these requirements?

⬜ A. Create a gateway VPC endpoint for Amazon S3. Configure the CloudFormation stack to use the S3 object URL.
⬜ B. Create an Amazon API Gateway REST API that has the S3 bucket as the target. Configure the CloudFormation stack to use the API Gateway URL.
✅ C. Create a presigned URL for the template object. Configure the CloudFormation stack to use the presigned URL.
⬜ D. Allow public access to the template object in the S3 bucket. Block the public access after the test environment is created.

Explanation:
The most secure and best-practice approach is to create a presigned URL for the S3 object.
● A presigned URL grants temporary, limited access to the private S3 object without exposing it publicly.
● CloudFormation can use this presigned URL to access the template securely during stack creation, satisfying both access control and security best practices.
Why other options are wrong:
A. A gateway VPC endpoint provides private network access to S3 from within a VPC; it does not grant CloudFormation access to the private template in response to a specific user request.
B. Creating an API Gateway to proxy S3 access is unnecessarily complex and not cost-effective for a simple use case.
D. Allowing public access, even temporarily, violates security best practices.
Source: https://docs.aws.amazon.com/AmazonS3/latest/userguide/ShareObjectPreSignedURL.html
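
A sketch of the presigned-URL flow; the bucket, key, and stack name are placeholders, and the 15-minute expiry is an arbitrary choice.

import boto3

s3 = boto3.client('s3')
cloudformation = boto3.client('cloudformation')

# Generate a short-lived URL for the private template object
template_url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'cfn-templates-bucket', 'Key': 'test-env/template.yaml'},
    ExpiresIn=900,
)

# CloudFormation fetches the template through the presigned URL
cloudformation.create_stack(
    StackName='test-environment',
    TemplateURL=template_url,
)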


A company runs a website that stores images of historical events. Website users need the ability to search and view images based on the year that the event in the image occurred. On average, users request each image only once or twice a year. The company wants a highly available solution to store and deliver the images to users.

Which solution will meet these requirements MOST cost-effectively?

⬜ A. Store images in Amazon Elastic Block Store (Amazon EBS). Use a web server that runs on Amazon EC2.
⬜ B. Store images in Amazon Elastic File System (Amazon EFS). Use a web server that runs on Amazon EC2.
⬜ C. Store images in Amazon S3 Standard. Use S3 Standard to directly deliver images by using a static website.
✅ D. Store images in Amazon S3 Standard-Infrequent Access (S3 Standard-IA). Use S3 Standard-IA to directly deliver images by using a static website.

Explanation:
The most cost-effective solution for infrequently accessed images is Amazon S3 Standard-IA.
● S3 Standard-IA offers lower storage costs compared to S3 Standard, with slightly higher retrieval costs — ideal for objects accessed only a few times per year.
● Hosting the images using S3 static website hosting provides high availability without the need for managing servers.
Why other options are wrong:
A. EBS volumes are attached to EC2 instances and are not highly available across multiple Availability Zones without complex setups.
B. EFS is designed for frequent access, and is more expensive for infrequent access scenarios.
C. S3 Standard is highly available but more expensive than S3 Standard-IA when access frequency is low.
Source: https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html#sc-diff-ia


A company is planning to use an Amazon DynamoDB table for data storage. The company is concerned about cost optimization. The table will not be used on most mornings. In the evenings, the read and write traffic will often be unpredictable. When traffic spikes occur, they will happen very quickly.

What should a solutions architect recommend?

✅ A. Create a DynamoDB table in on-demand capacity mode.
⬜ B. Create a DynamoDB table with a global secondary index.
⬜ C. Create a DynamoDB table with provisioned capacity and auto scaling.
⬜ D. Create a DynamoDB table in provisioned capacity mode, and configure it as a global table.

Explanation:
The on-demand capacity mode for DynamoDB is ideal for:
● Unpredictable workloads with irregular traffic patterns.
● Automatic scaling to handle sudden traffic spikes without manual capacity planning.
● Cost optimization because you only pay for the reads and writes you use, without overprovisioning.
This makes on-demand mode the best fit for unpredictable, spike-prone, and cost-sensitive scenarios.
Why other options are wrong:
B. A global secondary index (GSI) helps in querying but does not address cost optimization or capacity scaling.
C. Provisioned capacity with auto scaling helps with some variability but may not scale fast enough for sudden spikes.
D. Global tables are for multi-Region replication, not for handling unpredictable traffic or cost optimization within a single Region.
Source: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html
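
On-demand mode is a single table setting. The sketch below uses a hypothetical orders table with a simple partition key.

import boto3

dynamodb = boto3.client('dynamodb')

dynamodb.create_table(
    TableName='orders',
    AttributeDefinitions=[{'AttributeName': 'order_id', 'AttributeType': 'S'}],
    KeySchema=[{'AttributeName': 'order_id', 'KeyType': 'HASH'}],
    BillingMode='PAY_PER_REQUEST',  # on-demand capacity mode, no capacity planning
)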


A social media company wants to allow its users to upload images in an application that is hosted in the AWS Cloud. The company needs a solution that automatically resizes the images so that the images can be displayed on multiple device types. The application experiences unpredictable traffic patterns throughout the day. The company is seeking a highly available solution that maximizes scalability.

What should a solutions architect do to meet these requirements?

✅ A. Create a static website hosted in Amazon S3 that invokes AWS Lambda functions to resize the images and store the images in an Amazon S3 bucket.
⬜ B. Create a static website hosted in Amazon CloudFront that invokes AWS Step Functions to resize the images and store the images in an Amazon RDS database.
⬜ C. Create a dynamic website hosted on a web server that runs on an Amazon EC2 instance. Configure a process that runs on the EC2 instance to resize the images and store the images in an Amazon S3 bucket.
⬜ D. Create a dynamic website hosted on an automatically scaling Amazon Elastic Container Service (Amazon ECS) cluster that creates a resize job in Amazon Simple Queue Service (Amazon SQS). Set up an image-resizing program that runs on an Amazon EC2 instance to process the resize jobs.

Explanation:
The best solution is to use AWS Lambda with Amazon S3:
● Lambda is serverless, automatically scales to handle unpredictable traffic, and minimizes operational overhead.
● S3 provides highly durable, highly available storage for both the original and resized images.
● Triggering Lambda from S3 upload events ensures real-time, scalable, and efficient image processing without the need to manage servers.
Why other options are wrong:
B. Step Functions are meant for orchestrating workflows, not for directly processing image resizing tasks; storing images in RDS is not cost-effective or scalable for large objects.
C. A single EC2 instance does not automatically scale and introduces operational management overhead.
D. Using ECS and EC2 adds complexity and infrastructure management, and is unnecessary for a simple image resize task.
Source: https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html


A company has hired a solutions architect to design a reliable architecture for its application. The application consists of one Amazon RDS DB instance and two manually provisioned Amazon EC2 instances that run web servers. The EC2 instances are located in a single Availability Zone.

An employee recently deleted the DB instance, and the application was unavailable for 24 hours as a result. The company is concerned with the overall reliability of its environment.

What should the solutions architect do to maximize reliability of the application’s infrastructure?

⬜ A. Delete one EC2 instance and enable termination protection on the other EC2 instance. Update the DB instance to be Multi-AZ, and enable deletion protection.
✅ B. Update the DB instance to be Multi-AZ, and enable deletion protection. Place the EC2 instances behind an Application Load Balancer, and run them in an EC2 Auto Scaling group across multiple Availability Zones.
⬜ C. Create an additional DB instance along with an Amazon API Gateway and an AWS Lambda function. Configure the application to invoke the Lambda function through API Gateway. Have the Lambda function write the data to the two DB instances.
⬜ D. Place the EC2 instances in an EC2 Auto Scaling group that has multiple subnets located in multiple Availability Zones. Use Spot Instances instead of On-Demand Instances. Set up Amazon CloudWatch alarms to monitor the health of the instances. Update the DB instance to be Multi-AZ, and enable deletion protection.

Explanation:
The most reliable solution is to:
● Enable Multi-AZ on the RDS DB instance to provide automatic failover and high availability.
● Enable deletion protection to prevent accidental deletion of the RDS DB instance.
● Deploy EC2 instances across multiple Availability Zones within an Auto Scaling group behind an Application Load Balancer to ensure web server high availability and fault tolerance.
This setup maximizes reliability across both compute and database layers.
Why other options are wrong:
A. Deleting one EC2 instance reduces redundancy; keeping only one instance does not improve reliability.
C. Adding Lambda/API Gateway is unnecessary and overcomplicates the architecture for a traditional web application.
D. Using Spot Instances is not reliable for steady workloads because Spot Instances can be interrupted at any time.
Source: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html
Source: https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html
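
The database half of the answer is two flags on the existing DB instance, shown here with a placeholder identifier. ApplyImmediately is used for illustration; the change can also be deferred to the next maintenance window.

import boto3

rds = boto3.client('rds')

rds.modify_db_instance(
    DBInstanceIdentifier='app-db',
    MultiAZ=True,              # synchronous standby in another Availability Zone
    DeletionProtection=True,   # blocks accidental deletion of the DB instance
    ApplyImmediately=True,
)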


A company has an application that processes customer orders. The company hosts the application on an Amazon EC2 instance that saves the orders to an Amazon Aurora database. Occasionally when traffic is high, the workload does not process orders fast enough.

What should a solutions architect do to write the orders reliably to the database as quickly as possible?

⬜ A. Increase the instance size of the EC2 instance when traffic is high. Write orders to Amazon Simple Notification Service (Amazon SNS). Subscribe the database endpoint to the SNS topic.
✅ B. Write orders to an Amazon Simple Queue Service (Amazon SQS) queue. Use EC2 instances in an Auto Scaling group behind an Application Load Balancer to read from the SQS queue and process orders into the database.
⬜ C. Write orders to Amazon Simple Notification Service (Amazon SNS). Subscribe the database endpoint to the SNS topic. Use EC2 instances in an Auto Scaling group behind an Application Load Balancer to read from the SNS topic.
⬜ D. Write orders to an Amazon Simple Queue Service (Amazon SQS) queue when the EC2 instance reaches CPU threshold limits. Use scheduled scaling of EC2 instances in an Auto Scaling group behind an Application Load Balancer to read from the SQS queue and process orders into the database.

Explanation:
The best approach is to decouple the application by writing orders into an SQS queue, then having EC2 instances in an Auto Scaling group behind an Application Load Balancer read from the queue and process the orders into Aurora.
● SQS allows for reliable buffering during traffic spikes.
● Auto Scaling allows the application tier to scale dynamically based on demand, ensuring fast and reliable processing.
Why other options are wrong:
A. SNS is a pub/sub service, not a queuing service; subscribing a database endpoint directly is not valid.
C. Similarly, SNS is designed for notification delivery, not buffered task processing.
D. Writing to SQS only when CPU thresholds are hit delays queuing and can cause orders to be lost or delayed during rapid spikes.
Source: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/welcome.html
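
A sketch of the decoupled flow: the web tier enqueues orders, and worker instances in the Auto Scaling group drain the queue and write to Aurora. The queue URL is a placeholder and write_to_aurora stands in for the existing persistence code.

import json
import boto3

sqs = boto3.client('sqs')
QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/orders'  # placeholder

def submit_order(order):
    # Web tier: buffer the order instead of writing to the database synchronously
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(order))

def worker_loop(write_to_aurora):
    # Worker tier: long-poll the queue and persist orders at a sustainable rate
    while True:
        response = sqs.receive_message(
            QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
        for message in response.get('Messages', []):
            write_to_aurora(json.loads(message['Body']))
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message['ReceiptHandle'])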


A social media company runs its application on Amazon EC2 instances behind an Application Load Balancer (ALB). The ALB is the origin for an Amazon CloudFront distribution. The application has more than a billion images stored in an Amazon S3 bucket and processes thousands of images each second. The company wants to resize the images dynamically and serve appropriate formats to clients.

Which solution will meet these requirements with the LEAST operational overhead?

⬜ A. Install an external image management library on an EC2 instance. Use the image management library to process the images.
⬜ B. Create a CloudFront origin request policy. Use the policy to automatically resize images and to serve the appropriate format based on the User-Agent HTTP header in the request.
✅ C. Use a Lambda@Edge function with an external image management library. Associate the Lambda@Edge function with the CloudFront behaviors that serve the images.
⬜ D. Create a CloudFront response headers policy. Use the policy to automatically resize images and to serve the appropriate format based on the User-Agent HTTP header in the request.

Explanation:
The best solution with the least operational overhead is to use a Lambda@Edge function with CloudFront:
● Lambda@Edge allows serverless image processing very close to the client, reducing latency.
● You can dynamically resize and transform images based on the incoming request, such as adjusting based on the User-Agent header.
● No need to manage EC2 instances, load balancers, or manual backend processing, significantly reducing operational overhead.
Why other options are wrong:
A. Managing external libraries on EC2 instances requires infrastructure maintenance and scaling concerns.
B. A CloudFront origin request policy only controls which headers, cookies, and query strings are forwarded to the origin; it cannot resize or process images.
D. A CloudFront response headers policy manages HTTP response headers, but it does not modify or resize images.
Source: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-at-the-edge.html


A company that uses AWS needs a solution to predict the resources needed for manufacturing processes each month. The solution must use historical values that are currently stored in an Amazon S3 bucket. The company has no machine learning (ML) experience and wants to use a managed service for the training and predictions.

Which combination of steps will meet these requirements? (Select TWO.)

⬜ A. Deploy an Amazon SageMaker model. Create a SageMaker endpoint for inference.
⬜ B. Use Amazon SageMaker to train a model by using the historical data in the S3 bucket.
⬜ C. Configure an AWS Lambda function with a function URL that uses Amazon SageMaker endpoints to create predictions based on the inputs.
✅ D. Configure an AWS Lambda function with a function URL that uses an Amazon Forecast predictor to create a prediction based on the inputs.
✅ E. Train an Amazon Forecast predictor by using the historical data in the S3 bucket.

Explanation:
The best solution, given no machine learning experience and a need for managed services, is to use Amazon Forecast:
● Amazon Forecast is a fully managed service that automates model building, training, and deploying forecasts based on time series data without requiring ML expertise.
● E: Train an Amazon Forecast predictor using historical data stored in S3.
● D: Use an AWS Lambda function that calls the Forecast predictor to generate forecasts dynamically based on new inputs.
Why other options are wrong:
A. SageMaker requires ML expertise for model development and tuning, which the company lacks.
B. Same as A; training a SageMaker model still requires ML expertise to prepare data, choose algorithms, and tune the model.
C. Lambda integrating with SageMaker endpoints still implies custom ML model handling, unsuitable for non-ML experienced teams.
Source: https://docs.aws.amazon.com/forecast/latest/dg/what-is-forecast.html


A company offers a food delivery service that is growing rapidly. Because of the growth, the company’s order processing system is experiencing scaling problems during peak traffic hours. The current architecture includes the following:

● A group of Amazon EC2 instances that run in an Amazon EC2 Auto Scaling group to collect orders from the application.

● Another group of EC2 instances that run in an Amazon EC2 Auto Scaling group to fulfill orders.

The order collection process occurs quickly, but the order fulfillment process can take longer. Data must not be lost because of a scaling event.

A solutions architect must ensure that the order collection process and the order fulfillment process can both scale properly during peak traffic hours. The solution must optimize utilization of the company’s AWS resources.

Which solution meets these requirements?

⬜ A. Use Amazon CloudWatch metrics to monitor the CPU of each instance in the Auto Scaling groups. Configure each Auto Scaling group’s minimum capacity according to peak workload values.
⬜ B. Use Amazon CloudWatch metrics to monitor the CPU of each instance in the Auto Scaling groups. Configure a CloudWatch alarm to invoke an Amazon Simple Notification Service (Amazon SNS) topic that creates additional Auto Scaling groups on demand.
⬜ C. Provision two Amazon Simple Queue Service (Amazon SQS) queues: one for order collection and another for order fulfillment. Configure the EC2 instances to poll their respective queue. Scale the Auto Scaling groups based on notifications that the queues send.
✅ D. Provision two Amazon Simple Queue Service (Amazon SQS) queues: one for order collection and another for order fulfillment. Configure the EC2 instances to poll their respective queue. Create a metric based on a backlog per instance calculation. Scale the Auto Scaling groups based on this metric.

Explanation:
The most efficient solution is to:
● Use Amazon SQS queues to decouple order collection and order fulfillment, ensuring no data is lost if instances are scaled in or out.
● Use a backlog per instance metric (number of queued messages divided by the number of running instances) to dynamically scale EC2 instances in the Auto Scaling groups.
This approach optimizes resource utilization by scaling based on the actual workload rather than CPU utilization or manual capacity settings.
Why other options are wrong:
A. Setting minimum capacity based on peak workloads leads to overprovisioning and inefficient resource use during normal hours.
B. Creating new Auto Scaling groups dynamically is complex and unnecessary when scaling can be done within existing groups.
C. Scaling based only on SQS notifications is less precise compared to using a calculated backlog per instance metric.
Source: https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-using-sqs-queue.html
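
The backlog per instance metric is not built in; it is calculated and published (for example on a schedule) and then used by a scaling policy. The sketch below assumes placeholder queue and Auto Scaling group names.

import boto3

sqs = boto3.client('sqs')
autoscaling = boto3.client('autoscaling')
cloudwatch = boto3.client('cloudwatch')

QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/order-fulfillment'  # placeholder
ASG_NAME = 'order-fulfillment-asg'                                                # placeholder

def publish_backlog_per_instance():
    attributes = sqs.get_queue_attributes(
        QueueUrl=QUEUE_URL, AttributeNames=['ApproximateNumberOfMessages'])
    backlog = int(attributes['Attributes']['ApproximateNumberOfMessages'])

    group = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=[ASG_NAME])['AutoScalingGroups'][0]
    running_instances = max(len(group['Instances']), 1)

    # Messages waiting in the queue divided by the number of running instances
    cloudwatch.put_metric_data(
        Namespace='OrderProcessing',
        MetricData=[{'MetricName': 'BacklogPerInstance',
                     'Value': backlog / running_instances,
                     'Unit': 'Count'}],
    )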


A company has a three-tier environment on AWS that ingests sensor data from its users’ devices. The traffic flows through a Network Load Balancer (NLB), then to Amazon EC2 instances for the web tier, and finally to EC2 instances for the application tier that makes database calls.

What should a solutions architect do to improve the security of data in transit to the web tier?

✅ A. Configure a TLS listener and add the server certificate on the NLB.
⬜ B. Configure AWS Shield Advanced and enable AWS WAF on the NLB.
⬜ C. Change the load balancer to an Application Load Balancer and attach AWS WAF to it.
⬜ D. Encrypt the Amazon Elastic Block Store (Amazon EBS) volume on the EC2 instances using AWS Key Management Service (AWS KMS).

Explanation:
The best way to improve security of data in transit is to encrypt traffic between clients and the web tier using TLS.
● Network Load Balancers support TLS termination by configuring a TLS listener and attaching a server certificate to handle secure connections.
● This ensures that all data transmitted over the network to the web servers is encrypted in transit.
Why other options are wrong:
B. AWS Shield Advanced and AWS WAF protect against DDoS and application-layer attacks, not specifically encryption of data in transit.
C. Switching to an Application Load Balancer is unnecessary just for encryption; NLB can also handle TLS termination.
D. Encrypting EBS volumes protects data at rest, not data in transit.
Source: https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-tls-listener.html
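
Adding the TLS listener is a single API call once an ACM certificate exists; every ARN below is a placeholder.

import boto3

elbv2 = boto3.client('elbv2')

elbv2.create_listener(
    LoadBalancerArn='arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/sensor-nlb/0123456789abcdef',
    Protocol='TLS',
    Port=443,
    Certificates=[{'CertificateArn':
        'arn:aws:acm:us-east-1:123456789012:certificate/11111111-2222-3333-4444-555555555555'}],
    DefaultActions=[{'Type': 'forward', 'TargetGroupArn':
        'arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web-tier/abcdef0123456789'}],
)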


A company plans to use Amazon ElastiCache for its multi-tier web application. A solutions architect creates a Cache VPC for the ElastiCache cluster and an App VPC for the application’s Amazon EC2 instances. Both VPCs are in the us-east-1 Region.

The solutions architect must implement a solution to provide the application’s EC2 instances with access to the ElastiCache cluster.

Which solution will meet these requirements MOST cost-effectively?

✅ A. Create a peering connection between the VPCs. Add a route table entry for the peering connection in both VPCs. Configure an inbound rule for the ElastiCache cluster’s security group to allow inbound connection from the application’s security group.
⬜ B. Create a Transit VPC. Update the VPC route tables in the Cache VPC and the App VPC to route traffic through the Transit VPC. Configure an inbound rule for the ElastiCache cluster’s security group to allow inbound connection from the application’s security group.
⬜ C. Create a peering connection between the VPCs. Add a route table entry for the peering connection in both VPCs. Configure an inbound rule for the peering connection’s security group to allow inbound connection from the application’s security group.
⬜ D. Create a Transit VPC. Update the VPC route tables in the Cache VPC and the App VPC to route traffic through the Transit VPC. Configure an inbound rule for the Transit VPC’s security group to allow inbound connection from the application’s security group.

Explanation:
The most cost-effective and simple solution is to use a VPC peering connection:
● VPC peering allows direct network connectivity between two VPCs in the same Region without the need for additional network appliances like Transit VPCs.
● Update the route tables for both VPCs to route traffic through the peering connection.
● Adjust the ElastiCache security group to allow inbound traffic from the application’s security group.
This approach avoids the complexity and cost associated with Transit VPC architectures.
Why other options are wrong:
B. Transit VPCs are more expensive and unnecessarily complex for two VPCs within the same Region.
C. Security groups are attached to resources like EC2 or ElastiCache, not to peering connections themselves.
D. Same as B — Transit VPCs are not needed here and introduce avoidable operational and cost overhead.
Source: https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html
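
A sketch of the three pieces of the answer: the peering connection, the routes in both VPCs, and the security group rule. All IDs and CIDR ranges are placeholders.

import boto3

ec2 = boto3.client('ec2')

# Peer the App VPC with the Cache VPC and accept the request (same account, same Region)
peering = ec2.create_vpc_peering_connection(VpcId='vpc-0app1111', PeerVpcId='vpc-0cache222')
pcx_id = peering['VpcPeeringConnection']['VpcPeeringConnectionId']
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Route each VPC's CIDR block through the peering connection
ec2.create_route(RouteTableId='rtb-0app1111', DestinationCidrBlock='10.1.0.0/16',
                 VpcPeeringConnectionId=pcx_id)
ec2.create_route(RouteTableId='rtb-0cache222', DestinationCidrBlock='10.0.0.0/16',
                 VpcPeeringConnectionId=pcx_id)

# Allow the application security group to reach the cache security group on the Redis port
ec2.authorize_security_group_ingress(
    GroupId='sg-0cache222',
    IpPermissions=[{'IpProtocol': 'tcp', 'FromPort': 6379, 'ToPort': 6379,
                    'UserIdGroupPairs': [{'GroupId': 'sg-0app1111'}]}],
)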


A company has deployed its application on Amazon EC2 instances with an Amazon RDS database. The company used the principle of least privilege to configure the database access credentials. The company’s security team wants to protect the application and the database from SQL injection and other web-based attacks.

Which solution will meet these requirements with the LEAST operational overhead?

⬜ A. Use security groups and network ACLs to secure the database and application servers.
✅ B. Use AWS WAF to protect the application. Use RDS parameter groups to configure the security settings.
⬜ C. Use AWS Network Firewall to protect the application and the database.
⬜ D. Use different database accounts in the application code for different functions. Avoid granting excessive privileges to the database users.

Explanation:
The best solution with the least operational overhead is to use AWS WAF to protect the application:
● AWS WAF can automatically detect and block SQL injection and other common web-based attacks without needing to manage infrastructure manually.
● Additionally, RDS parameter groups help fine-tune the database’s security settings, such as enforcing SSL connections.
This approach leverages managed services to improve security while keeping operational burden low.
Why other options are wrong:
A. Security groups and network ACLs control network traffic, not application-layer attacks like SQL injection.
C. AWS Network Firewall is mainly used for network-level traffic filtering, not specifically web application-level attacks.
D. Using different database accounts improves database security but does not protect against web-based threats like SQL injection.
Source: https://docs.aws.amazon.com/waf/latest/developerguide/waf-chapter.html


A solutions architect is designing the architecture of a new application being deployed to the AWS Cloud. The application will run on Amazon EC2 On-Demand Instances and will automatically scale across multiple Availability Zones. The EC2 instances will scale up and down frequently throughout the day. An Application Load Balancer (ALB) will handle the load distribution. The architecture needs to support distributed session data management. The company is willing to make changes to code if needed.

What should the solutions architect do to ensure that the architecture supports distributed session data management?

✅ A. Use Amazon ElastiCache to manage and store session data.
⬜ B. Use session affinity (sticky sessions) of the ALB to manage session data.
⬜ C. Use Session Manager from AWS Systems Manager to manage the session.
⬜ D. Use the GetSessionToken API operation in AWS Security Token Service (AWS STS) to manage the session.

Explanation:
The most scalable and reliable way to handle distributed session data is to use Amazon ElastiCache (such as Redis or Memcached) to store session state centrally:
● ElastiCache allows any EC2 instance to read/write session data regardless of which instance handled the original request.
● This approach supports scaling across Availability Zones and frequent instance scale-in/scale-out without losing session data.
Why other options are wrong:
B. Sticky sessions tie users to specific instances, which does not scale well when instances are replaced frequently.
C. Session Manager provides interactive administrative access to EC2 instances; it does not manage web application session data.
D. AWS STS GetSessionToken provides temporary security credentials, not web application session data management.
Source: https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/ManagingSessions.Redis.html
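
A sketch of centralized session storage with ElastiCache for Redis using the redis-py client. The endpoint, key prefix, and 30-minute TTL are assumptions; any instance behind the ALB can read or write the same entries.

import json
import uuid
import redis  # redis-py client, assumed to be packaged with the application

# Placeholder ElastiCache for Redis endpoint
session_store = redis.Redis(host='sessions.xxxxxx.ng.0001.use1.cache.amazonaws.com', port=6379)

SESSION_TTL_SECONDS = 1800  # 30 minutes

def create_session(user_id, data):
    session_id = str(uuid.uuid4())
    session_store.setex(f'session:{session_id}', SESSION_TTL_SECONDS,
                        json.dumps({'user_id': user_id, **data}))
    return session_id

def get_session(session_id):
    raw = session_store.get(f'session:{session_id}')
    return json.loads(raw) if raw else None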


A company runs a high performance computing (HPC) workload on AWS. The workload requires low-latency network performance and high network throughput with tightly coupled node-to-node communication. The Amazon EC2 instances are properly sized for compute and storage capacity, and are launched using default options.

What should a solutions architect propose to improve the performance of the workload?

✅ A. Choose a cluster placement group while launching Amazon EC2 instances.
⬜ B. Choose dedicated instance tenancy while launching Amazon EC2 instances.
⬜ C. Choose an Elastic Inference accelerator while launching Amazon EC2 instances.
⬜ D. Choose the required capacity reservation while launching Amazon EC2 instances.

Explanation:
For high performance computing (HPC) workloads requiring low-latency, high-throughput, tightly coupled node-to-node communication, the best solution is to launch EC2 instances in a cluster placement group:
● Cluster placement groups place instances physically close together inside a single Availability Zone, optimizing for high network performance.
● This reduces network latency and increases network throughput between instances.
Why other options are wrong:
B. Dedicated tenancy provides isolated hardware but does not optimize network latency or throughput.
C. Elastic Inference accelerators are used to boost machine learning inference performance, not general HPC networking performance.
D. Capacity reservations ensure instance availability but do not affect network performance.
Source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
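
A sketch of launching the HPC nodes into a cluster placement group; the group name, AMI, instance type, and count are placeholders.

import boto3

ec2 = boto3.client('ec2')

ec2.create_placement_group(GroupName='hpc-cluster', Strategy='cluster')

ec2.run_instances(
    ImageId='ami-0123456789abcdef0',
    InstanceType='c5n.18xlarge',   # example of a network-optimized instance type
    MinCount=4,
    MaxCount=4,
    Placement={'GroupName': 'hpc-cluster'},  # keeps the nodes physically close together
)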


A company has a Microsoft .NET application that runs on an on-premises Windows Server. The application stores data by using an Oracle Database Standard Edition server. The company is planning a migration to AWS and wants to minimize development changes while moving the application. The AWS application environment should be highly available.

Which combination of actions should the company take to meet these requirements? (Select TWO.)

⬜ A. Refactor the application as serverless with AWS Lambda functions running .NET Core.
✅ B. Rehost the application in AWS Elastic Beanstalk with the .NET platform in a Multi-AZ deployment.
⬜ C. Replatform the application to run on Amazon EC2 with the Amazon Linux Amazon Machine Image (AMI).
⬜ D. Use AWS Database Migration Service (AWS DMS) to migrate from the Oracle database to Amazon DynamoDB in a Multi-AZ deployment.
✅ E. Use AWS Database Migration Service (AWS DMS) to migrate from the Oracle database to Oracle on Amazon RDS in a Multi-AZ deployment.

Explanation:
The company wants to minimize development changes, so the best approach is:
● B. Rehost the application on Elastic Beanstalk using the .NET platform. Elastic Beanstalk supports Windows-based applications and automatically handles Multi-AZ deployments for high availability.
● E. Use AWS DMS to migrate from Oracle to Oracle on Amazon RDS, allowing the company to continue using Oracle without needing to change the application’s database logic. RDS also offers Multi-AZ deployments for high availability.
Why other options are wrong:
A. Refactoring into serverless with Lambda would require major development changes (contrary to the requirement).
C. Replatforming to EC2 with Amazon Linux would require OS and platform changes, increasing development and migration efforts.
D. Migrating from Oracle to DynamoDB would require significant schema and application logic changes, not minimal changes.
Source: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_NET.html
Source: https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html


A company’s application integrates with multiple software-as-a-service (SaaS) sources for data collection. The company runs Amazon EC2 instances to receive the data and to upload the data to an Amazon S3 bucket for analysis. The same EC2 instance that receives and uploads the data also sends a notification to the user when an upload is complete. The company has noticed slow application performance and wants to improve the performance as much as possible.

Which solution will meet these requirements with the LEAST operational overhead?

⬜ A. Create an Auto Scaling group so that EC2 instances can scale out. Configure an S3 event notification to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.
✅ B. Create an Amazon AppFlow flow to transfer data between each SaaS source and the S3 bucket. Configure an S3 event notification to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.
⬜ C. Create an Amazon EventBridge (Amazon CloudWatch Events) rule for each SaaS source to send output data. Configure the S3 bucket as the rule’s target. Create a second EventBridge (CloudWatch Events) rule to send events when the upload to the S3 bucket is complete. Configure an Amazon Simple Notification Service (Amazon SNS) topic as the second rule’s target.
⬜ D. Create a Docker container to use instead of an EC2 instance. Host the containerized application on Amazon Elastic Container Service (Amazon ECS). Configure Amazon CloudWatch Container Insights to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.

Explanation:
The best solution with the least operational overhead is to use Amazon AppFlow:
● Amazon AppFlow is a fully managed integration service that allows you to transfer data securely and directly from SaaS applications into AWS services like S3 without running any servers.
● Combined with S3 event notifications to SNS for user notifications after uploads, this completely removes the need for EC2 instance management, improving performance and reliability.
Why other options are wrong:
A. Using Auto Scaling EC2 instances still requires managing servers, which increases operational overhead.
C. EventBridge rules for direct data ingestion from SaaS sources are complex to set up and not the intended use case.
D. Containerizing with ECS reduces EC2 overhead but still requires managing containers and scaling policies, making it more complex than a managed service like AppFlow.
Source: https://docs.aws.amazon.com/appflow/latest/userguide/what-is-appflow.html


A rapidly growing ecommerce company is running its workloads in a single AWS Region. A solutions architect must create a disaster recovery (DR) strategy that includes a different AWS Region. The company wants its database to be up to date in the DR Region with the least possible latency. The remaining infrastructure in the DR Region needs to run at reduced capacity and must be able to scale up if necessary.

Which solution will meet these requirements with the LOWEST recovery time objective (RTO)?

⬜ A. Use an Amazon Aurora global database with a pilot light deployment.
✅ B. Use an Amazon Aurora global database with a warm standby deployment.
⬜ C. Use an Amazon RDS Multi-AZ DB instance with a pilot light deployment.
⬜ D. Use an Amazon RDS Multi-AZ DB instance with a warm standby deployment.

Explanation:
The best solution for lowest RTO and up-to-date data with minimal latency is to use an Amazon Aurora global database with a warm standby deployment:
● Aurora global databases are designed for low-latency cross-Region replication (typically less than 1 second lag).
● A warm standby means that the DR environment is running at reduced capacity but can scale up quickly when needed, providing a much faster recovery compared to cold or pilot light strategies.
Why other options are wrong:
A. A pilot light approach keeps minimal services running, but warm standby provides faster recovery (lower RTO).
C. RDS Multi-AZ operates only within a single Region, not across multiple Regions.
D. RDS Multi-AZ still remains in one Region; it does not support cross-Region disaster recovery natively.
Source: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-databases.html


A company hosts its application on AWS. The company uses Amazon Cognito to manage users. When users log in to the application, the application fetches required data from Amazon DynamoDB by using a REST API that is hosted in Amazon API Gateway. The company wants an AWS managed solution that will control access to the REST API to reduce development efforts.

Which solution will meet these requirements with the LEAST operational overhead?

⬜ A. Configure an AWS Lambda function to be an authorizer in API Gateway to validate which user made the request.
⬜ B. For each user, create and assign an API key that must be sent with each request. Validate the key by using an AWS Lambda function.
⬜ C. Send the user’s email address in the header with every request. Invoke an AWS Lambda function to validate that the user with that email address has proper access.
✅ D. Configure an Amazon Cognito user pool authorizer in API Gateway to allow Amazon Cognito to validate each request.

Explanation:
The most efficient, AWS managed solution with least operational overhead is to use an Amazon Cognito user pool authorizer directly in API Gateway:
● API Gateway can natively integrate with Cognito to authenticate and authorize API requests.
● No need to write custom Lambda functions for validation, reducing development and operational burden.
● Cognito will automatically handle token validation and user authentication with minimal configuration.
Why other options are wrong:
A. Lambda authorizers require custom coding and management, increasing operational overhead.
B. API keys are primarily used for usage tracking and throttling, not authentication or fine-grained access control.
C. Sending user information in headers and validating manually requires custom logic, making it less secure and more complex.
Source: https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-integrate-with-cognito.html


A new employee has joined a company as a deployment engineer. The deployment engineer will be using AWS CloudFormation templates to create multiple AWS resources. A solutions architect wants the deployment engineer to perform job activities while following the principle of least privilege.

Which combination of steps should the solutions architect take to reach this goal? (Select TWO.)

⬜ A. Have the deployment engineer use AWS account root user credentials for performing AWS CloudFormation stack operations.
⬜ B. Create a new IAM user for the deployment engineer and add the IAM user to a group that has the PowerUsers IAM policy attached.
⬜ C. Create a new IAM user for the deployment engineer and add the IAM user to a group that has the AdministratorAccess IAM policy attached.
✅ D. Create a new IAM user for the deployment engineer and add the IAM user to a group that has an IAM policy that allows AWS CloudFormation actions only.
✅ E. Create an IAM role for the deployment engineer to explicitly define the permissions specific to the AWS CloudFormation stack and launch stacks using that IAM role.

Explanation:
To follow the principle of least privilege:
● D. Create an IAM user and assign permissions only for CloudFormation actions needed for their role, not broad permissions like Administrator or PowerUser.
● E. Creating an IAM role with specific permissions to launch CloudFormation stacks provides controlled and auditable access, ensuring tight privilege boundaries.
Why other options are wrong:
A. Never use the AWS root account for daily activities; it’s reserved for account and security administration only.
B. The PowerUserAccess policy grants too many permissions (full access to AWS services, excluding user and group management in IAM).
C. AdministratorAccess grants full access to all AWS services and resources, which violates least privilege principles.
Source: https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html


A telemarketing company is designing its customer call center functionality on AWS. The company needs a solution that provides multiple speaker recognition and generates transcript files. The company wants to query the transcript files to analyze the business patterns. The transcript files must be stored for 7 years for auditing purposes.

Which solution will meet these requirements?

⬜ A. Use Amazon Rekognition for multiple speaker recognition. Store the transcript files in Amazon S3. Use machine learning models for transcript file analysis.
✅ B. Use Amazon Transcribe for multiple speaker recognition. Use Amazon Athena for transcript file analysis.
⬜ C. Use Amazon Translate for multiple speaker recognition. Store the transcript files in Amazon Redshift. Use SQL queries for transcript file analysis.
⬜ D. Use Amazon Rekognition for multiple speaker recognition. Store the transcript files in Amazon S3. Use Amazon Textract for transcript file analysis.

Explanation:
The best solution is to use Amazon Transcribe:
● Amazon Transcribe is specifically designed for audio transcription and supports multiple speaker recognition (speaker diarization).
● Amazon Athena can query the transcript files stored in Amazon S3 directly using SQL queries, without the need for complex ETL processes.
● This combination meets the need for easy analysis, scalability, and 7 years of cost-effective S3 storage for compliance.
Why other options are wrong:
A. Amazon Rekognition is used for image and video analysis, not audio transcription.
C. Amazon Translate is for language translation, not speaker recognition or audio transcription.
D. Amazon Textract is used for extracting text from documents, not analyzing audio transcriptions.
Source: https://docs.aws.amazon.com/transcribe/latest/dg/what-is-transcribe.html
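
Speaker diarization is enabled per transcription job. The sketch below assumes the call recordings and transcript output live in placeholder S3 buckets and that each call has two speakers.

import boto3

transcribe = boto3.client('transcribe')

transcribe.start_transcription_job(
    TranscriptionJobName='call-2024-05-01-0001',
    Media={'MediaFileUri': 's3://call-recordings-bucket/call-0001.wav'},
    MediaFormat='wav',
    LanguageCode='en-US',
    OutputBucketName='call-transcripts-bucket',  # transcripts in S3 can be queried with Athena
    Settings={
        'ShowSpeakerLabels': True,   # multiple speaker recognition (diarization)
        'MaxSpeakerLabels': 2,
    },
)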


An ecommerce company has an order-processing application that uses Amazon API Gateway and an AWS Lambda function. The application stores data in an Amazon Aurora PostgreSQL database. During a recent sales event, a sudden surge in customer orders occurred. Some customers experienced timeouts and the application did not process the orders of those customers. A solutions architect determined that the CPU utilization and memory utilization were high on the database because of a large number of open connections. The solutions architect needs to prevent the timeout errors while making the least possible changes to the application.

Which solution will meet these requirements?

⬜ A. Configure provisioned concurrency for the Lambda function. Modify the database to be a global database in multiple AWS Regions.
✅ B. Use Amazon RDS Proxy to create a proxy for the database. Modify the Lambda function to use the RDS Proxy endpoint instead of the database endpoint.
⬜ C. Create a read replica for the database in a different AWS Region. Use query string parameters in API Gateway to route traffic to the read replica.
⬜ D. Migrate the data from Aurora PostgreSQL to Amazon DynamoDB by using AWS Database Migration Service (AWS DMS). Modify the Lambda function to use the DynamoDB table.

Explanation:
The best solution with the least changes to the application is to use Amazon RDS Proxy:
● RDS Proxy manages database connections efficiently, pooling and reusing connections instead of opening a new one for every Lambda invocation.
● This reduces CPU and memory load on the database, preventing timeout errors during sudden traffic spikes.
● Updating the Lambda function to connect to the RDS Proxy endpoint instead of the direct database endpoint is a minimal code change (see the sketch below).
Why other options are wrong:
A. Provisioned concurrency improves Lambda cold starts but does not address database connection overload.
C. Read replicas are for read scaling only, and routing traffic manually via query strings is not practical for this use case.
D. Migrating to DynamoDB involves major application changes, which contradicts the requirement for minimal changes.
Source: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/rds-proxy.html
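
A minimal sketch of that change, assuming a Python Lambda function that uses psycopg2 and hypothetical environment variables for the proxy endpoint, database name, and credentials:

import os
import psycopg2  # packaged with the function or supplied as a Lambda layer

# The only change to the application: connect to the RDS Proxy endpoint
# instead of the Aurora cluster endpoint (all values are hypothetical).
connection = psycopg2.connect(
    host=os.environ["DB_PROXY_ENDPOINT"],  # e.g. orders-proxy.proxy-xxxx.us-east-1.rds.amazonaws.com
    port=5432,
    dbname=os.environ["DB_NAME"],
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASSWORD"],
    connect_timeout=5,
)

def handler(event, context):
    # RDS Proxy multiplexes this connection against a pooled set of database
    # connections, so spikes in concurrent invocations no longer exhaust the database.
    with connection.cursor() as cursor:
        cursor.execute("INSERT INTO orders (payload) VALUES (%s)", (str(event),))  # hypothetical table
        connection.commit()
    return {"statusCode": 200}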


A company’s application is having performance issues. The application is stateful and needs to complete in-memory tasks on Amazon EC2 instances. The company used AWS CloudFormation to deploy infrastructure and used the M5 EC2 instance family. As traffic increased, the application performance degraded. Users are reporting delays when the users attempt to access the application.

Which solution will resolve these issues in the MOST operationally efficient way?

⬜ A. Replace the EC2 instances with T3 EC2 instances that run in an Auto Scaling group. Make the changes by using the AWS Management Console.
⬜ B. Modify the CloudFormation templates to run the EC2 instances in an Auto Scaling group. Increase the desired capacity and the maximum capacity of the Auto Scaling group manually when an increase is necessary.
✅ C. Modify the CloudFormation templates. Replace the EC2 instances with R5 EC2 instances. Use Amazon CloudWatch built-in EC2 memory metrics to track the application performance for future capacity planning.
⬜ D. Modify the CloudFormation templates. Replace the EC2 instances with R5 EC2 instances. Deploy the Amazon CloudWatch agent on the EC2 instances to generate custom application latency metrics for future capacity planning.

Explanation:
The application is stateful and performs in-memory operations, meaning memory-optimized instances are a better fit.
● C. Replacing M5 instances with R5 memory-optimized instances improves performance for memory-intensive workloads.
● Using built-in CloudWatch memory metrics ensures operational efficiency without needing extra agents for monitoring, making future capacity planning easier.
Why other options are wrong:
A. T3 instances are burstable and not designed for consistent high memory workloads.
B. Auto Scaling groups are helpful for stateless applications; scaling stateful apps is complex and not addressed just by adding instances.
D. Deploying the CloudWatch agent introduces additional management overhead, whereas built-in metrics are easier and faster to use.
Source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html#AvailableInstanceTypes


A company is running several business applications in three separate VPCs within the us-east-1 Region. The applications must be able to communicate between VPCs. The applications also must be able to consistently send hundreds of gigabytes of data each day to a latency-sensitive application that runs in a single on-premises data center.

A solutions architect needs to design a network connectivity solution that maximizes cost-effectiveness.

Which solution meets these requirements?

⬜ A. Configure three AWS Site-to-Site VPN connections from the data center to AWS. Establish connectivity by configuring one VPN connection for each VPC.
⬜ B. Launch a third-party virtual network appliance in each VPC. Establish an IPsec VPN tunnel between the data center and each virtual appliance.
⬜ C. Set up three AWS Direct Connect connections from the data center to a Direct Connect gateway in us-east-1. Establish connectivity by configuring each VPC to use one of the Direct Connect connections.
✅ D. Set up one AWS Direct Connect connection from the data center to AWS. Create a transit gateway, and attach each VPC to the transit gateway. Establish connectivity between the Direct Connect connection and the transit gateway.

Explanation:
The best solution for high throughput, low latency, and cost-effectiveness is to:
● D. Use a single AWS Direct Connect connection (cost-effective for large data volumes) and a Transit Gateway to connect multiple VPCs and the on-premises data center.
● Transit Gateway allows easy and scalable VPC-to-VPC communication and efficient on-premises connectivity through the Direct Connect link.
Why other options are wrong:
A. Site-to-Site VPNs are cheaper but introduce higher latency and lower throughput compared to Direct Connect — not ideal for latency-sensitive, high-data-volume applications.
B. Using third-party appliances adds complexity and extra cost without solving the primary throughput/latency requirement efficiently.
C. Setting up three separate Direct Connect connections is unnecessary and expensive when one connection and a Transit Gateway can achieve the same result.
Source: https://docs.aws.amazon.com/directconnect/latest/UserGuide/direct-connect-gateways.html
Source: https://docs.aws.amazon.com/vpc/latest/tgw/what-is-transit-gateway.html


⬜ A. Reduce the S3 object sizes to less than 126 MB.
✅ B. Partition the data by date and region in Amazon S3.
⬜ C. Store the files as large, single objects in Amazon S3.
⬜ D. Use Amazon Kinesis Data Analytics to run the queries as part of the batch processing operation.
✅ E. Use an AWS Glue extract, transform, and load (ETL) process to convert the CSV files into Apache Parquet format.

Explanation:
To improve Athena query performance and reliability:
● B. Partitioning the data by date and region reduces the amount of data scanned per query, significantly improving speed and reducing costs.
● E. Converting CSV files to Parquet (a columnar format) reduces storage size and increases query efficiency because Athena reads only the columns a query actually needs (see the CTAS sketch below).
Why other options are wrong:
A. Reducing object size below 126 MB could cause too many small files, leading to performance degradation in Athena.
C. Large, unpartitioned files can slow down queries and increase scan costs.
D. Amazon Kinesis Data Analytics is for real-time stream processing, not for optimizing batch query performance on static S3 datasets.
Source: https://docs.aws.amazon.com/athena/latest/ug/optimizing-queries.html
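
One possible implementation is an Athena CTAS statement that rewrites the raw CSV table as partitioned Parquet; the database, table, and bucket names below are hypothetical:

import boto3

athena = boto3.client("athena")

# CTAS query: rewrite the CSV table as Parquet, partitioned by date and region.
# Partition columns must appear last in the SELECT list.
ctas = """
CREATE TABLE analytics.orders_parquet
WITH (
    format = 'PARQUET',
    external_location = 's3://example-analytics-bucket/orders_parquet/',
    partitioned_by = ARRAY['dt', 'region']
) AS
SELECT order_id, amount, dt, region
FROM analytics.orders_csv
"""

athena.start_query_execution(
    QueryString=ctas,
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)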


A company runs a global web application on Amazon EC2 instances behind an Application Load Balancer. The application stores data in Amazon Aurora. The company needs to create a disaster recovery solution and can tolerate up to 30 minutes of downtime and potential data loss. The solution does not need to handle the load when the primary infrastructure is healthy.

What should a solutions architect do to meet these requirements?

✅ A. Deploy the application with the required infrastructure elements in place. Use Amazon Route 53 to configure active-passive failover. Create an Aurora Replica in a second AWS Region.
⬜ B. Host a scaled-down deployment of the application in a second AWS Region. Use Amazon Route 53 to configure active-active failover. Create an Aurora Replica in the second Region.
⬜ C. Replicate the primary infrastructure in a second AWS Region. Use Amazon Route 53 to configure active-active failover. Create an Aurora database that is restored from the latest snapshot.
⬜ D. Back up data with AWS Backup. Use the backup to create the required infrastructure in a second AWS Region. Use Amazon Route 53 to configure active-passive failover. Create an Aurora second primary instance in the second Region.

Explanation:
The best solution for 30 minutes of acceptable downtime and some data loss tolerance is an active-passive disaster recovery setup:
● A. Set up the necessary infrastructure in a second Region but keep it passive (inactive) unless a failover occurs.
● Use Route 53 active-passive failover to switch DNS to the secondary Region if the primary fails.
● An Aurora cross-Region replica provides near real-time replication and faster recovery in case of disaster.
This approach is cost-effective because the secondary infrastructure is not actively serving traffic.
Why other options are wrong:
B. Active-active setups are more expensive and are intended for high availability, not just disaster recovery with some downtime tolerance.
C. Restoring from a snapshot introduces more downtime than allowed (snapshot restores are slower than promoting a replica).
D. AWS Backup is suitable for backups but not ideal for minimizing RTO/RPO compared to Aurora replicas for DR scenarios.
Source: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database.html


A company wants to direct its users to a backup static error page if the company’s primary website is unavailable. The primary website’s DNS records are hosted in Amazon Route 53. The domain is pointing to an Application Load Balancer (ALB). The company needs a solution that minimizes changes and infrastructure overhead.

Which solution will meet these requirements?

⬜ A. Update the Route 53 records to use a latency routing policy. Add a static error page that is hosted in an Amazon S3 bucket to the records so that the traffic is sent to the most responsive endpoints.
✅ B. Set up a Route 53 active-passive failover configuration. Direct traffic to a static error page that is hosted in an Amazon S3 bucket when Route 53 health checks determine that the ALB endpoint is unhealthy.
⬜ C. Set up a Route 53 active-active configuration with the ALB and an Amazon EC2 instance that hosts a static error page as endpoints. Configure Route 53 to send requests to the instance only if the health checks fail for the ALB.
⬜ D. Update the Route 53 records to use a multivalue answer routing policy. Create a health check. Direct traffic to the website if the health check passes. Direct traffic to a static error page that is hosted in Amazon S3 if the health check does not pass.

Explanation:
The most efficient solution with minimal infrastructure changes and low overhead is:
● B. Use Route 53 active-passive failover.
● Primary traffic is directed to the ALB while it is healthy; when the ALB health check fails, traffic is automatically routed to the static error page hosted in the S3 bucket (see the record sketch below).
● This setup is simple, serverless, and cost-effective.
Why other options are wrong:
A. Latency routing is for choosing the lowest latency endpoint, not for failover in case of unavailability.
C. Active-active setup adds unnecessary complexity; the company only needs a backup page, not a second active website.
D. Multivalue answer routing is not as reliable for precise failover behavior compared to a dedicated active-passive failover policy.
Source: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover.html
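
A minimal boto3 sketch of the failover record pair, assuming a hypothetical hosted zone, health check, ALB, and an S3 bucket named after the domain; the alias hosted zone IDs are placeholders for the Region-specific values published in the AWS documentation:

import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0EXAMPLE12345",  # hypothetical hosted zone for example.com
    ChangeBatch={
        "Changes": [
            {
                # Primary record: the ALB, gated by a Route 53 health check
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "A",
                    "SetIdentifier": "primary",
                    "Failover": "PRIMARY",
                    "HealthCheckId": "11111111-2222-3333-4444-555555555555",  # hypothetical health check
                    "AliasTarget": {
                        "HostedZoneId": "ZALBEXAMPLE",  # ALB alias hosted zone ID for the Region
                        "DNSName": "my-alb-123456.us-east-1.elb.amazonaws.com",
                        "EvaluateTargetHealth": True,
                    },
                },
            },
            {
                # Secondary record: the S3 bucket (named www.example.com) hosting the error page
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "A",
                    "SetIdentifier": "secondary",
                    "Failover": "SECONDARY",
                    "AliasTarget": {
                        "HostedZoneId": "ZS3WEBSITEEXAMPLE",  # S3 website endpoint hosted zone ID for the Region
                        "DNSName": "s3-website-us-east-1.amazonaws.com",
                        "EvaluateTargetHealth": False,
                    },
                },
            },
        ]
    },
)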


A company has a data ingestion workflow that includes the following components:

● An Amazon Simple Notification Service (Amazon SNS) topic that receives notifications about new data deliveries

● An AWS Lambda function that processes and stores the data

The ingestion workflow occasionally fails because of network connectivity issues. When failure occurs, the corresponding data is not ingested unless the company manually reruns the job. What should a solutions architect do to ensure that all notifications are eventually processed?

⬜ A. Configure the Lambda function for deployment across multiple Availability Zones.
⬜ B. Modify the Lambda function’s configuration to increase the CPU and memory allocations for the function.
⬜ C. Configure the SNS topic’s retry strategy to increase both the number of retries and the wait time between retries.
✅ D. Configure an Amazon Simple Queue Service (Amazon SQS) queue as the on-failure destination. Modify the Lambda function to process messages in the queue.

Explanation:
The most reliable way to ensure that all notifications are eventually processed is to:
● D. Use an Amazon SQS queue as the on-failure destination for the Lambda function’s asynchronous invocations, effectively a dead-letter queue for the SNS-triggered events (as sketched below).
● This ensures that if the Lambda function invocation fails, the notification is stored in SQS and can be retried later.
● Lambda can then process messages from the SQS queue, making the ingestion more fault-tolerant without manual intervention.
Why other options are wrong:
A. Lambda is already highly available across multiple AZs by design — no extra configuration needed.
B. Increasing CPU and memory might improve performance, but does not solve network failure issues.
C. SNS retry strategies are helpful but do not guarantee delivery beyond a few retries if persistent failures occur — using SQS ensures durable message storage.
Source: https://docs.aws.amazon.com/lambda/latest/dg/invocation-async.html#invocation-async-destinations
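
A minimal boto3 sketch of wiring the on-failure destination to an SQS queue (the function name and queue ARN are hypothetical):

import boto3

lambda_client = boto3.client("lambda")

# Send events that still fail after the configured retries to an SQS queue
# so they can be replayed later instead of being dropped.
lambda_client.put_function_event_invoke_config(
    FunctionName="ingest-processor",  # hypothetical function name
    MaximumRetryAttempts=2,
    DestinationConfig={
        "OnFailure": {
            "Destination": "arn:aws:sqs:us-east-1:123456789012:ingest-failures"  # hypothetical queue
        }
    },
)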


A business’s backup data totals 700 terabytes (TB) and is kept in network-attached storage (NAS) at its data center. This backup data must be available in the event of occasional regulatory inquiries and preserved for a period of seven years. The organization has chosen to relocate its backup data from its on-premises data center to Amazon Web Services (AWS). Within one month, the migration must be completed. The company’s public internet connection provides 500 Mbps of dedicated capacity for data transport.

What should a solutions architect do to ensure that data is migrated and stored at the LOWEST possible cost?

✅ A. Order AWS Snowball devices to transfer the data. Use a lifecycle policy to transition the files to Amazon S3 Glacier Deep Archive.
⬜ B. Deploy a VPN connection between the data center and Amazon VPC. Use the AWS CLI to copy the data from on premises to Amazon S3 Glacier.
⬜ C. Provision a 500 Mbps AWS Direct Connect connection and transfer the data to Amazon S3. Use a lifecycle policy to transition the files to Amazon S3 Glacier Deep Archive.
⬜ D. Use AWS DataSync to transfer the data and deploy a DataSync agent on premises. Use the DataSync task to copy files from the on-premises NAS storage to Amazon S3 Glacier.

Explanation:
The best option for migrating 700 TB of data within one month and at the lowest cost is to:
● A. Use AWS Snowball devices, which are designed for large-scale data migrations without depending on network bandwidth.
● After ingestion into S3, S3 lifecycle policies can automatically transition data to S3 Glacier Deep Archive for low-cost long-term storage (see the lifecycle sketch below).
Transferring 700 TB over a 500 Mbps link would take roughly 130 days even at full line rate, so an online transfer cannot meet the one-month deadline.
Why other options are wrong:
B. VPN connection over public internet would be too slow and is not intended for large bulk data migrations.
C. Provisioning Direct Connect might improve speed but requires weeks to set up, costs more, and may not meet the one-month deadline.
D. AWS DataSync uses network bandwidth, and 700 TB over a 500 Mbps line would not complete in time.
Source: https://docs.aws.amazon.com/snowball/latest/ug/whatissnowball.html
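
Once the Snowball data lands in S3, a single lifecycle rule can handle both the transition and the 7-year expiry; a sketch with a hypothetical bucket name:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-backup-archive",  # hypothetical bucket that received the Snowball import
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "deep-archive-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to every object
                "Transitions": [
                    {"Days": 0, "StorageClass": "DEEP_ARCHIVE"}  # move to Deep Archive right away
                ],
                "Expiration": {"Days": 2557},  # delete after roughly 7 years
            }
        ]
    },
)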


A gaming company has a web application that displays scores. The application runs on Amazon EC2 instances behind an Application Load Balancer. The application stores data in an Amazon RDS for MySQL database. Users are starting to experience long delays and interruptions that are caused by database read performance. The company wants to improve the user experience while minimizing changes to the application’s architecture.

What should a solutions architect do to meet these requirements?

✅ A. Use Amazon ElastiCache in front of the database.
⬜ B. Use RDS Proxy between the application and the database.
⬜ C. Migrate the application from EC2 instances to AWS Lambda.
⬜ D. Migrate the database from Amazon RDS for MySQL to Amazon DynamoDB.

Explanation:
The most effective way to improve read performance while minimizing changes to the current architecture is to:
● A. Implement Amazon ElastiCache (like Redis or Memcached) in front of the RDS database.
● ElastiCache will cache frequent read queries, reducing the load on RDS and significantly improving application responsiveness without major changes to the application code (a cache-aside sketch follows below).
Why other options are wrong:
B. RDS Proxy helps with connection pooling and scaling connections, but does not significantly improve read performance for heavy query loads.
C. Migrating to Lambda would require major refactoring of the application, which goes against the requirement to minimize changes.
D. Migrating to DynamoDB would require significant application redesign, as DynamoDB is a NoSQL database, very different from MySQL.
Source: https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/WhatIs.html
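
A minimal cache-aside sketch with redis-py against an ElastiCache for Redis endpoint; the endpoint, table, and key names are hypothetical, and db_connection is the application’s existing MySQL connection:

import json
import redis

# Hypothetical ElastiCache for Redis endpoint
cache = redis.Redis(host="scores-cache.xxxxxx.use1.cache.amazonaws.com", port=6379)

def get_top_scores(db_connection, limit=10):
    key = f"top_scores:{limit}"

    cached = cache.get(key)
    if cached is not None:                  # cache hit: no database read
        return json.loads(cached)

    with db_connection.cursor() as cursor:  # cache miss: read from RDS for MySQL once
        cursor.execute(
            "SELECT player, score FROM scores ORDER BY score DESC LIMIT %s", (limit,)
        )
        rows = cursor.fetchall()

    cache.setex(key, 30, json.dumps(rows))  # keep the result for 30 seconds
    return rows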


A company is planning to build a high performance computing (HPC) workload as a service solution that is hosted on AWS. A group of 16 Amazon EC2 Linux instances requires the lowest possible latency for node-to-node communication. The instances also need a shared block device volume for high-performing storage.

Which solution will meet these requirements?

✅ A. Use a cluster placement group. Attach a single Provisioned IOPS SSD Amazon Elastic Block Store (Amazon EBS) volume to all the instances by using Amazon EBS Multi-Attach.
⬜ B. Use a cluster placement group. Create shared file systems across the instances by using Amazon Elastic File System (Amazon EFS).
⬜ C. Use a partition placement group. Create shared file systems across the instances by using Amazon Elastic File System (Amazon EFS).
⬜ D. Use a spread placement group. Attach a single Provisioned IOPS SSD Amazon Elastic Block Store (Amazon EBS) volume to all the instances by using Amazon EBS Multi-Attach.

Explanation:
For HPC workloads requiring lowest possible network latency and high-performance shared storage:
● A. Use a cluster placement group to place all instances physically close together inside a single AZ, optimizing network performance (low latency and high throughput).
● Attach a Provisioned IOPS SSD EBS volume with EBS Multi-Attach, allowing multiple instances in the same Availability Zone to access the same block storage device simultaneously for high-performance shared storage (see the sketch below).
Why other options are wrong:
B. EFS is a file system, not a block device, and introduces higher latency — not ideal for HPC requiring block-level storage and low latency.
C. Partition placement groups isolate groups of instances for large-scale distributed systems but do not optimize for low-latency communication like cluster placement groups do.
D. Spread placement groups maximize instance separation for fault tolerance, not for minimizing network latency.
Source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
Source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volumes-multi.html
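
A boto3 sketch of the placement group and the shared io2 volume; Multi-Attach supports up to 16 instances in one AZ, which matches the 16-node group (all names and sizes are hypothetical):

import boto3

ec2 = boto3.client("ec2")

# Cluster placement group keeps the 16 instances physically close for low-latency networking.
ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

# io2 volume with Multi-Attach so every instance in the AZ can attach the same block device.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,            # GiB, hypothetical
    VolumeType="io2",
    Iops=32000,          # hypothetical provisioned IOPS
    MultiAttachEnabled=True,
)

# After launching the instances into the placement group:
# ec2.attach_volume(VolumeId=volume["VolumeId"], InstanceId="i-...", Device="/dev/sdf")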


A solutions architect is performing a security review of a recently migrated workload. The workload is a web application that consists of Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer. The solutions architect must improve the security posture and minimize the impact of a DDoS attack on resources.

Which solution is MOST effective?

✅ A. Configure an AWS WAF ACL with rate-based rules. Create an Amazon CloudFront distribution that points to the Application Load Balancer. Enable the WAF ACL on the CloudFront distribution.
⬜ B. Create a custom AWS Lambda function that adds identified attacks into a common vulnerability pool to capture a potential DDoS attack. Use the identified information to modify a network ACL to block access.
⬜ C. Enable VPC Flow Logs and store them in Amazon S3. Create a custom AWS Lambda function that parses the logs looking for a DDoS attack. Modify a network ACL to block identified source IP addresses.
⬜ D. Enable Amazon GuardDuty and configure findings written to Amazon CloudWatch. Create an event with CloudWatch Events for DDoS alerts that triggers Amazon Simple Notification Service (Amazon SNS). Have Amazon SNS invoke a custom AWS Lambda function that parses the logs looking for a DDoS attack. Modify a network ACL to block identified source IP addresses.

Explanation:
The most effective way to improve the security posture and protect against DDoS attacks is to:
● A. Use AWS WAF with rate-based rules to automatically block IP addresses that exceed request thresholds.
● Place AWS WAF in front of an Amazon CloudFront distribution, and point the CloudFront distribution to the ALB, which helps absorb and mitigate DDoS attacks at the edge layer (CloudFront).
This method provides automated protection, reduces load on backend systems, and minimizes operational overhead.
Why other options are wrong:
B. Building a custom Lambda function and managing NACLs manually is complex, reactive, and error-prone compared to using managed services.
C. VPC Flow Logs are useful for monitoring but do not actively block DDoS traffic.
D. GuardDuty provides threat detection, not immediate traffic filtering; mitigation through custom workflows increases latency and operational overhead.
Source: https://docs.aws.amazon.com/waf/latest/developerguide/waf-chapter.html
Source: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web.html


A company is migrating its on-premises PostgreSQL database to Amazon Aurora PostgreSQL. The on-premises database must remain online and accessible during the migration. The Aurora database must remain synchronized with the on-premises database.

Which combination of actions must a solutions architect take to meet these requirements? (Select TWO.)

✅ A. Create an ongoing replication task.
⬜ B. Create a database backup of the on-premises database.
✅ C. Create an AWS Database Migration Service (AWS DMS) replication server.
⬜ D. Convert the database schema by using the AWS Schema Conversion Tool (AWS SCT).
⬜ E. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to monitor the database synchronization.

Explanation:
To migrate a live, online database while keeping it synchronized:
● A. An ongoing replication task with AWS DMS captures ongoing changes (CDC - Change Data Capture) after the initial load.
● C. An AWS DMS replication server is needed to manage and run the replication process from the source (on-premises PostgreSQL) to the target (Aurora PostgreSQL).
Why other options are wrong:
B. Creating a backup is helpful but does not support ongoing synchronization — backups are one-time, not continuous replication.
D. The AWS Schema Conversion Tool (AWS SCT) is only needed for heterogeneous migrations (different engines). Since both are PostgreSQL, no schema conversion is needed.
E. EventBridge can monitor events but does not synchronize databases; monitoring alone is not sufficient to ensure migration success.
Source: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Overview.html


A company stores data in an Amazon Aurora PostgreSQL DB cluster. The company must store all the data for 5 years and must delete all the data after 5 years. The company also must indefinitely keep audit logs of actions that are performed within the database. Currently, the company has automated backups configured for Aurora.

Which combination of steps should a solutions architect take to meet these requirements? (Select TWO.)

⬜ A. Take a manual snapshot of the DB cluster.
⬜ B. Create a lifecycle policy for the automated backups.
⬜ C. Configure automated backup retention for 5 years.
✅ D. Configure an Amazon CloudWatch Logs export for the DB cluster.
✅ E. Use AWS Backup to take the backups and to keep the backups for 5 years.

Explanation:
To meet the requirements:
● D. Export the Aurora audit logs to Amazon CloudWatch Logs to retain audit activity indefinitely and meet the compliance requirement (see the sketch below).
● E. Use AWS Backup to manage database backups and set a retention policy to keep backups for 5 years and delete them after that.
Why other options are wrong:
A. Manual snapshots do not expire automatically, requiring manual deletion and management, which increases operational overhead.
B. There is no lifecycle policy for automated Aurora backups (Aurora automatically manages backups within a limited retention window).
C. Aurora automated backup retention supports up to 35 days only, not 5 years — AWS Backup is required for longer retention periods.
Source: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Backups.html
Source: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-postgresql-logs.html
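
A minimal boto3 sketch of turning on the PostgreSQL log export for the cluster (the cluster identifier is hypothetical); the exported log group’s retention can then be left at “Never expire” to keep the audit trail indefinitely:

import boto3

rds = boto3.client("rds")

rds.modify_db_cluster(
    DBClusterIdentifier="audit-aurora-cluster",  # hypothetical cluster
    CloudwatchLogsExportConfiguration={"EnableLogTypes": ["postgresql"]},
    ApplyImmediately=True,
)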


A company has migrated a fleet of hundreds of on-premises virtual machines (VMs) to Amazon EC2 instances. The instances run a diverse fleet of Windows Server versions along with several Linux distributions. The company wants a solution that will automate inventory and updates of the operating systems. The company also needs a summary of common vulnerabilities of each instance for regular monthly reviews.

What should a solutions architect recommend to meet these requirements?

⬜ A. Set up AWS Systems Manager Patch Manager to manage all the EC2 instances. Configure AWS Security Hub to produce monthly reports.
✅ B. Set up AWS Systems Manager Patch Manager to manage all the EC2 instances. Deploy Amazon Inspector, and configure monthly reports.
⬜ C. Set up AWS Shield Advanced, and configure monthly reports. Deploy AWS Config to automate patch installations on the EC2 instances.
⬜ D. Set up Amazon GuardDuty in the account to monitor all EC2 instances. Deploy AWS Config to automate patch installations on the EC2 instances.

Explanation:
The best solution is:
● B. Use AWS Systems Manager Patch Manager to automate patching and updating across diverse EC2 instances.
● Deploy Amazon Inspector, which automatically assesses instances for common vulnerabilities and exposures (CVEs) and provides monthly reports on findings.
This setup addresses both patch automation and vulnerability assessment in a scalable, managed way.
Why other options are wrong:
A. AWS Security Hub aggregates findings from services like Inspector but does not perform vulnerability scans itself.
C. AWS Shield Advanced is for DDoS protection, not patching or vulnerability management.
D. Amazon GuardDuty is for threat detection, not managing patches or generating CVE vulnerability reports.
Source: https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-patch.html
Source: https://docs.aws.amazon.com/inspector/latest/user/inspector_introduction.html


A company wants to build an online marketplace application on AWS as a set of loosely coupled microservices. For this application, when a customer submits a new order two microservices should handle the event simultaneously:

● The Email microservice will send a confirmation email.

● The OrderProcessing microservice will start the order delivery process.

If a customer cancels an order, the OrderCancellation and Email microservices should handle the event simultaneously.

A solutions architect wants to use Amazon Simple Queue Service (Amazon SQS) and Amazon Simple Notification Service (Amazon SNS) to design the messaging between the microservices.

How should the solutions architect design the solution?

⬜ A. Create a single SQS queue and publish order events to it. The Email, OrderProcessing, and OrderCancellation microservices can then consume messages off the queue.
⬜ B. Create three SNS topics for each microservice. Publish order events to the three topics. Subscribe each of the Email, OrderProcessing, and OrderCancellation microservices to its own topic.
✅ C. Create an SNS topic and publish order events to it. Create three SQS queues for the Email, OrderProcessing, and OrderCancellation microservices. Subscribe all SQS queues to the SNS topic with message filtering.
⬜ D. Create two SQS queues and publish order events to both queues simultaneously. One queue is for the Email and OrderProcessing microservices. The second queue is for the Email and OrderCancellation microservices.

Explanation:
The most efficient and scalable solution is:
● C. Create an SNS topic for publishing order events.
● Create three SQS queues (one for each microservice: Email, OrderProcessing, OrderCancellation).
● Subscribe the queues to the SNS topic and use message filtering to route only the relevant events (for example, new order or cancel order) to each microservice (see the sketch below).
This allows loose coupling, fan-out architecture, and efficient event distribution with minimal duplication.
Why other options are wrong:
A. A single SQS queue does not allow multiple services to independently consume the same event.
B. Creating a separate SNS topic for each microservice unnecessarily complicates the design; a single topic with filtered subscriptions achieves the same result.
D. Publishing to multiple SQS queues directly increases complexity and breaks event-driven best practices.
Source: https://docs.aws.amazon.com/sns/latest/dg/sns-message-filtering.html
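
A boto3 sketch of the fan-out wiring, assuming one topic, three queues, and an event_type message attribute; all ARNs and attribute names are hypothetical, and the queues also need access policies that allow the topic to deliver to them:

import json
import boto3

sns = boto3.client("sns")

TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:order-events"  # hypothetical topic

subscriptions = {
    "arn:aws:sqs:us-east-1:123456789012:email-queue": ["order_created", "order_cancelled"],
    "arn:aws:sqs:us-east-1:123456789012:order-processing-queue": ["order_created"],
    "arn:aws:sqs:us-east-1:123456789012:order-cancellation-queue": ["order_cancelled"],
}

for queue_arn, event_types in subscriptions.items():
    sns.subscribe(
        TopicArn=TOPIC_ARN,
        Protocol="sqs",
        Endpoint=queue_arn,
        Attributes={
            "FilterPolicy": json.dumps({"event_type": event_types}),  # filter on a message attribute
            "RawMessageDelivery": "true",
        },
    )

# Publishers tag each event so the filter policies can route it:
# sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps(order),
#             MessageAttributes={"event_type": {"DataType": "String", "StringValue": "order_created"}})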


A solutions architect must migrate a Windows Internet Information Services (IIS) web application to AWS. The application currently relies on a file share hosted in the user’s on-premises network-attached storage (NAS). The solutions architect has proposed migrating the IIS web servers to Amazon EC2 instances in multiple Availability Zones that are connected to the storage solution, and configuring an Elastic Load Balancer attached to the instances.

Which replacement to the on-premises file share is MOST resilient and durable?

⬜ A. Migrate the file share to Amazon RDS.
⬜ B. Migrate the file share to AWS Storage Gateway.
✅ C. Migrate the file share to Amazon FSx for Windows File Server.
⬜ D. Migrate the file share to Amazon Elastic File System (Amazon EFS).

Explanation:
The best replacement for an on-premises Windows-based file share that supports IIS web applications is:
● C. Use Amazon FSx for Windows File Server, which provides a fully managed, highly available, and durable Windows-native SMB file system.
● It natively supports Windows file system features (such as NTFS, Active Directory integration, and DFS namespaces), making it the ideal solution for a seamless migration with high resilience and durability across Availability Zones.
Why other options are wrong:
A. Amazon RDS is for relational databases, not file shares.
B. AWS Storage Gateway is typically used to extend on-premises storage to AWS, not as a full native AWS file system.
D. Amazon EFS is primarily for Linux-based file systems (NFS protocol), not Windows IIS environments.
Source: https://docs.aws.amazon.com/fsx/latest/WindowsGuide/what-is.html


A company has hundreds of Amazon EC2 Linux-based instances in the AWS Cloud. Systems administrators have used shared SSH keys to manage the instances. After a recent audit, the company’s security team is mandating the removal of all shared keys. A solutions architect must design a solution that provides secure access to the EC2 instances.

Which solution will meet this requirement with the LEAST amount of administrative overhead?

✅ A. Use AWS Systems Manager Session Manager to connect to the EC2 instances.
⬜ B. Use AWS Security Token Service (AWS STS) to generate one-time SSH keys on demand.
⬜ C. Allow shared SSH access to a set of bastion instances. Configure all other instances to allow only SSH access from the bastion instances.
⬜ D. Use an Amazon Cognito custom authorizer to authenticate users. Invoke an AWS Lambda function to generate a temporary SSH key.

Explanation:
The best solution with the least administrative overhead is to:
● A. Use AWS Systems Manager Session Manager to securely connect to EC2 instances without the need for SSH keys.
● It eliminates the need to manage SSH keys, supports fine-grained IAM access control, provides audit logging to Amazon CloudWatch Logs or Amazon S3, and works over the AWS network without needing to open inbound ports.
Why other options are wrong:
B. AWS STS is mainly for temporary credentials for AWS APIs, not for managing SSH access.
C. Bastion hosts still require managing SSH keys and expose security risks and operational overhead.
D. Cognito + Lambda adds unnecessary complexity for simple EC2 access management.
Source: https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html


A company hosts a marketing website in an on-premises data center. The website consists of static documents and runs on a single server. An administrator updates the website content infrequently and uses an SFTP client to upload new documents.

The company decides to host its website on AWS and to use Amazon CloudFront. The company’s solutions architect creates a CloudFront distribution. The solutions architect must design the most cost-effective and resilient architecture for website hosting to serve as the CloudFront origin.

Which solution will meet these requirements?

⬜ A. Create a virtual server by using Amazon Lightsail. Configure the web server in the Lightsail instance. Upload website content by using an SFTP client.
⬜ B. Create an AWS Auto Scaling group for Amazon EC2 instances. Use an Application Load Balancer. Upload website content by using an SFTP client.
✅ C. Create a private Amazon S3 bucket. Use an S3 bucket policy to allow access from a CloudFront origin access identity (OAI). Upload website content by using the AWS CLI.
⬜ D. Create a public Amazon S3 bucket. Configure AWS Transfer for SFTP. Configure the S3 bucket for website hosting. Upload website content by using the SFTP client.

Explanation:
The most cost-effective and resilient solution is:
● C. Store the static website content in a private Amazon S3 bucket and configure access through a CloudFront origin access identity (OAI).
● Uploading via the AWS CLI is straightforward and secure.
● This approach provides high availability, automatic scaling, and low cost without running servers.
Why other options are wrong:
A. Amazon Lightsail adds unnecessary server management and higher costs compared to S3 for static content.
B. An Auto Scaling group and ALB are overkill for hosting static documents, and much more expensive.
D. Making the S3 bucket public is not recommended due to security risks; best practice is using private buckets with CloudFront OAI.
Source: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html


A company deploys Amazon EC2 instances that run in a VPC. The EC2 instances load source data into Amazon S3 buckets so that the data can be processed in the future. According to compliance laws, the data must not be transmitted over the public internet. Servers in the company’s on-premises data center will consume the output from an application that runs on the EC2 instances.

Which solution will meet these requirements?

⬜ A. Deploy an interface VPC endpoint for Amazon EC2. Create an AWS Site-to-Site VPN connection between the company and the VPC.
✅ B. Deploy a gateway VPC endpoint for Amazon S3. Set up an AWS Direct Connect connection between the on-premises network and the VPC.
⬜ C. Set up an AWS Transit Gateway connection from the VPC to the S3 buckets. Create an AWS Site-to-Site VPN connection between the company and the VPC.
⬜ D. Set up proxy EC2 instances that have routes to NAT gateways. Configure the proxy EC2 instances to fetch S3 data and feed the application instances.

Explanation:
The correct and most compliant solution is:
● B. Use a gateway VPC endpoint for S3 to allow the EC2 instances to access S3 privately, without traversing the public internet (see the sketch below).
● Use AWS Direct Connect to create a private, dedicated connection between the on-premises data center and AWS for secure data transfer without touching the internet.
This ensures that all traffic stays private and meets compliance requirements.
Why other options are wrong:
A. An interface VPC endpoint is for AWS services like EC2 APIs, not for S3 data access.
C. Transit Gateway is for connecting VPCs and on-premises networks, not for direct S3 access.
D. Using proxy EC2 instances with NAT gateways would still route traffic through public IPs, violating the compliance requirement.
Source: https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html
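
A minimal sketch of creating the gateway endpoint for S3 (the VPC and route table IDs are hypothetical):

import boto3

ec2 = boto3.client("ec2")

# S3 traffic from subnets that use this route table now stays on the AWS network.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc123456789def0",            # hypothetical VPC
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0abc123456789def0"],  # hypothetical route table
)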


A company runs multiple Amazon EC2 Linux instances in a VPC across two Availability Zones. The instances host applications that use a hierarchical directory structure. The applications need to read and write rapidly and concurrently to shared storage. What should a solutions architect do to meet these requirements?

⬜ A. Create an Amazon S3 bucket. Allow access from all the EC2 instances in the VPC.
✅ B. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system from each EC2 instance.
⬜ C. Create a file system on a Provisioned IOPS SSD (io2) Amazon Elastic Block Store (Amazon EBS) volume. Attach the EBS volume to all the EC2 instances.
⬜ D. Create file systems on Amazon Elastic Block Store (Amazon EBS) volumes that are attached to each EC2 instance. Synchronize the EBS volumes across the different EC2 instances.

Explanation:
The best solution is:
● B. Use Amazon Elastic File System (Amazon EFS), which is a fully managed, scalable, NFS-based shared file storage service.
● EFS allows multiple EC2 instances across multiple AZs to simultaneously mount and read/write to the same shared file system.
● This setup is perfect for applications requiring hierarchical directory structures and concurrent access with high throughput and low latency.
Why other options are wrong:
A. Amazon S3 is object storage, not a shared file system — not suitable for hierarchical file system requirements or concurrent file operations.
C. EBS volumes can only be attached to one instance at a time unless using EBS Multi-Attach (which is still block-level, not shared file system semantics).
D. Synchronizing EBS volumes manually across instances is complex, error-prone, and inefficient.
Source: https://docs.aws.amazon.com/efs/latest/ug/whatisefs.html


A company wants to implement a disaster recovery plan for its primary on-premises file storage volume. The file storage volume is mounted from an Internet Small Computer Systems Interface (iSCSI) device on a local storage server. The file storage volume holds hundreds of terabytes (TB) of data.

The company wants to ensure that end users retain immediate access to all file types from the on-premises systems without experiencing latency.

Which solution will meet these requirements with the LEAST amount of change to the company’s existing infrastructure?

⬜ A. Provision an Amazon S3 File Gateway as a virtual machine (VM) that is hosted on premises. Set the local cache to 10 TB. Modify existing applications to access the files through the NFS protocol. To recover from a disaster, provision an Amazon EC2 instance and mount the S3 bucket that contains the files.
⬜ B. Provision an AWS Storage Gateway tape gateway. Use a data backup solution to back up all existing data to a virtual tape library. Configure the data backup solution to run nightly after the initial backup is complete. To recover from a disaster, provision an Amazon EC2 instance and restore the data to an Amazon Elastic Block Store (Amazon EBS) volume from the volumes in the virtual tape library.
⬜ C. Provision an AWS Storage Gateway Volume Gateway cached volume. Set the local cache to 10 TB. Mount the Volume Gateway cached volume to the existing file server by using iSCSI, and copy all files to the storage volume. Configure scheduled snapshots of the storage volume. To recover from a disaster, restore a snapshot to an Amazon Elastic Block Store (Amazon EBS) volume and attach the EBS volume to an Amazon EC2 instance.
✅ D. Provision an AWS Storage Gateway Volume Gateway stored volume with the same amount of disk space as the existing file storage volume. Mount the Volume Gateway stored volume to the existing file server by using iSCSI, and copy all files to the storage volume. Configure scheduled snapshots of the storage volume. To recover from a disaster, restore a snapshot to an Amazon Elastic Block Store (Amazon EBS) volume and attach the EBS volume to an Amazon EC2 instance.

Explanation:
The best solution with the least change to the existing on-premises iSCSI setup and low-latency access is:
● D. Use a Storage Gateway Volume Gateway stored volume.
● Stored volumes keep a complete copy of your data on-premises for low-latency access while asynchronously backing up data to AWS.
● This meets the requirement for immediate access without latency and provides an easy disaster recovery mechanism via snapshots to Amazon EBS.
Why other options are wrong:
A. S3 File Gateway would require changing protocols (to NFS), which means modifying applications — more change than desired.
B. Tape Gateway is primarily used for archival backup workflows, not live access to files.
C. Cached volumes only cache frequently used data locally, so access to less-frequent files would experience latency when pulling from AWS.
Source: https://docs.aws.amazon.com/storagegateway/latest/userguide/StorageGatewayConcepts.html#volume-gateway-concepts


A recent analysis of a company’s IT expenses highlights the need to reduce backup costs. The company’s chief information officer wants to simplify the on-premises backup infrastructure and reduce costs by eliminating the use of physical backup tapes. The company must preserve the existing investment in the on-premises backup applications and workflows.

What should a solutions architect recommend?

⬜ A. Set up AWS Storage Gateway to connect with the backup applications using the NFS interface.
⬜ B. Set up an Amazon EFS file system that connects with the backup applications using the NFS interface.
⬜ C. Set up an Amazon EFS file system that connects with the backup applications using the iSCSI interface.
✅ D. Set up AWS Storage Gateway to connect with the backup applications using the iSCSI-virtual tape library (VTL) interface.

Explanation:
The correct solution is:
● D. Use AWS Storage Gateway (Tape Gateway) with an iSCSI virtual tape library (VTL) interface.
● This allows the company to replace physical tape infrastructure with a cloud-based virtual tape solution without changing existing backup applications or workflows.
● It emulates tape libraries that backup software can recognize, thereby preserving current investments and simplifying backup management.
Why other options are wrong:
A. NFS Storage Gateway is used for file storage, not a tape replacement.
B. Amazon EFS is a file system, not a backup tape replacement or VTL solution.
C. Amazon EFS does not support iSCSI; EFS uses the NFS protocol, so it cannot emulate tape backup systems.
Source: https://docs.aws.amazon.com/storagegateway/latest/userguide/StorageGatewayConcepts.html#tape-gateway-concepts


A company has a web application that runs on Amazon EC2 instances. The company wants end users to authenticate themselves before they use the web application. The web application accesses AWS resources, such as Amazon S3 buckets, on behalf of users who are logged on.

Which combination of actions must a solutions architect take to meet these requirements? (Select TWO.)

⬜ A. Configure AWS App Mesh to log on users.
⬜ B. Enable and configure AWS Single Sign-On in AWS Identity and Access Management (IAM).
✅ C. Define a default IAM role for authenticated users.
⬜ D. Use AWS Identity and Access Management (IAM) for user authentication.
✅ E. Use Amazon Cognito for user authentication.

Explanation:
The correct solution to handle user authentication and accessing AWS resources on behalf of authenticated users is:
● E. Use Amazon Cognito to handle user authentication (sign-up, sign-in, and user management) securely.
● C. Define a default IAM role for authenticated users through Amazon Cognito identity pools to allow temporary access to AWS resources like S3.
This ensures that users can authenticate and the application can safely assume roles to access AWS services on their behalf.
Why other options are wrong:
A. AWS App Mesh is for service-to-service communication inside microservices, not user authentication.
B. AWS Single Sign-On is used for centralized access management to AWS accounts, not for external application users.
D. IAM controls access to AWS resources for IAM users and roles; it is not meant for direct end-user authentication at the application level.
Source: https://docs.aws.amazon.com/cognito/latest/developerguide/what-is-amazon-cognito.html


An ecommerce company is building a distributed application that involves several serverless functions and AWS services to complete order-processing tasks. These tasks require manual approvals as part of the workflow. A solutions architect needs to design an architecture for the order-processing application. The solution must be able to combine multiple AWS Lambda functions into responsive serverless applications. The solution also must orchestrate data and services that run on Amazon EC2 instances, containers, or on-premises servers.

Which solution will meet these requirements with the LEAST operational overhead?

✅ A. Use AWS Step Functions to build the application.
⬜ B. Integrate all the application components in an AWS Glue job.
⬜ C. Use Amazon Simple Queue Service (Amazon SQS) to build the application.
⬜ D. Use AWS Lambda functions and Amazon EventBridge (Amazon CloudWatch Events) events to build the application.

Explanation:
The best solution for orchestrating multiple serverless functions, services, and manual approval steps with low operational overhead is:
● A. AWS Step Functions.
● Step Functions can sequence Lambda functions, integrate with EC2, containers, and on-premises systems, and insert manual approval steps into a workflow with visual monitoring and error handling (see the sketch below).
● It provides a serverless, fully managed orchestration service, significantly simplifying complex workflows.
Why other options are wrong:
B. AWS Glue is for ETL jobs and data transformation, not workflow orchestration.
C. Amazon SQS is good for decoupling components but does not manage complex workflows or manual approvals.
D. EventBridge helps with event-driven architectures, but does not handle workflow orchestration and manual approval steps out of the box.
Source: https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html
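
A sketch of a small state machine with a manual-approval step via the .waitForTaskToken integration; the function names, ARNs, and IAM role are hypothetical placeholders:

import json
import boto3

sfn = boto3.client("stepfunctions")

# Minimal workflow: a processing step, a manual-approval step that pauses until
# SendTaskSuccess is called with the task token, then a final step.
definition = {
    "StartAt": "ProcessOrder",
    "States": {
        "ProcessOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:process-order",
            "Next": "WaitForApproval",
        },
        "WaitForApproval": {
            "Type": "Task",
            # waitForTaskToken pauses the execution until an approver (or system)
            # calls SendTaskSuccess/SendTaskFailure with the token.
            "Resource": "arn:aws:states:::lambda:invoke.waitForTaskToken",
            "Parameters": {
                "FunctionName": "notify-approver",
                "Payload": {"token.$": "$$.Task.Token", "order.$": "$"},
            },
            "Next": "FulfillOrder",
        },
        "FulfillOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:fulfill-order",
            "End": True,
        },
    },
}

sfn.create_state_machine(
    name="order-processing",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/order-processing-sfn-role",
)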


A company maintains a searchable repository of items on its website. The data is stored in an Amazon RDS for MySQL database table that contains more than 10 million rows. The database has 2 TB of General Purpose SSD storage. There are millions of updates against this data every day through the company’s website.

The company has noticed that some insert operations are taking 10 seconds or longer. The company has determined that the database storage performance is the problem.

Which solution addresses this performance issue?

✅ A. Change the storage type to Provisioned IOPS SSD.
⬜ B. Change the DB instance to a memory optimized instance class.
⬜ C. Change the DB instance to a burstable performance instance class.
⬜ D. Enable Multi-AZ RDS read replicas with MySQL native asynchronous replication.

Explanation:
The correct solution is:
● A. Change the RDS storage type from General Purpose SSD (gp2) to Provisioned IOPS SSD (io1/io2), which provides consistent, high-performance I/O required for high-throughput, low-latency workloads like this one.
● Provisioned IOPS storage is specifically designed for transaction-intensive workloads with millions of inserts/updates, eliminating I/O bottlenecks.
Why other options are wrong:
B. A memory-optimized instance improves query caching and read performance, but the issue here is storage I/O performance.
C. Burstable instances are designed for low to moderate baseline workloads, not high-volume, high-update workloads.
D. Read replicas help scale read operations, but insert and update operations would still be bottlenecked at the primary database storage level.
Source: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html#gp2-storage


A company uses Amazon EC2 instances to host its internal systems. As part of a deployment operation, an administrator tries to use the AWS CLI to terminate an EC2 instance. However, the administrator receives a 403 (Access Denied) error message.

The administrator is using an IAM role that has the following IAM policy attached:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:TerminateInstances"],
            "Resource": ["*"]
        },
        {
            "Effect": "Deny",
            "Action": ["ec2:TerminateInstances"],
            "Condition": {
                "NotIpAddress": {
                    "aws:SourceIp": [
                        "192.0.2.0/24",
                        "203.0.113.0/24"
                    ]
                }
            },
            "Resource": ["*"]
        }
    ]
}

What is the cause of the unsuccessful request?

⬜ A. The EC2 instance has a resource-based policy with a Deny statement.
⬜ B. The principal has not been specified in the policy statement.
⬜ C. The ‘Action’ field does not grant the actions that are required to terminate the EC2 instance.
✅ D. The request to terminate the EC2 instance does not originate from the CIDR blocks 192.0.2.0/24 or 203.0.113.0/24.

Explanation:
The policy attached has:
● An Allow to terminate instances generally.
● A Deny conditionally applied if the request does NOT come from the IP addresses 192.0.2.0/24 or 203.0.113.0/24.
Since the administrator is receiving a 403 Access Denied, it indicates that the request did not originate from the allowed CIDR blocks, triggering the Deny condition.
In AWS IAM policies, explicit Deny overrides any Allow.
Why other options are wrong:
A. Resource-based policies are not relevant to EC2 for actions like termination — this is IAM identity policy based.
B. The Principal field is not necessary inside an IAM identity policy (it’s needed for resource policies).
C. The Action field is correctly set to ec2:TerminateInstances.
Source: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html


A company is building a media sharing application and decides to use Amazon S3 for storage. When a media file is uploaded, the company starts a multi-step process to create thumbnails, identify objects in the images, transcode videos into standard formats and resolutions, and extract and store the metadata to an Amazon DynamoDB table. The metadata is used for searching and navigation.

The amount of traffic is variable. The solution must be able to scale to handle spikes in load without unnecessary expenses.

What should a solutions architect recommend to support this workload?

⬜ A. Build the processing into the website or mobile app used to upload the content to Amazon S3. Save the required data to the DynamoDB table when the objects are uploaded.
✅ B. Trigger AWS Step Functions when an object is stored in the S3 bucket. Have the Step Functions perform the steps needed to process the object and then write the metadata to the DynamoDB table.
⬜ C. Trigger an AWS Lambda function when an object is stored in the S3 bucket. Have the Lambda function start AWS Batch to perform the steps to process the object. Place the object data in the DynamoDB table when complete.
⬜ D. Trigger an AWS Lambda function to store an initial entry in the DynamoDB table when an object is uploaded to Amazon S3. Use a program running on an Amazon EC2 instance in an Auto Scaling group to poll the index for unprocessed items, and use the program to perform the processing.

Explanation:
The best solution for handling variable load, multiple processing steps, and scaling efficiently is:
● B. Use AWS Step Functions to orchestrate the multi-step process triggered by S3 uploads.
● Step Functions can coordinate multiple AWS services (e.g., Lambda, Batch) easily with serverless scaling, error handling, and pay-per-use cost model.
● This approach ensures low operational overhead and automatic scaling for spikes in load without running unnecessary compute resources.
Why other options are wrong:
A. Pushing processing into the mobile/web app adds complexity and ties processing to the client, which is not scalable.
C. Using Lambda to start AWS Batch adds unnecessary complexity — Step Functions can directly manage workflows efficiently.
D. Using EC2 instances to poll and process adds management overhead and fixed costs, making it less cost-efficient and less scalable compared to serverless solutions.
Source: https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html


A company used an Amazon RDS for MySQL DB instance during application testing. Before terminating the DB instance at the end of the test cycle, a solutions architect created two backups. The solutions architect created the first backup by using the mysqldump utility to create a database dump. The solutions architect created the second backup by enabling the final DB snapshot option on RDS termination.

The company is now planning for a new test cycle and wants to create a new DB instance from the most recent backup. The company has chosen a MySQL-compatible edition of Amazon Aurora to host the DB instance.

Which solutions will create the new DB instance? (Select TWO.)

✅ A. Import the RDS snapshot directly into Aurora.
⬜ B. Upload the RDS snapshot to Amazon S3, then import the RDS snapshot into Aurora.
✅ C. Upload the database dump to Amazon S3. Then import the database dump into Aurora.
⬜ D. Use AWS Database Migration Service (AWS DMS) to import the RDS snapshot into Aurora.
⬜ E. Upload the database dump to Amazon S3. Then use AWS Database Migration Service (AWS DMS) to import the database dump into Aurora.

Explanation:
The correct options are:
● A. An RDS for MySQL DB snapshot can be migrated (restored) directly into an Aurora MySQL-Compatible DB cluster, so the final DB snapshot taken at termination can seed the new cluster (see the sketch below).
● C. mysqldump produces a SQL dump file, which can be uploaded to S3 and loaded into Aurora MySQL (for example, by replaying the dump with a MySQL client, or by using Aurora’s LOAD DATA FROM S3 for delimited data files).
Why other options are wrong:
B. You cannot manually upload a snapshot to S3 and import it — RDS snapshots are managed internally by AWS.
D. AWS DMS is used for live replication and ongoing migrations, not for direct snapshot imports.
E. Although DMS could be used to migrate from a running database, using DMS to import a mysqldump file is unnecessary and complicated compared to native Aurora import tools.
Source: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Migrating.RDSMySQL.Replicate.html
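
A boto3 sketch of option A, restoring the final RDS for MySQL snapshot into a new Aurora MySQL cluster; all identifiers are hypothetical, and for an RDS DB snapshot the ARN form of the snapshot identifier is expected:

import boto3

rds = boto3.client("rds")

# Restore the RDS for MySQL snapshot as an Aurora MySQL cluster.
rds.restore_db_cluster_from_snapshot(
    DBClusterIdentifier="test-aurora-cluster",
    SnapshotIdentifier="arn:aws:rds:us-east-1:123456789012:snapshot:test-mysql-final",  # hypothetical
    Engine="aurora-mysql",
)

# The restored cluster needs at least one DB instance to accept connections.
rds.create_db_instance(
    DBInstanceIdentifier="test-aurora-instance-1",
    DBClusterIdentifier="test-aurora-cluster",
    DBInstanceClass="db.r5.large",
    Engine="aurora-mysql",
)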


A company uses a simple static website and wants to host it on AWS. The company already has a domain that it uses for email. The company needs a hosting solution that supports HTTPS. Which solution will meet these requirements MOST cost-effectively?

⬜ A. Create an Amazon S3 bucket with a name to match the website. Upload the website to the S3 bucket. Set up website hosting for the S3 bucket. Set up the DNS to point to the S3 website endpoint.
⬜ B. Create an Amazon S3 bucket, upload the website to the S3 bucket. Set up an HTTPS certificate by using AWS Certificate Manager (ACM). Create an Amazon CloudFront distribution for the S3 bucket and choose Price Class All.
⬜ C. Set up an open-source content management system (CMS) from AWS Marketplace. Deploy the CMS across two Availability Zones. Copy the website onto the CMS. Set up the DNS to point to the CMS.
✅ D. Create an Amazon S3 bucket. Upload the website to the S3 bucket. Set up an HTTPS certificate by using AWS Certificate Manager (ACM). Create an Amazon CloudFront distribution for the S3 bucket and choose Price Class 100. Point to the CloudFront distribution.

Explanation:
The most cost-effective way to host a static website with HTTPS support on AWS is:
● D. Host the static content in an Amazon S3 bucket,
● Use AWS Certificate Manager (ACM) to provision an SSL/TLS certificate for HTTPS,
● Distribute the content using Amazon CloudFront (choosing Price Class 100 keeps costs lower by using edge locations mostly in the US, Canada, and Europe).
S3 website hosting endpoints do not support HTTPS natively without CloudFront.
Why other options are wrong:
A. S3 static website hosting supports HTTP but not HTTPS directly — CloudFront is needed for HTTPS.
B. Price Class All includes global edge locations, which is more expensive than necessary if the website’s audience is mostly regional.
C. Deploying a CMS is overkill for a simple static website and costlier than using S3 + CloudFront.
Source: https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html
Source: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PriceClass.html
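
A compressed boto3 sketch of option D, assuming the bucket already exists and the ACM certificate will be DNS-validated before the distribution is created; the domain, bucket name, and caller reference are placeholders, and the Route 53 alias record is omitted.

```python
import boto3

acm = boto3.client("acm", region_name="us-east-1")  # CloudFront certificates must be in us-east-1
cloudfront = boto3.client("cloudfront")

cert_arn = acm.request_certificate(
    DomainName="www.example.com",
    ValidationMethod="DNS",
)["CertificateArn"]
# The certificate must reach the ISSUED state (DNS validation) before it can be attached.

cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": "static-site-2024",
        "Comment": "Static website",
        "Enabled": True,
        "PriceClass": "PriceClass_100",     # edge locations in North America and Europe only
        "Aliases": {"Quantity": 1, "Items": ["www.example.com"]},
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": "s3-origin",
                "DomainName": "example-website-bucket.s3.amazonaws.com",
                "S3OriginConfig": {"OriginAccessIdentity": ""},
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "s3-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            "ForwardedValues": {"QueryString": False, "Cookies": {"Forward": "none"}},
            "MinTTL": 0,
        },
        "ViewerCertificate": {
            "ACMCertificateArn": cert_arn,
            "SSLSupportMethod": "sni-only",
            "MinimumProtocolVersion": "TLSv1.2_2021",
        },
    }
)
```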


A solutions architect at an ecommerce company wants to back up application log data to Amazon S3. The solutions architect is unsure how frequently the logs will be accessed or which logs will be accessed the most. The company wants to keep costs as low as possible by using the appropriate S3 storage class.

Which S3 storage class should be implemented to meet these requirements?

⬜ A. S3 Glacier
✅ B. S3 Intelligent-Tiering
⬜ C. S3 Standard-Infrequent Access (S3 Standard-IA)
⬜ D. S3 One Zone-Infrequent Access (S3 One Zone-IA)

Explanation:
The best choice when access patterns are unknown and cost savings are needed is:
● B. S3 Intelligent-Tiering automatically moves objects between access tiers (frequent access, infrequent access, and archive instant access) based on changing access patterns, with no performance impact or operational overhead.
● It is cost-optimized for data with unknown or unpredictable access patterns, and avoids unnecessary retrieval or transition fees.
Why other options are wrong:
A. S3 Glacier is for archival storage — not suitable for data that may need quick or unpredictable access.
C. S3 Standard-IA is priced for infrequent access: it charges per-GB retrieval fees and has a 30-day minimum storage duration, so frequently or unpredictably accessed objects cost more.
D. S3 One Zone-IA stores data in a single Availability Zone; it is cheaper but less resilient and is still intended for rarely accessed data, not for unknown access patterns.
Source: https://docs.aws.amazon.com/AmazonS3/latest/userguide/intelligent-tiering-overview.html
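
A short boto3 sketch showing how the log backups could be written straight into the Intelligent-Tiering storage class; the bucket name, key, and file are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Write new log objects into S3 Intelligent-Tiering so S3 moves each object
# between access tiers automatically as its access pattern changes.
s3.put_object(
    Bucket="example-app-logs",
    Key="logs/2024/05/app-01.log",
    Body=open("app-01.log", "rb"),
    StorageClass="INTELLIGENT_TIERING",
)
```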


A company is implementing a shared storage solution for a media application that is hosted in the AWS Cloud. The company needs the ability to use SMB clients to access data. The solution must be fully managed.

Which AWS solution meets these requirements?

⬜ A. Create an AWS Storage Gateway volume gateway. Create a file share that uses the required client protocol. Connect the application server to the file share.
⬜ B. Create an AWS Storage Gateway tape gateway. Configure tapes to use Amazon S3. Connect the application server to the tape gateway.
⬜ C. Create an Amazon EC2 Windows instance. Install and configure a Windows file share role on the instance. Connect the application server to the file share.
✅ D. Create an Amazon FSx for Windows File Server file system. Attach the file system to the origin server. Connect the application server to the file system.

Explanation:
The best solution for SMB (Server Message Block) access with a fully managed AWS service is:
● D. Amazon FSx for Windows File Server.
● It provides a fully managed, native Windows file system with SMB protocol support, Windows ACLs, Active Directory integration, and high availability across multiple AZs.
Why other options are wrong:
A. Storage Gateway volume gateway provides iSCSI block storage, not SMB file storage.
B. Storage Gateway tape gateway is for virtual tape libraries, not file sharing.
C. Setting up a Windows file server on EC2 requires server management, which goes against the requirement for a fully managed solution.
Source: https://docs.aws.amazon.com/fsx/latest/WindowsGuide/what-is.html
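
A boto3 sketch of creating a Multi-AZ FSx for Windows File Server file system, assuming an existing AWS Managed Microsoft AD directory; subnet, security group, directory IDs, and capacity values are placeholders.

```python
import boto3

fsx = boto3.client("fsx")

# Multi-AZ file system; SMB clients then map the file system's DNS name as a network drive.
fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=1024,                        # GiB
    SubnetIds=["subnet-0aaa11112222bbbb3", "subnet-0ccc33334444dddd5"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    WindowsConfiguration={
        "DeploymentType": "MULTI_AZ_1",
        "PreferredSubnetId": "subnet-0aaa11112222bbbb3",
        "ThroughputCapacity": 32,                # MB/s
        "ActiveDirectoryId": "d-9067012345",     # AWS Managed Microsoft AD (placeholder)
    },
)
```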


A company runs an application on a group of Amazon Linux EC2 instances. For compliance reasons, the company must retain all application log files for 7 years. The log files will be analyzed by a reporting tool that must be able to access all the files concurrently.

Which storage solution meets these requirements MOST cost-effectively?

⬜ A. Amazon Elastic Block Store (Amazon EBS)
⬜ B. Amazon Elastic File System (Amazon EFS)
⬜ C. Amazon EC2 instance store
✅ D. Amazon S3

Explanation:
The most cost-effective and scalable solution for long-term storage with concurrent access is:
● D. Amazon S3.
● S3 provides durable object storage with virtually unlimited capacity, and multiple clients (such as EC2 instances or reporting tools) can access objects concurrently.
● S3 is far cheaper for long-term storage compared to EBS or EFS, and it supports storage class tiers (e.g., S3 Glacier for archival) for additional cost optimization.
Why other options are wrong:
A. EBS volumes are block storage attached to a single EC2 instance; they cannot be shared across instances without complex setups and are costlier than S3 for 7 years of storage.
B. EFS is shared file storage but is more expensive than S3 and better suited for frequent access.
C. Instance store is ephemeral and data is lost if the instance stops or terminates — unsuitable for compliance and long-term retention.
Source: https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html
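
A boto3 sketch of the cost-optimization piece: an S3 Lifecycle rule that archives logs to Glacier and expires them after roughly 7 years. The bucket name, prefix, and transition timing are illustrative assumptions, not requirements from the question.

```python
import boto3

s3 = boto3.client("s3")

# Keep recent logs in S3 Standard, archive them to Glacier after 90 days, and delete
# them once the 7-year (~2,555 days) retention period has passed.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-compliance-logs",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-and-expire-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 2555},
        }],
    },
)
```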


An online retail company needs to run near-real-time analytics on website traffic to analyze top-selling products across different locations. The product purchase data and the user location details are sent to a third-party application that runs on premises. The application processes the data and moves the data into the company’s analytics engine.

The company needs to implement a cloud-based solution to make the data available for near-real-time analytics.

Which solution will meet these requirements with the LEAST operational overhead?

⬜ A. Use Amazon Kinesis Data Streams to ingest the data. Use AWS Lambda to transform the data. Configure Lambda to write the data to Amazon OpenSearch Service (Amazon Elasticsearch Service).
⬜ B. Configure Amazon Kinesis Data Streams to write the data to an Amazon S3 bucket. Schedule an AWS Glue crawler job to enrich the data and update the AWS Glue Data Catalog. Use Amazon Athena for analytics.
⬜ C. Configure Amazon Kinesis Data Streams to write the data to an Amazon S3 bucket. Add an Apache Spark job on Amazon EMR to enrich the data in the S3 bucket and write the data to Amazon OpenSearch Service (Amazon Elasticsearch Service).
✅ D. Use Amazon Kinesis Data Firehose to ingest the data. Enable Kinesis Data Firehose data transformation with AWS Lambda. Configure Kinesis Data Firehose to write the data to Amazon OpenSearch Service (Amazon Elasticsearch Service).

Explanation:
The best solution for near-real-time analytics with minimal operational overhead is:
● D. Use Amazon Kinesis Data Firehose, a fully managed service that can ingest, transform (using AWS Lambda), and deliver data directly to Amazon OpenSearch Service.
● Kinesis Data Firehose handles scaling, retries, and delivery automatically, reducing the need for operational management.
Why other options are wrong:
A. Kinesis Data Streams + Lambda would require more setup and management (manual provisioning, scaling, and error handling).
B. Writing to S3 and analyzing with Athena is batch-based, not near-real-time.
C. Using EMR and Spark involves more operational overhead (cluster management and tuning) — not the simplest option.
Source: https://docs.aws.amazon.com/firehose/latest/dev/what-is-this-service.html
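
A small boto3 sketch of the producer side only, assuming a Firehose delivery stream has already been configured (once, via the console or infrastructure as code) with a Lambda transform and an OpenSearch destination; the stream name and event fields are placeholders.

```python
import json
import boto3

firehose = boto3.client("firehose")

# Send purchase events into the pre-configured delivery stream; Firehose handles
# batching, the Lambda transform, retries, and delivery to OpenSearch.
event = {"product_id": "P-1001", "location": "Seattle", "quantity": 2}
firehose.put_record(
    DeliveryStreamName="orders-to-opensearch",
    Record={"Data": (json.dumps(event) + "\n").encode("utf-8")},
)
```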


A company is building a web application that serves a content management system. The content management system runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The EC2 instances run in an Auto Scaling group across multiple Availability Zones. Users are constantly adding and updating files, blogs, and other website assets in the content management system.

A solutions architect must implement a solution in which all the EC2 instances share up-to-date website content with the least possible lag time.

Which solution meets these requirements?

⬜ A. Update the EC2 user data in the Auto Scaling group lifecycle policy to copy the website assets from the EC2 instance that was launched most recently. Configure the ALB to make changes to the website assets only in the newest EC2 instance.
✅ B. Copy the website assets to an Amazon Elastic File System (Amazon EFS) file system. Configure each EC2 instance to mount the EFS file system locally. Configure the website hosting application to reference the website assets that are stored in the EFS file system.
⬜ C. Copy the website assets to an Amazon S3 bucket. Ensure that each EC2 instance downloads the website assets from the S3 bucket to the attached Amazon Elastic Block Store (Amazon EBS) volume. Run the S3 sync command once each hour to keep files up to date.
⬜ D. Restore an Amazon Elastic Block Store (Amazon EBS) snapshot with the website assets. Attach the EBS snapshot as a secondary EBS volume when a new EC2 instance is launched. Configure the website hosting application to reference the website assets that are stored in the secondary EBS volume.

Explanation:
The correct and most efficient solution is:
● B. Use Amazon EFS (Elastic File System) to provide a shared file system that is accessible simultaneously by all EC2 instances across multiple Availability Zones.
● EFS ensures that all EC2 instances have up-to-date access to the same files with minimal lag time and no need for syncing or manual copying.
Why other options are wrong:
A. Copying from the newest instance is manual, error-prone, and not scalable. It doesn’t ensure low-latency or real-time updates across multiple instances.
C. S3 sync every hour creates a delay of up to an hour, which does not meet near-real-time update requirements.
D. EBS snapshots are static; they do not update in real-time. Also, EBS volumes cannot be shared across multiple instances.
Source: https://docs.aws.amazon.com/efs/latest/ug/whatisefs.html
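
A boto3 sketch of the shared-storage setup: one EFS file system with a mount target per Availability Zone so every instance in the Auto Scaling group can mount the same data. Subnet and security group IDs, and the mount path, are placeholders.

```python
import boto3

efs = boto3.client("efs")

# Shared file system for the content management system's assets.
fs = efs.create_file_system(CreationToken="cms-shared-assets")

# One mount target per AZ used by the Auto Scaling group.
for subnet_id in ["subnet-0aaa11112222bbbb3", "subnet-0ccc33334444dddd5"]:
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],
    )

# Each instance then mounts the file system (for example in user data or /etc/fstab):
#   sudo mount -t efs <file-system-id>:/ /var/www/assets
```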


An application running on an Amazon EC2 instance needs to access an Amazon DynamoDB table. Both the EC2 instance and the DynamoDB table are in the same AWS account. A solutions architect must configure the necessary permissions.

Which solution will allow least privilege access to the DynamoDB table from the EC2 instance?

✅ A. Create an IAM role with the appropriate policy to allow access to the DynamoDB table. Create an instance profile to assign this IAM role to the EC2 instance.
⬜ B. Create an IAM role with the appropriate policy to allow access to the DynamoDB table. Add the EC2 instance to the trust relationship policy document to allow it to assume the role.
⬜ C. Create an IAM user with the appropriate policy to allow access to the DynamoDB table. Store the credentials in an Amazon S3 bucket and read them from within the application code directly.
⬜ D. Create an IAM user with the appropriate policy to allow access to the DynamoDB table. Ensure that the application stores the IAM credentials securely on local storage and uses them to make the DynamoDB calls.

Explanation:
The best and most secure solution for least privilege access is:
● A. Create an IAM role with the minimal permissions needed for the DynamoDB table, and assign the role to the EC2 instance through an instance profile.
● This allows the EC2 instance to assume the role automatically without hardcoding or manually managing credentials, ensuring secure and temporary access.
Why other options are wrong:
B. EC2 instances assume roles through instance profiles, not directly by modifying the trust policy for the instance itself.
C. Storing IAM user credentials in an S3 bucket and fetching them is insecure and unnecessary.
D. Storing IAM user credentials locally on the EC2 instance violates security best practices and increases operational risk.
Source: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html
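
A boto3 sketch of option A under stated assumptions: the role name, table ARN, account ID, and instance ID are placeholders, and the policy grants only the actions the application is assumed to need.

```python
import json
import boto3

iam = boto3.client("iam")
ec2 = boto3.client("ec2")

# Role that EC2 can assume, scoped to a single DynamoDB table (least privilege).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(RoleName="app-dynamodb-role", AssumeRolePolicyDocument=json.dumps(trust_policy))

table_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/Orders",
    }],
}
iam.put_role_policy(
    RoleName="app-dynamodb-role",
    PolicyName="orders-table-access",
    PolicyDocument=json.dumps(table_policy),
)

# Wrap the role in an instance profile and associate it with the running instance.
iam.create_instance_profile(InstanceProfileName="app-dynamodb-profile")
iam.add_role_to_instance_profile(
    InstanceProfileName="app-dynamodb-profile", RoleName="app-dynamodb-role"
)
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "app-dynamodb-profile"},
    InstanceId="i-0123456789abcdef0",
)
```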


A solutions architect is designing a new hybrid architecture to extend a company’s on-premises infrastructure to AWS. The company requires a highly available connection with consistent low latency to an AWS Region. The company needs to minimize costs and is willing to accept slower traffic if the primary connection fails.

What should the solutions architect do to meet these requirements?

✅ A. Provision an AWS Direct Connect connection to a Region. Provision a VPN connection as a backup if the primary Direct Connect connection fails.
⬜ B. Provision a VPN tunnel connection to a Region for private connectivity. Provision a second VPN tunnel for private connectivity and as a backup if the primary VPN connection fails.
⬜ C. Provision an AWS Direct Connect connection to a Region. Provision a second Direct Connect connection to the same Region as a backup if the primary Direct Connect connection fails.
⬜ D. Provision an AWS Direct Connect connection to a Region. Use the Direct Connect failover attribute from the AWS CLI to automatically create a backup connection if the primary Direct Connect connection fails.

Explanation:
The best solution that meets high availability, low latency for primary, and cost efficiency is:
● A. Use AWS Direct Connect for primary, low-latency, high-performance private connectivity.
● Provision a VPN connection as a lower-cost, slower backup in case the Direct Connect link fails.
● This setup ensures primary fast access and fallback resilience without the expense of provisioning a second Direct Connect link.
Why other options are wrong:
B. VPNs are cheaper but have higher latency and are less reliable than Direct Connect.
C. A second Direct Connect link would meet latency goals but greatly increases cost, which the company wants to avoid.
D. There is no Direct Connect failover attribute that automatically creates a backup connection; resiliency requires provisioning a second connection or a VPN backup in advance.
Source: https://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html


A company is building a disaster recovery (DR) solution. The company wants to rotate its primary systems between AWS Regions on a regular basis. The company’s application is geographically distributed and includes a serverless web tier. The application’s database tier runs on Amazon Aurora.

A solutions architect needs to build an architecture for the database layer to implement managed, planned failover.

Which combination of actions will meet these requirements with the LEAST downtime? (Select TWO.)

✅ A. Create an Aurora DB cluster. Configure Aurora Replicas.
⬜ B. Fail over to one of the secondary DB clusters from another Region.
⬜ C. Create an Aurora DB cluster snapshot. Restore from the snapshot.
✅ D. Configure an Aurora global database. Set up a secondary DB cluster.
⬜ E. Promote one of the read replicas as a writer from the Amazon RDS console.

Explanation:
The correct solution to rotate primary systems between Regions with minimal downtime is:
● A. Set up an Aurora DB cluster with Aurora Replicas to maintain local high availability.
● D. Configure an Aurora Global Database to replicate the primary cluster to another Region with low-lag, fast failover capabilities.
● Aurora Global Database is designed for cross-Region disaster recovery, allowing managed planned failover with downtime usually less than a minute.
Why other options are wrong:
B. Simply failing over to a secondary cluster implies manual processes if not using Aurora Global Database — more downtime.
C. Restoring from snapshots introduces significant downtime and data staleness — not suitable for low downtime DR.
E. Promoting a read replica is a manual process with longer downtime and potential replication lag issues.
Source: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database.html
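
A boto3 sketch of option D plus the managed planned failover itself; the cluster names, Regions, and account ID are placeholders, and the secondary cluster would normally also get reader instances before a planned Region rotation.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Promote the existing Aurora cluster into a global database.
rds.create_global_cluster(
    GlobalClusterIdentifier="orders-global",
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:111122223333:cluster:orders-primary",
)

# Add a secondary cluster in another Region.
rds_west = boto3.client("rds", region_name="us-west-2")
rds_west.create_db_cluster(
    DBClusterIdentifier="orders-secondary",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="orders-global",
)

# Managed planned failover: switch the writer to the secondary Region with no data loss.
rds.failover_global_cluster(
    GlobalClusterIdentifier="orders-global",
    TargetDbClusterIdentifier="arn:aws:rds:us-west-2:111122223333:cluster:orders-secondary",
)
```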


A company needs the ability to analyze the log files of its proprietary application. The logs are stored in JSON format in an Amazon S3 bucket. Queries will be simple and will run on-demand. A solutions architect needs to perform the analysis with minimal changes to the existing architecture.

What should the solutions architect do to meet these requirements with the LEAST amount of operational overhead?

⬜ A. Use Amazon Redshift to load all the content into one place and run the SQL queries as needed.
⬜ B. Use Amazon CloudWatch Logs to store the logs. Run SQL queries as needed from the Amazon CloudWatch console.
✅ C. Use Amazon Athena directly with Amazon S3 to run the queries as needed.
⬜ D. Use AWS Glue to catalog the logs. Use a transient Apache Spark cluster on Amazon EMR to run the SQL queries as needed.

Explanation:
The best solution with minimal operational overhead is:
● C. Use Amazon Athena directly to query the JSON logs in Amazon S3 using standard SQL syntax.
● Athena is serverless, so there is no infrastructure to manage, and you pay only for the data that each query scans. It can directly query structured, semi-structured (like JSON), and unstructured data stored in S3 without moving the data.
Why other options are wrong:
A. Amazon Redshift would require loading data into Redshift, which introduces extra steps and management overhead.
B. CloudWatch Logs is for real-time log storage and monitoring, not designed for large-scale ad-hoc querying on existing S3 data.
D. AWS Glue + EMR introduces more complexity and cost, and is overkill for simple, on-demand SQL queries.
Source: https://docs.aws.amazon.com/athena/latest/ug/what-is.html
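
A boto3 sketch of running an on-demand query; the database, table, columns, and results bucket are placeholders (the table itself would be defined once with a CREATE EXTERNAL TABLE statement or an AWS Glue crawler).

```python
import boto3

athena = boto3.client("athena")

# Ad-hoc SQL over the JSON logs already sitting in S3.
response = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS hits FROM app_logs GROUP BY status",
    QueryExecutionContext={"Database": "logs_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print(response["QueryExecutionId"])
```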


A company uses AWS Organizations to manage multiple AWS accounts for different departments. The management account has an Amazon S3 bucket that contains project reports. The company wants to limit access to this S3 bucket to only users of accounts within the organization in AWS Organizations.

Which solution meets these requirements with the LEAST amount of operational overhead?

✅ A. Add the aws:PrincipalOrgID global condition key with a reference to the organization ID to the S3 bucket policy.
⬜ B. Create an organizational unit (OU) for each department. Add the aws:PrincipalOrgPaths global condition key to the S3 bucket policy.
⬜ C. Use AWS CloudTrail to monitor the CreateAccount, InviteAccountToOrganization, LeaveOrganization, and RemoveAccountFromOrganization events. Update the S3 bucket policy accordingly.
⬜ D. Tag each user that needs access to the S3 bucket. Add the aws:PrincipalTag global condition key to the S3 bucket policy.

Explanation:
The best solution with minimal operational overhead is:
● A. Use the aws:PrincipalOrgID condition key in the S3 bucket policy to automatically allow access only to identities that belong to accounts within a specific AWS Organization.
● This is simple, scalable, and automatically covers all accounts in the organization without needing to track or manually update policies as accounts are added or removed.
Why other options are wrong:
B. aws:PrincipalOrgPaths is more specific to organizational unit (OU) paths, but not needed just to restrict access to the entire organization.
C. Monitoring CloudTrail events and updating policies manually would be high operational overhead and error-prone.
D. Tagging individual users introduces manual tagging effort and requires managing tag consistency, which is unnecessary for this use case.
Source: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-principalorgid
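
A boto3 sketch of the bucket policy described above; the bucket name and organization ID are placeholders, and the statement is limited to s3:GetObject for illustration.

```python
import json
import boto3

s3 = boto3.client("s3")

# Allow reads only from principals whose account belongs to the organization.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowOrgMembersOnly",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-project-reports/*",
        "Condition": {"StringEquals": {"aws:PrincipalOrgID": "o-a1b2c3d4e5"}},
    }],
}
s3.put_bucket_policy(Bucket="example-project-reports", Policy=json.dumps(policy))
```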


An application runs on an Amazon EC2 instance in a VPC. The application processes logs that are stored in an Amazon S3 bucket. The EC2 instance needs to access the S3 bucket without connectivity to the internet.

Which solution will provide private network connectivity to Amazon S3?

✅ A. Create a gateway VPC endpoint to the S3 bucket.
⬜ B. Stream the logs to Amazon CloudWatch Logs. Export the logs to the S3 bucket.
⬜ C. Create an instance profile on Amazon EC2 to allow S3 access.
⬜ D. Create an Amazon API Gateway API with a private link to access the S3 endpoint.

Explanation:
The correct solution for private access to Amazon S3 from within a VPC is:
● A. Create a gateway VPC endpoint for S3.
● A VPC gateway endpoint enables private connections between your VPC and S3 without requiring an internet gateway, NAT device, or VPN connection, ensuring that traffic does not leave the AWS network.
Why other options are wrong:
B. Streaming logs to CloudWatch and then exporting to S3 is not direct access and adds unnecessary complexity.
C. An instance profile provides authorization (IAM permissions) to access S3, but it does not change the network path — the EC2 instance would still need internet access unless using a VPC endpoint.
D. API Gateway is unnecessary and overcomplicates a simple S3 access requirement.
Source: https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html
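
A one-call boto3 sketch of the gateway endpoint; the VPC ID, route table ID, and Region in the service name are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint for S3; the private subnets' route table gets a route to S3
# that stays on the AWS network, with no internet gateway or NAT required.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
```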


A company is hosting a web application on AWS using a single Amazon EC2 instance that stores user-uploaded documents in an Amazon EBS volume. For better scalability and availability, the company duplicated the architecture and created a second EC2 instance and EBS volume in another Availability Zone, placing both behind an Application Load Balancer. After completing this change, users reported that, each time they refreshed the website, they could see one subset of their documents or the other, but never all of the documents at the same time.

What should a solutions architect propose to ensure users see all of their documents at once?

⬜ A. Copy the data so both EBS volumes contain all the documents.
⬜ B. Configure the Application Load Balancer to direct a user to the server with the documents.
✅ C. Copy the data from both EBS volumes to Amazon EFS. Modify the application to save new documents to Amazon EFS.
⬜ D. Configure the Application Load Balancer to send the request to both servers. Return each document from the correct server.

Explanation:
The correct solution is:
● C. Use Amazon EFS (Elastic File System), which provides a shared file system accessible from multiple EC2 instances across Availability Zones.
● By storing all documents centrally in EFS, both EC2 instances can simultaneously access and update the same files, ensuring users always see the complete set of documents, regardless of which instance serves the request.
Why other options are wrong:
A. Manually copying data is error-prone, not scalable, and does not ensure real-time consistency.
B. Routing users to the “correct server” does not solve the problem for new uploads or failover situations.
D. Sending requests to both servers would double network overhead, complicate application logic, and still not guarantee consistency.
Source: https://docs.aws.amazon.com/efs/latest/ug/whatisefs.html


A company has an application that ingests incoming messages. Dozens of other applications and microservices then quickly consume these messages. The number of messages varies drastically and sometimes increases suddenly to 100,000 each second. The company wants to decouple the solution and increase scalability.

Which solution meets these requirements?

⬜ A. Persist the messages to Amazon Kinesis Data Analytics. Configure the consumer applications to read and process the messages.
⬜ B. Deploy the ingestion application on Amazon EC2 instances in an Auto Scaling group to scale the number of EC2 instances based on CPU metrics.
⬜ C. Write the messages to Amazon Kinesis Data Streams with a single shard. Use an AWS Lambda function to preprocess messages and store them in Amazon DynamoDB. Configure the consumer applications to read from DynamoDB to process the messages.
✅ D. Publish the messages to an Amazon Simple Notification Service (Amazon SNS) topic with multiple Amazon Simple Queue Service (Amazon SQS) subscriptions. Configure the consumer applications to process the messages from the queues.

Explanation:
The correct solution for highly scalable, decoupled, fan-out messaging architecture is:
● D. Use Amazon SNS to publish messages once and fan them out to multiple SQS queues.
● Each consumer application can process messages independently from its own SQS queue, allowing scalability and decoupling.
● SQS standard queues support nearly unlimited throughput, so they can absorb sudden surges such as 100,000 messages each second.
Why other options are wrong:
A. Kinesis Data Analytics is for real-time analytics, not for fan-out message distribution.
B. Scaling EC2 instances manually based on CPU metrics would not handle sudden spikes instantly and adds more operational complexity.
C. A single shard in Kinesis Data Streams would not handle 100,000 messages/sec (each shard supports only about 1 MB/sec or 1,000 records/sec).
Source: https://docs.aws.amazon.com/sns/latest/dg/sns-sqs-as-subscriber.html
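
A boto3 sketch of the fan-out wiring; the topic, queue, and consumer names are placeholders, and each queue would additionally need an SQS access policy that lets the topic send messages to it.

```python
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

# One topic, one queue per consumer microservice; each consumer processes at its own pace.
topic_arn = sns.create_topic(Name="incoming-orders")["TopicArn"]

for consumer in ["billing", "inventory", "shipping"]:
    queue_url = sqs.create_queue(QueueName=f"orders-{consumer}")["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]
    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)

# Publish once; SNS fans the message out to every subscribed queue.
sns.publish(TopicArn=topic_arn, Message='{"order_id": "12345", "sku": "ABC-1"}')
```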


A company is migrating a distributed application to AWS. The application serves variable workloads. The legacy platform consists of a primary server that coordinates jobs across multiple compute nodes. The company wants to modernize the application with a solution that maximizes resiliency and scalability.

How should a solutions architect design the architecture to meet these requirements?

⬜ A. Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling to use scheduled scaling.
✅ B. Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling based on the size of the queue.
⬜ C. Implement the primary server and the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure AWS CloudTrail as a destination for the jobs. Configure EC2 Auto Scaling based on the load on the primary server.
⬜ D. Implement the primary server and the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure Amazon EventBridge (Amazon CloudWatch Events) as a destination for the jobs. Configure EC2 Auto Scaling based on the load on the compute nodes.

Explanation:
The correct and most resilient and scalable solution is:
● B. Use Amazon SQS to decouple job coordination and Auto Scaling based on queue size.
● When the number of pending messages grows, EC2 Auto Scaling can launch more compute instances to process the jobs, ensuring the system automatically adapts to workload variability without depending on a single primary server.
Why other options are wrong:
A. Scheduled scaling is static and cannot adapt dynamically to sudden changes in workload.
C. CloudTrail is for logging API events, not job coordination.
D. EventBridge is for event-driven architecture, but not a queue service designed for handling large volumes of jobs in a decoupled manner.
Source: https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-using-sqs-queue.html
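
A boto3 sketch of scaling the workers on queue depth, assuming a custom "backlog per instance" metric (for example, ApproximateNumberOfMessagesVisible divided by in-service instances) is already published to CloudWatch by a separate job; the group, metric, and queue names are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Target-tracking policy: keep roughly 10 pending messages per worker instance.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="job-workers",
    PolicyName="scale-on-queue-backlog",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "BacklogPerInstance",
            "Namespace": "JobProcessing",
            "Dimensions": [{"Name": "QueueName", "Value": "job-queue"}],
            "Statistic": "Average",
        },
        "TargetValue": 10.0,
    },
)
```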


A company is running an SMB file server in its data center. The file server stores large files that are accessed frequently for the first few days after the files are created. After 7 days, the files are rarely accessed.

The total data size is increasing and is close to the company’s total storage capacity. A solutions architect must increase the company’s available storage space without losing low-latency access to the most recently accessed files. The solutions architect must also provide file lifecycle management to avoid future storage issues.

Which solution will meet these requirements?

⬜ A. Use AWS DataSync to copy data that is older than 7 days from the SMB file server to AWS.
✅ B. Create an Amazon S3 File Gateway to extend the company’s storage space. Create an S3 Lifecycle policy to transition the data to S3 Glacier Deep Archive after 7 days.
⬜ C. Create an Amazon FSx for Windows File Server file system to extend the company’s storage space.
⬜ D. Install a utility on each user’s computer to access Amazon S3. Create an S3 Lifecycle policy to transition the data to S3 Glacier Flexible Retrieval after 7 days.

Explanation:
The correct solution that extends storage, preserves low-latency access for new files, and automates lifecycle management is:
● B. Use an Amazon S3 File Gateway to provide on-premises SMB access to files stored in Amazon S3, while using local caching for recently accessed files.
● Additionally, an S3 Lifecycle policy can automatically transition older files to S3 Glacier Deep Archive to save costs on infrequently accessed data.
Why other options are wrong:
A. AWS DataSync copies data but does not extend the local file system or provide ongoing, seamless access.
C. Amazon FSx is a fully managed file system but would require migrating the workload and does not integrate with the existing on-premises server easily.
D. Installing a utility on each user’s computer adds complexity and breaks the SMB user experience — not a seamless extension.
Source: https://docs.aws.amazon.com/filegateway/latest/files3/WhatIsStorageGateway.html


A company is building an ecommerce web application on AWS. The application sends information about new orders to an Amazon API Gateway REST API to process. The company wants to ensure that orders are processed in the order that they are received.

Which solution will meet these requirements?

⬜ A. Use an API Gateway integration to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic when the application receives an order. Subscribe an AWS Lambda function to the topic to perform processing.
✅ B. Use an API Gateway integration to send a message to an Amazon Simple Queue Service (Amazon SQS) FIFO queue when the application receives an order. Configure the SQS FIFO queue to invoke an AWS Lambda function for processing.
⬜ C. Use an API Gateway authorizer to block any requests while the application processes an order.
⬜ D. Use an API Gateway integration to send a message to an Amazon Simple Queue Service (Amazon SQS) standard queue when the application receives an order. Configure the SQS standard queue to invoke an AWS Lambda function for processing.

Explanation:
The correct solution for preserving the order of message processing is:
● B. Use Amazon SQS FIFO (First-In-First-Out) queues which guarantee the order of message delivery and processing.
● API Gateway will send the order message to the FIFO queue, and then Lambda can process them one by one in the exact order received.
Why other options are wrong:
A. SNS is a pub/sub system and does not guarantee ordering of message delivery.
C. An API Gateway authorizer is used for authentication/authorization, not for sequencing or managing request flow.
D. An SQS standard queue provides best-effort ordering but does not guarantee strict ordering, which is required here.
Source: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues.html
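
A boto3 sketch of the FIFO piece; the queue name, message body, and message group ID are placeholders.

```python
import boto3

sqs = boto3.client("sqs")

# FIFO queue preserves ordering; content-based deduplication drops accidental duplicates.
queue_url = sqs.create_queue(
    QueueName="orders.fifo",
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)["QueueUrl"]

# All messages that share a MessageGroupId are delivered strictly in the order sent.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody='{"order_id": "1001", "items": ["ABC-1"]}',
    MessageGroupId="orders",
)
```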


A company has an application that runs on Amazon EC2 instances and uses an Amazon Aurora database. The EC2 instances connect to the database by using user names and passwords that are stored locally in a file. The company wants to minimize the operational overhead of credential management.

What should a solutions architect do to accomplish this goal?

✅ A. Use AWS Secrets Manager. Turn on automatic rotation.
⬜ B. Use AWS Systems Manager Parameter Store. Turn on automatic rotation.
⬜ C. Create an Amazon S3 bucket to store objects that are encrypted with an AWS Key Management Service (AWS KMS) encryption key. Migrate the credential file to the S3 bucket. Point the application to the S3 bucket.
⬜ D. Create an encrypted Amazon Elastic Block Store (Amazon EBS) volume for each EC2 instance. Attach the new EBS volume to each EC2 instance. Migrate the credential file to the new EBS volume. Point the application to the new EBS volume.

Explanation:
The correct and most efficient solution to minimize operational overhead for secure credential management is:
● A. Use AWS Secrets Manager to store and automatically rotate database credentials securely.
● Secrets Manager integrates easily with Amazon Aurora, and you can configure automatic rotation without changing application code significantly, reducing manual credential maintenance and increasing security.
Why other options are wrong:
B. AWS Systems Manager Parameter Store can store secrets, but it does not provide built-in automatic rotation; that capability is native to Secrets Manager.
C. Storing credentials in an S3 bucket, even encrypted, requires custom access logic and manual rotation — more operational overhead.
D. Storing credentials on encrypted EBS volumes does not solve the credential management or rotation challenge — it’s still manual and less secure.
Source: https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html
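
A boto3 sketch of storing and rotating the Aurora credentials; the secret name, values, and rotation Lambda ARN are placeholders (AWS publishes rotation function templates for RDS/Aurora).

```python
import json
import boto3

secrets = boto3.client("secretsmanager")

# Store the credentials once and rotate them every 30 days via a rotation Lambda function.
secrets.create_secret(
    Name="prod/aurora/app-user",
    SecretString=json.dumps({"username": "app_user", "password": "initial-password"}),
)
secrets.rotate_secret(
    SecretId="prod/aurora/app-user",
    RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:SecretsManagerRDSRotation",
    RotationRules={"AutomaticallyAfterDays": 30},
)

# The application fetches credentials at runtime instead of reading a local file.
creds = json.loads(secrets.get_secret_value(SecretId="prod/aurora/app-user")["SecretString"])
```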


A global company hosts its web application on Amazon EC2 instances behind an Application Load Balancer (ALB). The web application has static data and dynamic data. The company stores its static data in an Amazon S3 bucket. The company wants to improve performance and reduce latency for the static data and dynamic data. The company is using its own domain name registered with Amazon Route 53.

What should a solutions architect do to meet these requirements?

✅ A. Create an Amazon CloudFront distribution that has the S3 bucket and the ALB as origins. Configure Route 53 to route traffic to the CloudFront distribution.
⬜ B. Create an Amazon CloudFront distribution that has the ALB as an origin. Create an AWS Global Accelerator standard accelerator that has the S3 bucket as an endpoint. Configure Route 53 to route traffic to the CloudFront distribution.
⬜ C. Create an Amazon CloudFront distribution that has the S3 bucket as an origin. Create an AWS Global Accelerator standard accelerator that has the ALB and the CloudFront distribution as endpoints. Create a custom domain name that points to the accelerator DNS name. Use the custom domain name as an endpoint for the web application.
⬜ D. Create an Amazon CloudFront distribution that has the ALB as an origin. Create an AWS Global Accelerator standard accelerator that has the S3 bucket as an endpoint. Create two domain names. Point one domain name to the CloudFront DNS name for dynamic content. Point the other domain name to the accelerator DNS name for static content. Use the domain names as endpoints for the web application.

Explanation:
The correct and most efficient solution is:
● A. Use Amazon CloudFront as a single distribution with multiple origins (S3 for static content and ALB for dynamic content).
● This provides global caching to reduce latency, improve performance, and simplify routing with one unified domain via Route 53.
Why other options are wrong:
● B, C, and D involve using AWS Global Accelerator, which primarily optimizes TCP/UDP traffic rather than providing caching and content distribution like CloudFront.
● Additionally, creating multiple domain names and accelerators unnecessarily increases complexity without providing better performance for static content caching compared to CloudFront.
Source: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html


A company performs monthly maintenance on its AWS infrastructure. During these maintenance activities, the company needs to rotate the credentials for its Amazon RDS for MySQL databases across multiple AWS Regions.

Which solution will meet these requirements with the LEAST operational overhead?

✅ A. Store the credentials as secrets in AWS Secrets Manager. Use multi-Region secret replication for the required Regions. Configure Secrets Manager to rotate the secrets on a schedule.
⬜ B. Store the credentials as secrets in AWS Systems Manager by creating a secure string parameter. Use multi-Region secret replication for the required Regions. Configure Systems Manager to rotate the secrets on a schedule.
⬜ C. Store the credentials in an Amazon S3 bucket that has server-side encryption (SSE) enabled. Use Amazon EventBridge (Amazon CloudWatch Events) to invoke an AWS Lambda function to rotate the credentials.
⬜ D. Encrypt the credentials as secrets by using AWS Key Management Service (AWS KMS) multi-Region customer managed keys. Store the secrets in an Amazon DynamoDB global table. Use an AWS Lambda function to retrieve the secrets from DynamoDB. Use the RDS API to rotate the secrets.

Explanation:
The correct solution that provides least operational overhead with automatic rotation and multi-Region replication is:
A. Use AWS Secrets Manager, which natively supports secret rotation, multi-Region replication, and integration with Amazon RDS for managed credentials without needing to build custom Lambda functions or scripts.
Why other options are wrong:
B. AWS Systems Manager Parameter Store does not natively support automatic rotation or seamless multi-Region replication like Secrets Manager.
C. Managing credentials in S3 with custom EventBridge and Lambda adds high operational complexity.
D. Using DynamoDB and KMS with custom Lambda logic is overly complicated and manual compared to Secrets Manager’s managed features.
Source: https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotate-secrets.html


A company runs an ecommerce application on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. The Auto Scaling group scales based on CPU utilization metrics. The ecommerce application stores the transaction data in a MySQL 8.0 database that is hosted on a large EC2 instance.

The database’s performance degrades quickly as application load increases. The application handles more read requests than write transactions. The company wants a solution that will automatically scale the database to meet the demand of unpredictable read workloads while maintaining high availability.

Which solution will meet these requirements?

⬜ A. Use Amazon Redshift with a single node for leader and compute functionality.
⬜ B. Use Amazon RDS with a Single-AZ deployment. Configure Amazon RDS to add reader instances in a different Availability Zone.
✅ C. Use Amazon Aurora with a Multi-AZ deployment. Configure Aurora Auto Scaling with Aurora Replicas.
⬜ D. Use Amazon ElastiCache for Memcached with EC2 Spot Instances.

Explanation:
The correct solution to handle unpredictable read workloads with automatic scaling and high availability is:
● C. Use Amazon Aurora with a Multi-AZ deployment and Aurora Auto Scaling for Aurora Replicas.
● Aurora Replicas can scale out read workloads automatically based on demand, while the primary instance handles writes.
● Aurora’s architecture provides high availability, low replication lag, and fault tolerance across multiple Availability Zones.
Why other options are wrong:
A. Amazon Redshift is for analytical (OLAP) workloads, not transactional (OLTP) ecommerce applications.
B. Amazon RDS Single-AZ deployment does not provide high availability. Also, manually adding readers is not as seamless as Aurora Auto Scaling.
D. ElastiCache (e.g., Memcached) is for caching — it cannot replace a relational database for transaction persistence.
Source: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-replica-autoscaling.html
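
A boto3 sketch of Aurora replica auto scaling via Application Auto Scaling; the cluster name, capacity range, and CPU target are illustrative assumptions.

```python
import boto3

appscaling = boto3.client("application-autoscaling")

# Let Aurora add or remove replicas between 1 and 8 based on average reader CPU.
appscaling.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:ecommerce-aurora",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=8,
)
appscaling.put_scaling_policy(
    PolicyName="reader-cpu-target",
    ServiceNamespace="rds",
    ResourceId="cluster:ecommerce-aurora",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "RDSReaderAverageCPUUtilization"},
        "TargetValue": 60.0,
    },
)
```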


A company recently migrated to AWS and wants to implement a solution to protect the traffic that flows in and out of the production VPC. The company had an inspection server in its on-premises data center. The inspection server performed specific operations such as traffic flow inspection and traffic filtering. The company wants to have the same functionalities in the AWS Cloud.

Which solution will meet these requirements?

⬜ A. Use Amazon GuardDuty for traffic inspection and traffic filtering in the production VPC.
⬜ B. Use Traffic Mirroring to mirror traffic from the production VPC for traffic inspection and filtering.
✅ C. Use AWS Network Firewall to create the required rules for traffic inspection and traffic filtering for the production VPC.
⬜ D. Use AWS Firewall Manager to create the required rules for traffic inspection and traffic filtering for the production VPC.

Explanation:
The correct solution for inspecting, filtering, and protecting traffic in and out of a VPC is:
● C. Use AWS Network Firewall, which provides stateful traffic inspection, intrusion prevention, and fine-grained traffic filtering capabilities inside the VPC.
● Network Firewall is managed, scalable, and natively integrated with VPCs, making it ideal for production-level traffic control and threat protection.
Why other options are wrong:
A. Amazon GuardDuty detects threats but does not actively filter or inspect traffic in real time.
B. Traffic Mirroring captures packets for analysis only — it does not block or filter traffic.
D. AWS Firewall Manager manages firewall policies across multiple accounts, but it uses Network Firewall or WAF — by itself it does not perform direct traffic inspection.
Source: https://docs.aws.amazon.com/network-firewall/latest/developerguide/what-is-aws-network-firewall.html


A company hosts a data lake on AWS. The data lake consists of data in Amazon S3 and Amazon RDS for PostgreSQL. The company needs a reporting solution that provides data visualization and includes all the data sources within the data lake. Only the company’s management team should have full access to all the visualizations. The rest of the company should have only limited access.

Which solution will meet these requirements?

⬜ A. Create an analysis in Amazon QuickSight. Connect all the data sources and create new datasets. Publish dashboards to visualize the data. Share the dashboards with the appropriate IAM roles.
✅ B. Create an analysis in Amazon QuickSight. Connect all the data sources and create new datasets. Publish dashboards to visualize the data. Share the dashboards with the appropriate users and groups.
⬜ C. Create an AWS Glue table and crawler for the data in Amazon S3. Create an AWS Glue extract, transform, and load (ETL) job to produce reports. Publish the reports to Amazon S3. Use S3 bucket policies to limit access to the reports.
⬜ D. Create an AWS Glue table and crawler for the data in Amazon S3. Use Amazon Athena Federated Query to access data within Amazon RDS for PostgreSQL. Generate reports by using Amazon Athena. Publish the reports to Amazon S3. Use S3 bucket policies to limit access to the reports.

Explanation:
The correct solution for data visualization across multiple data sources with fine-grained user access control is:
● B. Use Amazon QuickSight, which can connect to multiple data sources (like S3 and RDS) and allows sharing dashboards with specific users and groups for precise access control.
● QuickSight allows easy management of user-based access with different levels of visibility based on groups and users.
Why other options are wrong:
A. IAM roles are not the correct method for managing QuickSight user access to dashboards. QuickSight natively manages user and group sharing.
C. AWS Glue ETL jobs and S3 reports do not provide interactive visualizations — they just produce static files.
D. Athena with S3 reports similarly lacks live dashboards and interactive visualization, and S3 access policies are less flexible for user-specific visual controls.
Source: https://docs.aws.amazon.com/quicksight/latest/user/welcome.html


A company is implementing a new business application. The application runs on two Amazon EC2 instances and uses an Amazon S3 bucket for document storage. A solutions architect needs to ensure that the EC2 instances can access the S3 bucket.

What should the solutions architect do to meet this requirement?

✅ A. Create an IAM role that grants access to the S3 bucket. Attach the role to the EC2 instances.
⬜ B. Create an IAM policy that grants access to the S3 bucket. Attach the policy to the EC2 instances.
⬜ C. Create an IAM group that grants access to the S3 bucket. Attach the group to the EC2 instances.
⬜ D. Create an IAM user that grants access to the S3 bucket. Attach the user account to the EC2 instances.

Explanation:
The correct and secure way for EC2 instances to access AWS services like S3 is:
● A. Create an IAM role with the appropriate permissions to access the S3 bucket, and attach the role to the EC2 instances.
● The instances then automatically obtain temporary security credentials via the role, which is the best practice for secure, automatic credential management in AWS.
Why other options are wrong:
B. You cannot directly attach an IAM policy to an EC2 instance; it must be attached to a role.
C. IAM groups are for grouping users, not for attaching to EC2 instances.
D. IAM users are meant for people, not for EC2 instances — attaching user credentials manually is not recommended and less secure.
Source: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html


An application development team is designing a microservice that will convert large images to smaller, compressed images. When a user uploads an image through the web interface, the microservice should store the image in an Amazon S3 bucket, process and compress the image with an AWS Lambda function, and store the image in its compressed form in a different S3 bucket.

A solutions architect needs to design a solution that uses durable, stateless components to process the images automatically.

Which combination of actions will meet these requirements? (Choose two.)

✅ A. Create an Amazon Simple Queue Service (Amazon SQS) queue. Configure the S3 bucket to send a notification to the SQS queue when an image is uploaded to the S3 bucket.
✅ B. Configure the Lambda function to use the Amazon Simple Queue Service (Amazon SQS) queue as the invocation source. When the SQS message is successfully processed, delete the message in the queue.
⬜ C. Configure the Lambda function to monitor the S3 bucket for new uploads. When an uploaded image is detected, write the file name to a text file in memory and use the text file to keep track of the images that were processed.
⬜ D. Launch an Amazon EC2 instance to monitor an Amazon Simple Queue Service (Amazon SQS) queue. When items are added to the queue, log the file name in a text file on the EC2 instance and invoke the Lambda function.
⬜ E. Configure an Amazon EventBridge (Amazon CloudWatch Events) event to monitor the S3 bucket. When an image is uploaded, send an alert to an Amazon Simple Notification Service (Amazon SNS) topic with the application owner’s email address for further processing.

Explanation:
The best solution that uses durable, stateless, and serverless components is:
A. Use Amazon SQS to queue notifications from the S3 bucket, ensuring durability and resiliency even if Lambda is temporarily unavailable.
B. Configure the Lambda function to poll the SQS queue. Lambda can automatically scale to process messages and delete messages after successful processing.
This decouples upload events from processing, ensuring reliable and automatic image compression without manual tracking or state maintenance.
Why other options are wrong:
C. Writing filenames manually to memory is not durable or stateless — memory is lost when the Lambda function execution ends.
D. Launching EC2 to monitor the queue adds unnecessary operational overhead — Lambda can natively poll SQS.
E. EventBridge and SNS only alert — they do not trigger processing automatically for the uploaded images.
Source: https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html
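
A boto3 sketch wiring options A and B together; the bucket, queue ARN, and function names are placeholders, and the queue would also need an access policy allowing this bucket to send messages.

```python
import boto3

s3 = boto3.client("s3")
lam = boto3.client("lambda")

# S3 sends an event to the queue for every new upload.
s3.put_bucket_notification_configuration(
    Bucket="example-raw-images",
    NotificationConfiguration={
        "QueueConfigurations": [{
            "QueueArn": "arn:aws:sqs:us-east-1:111122223333:image-uploads",
            "Events": ["s3:ObjectCreated:*"],
        }],
    },
)

# Lambda polls the queue in batches and compresses each image; successfully
# processed messages are removed from the queue by the event source mapping.
lam.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:111122223333:image-uploads",
    FunctionName="compress-image",
    BatchSize=10,
)
```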


A company has a three-tier web application that is deployed on AWS. The web servers are deployed in a public subnet in a VPC. The application servers and database servers are deployed in private subnets in the same VPC. The company has deployed a third-party virtual firewall appliance from AWS Marketplace in an inspection VPC. The appliance is configured with an IP interface that can accept IP packets.

A solutions architect needs to integrate the web application with the appliance to inspect all traffic to the application before the traffic reaches the web server.

Which solution will meet these requirements with the LEAST operational overhead?

⬜ A. Create a Network Load Balancer in the public subnet of the application’s VPC to route the traffic to the appliance for packet inspection.
⬜ B. Create an Application Load Balancer in the public subnet of the application’s VPC to route the traffic to the appliance for packet inspection.
⬜ C. Deploy a transit gateway in the inspection VPC. Configure route tables to route the incoming packets through the transit gateway.
✅ D. Deploy a Gateway Load Balancer in the inspection VPC. Create a Gateway Load Balancer endpoint to receive the incoming packets and forward the packets to the appliance.

Explanation:
The best solution that provides automatic traffic inspection with minimal operational overhead is:
● D. Use an AWS Gateway Load Balancer (GWLB), which is specifically designed to deploy and scale third-party virtual appliances like firewalls, intrusion detection systems, etc.
● The Gateway Load Balancer endpoint makes it easy to redirect and inspect traffic without modifying the application architecture manually.
Why other options are wrong:
A. A Network Load Balancer (NLB) does not natively integrate with firewall appliances for packet-level inspection.
B. An Application Load Balancer (ALB) works at the HTTP/HTTPS layer (Layer 7), not packet inspection (Layer 3/4).
C. Transit Gateway provides routing between VPCs but does not perform automatic packet inspection and would require complex route configuration.
Source: https://docs.aws.amazon.com/elasticloadbalancing/latest/gateway/introduction.html


A company wants to improve its ability to clone large amounts of production data into a test environment in the same AWS Region. The data is stored in Amazon EC2 instances on Amazon Elastic Block Store (Amazon EBS) volumes. Modifications to the cloned data must not affect the production environment. The software that accesses this data requires consistently high I/O performance.

A solutions architect needs to minimize the time that is required to clone the production data into the test environment.

Which solution will meet these requirements?

⬜ A. Take EBS snapshots of the production EBS volumes. Restore the snapshots onto EC2 instance store volumes in the test environment.
⬜ B. Configure the production EBS volumes to use the EBS Multi-Attach feature. Take EBS snapshots of the production EBS volumes. Attach the production EBS volumes to the EC2 instances in the test environment.
⬜ C. Take EBS snapshots of the production EBS volumes. Create and initialize new EBS volumes. Attach the new EBS volumes to EC2 instances in the test environment before restoring the volumes from the production EBS snapshots.
✅ D. Take EBS snapshots of the production EBS volumes. Turn on the EBS fast snapshot restore feature on the EBS snapshots. Restore the snapshots into new EBS volumes. Attach the new EBS volumes to EC2 instances in the test environment.

Explanation:
The correct solution that minimizes cloning time while ensuring high I/O performance is:
● D. Use EBS fast snapshot restore, which allows you to quickly create fully-initialized EBS volumes from snapshots without the typical performance penalties of restoring data on-demand.
● This guarantees immediate, high performance from the moment the new EBS volumes are attached to the EC2 instances, which is essential for test environments needing production-like I/O performance.
Why other options are wrong:
A. EC2 instance store volumes are ephemeral, not persistent, and cannot restore from EBS snapshots.
B. EBS Multi-Attach is only supported for specific EBS volume types (io1/io2) and does not support cloning or isolation between production and test environments.
C. Attaching volumes before restoring snapshots would not speed up cloning — restoration still needs to complete, and performance would be slow initially.
Source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-fast-snapshot-restore.html
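
A boto3 sketch of option D; the snapshot ID, Availability Zone, volume type, instance ID, and device name are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Enable fast snapshot restore in the AZ where the test instances run.
ec2.enable_fast_snapshot_restores(
    AvailabilityZones=["us-east-1a"],
    SourceSnapshotIds=["snap-0123456789abcdef0"],
)

# Volumes created from the snapshot are fully initialized and deliver full
# performance immediately.
volume = ec2.create_volume(
    SnapshotId="snap-0123456789abcdef0",
    AvailabilityZone="us-east-1a",
    VolumeType="gp3",
)
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",
    Device="/dev/sdf",
)
```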


A company runs a public-facing three-tier web application in a VPC across multiple Availability Zones.

Amazon EC2 instances for the application tier running in private subnets need to download software patches from the internet. However, the EC2 instances cannot be directly accessible from the internet.

Which actions should be taken to allow the EC2 instances to download the needed patches? (Select TWO.)

✅ A. Configure a NAT gateway in a public subnet.
✅ B. Define a custom route table with a route to the NAT gateway for internet traffic and associate it with the private subnets for the application tier.
⬜ C. Assign Elastic IP addresses to the EC2 instances.
⬜ D. Define a custom route table with a route to the internet gateway for internet traffic and associate it with the private subnets for the application tier.
⬜ E. Configure a NAT instance in a private subnet.

Explanation:
The correct approach for allowing private EC2 instances to access the internet securely is:
● A. Deploy a NAT gateway in a public subnet. NAT gateways allow instances in private subnets to initiate outbound traffic to the internet but block inbound connections initiated from the internet.
● B. Modify the route table for the private subnets to send internet-bound traffic (0.0.0.0/0) to the NAT gateway.
This setup ensures that EC2 instances remain private (no direct internet access) while being able to download patches and updates.
Why other options are wrong:
C. Assigning Elastic IP addresses would make instances publicly accessible, which violates the requirement.
D. Associating a private subnet with an internet gateway would expose instances to the internet, which is not secure.
E. A NAT instance placed in a private subnet has no route to the internet, so it cannot provide outbound access; even in a public subnet, a NAT instance requires more operational overhead than a managed NAT gateway.
Source: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat.html
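
A boto3 sketch of options A and B together; the subnet and route table IDs are placeholders, and the route table is assumed to be the one associated with the private application-tier subnets.

```python
import boto3

ec2 = boto3.client("ec2")

# NAT gateway lives in a public subnet and needs an Elastic IP.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0aaa11112222bbbb3",          # public subnet
    AllocationId=eip["AllocationId"],
)

# Send internet-bound traffic from the private subnets through the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",         # route table of the private subnets
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGateway"]["NatGatewayId"],
)
```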


A solutions architect wants to design a solution to save costs for Amazon EC2 instances that do not need to run during a 2-week company shutdown. The applications running on the EC2 instances store data in instance memory that must be present when the instances resume operation.

Which approach should the solutions architect recommend to shut down and resume the EC2 instances?

⬜ A. Modify the application to store the data on instance store volumes. Reattach the volumes while restarting them.
⬜ B. Snapshot the EC2 instances before stopping them. Restore the snapshot after restarting the instances.
✅ C. Run the applications on EC2 instances enabled for hibernation. Hibernate the instances before the 2-week company shutdown.
⬜ D. Note the Availability Zone for each EC2 instance before stopping it. Restart the instances in the same Availability Zones after the 2-week company shutdown.

Explanation:
The correct solution to preserve instance memory (RAM) and save costs during the shutdown is:
● C. Use EC2 Hibernation, which saves the instance’s in-memory state (RAM) to the root EBS volume and stops the instance, allowing it to resume with memory contents intact after the shutdown.
● During hibernation, you only pay for storage (EBS), not compute (EC2 instance running cost).
Why other options are wrong:
A. Instance store volumes are ephemeral and data is lost when the instance stops or terminates.
B. Snapshots capture the disk state, not the in-memory state (RAM).
D. Restarting in the same Availability Zone does not preserve RAM contents — only hibernation can do that.
Source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Hibernate.html
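
A short boto3 sketch; the AMI, instance type, and instance ID are placeholders. Hibernation must be enabled at launch, and the root EBS volume must be encrypted and large enough to hold the instance's RAM.

```python
import boto3

ec2 = boto3.client("ec2")

# Launch with hibernation enabled.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    HibernationOptions={"Configured": True},
)

# Before the 2-week shutdown: hibernate instead of a plain stop so RAM is preserved.
ec2.stop_instances(InstanceIds=["i-0123456789abcdef0"], Hibernate=True)
```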


A company plans to run a monitoring application on an Amazon EC2 instance in a VPC. Connections are made to the EC2 instance using the instance’s private IPv4 address. A solutions architect needs to design a solution that will allow traffic to be quickly directed to a standby EC2 instance if the application fails and becomes unreachable.

Which approach will meet these requirements?

⬜ A. Deploy an Application Load Balancer configured with a listener for the private IP address and register the primary EC2 instance with the load balancer. Upon failure, de-register the instance and register the standby EC2 instance.
⬜ B. Configure a custom DHCP option set. Configure DHCP to assign the same private IP address to the standby EC2 instance when the primary EC2 instance fails.
✅ C. Attach a secondary elastic network interface to the EC2 instance configured with the private IP address. Move the network interface to the standby EC2 instance if the primary EC2 instance becomes unreachable.
⬜ D. Associate an Elastic IP address with the network interface of the primary EC2 instance. Disassociate the Elastic IP from the primary instance upon failure and associate it with a standby EC2 instance.

Explanation:
The best solution to quickly fail over to a standby EC2 instance using a private IP address is:
● C. Use a secondary Elastic Network Interface (ENI) that is attached to the primary instance.
● Upon failure, detach the ENI and reattach it to the standby instance.
● The private IP address moves along with the ENI, allowing traffic redirection with minimal downtime and without changing DNS records.
Why other options are wrong:
A. An Application Load Balancer cannot be configured with a listener for a specific instance private IP address; clients connect to the load balancer's DNS name, so the existing private IP that clients already use would not be preserved.
B. VPC DHCP option sets configure settings such as DNS servers and domain names; they cannot reassign a specific private IP address to a standby instance on failure.
D. Elastic IP addresses are public IP addresses — the question explicitly mentions private IP connectivity, not public.
Source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html
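A rough boto3 sketch of the failover step described above (detach the secondary ENI and attach it to the standby). The ENI and instance IDs are placeholders, and in practice this would run from a health-check script or Lambda function.

```python
import boto3

ec2 = boto3.client("ec2")

ENI_ID = "eni-0123456789abcdef0"            # placeholder secondary ENI holding the private IP
STANDBY_INSTANCE_ID = "i-0fedcba987654321"  # placeholder standby instance

# Detach the ENI from the failed primary instance, if it is still attached.
eni = ec2.describe_network_interfaces(NetworkInterfaceIds=[ENI_ID])["NetworkInterfaces"][0]
attachment = eni.get("Attachment")
if attachment:
    ec2.detach_network_interface(AttachmentId=attachment["AttachmentId"], Force=True)
    ec2.get_waiter("network_interface_available").wait(NetworkInterfaceIds=[ENI_ID])

# Attach it to the standby instance; the private IP address moves with the ENI.
ec2.attach_network_interface(
    NetworkInterfaceId=ENI_ID,
    InstanceId=STANDBY_INSTANCE_ID,
    DeviceIndex=1,
)
```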


An analytics company is planning to offer a web analytics service to its users. The service will require that the users’ webpages include a JavaScript script that makes authenticated GET requests to the company’s Amazon S3 bucket.

What must a solutions architect do to ensure that the script will successfully execute?

✅ A. Enable cross-origin resource sharing (CORS) on the S3 bucket.
⬜ B. Enable S3 Versioning on the S3 bucket.
⬜ C. Provide the users with a signed URL for the script.
⬜ D. Configure an S3 bucket policy to allow public execute privileges.

Explanation:
The correct action to allow a script on a user’s web page to make cross-origin HTTP requests to an S3 bucket is:
● A. Enable Cross-Origin Resource Sharing (CORS) on the S3 bucket.
● Without CORS enabled, browsers block cross-origin JavaScript calls for security reasons.
● CORS policies define who can access resources, what methods (GET, POST, etc.) are allowed, and under what conditions.
Why other options are wrong:
B. S3 Versioning is about object versions, not cross-origin access.
C. Signed URLs are used for temporary, secure access, but do not solve cross-origin browser restrictions.
D. Public execute privileges are not a valid permission type for S3 buckets — and public access might be unnecessary or risky here.
Source: https://docs.aws.amazon.com/AmazonS3/latest/userguide/cors.html
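As an illustration, a CORS rule like the following (bucket and origin names are made up) can be applied with boto3 to allow the authenticated GET requests from customers' pages:

```python
import boto3

s3 = boto3.client("s3")

# Allow browser GET requests from a customer's site to this bucket.
s3.put_bucket_cors(
    Bucket="example-analytics-bucket",
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedOrigins": ["https://customer-site.example.com"],
                "AllowedMethods": ["GET"],
                "AllowedHeaders": ["Authorization"],
                "MaxAgeSeconds": 3000,
            }
        ]
    },
)
```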


A company’s security team requires that all data stored in the cloud be encrypted at rest at all times using encryption keys stored on premises.

Which encryption options meet these requirements? (Select TWO.)

⬜ A. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3).
⬜ B. Use server-side encryption with AWS KMS managed encryption keys (SSE-KMS).
✅ C. Use server-side encryption with customer-provided encryption keys (SSE-C).
✅ D. Use client-side encryption to provide at-rest encryption.
⬜ E. Use an AWS Lambda function invoked by Amazon S3 events to encrypt the data using the customer’s keys.

Explanation:
The correct approaches to ensure encryption with keys stored on premises are:
C. Server-side encryption with customer-provided keys (SSE-C) allows the client to provide encryption keys for S3, ensuring that AWS never stores the encryption keys.
D. Client-side encryption encrypts data before sending it to AWS using encryption keys that are managed and stored locally (on-premises), ensuring compliance with the requirement.
Why other options are wrong:
A. SSE-S3 uses S3-managed keys, not customer-managed keys stored on premises.
B. SSE-KMS uses AWS Key Management Service (KMS) keys — even if customer-managed, they are stored within AWS.
E. Using a Lambda function would still require S3 to temporarily store unencrypted data before the Lambda function encrypts it, violating strict “always encrypted at rest” requirements.
Source: https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingEncryption.html
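To make option C concrete, here is a hedged boto3 sketch of SSE-C, where the caller supplies its own 256-bit key on every request and S3 never stores it. The bucket, object key, and data are placeholders.

```python
import os
import boto3

s3 = boto3.client("s3")

# In practice the key comes from the on-premises key store; os.urandom is a stand-in.
customer_key = os.urandom(32)

# Upload: S3 encrypts server-side with the supplied key, then discards the key.
s3.put_object(
    Bucket="example-bucket",
    Key="reports/q1.csv",
    Body=b"sensitive,data",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)

# Download: the exact same key must be supplied again or the request fails.
obj = s3.get_object(
    Bucket="example-bucket",
    Key="reports/q1.csv",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)
print(obj["Body"].read())
```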


A company uses Amazon EC2 Reserved Instances to run its data processing workload. The nightly job typically takes 7 hours to run and must finish within a 10-hour time window. The company anticipates temporary increases in demand at the end of each month that will cause the job to run over the time limit with the capacity of the current resources. Once started, the processing job cannot be interrupted before completion. The company wants to implement a solution that would provide increased resource capacity as cost-effectively as possible.

What should a solutions architect do to accomplish this?

✅ A. Deploy On-Demand Instances during periods of high demand.
⬜ B. Create a second EC2 reservation for additional instances.
⬜ C. Deploy Spot Instances during periods of high demand.
⬜ D. Increase the EC2 instance size in the EC2 reservation to support the increased workload.

Explanation:
The best approach to cost-effectively and reliably scale temporarily without risking interruption is:
● A. Use On-Demand Instances during periods of temporary high demand.
● On-Demand Instances are not interruptible (unlike Spot Instances) and provide flexible scaling without needing long-term commitments (like new Reserved Instances).
● Since the job cannot be interrupted once started, Spot Instances are not suitable despite being cheaper.
Why other options are wrong:
B. Purchasing another Reserved Instance is not cost-effective for temporary needs — Reserved Instances require a long-term commitment (1 or 3 years).
C. Spot Instances are interruptible at any time by AWS, which risks job failure.
D. Increasing the instance size would increase costs permanently — and the workload spike is temporary.
Source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-purchasing-options.html


A company runs an online voting system for a weekly live television program. During broadcasts, users submit hundreds of thousands of votes within minutes to a front-end fleet of Amazon EC2 instances that run in an Auto Scaling group. The EC2 instances write the votes to an Amazon RDS database. However, the database is unable to keep up with the requests that come from the EC2 instances. A solutions architect must design a solution that processes the votes in the most efficient manner and without downtime.

Which solution meets these requirements?

⬜ A. Migrate the front-end application to AWS Lambda. Use Amazon API Gateway to route user requests to the Lambda functions.
⬜ B. Scale the database horizontally by converting it to a Multi-AZ deployment. Configure the front-end application to write to both the primary and secondary DB instances.
✅ C. Configure the front-end application to send votes to an Amazon Simple Queue Service (Amazon SQS) queue. Provision worker instances to read the SQS queue and write the vote information to the database.
⬜ D. Use Amazon EventBridge (Amazon CloudWatch Events) to create a scheduled event to re-provision the database with larger, memory optimized instances during voting periods. When voting ends, re-provision the database to use smaller instances.

Explanation:
The correct solution to smooth sudden traffic spikes without downtime is:
● C. Use Amazon SQS to decouple the vote submissions from the database writes.
● Incoming votes are queued reliably and then processed asynchronously by a fleet of worker instances that write to the database at a sustainable rate.
● This buffers the traffic spikes, preventing the RDS database from being overwhelmed.
Why other options are wrong:
A. Migrating to Lambda is a major re-architecture, unnecessary for the immediate scaling problem.
B. A Multi-AZ deployment improves availability, not performance, and the standby DB instance cannot serve reads or writes; it exists only for failover, so the application cannot write to it.
D. Dynamically resizing the DB with EventBridge introduces downtime and complexity — not ideal for real-time, high-throughput voting systems.
Source: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/welcome.html
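A minimal sketch of the decoupling pattern with boto3 (the queue URL and the database write helper are hypothetical): the front end enqueues votes, and workers drain the queue at a rate the database can handle.

```python
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/votes"  # placeholder


def submit_vote(user_id: str, choice: str) -> None:
    """Front-end path: enqueue the vote instead of writing to RDS directly."""
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"user_id": user_id, "choice": choice}),
    )


def process_votes(write_vote_to_db) -> None:
    """Worker path: long-poll the queue and persist votes at a sustainable rate."""
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
    )
    for msg in resp.get("Messages", []):
        write_vote_to_db(json.loads(msg["Body"]))  # hypothetical DB helper
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```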


A company has a two-tier application architecture that runs in public and private subnets. Amazon EC2 instances running the web application are in the public subnet and an EC2 instance for the database runs on the private subnet. The web application instances and the database are running in a single Availability Zone (AZ).

Which combination of steps should a solutions architect take to provide high availability for this architecture? (Select TWO.)

⬜ A. Create new public and private subnets in the same AZ.
✅ B. Create an Amazon EC2 Auto Scaling group and Application Load Balancer spanning multiple AZs for the web application instances.
⬜ C. Add the existing web application instances to an Auto Scaling group behind an Application Load Balancer.
⬜ D. Create new public and private subnets in a new AZ. Create a database using an EC2 instance in the public subnet in the new AZ. Migrate the old database contents to the new database.
✅ E. Create new public and private subnets in the same VPC, each in a new AZ. Create an Amazon RDS Multi-AZ DB instance in the private subnets. Migrate the old database contents to the new DB instance.

Explanation:
To provide high availability across multiple AZs for both application and database layers:
B. Create an Auto Scaling group with instances spread across multiple AZs and an Application Load Balancer (ALB) to distribute traffic between healthy instances.
E. Create new public and private subnets in a second AZ and move the database to Amazon RDS with Multi-AZ deployment, which automatically provides failover and resiliency across AZs.
Why other options are wrong:
A. Creating new subnets in the same AZ does not provide high availability — need multiple AZs.
C. Adding existing instances to an Auto Scaling group without spanning multiple AZs does not meet HA requirements.
D. Running a database instance in a public subnet is against best practices for security — databases should always reside in private subnets.
Source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-auto-scaling-groups.html
Source: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html
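For illustration, the two steps roughly translate into API calls like the following (all names, subnet IDs, and the target group ARN are placeholders): an Auto Scaling group spread over subnets in two AZs behind an ALB, plus a Multi-AZ RDS instance in a private DB subnet group.

```python
import boto3

autoscaling = boto3.client("autoscaling")
rds = boto3.client("rds")

# Web tier: span the Auto Scaling group across subnets in two AZs,
# registered with the ALB's target group.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=6,
    VPCZoneIdentifier="subnet-0aaa111122223333a,subnet-0bbb444455556666b",
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/0123456789abcdef"
    ],
)

# Data tier: a Multi-AZ RDS instance placed in private subnets via a DB subnet group.
rds.create_db_instance(
    DBInstanceIdentifier="app-db",
    Engine="mysql",
    DBInstanceClass="db.m6g.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",
    DBSubnetGroupName="private-db-subnets",
    MultiAZ=True,
)
```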


A website runs a custom web application that receives a burst of traffic each day at noon. The users upload new pictures and content daily, but have been complaining of timeouts. The architecture uses Amazon EC2 Auto Scaling groups, and the application consistently takes 1 minute to initiate upon boot up before responding to user requests.

How should a solutions architect redesign the architecture to better respond to changing traffic?

⬜ A. Configure a Network Load Balancer with a slow start configuration.
⬜ B. Configure Amazon ElastiCache for Redis to offload direct requests from the EC2 instances.
✅ C. Configure an Auto Scaling step scaling policy with an EC2 instance warmup condition.
⬜ D. Configure Amazon CloudFront to use an Application Load Balancer as the origin.

Explanation:
The best solution to address instance initialization delays during a predictable traffic burst is:
● C. Configure a step scaling policy and use an EC2 instance warmup setting in the Auto Scaling group.
● The warmup setting tells the Auto Scaling group to exclude a newly launched instance from the group's aggregated metrics until its initialization delay (about 1 minute here) has passed, so scale-out decisions during the noon burst are not skewed by instances that are still starting up and the group neither over- nor under-provisions.
Why other options are wrong:
A. A Network Load Balancer operates at Layer 4 (TCP/UDP) and does not directly manage application readiness or warmup periods.
B. ElastiCache improves read performance, but does not solve application initialization delays after scaling events.
D. CloudFront helps with content delivery and caching but does not fix instance warm-up or Auto Scaling responsiveness issues.
Source: https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-instance-warmup.html
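A hedged boto3 sketch of option C, assuming a group named web-asg. The 60-second EstimatedInstanceWarmup matches the application's one-minute start-up time; the policy would still need to be wired to a CloudWatch alarm separately.

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",          # placeholder group name
    PolicyName="scale-out-on-cpu",
    PolicyType="StepScaling",
    AdjustmentType="ChangeInCapacity",
    EstimatedInstanceWarmup=60,              # exclude new instances from metrics for ~1 minute
    StepAdjustments=[
        # How far the metric breaches the alarm threshold decides how many instances to add.
        {"MetricIntervalLowerBound": 0, "MetricIntervalUpperBound": 20, "ScalingAdjustment": 1},
        {"MetricIntervalLowerBound": 20, "ScalingAdjustment": 3},
    ],
)
```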


An application running on AWS uses an Amazon Aurora Multi-AZ DB cluster deployment for its database. When evaluating performance metrics, a solutions architect discovered that the database reads are causing high I/O and adding latency to the write requests against the database.

What should the solutions architect do to separate the read requests from the write requests?

⬜ A. Enable read-through caching on the Aurora database.
⬜ B. Update the application to read from the Multi-AZ standby instance.
✅ C. Create an Aurora replica and modify the application to use the appropriate endpoints.
⬜ D. Create a second Aurora database and link it to the primary database as a read replica.

Explanation:
The best way to separate reads and writes in Aurora is:
● C. Create Aurora Replicas and update the application to use the read endpoint for read queries and the writer endpoint for writes.
● Aurora Replicas automatically scale read traffic and offload reads from the primary writer instance, improving write performance by reducing read contention.
Why other options are wrong:
A. Aurora does not support read-through caching natively — you would use services like ElastiCache separately, and it wouldn’t solve write pressure directly.
B. Aurora does not have a separate non-readable standby like RDS Multi-AZ; its failover targets are the Aurora Replicas themselves, and reads should be directed through the cluster reader endpoint rather than at a specific "standby" instance.
D. Creating a separate Aurora cluster as a replica is not the right solution — Aurora supports built-in replicas within the same cluster for read scaling.
Source: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Replication.html
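In application code the separation usually comes down to choosing the endpoint, as in this simplified PyMySQL sketch; the endpoints, credentials, and table are hypothetical, with the cluster-ro endpoint load-balancing across the Aurora Replicas.

```python
import pymysql

# Hypothetical Aurora endpoints: the writer (cluster) endpoint and the reader endpoint.
WRITER_ENDPOINT = "myapp.cluster-abc123xyz.us-east-1.rds.amazonaws.com"
READER_ENDPOINT = "myapp.cluster-ro-abc123xyz.us-east-1.rds.amazonaws.com"


def get_connection(read_only: bool = False):
    """Route reads to the replicas and writes to the primary instance."""
    return pymysql.connect(
        host=READER_ENDPOINT if read_only else WRITER_ENDPOINT,
        user="app_user",
        password="REPLACE_ME",
        database="appdb",
    )


conn = get_connection(read_only=True)
try:
    with conn.cursor() as cur:
        cur.execute("SELECT COUNT(*) FROM orders")  # read query hits the reader endpoint
        print(cur.fetchone())
finally:
    conn.close()
```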


A tech company has a CRM application hosted on an Auto Scaling group of On-Demand EC2 instances with different instance types and sizes. The application is extensively used during office hours from 9 in the morning to 5 in the afternoon. Their users are complaining that the performance of the application is slow during the start of the day but then works normally after a couple of hours.

Which of the following is the MOST operationally efficient solution to implement to ensure the application works properly at the beginning of the day?

⬜ A. Configure a Dynamic scaling policy for the Auto Scaling group to launch new instances based on the CPU utilization.
⬜ B. Configure a Dynamic scaling policy for the Auto Scaling group to launch new instances based on the Memory utilization.
✅ C. Configure a Scheduled scaling policy for the Auto Scaling group to launch new instances before the start of the day.
⬜ D. Configure a Predictive scaling policy for the Auto Scaling group to automatically adjust the number of Amazon EC2 instances.

Explanation:
The best solution for a predictable, scheduled traffic pattern (like office hours) is:
● C. Use a Scheduled Scaling policy to proactively launch new EC2 instances before 9 AM.
● Scheduled scaling ensures that the required capacity is already available when users log in, preventing delays caused by dynamic scaling.
Why other options are wrong:
A. CPU-based dynamic scaling reacts after utilization is high — not proactive.
B. Memory utilization dynamic scaling also reacts late and memory metrics are not natively available without CloudWatch Agent.
D. Predictive scaling needs time and historical data to learn patterns; Scheduled scaling is simpler and faster to implement for a known schedule.
Source: https://docs.aws.amazon.com/autoscaling/ec2/userguide/schedule_time.html
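A minimal boto3 sketch of option C, assuming a group named crm-asg. The recurrence is a cron expression evaluated in UTC, so the hours below would be adjusted so capacity is in place just before 9 AM local time.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale out shortly before office hours begin on weekdays.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="crm-asg",
    ScheduledActionName="office-hours-scale-out",
    Recurrence="30 8 * * MON-FRI",   # cron, evaluated in UTC
    MinSize=4,
    DesiredCapacity=6,
)

# Scale back in after the workday ends to save costs.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="crm-asg",
    ScheduledActionName="after-hours-scale-in",
    Recurrence="0 18 * * MON-FRI",
    MinSize=1,
    DesiredCapacity=1,
)
```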


A financial application consists of an Auto Scaling group of Amazon EC2 instances, an Application Load Balancer, and a MySQL RDS instance set up in a Multi-AZ Deployment configuration. To protect customers’ confidential data, it must be ensured that the Amazon RDS database is only accessible using an authentication token specific to the profile credentials of EC2 instances.

Which of the following actions should be taken to meet this requirement?

✅ A. Enable the IAM DB Authentication.
⬜ B. Configure SSL in your application to encrypt the database connection to RDS.
⬜ C. Create an IAM Role and assign it to your EC2 instances which will grant exclusive access to your RDS instance.
⬜ D. Use a combination of IAM and STS to enforce restricted access to your RDS instance using a temporary authentication token.

Explanation:
The correct method is:
● A. Enable IAM DB Authentication for Amazon RDS.
● This allows database access to be authenticated using AWS IAM tokens instead of traditional username/passwords. The tokens are generated based on IAM role credentials that EC2 instances use, ensuring that only authorized instances can access the database securely.
Why other options are wrong:
B. SSL encrypts the connection but does not authenticate access based on IAM credentials.
C. Assigning an IAM role to EC2 is necessary, but alone it does not enable token-based RDS authentication without IAM DB Authentication enabled.
D. IAM and STS provide temporary credentials for AWS services generally, but IAM DB Authentication is the specific solution for RDS access control.
Source: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.html
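To show the moving parts, here is a hedged sketch of connecting with an IAM authentication token from an EC2 instance profile. The endpoint, the DB user (which must be created with the MySQL AWSAuthenticationPlugin), and the CA bundle path are placeholders, and the instance role needs rds-db:connect permission.

```python
import boto3
import pymysql

DB_HOST = "mydb.abc123xyz456.us-east-1.rds.amazonaws.com"  # placeholder endpoint
DB_USER = "iam_app_user"                                   # placeholder IAM-enabled DB user

rds = boto3.client("rds", region_name="us-east-1")

# Short-lived (15-minute) token generated from the instance profile credentials.
token = rds.generate_db_auth_token(DBHostname=DB_HOST, Port=3306, DBUsername=DB_USER)

# The token is passed in place of a password; TLS is required for IAM authentication.
conn = pymysql.connect(
    host=DB_HOST,
    user=DB_USER,
    password=token,
    database="appdb",
    ssl={"ca": "/opt/certs/rds-ca-bundle.pem"},  # placeholder path to the RDS CA bundle
)
```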


A company hosted a web application in an Auto Scaling group of EC2 instances. The IT manager is concerned about the over-provisioning of the resources that can cause higher operating costs. A Solutions Architect has been instructed to create a cost-effective solution without affecting the performance of the application.

Which dynamic scaling policy should be used to satisfy this requirement?

⬜ A. Use simple scaling.
⬜ B. Use scheduled scaling.
⬜ C. Use suspend and resume scaling.
✅ D. Use target tracking scaling.

Explanation:
The best solution to automatically maintain optimal capacity and avoid over-provisioning is:
● D. Use target tracking scaling.
● Target tracking scaling adjusts the number of EC2 instances dynamically based on a defined metric target, such as maintaining CPU utilization at 50%. It automatically adds or removes capacity to match the actual demand, preventing both under- and over-provisioning without manual intervention.
Why other options are wrong:
A. Simple scaling is reactive and slower — it requires alarms to trigger scaling and may not proactively prevent over-provisioning.
B. Scheduled scaling is based on time, not real-time usage — not ideal when load patterns vary unpredictably.
C. Suspend and resume scaling is used for pausing scaling activities during maintenance or troubleshooting — not for cost optimization.
Source: https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-simple-step.html#as-scaling-target-tracking
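As a concrete example, a target tracking policy like the one below (the group name is a placeholder) keeps average CPU near 50% and lets the group shrink when demand drops, which is what prevents sustained over-provisioning.

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="crm-web-asg",       # placeholder group name
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,                  # add/remove instances to hold ~50% CPU
    },
)
```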


An online medical system hosted in AWS stores sensitive Personally Identifiable Information (PII) of the users in an Amazon S3 bucket. Both the master keys and the unencrypted data should never be sent to AWS to comply with the strict compliance and regulatory requirements of the company.

Which S3 encryption technique should the Architect use?

⬜ A. Use S3 client-side encryption with an AWS KMS key.
✅ B. Use S3 client-side encryption with a client-side master key.
⬜ C. Use S3 server-side encryption with an AWS KMS key.
⬜ D. Use S3 server-side encryption with customer provided key.

Explanation:
The correct choice to ensure that neither the encryption keys nor unencrypted data ever leave the client-side is:
● B. Use S3 client-side encryption with a client-side master key.
● With client-side encryption, encryption happens before the data is sent to AWS, and decryption happens after it is retrieved.
● The master key is managed entirely on the client-side, meeting strict compliance requirements where AWS should never see the key or the unencrypted data.
Why other options are wrong:
A. Using AWS KMS still sends the encryption request to AWS, meaning AWS has some involvement with key management.
C. Server-side encryption with KMS encrypts after the object reaches S3, violating the “never send unencrypted data to AWS” rule.
D. Server-side encryption with customer-provided keys (SSE-C) still sends the key to AWS during upload (even though it’s discarded after use).
Source: https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingClientSideEncryption.html
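A simplified illustration of the client-side pattern (this is not the AWS S3 Encryption Client; it just uses the third-party cryptography library with a locally held key): encryption happens before the upload, so AWS only ever sees ciphertext.

```python
import boto3
from cryptography.fernet import Fernet  # third-party "cryptography" package

# The master key is generated and kept on premises; it never leaves the client.
master_key = Fernet.generate_key()
cipher = Fernet(master_key)

plaintext = b"patient_id,diagnosis\n12345,flu"      # example PII record
ciphertext = cipher.encrypt(plaintext)

s3 = boto3.client("s3")
s3.put_object(Bucket="example-medical-bucket", Key="records/batch-001.enc", Body=ciphertext)

# Retrieval: download the ciphertext and decrypt locally with the same master key.
obj = s3.get_object(Bucket="example-medical-bucket", Key="records/batch-001.enc")
recovered = cipher.decrypt(obj["Body"].read())
```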


A Solutions Architect is hosting a website in an Amazon S3 bucket named codingnconcepts. The users load the website using the following URL: http://codingnconcepts.s3-website-us-east-1.amazonaws.com. A new requirement has been introduced to add JavaScript on the webpages to make authenticated HTTP GET requests against the same bucket using the S3 API endpoint (codingnconcepts.s3.amazonaws.com). However, upon testing, the web browser blocks those JavaScript requests.

Which of the following options is the MOST suitable solution to implement for this scenario?

⬜ A. Enable cross-account access.
⬜ B. Enable Cross-Zone Load Balancing.
✅ C. Enable Cross-origin resource sharing (CORS) configuration in the bucket.
⬜ D. Enable Cross-Region Replication (CRR).

Explanation:
The best solution to allow the browser to permit JavaScript requests from a different origin is:
● C. Enable Cross-origin resource sharing (CORS) configuration in the bucket.
● CORS allows a browser to access resources in an S3 bucket from a different domain or endpoint, thus enabling cross-origin JavaScript HTTP requests without being blocked by the browser’s same-origin policy.
Why other options are wrong:
A. Cross-account access allows different AWS accounts to access a bucket, not related to browser-origin restrictions.
B. Cross-Zone Load Balancing is about distributing traffic across Availability Zones, not related to CORS or S3 access.
D. Cross-Region Replication (CRR) is for copying objects to another region, and does not solve browser cross-origin issues.
Source: https://docs.aws.amazon.com/AmazonS3/latest/userguide/cors.html


A company is designing a banking portal that uses Amazon ElastiCache for Redis as its distributed session management component. To secure the session data, Cloud Engineers must authenticate with a password before executing Redis commands, in particular MULTI and EXEC transactions. Access should also be managed with long-lived credentials while following robust security practices.

Which of the following actions should be taken to meet the above requirement?

⬜ A. Generate an IAM authentication token using AWS credentials and provide this token as a password.
⬜ B. Set up a Redis replication group and enable the AtRestEncryptionEnabled parameter.
✅ C. Authenticate the users using Redis AUTH by creating a new Redis Cluster with both the --transit-encryption-enabled and --auth-token parameters enabled.
⬜ D. Enable the in-transit encryption for Redis replication groups.

Explanation:
The correct approach to enforce strong password authentication and secure in-transit communication for ElastiCache Redis is:
● C. Authenticate the users using Redis AUTH by creating a new Redis Cluster with both the --transit-encryption-enabled and --auth-token parameters enabled.
● --auth-token enables password-based authentication for Redis, and --transit-encryption-enabled ensures TLS encryption for all Redis traffic, meeting both strong authentication and encryption requirements.
Why other options are wrong:
A. IAM authentication tokens are used for RDS databases, not for ElastiCache Redis.
B. AtRestEncryptionEnabled encrypts data on disk, but does not handle authentication of users or in-transit encryption.
D. Enabling only in-transit encryption secures traffic but does not enforce authentication without --auth-token.
Source: https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/auth.html
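The equivalent boto3 call might look like the sketch below (all names, the node type, and the token are placeholders; the AUTH token must be a long random string of printable characters). It enables Redis AUTH together with in-transit and at-rest encryption.

```python
import boto3

elasticache = boto3.client("elasticache")

elasticache.create_replication_group(
    ReplicationGroupId="session-store",
    ReplicationGroupDescription="Banking portal session management",
    Engine="redis",
    CacheNodeType="cache.r6g.large",
    NumCacheClusters=2,
    AutomaticFailoverEnabled=True,
    TransitEncryptionEnabled=True,                    # --transit-encryption-enabled
    AtRestEncryptionEnabled=True,
    AuthToken="REPLACE_WITH_A_LONG_RANDOM_TOKEN",     # --auth-token (Redis AUTH password)
    CacheSubnetGroupName="private-cache-subnets",
    SecurityGroupIds=["sg-0123456789abcdef0"],
)
```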


A company runs an online payments application in an Auto Scaling group of Amazon EC2 instances in multiple Availability Zones. The EC2 instances are all launched in private subnets. An internet-facing Application Load Balancer (ALB) has been provisioned and points to the existing EC2 instances as the target group. The team noticed that the internet traffic was not reaching the Amazon EC2 instances.

What is the MOST operationally efficient solution that meets these requirements?

⬜ A. Set up a NAT gateway in a public subnet to allow incoming Internet traffic. Use a Gateway Load Balancer instead of an Application Load Balancer.
⬜ B. Move the existing Amazon EC2 instances that are running from the private subnets to public subnets. Allow outbound traffic to 0.0.0.0/0 in the security groups of the EC2 instances.
⬜ C. Add a rule to allow outbound traffic to 0.0.0.0/0 in the security groups of the EC2 instances. Update the route tables of the existing subnets to send all 0.0.0.0/0 traffic through the internet gateway route.
✅ D. Launch public subnets in each Availability Zone and associate them with the Application Load Balancer. Modify the route tables for the public subnets with a route to the private subnets of the EC2 instances.

Explanation:
D. An internet-facing Application Load Balancer (ALB) must be placed in public subnets that have a route to an internet gateway. The backend Amazon EC2 instances must remain in private subnets for security. The ALB forwards incoming internet traffic to EC2 instances across Availability Zones without requiring the EC2 instances to have public IP addresses. This architecture minimizes exposure of backend instances while maintaining internet accessibility through the ALB. It is the most secure and operationally efficient design for this scenario.
Why other options are wrong:
A. A NAT gateway enables outbound internet access for private instances but does not allow inbound traffic from the internet. NATs are for egress traffic only, not ingress.
B. Moving EC2 instances to public subnets and assigning public IPs would expose them directly to the internet, which violates best practices for secure backend instances behind an ALB.
C. Modifying the security groups and routing outbound traffic would not fix the inbound traffic problem. Inbound requests from the internet must still flow through the ALB, which requires proper public subnet configuration.
Source: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-subnets.html
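For reference, an internet-facing ALB is created with public subnets (one per AZ used by the targets); the IDs below are placeholders. The backend instances stay in private subnets and only need to allow traffic from the ALB's security group.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Subnets must be PUBLIC (their route table has a route to an internet gateway).
elbv2.create_load_balancer(
    Name="payments-alb",
    Scheme="internet-facing",
    Type="application",
    Subnets=["subnet-0aaa111122223333a", "subnet-0bbb444455556666b"],  # one per AZ
    SecurityGroups=["sg-0123456789abcdef0"],
)
```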


The company that you are working for has a highly available architecture consisting of an elastic load balancer and several EC2 instances configured with auto-scaling in three Availability Zones. You want to monitor your EC2 instances based on a particular metric, which is not readily available in CloudWatch.

Which of the following is a custom metric in CloudWatch which you have to manually set up?

✅ A. Memory Utilization of an EC2 instance
⬜ B. CPU Utilization of an EC2 instance
⬜ C. Disk Reads activity of an EC2 instance
⬜ D. Network packets out of an EC2 instance

Explanation:
A. Memory Utilization is not available by default in Amazon CloudWatch for EC2 instances. It requires installing and configuring the CloudWatch agent on the instance to collect and publish this custom metric manually. AWS does not automatically push memory usage statistics to CloudWatch, unlike CPU, disk, and network metrics.
Why other options are wrong:
B. CPU Utilization is a default metric provided by CloudWatch for EC2 instances without needing any custom setup.
C. Disk Read (and Write) metrics are available by default in CloudWatch under EC2 instance metrics.
D. Network packets out (and in) are default metrics collected automatically by CloudWatch for EC2 instances.
Source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-instances-cloudwatch.html
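The CloudWatch agent handles this automatically, but the underlying API call is just put_metric_data, as in this sketch (the instance ID is a placeholder and psutil is a third-party library used here only to read memory usage):

```python
import boto3
import psutil  # third-party library used to read local memory usage

cloudwatch = boto3.client("cloudwatch")

memory_percent = psutil.virtual_memory().percent

# Publish memory utilization as a custom metric in a custom namespace.
cloudwatch.put_metric_data(
    Namespace="Custom/EC2",
    MetricData=[
        {
            "MetricName": "MemoryUtilization",
            "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
            "Value": memory_percent,
            "Unit": "Percent",
        }
    ],
)
```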


A company runs a serverless application that uses AWS Lambda functions which store sensitive information, such as database credentials and API keys, in environment variables. The Solutions Architect must ensure that this sensitive information is secured with maximum security and that the company retains full control over the encryption keys.

Which of the following options should be implemented to meet this requirement?

⬜ A. There is no need to do anything because, by default, Lambda already encrypts the environment variables using the AWS Key Management Service.
⬜ B. Enable SSL encryption that leverages on AWS CloudHSM to store and encrypt the sensitive information.
⬜ C. Lambda does not provide encryption for the environment variables. Deploy your code to an Amazon EC2 instance instead.
✅ D. Create a new AWS KMS key and use it to enable encryption helpers that leverage on AWS Key Management Service to store and encrypt the sensitive information.

Explanation:
D. For maximum security, you should create a new AWS KMS key and configure your Lambda function to use KMS encryption helpers. This ensures that the sensitive environment variables (such as database credentials and API keys) are encrypted at rest and only decrypted at runtime by Lambda functions with the appropriate permissions. While Lambda provides encryption for environment variables by default using a service-managed KMS key, using a customer-managed KMS key gives you full control over key rotation, key policies, and auditing access.
Why other options are wrong:
A. Although AWS Lambda encrypts environment variables by default using a service-managed KMS key, maximum security requires creating and managing your own KMS key to have full control over encryption and access.
B. CloudHSM is generally used for very high-security use cases that require dedicated hardware security modules. It is not necessary for basic Lambda environment variable encryption and adds unnecessary complexity.
C. Lambda does provide encryption for environment variables. Moving the code to EC2 would increase operational overhead without solving the problem.
Source: https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html#configuration-envvars-encryption
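A hedged sketch of pointing a function's environment-variable encryption at a customer-managed KMS key (the function name, key ARN, and variable values are placeholders). When the console's encryption helpers are used, the individual values would be KMS ciphertext that the function code decrypts at runtime.

```python
import boto3

lambda_client = boto3.client("lambda")

lambda_client.update_function_configuration(
    FunctionName="payments-processor",                 # placeholder function name
    KMSKeyArn="arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555",
    Environment={
        "Variables": {
            # Placeholder values; with console encryption helpers these would be
            # KMS-encrypted ciphertext decrypted inside the function at runtime.
            "DB_PASSWORD": "ENCRYPTED_VALUE",
            "API_KEY": "ENCRYPTED_VALUE",
        }
    },
)
```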


There was an incident in a production environment where user data stored in an Amazon S3 bucket was accidentally deleted by a Junior DevOps Engineer. The issue was escalated to management, and after a few days, an instruction was given to improve the security and protection of AWS resources.

What combination of the following options will protect the S3 objects in the bucket from both accidental deletion and overwriting? (Select TWO.)

✅ A. Enable Versioning
⬜ B. Provide access to S3 data strictly through pre-signed URL only
⬜ C. Disallow S3 Delete using an IAM bucket policy
⬜ D. Enable S3 Intelligent-Tiering
✅ E. Enable Multi-Factor Authentication Delete

Explanation:
A. Enabling Versioning preserves every version of every object. If an object is deleted or overwritten, the previous version is still available for recovery. This protects against both accidental deletions and overwrites.
E. Multi-Factor Authentication (MFA) Delete adds an additional security layer by requiring MFA authentication for permanently deleting objects or permanently deleting object versions. This minimizes the risk of accidental or malicious deletions.
Why other options are wrong:
B. Pre-signed URLs control who can access an object temporarily, but they do not prevent overwrites or deletions once access is granted.
C. While blocking deletes in an IAM policy can prevent deletion, it is too restrictive and does not provide the flexibility and auditability that Versioning + MFA Delete provide. IAM deny policies would also prevent authorized deletions.
D. S3 Intelligent-Tiering only manages object storage class transitions for cost optimization based on access patterns. It does not protect against overwrites or deletions.
Source: https://docs.aws.amazon.com/AmazonS3/latest/userguide/Versioning.html
Source: https://docs.aws.amazon.com/AmazonS3/latest/userguide/MultiFactorAuthenticationDelete.html
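Both protections are applied on the bucket's versioning configuration, roughly as below (the bucket name, MFA device ARN, and code are placeholders). Note that MFA Delete can only be enabled by the bucket owner using the root account's MFA device.

```python
import boto3

s3 = boto3.client("s3")

# Enables versioning and MFA Delete in one call. The MFA argument is the device
# serial/ARN followed by a current 6-digit code, separated by a space.
s3.put_bucket_versioning(
    Bucket="example-user-data-bucket",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
)
```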