Domain 1 — Detection (60 Questions)
Q1. A security team enables Amazon GuardDuty for the first time in their AWS account. Which data source does GuardDuty analyze by default without any additional configuration?
✓ Correct: C. GuardDuty automatically analyzes CloudTrail management events.
Why C is correct: When GuardDuty is enabled, it automatically analyzes AWS CloudTrail management events, VPC Flow Logs, and DNS query logs as its foundational data sources. No additional configuration or enabling of these services is required — GuardDuty accesses them independently.
Why others are wrong:
A) S3 data events — S3 protection is a separate GuardDuty protection plan that monitors CloudTrail S3 data events. It is enabled by default on new accounts but is a distinct feature from the foundational data sources.
B) EC2 instance system logs — GuardDuty does not read EC2 system logs. It uses VPC Flow Logs and DNS logs to detect threats, not OS-level logs.
D) ALB access logs — GuardDuty does not analyze ALB access logs. ALB logs are a separate feature stored in S3.
Q2. An organization notices a GuardDuty finding of type UnauthorizedAccess:IAMUser/InstanceCredentialExfiltration.OutsideAWS. What does this finding indicate?
✓ Correct: B. This finding means EC2 instance role credentials are being used from outside AWS.
Why B is correct: The finding UnauthorizedAccess:IAMUser/InstanceCredentialExfiltration.OutsideAWS indicates that credentials created exclusively for an EC2 instance through an instance launch role are being used from an external IP address outside of AWS. This is a high-severity finding suggesting credential theft.
Why others are wrong:
A) Console password compromised — This finding is specifically about instance credentials (temporary role credentials), not IAM user console passwords.
C) Command-and-control communication — C&C communication would generate findings like Backdoor:EC2/C&CActivity.B, not an InstanceCredentialExfiltration finding.
D) S3 public access — S3 bucket policy issues would generate findings under the S3 finding type category, such as Policy:S3/BucketPublicAccessGranted.
Q3. A company wants to automatically isolate an EC2 instance when GuardDuty detects cryptocurrency mining activity. Which combination of services enables this automated response?
✓ Correct: A. EventBridge with Lambda is the standard pattern for automated GuardDuty response.
Why A is correct: GuardDuty sends all findings to Amazon EventBridge automatically. You can create an EventBridge rule that matches the cryptocurrency mining finding type (e.g., CryptoCurrency:EC2/BitcoinTool.B) and triggers a Lambda function. The Lambda function can then replace the instance's security group with an isolation security group that blocks all traffic.
Why others are wrong:
B) GuardDuty auto-remediation — GuardDuty is a detection service only; it does not have built-in auto-remediation capabilities. You must build the response workflow yourself.
C) AWS Config rule — AWS Config evaluates resource configurations against rules but does not directly respond to GuardDuty findings or remove instances from VPCs.
D) CloudWatch Logs metric filter — GuardDuty findings are not sent to CloudWatch Logs by default. The proper integration is through EventBridge, and manual isolation would not be automated.
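A minimal sketch of the isolation Lambda, assuming a pre-created empty security group (the group ID is illustrative; the instance ID lives under detail.resource.instanceDetails in the GuardDuty finding event):

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical pre-created security group with no inbound/outbound rules
ISOLATION_SG = "sg-0123456789abcdef0"

def handler(event, context):
    """Target of an EventBridge rule matching GuardDuty crypto-mining findings, e.g.:
    {"source": ["aws.guardduty"], "detail-type": ["GuardDuty Finding"],
     "detail": {"type": [{"prefix": "CryptoCurrency:EC2/"}]}}
    """
    finding = event["detail"]
    instance_id = finding["resource"]["instanceDetails"]["instanceId"]
    # Swap every attached security group for the isolation group,
    # cutting all traffic to and from the instance
    ec2.modify_instance_attribute(InstanceId=instance_id, Groups=[ISOLATION_SG])
    return {"isolated": instance_id}
```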
Q4. In a multi-account AWS Organization, what is the recommended way to manage Amazon GuardDuty across all member accounts?
✓ Correct: D. Use a delegated administrator for centralized GuardDuty management.
Why D is correct: AWS recommends designating a delegated administrator account (typically the security account) in AWS Organizations for GuardDuty. This account can then enable GuardDuty across all member accounts, view aggregated findings, and manage configurations centrally. The delegated admin should be a dedicated security account, not the management account.
Why others are wrong:
A) Enable individually — This approach does not scale and does not provide centralized visibility across accounts. It creates management overhead.
B) CloudFormation StackSets — While StackSets can deploy resources across accounts, GuardDuty has native Organizations integration that is simpler and recommended.
C) Management account only — Enabling GuardDuty in only the management account does not automatically protect member accounts. Each account needs GuardDuty enabled, and best practice is to use a delegated admin, not the management account.
Q5. A security engineer wants GuardDuty to stop generating findings for a known internal vulnerability scanner's IP address. What is the correct approach?
✓ Correct: B. Trusted IP lists prevent GuardDuty from generating findings for known safe IPs.
Why B is correct: GuardDuty trusted IP lists contain IP addresses that you consider safe. GuardDuty will not generate findings for any activity originating from trusted IP addresses. This is ideal for known internal tools like vulnerability scanners that may trigger false positives.
Why others are wrong:
A) Suppression rule — Suppression rules auto-archive findings but GuardDuty still generates them. Trusted IP lists prevent finding generation entirely, which is the correct approach for known safe IPs.
C) Threat IP list — Threat IP lists are for known malicious IPs. Adding the scanner IP here would cause GuardDuty to generate more findings, not fewer.
D) Disable finding type — You cannot selectively disable individual finding types in GuardDuty. This would also be too broad, as you want to exclude only the scanner IP, not all findings of that type.
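A sketch of creating and activating a trusted IP list with boto3 (the detector lookup pattern and the S3 object holding the scanner CIDRs are illustrative):

```python
import boto3

guardduty = boto3.client("guardduty")
detector_id = guardduty.list_detectors()["DetectorIds"][0]

# The list itself is a plain-text S3 object, one IP/CIDR per line (hypothetical location)
guardduty.create_ip_set(
    DetectorId=detector_id,
    Name="internal-vuln-scanner",
    Format="TXT",
    Location="https://s3.amazonaws.com/example-security-bucket/scanner-ips.txt",
    Activate=True,  # once active, GuardDuty stops generating findings for these IPs
)
```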
Q6. A security engineer receives many low-severity GuardDuty findings from development environments that clutter the findings console. The findings should be retained for compliance but not displayed. What should they do?
✓ Correct: C. Suppression rules auto-archive findings while preserving them for compliance.
Why C is correct: Suppression rules in GuardDuty automatically archive findings that match specified filter criteria. Archived findings are still stored and accessible for auditing and compliance, but they do not appear in the current findings list. This is the ideal solution for known low-priority findings.
Why others are wrong:
A) Delete findings — GuardDuty findings cannot be deleted. They are retained for 90 days regardless of their status.
B) Trusted IP list — A trusted IP list would prevent findings from being generated at all, which would not meet the compliance requirement to retain findings.
D) Disable GuardDuty — Disabling GuardDuty removes threat detection entirely from those accounts, which is a security risk and would not retain findings.
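For contrast with a trusted IP list, a suppression rule is just a filter whose action is ARCHIVE. A sketch assuming dev instances carry an environment=development tag and "low severity" means a score of 3 or below (tag key/value and threshold are illustrative):

```python
import boto3

guardduty = boto3.client("guardduty")
detector_id = guardduty.list_detectors()["DetectorIds"][0]

guardduty.create_filter(
    DetectorId=detector_id,
    Name="archive-dev-low-severity",
    Action="ARCHIVE",  # auto-archive: findings are still generated and retained
    FindingCriteria={
        "Criterion": {
            "severity": {"LessThanOrEqual": 3},
            "resource.instanceDetails.tags.key": {"Equals": ["environment"]},
            "resource.instanceDetails.tags.value": {"Equals": ["development"]},
        }
    },
)
```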
Q7. Which GuardDuty protection plan must be explicitly enabled to detect threats in Amazon EKS clusters?
✓ Correct: A. EKS Protection is the dedicated GuardDuty plan for Kubernetes threats.
Why A is correct: GuardDuty offers EKS Protection as a separate protection plan that includes EKS Audit Log Monitoring (analyzes Kubernetes audit logs) and EKS Runtime Monitoring (uses a security agent on EKS nodes). These must be explicitly enabled to detect threats like privilege escalation, suspicious API calls, and container-level attacks in EKS.
Why others are wrong:
B) EC2 Malware Protection — This scans EBS volumes attached to EC2 instances for malware. It does not analyze Kubernetes-specific threats or audit logs.
C) S3 Protection — S3 Protection monitors S3 data plane events for suspicious access patterns, not EKS activity.
D) Lambda Network Activity Monitoring — This monitors network activity from Lambda functions, not EKS clusters.
Q8. What is the primary purpose of AWS Security Hub?
✓ Correct: B. Security Hub aggregates findings and performs automated security compliance checks.
Why B is correct: AWS Security Hub provides a comprehensive view of security alerts and compliance status across AWS accounts. It aggregates findings from services like GuardDuty, Inspector, Macie, Firewall Manager, and third-party tools. It also runs automated security checks against standards like CIS AWS Foundations Benchmark and AWS Foundational Security Best Practices.
Why others are wrong:
A) Real-time threat detection with ML — This describes Amazon GuardDuty, not Security Hub. Security Hub aggregates findings but does not perform its own threat detection.
C) Encrypt data — Encryption is handled by AWS KMS, ACM, and service-level encryption settings, not Security Hub.
D) Manage IAM policies — IAM policies are managed through the IAM service, AWS Organizations SCPs, or AWS Control Tower. Security Hub evaluates compliance; it does not manage access.
Q9. Which security standard in AWS Security Hub checks your environment against the CIS AWS Foundations Benchmark?
✓ Correct: D. Security Hub has a dedicated CIS AWS Foundations Benchmark standard.
Why D is correct: AWS Security Hub includes the CIS AWS Foundations Benchmark as one of its built-in security standards. When enabled, it runs automated checks against CIS controls covering areas like IAM, logging, monitoring, and networking. Security Hub supports both CIS v1.2.0 and v1.4.0.
Why others are wrong:
A) FSBP — The AWS Foundational Security Best Practices is a different standard developed by AWS. It has its own set of controls distinct from CIS.
B) PCI DSS — PCI DSS is a separate standard in Security Hub for payment card industry compliance, not CIS.
C) NIST SP 800-53 — NIST 800-53 is another available standard focused on federal information systems, separate from CIS.
Q10. A security team needs to centralize findings from multiple AWS security services into Security Hub. Which TWO services natively integrate with Security Hub as finding providers? (SELECT TWO)
✓ Correct: B, D. GuardDuty and Inspector natively send findings to Security Hub.
Why B is correct: Amazon GuardDuty is one of the primary native integration partners with Security Hub. GuardDuty automatically sends all findings to Security Hub when both services are enabled in the same account and region.
Why D is correct: Amazon Inspector is a native finding provider for Security Hub. Inspector vulnerability findings are automatically sent to Security Hub using the AWS Security Finding Format (ASFF).
Why others are wrong:
A) CloudFormation — CloudFormation is an infrastructure-as-code service and does not generate security findings for Security Hub.
C) DynamoDB — DynamoDB is a database service and does not send security findings to Security Hub.
E) SQS — SQS is a message queuing service and is not a security finding provider.
Q11. A Security Hub administrator wants to automatically send specific high-severity findings to a ticketing system. Which Security Hub feature enables this?
✓ Correct: C. Custom actions allow you to send findings to EventBridge for automated workflows.
Why C is correct: Security Hub custom actions let you create named actions that, when triggered, send the selected findings to Amazon EventBridge. From EventBridge, you can route the findings to a Lambda function, SNS topic, or any supported target to create tickets in a system like Jira or ServiceNow. You can also use automated rules based on finding criteria.
Why others are wrong:
A) Insights — Insights are saved filter sets that create grouped collections of related findings for analysis. They do not trigger automated actions.
B) Security standards — Security standards define compliance checks, not automated response workflows.
D) Finding aggregation — Finding aggregation consolidates findings from multiple regions or accounts into a single view. It does not trigger external workflows.
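A sketch of the EventBridge side of this pattern (the custom action name and the Lambda target ARN are illustrative; the custom action itself is created in Security Hub first):

```python
import json
import boto3

events = boto3.client("events")

# Fires when an analyst invokes the "SendToTicketing" custom action on findings
events.put_rule(
    Name="securityhub-custom-action-ticketing",
    EventPattern=json.dumps({
        "source": ["aws.securityhub"],
        "detail-type": ["Security Hub Findings - Custom Action"],
        "detail": {"actionName": ["SendToTicketing"]},
    }),
)
events.put_targets(
    Rule="securityhub-custom-action-ticketing",
    Targets=[{
        "Id": "ticketing-lambda",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:create-ticket",
    }],
)
```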
Q12. What is the primary use case of Amazon Detective?
✓ Correct: A. Amazon Detective is designed for security investigation and root cause analysis.
Why A is correct: Amazon Detective automatically collects log data from AWS resources and uses machine learning, statistical analysis, and graph theory to build visualizations that help you investigate security issues. It is commonly used to triage GuardDuty findings by understanding the scope and context of suspicious activity across accounts and resources.
Why others are wrong:
B) Preventing unauthorized access — Detective is a post-detection investigation tool. It does not prevent access; services like IAM and security groups handle prevention.
C) Scanning for vulnerabilities — Amazon Inspector scans for software vulnerabilities, not Detective.
D) Encrypting data — Encryption is managed by AWS KMS and S3 encryption settings, not Detective.
Q13. Which data sources does Amazon Detective use to build its behavior graph?
✓ Correct: C. Detective builds its behavior graph from CloudTrail, VPC Flow Logs, and GuardDuty findings.
Why C is correct: Amazon Detective ingests data from AWS CloudTrail logs, Amazon VPC Flow Logs, Amazon GuardDuty findings, and optionally Amazon EKS audit logs to build a behavior graph. This graph models relationships between resources, IP addresses, and API calls to support security investigations.
Why others are wrong:
A) Config and CloudFormation — Detective does not use AWS Config configuration history or CloudFormation templates as data sources.
B) S3 and CloudFront logs — These logs are not direct data sources for the Detective behavior graph.
D) CloudWatch metrics — CloudWatch metrics and application logs are not used by Detective.
Q14. Amazon Inspector is used to scan which types of workloads for vulnerabilities?
✓ Correct: B. Amazon Inspector scans EC2 instances, ECR container images, and Lambda functions.
Why B is correct: Amazon Inspector v2 supports three types of scanning: EC2 instance scanning (using SSM Agent for software vulnerabilities and network reachability), Amazon ECR container image scanning (for OS and programming language package vulnerabilities), and AWS Lambda function scanning (for code and dependency vulnerabilities).
Why others are wrong:
A) Only Amazon Linux EC2 — Inspector supports multiple operating systems including Amazon Linux, Ubuntu, Windows, RHEL, and others, not just Amazon Linux.
C) Docker Hub images — Inspector scans container images in Amazon ECR, not Docker Hub or other external registries.
D) RDS and DynamoDB — Inspector does not scan database services. These are managed services where AWS handles the underlying infrastructure patching.
Q15. A company needs Amazon Inspector to automatically scan EC2 instances for vulnerabilities. What prerequisite must be met on the EC2 instances?
✓ Correct: D. Inspector v2 uses SSM Agent for EC2 scanning.
Why D is correct: Amazon Inspector v2 uses the AWS Systems Manager (SSM) Agent to collect software inventory from EC2 instances. The instance must have SSM Agent installed and be a managed instance in Systems Manager (with an instance profile that grants the AmazonSSMManagedInstanceCore managed policy). Inspector does not require its own agent.
Why others are wrong:
A) Manual Inspector agent — Inspector v2 (the current version) does not use its own agent. It relies on SSM Agent, which is pre-installed on many Amazon-provided AMIs.
B) Public subnet required — Instances do not need to be in a public subnet. They can use VPC endpoints for SSM communication from private subnets.
C) CloudWatch Agent — CloudWatch Agent is for metrics and log collection, not vulnerability scanning. It is not a prerequisite for Inspector.
Q16. A team wants to collect custom application logs and OS-level metrics from EC2 instances and send them to CloudWatch. Which tool should they install on the instances?
✓ Correct: A. CloudWatch Agent collects custom logs and OS metrics from EC2 instances.
Why A is correct: The Amazon CloudWatch Agent (also called the unified CloudWatch Agent) can be installed on EC2 instances and on-premises servers to collect system-level metrics (memory, disk, CPU per core) and custom application logs. It sends this data to CloudWatch Metrics and CloudWatch Logs. It can also use the procstat plugin to monitor individual process metrics.
Why others are wrong:
B) SSM Agent — SSM Agent is used for Systems Manager operations (patching, run commands, inventory), not for sending logs and metrics to CloudWatch.
C) X-Ray daemon — X-Ray daemon collects application tracing data for distributed tracing, not general logs or OS metrics.
D) Kinesis Agent — Kinesis Agent sends data to Kinesis Data Streams or Firehose, not directly to CloudWatch.
Q17. A security engineer wants to monitor the number of running processes on an EC2 instance using CloudWatch. Which CloudWatch Agent plugin should they configure?
✓ Correct: C. The procstat plugin monitors individual process metrics.
Why C is correct: The procstat plugin for the CloudWatch Agent collects metrics from individual processes. It can monitor metrics such as CPU usage, memory usage, and the number of running instances of a specific process. You can select processes by process name, PID file, or pattern matching.
Why others are wrong:
A) statsd plugin — The StatsD plugin receives custom metrics from applications using the StatsD protocol. It does not monitor OS processes.
B) collectd plugin — The collectd plugin receives metrics from the collectd daemon. It is for general system statistics, not process-specific monitoring via CloudWatch Agent.
D) ethtool plugin — The ethtool plugin collects network interface metrics. It does not monitor process-level information.
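A minimal CloudWatch Agent configuration fragment using procstat to track an nginx process (the process name and measurement list are illustrative):

```json
{
  "metrics": {
    "metrics_collected": {
      "procstat": [
        {
          "exe": "nginx",
          "measurement": ["pid_count", "cpu_usage", "memory_rss"]
        }
      ]
    }
  }
}
```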
Q18. An application writes logs to CloudWatch Logs. The security team wants to trigger an alarm whenever a specific error pattern appears in the logs. What should they create?
✓ Correct: B. A metric filter extracts metrics from logs, which can then trigger a CloudWatch Alarm.
Why B is correct: CloudWatch Logs metric filters define patterns to search for in log data and create custom CloudWatch metrics based on matches. You can then create a CloudWatch Alarm on that metric to be notified when the error count exceeds a threshold. This is the standard approach for log-based alerting.
Why others are wrong:
A) Subscription filter to Lambda — Subscription filters stream log data in real time to destinations like Lambda, but they do not directly create alarms based on patterns. This is more for log processing.
C) EventBridge rule — EventBridge does not directly match patterns within CloudWatch Logs data. It handles events from AWS services, not log content.
D) S3 event notification — This would only trigger when log files are exported to S3, not in real time when specific patterns appear.
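A sketch of the metric filter plus alarm wiring with boto3 (the log group, filter pattern, namespace, and SNS topic are illustrative):

```python
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Turn matching log lines into a custom metric
logs.put_metric_filter(
    logGroupName="/app/production",
    filterName="app-error-pattern",
    filterPattern='"ERROR"',
    metricTransformations=[{
        "metricName": "AppErrors",
        "metricNamespace": "App/Security",
        "metricValue": "1",
    }],
)

# Alarm when the metric breaches a threshold
cloudwatch.put_metric_alarm(
    AlarmName="app-error-alarm",
    MetricName="AppErrors",
    Namespace="App/Security",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:security-alerts"],
)
```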
Q19. A company streams CloudWatch Logs from multiple accounts to a centralized logging account in near real time. Which CloudWatch Logs feature enables this?
✓ Correct: D. Subscription filters can stream logs to cross-account destinations in near real time.
Why D is correct: CloudWatch Logs subscription filters can stream log events in near real time to destinations including Kinesis Data Streams, Kinesis Data Firehose, or Lambda functions. By setting up a cross-account subscription with a destination in the centralized logging account, you can aggregate logs from multiple accounts in near real time.
Why others are wrong:
A) Logs Insights — CloudWatch Logs Insights is an interactive query tool for analyzing logs. It does not stream or aggregate logs across accounts.
B) Metric filters — Metric filters extract numeric metrics from logs but do not forward the actual log data to another account.
C) Export to S3 — S3 export creates a batch export job that can take up to 12 hours. It is not near real time and is not ideal for continuous cross-account streaming.
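In each source account, the wiring is a single subscription filter pointing at a CloudWatch Logs destination in the logging account (names and ARNs are illustrative; the destination's access policy must allow the sending account):

```python
import boto3

logs = boto3.client("logs")

logs.put_subscription_filter(
    logGroupName="/app/production",
    filterName="to-central-logging",
    filterPattern="",  # empty pattern forwards every log event
    # Cross-account destination in the logging account, wrapping a Kinesis stream
    destinationArn="arn:aws:logs:us-east-1:999999999999:destination:central-logs",
)
```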
Q20. A security team wants to query and analyze CloudWatch Logs interactively to search for failed SSH login attempts across multiple log groups. Which feature should they use?
✓ Correct: A. CloudWatch Logs Insights provides interactive ad-hoc log querying.
Why A is correct: CloudWatch Logs Insights enables you to interactively search and analyze log data in CloudWatch Logs using a purpose-built query language. You can query multiple log groups simultaneously, filter for specific patterns like failed SSH attempts, aggregate results, and visualize them. It is the most direct and efficient tool for ad-hoc log analysis.
Why others are wrong:
B) Metric filters — Metric filters create numeric metrics from log patterns but do not support interactive queries or return log details.
C) Athena with S3 logs — While Athena can query logs exported to S3, this requires a separate export step and is not as immediate as Logs Insights for CloudWatch Logs data.
D) OpenSearch dashboards — OpenSearch could work but requires setting up a separate cluster and streaming logs to it, adding unnecessary complexity for this use case.
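A Logs Insights query along these lines could surface failed SSH attempts, assuming the selected log groups contain standard sshd messages:

```
fields @timestamp, @message, @logStream
| filter @message like /Failed password/
| sort @timestamp desc
| limit 100
```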
Q21. A company wants to receive a single CloudWatch Alarm notification only when BOTH a high CPU alarm AND a high memory alarm are in ALARM state simultaneously. What should they use?
✓ Correct: B. Composite alarms combine multiple alarms using boolean logic.
Why B is correct: CloudWatch composite alarms allow you to combine multiple alarms using AND, OR, and NOT boolean operators. By creating a composite alarm with an AND condition on both the CPU and memory alarms, it only triggers when both child alarms are in ALARM state simultaneously. This reduces alarm noise and provides more meaningful alerts.
Why others are wrong:
A) Two separate alarms to same SNS — This would send separate notifications when either alarm triggers, not only when both are in ALARM state simultaneously.
C) Metric math — Metric math can combine metrics into a formula but cannot express the condition that two independent alarms must both be in ALARM state.
D) EventBridge correlation — While possible with custom Lambda logic, this is overly complex. Composite alarms provide this capability natively.
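A sketch with boto3, assuming two existing child alarms named HighCpuAlarm and HighMemoryAlarm (alarm names and topic ARN are illustrative):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_composite_alarm(
    AlarmName="cpu-and-memory-high",
    # Fires only when BOTH child alarms are in ALARM state
    AlarmRule='ALARM("HighCpuAlarm") AND ALARM("HighMemoryAlarm")',
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```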
Q22. Which service should a security engineer use to create rules that automatically trigger actions based on events from AWS services such as EC2 state changes or GuardDuty findings?
✓ Correct: A. EventBridge is the event-driven service for matching and routing AWS service events.
Why A is correct: Amazon EventBridge (formerly CloudWatch Events) is a serverless event bus that receives events from AWS services, SaaS applications, and custom sources. You can create rules that match specific event patterns (e.g., GuardDuty findings, EC2 state changes) and route them to targets like Lambda, SNS, SQS, or Step Functions for automated response.
Why others are wrong:
B) SQS — SQS is a message queue for decoupling services. It does not create rules or match event patterns; it can be a target of EventBridge.
C) Step Functions — Step Functions orchestrates workflows but does not match event patterns. It can be triggered by EventBridge as a target.
D) SNS — SNS is a notification service for pub/sub messaging. It can be a target of EventBridge rules but does not create event-matching rules itself.
Q23. Which type of AWS CloudTrail event records actions taken on S3 objects such as GetObject and PutObject?
✓ Correct: C. Data events capture resource-level operations like S3 object access.
Why C is correct: CloudTrail data events (also called data plane operations) record resource-level operations performed on or within a resource. S3 object-level actions like GetObject, PutObject, and DeleteObject are data events. Data events are not logged by default and must be explicitly enabled in the trail configuration due to their high volume.
Why others are wrong:
A) Management events — Management events (control plane operations) record API calls like CreateBucket or PutBucketPolicy, not object-level operations.
B) Insights events — CloudTrail Insights detects unusual API activity patterns, not individual object-level operations.
D) Network activity events — Network activity events track API calls made through VPC endpoints using AWS PrivateLink, which is a different category.
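Enabling S3 data events on an existing trail might look like this (trail and bucket names are illustrative; the trailing slash on the ARN scopes logging to all objects in the bucket):

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

cloudtrail.put_event_selectors(
    TrailName="security-trail",
    EventSelectors=[{
        "ReadWriteType": "All",
        "IncludeManagementEvents": True,
        "DataResources": [{
            "Type": "AWS::S3::Object",
            "Values": ["arn:aws:s3:::example-bucket/"],
        }],
    }],
)
```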
Q24. A security team suspects that CloudTrail log files in their S3 bucket may have been tampered with. Which CloudTrail feature should they use to verify log file integrity?
✓ Correct: D. Log file integrity validation uses hash-based verification to detect tampering.
Why D is correct: CloudTrail log file integrity validation creates a SHA-256 hash for every log file and delivers digest files hourly that contain the hashes. You can use the AWS CLI aws cloudtrail validate-logs command to verify that log files have not been modified, deleted, or forged after delivery. This must be enabled when creating or updating the trail.
Why others are wrong:
A) CloudTrail Insights — Insights detects unusual API call volumes and patterns, not log file tampering.
B) Event selectors — Event selectors configure which events to log (management, data, or Insights events). They do not verify integrity.
C) Organization trail — An organization trail logs events across all accounts in an organization but does not validate log integrity.
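Usage follows this shape (the trail ARN and start time are illustrative):

```bash
aws cloudtrail validate-logs \
  --trail-arn arn:aws:cloudtrail:us-east-1:123456789012:trail/security-trail \
  --start-time 2024-01-01T00:00:00Z \
  --verbose
```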
Q25. A company wants CloudTrail to detect when there is an unusual spike in RunInstances API calls. Which CloudTrail feature provides this?
✓ Correct: B. CloudTrail Insights detects unusual API activity patterns.
Why B is correct: CloudTrail Insights continuously analyzes management event API call volumes and identifies unusual activity that deviates from normal patterns. If there is an unexpected spike in RunInstances calls, Insights generates an Insights event. It can detect anomalies in both API call rate and API error rate.
Why others are wrong:
A) Data events — Data events record resource-level operations like S3 object access. RunInstances is a management event, not a data event.
C) Log file integrity — Integrity validation verifies log files have not been tampered with, not that API call volumes are abnormal.
D) Event history — Event history provides a 90-day view of management events in the console. It does not analyze patterns or detect anomalies.
Q26. An organization with multiple AWS accounts wants a single CloudTrail trail that logs events from all accounts in the organization. What type of trail should they create?
✓ Correct: A. An organization trail automatically logs events from all member accounts.
Why A is correct: An organization trail is created in the management account of AWS Organizations. It automatically applies to all member accounts, logging events from every account in the organization to a centralized S3 bucket. Member accounts can see the trail but cannot modify or delete it.
Why others are wrong:
B) Multi-region trail in each account — This creates separate trails per account, which is harder to manage and does not automatically include new accounts added to the organization.
C) Single-region trail with cross-account roles — This approach is manual, limited to one region, and does not scale with the organization.
D) Delegated administrator trail — CloudTrail organization trails can only be created from the management account. CloudTrail supports delegated administrator for some features, but the organization trail itself is managed from the management account.
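From the management account, creating an organization trail comes down to a couple of calls (names are illustrative; the S3 bucket needs a policy that allows CloudTrail delivery from the organization):

```python
import boto3

cloudtrail = boto3.client("cloudtrail")  # run in the Organizations management account

cloudtrail.create_trail(
    Name="org-trail",
    S3BucketName="central-cloudtrail-logs",
    IsOrganizationTrail=True,   # apply to every member account automatically
    IsMultiRegionTrail=True,
    EnableLogFileValidation=True,
)
cloudtrail.start_logging(Name="org-trail")
```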
Q27. A security team wants to receive near real-time alerts when specific API calls are made, as recorded by CloudTrail. What is the recommended integration?
✓ Correct: C. CloudTrail integration with CloudWatch Logs enables near real-time alerting.
Why C is correct: CloudTrail can be configured to deliver logs to CloudWatch Logs. Once in CloudWatch Logs, you can create metric filters that match specific API call patterns and link them to CloudWatch Alarms that trigger SNS notifications. This provides near real-time alerting (typically within minutes) for specific API activity.
Why others are wrong:
A) S3 event notifications — S3 event notifications tell you when a log file is delivered to S3, but they do not parse the log content for specific API calls.
B) Event history with queries — Event history does not support scheduled queries or automated alerting. It is for manual console-based lookups.
D) CloudTrail Insights — Insights detects unusual patterns of API activity, not specific individual API calls. It is for anomaly detection, not specific event alerting.
Q28. What is the purpose of AWS Config rules?
✓ Correct: B. AWS Config rules evaluate resource configurations against desired compliance settings.
Why B is correct: AWS Config rules represent your desired configuration settings for AWS resources. Each rule evaluates whether resource configurations are compliant or non-compliant with the rule definition. AWS provides managed rules for common checks (e.g., s3-bucket-server-side-encryption-enabled), and you can also create custom rules using Lambda functions.
Why others are wrong:
A) Deploy resources — Infrastructure as code deployment is done by CloudFormation or Terraform, not AWS Config rules.
C) Encrypt data — AWS Config can check whether encryption is enabled but does not perform encryption itself. KMS and service-level settings handle encryption.
D) Monitor network traffic — Network traffic monitoring is done by VPC Flow Logs and GuardDuty, not AWS Config.
Q29. An AWS Config rule detects that an S3 bucket has public read access enabled. The team wants Config to automatically remove the public access. Which Config feature enables this?
✓ Correct: D. Config automatic remediation triggers SSM Automation documents to fix non-compliant resources.
Why D is correct: AWS Config rules can be associated with remediation actions that use AWS Systems Manager Automation documents to automatically fix non-compliant resources. When a resource is found non-compliant, Config can trigger the remediation automatically or manually. AWS provides pre-built remediation documents for common scenarios like removing S3 public access.
Why others are wrong:
A) Config aggregator — Aggregators collect Config data from multiple accounts and regions into a single view. They do not remediate resources.
B) Conformance pack — Conformance packs bundle multiple Config rules and remediations for deployment, but the remediation itself is done through SSM Automation documents.
C) SNS notification — SNS sends notifications about configuration changes and compliance status but does not perform remediation.
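A sketch of attaching automatic remediation to a managed rule (the SSM document name, role ARN, and parameter names are illustrative; verify the document's actual parameter schema before use):

```python
import boto3

config = boto3.client("config")

config.put_remediation_configurations(
    RemediationConfigurations=[{
        "ConfigRuleName": "s3-bucket-public-read-prohibited",
        "TargetType": "SSM_DOCUMENT",
        "TargetId": "AWS-DisableS3BucketPublicReadWrite",  # illustrative automation document
        "Automatic": True,
        "MaximumAutomaticAttempts": 3,
        "RetryAttemptSeconds": 60,
        "Parameters": {
            "AutomationAssumeRole": {
                "StaticValue": {"Values": ["arn:aws:iam::123456789012:role/ConfigRemediationRole"]}
            },
            # Pass the non-compliant resource's ID (the bucket name) to the document
            "S3BucketName": {"ResourceValue": {"Value": "RESOURCE_ID"}},
        },
    }]
)
```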
Q30. A company needs to deploy a consistent set of AWS Config rules across all accounts in their organization. Which TWO approaches achieve this? (SELECT TWO)
✓ Correct: A, E. Conformance packs and CloudFormation StackSets can deploy Config rules across accounts.
Why A is correct: AWS Config conformance packs can be deployed at the organization level, automatically applying a collection of Config rules and remediation actions to all member accounts in the organization. This is the native Config approach for multi-account governance.
Why E is correct: CloudFormation StackSets can deploy CloudFormation templates (containing Config rule definitions) across multiple accounts and regions in an organization. This is a general-purpose multi-account deployment mechanism that works well for Config rules.
Why others are wrong:
B) S3 bucket policy — S3 bucket policies control access to S3 buckets. They have no relationship to deploying or enforcing AWS Config rules.
C) GuardDuty — GuardDuty is a threat detection service and has no capability to deploy Config rules.
D) Manually create rules — While technically possible, manual creation does not scale and does not ensure consistency across accounts.
Q31. A company wants to view AWS Config compliance data from all accounts in their organization in a single dashboard. Which feature should they use?
✓ Correct: A. Config aggregators provide a multi-account, multi-region compliance view.
Why A is correct: An AWS Config aggregator collects configuration and compliance data from multiple accounts and regions into a single account. You can create an aggregator in a central account and authorize it to collect data from organization member accounts, providing a unified compliance dashboard.
Why others are wrong:
B) Conformance pack — Conformance packs deploy Config rules across accounts but do not aggregate compliance data into a single view.
C) Security Hub — While Security Hub can display Config compliance findings, the question asks specifically about Config compliance data. The Config aggregator is the native Config feature for this purpose.
D) CloudTrail organization trail — CloudTrail logs API activity, not resource compliance status. It does not provide a compliance dashboard.
Q32. What is the primary purpose of Amazon Macie?
✓ Correct: C. Macie discovers and protects sensitive data stored in S3.
Why C is correct: Amazon Macie uses machine learning and pattern matching to discover and classify sensitive data in Amazon S3 buckets. It can identify personally identifiable information (PII), financial data, credentials, and other sensitive content. Macie also evaluates S3 bucket security and access controls to alert on publicly accessible or unencrypted buckets.
Why others are wrong:
A) Detecting malware in EC2 — EC2 malware detection is performed by GuardDuty Malware Protection, not Macie.
B) Monitoring API calls — API call monitoring is done by CloudTrail and GuardDuty, not Macie.
D) Container image scanning — Container image vulnerability scanning is performed by Amazon Inspector, not Macie.
Q33.An organization wants Macie to detect a custom data type specific to their business, such as internal employee ID numbers in the format EMP-XXXXXX. What should they create?
✓ Correct: B. Custom data identifiers let you define business-specific sensitive data patterns.
Why B is correct: Macie custom data identifiers allow you to define detection criteria using regular expressions and optional keyword matching to detect sensitive data specific to your organization. For the employee ID format EMP-XXXXXX, you would create a custom data identifier with a regex like EMP-\d{6}.
Why others are wrong:
A) Suppression rule — Suppression rules hide specific findings from view. They do not define new data types to detect.
C) Managed data identifier — Managed data identifiers are pre-built by AWS for common sensitive data types (SSN, credit cards, etc.). You cannot create new managed identifiers.
D) Allow list — Allow lists define text patterns that Macie should ignore (false positives), not new data types to detect.
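As an illustration, a boto3 sketch of the custom data identifier described above; the identifier name and keyword list are assumptions added for the example.

```python
import boto3

macie = boto3.client("macie2")

# Detect the internal EMP-XXXXXX employee ID format. Keywords must
# appear near a regex match, which cuts down on false positives.
macie.create_custom_data_identifier(
    name="internal-employee-id",   # placeholder name
    regex=r"EMP-\d{6}",
    keywords=["employee", "emp"],  # assumed context words
    maximumMatchDistance=50,       # max chars between keyword and match
)
```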
Q34.Which log type captures information about IP traffic going to and from network interfaces in a VPC?
✓ Correct: D. VPC Flow Logs capture IP traffic information for network interfaces.
Why D is correct: VPC Flow Logs capture information about the IP traffic going to and from network interfaces in your VPC. They record details such as source/destination IP, ports, protocol, packet/byte counts, and accept/reject action. Flow Logs can be created at the VPC, subnet, or network interface level and sent to CloudWatch Logs, S3, or Kinesis Data Firehose.
Why others are wrong:
A) CloudTrail data events — CloudTrail data events record API-level resource operations (like S3 GetObject), not network traffic flows.
B) S3 access logs — S3 access logs record requests made to an S3 bucket, not VPC network traffic.
C) Route 53 resolver query logs — These log DNS queries resolved by Route 53 Resolver, not general IP traffic flows.
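A minimal sketch of enabling flow logs for a VPC with an S3 destination; the VPC ID and bucket ARN are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Capture ACCEPT and REJECT records for all traffic in the VPC and
# deliver them to S3 for later analysis (e.g., with Athena).
ec2.create_flow_logs(
    ResourceIds=["vpc-0abc1234def567890"],            # placeholder
    ResourceType="VPC",
    TrafficType="ALL",                                # ACCEPT | REJECT | ALL
    LogDestinationType="s3",
    LogDestination="arn:aws:s3:::example-flow-logs",  # placeholder
)
```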
Q35.A security engineer needs to determine whether traffic to a specific port on an EC2 instance is being rejected by the security group or the network ACL. Where should they look?
✓ Correct: A. VPC Flow Logs show whether traffic was accepted or rejected at the network level.
Why A is correct: VPC Flow Logs record each network flow with an action field showing ACCEPT or REJECT. By analyzing flow logs at the network interface level, you can see rejected traffic. If both security group and NACL are involved, flow logs show the combined effect. Traffic rejected by security groups shows REJECT, and traffic rejected by NACLs also shows REJECT in the flow log records.
Why others are wrong:
B) CloudTrail management events — CloudTrail records API calls to modify security groups, not whether specific traffic is being accepted or rejected.
C) GuardDuty findings — GuardDuty detects threats but does not show per-packet accept/reject decisions by security groups or NACLs.
D) Config compliance history — Config tracks configuration changes to security groups over time but does not show real-time traffic flow decisions.
Q36.A team needs to trigger a Lambda function automatically whenever a new object is uploaded to an S3 bucket. Which feature should they use?
✓ Correct: C. S3 Event Notifications trigger actions on bucket events like object creation.
Why C is correct: S3 Event Notifications can be configured to send notifications when specific events occur in an S3 bucket, such as s3:ObjectCreated:*. Supported destinations include Lambda functions, SQS queues, SNS topics, and Amazon EventBridge. This is the standard way to trigger automated processing when new objects are uploaded.
Why others are wrong:
A) S3 access logging — Access logging records requests to the bucket in a log file but does not trigger real-time actions.
B) S3 replication rules — Replication copies objects to another bucket but does not invoke Lambda functions.
D) S3 lifecycle policies — Lifecycle policies transition or expire objects based on age, not trigger functions on upload.
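A sketch of wiring the notification described above; the bucket name and function ARN are placeholders, and the Lambda function must separately grant s3.amazonaws.com permission to invoke it.

```python
import boto3

s3 = boto3.client("s3")

# Invoke a Lambda function for every new object in the bucket.
s3.put_bucket_notification_configuration(
    Bucket="example-upload-bucket",  # placeholder
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [{
            "LambdaFunctionArn": "arn:aws:lambda:us-east-1:111122223333:function:process-upload",
            "Events": ["s3:ObjectCreated:*"],
        }]
    },
)
```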
Q37.What is the key difference between S3 access logs and CloudTrail S3 data events?
✓ Correct: B. S3 access logs are best-effort; CloudTrail data events are reliable.
Why B is correct: S3 server access logging is delivered on a best-effort basis and may not include every request. Logs can be delayed and some records may be missing. CloudTrail S3 data events provide reliable, complete logging of S3 object-level API operations with guaranteed delivery. CloudTrail is the recommended choice when completeness is required.
Why others are wrong:
A) Real-time vs delayed — This is backwards. S3 access logs can be delayed by hours, while CloudTrail typically delivers events within minutes.
C) API vs HTTP details — S3 access logs actually include HTTP-level details (requester IP, HTTP status), while CloudTrail captures API-level details (IAM identity, API action).
D) Access logs cost more — S3 access logging is free (you only pay for log storage). CloudTrail data events have per-event charges, making CloudTrail more expensive.
Q38.A company wants to log all DNS queries made by resources in their VPC, including queries to private hosted zones. Which service should they enable?
✓ Correct: A. Route 53 Resolver query logging captures DNS queries made within a VPC.
Why A is correct: Route 53 Resolver query logging records DNS queries made by resources within your VPC, including queries to public hosted zones, private hosted zones, and external DNS resolvers. Logs can be sent to CloudWatch Logs, S3, or Kinesis Data Firehose. This is the purpose-built service for VPC DNS query visibility.
Why others are wrong:
B) VPC Flow Logs — Flow Logs capture IP traffic metadata (source, destination, port, protocol) but do not capture DNS query content or domain names.
C) CloudTrail Route 53 data events — CloudTrail logs Route 53 API calls (like creating hosted zones) but does not log individual DNS resolution queries.
D) GuardDuty DNS monitoring — GuardDuty analyzes DNS logs to detect threats but does not provide direct access to DNS query logs for the customer.
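A sketch of enabling Resolver query logging to CloudWatch Logs and associating it with a VPC; the log group ARN, VPC ID, and names are placeholders.

```python
import boto3

r53r = boto3.client("route53resolver")

# Create the query log configuration, then attach it to a VPC.
cfg = r53r.create_resolver_query_log_config(
    Name="vpc-dns-query-logs",  # placeholder
    DestinationArn="arn:aws:logs:us-east-1:111122223333:log-group:dns-queries",
    CreatorRequestId="dns-logging-001",  # idempotency token
)
r53r.associate_resolver_query_log_config(
    ResolverQueryLogConfigId=cfg["ResolverQueryLogConfig"]["Id"],
    ResourceId="vpc-0abc1234def567890",  # placeholder VPC
)
```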
Q39.A security team needs to run ad-hoc SQL queries against large volumes of CloudTrail logs stored in S3 without provisioning any servers. Which service should they use?
✓ Correct: D. Athena is a serverless SQL query service for data stored in S3.
Why D is correct: Amazon Athena is a serverless, interactive query service that lets you analyze data in S3 using standard SQL. It is ideal for ad-hoc querying of CloudTrail logs. You simply define a table schema pointing to the S3 location of your CloudTrail logs and run SQL queries. There is no infrastructure to provision or manage, and you pay only per query based on data scanned.
Why others are wrong:
A) Amazon RDS — RDS requires provisioning a database instance and loading data into it. It is not serverless for S3 log analysis.
B) Amazon Redshift — Redshift is a data warehouse that requires cluster provisioning (though Redshift Serverless exists, Athena is the simpler choice for S3 log querying).
C) Amazon OpenSearch — OpenSearch requires provisioning a domain/cluster and ingesting data. It is not serverless in the same way as Athena for direct S3 querying.
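For example, assuming a table named cloudtrail_logs has already been defined over the log location, an ad-hoc hunt for AccessDenied errors might look like this (the database and result bucket names are placeholders):

```python
import boto3

athena = boto3.client("athena")

# Find the identities generating the most AccessDenied errors.
athena.start_query_execution(
    QueryString="""
        SELECT useridentity.arn, eventname, COUNT(*) AS denied_calls
        FROM cloudtrail_logs
        WHERE errorcode = 'AccessDenied'
        GROUP BY useridentity.arn, eventname
        ORDER BY denied_calls DESC
        LIMIT 20
    """,
    QueryExecutionContext={"Database": "security_logs"},  # placeholder
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
```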
Q40.A company uses Amazon OpenSearch Service for centralized log analysis. Which TWO are valid use cases for OpenSearch in a security context? (SELECT TWO)
✓ Correct: B, D. OpenSearch is used for log search, visualization, and dashboards.
Why B is correct: OpenSearch Service with OpenSearch Dashboards (formerly Kibana) can ingest VPC Flow Logs (via Kinesis or Lambda) and create real-time dashboards for network traffic analysis, helping security teams visualize traffic patterns and anomalies.
Why D is correct: OpenSearch is commonly used for threat hunting by ingesting CloudTrail logs and enabling fast full-text search, filtering, and visualization. Security analysts can investigate suspicious API calls, unusual user activity, and access patterns.
Why others are wrong:
A) Remediating Config rules — Remediation of Config rules is done through SSM Automation documents, not OpenSearch.
C) Encrypting S3 data — OpenSearch is a search and analytics engine; encryption is handled by S3 and KMS.
E) Scanning for vulnerabilities — Vulnerability scanning is performed by Amazon Inspector, not OpenSearch.
Q41.GuardDuty generates a finding Recon:EC2/PortProbeUnprotectedPort. What does this indicate?
✓ Correct: C. This finding means an exposed port on EC2 is being probed by a potentially malicious actor.
Why C is correct: The finding Recon:EC2/PortProbeUnprotectedPort indicates that an unprotected port on an EC2 instance (open to 0.0.0.0/0) is being probed by a known malicious IP address. This is a reconnaissance finding suggesting someone is scanning for vulnerable services on the instance.
Why others are wrong:
A) EC2 scanning external IPs — This would be an outbound scanning finding like Recon:EC2/Portscan, not a PortProbeUnprotectedPort finding, which is about inbound probing.
B) Outbound traffic blocked — NACL blocking is not detected by GuardDuty as a security finding. Flow logs would show this.
D) No security group — Every EC2 instance must have at least one security group. This finding is about a port being open, not about missing security groups.
Q42.GuardDuty is enabled but not generating any findings. The security team verified that VPC Flow Logs and CloudTrail are active. What is the MOST likely explanation?
✓ Correct: B. No findings means no suspicious activity has been detected.
Why B is correct: GuardDuty generates findings only when it detects suspicious or malicious activity. If the account has no threats, no anomalous behavior, and no known malicious IP interactions, there will be no findings. This is normal and expected behavior. You can generate sample findings in the console to verify the service is working correctly.
Why others are wrong:
A) Manual generation required — GuardDuty automatically generates findings when threats are detected. No manual action is needed.
C) 30-day waiting period — GuardDuty can generate findings immediately after enabling, though some machine learning baselines may take 7-14 days to establish for certain finding types.
D) Service-linked role not created — The service-linked role is automatically created when GuardDuty is enabled. If it failed, GuardDuty would show an error, not silently fail.
Q43.A security analyst sees a Security Hub finding with a compliance status of FAILED for the control “S3 buckets should have server-side encryption enabled.” What generated this finding?
✓ Correct: A. Compliance checks are run by Security Hub's enabled security standards.
Why A is correct: Security Hub runs automated security checks as part of its enabled security standards (e.g., AWS Foundational Security Best Practices, CIS). These checks evaluate resource configurations against controls and report PASSED or FAILED status. The S3 encryption check is a standard control in the AWS FSBP standard. Security Hub uses AWS Config rules behind the scenes to evaluate these controls.
Why others are wrong:
B) Inspector vulnerability scan — Inspector scans for software vulnerabilities in EC2, containers, and Lambda, not S3 bucket configuration compliance.
C) GuardDuty threat detection — GuardDuty detects active threats and suspicious activity, not configuration compliance issues like missing encryption.
D) Macie data discovery — Macie discovers sensitive data in S3 objects. It does not check whether S3 server-side encryption is enabled.
Q44.A Security Hub insight shows the “Top accounts with the most findings.” What is a Security Hub insight?
✓ Correct: D. Security Hub insights are saved, grouped collections of findings.
Why D is correct: A Security Hub insight is a collection of related findings that are grouped by a grouping attribute (such as AWS account ID, resource type, or severity). Security Hub provides managed insights (like “Top accounts with the most findings”) and you can create custom insights using filters. Insights help prioritize security issues across your environment.
Why others are wrong:
A) Anomaly detection engine — Security Hub does not perform anomaly detection. That is the role of services like GuardDuty and CloudTrail Insights.
B) Automated remediation workflow — Insights display grouped findings; they do not trigger remediation. Custom actions handle automated workflows.
C) Third-party connector — Insights are a reporting feature within Security Hub, not an integration mechanism. Third-party integrations are configured separately.
Q45.A company wants to send CloudWatch Logs data to a centralized account in near real time. Which TWO are valid subscription filter destinations? (SELECT TWO)
✓ Correct: A, D. Kinesis Data Streams and Lambda are valid subscription filter destinations.
Why A is correct: Amazon Kinesis Data Streams is a supported destination for CloudWatch Logs subscription filters. It enables real-time streaming of log data, including cross-account streaming to a centralized logging account.
Why D is correct: AWS Lambda is a supported subscription filter destination. Lambda functions can process log events in near real time, transform them, and forward them to any downstream destination.
Why others are wrong:
B) S3 bucket — S3 is not a direct subscription filter destination. You can export logs to S3 using the CreateExportTask API, but this is a batch operation, not a real-time subscription.
C) DynamoDB table — DynamoDB is not a supported subscription filter destination.
E) RDS database — RDS is not a supported subscription filter destination.
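A same-account sketch of a subscription filter streaming to Kinesis; for cross-account delivery you would instead point destinationArn at a CloudWatch Logs destination created in the receiving account. All names and ARNs here are placeholders.

```python
import boto3

logs = boto3.client("logs")

# Forward every event from the log group to a Kinesis data stream in
# near real time. The role lets CloudWatch Logs write to the stream.
logs.put_subscription_filter(
    logGroupName="/app/security-events",  # placeholder
    filterName="to-central-logging",
    filterPattern="",                     # empty pattern = all events
    destinationArn="arn:aws:kinesis:us-east-1:111122223333:stream/central-logs",
    roleArn="arn:aws:iam::111122223333:role/CWLtoKinesisRole",
)
```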
Q46.A security team wants to export CloudWatch Logs to S3 for long-term archival. The export takes up to 12 hours. Which API call is used?
✓ Correct: C. CreateExportTask exports CloudWatch Logs data to S3.
Why C is correct: The CreateExportTask API creates an export task to copy log data from a CloudWatch Logs log group to an S3 bucket. This is a batch operation that can take up to 12 hours to complete. Only one export task per account can run at a time. For near real-time delivery to S3, you would use subscription filters with Kinesis Data Firehose instead.
Why others are wrong:
A) PutSubscriptionFilter — This creates a subscription filter for real-time log streaming to Lambda, Kinesis, or Firehose, not a batch export to S3.
B) CreateLogStream — This creates a new log stream within a log group. It does not export data.
D) PutLogEvents — This writes log events to a log stream. It is for ingesting data into CloudWatch Logs, not exporting it.
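A sketch of the batch export; the log group, bucket, and dates are placeholders, and the bucket policy must allow CloudWatch Logs to write to it.

```python
import boto3
from datetime import datetime, timezone

logs = boto3.client("logs")

def epoch_ms(dt: datetime) -> int:
    """CreateExportTask expects timestamps in epoch milliseconds."""
    return int(dt.timestamp() * 1000)

# Export one day of logs to S3; only one export task runs at a time.
logs.create_export_task(
    taskName="archive-2024-06-01",  # placeholder
    logGroupName="/app/security-events",
    fromTime=epoch_ms(datetime(2024, 6, 1, tzinfo=timezone.utc)),
    to=epoch_ms(datetime(2024, 6, 2, tzinfo=timezone.utc)),
    destination="example-log-archive-bucket",
    destinationPrefix="cloudwatch/security-events",
)
```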
Q47.Which GuardDuty finding type indicates that an EC2 instance may be compromised and is communicating with a known command-and-control server?
✓ Correct: A. Backdoor:EC2/C&CActivity.B indicates C&C communication from an EC2 instance.
Why A is correct: The finding Backdoor:EC2/C&CActivity.B indicates that an EC2 instance is communicating with a known command-and-control (C2) server. This is a high-severity finding suggesting the instance is compromised and may be part of a botnet or being remotely controlled by an attacker.
Why others are wrong:
B) Recon:EC2/Portscan — This indicates the EC2 instance is scanning ports on other hosts, which is a reconnaissance activity, not C&C communication.
C) Policy:IAMUser/RootCredentialUsage — This finding alerts when root account credentials are used, not EC2 C&C activity.
D) UnauthorizedAccess:S3/TorIPCaller — This indicates S3 API calls from a Tor exit node IP, not EC2 C&C communication.
Q48.An Amazon Inspector scan reveals a finding with severity “Critical” for CVE-2021-44228 (Log4Shell) on an EC2 instance. What type of vulnerability did Inspector detect?
✓ Correct: B. Inspector detected a known CVE in installed software packages.
Why B is correct: Amazon Inspector scans installed software packages on EC2 instances and compares them against known CVEs (Common Vulnerabilities and Exposures). CVE-2021-44228 (Log4Shell) is a critical remote code execution vulnerability in Apache Log4j. Inspector identified this vulnerability by scanning the software inventory collected via SSM Agent.
Why others are wrong:
A) Network reachability — While Inspector does check network reachability, CVE-2021-44228 is a software vulnerability, not a network configuration issue.
C) Misconfigured IAM role — Inspector does not evaluate IAM role configurations. IAM analysis is done by IAM Access Analyzer or AWS Config.
D) Unencrypted EBS volume — EBS encryption checks are done by AWS Config or Security Hub controls, not Inspector CVE scanning.
Q49.A CloudWatch Agent on an EC2 instance is not sending logs to CloudWatch Logs. Which is the MOST likely cause to investigate first?
✓ Correct: D. Missing IAM permissions is the most common cause of CloudWatch Agent delivery failures.
Why D is correct: The CloudWatch Agent requires IAM permissions to write logs to CloudWatch Logs. The instance profile must include the CloudWatchAgentServerPolicy managed policy (or equivalent permissions like logs:CreateLogGroup, logs:CreateLogStream, logs:PutLogEvents). Missing permissions is the most common cause of agent delivery failures.
Why others are wrong:
A) GuardDuty not enabled — GuardDuty is independent of CloudWatch Agent functionality. It has no impact on log delivery.
B) Log group retention policy — A retention policy controls how long logs are kept but does not prevent new logs from being delivered.
C) VPC flow logs — VPC Flow Logs are unrelated to the CloudWatch Agent. The agent sends application and OS logs, not network flow data.
Q50.A company wants to detect sensitive data exposure in their S3 buckets. Which TWO Macie features help identify sensitive data? (SELECT TWO)
✓ Correct: C, E. Managed and custom data identifiers are the core detection mechanisms in Macie.
Why C is correct: Managed data identifiers are built-in detection patterns provided by AWS that detect common sensitive data types such as credit card numbers, Social Security numbers, passport numbers, API keys, and other PII. They are automatically available and maintained by AWS.
Why E is correct: Custom data identifiers allow you to create your own detection rules using regular expressions and keyword matching to find business-specific sensitive data that managed identifiers do not cover.
Why others are wrong:
A) Suppression rules — Suppression rules hide findings from view but do not identify sensitive data.
B) Allow lists — Allow lists define patterns to ignore (reduce false positives), not to detect sensitive data.
D) Bucket policies — Macie evaluates bucket policies for security posture but “Macie bucket policies” is not a feature for sensitive data identification.
Q51.An EventBridge rule is configured to capture all EC2 instance state-change events. Which event detail field indicates the new state of the instance?
✓ Correct: B. The detail.state field contains the new state of the instance.
Why B is correct: In an EC2 Instance State-change Notification event, the detail object contains the state field that indicates the new state (e.g., pending, running, stopping, stopped, shutting-down, terminated). EventBridge rules can match on this field to trigger actions for specific state changes.
Why others are wrong:
A) source field — The source field identifies the AWS service that generated the event, not the instance state.
C) detail-type field — The detail-type field identifies the type of event, not the specific state change value.
D) region field — The region field indicates where the event occurred, not the instance state.
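A sketch of a rule matching only instances that enter the stopped state; the rule name is a placeholder, and a target would still need to be attached with put_targets.

```python
import json
import boto3

events = boto3.client("events")

# Match EC2 state-change events where detail.state is "stopped".
events.put_rule(
    Name="ec2-stopped-instances",  # placeholder
    EventPattern=json.dumps({
        "source": ["aws.ec2"],
        "detail-type": ["EC2 Instance State-change Notification"],
        "detail": {"state": ["stopped"]},
    }),
)
```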
Q52.A security engineer configures AWS Config to send notifications when any resource becomes non-compliant. Which service delivers these notifications?
✓ Correct: A. AWS Config uses SNS topics for delivering notifications.
Why A is correct: AWS Config can be configured to send notifications to an Amazon SNS topic when configuration changes occur and when compliance evaluation results change. You can also use EventBridge to capture Config compliance change events, but the native Config notification mechanism uses SNS.
Why others are wrong:
B) SQS — SQS is not a direct notification target for AWS Config. Config sends to SNS, which can then fan out to SQS if needed.
C) Lambda — Lambda is not a direct notification destination for Config. Config custom rules use Lambda for evaluation, but notifications go through SNS or EventBridge.
D) Kinesis — Kinesis is not a notification destination for AWS Config.
Q53.A Macie finding shows that an S3 bucket is publicly accessible and contains objects with credit card numbers. What type of Macie finding is this?
✓ Correct: C. Macie generates both policy findings and sensitive data findings.
Why C is correct: Amazon Macie generates two categories of findings: policy findings (changes to bucket policies, access settings, or encryption that reduce security) and sensitive data findings (discovery of sensitive data in S3 objects). In this scenario, the public bucket generates a policy finding, and the credit card numbers generate a sensitive data finding.
Why others are wrong:
A) Policy finding only — The public access issue generates a policy finding, but the credit card numbers in objects generate a separate sensitive data finding.
B) Custom data identifier finding — Credit card numbers are detected by Macie's managed data identifiers, not custom ones. And this does not account for the public bucket finding.
D) Security Hub compliance finding — Macie generates its own findings. Security Hub receives Macie findings, not the other way around.
Q54.A company needs to detect when CloudTrail logging is disabled in any account. Which TWO services can detect this event? (SELECT TWO)
✓ Correct: B, E. EventBridge and AWS Config can both detect CloudTrail being disabled.
Why B is correct: You can create an EventBridge rule that matches CloudTrail StopLogging API calls. When someone disables a CloudTrail trail, CloudTrail records the API call, and EventBridge can trigger an alert or automated response immediately.
Why E is correct: The AWS Config managed rule cloud-trail-enabled continuously evaluates whether CloudTrail is enabled. When CloudTrail logging is stopped, the rule marks the resource as non-compliant, generating an alert.
Why others are wrong:
A) Macie — Macie discovers sensitive data in S3 and does not monitor CloudTrail operational status.
C) Inspector — Inspector scans for software vulnerabilities, not CloudTrail configuration changes.
D) Detective — Detective investigates security findings but does not proactively detect configuration changes.
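A sketch of the EventBridge side of approach B; API calls reach EventBridge with the detail-type "AWS API Call via CloudTrail". The rule name is a placeholder, and an alert target (SNS, Lambda) would be attached separately with put_targets.

```python
import json
import boto3

events = boto3.client("events")

# Fire whenever anyone calls StopLogging on a CloudTrail trail.
events.put_rule(
    Name="cloudtrail-stop-logging-alert",  # placeholder
    EventPattern=json.dumps({
        "source": ["aws.cloudtrail"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventSource": ["cloudtrail.amazonaws.com"],
            "eventName": ["StopLogging"],
        },
    }),
)
```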
Q55.VPC Flow Logs can be published to which destinations?
✓ Correct: B. VPC Flow Logs support CloudWatch Logs, S3, and Kinesis Data Firehose.
Why B is correct: VPC Flow Logs can be published to three destinations: Amazon CloudWatch Logs (for real-time monitoring and metric filters), Amazon S3 (for long-term storage and analysis with Athena), and Amazon Kinesis Data Firehose (for streaming to destinations like OpenSearch, Splunk, or S3 with transformation).
Why others are wrong:
A) RDS and DynamoDB — VPC Flow Logs cannot be published directly to RDS or DynamoDB.
C) SQS and Lambda — SQS and Lambda are not direct destinations for VPC Flow Logs.
D) Redshift and OpenSearch — These are not direct destinations. You can get flow logs to OpenSearch through Kinesis Data Firehose, but not directly.
Q56.An S3 bucket is configured with event notifications to send to an SNS topic when objects are deleted. Which event type should be specified?
✓ Correct: A. The s3:ObjectRemoved:* event type covers object deletions.
Why A is correct: The s3:ObjectRemoved:* event type matches all object deletion events, including s3:ObjectRemoved:Delete (permanent delete) and s3:ObjectRemoved:DeleteMarkerCreated (delete marker in a versioned bucket). This is the correct event type for detecting when objects are deleted from the bucket.
Why others are wrong:
B) s3:ObjectCreated:* — This event type triggers when objects are created (uploaded), not when they are deleted.
C) s3:Replication:* — Replication events are for cross-region or same-region replication status, not deletions.
D) s3:LifecycleExpiration:* — Lifecycle expiration events fire when lifecycle rules expire objects, which is a different mechanism than direct deletion.
Q57.A company has enabled GuardDuty Malware Protection. When does GuardDuty initiate an EBS malware scan?
✓ Correct: D. GuardDuty-initiated malware scans are triggered by suspicious findings.
Why D is correct: GuardDuty Malware Protection initiates an EBS volume scan when it detects a GuardDuty finding that indicates potential malware on an EC2 instance (e.g., C&C communication, cryptocurrency mining). GuardDuty creates a snapshot of the EBS volumes and scans the snapshot for malware, without impacting the running instance. On-demand scans can also be initiated manually.
Why others are wrong:
A) Fixed daily schedule — GuardDuty-initiated malware scans are not scheduled. They are triggered by suspicious findings.
B) New EBS volume creation — EBS volume creation does not automatically trigger a malware scan.
C) Manual console trigger — While on-demand scans exist, the GuardDuty-initiated malware protection feature automatically scans when suspicious activity is detected, not only through manual action.
Q58.A security team wants to detect API calls made using root account credentials. Which TWO approaches can detect root credential usage? (SELECT TWO)
✓ Correct: A, C. GuardDuty and CloudWatch metric filters on CloudTrail logs can both detect root usage.
Why A is correct: GuardDuty generates the Policy:IAMUser/RootCredentialUsage finding when root account credentials are used to make API requests. This is an automatic detection that requires no additional configuration beyond enabling GuardDuty.
Why C is correct: By sending CloudTrail logs to CloudWatch Logs, you can create a metric filter that matches the pattern { $.userIdentity.type = "Root" }. When root credentials are used, the filter increments a metric, which can trigger a CloudWatch Alarm.
Why others are wrong:
B) Inspector network reachability — Inspector checks for network exposure vulnerabilities, not IAM credential usage.
D) Macie sensitive data — Macie discovers sensitive data in S3, not IAM credential usage.
E) VPC Flow Logs — Flow Logs capture network traffic metadata, not API call identity information.
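A sketch of the metric filter half of approach C; the log group and metric names are placeholders, and a CloudWatch alarm on the resulting metric completes the detection.

```python
import boto3

logs = boto3.client("logs")

# Count every CloudTrail event made with root credentials.
logs.put_metric_filter(
    logGroupName="/cloudtrail/management-events",  # placeholder
    filterName="root-credential-usage",
    filterPattern='{ $.userIdentity.type = "Root" }',
    metricTransformations=[{
        "metricName": "RootApiCalls",   # placeholder
        "metricNamespace": "Security",  # placeholder
        "metricValue": "1",             # +1 per matching event
    }],
)
```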
Q59.A security team wants to manage Amazon Macie across all accounts in their AWS Organization. What is the recommended approach?
✓ Correct: C. Use a delegated administrator for centralized Macie management.
Why C is correct: Amazon Macie supports AWS Organizations integration with a delegated administrator account. The delegated admin can enable Macie for member accounts, manage discovery jobs across accounts, and view aggregated findings. This follows the same multi-account pattern as GuardDuty and Security Hub.
Why others are wrong:
A) Management account only — Enabling Macie in the management account does not automatically enable it in member accounts. Each account needs Macie enabled, and the management account should not be the admin for day-to-day operations.
B) CloudFormation per account — While possible, Macie has native Organizations integration that is simpler and provides centralized management.
D) Security Hub to enable Macie — Security Hub aggregates findings from Macie but does not have the ability to enable or configure Macie in other accounts.
Q60.A company needs to monitor for configuration drift and ensure ongoing compliance across their AWS environment. Which TWO services provide continuous configuration monitoring and compliance evaluation? (SELECT TWO)
✓ Correct: B, D. AWS Config and Security Hub provide continuous compliance monitoring.
Why B is correct: AWS Config continuously monitors and records AWS resource configurations. Config rules evaluate resources against desired configurations and flag non-compliant resources. It also maintains a configuration history so you can track configuration drift over time.
Why D is correct: AWS Security Hub runs automated security checks against enabled security standards (CIS, FSBP, PCI DSS, NIST) and provides a continuous compliance score. It uses AWS Config rules behind the scenes and aggregates compliance findings across accounts and regions.
Why others are wrong:
A) GuardDuty — GuardDuty detects active threats and suspicious behavior, not configuration compliance or drift.
C) Detective — Detective is for investigating security incidents, not monitoring configuration compliance.
E) Kinesis Data Streams — Kinesis is a data streaming service for real-time data processing, not a compliance monitoring tool.
Domain 2 — Incident Response (60 Questions)
Q1.A security engineer needs to perform a penetration test against their company's AWS-hosted web application running on EC2 instances. Which statement about AWS penetration testing is correct?
✓ Correct: B. AWS allows penetration testing on permitted services without prior approval.
Why B is correct: AWS permits customers to perform penetration testing against a number of AWS services without prior approval. EC2 is one of the permitted services for penetration testing. The customer is responsible for ensuring tests stay within the allowed scope.
Why others are wrong:
A) Requires approval form — AWS removed the requirement for prior approval for penetration testing on permitted services. No request form is needed.
C) Prohibited on all services — Penetration testing is allowed on specific permitted services including EC2, RDS, Aurora, CloudFront, API Gateway, Lambda, Lightsail, and Elastic Beanstalk.
D) Only AWS-certified vendors — Any customer can perform penetration testing on their own resources for the permitted services. No special certification is required.
Q2.Which of the following services is NOT on the list of AWS-permitted services for penetration testing?
✓ Correct: C. Route 53 is not on the permitted penetration testing list.
Why C is correct: Amazon Route 53 is a DNS service and is not included in the list of AWS services permitted for penetration testing. The permitted services include EC2, NAT Gateways, ELB, RDS, CloudFront, Aurora, API Gateway, Lambda, Lightsail, and Elastic Beanstalk environments.
Why others are wrong:
A) EC2 instances — EC2 is explicitly listed as a permitted service for penetration testing.
B) Amazon RDS — RDS is included in the permitted services list for penetration testing.
D) AWS Lambda — Lambda functions and their endpoints are included in the permitted services list.
Q3.A company wants to test the resilience of their AWS infrastructure against DDoS attacks. Which activity is explicitly prohibited during penetration testing on AWS?
✓ Correct: D. DNS zone walking is a prohibited activity during AWS penetration testing.
Why D is correct: AWS explicitly prohibits DNS zone walking via Route 53 Hosted Zones as part of penetration testing. Other prohibited activities include DDoS attacks, port flooding, protocol flooding, and request flooding. These activities could affect other AWS customers sharing the infrastructure.
Why others are wrong:
A) Port scanning EC2 — Port scanning of your own EC2 instances is permitted as part of penetration testing.
B) Vulnerability scanning — Vulnerability scanning of your own web applications on permitted services is allowed.
C) SQL injection testing — Testing for SQL injection against your own RDS instances is a permitted activity.
Q4.A company wants to perform DDoS simulation testing on their AWS resources. What is the correct approach?
✓ Correct: A. DDoS simulation testing must be performed through approved AWS partners.
Why A is correct: AWS allows DDoS simulation testing only through approved AWS DDoS test partners. These partners follow AWS-approved methodologies and ensure the testing does not impact other customers. The DDoS Simulation Testing policy requires using pre-approved vendors.
Why others are wrong:
B) Perform directly — DDoS attacks are explicitly listed as a prohibited activity during penetration testing. You cannot perform them yourself even on your own resources.
C) Submit support ticket — You do not request DDoS capabilities through a support ticket. You must use an approved partner.
D) Not possible — DDoS simulation testing is possible but only through approved AWS DDoS test partners.
Q5.GuardDuty has detected that an EC2 instance may be compromised. What should be the FIRST step in your incident response?
✓ Correct: B. The first step is to isolate the compromised instance.
Why B is correct: When an EC2 instance is compromised, the first priority is containment. You should isolate the instance by attaching a restrictive security group that blocks all inbound and outbound traffic. This prevents the attacker from causing further damage or communicating with command-and-control servers while preserving evidence for forensics.
Why others are wrong:
A) Terminate immediately — Terminating the instance destroys forensic evidence including memory contents and volatile data needed for investigation.
C) Take EBS snapshot — While taking a snapshot is important for forensics, isolation should come first to prevent further damage. Snapshots are a later step.
D) Contact AWS support — Waiting for AWS support wastes critical time. You should take immediate containment actions first.
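A sketch of the containment step; the instance and security group IDs are placeholders. The quarantine group should be pre-created with no inbound rules and its default outbound rule removed; note that because security groups are stateful, existing tracked connections can persist until they close.

```python
import boto3

ec2 = boto3.client("ec2")

# Replace all security groups on the instance with a quarantine group
# that permits no traffic. The instance keeps running, so volatile
# memory is preserved for forensics.
ec2.modify_instance_attribute(
    InstanceId="i-0abc1234def567890",   # placeholder
    Groups=["sg-0123456789abcdef0"],    # placeholder quarantine SG
)
```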
Q6.A security team is responding to a compromised EC2 instance. After isolating the instance, which TWO actions should they take for forensic analysis? (SELECT TWO)
✓ Correct: B, D. EBS snapshots and memory capture are essential for forensic analysis.
Why B is correct: Creating an EBS snapshot preserves the disk state including file system artifacts, logs, malware, and other evidence. This snapshot can be attached to a forensic workstation for offline analysis.
Why D is correct: Capturing a memory dump preserves volatile data like running processes, network connections, encryption keys, and malicious code that only exists in RAM. This data is lost if the instance is stopped or terminated.
Why others are wrong:
A) Restart the instance — Restarting destroys volatile memory contents and may trigger anti-forensics mechanisms planted by attackers.
C) Delete IAM role — You should not delete the IAM role. Instead, you can add a deny-all policy or revoke sessions. Deleting the role may affect other resources.
E) Modify instance type — Changing the instance type has no security benefit and requires stopping the instance, which destroys memory evidence.
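As a concrete illustration of the snapshot half of this answer, a short boto3 sketch that snapshots every volume on the isolated instance and tags the evidence (the instance ID and case tag are hypothetical):

import boto3

ec2 = boto3.client("ec2")
INSTANCE_ID = "i-0123456789abcdef0"  # hypothetical

# Snapshot every EBS volume attached to the isolated instance and tag the
# snapshots so the evidence can be traced back to the incident.
volumes = ec2.describe_volumes(
    Filters=[{"Name": "attachment.instance-id", "Values": [INSTANCE_ID]}]
)["Volumes"]

for vol in volumes:
    ec2.create_snapshot(
        VolumeId=vol["VolumeId"],
        Description=f"Forensic evidence from {INSTANCE_ID}",
        TagSpecifications=[{
            "ResourceType": "snapshot",
            "Tags": [{"Key": "Case", "Value": "IR-2024-001"}],  # placeholder case ID
        }],
    )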
Q7.During EC2 forensics, a security engineer needs to capture the memory of a running Linux instance. Which approach should they use?
✓ Correct: C. LiME is the standard tool for Linux memory capture in forensics.
Why C is correct: LiME (Linux Memory Extractor) is a loadable kernel module that allows acquisition of volatile memory from a running Linux system. It must be run while the instance is still running to capture the current state of memory, including running processes, network connections, and in-memory artifacts.
Why others are wrong:
A) EC2 console download — The EC2 console does not provide a native feature to download instance memory contents.
B) Stop and snapshot — Stopping the instance destroys volatile memory. EBS snapshots only capture disk contents, not RAM.
D) Systems Manager — SSM does not have a built-in feature for memory capture. You would need to run LiME via SSM Run Command as a manual approach.
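Option D hints at how this is often operationalized: pushing LiME through SSM Run Command so no interactive login touches the instance. A rough sketch, assuming the LiME module was pre-built for the instance's exact kernel and staged at the path shown (both are assumptions):

import boto3

ssm = boto3.client("ssm")

# Load the pre-built LiME module and dump RAM to a local file. Module path,
# output path, and instance ID are hypothetical; the module must be
# compiled for the exact kernel running on the target.
ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": [
        "insmod /opt/lime/lime.ko 'path=/forensics/memdump.lime format=lime'",
    ]},
)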
Q8.A security team needs to perform disk forensics on a compromised EC2 instance. What is the recommended approach for examining the EBS volume?
✓ Correct: A. Create a snapshot, build a new volume, and analyze in an isolated forensic environment.
Why A is correct: The proper forensic process is to snapshot the EBS volume, create a new volume from that snapshot, and attach it to a dedicated forensic EC2 instance in an isolated VPC. This preserves the original evidence, allows analysis without modifying the original, and prevents any malware from spreading to production systems.
Why others are wrong:
B) SSH into compromised instance — Logging into the compromised instance can alter evidence (file access times, logs) and risks exposure to active malware.
C) Attach to production instance — Attaching a potentially infected volume to a production instance risks spreading malware to production systems.
D) Amazon Inspector — Inspector performs vulnerability assessments and is not a disk forensics tool. It cannot perform detailed forensic analysis of a compromised volume.
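A boto3 sketch of the restore-and-attach half of this workflow (snapshot ID, Availability Zone, and forensic instance ID are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Restore the evidence snapshot as a fresh volume in the forensic VPC's AZ.
volume = ec2.create_volume(
    SnapshotId="snap-0123456789abcdef0",   # placeholder
    AvailabilityZone="us-east-1a",         # must match the forensic instance
)
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

# Attach it as a secondary device on the isolated forensic workstation.
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0fedcba9876543210",      # placeholder forensic instance
    Device="/dev/sdf",
)

Mounting the attached device read-only at the OS level keeps the working copy from drifting from the snapshot.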
Q9.An S3 bucket in your account has been identified as compromised. GuardDuty is showing suspicious API calls accessing the bucket. What should you do FIRST?
✓ Correct: D. Identify the compromised source and restrict its access immediately.
Why D is correct: When an S3 bucket is compromised, you must first identify the source of the compromise — typically a compromised IAM user or role making the suspicious API calls. Then restrict that entity's access by revoking temporary credentials, disabling access keys, or applying a deny policy. Also review the bucket policy and ACLs to close any unauthorized access paths.
Why others are wrong:
A) Delete all objects — Deleting objects destroys your own data and is not an appropriate response. You should restrict access, not destroy data.
B) Enable versioning — While versioning is good practice, it does not address the active compromise. Access must be restricted first.
C) Create new bucket — Migrating to a new bucket does not solve the problem if the compromised credentials still have access. Address the source first.
Q10.When responding to a compromised S3 bucket, which S3 feature should you check to determine if data has been accessed or exfiltrated?
✓ Correct: B. Server access logs and CloudTrail data events provide detailed access records.
Why B is correct: S3 server access logs record all requests made to the bucket, and CloudTrail S3 data events log object-level API operations like GetObject, PutObject, and DeleteObject. Together, these reveal who accessed which objects and when, enabling you to determine the scope of data exfiltration.
Why others are wrong:
A) Transfer Acceleration logs — Transfer Acceleration is a performance feature for uploads. It does not provide security audit logs.
C) Lifecycle policy history — Lifecycle policies automate object transitions and deletions. They do not track user access or data exfiltration.
D) Inventory reports — S3 Inventory provides a list of objects and their metadata but does not show who accessed the objects.
Q11.An ECS cluster has been identified as compromised. A malicious container is running within the cluster. What is the recommended first step?
✓ Correct: C. Isolate the affected tasks to contain the compromise while preserving evidence.
Why C is correct: When an ECS cluster is compromised, the first step is to isolate the affected tasks by modifying the security group to deny all inbound and outbound traffic. This prevents the malicious container from communicating externally while preserving the running state for investigation. You should not stop the tasks before capturing evidence.
Why others are wrong:
A) Delete the cluster — Deleting the cluster destroys all evidence and running containers needed for forensic investigation.
B) Scale to zero — Scaling to zero stops all tasks, including the compromised ones, losing volatile evidence like process memory and network connections.
D) Update container image — Updating the image does not address the current compromise and does not isolate the malicious container.
Q12.A standalone Docker container running on an EC2 instance has been compromised. After isolating the EC2 instance, what should be done with the container?
✓ Correct: A. Pausing the container preserves its state for forensic analysis.
Why A is correct: Using docker pause freezes all processes in the container without stopping it, preserving memory, process state, and network connections. After pausing, you can take a snapshot of the EC2 instance's EBS volume to capture the container's filesystem and state for forensic analysis.
Why others are wrong:
B) docker rm -f — Force removing the container destroys all forensic evidence including the container's filesystem, logs, and process state.
C) Restart Docker daemon — Restarting the daemon stops all containers and destroys volatile evidence needed for investigation.
D) Redeploy container — Redeploying replaces the compromised container, destroying all evidence of the compromise.
Q13.An RDS database instance has been compromised. What is the recommended approach for containment?
✓ Correct: C. Restrict network access, rotate credentials, and review logs.
Why C is correct: For a compromised RDS instance, you should restrict the security group to block unauthorized access, rotate all database credentials (master password and application credentials), and review DB audit logs to understand the scope of the compromise. This contains the threat while preserving the database for investigation.
Why others are wrong:
A) Delete the instance — Deleting the instance destroys evidence and may cause application outages. You should contain and investigate first.
B) Change only master password — Changing only the master password is insufficient. You must also restrict network access and rotate all application credentials.
D) Restore from backup — Restoring from backup may be needed later but should not be the first step. You need to contain and investigate first.
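Two of those containment actions can be combined in a single boto3 call; a sketch with placeholder identifiers (application credentials would still be rotated inside the database engine itself):

import boto3

rds = boto3.client("rds")

# Move the instance onto a restrictive security group and rotate the master
# password in one call; ApplyImmediately skips the maintenance window.
# Identifier, group ID, and password are placeholders.
rds.modify_db_instance(
    DBInstanceIdentifier="prod-db-1",
    VpcSecurityGroupIds=["sg-0aaaabbbbccccdddd"],
    MasterUserPassword="REPLACE-WITH-GENERATED-SECRET",
    ApplyImmediately=True,
)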
Q14.IAM access keys for a developer have been found exposed in a public GitHub repository. What is the FIRST action you should take?
✓ Correct: B. Deactivate the exposed access keys immediately to prevent unauthorized use.
Why B is correct: The most urgent action is to deactivate (disable) the exposed access keys to immediately prevent any unauthorized use. After deactivation, you should investigate CloudTrail logs to see if the keys were used maliciously, then create new keys for the developer. Deactivation is preferred over deletion initially so you can still see the key ID in logs.
Why others are wrong:
A) Delete IAM user — Deleting the user is too aggressive and disrupts the developer's work. Simply deactivate the exposed keys first.
C) Enable MFA — MFA protects console access but does not protect API calls made with access keys. The exposed keys can still be used without MFA.
D) Rotate access keys — Rotation creates new keys but the old exposed keys remain active until explicitly deactivated. Deactivation must come first.
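The deactivation itself is a one-liner in boto3; the user name and key ID below are placeholders:

import boto3

iam = boto3.client("iam")

# Deactivate rather than delete: the key stops working immediately, but its
# key ID remains visible in CloudTrail for the investigation.
iam.update_access_key(
    UserName="dev-user",                 # hypothetical
    AccessKeyId="AKIAIOSFODNN7EXAMPLE",  # the exposed key ID (placeholder)
    Status="Inactive",
)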
Q15.After discovering compromised AWS IAM credentials, which TWO actions should you take after deactivating the compromised access keys? (SELECT TWO)
✓ Correct: A, D. Review CloudTrail and deny all access for the compromised user.
Why A is correct: CloudTrail logs record all API calls made with the compromised credentials. Reviewing them reveals what resources were accessed, created, or modified, helping you determine the blast radius of the compromise.
Why D is correct: Attaching an explicit deny-all policy ensures the user cannot perform any actions even if other policies grant permissions. Revoking active sessions invalidates any temporary credentials that may have been issued before the keys were deactivated.
Why others are wrong:
B) Delete all S3 buckets — This is destructive and unnecessary. You should investigate and secure, not destroy your own resources.
C) Disable CloudTrail — CloudTrail is essential for investigation. Disabling it removes your ability to audit what happened.
E) Shut down all EC2 instances — This causes unnecessary disruption. Only affected resources need attention, not all instances.
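A sketch of the deny-all step from option D, attached as an inline policy so it is easy to find and remove after the investigation (the user name is hypothetical):

import json
import boto3

iam = boto3.client("iam")

# An explicit Deny overrides any Allow the user's other policies grant.
deny_all = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Deny", "Action": "*", "Resource": "*"}],
}

iam.put_user_policy(
    UserName="dev-user",      # hypothetical
    PolicyName="IR-DenyAll",  # easy to locate and remove post-incident
    PolicyDocument=json.dumps(deny_all),
)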
Q16.An IAM role used by an application has been compromised. The role issues temporary security credentials via STS. How do you immediately revoke all active sessions for this role?
✓ Correct: D. Revoke active sessions adds an inline policy that denies all actions for tokens issued before the current time.
Why D is correct: The IAM console provides a "Revoke active sessions" button that attaches an inline policy with an aws:TokenIssueTime condition. This policy denies all actions for any temporary credentials issued before the revocation time, effectively invalidating all existing sessions immediately without affecting future legitimate use.
Why others are wrong:
A) Delete the role — Deleting the role is too disruptive and affects all applications using it. Existing temporary credentials may still work until they expire.
B) Change trust policy — Modifying the trust policy prevents new sessions but does not revoke existing temporary credentials already issued.
C) Remove all policies — While this removes permissions, it is disruptive and does not specifically target the compromised sessions. The revoke sessions approach is more surgical.
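The console button effectively attaches a policy like the one below; a hand-rolled boto3 equivalent might look like this (the role name is a placeholder, and AWSRevokeOlderSessions mirrors the name the console uses):

import json
import boto3
from datetime import datetime, timezone

iam = boto3.client("iam")

# Deny everything for sessions whose temporary credentials were issued
# before this moment; sessions created afterwards are unaffected.
cutoff = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
revoke_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        "Condition": {"DateLessThan": {"aws:TokenIssueTime": cutoff}},
    }],
}

iam.put_role_policy(
    RoleName="app-role",  # placeholder
    PolicyName="AWSRevokeOlderSessions",
    PolicyDocument=json.dumps(revoke_policy),
)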
Q17.A Lambda function has been identified as compromised. The function is processing sensitive data and the security team suspects the code has been modified. What should you do FIRST?
✓ Correct: A. Setting reserved concurrency to zero prevents new invocations while preserving the function for investigation.
Why A is correct: Setting the reserved concurrency to zero effectively throttles the function, preventing any new invocations. This stops the compromised function from executing while preserving the code, configuration, and environment variables for forensic investigation. You should also review the function's IAM role and CloudTrail logs.
Why others are wrong:
B) Delete the function — Deleting destroys the compromised code, making it impossible to analyze what was changed and how the compromise occurred.
C) Update to empty handler — Updating the code overwrites the compromised version, destroying evidence of the malicious modifications.
D) Remove triggers — Removing triggers stops event-driven invocations but the function can still be invoked directly via the API. Setting concurrency to zero is more comprehensive.
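The throttling step is a single API call; a sketch with a hypothetical function name:

import boto3

lam = boto3.client("lambda")

# Reserved concurrency of zero throttles every new invocation while the
# code, configuration, and environment variables stay intact for forensics.
lam.put_function_concurrency(
    FunctionName="suspect-function",  # hypothetical
    ReservedConcurrentExecutions=0,
)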
Q18.When investigating a compromised Lambda function, which of the following should you examine to understand what the function accessed?
✓ Correct: C. The execution role, CloudTrail, and CloudWatch Logs reveal what the function accessed.
Why C is correct: The function's execution role shows what permissions the function had and what resources it could access. CloudTrail logs show the actual AWS API calls the function made, and CloudWatch Logs contain the function's output and any data it logged. Together, these provide a complete picture of the function's activities during the compromise.
Why others are wrong:
A) Lambda Layers — While layers could contain malicious code, they do not show what the function accessed during execution.
B) Provisioned concurrency — Provisioned concurrency is a performance setting and provides no security-relevant information about function activity.
D) Memory and timeout — These are configuration parameters that do not reveal what resources the function accessed.
Q19.An organization receives an email from AWS stating that their account may have compromised resources being used for malicious activities. What type of notification is this?
✓ Correct: B. This is an AWS Abuse Report notification.
Why B is correct: An AWS Abuse Report is sent when AWS detects that your resources may be involved in abusive or malicious activities, such as spam, port scanning, DDoS attacks, or hosting malicious content. AWS contacts the account owner to investigate and remediate. You must respond promptly or risk account suspension.
Why others are wrong:
A) Trusted Advisor — Trusted Advisor provides recommendations for cost optimization, performance, security, and fault tolerance. It does not send abuse notifications.
C) GuardDuty finding — GuardDuty findings appear in the GuardDuty console and can trigger EventBridge events. They are not sent as abuse emails from AWS.
D) Config rule violation — AWS Config tracks configuration compliance. It does not send notifications about resources being used for malicious activities.
Q20.You receive an AWS Abuse Report about your EC2 instances being used to send spam emails. Which steps should you take to address this?
✓ Correct: D. Investigate, contain, and respond with remediation actions.
Why D is correct: When you receive an AWS Abuse Report, you must investigate the affected resources, contain the compromise (isolate or stop the abusive instances), and respond to the abuse report detailing the remediation actions taken. Failure to respond promptly can result in AWS suspending your account. This is a required response, not optional.
Why others are wrong:
A) Ignore the report — Ignoring an abuse report can lead to account suspension. AWS Abuse Reports are legitimate and require action.
B) Only reply confirming receipt — Simply acknowledging the report is insufficient. You must take concrete remediation actions and describe them in your response.
C) Open billing case — A billing case does not address the security issue. You must remediate the compromised resources first.
Q21.A security team wants to automatically respond to GuardDuty findings by isolating compromised EC2 instances. Which AWS service should they use to create this automation?
✓ Correct: A. EventBridge can capture GuardDuty findings and trigger Lambda for automated response.
Why A is correct: Amazon EventBridge natively integrates with GuardDuty and receives findings as events. You can create an EventBridge rule that matches specific GuardDuty finding types and triggers a Lambda function to automatically isolate the instance by modifying its security group. This is the standard pattern for automated incident response on AWS.
Why others are wrong:
B) CloudFormation drift detection — CloudFormation detects configuration drift from templates but cannot respond to GuardDuty findings or perform incident response.
C) CloudWatch Alarms with Auto Scaling — CloudWatch Alarms monitor metrics, not security findings. Auto Scaling does not perform security isolation.
D) X-Ray with anomaly detection — X-Ray is an application tracing service for debugging, not a security automation tool.
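A sketch of that wiring for cryptocurrency-mining findings specifically (the rule name, function ARN, and prefix match are illustrative):

import json
import boto3

events = boto3.client("events")

# Match any EC2 cryptocurrency-mining finding type by prefix.
pattern = {
    "source": ["aws.guardduty"],
    "detail-type": ["GuardDuty Finding"],
    "detail": {"type": [{"prefix": "CryptoCurrency:EC2"}]},
}

events.put_rule(
    Name="guardduty-cryptomining",
    EventPattern=json.dumps(pattern),
)

# The Lambda function (placeholder ARN) performs the actual quarantine,
# e.g. swapping the instance's security groups.
events.put_targets(
    Rule="guardduty-cryptomining",
    Targets=[{
        "Id": "isolate-instance",
        "Arn": "arn:aws:lambda:us-east-1:111122223333:function:IsolateEc2",
    }],
)

The function additionally needs a resource-based permission allowing events.amazonaws.com to invoke it; that step is omitted here.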
Q22.A company wants to automatically remediate non-compliant security group changes. Which combination of services provides automatic detection AND remediation?
✓ Correct: C. AWS Config Rules with SSM Automation provides both detection and automatic remediation.
Why C is correct: AWS Config Rules continuously evaluate resource configurations against desired settings. When a security group becomes non-compliant, the Config Rule detects it and can trigger an SSM Automation document as a remediation action to automatically revert the change. This provides a complete detection-and-remediation pipeline.
Why others are wrong:
A) CloudTrail with SNS — CloudTrail logs API calls and SNS sends notifications, but neither provides automatic remediation of non-compliant resources.
B) Inspector with Lambda — Inspector is for vulnerability assessment, not configuration compliance monitoring. It does not detect security group changes.
D) Trusted Advisor with EventBridge — Trusted Advisor checks are periodic, not real-time, and do not provide the same level of automatic remediation as Config Rules.
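A sketch of attaching an automatic remediation to a Config rule via boto3; the rule name, SSM document, and role ARN are illustrative:

import boto3

config = boto3.client("config")

# Attach an SSM Automation document as the automatic remediation for a
# Config rule evaluating security groups.
config.put_remediation_configurations(
    RemediationConfigurations=[{
        "ConfigRuleName": "restricted-ssh",
        "TargetType": "SSM_DOCUMENT",
        "TargetId": "AWS-DisablePublicAccessForSecurityGroup",
        "Automatic": True,
        "MaximumAutomaticAttempts": 3,
        "RetryAttemptSeconds": 60,
        "Parameters": {
            # RESOURCE_ID passes the non-compliant security group's ID.
            "GroupId": {"ResourceValue": {"Value": "RESOURCE_ID"}},
            "AutomationAssumeRole": {"StaticValue": {
                "Values": ["arn:aws:iam::111122223333:role/ConfigRemediationRole"],
            }},
        },
    }],
)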
Q23.A GuardDuty finding shows UnauthorizedAccess:EC2/RDPBruteForce. What does this finding indicate?
✓ Correct: B. This finding indicates RDP brute force activity involving an EC2 instance.
Why B is correct: The UnauthorizedAccess:EC2/RDPBruteForce finding means that an EC2 instance is involved in an RDP (Remote Desktop Protocol) brute force attack. The instance could be the target (someone is trying to brute force into it) or the actor (the instance is compromised and is attacking other systems). You should check VPC Flow Logs and the finding details to determine the direction.
Why others are wrong:
A) DNS exfiltration — DNS exfiltration would be indicated by a finding type like Trojan:EC2/DNSDataExfiltration, not an RDP-related finding.
C) Unencrypted EBS — Encryption status is a configuration issue checked by AWS Config, not a GuardDuty finding type.
D) Overly permissive IAM role — IAM role permissions are analyzed by IAM Access Analyzer, not GuardDuty's EC2-based findings.
Q24.Which EventBridge event pattern would correctly capture HIGH severity GuardDuty findings?
✓ Correct: A. GuardDuty uses numeric severity values, with HIGH being 7.0 to 8.9.
Why A is correct: GuardDuty findings use numeric severity values: Low (1.0-3.9), Medium (4.0-6.9), and High (7.0-8.9). In EventBridge, you filter on "source": ["aws.guardduty"] and use numeric matching on "detail.severity". The severity is a number, not a string label.
Why others are wrong:
B) SecurityHub source — This uses the wrong event source. GuardDuty findings come from aws.guardduty, not aws.securityhub.
C) String "HIGH" — GuardDuty severity is a numeric value (e.g., 7.5), not a string like "HIGH". Using string matching would not capture any findings.
D) Inspector source — This is the wrong service source. Inspector findings come from aws.inspector, not GuardDuty.
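Expressed as a Python dict ready to serialize into an EventBridge rule, the pattern from option A looks roughly like this:

import json

# EventBridge pattern for HIGH severity findings: numeric range matching
# on detail.severity rather than a string label.
high_severity_pattern = {
    "source": ["aws.guardduty"],
    "detail-type": ["GuardDuty Finding"],
    "detail": {"severity": [{"numeric": [">=", 7.0, "<", 9.0]}]},
}

rule_json = json.dumps(high_severity_pattern)  # pass as EventPattern to put_rule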
Q25.A company wants to automate notifications and remediation for security events. Which TWO event sources can be used with Amazon EventBridge for security automation? (SELECT TWO)
✓ Correct: B, E. Both AWS Config and GuardDuty integrate natively with EventBridge.
Why B is correct: AWS Config sends compliance change notifications to EventBridge when resources become compliant or non-compliant. You can create rules to trigger remediation actions like Lambda functions or SSM Automation documents.
Why E is correct: GuardDuty publishes all findings to EventBridge automatically. You can create rules to match specific finding types or severity levels and trigger automated responses like SNS notifications or Lambda-based remediation.
Why others are wrong:
A) S3 storage class changes — S3 storage class transitions are managed by lifecycle policies and are not security events sent to EventBridge.
C) EC2 CPU utilization — CPU utilization is a CloudWatch metric monitored via CloudWatch Alarms, not an EventBridge security event source.
D) IAM password policy changes — While IAM API calls are logged in CloudTrail (which can trigger EventBridge), IAM password policy itself is not a native EventBridge event source in the same way as Config and GuardDuty.
Q26.A security engineer is having trouble connecting to an EC2 instance using EC2 Instance Connect. The instance is in a public subnet with a public IP. What is the MOST likely cause?
✓ Correct: D. EC2 Instance Connect requires SSH access from AWS IP ranges.
Why D is correct: EC2 Instance Connect works by pushing a temporary SSH public key to the instance and then establishing an SSH connection. The security group must allow inbound SSH on port 22 from the AWS EC2 Instance Connect service IP ranges for the region. Without this, the connection will fail.
Why others are wrong:
A) No IAM role — The instance does not need an IAM role for EC2 Instance Connect. The user connecting needs IAM permissions for ec2-instance-connect:SendSSHPublicKey.
B) Windows Server — While Instance Connect primarily supports Linux, this would cause a different error, not a connection failure.
C) Instance type too small — Instance type does not affect EC2 Instance Connect functionality.
Q27.A security engineer wants to connect to an EC2 instance in a private subnet that has no public IP address. The instance has the SSM Agent installed. Which method should they use?
✓ Correct: B. SSM Session Manager can connect to instances in private subnets without SSH.
Why B is correct: AWS Systems Manager Session Manager provides secure shell access to EC2 instances without requiring inbound ports, bastion hosts, or public IP addresses. It uses the SSM Agent on the instance which communicates outbound to the SSM service endpoints. This makes it ideal for instances in private subnets.
Why others are wrong:
A) EC2 Instance Connect — Instance Connect requires SSH access (port 22) and typically needs a public IP or an EC2 Instance Connect Endpoint for private subnets.
C) Direct SSH — You cannot SSH directly to a private IP from outside the VPC without a VPN, Direct Connect, or bastion host.
D) CloudShell with SSH — CloudShell does not have direct network access to private subnets in your VPC.
Q28.An EC2 instance is not appearing in the AWS Systems Manager managed instances list. The SSM Agent is installed and running. What is the MOST likely reason?
✓ Correct: C. The instance needs an IAM role with SSM permissions to register with Systems Manager.
Why C is correct: For an EC2 instance to be managed by Systems Manager, it must have an IAM instance profile with the AmazonSSMManagedInstanceCore managed policy (or equivalent permissions). This policy allows the SSM Agent to communicate with the SSM service. Without this role, the agent cannot register the instance.
Why others are wrong:
A) No public IP — A public IP is not required. The instance can reach SSM endpoints through a NAT Gateway, VPC endpoints, or other outbound connectivity methods.
B) Amazon Linux 2 not supported — Amazon Linux 2 is fully supported and comes with the SSM Agent pre-installed.
D) Port 443 inbound blocked — SSM Agent makes outbound HTTPS connections to SSM endpoints. Inbound port 443 is not required; the security group needs to allow outbound traffic.
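For a running instance, the missing profile can be attached without a restart; a sketch assuming a profile whose role already carries AmazonSSMManagedInstanceCore (names are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Associate an instance profile whose role carries the
# AmazonSSMManagedInstanceCore policy; the agent registers shortly after.
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "SsmManagedInstanceProfile"},  # hypothetical
    InstanceId="i-0123456789abcdef0",
)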
Q29.An EC2 instance in a private subnet needs to communicate with AWS Systems Manager. There is no NAT Gateway. Which TWO solutions allow the SSM Agent to reach SSM endpoints? (SELECT TWO)
✓ Correct: A, C. VPC endpoints or a NAT Gateway provide the necessary outbound connectivity.
Why A is correct: VPC Interface Endpoints (powered by AWS PrivateLink) for ssm, ssmmessages, and ec2messages allow the SSM Agent to communicate with Systems Manager without requiring internet access. Traffic stays within the AWS network.
Why C is correct: A NAT Gateway in a public subnet allows instances in private subnets to make outbound internet connections, enabling the SSM Agent to reach the public SSM service endpoints.
Why others are wrong:
B) Add public IP — Adding a public IP to an instance in a private subnet does not work because the private subnet's route table lacks an internet gateway route.
D) Open inbound port 443 — SSM Agent initiates outbound connections. Inbound port rules do not affect outbound connectivity.
E) Update SSM Agent — The agent version does not solve the network connectivity issue. The agent needs a path to reach SSM endpoints.
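A sketch of the VPC endpoint option in boto3 (region, VPC, subnet, and security group IDs are placeholders; the endpoint security group must allow inbound 443 from the instances):

import boto3

ec2 = boto3.client("ec2")

# One interface endpoint per SSM-related service.
for service in ("ssm", "ssmmessages", "ec2messages"):
    ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId="vpc-0123456789abcdef0",
        ServiceName=f"com.amazonaws.us-east-1.{service}",
        SubnetIds=["subnet-0123456789abcdef0"],
        SecurityGroupIds=["sg-0aaaabbbbccccdddd"],
        PrivateDnsEnabled=True,
    )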
Q30.What is the primary purpose of AWS Systems Manager Run Command?
✓ Correct: A. Run Command executes commands on managed instances without needing SSH.
Why A is correct: SSM Run Command allows you to remotely execute commands, scripts, or SSM documents on one or more managed instances without requiring SSH or RDP access. Commands are executed by the SSM Agent on the instances. This is valuable for incident response tasks like running forensic scripts across multiple instances.
Why others are wrong:
B) Deploy CloudFormation stacks — CloudFormation stack deployment is handled by CloudFormation itself or StackSets, not SSM Run Command.
C) Monitor metrics — Monitoring metrics is the job of CloudWatch. SSM Run Command executes actions, not monitoring.
D) Create AMI backups — While SSM Automation can be used to create AMIs, Run Command is specifically for executing commands on instances, not creating backups.
Q31.IAM Access Analyzer has generated a finding for an S3 bucket. What does this finding indicate?
✓ Correct: B. IAM Access Analyzer identifies resources accessible from outside your zone of trust.
Why B is correct: IAM Access Analyzer uses automated reasoning to analyze resource policies and identify resources that are shared with external entities. A finding means the S3 bucket's policy, ACL, or access point allows access from outside your defined zone of trust (your AWS account or organization). This helps identify unintended public or cross-account access.
Why others are wrong:
A) Encryption disabled — IAM Access Analyzer does not check encryption settings. AWS Config or Security Hub would flag this.
C) Storage quota exceeded — S3 has no storage quota by default. Access Analyzer does not monitor storage consumption.
D) Lifecycle misconfiguration — Access Analyzer focuses on access policies, not lifecycle policies.
Q32.When you create an IAM Access Analyzer, what defines the "zone of trust" for the analyzer?
✓ Correct: D. The zone of trust is defined by the analyzer type: account or organization.
Why D is correct: When creating an IAM Access Analyzer, you choose the zone of trust as either the current AWS account or the entire AWS Organization. An account-level analyzer flags access from outside the account. An organization-level analyzer flags access from outside the organization, treating all member accounts as trusted.
Why others are wrong:
A) VPC — IAM Access Analyzer operates at the account or organization level. It is not a VPC-level resource and does not use VPCs as trust boundaries.
B) IAM users and roles — The zone of trust is not defined by specific IAM entities. It encompasses the entire account or organization.
C) AWS Region — While the analyzer is regional, the zone of trust boundary is the account or organization, not the region itself.
Q33.What capability does IAM Access Analyzer's policy validation feature provide?
✓ Correct: A. Policy validation checks policies against grammar rules and best practices.
Why A is correct: IAM Access Analyzer's policy validation feature validates IAM policies against IAM policy grammar, AWS best practices, and security warnings. It provides actionable findings categorized as errors, warnings, and suggestions — such as flagging overly broad wildcard permissions or deprecated policy elements.
Why others are wrong:
B) Auto-fixes syntax errors — Policy validation identifies issues but does not automatically fix them. The user must make corrections based on the findings.
C) Real-time monitoring and rollback — Policy validation is an on-demand analysis tool, not a real-time monitoring service. AWS Config would be used for monitoring changes.
D) Cross-account comparison — Policy validation analyzes individual policies, not comparing policies across multiple accounts.
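A short boto3 sketch of policy validation; the overly broad policy below is deliberately bad so the call has something to flag:

import json
import boto3

analyzer = boto3.client("accessanalyzer")

# A deliberately broad policy so the validator returns findings.
policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "s3:*", "Resource": "*"}],
}

response = analyzer.validate_policy(
    policyDocument=json.dumps(policy),
    policyType="IDENTITY_POLICY",
)

# Findings are categorized as ERROR, SECURITY_WARNING, WARNING, or SUGGESTION.
for finding in response["findings"]:
    print(finding["findingType"], "-", finding["issueCode"])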
Q34.A security team wants to create least-privilege IAM policies based on actual usage. Which IAM Access Analyzer feature should they use?
✓ Correct: C. Policy generation creates policies based on actual CloudTrail access activity.
Why C is correct: IAM Access Analyzer's policy generation feature analyzes CloudTrail logs to identify the AWS services and actions that an IAM entity has actually used, then generates a least-privilege policy based on that activity. This helps teams right-size permissions by creating policies that include only the permissions actually needed.
Why others are wrong:
A) Findings — Findings identify resources accessible from outside the zone of trust, not usage-based policies.
B) Policy validation — Policy validation checks existing policies for grammar and best practice issues but does not generate new policies based on usage.
D) Archive rules — Archive rules automatically archive findings that match specific criteria. They do not generate policies.
Q35.An IAM Access Analyzer finding shows that an SQS queue is accessible by an external AWS account. After reviewing, the security team determines this is intentional. What should they do with the finding?
✓ Correct: B. Archive the finding to mark it as intentional and expected.
Why B is correct: When an Access Analyzer finding represents intentional access, you should archive the finding. Archived findings are kept for reference but no longer appear as active findings. You can also create archive rules to automatically archive similar findings in the future. This distinguishes intentional cross-account access from unintended exposure.
Why others are wrong:
A) Delete the finding — Access Analyzer findings cannot be deleted. They can only be archived or resolved (by changing the resource policy).
C) Disable for SQS — You cannot selectively disable Access Analyzer for specific resource types. The analyzer evaluates all supported resource types.
D) Remove queue policy — Removing the policy would break the intentional cross-account access that the team confirmed is needed.
Q36.Which AWS resource types does IAM Access Analyzer support for generating findings about external access?
✓ Correct: A. Access Analyzer supports a specific set of resource types with resource-based policies.
Why A is correct: IAM Access Analyzer generates findings for a specific set of resource types including S3 buckets, IAM roles, KMS keys, Lambda functions, SQS queues, and Secrets Manager secrets. It also supports SNS topics, EBS volume snapshots, RDS DB snapshots, RDS DB cluster snapshots, ECR repositories, and EFS file systems.
Why others are wrong:
B) Only S3 and IAM roles — Access Analyzer supports many more resource types beyond just S3 and IAM roles.
C) All resources with resource policies — Not all resources with resource-based policies are supported. Access Analyzer covers a specific, growing list of resource types.
D) EC2, RDS, VPCs — EC2 instances, RDS databases (the running instances), and VPCs are not analyzed by Access Analyzer for external access findings.
Q37.Which category does AWS Trusted Advisor NOT provide checks for?
✓ Correct: D. Trusted Advisor does not check application code quality.
Why D is correct: AWS Trusted Advisor provides checks across five categories: cost optimization, performance, security, fault tolerance, and service limits. It does not analyze application source code or code quality. Code analysis tools like Amazon CodeGuru are used for that purpose.
Why others are wrong:
A) Cost optimization — This is one of the five core categories, including checks for underutilized resources and reserved instance optimization.
B) Security — Security is a core category with checks for open ports, IAM best practices, MFA on root, and public S3 buckets.
C) Fault tolerance — Fault tolerance is a core category with checks for multi-AZ deployments, backups, and redundancy.
Q38.Which Trusted Advisor security check is available to ALL AWS accounts, regardless of support plan?
✓ Correct: C. Core security checks like S3 bucket permissions and MFA on root are free for all accounts.
Why C is correct: AWS provides a set of core Trusted Advisor checks free for all accounts, including S3 Bucket Permissions (public access), MFA on Root Account, security groups with unrestricted specific ports, IAM Use, and EBS public snapshots. The full set of checks requires a Business, Enterprise On-Ramp, or Enterprise support plan.
Why others are wrong:
A) IAM password policy — The IAM password policy check is part of the full Trusted Advisor checks, requiring a Business or higher support plan.
B) Unrestricted ports — The basic check for specific high-risk ports (like 0.0.0.0/0 on port 22) is free, but the comprehensive unrestricted port analysis requires a higher plan.
D) All checks on all plans — Only core checks are available on Basic/Developer plans. Full checks require Business or higher.
Q39.A developer accidentally deleted an S3 bucket that contained important data. What AWS feature could have prevented permanent data loss?
✓ Correct: B. Versioning preserves deleted objects, and MFA Delete prevents accidental permanent deletion.
Why B is correct: S3 Versioning preserves all versions of objects, so a delete operation creates a delete marker rather than permanently removing the object. MFA Delete adds an extra layer of protection by requiring MFA authentication to permanently delete object versions or disable versioning, preventing accidental or unauthorized deletion.
Why others are wrong:
A) Transfer Acceleration — Transfer Acceleration speeds up uploads to S3 using CloudFront edge locations. It provides no data protection.
C) Intelligent-Tiering — Intelligent-Tiering automatically moves objects between storage tiers based on access patterns. It does not protect against deletion.
D) Object Lock Governance — While Object Lock provides protection, Governance mode can be overridden by users with special permissions. MFA Delete is the more direct answer for preventing accidental deletion.
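For illustration, versioning can be enabled proactively with a short boto3 call; the bucket name below is hypothetical, and MFA Delete itself can only be enabled by the root user supplying an MFA code, shown here only as a CLI comment:

import boto3

s3 = boto3.client("s3")
s3.put_bucket_versioning(
    Bucket="example-critical-bucket",  # hypothetical bucket name
    VersioningConfiguration={"Status": "Enabled"},
)
# MFA Delete must be enabled by the root user with an MFA device, e.g.:
#   aws s3api put-bucket-versioning --bucket example-critical-bucket \
#     --versioning-configuration Status=Enabled,MFADelete=Enabled \
#     --mfa "arn:aws:iam::123456789012:mfa/root-device 123456"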
Q40.A CloudFormation stack was accidentally deleted, along with all the resources it created. How can the resources be recovered?
✓ Correct: A. Resources with Retain or Snapshot DeletionPolicy survive stack deletion.
Why A is correct: CloudFormation supports a DeletionPolicy attribute on resources. When set to Retain, the resource is preserved when the stack is deleted. When set to Snapshot (for supported resources like RDS, EBS), a snapshot is taken before deletion. These preserved resources can then be imported into a new CloudFormation stack.
Why others are wrong:
B) CloudFormation Recycle Bin — There is no CloudFormation Recycle Bin feature. The Recycle Bin service is for EBS snapshots and AMIs, not CloudFormation stacks.
C) Automatic backups — CloudFormation does not automatically back up stacks. The DeletionPolicy must be configured proactively.
D) Cannot be recovered — Resources can be recovered if the DeletionPolicy was properly set before deletion.
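As a minimal sketch (resource names are illustrative), a template fragment using DeletionPolicy looks like the following, expressed as a Python dict mirroring the template JSON:

# Retain keeps the bucket when the stack is deleted; Snapshot (for
# supported types such as RDS and EBS) takes a snapshot first.
template_fragment = {
    "Resources": {
        "CriticalBucket": {
            "Type": "AWS::S3::Bucket",
            "DeletionPolicy": "Retain",
        },
        "AppDatabase": {
            "Type": "AWS::RDS::DBInstance",
            "DeletionPolicy": "Snapshot",
            # "Properties": required DB properties omitted for brevity
        },
    }
}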
Q41.An EBS volume was accidentally deleted. What AWS feature could help recover it?
✓ Correct: C. The Recycle Bin can retain deleted EBS snapshots for recovery.
Why C is correct: The Amazon EBS Recycle Bin allows you to set retention rules that protect EBS snapshots and AMIs from accidental deletion. When a snapshot is deleted, it goes to the Recycle Bin and is retained for the configured period, allowing recovery. If you had snapshots of the volume before deletion, you can restore the volume from those snapshots.
Why others are wrong:
A) EBS volume versioning — EBS does not have a versioning feature like S3. There is no built-in version history for EBS volumes.
B) AWS Backup auto restore — AWS Backup manages backups but does not automatically restore deleted resources. You would need to manually initiate a restore from a backup.
D) EBS volume replication — There is no native EBS volume replication feature that would automatically protect against deletion.
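A minimal sketch of a Recycle Bin retention rule via boto3 (the retention period and description are illustrative):

import boto3

# Keep deleted EBS snapshots recoverable for 14 days.
rbin = boto3.client("rbin")
rbin.create_rule(
    ResourceType="EBS_SNAPSHOT",
    RetentionPeriod={"RetentionPeriodValue": 14,
                     "RetentionPeriodUnit": "DAYS"},
    Description="Retain deleted EBS snapshots for recovery",
)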
Q42.A company needs to automate the isolation of compromised EC2 instances when GuardDuty detects a threat. Which TWO components are needed for this automation? (SELECT TWO)
✓ Correct: A, D. EventBridge rule + Lambda function provide the detection-to-action pipeline.
Why A is correct: An EventBridge rule is needed to capture GuardDuty findings. GuardDuty automatically publishes findings to EventBridge, and the rule filters for specific finding types or severity levels to trigger the response.
Why D is correct: A Lambda function serves as the target of the EventBridge rule and contains the logic to isolate the instance — typically by replacing the instance's security group with a restrictive one that blocks all traffic.
Why others are wrong:
B) S3 bucket trigger — S3 triggers respond to object events, not security findings. Not relevant to GuardDuty automation.
C) Inspector assessment — Inspector performs vulnerability assessments and is not part of the GuardDuty automated response workflow.
E) Config managed rule — Config rules evaluate resource compliance but are not the mechanism for responding to GuardDuty findings.
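A minimal sketch of both pieces, assuming hypothetical names (rule name, severity threshold, and quarantine security group ID); in practice the Lambda function would be attached to the rule with put_targets:

import json
import boto3

# EventBridge rule matching high-severity GuardDuty findings.
boto3.client("events").put_rule(
    Name="guardduty-high-severity",
    EventPattern=json.dumps({
        "source": ["aws.guardduty"],
        "detail-type": ["GuardDuty Finding"],
        "detail": {"severity": [{"numeric": [">=", 7]}]},
    }),
)

# Lambda target: swap the instance's security groups for a deny-all group.
def handler(event, context):
    instance_id = event["detail"]["resource"]["instanceDetails"]["instanceId"]
    boto3.client("ec2").modify_instance_attribute(
        InstanceId=instance_id,
        Groups=["sg-0quarantine0example"],  # hypothetical isolation group
    )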
Q43.During incident response for a compromised EC2 instance, why is it important to capture the instance metadata before termination?
✓ Correct: D. Instance metadata provides critical context for the forensic investigation.
Why D is correct: Capturing instance metadata including the security group, IAM role, VPC/subnet, tags, and launch configuration is essential for understanding the security context. This information helps determine what the instance had access to, what network rules were in place, and how the instance was configured — all critical for root cause analysis and understanding the blast radius.
Why others are wrong:
A) Billing department — While tags may contain cost allocation info, billing is not the primary reason for capturing metadata during incident response.
B) Availability Zone — The AZ for a replacement instance is an operational concern, not a forensic priority.
C) Pricing model — The instance pricing model (on-demand, reserved, spot) is irrelevant to the security investigation.
Q44.A compromised Lambda function was using environment variables to store database credentials. What should be done about these credentials during incident response?
✓ Correct: B. Any credentials accessible to a compromised function must be rotated immediately.
Why B is correct: When a Lambda function is compromised, any secrets it had access to — including environment variables containing database credentials — must be considered compromised. You should rotate these credentials immediately to prevent unauthorized access to the database. After rotation, consider using AWS Secrets Manager for future credential management.
Why others are wrong:
A) Delete the function — Deleting the function removes evidence and does not address the fact that the credentials may have already been exfiltrated.
C) Encrypt and redeploy — Encrypting environment variables does not help if the attacker already captured the plaintext values. Rotation is needed.
D) Move to function code — Hardcoding credentials in code is a worse security practice and does not address the compromise.
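For illustration, after rotating the credential at the database itself, the new value can be stored in Secrets Manager so the function no longer reads it from environment variables (the secret name is hypothetical):

import secrets
import string
import boto3

# Generate a replacement password; the database user must be updated to
# this value separately before the application reads the new secret.
new_password = "".join(
    secrets.choice(string.ascii_letters + string.digits) for _ in range(32))
boto3.client("secretsmanager").put_secret_value(
    SecretId="prod/db/credentials",  # hypothetical secret name
    SecretString=new_password,
)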
Q45.An organization's AWS account may be compromised at the root level. Which action is MOST critical to take first?
✓ Correct: A. Securing the root account is the highest priority when account compromise is suspected.
Why A is correct: The root account has unrestricted access to all resources and cannot be limited by IAM policies. If compromised, changing the password and enabling (or resetting) MFA is the most critical first step. You should also rotate all IAM access keys, review CloudTrail logs, and check for unauthorized resources or IAM entities created by the attacker.
Why others are wrong:
B) Delete all IAM users — This is overly destructive and disrupts all legitimate users. You should disable compromised credentials, not delete all users.
C) Close the account — Closing the account is a last resort and may result in loss of data and resources. Secure and investigate first.
D) Contact Solutions Architect — While contacting AWS Support is important, securing the root account should not be delayed waiting for external help.
Q46.During incident response, a security team needs to determine if a compromised IAM user created any new IAM users or roles. Which service provides this information?
✓ Correct: C. CloudTrail logs all IAM API calls including CreateUser and CreateRole.
Why C is correct: AWS CloudTrail records all API calls made in the account, including IAM actions like CreateUser, CreateRole, CreateAccessKey, and AttachUserPolicy. By filtering CloudTrail events for the compromised user's identity, you can determine exactly what IAM actions they performed.
Why others are wrong:
A) GuardDuty — GuardDuty may detect unusual API activity as a finding, but it does not provide a detailed log of every API call made by a specific user.
B) AWS Config — Config tracks resource configuration changes over time but does not show who made the API call in the same way CloudTrail does.
D) Inspector — Inspector performs vulnerability assessments on EC2 instances and container images. It does not track IAM API calls.
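A minimal boto3 sketch of this investigation (the username is hypothetical):

import boto3

# Page through CloudTrail events attributed to the compromised user and
# surface the IAM actions they performed.
paginator = boto3.client("cloudtrail").get_paginator("lookup_events")
pages = paginator.paginate(
    LookupAttributes=[{"AttributeKey": "Username",
                       "AttributeValue": "compromised-user"}]
)
for page in pages:
    for event in page["Events"]:
        if event["EventName"] in ("CreateUser", "CreateRole",
                                  "CreateAccessKey", "AttachUserPolicy"):
            print(event["EventTime"], event["EventName"])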
Q47.A security engineer wants to use SSM Session Manager to connect to instances but also needs to log all session activity. Where can session logs be stored?
✓ Correct: D. Session Manager can log session data to both S3 and CloudWatch Logs.
Why D is correct: SSM Session Manager can be configured to send session logs to both Amazon S3 and Amazon CloudWatch Logs. S3 stores the full session output, while CloudWatch Logs enables real-time monitoring and alerting on session activity. This logging is configured in the Session Manager preferences and is essential for audit compliance.
Why others are wrong:
A) Only SSM console — The SSM console shows session history but does not store detailed session output. External logging destinations must be configured.
B) Only CloudWatch Logs — CloudWatch Logs is one option, but S3 is also supported. Both can be used simultaneously.
C) Only S3 — S3 is one option, but CloudWatch Logs is also supported for session logging.
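As a sketch, these preferences live in the SSM-SessionManagerRunShell document and can be updated programmatically; the bucket and log group names below are hypothetical:

import json
import boto3

prefs = {
    "schemaVersion": "1.0",
    "description": "Session Manager preferences",
    "sessionType": "Standard_Stream",
    "inputs": {
        "s3BucketName": "example-session-logs",
        "s3EncryptionEnabled": True,
        "cloudWatchLogGroupName": "/ssm/session-logs",
        "cloudWatchEncryptionEnabled": True,
    },
}
boto3.client("ssm").update_document(
    Content=json.dumps(prefs),
    Name="SSM-SessionManagerRunShell",
    DocumentVersion="$LATEST",
)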
Q48.An SSM Automation document is being used to automatically remediate non-compliant resources. What is an SSM Automation document?
✓ Correct: A. SSM Automation documents define automated remediation workflows.
Why A is correct: An SSM Automation document (also called a runbook) is a JSON or YAML document that defines a series of automated steps to perform on AWS resources. Steps can include API calls, running scripts, approvals, and conditional logic. AWS provides pre-built automation documents, and you can create custom ones for incident response and remediation workflows.
Why others are wrong:
B) CloudFormation template — SSM documents and CloudFormation templates are different constructs. While both use JSON/YAML, they serve different purposes.
C) PDF document — SSM documents are executable automation definitions, not documentation files.
D) IAM policy document — IAM policy documents define permissions. SSM Automation documents define automated workflows and actions.
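For illustration, an AWS-managed runbook can be started with one call (the instance ID is hypothetical); custom runbooks are executed the same way:

import boto3

# Run the pre-built AWS-RestartEC2Instance Automation document.
boto3.client("ssm").start_automation_execution(
    DocumentName="AWS-RestartEC2Instance",
    Parameters={"InstanceId": ["i-0123456789abcdef0"]},
)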
Q49.A company wants to detect when someone disables CloudTrail logging. Which approach provides the FASTEST automated detection?
✓ Correct: B. EventBridge provides near-real-time detection of CloudTrail API changes.
Why B is correct: CloudTrail API calls like StopLogging are sent to EventBridge in near real-time. By creating an EventBridge rule that matches the StopLogging API call, you can trigger an immediate notification (via SNS) or automated remediation (via Lambda to re-enable logging). This provides the fastest detection.
Why others are wrong:
A) Config rule daily — AWS Config rules can detect this but evaluation frequency may not be as immediate as EventBridge events.
C) Trusted Advisor — Trusted Advisor checks are periodic and not designed for real-time event detection.
D) Weekly manual review — Manual weekly reviews are far too slow. An attacker could operate undetected for up to a week.
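A minimal sketch of such a rule (the rule name is illustrative); an SNS topic or remediation Lambda would then be attached with put_targets:

import json
import boto3

# Match attempts to tamper with CloudTrail.
boto3.client("events").put_rule(
    Name="cloudtrail-tamper-detection",
    EventPattern=json.dumps({
        "source": ["aws.cloudtrail"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventSource": ["cloudtrail.amazonaws.com"],
            "eventName": ["StopLogging", "DeleteTrail", "UpdateTrail"],
        },
    }),
)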
Q50.Which of the following is a correct step when responding to a compromised S3 bucket where objects may have been modified?
✓ Correct: A. Versioning allows recovery of original objects that were modified by an attacker.
Why A is correct: If S3 versioning was enabled before the compromise, each object modification creates a new version while preserving the previous version. You can identify modified objects and restore the previous versions to recover the original data. This is why enabling versioning proactively is a critical security best practice.
Why others are wrong:
B) Cross-Region Replication — Enabling CRR after the compromise would replicate the already-compromised objects to another bucket, not restore originals.
C) Change to Glacier — Changing storage class does not prevent access via API calls. It only affects retrieval time and cost.
D) Delete and recreate — Deleting the bucket destroys all data and evidence. This is not an appropriate incident response action.
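For illustration, a clean prior version can be copied back on top of the object, which restores the original content while preserving the tampered version as evidence (bucket and key names are hypothetical):

import boto3

s3 = boto3.client("s3")
bucket, key = "example-bucket", "reports/q3.csv"

# Versions are returned newest-first; pick the most recent non-current
# (pre-compromise) version and copy it forward as the new current version.
versions = s3.list_object_versions(Bucket=bucket, Prefix=key)["Versions"]
clean = next(v for v in versions if not v["IsLatest"])
s3.copy_object(
    Bucket=bucket,
    Key=key,
    CopySource={"Bucket": bucket, "Key": key,
                "VersionId": clean["VersionId"]},
)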
Q51.A GuardDuty finding indicates Recon:EC2/PortProbeUnprotectedPort. What does this finding mean?
✓ Correct: C. This finding indicates reconnaissance activity targeting an unprotected port.
Why C is correct: The Recon:EC2/PortProbeUnprotectedPort finding means that an unprotected port on an EC2 instance is being probed by a known malicious actor. This is a reconnaissance attempt where the attacker is scanning for open, vulnerable services. The security group should be reviewed to restrict access to only necessary ports and trusted IP ranges.
Why others are wrong:
A) Instance performing scans — This finding is about the instance being the target of probing, not the actor. An outbound scan would have a different finding type.
B) No security group — EC2 instances always have at least one security group. This finding is about open ports being probed, not missing security groups.
D) All outbound traffic allowed — Outbound rules are not what this finding addresses. It is about inbound probing of open ports.
Q52.A security engineer needs to troubleshoot why EC2 Instance Connect is failing for an instance in a private subnet. Which TWO items should they verify? (SELECT TWO)
✓ Correct: B, E. An Instance Connect Endpoint and proper security group rules are needed for private subnets.
Why B is correct: For instances in private subnets without public IPs, an EC2 Instance Connect Endpoint must be created in the VPC. This endpoint enables connectivity to private instances without requiring an internet gateway or NAT gateway.
Why E is correct: The instance's security group must allow inbound SSH (port 22) traffic from the EC2 Instance Connect Endpoint's security group or IP range. Without this rule, the connection will be blocked.
Why others are wrong:
A) Elastic IP — An Elastic IP is not required when using an EC2 Instance Connect Endpoint for private subnet access.
C) Windows AMI — EC2 Instance Connect supports Linux instances, not Windows. Using Windows would be the problem, not the solution.
D) At least 4 vCPUs — Instance size and vCPU count do not affect EC2 Instance Connect functionality.
Q53.Which SSM feature allows you to securely store configuration data and secrets such as database strings, passwords, and license keys?
✓ Correct: B. Parameter Store provides secure storage for configuration data and secrets.
Why B is correct: SSM Parameter Store provides secure, hierarchical storage for configuration data and secret management. It supports SecureString parameters encrypted with KMS, standard strings, and string lists. It integrates with IAM for access control and can be referenced by other AWS services like Lambda, ECS, and CloudFormation.
Why others are wrong:
A) Run Command — Run Command executes commands on managed instances. It does not store configuration data or secrets.
C) Patch Manager — Patch Manager automates the process of patching managed instances with security and other updates.
D) State Manager — State Manager maintains instances in a defined state by applying configurations at scheduled intervals.
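A minimal sketch of storing and reading a SecureString (the parameter name and value are hypothetical):

import boto3

ssm = boto3.client("ssm")

# Store a secret encrypted with the account's default SSM KMS key.
ssm.put_parameter(
    Name="/prod/db/password",
    Value="example-password",
    Type="SecureString",
    Overwrite=True,
)

# Read it back decrypted (the caller needs kms:Decrypt on the key).
param = ssm.get_parameter(Name="/prod/db/password", WithDecryption=True)
print(param["Parameter"]["Value"])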
Q54.A security engineer needs to investigate which AWS services an IAM role has been using over the past 90 days to right-size its permissions. Which tool provides this information?
✓ Correct: D. Access Analyzer policy generation analyzes CloudTrail to show actual service usage.
Why D is correct: IAM Access Analyzer's policy generation feature analyzes CloudTrail logs for a specified period (up to 90 days) to identify which services and actions an IAM entity has actually used. It then generates a least-privilege policy based on this activity, making it the ideal tool for right-sizing permissions.
Why others are wrong:
A) Trusted Advisor — Trusted Advisor provides general security recommendations but does not analyze detailed per-role service usage patterns.
B) Credential Report — The IAM Credential Report shows credential status (last used, MFA, password age) for users but does not detail which services and actions a role has called.
C) Cost Explorer — Cost Explorer shows spending by service but does not map API calls to specific IAM roles.
Q55.During an incident involving a compromised ECS task on Fargate, how does the forensic approach differ from EC2-based ECS tasks?
✓ Correct: A. Fargate abstracts the infrastructure, limiting forensic capabilities compared to EC2.
Why A is correct: With Fargate, AWS manages the underlying infrastructure, so you cannot access the host to perform traditional forensic actions like memory dumps or disk captures. Forensics is limited to application-level logs in CloudWatch, CloudTrail API logs, and network flow logs. This is a key limitation of serverless container forensics.
Why others are wrong:
B) Cannot be compromised — Fargate tasks can absolutely be compromised through vulnerable application code, container images, or stolen credentials.
C) SSH into Fargate host — You cannot SSH into the underlying Fargate infrastructure. AWS manages the host and does not provide direct access.
D) Auto-captures forensics — Fargate does not automatically capture forensic evidence. You must rely on logging configured before the incident.
Q56.A security team has set up an EventBridge rule to trigger a Lambda function when GuardDuty finds a compromised EC2 instance. The Lambda function should change the security group. What IAM permission does the Lambda function's execution role need?
✓ Correct: C. The Lambda function needs EC2 permissions to modify the instance's security group.
Why C is correct: To change the security groups attached to an EC2 instance, the Lambda function's execution role needs the ec2:ModifyInstanceAttribute permission (or ec2:ModifyNetworkInterfaceAttribute to change the groups on the instance's elastic network interface directly). This allows the function to replace the instance's security groups with a restrictive isolation security group.
Why others are wrong:
A) guardduty:UpdateFindings — This permission allows updating GuardDuty finding status but does not grant permission to modify EC2 instances.
B) events:PutEvents — This permission is for publishing events to EventBridge, not for modifying EC2 security groups.
D) lambda:InvokeFunction — This permission is for invoking Lambda functions. The Lambda function itself needs EC2 permissions, not Lambda invocation permissions.
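For illustration, the execution role could carry an inline policy like the following (role and policy names are hypothetical, and Resource should be scoped down in practice):

import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "ec2:ModifyInstanceAttribute",
            "ec2:ModifyNetworkInterfaceAttribute",
        ],
        "Resource": "*",
    }],
}
boto3.client("iam").put_role_policy(
    RoleName="guardduty-isolation-lambda-role",
    PolicyName="AllowSecurityGroupSwap",
    PolicyDocument=json.dumps(policy),
)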
Q57.A company needs to prevent accidental deletion of critical AWS resources. Which TWO strategies help protect against accidental resource deletion? (SELECT TWO)
✓ Correct: C, D. Resource-level protection and IAM policy restrictions prevent accidental deletion.
Why C is correct: Termination protection on EC2 instances and deletion protection on RDS instances prevent accidental deletion through the console, CLI, or API. These must be explicitly disabled before the resource can be deleted.
Why D is correct: IAM policies with MFA conditions can require multi-factor authentication for destructive actions like ec2:TerminateInstances or rds:DeleteDBInstance. This adds an extra verification step before critical resources can be deleted.
Why others are wrong:
A) X-Ray tracing — X-Ray is for application performance tracing and debugging, not resource deletion protection.
B) Macie scanning — Macie discovers sensitive data in S3 but does not prevent resource deletion.
E) VPC Flow Logs — Flow Logs capture network traffic metadata for analysis but do not prevent resource deletion.
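A sketch of such an MFA guard, shown as a Python dict mirroring the policy JSON; it denies the destructive actions unless the request was MFA-authenticated:

mfa_guard_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyDestructiveActionsWithoutMFA",
        "Effect": "Deny",
        "Action": ["ec2:TerminateInstances", "rds:DeleteDBInstance"],
        "Resource": "*",
        # BoolIfExists also denies requests where the MFA key is absent,
        # such as those made with long-term access keys.
        "Condition": {
            "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
        },
    }],
}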
Q58.A security engineer finds that GuardDuty is reporting CryptoCurrency:EC2/BitcoinTool.B!DNS for an EC2 instance. What does this indicate?
✓ Correct: B. This finding indicates the instance is communicating with cryptocurrency mining domains.
Why B is correct: The CryptoCurrency:EC2/BitcoinTool.B!DNS finding means the EC2 instance is making DNS queries to domains associated with cryptocurrency mining (Bitcoin or other crypto networks). This typically indicates the instance has been compromised and is being used for unauthorized cryptomining, which consumes resources and increases costs.
Why others are wrong:
A) Legitimate exchange — GuardDuty flags this as a threat finding. Even if the mining were intentional, this finding indicates potential compromise until verified.
C) Bitcoin wallet on EBS — GuardDuty detects DNS activity, not file contents on EBS volumes. This finding is about network behavior.
D) Security group allows mining traffic — GuardDuty does not evaluate security group rules for specific protocols. This finding is based on DNS query analysis.
Q59.When performing EC2 forensics, what is the correct order of operations for preserving evidence on a compromised instance?
✓ Correct: A. This is the correct forensic evidence preservation order.
Why A is correct: The proper forensic order is: (1) Tag the instance to identify it as under investigation, (2) Isolate by replacing the security group with a deny-all group, (3) Capture memory while the instance is still running (volatile data is lost if stopped), (4) Snapshot EBS volumes for disk forensics, and (5) Investigate in an isolated forensic environment using copies of the evidence.
Why others are wrong:
B) Terminate after snapshot — Terminating before memory capture loses volatile evidence. Investigation should happen in a separate environment, not by terminating.
C) Stop then capture memory — Stopping the instance destroys memory contents. Memory must be captured while the instance is running.
D) Terminate and recover — Terminating destroys all evidence. This approach is not forensically sound.
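A minimal boto3 sketch of the API-driven steps (instance and security group IDs are hypothetical; memory capture itself is OS-level tooling, not an AWS API):

import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"

# 1. Tag the instance as under investigation.
ec2.create_tags(Resources=[instance_id],
                Tags=[{"Key": "IncidentResponse", "Value": "quarantined"}])

# 2. Isolate it behind a deny-all security group.
ec2.modify_instance_attribute(InstanceId=instance_id,
                              Groups=["sg-0quarantine0example"])

# 3. Capture memory on the running instance with OS-level forensic tooling.

# 4. Snapshot each attached EBS volume for disk forensics.
instance = ec2.describe_instances(
    InstanceIds=[instance_id])["Reservations"][0]["Instances"][0]
for mapping in instance["BlockDeviceMappings"]:
    ec2.create_snapshot(VolumeId=mapping["Ebs"]["VolumeId"],
                        Description=f"IR evidence from {instance_id}")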
Q60.A company wants to use AWS Config for incident response. Which TWO capabilities does AWS Config provide that support security incident detection and response? (SELECT TWO)
✓ Correct: B, D. Config provides resource history and compliance rules for incident detection and response.
Why B is correct: AWS Config maintains a configuration timeline for each resource, showing the complete history of configuration changes. During incident response, this helps determine what changed, when it changed, and what the resource looked like before the incident — essential for root cause analysis.
Why D is correct: AWS Config rules (both managed and custom) continuously evaluate resource configurations against desired baselines. When non-compliance is detected, rules can trigger automatic remediation using SSM Automation documents, enabling proactive security response.
Why others are wrong:
A) Malware scanning — AWS Config does not perform malware scanning. Amazon GuardDuty Malware Protection or third-party tools provide this capability.
C) Automatic encryption — Config can detect unencrypted buckets and trigger remediation to enable encryption, but it does not automatically encrypt buckets on its own.
E) Packet capture — Config does not capture network packets. VPC Traffic Mirroring or third-party tools are used for packet capture.
Domain 3 — Infrastructure Security (60 Questions)
Q1.A company has a public-facing web application running on EC2 instances in a public subnet. The security team wants to ensure that the instances can be reached from the internet on port 443 only. Which component attached to the VPC allows inbound traffic from the internet to reach the subnet?
✓ Correct: B. An Internet Gateway enables communication between instances in your VPC and the internet.
Why B is correct: An Internet Gateway (IGW) is a horizontally scaled, redundant, and highly available VPC component that allows communication between your VPC and the internet. For instances in a public subnet to be reachable from the internet, the subnet's route table must have a route to an IGW.
Why others are wrong:
A) NAT Gateway — A NAT Gateway allows instances in a private subnet to initiate outbound connections to the internet, but does not allow inbound connections from the internet.
C) Virtual Private Gateway — A Virtual Private Gateway is the VPN concentrator on the AWS side of a Site-to-Site VPN connection, not used for public internet access.
D) VPC Endpoint — VPC Endpoints provide private connectivity to AWS services without traversing the internet. They do not enable internet access.
Q2.A solutions architect is designing a VPC. The design requires 4 subnets across 2 Availability Zones, with each subnet needing at least 200 IP addresses. Which CIDR block is the smallest that meets this requirement?
✓ Correct: C. A /22 VPC provides 1,024 IPs, which can be divided into four /24 subnets of 251 usable IPs each.
Why C is correct: Each subnet needs at least 200 IPs. A /24 subnet provides 256 addresses minus 5 AWS-reserved = 251 usable. Four /24 subnets require a /22 VPC CIDR (1,024 IPs total). AWS reserves 5 IPs per subnet (first 4 and last 1).
Why others are wrong:
A) /20 — A /20 provides 4,096 IPs, which works but is far larger than needed. The question asks for the smallest CIDR.
B) /21 — A /21 provides 2,048 IPs, which also works but is still larger than the minimum required.
D) /24 — A /24 provides only 256 IPs total, which cannot be split into 4 subnets each with 200+ IPs.
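The arithmetic can be checked with Python's standard ipaddress module:

import ipaddress

# A /22 yields exactly four /24 subnets; each /24 has 256 addresses, of
# which AWS reserves 5, leaving 251 usable (>= the required 200).
vpc = ipaddress.ip_network("10.0.0.0/22")
for subnet in vpc.subnets(new_prefix=24):
    print(subnet, subnet.num_addresses - 5, "usable")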
Q3.An application running in a private subnet needs to download software patches from the internet. The security team requires that no inbound connections from the internet are allowed. Which solution should be used?
✓ Correct: A. A NAT Gateway allows outbound internet access from private subnets while preventing inbound connections.
Why A is correct: A NAT Gateway must be placed in a public subnet (with an IGW route). Instances in private subnets route their internet-bound traffic through the NAT Gateway, which performs network address translation. This allows outbound traffic but blocks unsolicited inbound connections.
Why others are wrong:
B) Attach IGW to private subnet — An Internet Gateway is attached to the VPC, not to a subnet. Adding a route to the IGW in the private subnet's route table would make it a public subnet, allowing inbound connections.
C) VPC Endpoint — VPC Endpoints only work for supported AWS services, not for arbitrary internet software repositories.
D) Virtual Private Gateway — A Virtual Private Gateway (VGW) provides connectivity to on-premises networks via VPN, not to the public internet.
Q4.A security engineer needs to allow SSH access to EC2 instances in a private subnet from the corporate network. The company uses a Site-to-Site VPN. What is the recommended approach for secure administrative access?
✓ Correct: D. A bastion host (jump box) or Systems Manager Session Manager provides secure, auditable access to private instances.
Why D is correct: A bastion host sits in a public subnet and acts as a single point of entry for SSH into private instances. Security groups on the bastion restrict access to known corporate IPs. Alternatively, Systems Manager Session Manager provides shell access without opening any inbound ports, using IAM for authentication.
Why others are wrong:
A) Public subnet with restrictive SG — Moving instances to a public subnet exposes them to the internet, which violates the security principle of keeping application servers in private subnets.
B) NAT Gateway for SSH — NAT Gateways only allow outbound connections initiated from private instances. They cannot forward inbound SSH traffic.
C) Open port 22 in NACL for all IPs — Opening SSH to all IPs is a major security risk and does not solve the routing problem for private subnet instances.
Q5.A security engineer is troubleshooting why a web server in a public subnet cannot receive HTTP traffic. The security group allows inbound HTTP (port 80) from 0.0.0.0/0. The Network ACL allows inbound HTTP but the response traffic is being blocked. What is the most likely cause?
✓ Correct: B. NACLs are stateless, so outbound rules for ephemeral ports must be explicitly allowed for response traffic.
Why B is correct: Network ACLs are stateless, meaning they evaluate both inbound and outbound traffic independently. When a client sends a request on port 80, the server responds on an ephemeral port (1024-65535). If the NACL outbound rules do not allow this port range, the response traffic is dropped.
Why others are wrong:
A) Security group outbound — Security groups are stateful. If inbound traffic is allowed, the response traffic is automatically allowed regardless of outbound rules.
C) IGW not attached — If the IGW were not attached, no traffic would reach the instance at all, not just response traffic.
D) Missing route — If the route table lacked an IGW route, inbound traffic would also fail, not just the response.
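For illustration, the missing outbound rule could be added like this (the NACL ID and rule number are hypothetical):

import boto3

# Allow outbound ephemeral-port traffic so responses to inbound HTTP
# requests can leave the subnet.
boto3.client("ec2").create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",
    RuleNumber=100,
    Protocol="6",              # TCP
    RuleAction="allow",
    Egress=True,
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 1024, "To": 65535},
)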
Q6.What is a key difference between Security Groups and Network ACLs in AWS?
✓ Correct: A. Security Groups are stateful and instance-level; NACLs are stateless and subnet-level.
Why A is correct: Security Groups are stateful (return traffic is automatically allowed), operate at the ENI/instance level, and only support allow rules. NACLs are stateless (return traffic must be explicitly allowed), operate at the subnet level, and support both allow and deny rules with numbered rule evaluation.
Why others are wrong:
B) Reversed definitions — This reverses the stateful/stateless and level-of-operation properties.
C) SG allow/deny — Security Groups only support allow rules. NACLs support both allow and deny rules.
D) SG rule numbers — NACLs are evaluated in order by rule number (lowest first, first match wins). Security Groups evaluate all rules collectively.
Q7.A company needs to connect two VPCs in the same AWS Region so that instances in each VPC can communicate using private IP addresses. Which TWO statements about VPC Peering are correct? (SELECT TWO)
✓ Correct: B, D. VPC Peering works cross-account and requires manual route table updates.
Why B is correct: VPC Peering connections can be established between VPCs in the same account, different accounts, or even different regions (inter-region peering).
Why D is correct: After a peering connection is created and accepted, you must manually update the route tables in each VPC to direct traffic destined for the peer VPC's CIDR through the peering connection.
Why others are wrong:
A) Transitive routing — VPC Peering does not support transitive routing. If VPC A is peered with VPC B and VPC B is peered with VPC C, VPC A cannot reach VPC C through VPC B.
C) Overlapping CIDRs — VPC Peering cannot be established if the VPCs have overlapping CIDR blocks, as this would create routing conflicts.
E) Automatic route updates — Route tables are never automatically updated. You must manually add routes pointing to the peering connection.
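A minimal sketch of the manual route update each side needs (the route table ID, peer CIDR, and peering connection ID are all hypothetical):

import boto3

# Route traffic destined for the peer VPC's CIDR through the peering
# connection; the peer VPC needs the mirror-image route back.
boto3.client("ec2").create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="10.1.0.0/16",
    VpcPeeringConnectionId="pcx-0123456789abcdef0",
)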
Q8.A company wants to access Amazon S3 from EC2 instances in a private subnet without sending traffic over the internet. Which type of VPC Endpoint should be used for S3?
✓ Correct: C. S3 and DynamoDB use Gateway Endpoints, which are free and route-table based.
Why C is correct: Gateway Endpoints are available for Amazon S3 and DynamoDB only. They are free, added as a target in the route table, and do not use an ENI. Traffic stays within the AWS network. Note that S3 also supports Interface Endpoints, but the Gateway Endpoint is preferred as it is free.
Why others are wrong:
A) Interface Endpoint — While S3 does support Interface Endpoints (powered by PrivateLink), Gateway Endpoints are the recommended approach for S3 as they are free and simpler to configure.
B) NAT Gateway Endpoint — This is not a real AWS service. NAT Gateways and VPC Endpoints are separate components.
D) PrivateLink Endpoint — PrivateLink is the technology behind Interface Endpoints. While S3 supports this, Gateway Endpoints are preferred for S3.
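For illustration, a Gateway Endpoint for S3 is created and associated with a route table in one call (the VPC ID, route table ID, and Region are hypothetical):

import boto3

boto3.client("ec2").create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)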
Q9.What is the key difference between a VPC Gateway Endpoint and a VPC Interface Endpoint?
✓ Correct: B. Gateway Endpoints are route-table based and free; Interface Endpoints use ENIs and cost money.
Why B is correct: Gateway Endpoints are added as targets in route tables, support only S3 and DynamoDB, and have no additional cost. Interface Endpoints provision an ENI with a private IP in your subnet, support most AWS services, and are billed per hour plus per GB of data processed.
Why others are wrong:
A) Reversed definitions — This reverses the mechanism. Gateway Endpoints use route tables; Interface Endpoints use ENIs.
C) Service support reversed — Gateway Endpoints only support S3 and DynamoDB. Interface Endpoints support dozens of AWS services.
D) Security groups — Interface Endpoints require security groups (attached to the ENI). Gateway Endpoints use VPC endpoint policies but not security groups.
Q10.A SaaS company wants to expose its service running in its VPC to customers in other VPCs without traversing the public internet. Customers should access the service through their own VPC. Which AWS service should be used?
✓ Correct: D. AWS PrivateLink allows you to expose a service privately to other VPCs using Interface Endpoints.
Why D is correct: AWS PrivateLink enables service providers to expose their services via a Network Load Balancer. Consumers create an Interface Endpoint in their VPC that connects to the service. Traffic stays on the AWS network, CIDRs don't need to be unique, and no VPC Peering or IGW is required.
Why others are wrong:
A) VPC Peering — VPC Peering exposes the entire VPC network, not just a specific service. It also requires non-overlapping CIDRs and doesn't scale well for many customers.
B) Transit Gateway — Transit Gateway connects multiple VPCs but exposes full network connectivity, not individual services. It is also more complex and costly for this use case.
C) Internet Gateway — An IGW would route traffic over the public internet, which the question explicitly wants to avoid.
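On the provider side this is a two-step setup: register the NLB as an endpoint service, then allow specific consumer principals. A minimal sketch, assuming a hypothetical NLB ARN and consumer account ID:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Expose the service: front it with an NLB, register an endpoint service.
svc = ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
        "loadbalancer/net/saas-nlb/0123456789abcdef"       # hypothetical ARN
    ],
    AcceptanceRequired=True,  # provider approves each connection request
)

# 2. Allow a specific customer account to create Interface Endpoints to it.
ec2.modify_vpc_endpoint_service_permissions(
    ServiceId=svc["ServiceConfiguration"]["ServiceId"],
    AddAllowedPrincipals=["arn:aws:iam::444455556666:root"],  # hypothetical
)

Each customer then creates an Interface Endpoint in their own VPC pointing at the published service name, so overlapping CIDRs never matter.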
Q11.A company is setting up a Site-to-Site VPN connection between their on-premises data center and AWS. Which TWO components are required on the AWS side?
✓ Correct: A. Site-to-Site VPN requires a Virtual Private Gateway (or Transit Gateway) on the AWS side and a Customer Gateway resource representing the on-premises device.
Why A is correct: A Virtual Private Gateway (VGW) is attached to the VPC and serves as the VPN concentrator on the AWS side. A Customer Gateway (CGW) resource in AWS represents the on-premises VPN device. Together, these two components define the VPN connection. Alternatively, a Transit Gateway can replace the VGW.
Why others are wrong:
B) IGW and NAT Gateway — These are for internet connectivity, not for establishing a VPN tunnel to on-premises networks.
C) Direct Connect Gateway — Direct Connect is a separate service providing dedicated physical connections, not VPN tunnels.
D) NLB and PrivateLink — These are for exposing services privately between VPCs, not for VPN connections.
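The two required resources map directly onto API calls. A sketch assuming a hypothetical on-premises public IP and VPC ID:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Customer Gateway: represents the on-premises VPN device.
cgw = ec2.create_customer_gateway(
    BgpAsn=65000,               # on-premises BGP ASN (example value)
    PublicIp="203.0.113.10",    # hypothetical on-premises public IP
    Type="ipsec.1",
)

# Virtual Private Gateway: the VPN concentrator on the AWS side.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")
ec2.attach_vpn_gateway(
    VpcId="vpc-0abc1234",       # hypothetical
    VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"],
)

# The VPN connection ties the two together (two redundant IPsec tunnels).
ec2.create_vpn_connection(
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"],
    Type="ipsec.1",
)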
Q12.A company with multiple branch offices wants to establish VPN connections to AWS using a hub-and-spoke model, where branch offices can also communicate with each other through AWS. Which feature should be used?
✓ Correct: C. AWS VPN CloudHub allows multiple branch offices to communicate with each other and with AWS through a hub-and-spoke VPN model.
Why C is correct: AWS VPN CloudHub operates using a Virtual Private Gateway with multiple Customer Gateway connections. It enables branch-to-branch communication through the VGW and uses BGP for dynamic routing. It is low-cost and leverages existing VPN connections over the public internet.
Why others are wrong:
A) VPC Peering transitive routing — VPC Peering does not support transitive routing. It connects VPCs, not on-premises branch offices.
B) Multiple VPN with static routing — Static routing does not enable branch-to-branch communication. CloudHub specifically provides the hub-and-spoke topology with BGP.
D) Direct Connect — Direct Connect requires physical dedicated connections and is more expensive than VPN for this use case.
Q13.A company requires a dedicated, private network connection from their data center to AWS with consistent network performance. The connection must provide at least 1 Gbps of bandwidth. Which service should be used?
✓ Correct: B. AWS Direct Connect dedicated connections provide 1 Gbps, 10 Gbps, or 100 Gbps of dedicated bandwidth.
Why B is correct: AWS Direct Connect dedicated connections provide a physical Ethernet connection with dedicated bandwidth of 1 Gbps, 10 Gbps, or 100 Gbps. They offer consistent network performance, lower latency, and reduced data transfer costs compared to internet-based connections.
Why others are wrong:
A) Site-to-Site VPN — VPN connections traverse the public internet, so they cannot guarantee consistent performance or dedicated bandwidth.
C) Hosted connection — Hosted connections are provisioned through an AWS Direct Connect Partner and offer capacities from 50 Mbps to 10 Gbps, but the connection is shared infrastructure, not a dedicated physical port.
D) Client VPN — AWS Client VPN is for individual user remote access, not data center connectivity.
Q14.A company uses AWS Direct Connect but needs to encrypt data in transit over the connection. What is the recommended approach?
✓ Correct: D. Run a VPN connection over Direct Connect to add IPsec encryption to the dedicated connection.
Why D is correct: Direct Connect by itself is not encrypted. To encrypt data in transit, you can establish a Site-to-Site VPN connection that uses the Direct Connect connection as its transport (via a public virtual interface). This adds IPsec encryption on top of the dedicated connection, giving you both consistent performance and encryption.
Why others are wrong:
A) Encrypted by default — Direct Connect does not encrypt traffic by default. It is a private connection but data travels unencrypted.
B) Virtual interface encryption — There is no encryption toggle on Direct Connect virtual interface settings.
C) AWS KMS — KMS manages encryption keys for data at rest and some AWS services, but it cannot encrypt a network connection.
Q15.A company has 15 VPCs that all need to communicate with each other and with the on-premises network via Direct Connect. Managing individual VPC peering connections has become complex. Which service simplifies this architecture?
✓ Correct: A. Transit Gateway acts as a central hub that connects multiple VPCs and on-premises networks.
Why A is correct: AWS Transit Gateway is a regional network hub that enables transitive routing between all connected VPCs, VPN connections, and Direct Connect gateways. Instead of N*(N-1)/2 peering connections, each VPC only needs one attachment to the Transit Gateway. It supports route tables for network segmentation.
Why others are wrong:
B) PrivateLink — PrivateLink exposes individual services, not full network connectivity between VPCs.
C) VPC Endpoints — VPC Endpoints provide access to AWS services, not VPC-to-VPC connectivity.
D) VPN CloudHub — CloudHub connects multiple on-premises sites but does not solve VPC-to-VPC connectivity at scale.
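The scaling argument is easy to quantify: a full mesh of 15 VPCs needs 15*14/2 = 105 peering connections, while a Transit Gateway needs 15 attachments. A minimal sketch with placeholder IDs:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

tgw = ec2.create_transit_gateway(Description="central network hub")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# One attachment per VPC replaces N*(N-1)/2 peering connections.
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0abc1234",            # hypothetical; repeat per VPC
    SubnetIds=["subnet-0aaa1111"],   # one subnet per AZ the TGW should use
)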
Q16.A security team needs to capture and inspect network traffic from EC2 instances for threat detection without installing agents on the instances. Which AWS feature provides this capability?
✓ Correct: C. VPC Traffic Mirroring copies actual network packets for deep inspection.
Why C is correct: VPC Traffic Mirroring copies network traffic from the ENI of EC2 instances and sends it to a target (such as an NLB or another ENI) for deep packet inspection. Unlike VPC Flow Logs, it captures the actual packet content, not just metadata. It is agentless and supports filtering by source, destination, and protocol.
Why others are wrong:
A) VPC Flow Logs — Flow Logs capture metadata (source/dest IP, ports, protocol, action) but not the actual packet content. They are insufficient for deep packet inspection.
B) CloudTrail — CloudTrail logs API calls to AWS services, not network traffic between instances.
D) GuardDuty — GuardDuty analyzes VPC Flow Logs and DNS logs for threats but does not provide raw packet capture for custom inspection.
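A mirroring setup has three parts: a target, a filter, and a session on the source ENI. A hedged sketch with hypothetical ENI IDs:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Target: where mirrored packets are delivered (an ENI here; an NLB also works).
target = ec2.create_traffic_mirror_target(NetworkInterfaceId="eni-0ccc3333")

# Filter: which packets to copy (all ingress TCP in this example).
filt = ec2.create_traffic_mirror_filter(Description="all ingress TCP")
filter_id = filt["TrafficMirrorFilter"]["TrafficMirrorFilterId"]
ec2.create_traffic_mirror_filter_rule(
    TrafficMirrorFilterId=filter_id,
    TrafficDirection="ingress",
    RuleNumber=100,
    RuleAction="accept",
    Protocol=6,                      # TCP
    SourceCidrBlock="0.0.0.0/0",
    DestinationCidrBlock="0.0.0.0/0",
)

# Session: attach the filter to the ENI being monitored.
ec2.create_traffic_mirror_session(
    NetworkInterfaceId="eni-0ddd4444",   # hypothetical source ENI to mirror
    TrafficMirrorTargetId=target["TrafficMirrorTarget"]["TrafficMirrorTargetId"],
    TrafficMirrorFilterId=filter_id,
    SessionNumber=1,
)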
Q17.A company needs to deploy a managed firewall that can perform deep packet inspection, intrusion prevention, and domain name filtering for traffic entering and leaving their VPC. Which AWS service should they use?
✓ Correct: B. AWS Network Firewall provides managed deep packet inspection, IPS, and domain filtering for VPC traffic.
Why B is correct: AWS Network Firewall is a managed stateful firewall that can inspect traffic at layers 3-7. It supports Suricata-compatible IPS rules, domain name filtering (allow/deny lists), and can be deployed in a dedicated firewall subnet. It integrates with Gateway Load Balancer and supports both ingress and egress filtering.
Why others are wrong:
A) AWS WAF — WAF operates at layer 7 and protects web applications behind ALB, API Gateway, or CloudFront. It does not inspect general VPC network traffic.
C) Security Groups — Security Groups provide basic allow rules based on IP, port, and protocol. They cannot perform deep packet inspection or domain filtering.
D) Network ACLs — NACLs are stateless, layer 3/4 filters. They cannot inspect packet content or filter by domain name.
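Domain filtering in Network Firewall is expressed as a stateful rule group. A minimal sketch of a denylist rule group, assuming a hypothetical domain:

import boto3

nfw = boto3.client("network-firewall", region_name="us-east-1")

# Stateful rule group that drops traffic to a listed domain,
# matched on the TLS SNI or the HTTP Host header.
nfw.create_rule_group(
    RuleGroupName="deny-known-bad-domains",
    Type="STATEFUL",
    Capacity=100,
    RuleGroup={
        "RulesSource": {
            "RulesSourceList": {
                "Targets": [".bad-domain.example"],      # hypothetical domain
                "TargetTypes": ["TLS_SNI", "HTTP_HOST"],
                "GeneratedRulesType": "DENYLIST",
            }
        }
    },
)

The rule group is then referenced from a firewall policy, which in turn is attached to a firewall deployed in a dedicated subnet.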
Q18.A company manages 50 AWS accounts in AWS Organizations. They need to apply consistent WAF rules and Shield Advanced protections across all accounts. Which TWO features does AWS Firewall Manager provide for this scenario? SELECT TWO
✓ Correct: A, E. Firewall Manager centrally manages WAF rules and Shield Advanced across Organizations.
Why A is correct: AWS Firewall Manager creates security policies that automatically deploy WAF Web ACLs with specified rules across all accounts and resources in an AWS Organization. New accounts and resources are automatically protected.
Why E is correct: Firewall Manager can deploy Shield Advanced protections across all member accounts. It automatically subscribes new accounts and applies DDoS protections to specified resource types.
Why others are wrong:
B) VPC peering — Firewall Manager manages security policies (WAF, Shield, Security Groups, Network Firewall, NACLs), not network connectivity.
C) TLS encryption — Firewall Manager does not manage encryption of traffic between accounts.
D) Replace IAM — Firewall Manager manages network security policies, not IAM access control policies.
Q19.Which Route 53 record type should be used to map a domain name to an IPv4 address?
✓ Correct: D. An A record maps a domain name to an IPv4 address.
Why D is correct: An A (Address) record maps a hostname to an IPv4 address (e.g., example.com → 192.0.2.1). It is the most fundamental DNS record type for directing traffic to a specific IP address.
Why others are wrong:
A) CNAME — A CNAME (Canonical Name) record maps a hostname to another hostname (e.g., www.example.com → example.com). It cannot be used at the zone apex.
B) AAAA — An AAAA record maps a hostname to an IPv6 address, not IPv4.
C) MX — An MX (Mail Exchange) record specifies the mail servers for a domain, not a general IP address mapping.
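In Route 53's API this is a single UPSERT. A sketch with a hypothetical hosted zone ID and a documentation IP:

import boto3

r53 = boto3.client("route53")

r53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",   # hypothetical zone ID
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "example.com",
            "Type": "A",                     # hostname -> IPv4
            "TTL": 300,
            "ResourceRecords": [{"Value": "192.0.2.1"}],
        },
    }]},
)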
Q20.A company wants to route DNS traffic to the AWS resource that provides the best latency for the end user. Which Route 53 routing policy should be used?
✓ Correct: A. Latency-based routing directs users to the AWS region with the lowest network latency.
Why A is correct: Latency-based routing evaluates the latency between the user and AWS regions, then routes the DNS query to the region that provides the lowest latency. AWS maintains a latency database that is continuously updated. This is ideal for multi-region deployments where performance is the priority.
Why others are wrong:
B) Geolocation routing — Geolocation routing directs traffic based on the geographic location of the user (country/continent), not network latency. A user in Germany might have lower latency to a US East region than EU, but geolocation would still route them to EU.
C) Weighted routing — Weighted routing distributes traffic based on assigned weights (percentages), useful for blue/green deployments, not latency optimization.
D) Simple routing — Simple routing returns a single resource (or multiple values randomly) with no intelligence about latency or location.
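A latency record is an ordinary record plus a Region and a SetIdentifier; you create one per region and Route 53 answers with the lowest-latency match. A sketch assuming the same hypothetical zone ID:

import boto3

r53 = boto3.client("route53")

# One latency record per region; Route 53 returns the lowest-latency match.
r53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",   # hypothetical
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "A",
            "SetIdentifier": "us-east-1",    # must be unique within the set
            "Region": "us-east-1",
            "TTL": 60,
            "ResourceRecords": [{"Value": "192.0.2.1"}],
        },
    }]},
)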
Q21.A company wants to use Route 53 to ensure DNS traffic is only routed to healthy endpoints. If an endpoint fails a health check, Route 53 should stop routing traffic to it. Which feature provides this?
✓ Correct: C. Route 53 health checks monitor endpoint health, and failover routing redirects traffic to healthy endpoints.
Why C is correct: Route 53 health checks monitor the health of endpoints by sending HTTP, HTTPS, or TCP requests at regular intervals. When combined with failover routing, Route 53 automatically routes traffic away from unhealthy primary endpoints to healthy secondary endpoints. Health checks can also be associated with latency, weighted, and geolocation routing policies.
Why others are wrong:
A) Route 53 Resolver — Route 53 Resolver handles DNS resolution between VPCs and on-premises networks. It does not provide health checking or failover.
B) DNSSEC signing — DNSSEC protects against DNS spoofing by signing DNS records. It does not monitor endpoint health.
D) Geoproximity routing — Geoproximity routes traffic based on the geographic location of resources and users with bias adjustments, not based on health status.
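The two pieces connect through the HealthCheckId on the record. A sketch, assuming a hypothetical zone and a /health endpoint on the primary:

import boto3

r53 = boto3.client("route53")

# Health check that probes the primary endpoint every 30 seconds.
hc = r53.create_health_check(
    CallerReference="primary-hc-001",               # any unique string
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "primary.example.com",  # hypothetical
        "Port": 443,
        "ResourcePath": "/health",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)

# PRIMARY failover record tied to the health check; a matching SECONDARY
# record (Failover="SECONDARY") takes over when this check fails.
r53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",           # hypothetical
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com",
            "Type": "A",
            "SetIdentifier": "primary",
            "Failover": "PRIMARY",
            "HealthCheckId": hc["HealthCheck"]["Id"],
            "TTL": 60,
            "ResourceRecords": [{"Value": "192.0.2.1"}],
        },
    }]},
)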
Q22.A company wants to protect their Route 53 hosted zone from DNS spoofing attacks where an attacker might forge DNS responses. Which feature should be enabled?
✓ Correct: B. DNSSEC signing cryptographically signs DNS records to prevent DNS spoofing.
Why B is correct: DNSSEC (Domain Name System Security Extensions) adds cryptographic signatures to DNS records. Route 53 supports DNSSEC signing for public hosted zones using a KMS key. Resolvers that support DNSSEC validation can verify that DNS responses have not been tampered with, protecting against man-in-the-middle and cache poisoning attacks.
Why others are wrong:
A) Health checks — Health checks monitor endpoint availability; they do not protect against DNS spoofing.
C) AWS WAF — WAF protects web applications at layer 7 (HTTP/HTTPS). It does not operate on DNS protocol traffic.
D) VPC DNS resolution — VPC DNS settings control whether DNS resolution is enabled within the VPC. They do not add cryptographic protection to DNS responses.
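Enabling it is a two-step API flow: create a key-signing key backed by an asymmetric KMS key (which must live in us-east-1), then turn on signing. A hedged sketch with hypothetical IDs; the DS record must still be registered at the parent zone:

import boto3

r53 = boto3.client("route53")

# Key-signing key backed by an asymmetric KMS key (ECC_NIST_P256, sign/verify).
r53.create_key_signing_key(
    CallerReference="ksk-001",
    HostedZoneId="Z0123456789ABCDEFGHIJ",            # hypothetical
    KeyManagementServiceArn=(
        "arn:aws:kms:us-east-1:111122223333:"
        "key/1234abcd-12ab-34cd-56ef-1234567890ab"   # hypothetical KMS key
    ),
    Name="example-ksk",
    Status="ACTIVE",
)

# Turn on DNSSEC signing for the hosted zone.
r53.enable_hosted_zone_dnssec(HostedZoneId="Z0123456789ABCDEFGHIJ")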
Q23.A company runs a web application behind an Application Load Balancer. They need to route traffic based on the URL path — requests to /api/* should go to one target group, and /images/* to another. Which ALB feature supports this?
✓ Correct: A. ALB supports path-based routing rules to direct requests to different target groups based on URL path.
Why A is correct: Application Load Balancer operates at layer 7 (HTTP/HTTPS) and supports advanced routing rules. Listener rules can route based on URL path, host header, HTTP method, query string, and source IP. Path-based routing allows /api/* and /images/* to map to different target groups.
Why others are wrong:
B) NLB with TCP — NLB operates at layer 4 (TCP/UDP) and does not inspect HTTP path information. It cannot route based on URL paths.
C) Classic Load Balancer — CLB does not support path-based routing. Cookie-based stickiness maintains session affinity but doesn't route by path.
D) Route 53 weighted routing — Route 53 distributes DNS queries by weight. It operates at the DNS level and cannot route based on URL path.
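A listener rule expresses this in one call. A sketch with placeholder ARNs:

import boto3

elbv2 = boto3.client("elbv2")

# Listener rule: anything under /api/* goes to the API target group;
# a second rule with "/images/*" would forward to the images target group.
elbv2.create_rule(
    ListenerArn="arn:aws:...",       # hypothetical HTTPS listener ARN
    Priority=10,                     # lower number = evaluated first
    Conditions=[{"Field": "path-pattern", "Values": ["/api/*"]}],
    Actions=[{"Type": "forward",
              "TargetGroupArn": "arn:aws:..."}],  # hypothetical /api target group
)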
Q24.A company hosts multiple HTTPS websites on the same set of EC2 instances behind an ALB. Each website uses a different domain name and SSL certificate. How does the ALB serve the correct certificate for each domain?
✓ Correct: D. ALB supports SNI, which allows it to serve multiple SSL certificates on a single listener.
Why D is correct: Server Name Indication (SNI) is a TLS extension where the client indicates which hostname it is connecting to during the TLS handshake. ALB and NLB support SNI, allowing multiple SSL certificates to be attached to a single HTTPS listener. The load balancer selects the correct certificate based on the hostname in the client's request.
Why others are wrong:
A) Only one certificate — ALB supports multiple certificates per listener through SNI. You can attach a default certificate plus additional certificates.
B) Separate ALB per domain — This is unnecessary since SNI allows one ALB to serve multiple domains with different certificates.
C) Wildcard certificate — A wildcard certificate (e.g., *.example.com) only covers subdomains of one domain, not multiple distinct domain names.
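Operationally this means one HTTPS listener with a default certificate plus additional certificates attached; SNI selects among them per request. A minimal sketch with placeholder ARNs:

import boto3

elbv2 = boto3.client("elbv2")

# Attach an additional certificate to an existing HTTPS listener;
# SNI then picks the matching certificate for each client hostname.
elbv2.add_listener_certificates(
    ListenerArn="arn:aws:...",                         # hypothetical listener ARN
    Certificates=[{"CertificateArn": "arn:aws:..."}],  # hypothetical ACM cert
)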
Q25.A company needs a load balancer that provides ultra-high performance with millions of requests per second and static IP addresses. The application uses TCP protocol. Which load balancer type is most appropriate?
✓ Correct: C. NLB operates at layer 4, provides static IPs, and handles millions of requests per second with ultra-low latency.
Why C is correct: Network Load Balancer operates at layer 4 (TCP/UDP/TLS) and is designed for extreme performance. It can handle millions of requests per second with ultra-low latency. Each NLB provides one static IP per AZ and can also be assigned Elastic IP addresses, making it ideal for applications that need whitelisted IPs.
Why others are wrong:
A) ALB — ALB operates at layer 7 (HTTP/HTTPS) and does not provide static IP addresses. It is not optimized for raw TCP traffic or extreme throughput.
B) CLB — Classic Load Balancer is a legacy option without static IP support and lower performance than NLB.
D) GWLB — Gateway Load Balancer is designed to deploy third-party virtual appliances (firewalls, IDS/IPS), not for serving application traffic.
Q26.A company wants to deploy a fleet of third-party firewall appliances to inspect all traffic entering their VPC. They need a load balancer that can distribute traffic to these appliances transparently. Which load balancer type should they use?
✓ Correct: B. Gateway Load Balancer is designed for deploying and scaling third-party virtual network appliances.
Why B is correct: Gateway Load Balancer (GWLB) operates at layer 3 (network layer) and uses the GENEVE protocol. It transparently inserts virtual appliances (firewalls, IDS/IPS, deep packet inspection) into the traffic flow. It distributes traffic to appliances while preserving the original source and destination. GWLB endpoints in the VPC route table direct traffic through the appliances.
Why others are wrong:
A) ALB — ALB operates at layer 7 and is designed for HTTP/HTTPS application routing, not transparent network appliance insertion.
C) NLB — NLB operates at layer 4 and serves application traffic. It does not support transparent appliance insertion using GENEVE protocol.
D) CLB — Classic Load Balancer is legacy and does not support transparent network appliance deployment.
Q27.An ALB is configured with an HTTPS listener. During a security audit, the team notices that the connection between the ALB and the backend EC2 instances is unencrypted. What does the term "SSL/TLS termination" mean in this context?
✓ Correct: A. SSL termination means the load balancer decrypts HTTPS traffic and sends unencrypted HTTP to backends.
Why A is correct: SSL/TLS termination means the ALB handles the encryption/decryption of HTTPS traffic. The client communicates with the ALB over HTTPS, and the ALB decrypts the traffic and forwards it to backend targets over HTTP. This offloads the CPU-intensive TLS processing from the EC2 instances. For end-to-end encryption, you can configure the ALB to re-encrypt traffic to targets.
Why others are wrong:
B) Terminates connection — "Termination" in this context refers to ending the SSL/TLS encryption, not stopping the connection entirely.
C) Forces TLS 1.3 — SSL termination does not force a specific TLS version on backends. TLS version is controlled by the ALB security policy.
D) Expired certificate — Certificate expiration is a separate issue from SSL termination, which is a normal architectural pattern.
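In API terms: the HTTPS listener holds the certificate (that is where TLS terminates), and choosing HTTP vs HTTPS as the target group protocol decides whether traffic to the backends is re-encrypted. A sketch with placeholder ARNs:

import boto3

elbv2 = boto3.client("elbv2")

# TLS terminates here: the listener holds the certificate and decrypts.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:...",                     # hypothetical ALB ARN
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:..."}],  # hypothetical ACM cert
    DefaultActions=[{"Type": "forward",
                     "TargetGroupArn": "arn:aws:..."}],  # hypothetical
)
# For end-to-end encryption, create the target group with Protocol="HTTPS"
# so the ALB re-encrypts traffic on its way to the instances.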
Q28.What is the purpose of the "connection draining" (deregistration delay) feature on an Elastic Load Balancer?
✓ Correct: B. Connection draining gives in-flight requests time to complete before a target is deregistered.
Why B is correct: Connection draining (deregistration delay) ensures that when a target is deregistered or becomes unhealthy, the load balancer stops sending new requests to it but allows existing in-flight requests to complete within a configurable timeout (default 300 seconds). This prevents users from experiencing interrupted requests during deployments or scaling events.
Why others are wrong:
A) Immediate termination — This is the opposite of connection draining. Draining specifically avoids abrupt termination of active connections.
C) Bandwidth draining — Connection draining has nothing to do with bandwidth management or throttling.
D) Idle connection removal — Idle timeout is a separate feature. Connection draining specifically handles the graceful removal of active targets.
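The delay is a target group attribute. A one-call sketch with a placeholder ARN:

import boto3

elbv2 = boto3.client("elbv2")

# Shorten the drain window from the 300-second default to 120 seconds.
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:...",   # hypothetical target group ARN
    Attributes=[{"Key": "deregistration_delay.timeout_seconds",
                 "Value": "120"}],
)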
Q29.A company is building a REST API using Amazon API Gateway. They want the API to be accessible only from within their VPC, not from the public internet. Which API Gateway endpoint type should they use?
✓ Correct: C. A private API Gateway endpoint is only accessible from within a VPC through a VPC Interface Endpoint.
Why C is correct: A private endpoint in API Gateway is only accessible from within your VPC using a VPC Interface Endpoint (powered by PrivateLink). Traffic never leaves the AWS network. You control access using VPC endpoint policies and API Gateway resource policies.
Why others are wrong:
A) Edge-optimized — Edge-optimized endpoints are routed through CloudFront edge locations and are publicly accessible. This is the default endpoint type.
B) Regional endpoint — Regional endpoints are publicly accessible from the internet within the same region. They do not restrict access to VPC-only.
D) Internal endpoint — "Internal endpoint" is not a valid API Gateway endpoint type. The correct term is "private."
Q30.A company uses Amazon API Gateway for their public API. They need to authenticate requests using an existing corporate identity provider that issues JWT tokens. Which API Gateway authentication mechanism should they use?
✓ Correct: D. Cognito User Pool authorizers validate JWT tokens; Lambda authorizers support custom authentication logic.
Why D is correct: A Cognito User Pool authorizer can validate JWT tokens issued by Cognito or federated identity providers. Alternatively, a Lambda authorizer (formerly custom authorizer) can implement custom token validation logic for any identity provider. Both approaches support JWT-based authentication from external identity providers.
Why others are wrong:
A) API key validation — API keys are used for usage tracking and throttling, not authentication. They should not be relied upon for security.
B) IAM authorization — IAM authorization requires AWS credentials (SigV4 signing) and is not suitable for external corporate identity providers issuing JWT tokens.
C) AWS WAF — WAF provides request filtering (IP blocking, rate limiting, SQL injection protection) but does not authenticate users or validate JWT tokens.
Q31.A company wants to restrict which AWS accounts and VPCs can invoke their private API Gateway endpoint. Which feature should they configure?
✓ Correct: B. API Gateway resource policies control which accounts, VPCs, or VPC endpoints can invoke the API.
Why B is correct: API Gateway resource policies are JSON-based policies (similar to S3 bucket policies) that define who can invoke the API. For private APIs, you can specify allowed VPC endpoints, VPC IDs, or AWS account IDs. This is the mechanism to control cross-account or cross-VPC access to private APIs.
Why others are wrong:
A) Usage plans — Usage plans define throttling limits and quotas for API keys. They control rate of access, not who can access the API.
C) Stage variables — Stage variables are name-value pairs for configuration (like environment variables for different stages). They do not control access.
D) Caching settings — Caching reduces latency by storing API responses. It has no role in access control.
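The policy itself is plain JSON; the common pattern for a private API is allow-all plus an explicit deny for any request that does not arrive through the expected VPC endpoint. A sketch as a Python dict, with a hypothetical endpoint ID:

import json

# Deny any invoke that does not come through the approved VPC endpoint.
resource_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "execute-api:/*",
        },
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "execute-api:/*",
            "Condition": {
                "StringNotEquals": {"aws:SourceVpce": "vpce-0abc1234"}  # hypothetical
            },
        },
    ],
}
policy_json = json.dumps(resource_policy)  # attach as the API's resource policy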
Q32.A company serves static content from an S3 bucket through Amazon CloudFront. They want to ensure that users can only access the S3 content through CloudFront, not by directly accessing the S3 bucket URL. Which feature should they configure?
✓ Correct: A. Origin Access Control restricts S3 bucket access to only CloudFront, blocking direct S3 URL access.
Why A is correct: Origin Access Control (OAC) is the recommended way to restrict S3 access to CloudFront. You configure the S3 bucket policy to only allow requests from the CloudFront distribution's OAC. Direct access to the S3 bucket URL is denied. OAC replaces the older Origin Access Identity (OAI) and supports additional features like SSE-KMS encrypted objects.
Why others are wrong:
B) Bucket versioning — Versioning keeps multiple versions of objects. It does not control who can access the bucket.
C) Transfer Acceleration — Transfer Acceleration speeds up uploads to S3 using CloudFront edge locations. It does not restrict access.
D) Field-level encryption — Field-level encryption protects specific data fields in POST requests. It does not restrict how users access S3 content.
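The enforcement lives in the bucket policy: only the CloudFront service principal, and only on behalf of the specific distribution, may read objects. A sketch with hypothetical bucket and distribution identifiers:

import json
import boto3

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontServicePrincipal",
        "Effect": "Allow",
        "Principal": {"Service": "cloudfront.amazonaws.com"},
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-content-bucket/*",    # hypothetical bucket
        "Condition": {"StringEquals": {
            # Only this distribution (via its OAC) may read the objects.
            "AWS:SourceArn": "arn:aws:cloudfront::111122223333:"
                             "distribution/E2EXAMPLE123"    # hypothetical
        }},
    }],
}
boto3.client("s3").put_bucket_policy(
    Bucket="my-content-bucket",
    Policy=json.dumps(bucket_policy),
)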
Q33.A media company needs to distribute premium video content through CloudFront. They want to restrict access so that only paying subscribers can watch the videos. The access should expire after 24 hours. Which CloudFront feature should they use?
✓ Correct: D. CloudFront signed URLs/cookies provide time-limited access to content for authorized users.
Why D is correct: CloudFront signed URLs grant access to a specific file with an expiration time. Signed cookies grant access to multiple files (useful for HLS video streams). Both use a CloudFront key pair to create cryptographic signatures. The application generates signed URLs/cookies for authenticated subscribers with a 24-hour expiration.
Why others are wrong:
A) Geo-restriction — Geo-restriction blocks or allows access based on the user's country. It cannot authenticate individual subscribers or set time limits.
B) S3 presigned URLs — S3 presigned URLs bypass CloudFront entirely and go directly to S3, losing the benefits of CDN caching and edge delivery.
C) Origin Access Control — OAC restricts the S3 origin to only accept requests from CloudFront. It does not control which end users can access the content.
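botocore ships a CloudFrontSigner helper for exactly this. A sketch assuming a hypothetical public key ID and a local private-key file; it uses the third-party rsa package, since CloudFront URL signing expects SHA-1 RSA signatures:

import datetime

import rsa  # third-party package: pip install rsa
from botocore.signers import CloudFrontSigner


def rsa_signer(message: bytes) -> bytes:
    # Sign with the private key matching the public key registered in CloudFront.
    with open("private_key.pem", "rb") as f:             # hypothetical key file
        private_key = rsa.PrivateKey.load_pkcs1(f.read())
    return rsa.sign(message, private_key, "SHA-1")


signer = CloudFrontSigner("K2JCJMDEHXQW5F", rsa_signer)  # hypothetical key ID
expires = datetime.datetime.utcnow() + datetime.timedelta(hours=24)

url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/videos/movie.m3u8",  # hypothetical
    date_less_than=expires,
)
print(url)  # hand this to the authenticated subscriber; it expires in 24 hours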
Q34.A company wants to restrict access to their CloudFront distribution so that users from certain countries cannot access the content. Which CloudFront feature provides this?
✓ Correct: C. CloudFront geo-restriction allows you to block or allow access based on the user's country.
Why C is correct: CloudFront geo-restriction uses a country-level allowlist or denylist to control access. CloudFront uses a GeoIP database to determine the user's country. You can either create an allowlist (only specified countries can access) or a denylist (specified countries are blocked). Users from restricted countries receive a 403 Forbidden response.
Why others are wrong:
A) Signed URLs with IP restrictions — Signed URLs can include IP address restrictions but cannot restrict by country. You would need to know all IP ranges for a country.
B) WAF IP set rules — IP sets block specific IP addresses or ranges, not countries. WAF does offer a separate geo match statement that can block by country, but CloudFront's built-in geo-restriction is the simpler, purpose-built option here.
D) Route 53 geolocation — Geolocation routing directs traffic to different endpoints based on location but does not block access. Users would still reach some endpoint.
Q35.A company wants to protect sensitive credit card data submitted through a CloudFront-distributed web form. They need to encrypt specific form fields at the edge so that only the application server can decrypt them, not intermediate services. Which feature should they use?
✓ Correct: B. CloudFront field-level encryption encrypts specific form fields at the edge using asymmetric keys.
Why B is correct: Field-level encryption allows you to specify which POST body fields should be additionally encrypted at the CloudFront edge location using a public key. Only the application with the corresponding private key can decrypt these fields. This provides an extra layer of protection beyond HTTPS, ensuring that even if intermediate services (like the load balancer) handle the request, the sensitive data remains encrypted.
Why others are wrong:
A) HTTPS with TLS 1.3 — HTTPS encrypts the entire connection in transit but the data is decrypted at each TLS termination point (e.g., the ALB). Intermediate services can see the data.
C) KMS envelope encryption — KMS is for encrypting data at rest or in application code. It does not integrate with CloudFront edge field encryption.
D) S3 SSE — S3 encryption protects data stored in S3, not data in transit through CloudFront.
Q36.Which TWO statements correctly describe AWS Shield? SELECT TWO
✓ Correct: B, D. Shield Standard is free for L3/L4; Shield Advanced adds DRT access and cost protection.
Why B is correct: AWS Shield Standard is automatically enabled for all AWS customers at no additional cost. It protects against common layer 3 (network) and layer 4 (transport) DDoS attacks such as SYN floods and UDP reflection attacks.
Why D is correct: AWS Shield Advanced ($3,000/month) provides 24/7 access to the DDoS Response Team, advanced attack detection, real-time metrics, and cost protection that credits you for scaling charges incurred during DDoS attacks.
Why others are wrong:
A) Standard is paid/L7 — Shield Standard is free and protects at layers 3/4, not layer 7.
C) $100/month — Shield Advanced costs $3,000/month per organization, not $100/month.
E) CloudFront only — Shield Advanced protects CloudFront, Route 53, ELB, Elastic IP, and Global Accelerator resources.
Q37.A company wants to block SQL injection and cross-site scripting (XSS) attacks targeting their web application behind an Application Load Balancer. Which AWS service should they use?
✓ Correct: A. AWS WAF provides layer 7 protection against SQL injection, XSS, and other web application attacks.
Why A is correct: AWS WAF (Web Application Firewall) operates at layer 7 and inspects HTTP/HTTPS requests. It can be attached to ALB, CloudFront, API Gateway, and AppSync. WAF rules can detect and block SQL injection (SQLi) and cross-site scripting (XSS) patterns in request parameters, headers, and body content using built-in match conditions or managed rule groups.
Why others are wrong:
B) Shield Standard — Shield Standard protects against volumetric DDoS attacks at layers 3/4. It does not inspect HTTP content for SQLi or XSS.
C) Security Groups — Security Groups filter traffic based on IP, port, and protocol at layers 3/4. They cannot inspect HTTP request content.
D) Network ACLs — NACLs are stateless layer 3/4 filters. They cannot detect application-layer attack patterns.
Q38.A company is experiencing a brute-force login attack where thousands of requests per second are hitting their login endpoint. They want to automatically block IP addresses that exceed a request threshold. Which AWS WAF feature should they use?
✓ Correct: C. Rate-based rules automatically block IPs that exceed a configured request threshold.
Why C is correct: WAF rate-based rules track the request rate from each IP address and automatically block IPs that exceed the threshold (minimum 100 requests per 5-minute window). Once the request rate drops below the threshold, the IP is automatically unblocked. This is ideal for mitigating brute-force attacks, DDoS, and web scraping.
Why others are wrong:
A) IP set with static blocklist — A static blocklist requires you to manually identify and add malicious IPs. It does not automatically detect and block IPs based on request rate.
B) Regex match rules — Regex match rules inspect request content for patterns. They do not track request rates or automatically block IPs.
D) Size constraint rules — Size constraint rules check request body/header sizes. They are not relevant to rate-based blocking.
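In wafv2 terms, the rate-based rule is a statement inside a Web ACL, and associating the ACL with the ALB activates it. A sketch with hypothetical names and ARNs:

import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

acl = wafv2.create_web_acl(
    Name="login-protection",            # hypothetical
    Scope="REGIONAL",                   # REGIONAL for ALB; CLOUDFRONT for CDN
    DefaultAction={"Allow": {}},
    VisibilityConfig={"SampledRequestsEnabled": True,
                      "CloudWatchMetricsEnabled": True,
                      "MetricName": "login-protection"},
    Rules=[{
        "Name": "rate-limit-per-ip",
        "Priority": 1,
        # Block any source IP exceeding 1000 requests per 5-minute window.
        "Statement": {"RateBasedStatement": {"Limit": 1000,
                                             "AggregateKeyType": "IP"}},
        "Action": {"Block": {}},
        "VisibilityConfig": {"SampledRequestsEnabled": True,
                             "CloudWatchMetricsEnabled": True,
                             "MetricName": "rate-limit-per-ip"},
    }],
)

# Attach the Web ACL to the load balancer fronting the login endpoint.
wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:...",          # hypothetical ALB ARN
)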
Q39.A company wants to quickly deploy a set of pre-configured WAF rules to protect against common web vulnerabilities (OWASP Top 10) without writing custom rules. What should they use?
✓ Correct: B. AWS Managed Rules are pre-configured WAF rule groups maintained by AWS and AWS Marketplace sellers.
Why B is correct: AWS Managed Rules for WAF are pre-built rule groups that address common threats including the OWASP Top 10. The Core Rule Set (CRS) covers SQLi, XSS, and other common vulnerabilities. Additional managed rule groups cover specific threats like bad bots, known bad inputs, and anonymous IP lists. They can be added to a Web ACL with no custom rule authoring required.
Why others are wrong:
A) Shield Advanced DRT — The DDoS Response Team helps during active DDoS attacks. They do not provide pre-configured WAF rules for web vulnerabilities.
C) AWS Config managed rules — AWS Config rules check resource configuration compliance, not web traffic inspection.
D) Amazon Inspector — Inspector assesses EC2 instances and container images for software vulnerabilities, not real-time web request filtering.
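A managed rule group is added as one more entry in the Web ACL's Rules list; note that rule group references take OverrideAction instead of Action. A sketch of just that entry:

# Rule entry referencing AWS's Core Rule Set; append to a Web ACL's Rules list.
common_rule_set = {
    "Name": "aws-common-rule-set",
    "Priority": 0,
    "Statement": {"ManagedRuleGroupStatement": {
        "VendorName": "AWS",
        "Name": "AWSManagedRulesCommonRuleSet",
    }},
    "OverrideAction": {"None": {}},     # keep the group's own block/allow actions
    "VisibilityConfig": {"SampledRequestsEnabled": True,
                         "CloudWatchMetricsEnabled": True,
                         "MetricName": "aws-common-rule-set"},
}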
Q40.Which TWO AWS services can AWS WAF be directly associated with to protect web applications? SELECT TWO
✓ Correct: A, D. WAF can be attached to CloudFront, ALB, API Gateway, AppSync, and Cognito User Pools.
Why A is correct: CloudFront is one of the primary services that supports WAF Web ACL attachment. WAF rules are evaluated at CloudFront edge locations, providing protection close to the user.
Why D is correct: Application Load Balancer supports WAF Web ACL attachment, allowing layer 7 inspection of HTTP/HTTPS requests before they reach backend targets.
Why others are wrong:
B) EC2 directly — WAF cannot be attached directly to EC2 instances. It must be associated with a supported AWS service (CloudFront, ALB, API Gateway, AppSync, Cognito).
C) NLB — Network Load Balancer operates at layer 4 and does not support WAF integration. WAF requires layer 7 HTTP traffic.
E) Direct Connect — Direct Connect is a network connectivity service, not a web application endpoint. WAF cannot be attached to it.
Q41.A company wants to build a comprehensive DDoS mitigation architecture for their web application. Which combination of services provides the best multi-layer protection?
✓ Correct: A. CloudFront absorbs edge attacks, WAF blocks L7 attacks, Shield Advanced provides DDoS protection, Auto Scaling handles load.
Why A is correct: A defense-in-depth DDoS architecture uses: CloudFront to absorb volumetric attacks at edge locations; AWS WAF rate-based rules to block application-layer floods; Shield Advanced for DDoS detection, DRT support, and cost protection; and Auto Scaling to absorb traffic spikes. This covers layers 3, 4, and 7.
Why others are wrong:
B) Direct Connect + VPN + SGs — These are connectivity and basic filtering tools, not DDoS mitigation services. Direct Connect is a private connection, not useful for public web app DDoS protection.
C) NACLs + Flow Logs + CloudTrail — These provide filtering and logging but not active DDoS mitigation. Flow Logs and CloudTrail are monitoring tools, not protection services.
D) Route 53 + S3 + IAM — While Route 53 and S3 are resilient to DDoS, this combination lacks active DDoS detection and layer 7 protection.
Q42.A company is building a mobile application and needs to provide user sign-up, sign-in, and multi-factor authentication. They also need to integrate with social identity providers like Google and Facebook. Which AWS service should they use?
✓ Correct: D. Cognito User Pools provide user directory, authentication, MFA, and social IdP federation for applications.
Why D is correct: Amazon Cognito User Pools are a managed user directory that provides sign-up, sign-in, MFA, password policies, and federation with social identity providers (Google, Facebook, Apple) and SAML/OIDC enterprise identity providers. User Pools issue JWT tokens that applications use for authentication.
Why others are wrong:
A) AWS IAM — IAM is for managing access to AWS services and resources, not for application end-user authentication. Creating IAM users for application users is an anti-pattern.
B) Cognito Identity Pools — Identity Pools (Federated Identities) provide temporary AWS credentials to access AWS services. They do not provide a user directory or sign-up/sign-in flows. Identity Pools work with User Pools.
C) AWS Directory Service — Directory Service provides managed Microsoft Active Directory, typically for enterprise/corporate scenarios, not mobile app user management.
Q43.A mobile application needs to allow authenticated users to upload files directly to an S3 bucket. The application uses Amazon Cognito User Pools for authentication. How can the app obtain temporary AWS credentials for S3 access?
✓ Correct: B. Cognito Identity Pools exchange User Pool tokens for temporary AWS credentials via STS.
Why B is correct: Cognito Identity Pools (Federated Identities) accept tokens from Cognito User Pools (or other identity providers) and exchange them for temporary AWS credentials (access key, secret key, session token) through STS. These credentials are scoped by an IAM role, allowing the mobile app to access S3 securely without embedded long-term credentials.
Why others are wrong:
A) Embedded access keys — Embedding IAM access keys in a mobile app is a critical security risk. The keys can be extracted and misused.
C) JWT as AWS credentials — Cognito JWT tokens authenticate users to your application but cannot be used directly as AWS credentials for S3 API calls.
D) Share role ARN — Knowing an IAM role ARN does not grant the ability to assume it. The mobile app needs temporary credentials, not a role ARN.
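For a rough sketch of this exchange using the AWS CLI (the identity pool ID, user pool ID, region, and tokens below are all placeholders; in practice the mobile SDKs perform these calls automatically):

    # Step 1: exchange the User Pool ID token for a Cognito identity ID
    aws cognito-identity get-id \
        --identity-pool-id us-east-1:11111111-2222-3333-4444-555555555555 \
        --logins cognito-idp.us-east-1.amazonaws.com/us-east-1_EXAMPLE=<ID_TOKEN_JWT>

    # Step 2: exchange the identity ID (plus the same login token) for temporary AWS credentials
    aws cognito-identity get-credentials-for-identity \
        --identity-id <IDENTITY_ID_FROM_STEP_1> \
        --logins cognito-idp.us-east-1.amazonaws.com/us-east-1_EXAMPLE=<ID_TOKEN_JWT>
    # The response contains AccessKeyId, SecretKey, and SessionToken scoped by the mapped IAM role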
Q44.A company wants to provide secure access to internal web applications for remote employees without using a VPN. They want to verify device posture and user identity before granting access. Which AWS service should they use?
✓ Correct: C. AWS Verified Access provides Zero Trust access to applications without a VPN, verifying user and device trust.
Why C is correct: AWS Verified Access implements a Zero Trust security model. It evaluates each access request against policies that check user identity (through IAM Identity Center or third-party IdPs) and device posture (through device management providers). Users access internal applications through a browser without needing a VPN client.
Why others are wrong:
A) AWS Client VPN — Client VPN provides traditional VPN access which the question specifically wants to avoid. It does not natively verify device posture.
B) PrivateLink — PrivateLink provides private connectivity between VPCs. It does not authenticate individual users or check device posture.
D) CloudFront signed URLs — Signed URLs provide time-limited access to content but do not verify user identity against an IdP or check device posture.
Q45.An application running on an EC2 instance retrieves its IAM role credentials from the instance metadata service. The security team is concerned about SSRF (Server-Side Request Forgery) attacks that could steal these credentials. What should they do?
✓ Correct: B. IMDSv2 requires a session token obtained via a PUT request, which mitigates SSRF attacks.
Why B is correct: IMDSv2 requires a two-step process: first, a PUT request to obtain a session token (with a TTL), then using that token in subsequent GET requests. Since SSRF attacks typically use GET requests and cannot easily add custom headers or make PUT requests, IMDSv2 effectively mitigates credential theft via SSRF. You can enforce IMDSv2 by setting HttpTokens to required.
Why others are wrong:
A) Hardcoded credentials — Hardcoded credentials are a worse security practice. They cannot be rotated automatically and can be exposed in code repositories.
C) Disable IMDS entirely — Disabling IMDS would prevent the instance from obtaining IAM role credentials, breaking applications that rely on the instance role.
D) Security group blocking — Security groups operate at the ENI level for traffic to/from the instance, not for internal traffic to the metadata endpoint (169.254.169.254). This approach does not work.
Q46.What is the key difference between IMDSv1 and IMDSv2 on EC2 instances?
✓ Correct: A. IMDSv2 adds token-based session authentication requiring a PUT request before metadata access.
Why A is correct: IMDSv1 allows any process on the instance to make a simple HTTP GET request to http://169.254.169.254/ to retrieve metadata. IMDSv2 adds security by requiring a session token: you must first make a PUT request with a TTL header to obtain a token, then include that token in the X-aws-ec2-metadata-token header on subsequent GET requests. This prevents SSRF attacks that can only issue GET requests.
Why others are wrong:
B) HTTPS vs HTTP — Both IMDSv1 and IMDSv2 use HTTP (not HTTPS) to communicate with the link-local metadata endpoint.
C) Encrypted vs plaintext — Both versions return metadata in the same format. IMDSv2 does not add encryption to the metadata content.
D) OS availability — Both IMDSv1 and IMDSv2 are available on both Linux and Windows EC2 instances.
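A minimal sketch of the IMDSv2 token flow and its enforcement (the instance ID is a placeholder):

    # Obtain a session token via a PUT request with a TTL header (the step a typical SSRF payload cannot perform)
    TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
        -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")

    # Present the token on subsequent GET requests
    curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
        http://169.254.169.254/latest/meta-data/iam/security-credentials/

    # Enforce IMDSv2 on an existing instance by requiring tokens
    aws ec2 modify-instance-metadata-options \
        --instance-id i-0123456789abcdef0 \
        --http-tokens required \
        --http-endpoint enabled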
Q47.A Network ACL has the following inbound rules: Rule 100: Allow TCP 443 from 0.0.0.0/0, Rule 200: Deny TCP 443 from 10.0.1.0/24, Rule *: Deny All. What happens when an instance in the 10.0.1.0/24 subnet sends HTTPS traffic?
✓ Correct: D. NACLs evaluate rules in order by rule number; the first matching rule is applied.
Why D is correct: Network ACLs evaluate rules in ascending order by rule number. Rule 100 (Allow TCP 443 from 0.0.0.0/0) matches first because 0.0.0.0/0 includes 10.0.1.0/24, and 100 is lower than 200. Once a match is found, the action is taken and no further rules are evaluated. The traffic is allowed.
Why others are wrong:
A) Denied by Rule 200 — Rule 200 is never reached because Rule 100 matches first. NACLs use first-match, not most-specific-match.
B) Denied by default rule — The default rule (*) only applies if no numbered rule matches. Rule 100 matches before the default rule.
C) More specific match — NACLs do not use longest-prefix or most-specific matching. They strictly evaluate rules by number order.
Q48.A company wants to achieve maximum resiliency for their AWS Direct Connect connection. Which TWO approaches provide the highest level of resiliency? (SELECT TWO)
✓ Correct: C, E. Maximum resiliency requires multiple connections at multiple locations with device redundancy.
Why C is correct: Using separate Direct Connect locations protects against the failure of an entire facility. AWS recommends connections at two or more geographically diverse locations for high resiliency.
Why E is correct: Having two or more connections per location provides device-level redundancy. If one connection or device fails, the other connection at the same location can handle traffic. Combined with C, this provides maximum resiliency.
Why others are wrong:
A) Single DX + VPN backup — This provides some redundancy but VPN over the internet is less reliable than a second Direct Connect connection.
B) Single connection, single location — This is the least resilient option with a single point of failure for both the connection and the location.
D) Increased bandwidth — Increasing bandwidth does not improve resiliency. A single higher-bandwidth connection is still a single point of failure.
Q49.A company needs to use AWS Network Firewall to inspect all traffic entering their VPC from the internet. Where should the Network Firewall endpoint be deployed in the architecture?
✓ Correct: A. Network Firewall endpoints are deployed in dedicated subnets, and route tables direct traffic through them.
Why A is correct: AWS Network Firewall endpoints are deployed in dedicated firewall subnets. The architecture uses route table manipulation: the Internet Gateway's ingress route table directs traffic to the firewall endpoint, and the firewall subnet's route table forwards inspected traffic to the application subnet. This creates an inline inspection architecture.
Why others are wrong:
B) Attached to IGW — Network Firewall cannot be directly attached to an Internet Gateway. It is deployed as an endpoint in a subnet.
C) Same subnet as applications — Placing the firewall in the application subnet would not allow proper traffic routing. A dedicated subnet provides clear traffic flow control.
D) Default subnet, no routes — Network Firewall requires explicit route table changes to direct traffic through it. It does not automatically intercept traffic.
Q50.A company has a VPC Endpoint policy attached to their S3 Gateway Endpoint. The policy only allows access to a specific S3 bucket. An IAM user with full S3 permissions tries to access a different S3 bucket through the endpoint. What happens?
✓ Correct: C. VPC Endpoint policies are evaluated alongside IAM policies; both must allow the action.
Why C is correct: VPC Endpoint policies act as an additional layer of access control. When traffic goes through the VPC Endpoint, both the IAM policy and the endpoint policy must allow the request. Even though the IAM user has full S3 permissions, the endpoint policy restricts access to only the specified bucket, so access to any other bucket is denied through this endpoint.
Why others are wrong:
A) IAM overrides endpoint policy — IAM policies and endpoint policies are evaluated together. The request must be allowed by both. Neither overrides the other.
B) No policy support — Gateway Endpoints do support VPC Endpoint policies, which are JSON documents that control which principals can use the endpoint to access which resources.
D) Routed through NAT — The route table determines how traffic is routed. If the route table points to the Gateway Endpoint for S3, traffic goes through the endpoint, not the NAT Gateway.
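For illustration, an endpoint policy of this shape might look like the following (the bucket name is a placeholder). Requests to any other bucket through this endpoint are denied even when the caller's IAM policy allows them:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "AllowOnlyOneBucket",
          "Effect": "Allow",
          "Principal": "*",
          "Action": "s3:*",
          "Resource": [
            "arn:aws:s3:::example-allowed-bucket",
            "arn:aws:s3:::example-allowed-bucket/*"
          ]
        }
      ]
    }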
Q51.A security engineer notices that a Security Group allows inbound traffic on port 22 from 0.0.0.0/0. They want to block SSH access from a specific malicious IP address while keeping the rule for all other IPs. What is the best approach?
✓ Correct: B. NACLs support deny rules, which can block specific IPs while allowing others through the Security Group.
Why B is correct: Security Groups only support allow rules, so you cannot add a deny rule in a Security Group. Network ACLs support both allow and deny rules. You can add a deny rule with a low rule number (to be evaluated first) for the malicious IP on port 22. All other traffic continues to be handled by the Security Group allow rule.
Why others are wrong:
A) SG deny rule — Security Groups do not support deny rules. You can only specify what is allowed; everything else is implicitly denied.
C) Change SSH port — Changing the port is security through obscurity and does not actually block the malicious IP from trying other attack vectors.
D) Delete IGW — Deleting the Internet Gateway would block all internet traffic, which is far too disruptive for blocking a single IP.
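A sketch of such a deny entry with the AWS CLI (the NACL ID and the offending IP are placeholders); rule number 50 is chosen so it is evaluated before the existing allow rules:

    aws ec2 create-network-acl-entry \
        --network-acl-id acl-0123456789abcdef0 \
        --ingress \
        --rule-number 50 \
        --protocol tcp \
        --port-range From=22,To=22 \
        --cidr-block 198.51.100.77/32 \
        --rule-action deny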
Q52.What type of Route 53 record should be used to point a domain name (e.g., example.com) to an Application Load Balancer?
✓ Correct: D. An Alias record can point the zone apex to an ALB and is free for AWS resources.
Why D is correct: A Route 53 Alias record maps a domain name directly to an AWS resource like an ALB. Unlike CNAME records, Alias records can be used at the zone apex (e.g., example.com, not just www.example.com). Alias queries to AWS resources (ALB, CloudFront, S3) are also free of charge. The Alias resolves to the ALB's IP addresses automatically.
Why others are wrong:
A) CNAME record — A CNAME record cannot be used at the zone apex (bare domain like example.com). It only works for subdomains. Also, CNAME queries incur Route 53 charges.
B) A record with IP — ALB IP addresses change over time. You should never hardcode an ALB IP in a DNS record.
C) MX record — MX records are for email routing, not for pointing to load balancers.
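For reference, a Route 53 change batch for such an Alias record might look like this (the domain, ALB DNS name, and hosted zone ID are placeholders; each region's ELB endpoint has its own fixed hosted zone ID published by AWS):

    {
      "Changes": [
        {
          "Action": "UPSERT",
          "ResourceRecordSet": {
            "Name": "example.com",
            "Type": "A",
            "AliasTarget": {
              "HostedZoneId": "Z35SXDOTRQ7X7K",
              "DNSName": "my-alb-1234567890.us-east-1.elb.amazonaws.com",
              "EvaluateTargetHealth": false
            }
          }
        }
      ]
    }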
Q53.Which TWO capabilities are specific to AWS Shield Advanced and NOT included in Shield Standard? (SELECT TWO)
✓ Correct: B, E. Cost protection and DRT access are exclusive to Shield Advanced.
Why B is correct: DDoS cost protection is exclusive to Shield Advanced. If a DDoS attack causes your application to scale up (e.g., larger EC2 instances, more ALB capacity), AWS credits the associated charges back to your account.
Why E is correct: The DDoS Response Team (DRT) provides 24/7 access to AWS DDoS experts who can assist during active attacks, create custom WAF rules, and analyze attack patterns. This is only available with Shield Advanced.
Why others are wrong:
A) SYN flood protection — Shield Standard already protects against common layer 3/4 attacks including SYN floods.
C) UDP reflection protection — Shield Standard already protects against UDP reflection attacks as part of its baseline protection.
D) Automatic protection — Automatic protection for all customers is a feature of Shield Standard, not Advanced. Advanced requires subscription.
Q54.A VPC has a public subnet (10.0.1.0/24) and a private subnet (10.0.2.0/24). An EC2 instance in the private subnet has a Security Group that allows all outbound traffic. The instance cannot reach the internet. The route table has a route to a NAT Gateway. What could be the issue?
✓ Correct: A. A NAT Gateway requires an Elastic IP to perform network address translation for internet access.
Why A is correct: A NAT Gateway requires an Elastic IP address to translate private IP addresses to a public IP for internet-bound traffic. Without an EIP, the NAT Gateway cannot route traffic to the internet. Additionally, the NAT Gateway must be in a public subnet that has a route to an Internet Gateway.
Why others are wrong:
B) Inbound SG rule — Security Groups are stateful. If outbound traffic is allowed, the return traffic is automatically allowed without an explicit inbound rule.
C) Public IP on instance — Instances in private subnets do not need public IPs. The NAT Gateway provides the public IP for internet communication.
D) Flow Logs — VPC Flow Logs are a monitoring feature. They do not affect whether traffic can flow or not.
Q55.A company is configuring their Site-to-Site VPN and wants routes learned from the on-premises network to automatically appear in the VPC route table. What must they enable?
✓ Correct: C. Route propagation allows BGP-learned routes from the VPN to be automatically added to VPC route tables.
Why C is correct: Route propagation must be enabled on the VPC route table for the Virtual Private Gateway. When enabled, routes learned via BGP from the on-premises Customer Gateway are automatically propagated into the route table. This eliminates the need to manually add static routes for on-premises network CIDRs.
Why others are wrong:
A) NAT Gateway auto-routing — NAT Gateways do not have an auto-routing feature. They are for outbound internet access from private subnets.
B) VPC DNS resolution — DNS resolution settings control whether DNS queries within the VPC are resolved. They have no impact on VPN routing.
D) VPC peering transitive routing — VPC Peering does not support transitive routing, and this is unrelated to VPN route propagation.
Q56.An API Gateway is configured with an edge-optimized endpoint. How does this endpoint type route API requests?
✓ Correct: B. Edge-optimized API endpoints route requests through CloudFront's global edge network.
Why B is correct: An edge-optimized endpoint is the default API Gateway type. Requests from clients are routed to the nearest CloudFront edge location, which then forwards the request to the API Gateway in its deployed region. This improves latency for geographically distributed clients. The CloudFront distribution is managed by API Gateway automatically.
Why others are wrong:
A) Direct to region — This describes a regional endpoint, where requests go directly to the API Gateway in the deployed region without CloudFront.
C) VPC Interface Endpoint — This describes a private endpoint, which is only accessible from within a VPC.
D) Global Accelerator — API Gateway does not use Global Accelerator. Edge-optimized endpoints use CloudFront's edge network.
Q57.Which TWO statements about Amazon CloudFront origins are correct? (SELECT TWO)
✓ Correct: A, C. CloudFront supports S3 buckets and ALBs (among others) as origins.
Why A is correct: S3 buckets are one of the most common CloudFront origins. CloudFront caches and serves S3 objects from edge locations. You can restrict S3 access using Origin Access Control (OAC).
Why C is correct: An Application Load Balancer can be used as a custom origin for dynamic content. CloudFront sends requests to the ALB, which distributes them to backend targets. The ALB must be publicly accessible (have a public DNS name).
Why others are wrong:
B) Only AWS origins — CloudFront supports custom origins, which can be any HTTP server with a public endpoint, including on-premises servers.
D) Same region requirement — CloudFront is a global service. Origins can be in any region. CloudFront distributions are not region-specific.
E) Must have public IPs — While custom HTTP origins need to be reachable, S3 origins with OAC do not require public access. Also, CloudFront can use VPC origins to reach resources in private subnets.
Q58.A company has configured AWS WAF with a Web ACL containing multiple rules. In what order are the rules evaluated?
✓ Correct: D. WAF Web ACL rules are evaluated by priority number, with the first match determining the action.
Why D is correct: AWS WAF evaluates rules by priority, starting with the lowest numerical value. When a request matches a rule, the associated action (Allow, Block, or Count) is applied and no further rules are evaluated. If no rule matches, the Web ACL's default action (Allow or Block) is applied. This is why rule ordering matters when designing WAF configurations.
Why others are wrong:
A) All evaluated simultaneously — Rules are evaluated sequentially by priority, not simultaneously. The first match stops evaluation.
B) Alphabetical order — Rules are evaluated by their numerical priority value, not by name.
C) Random order — Rule evaluation is deterministic and always follows the same priority order.
Q59.Which AWS Firewall Manager security policy type would you use to deploy consistent security group rules across all EC2 instances in multiple accounts within an AWS Organization?
✓ Correct: A. Firewall Manager security group policies centrally manage and deploy security group rules across accounts.
Why A is correct: Firewall Manager security group policies allow you to create a baseline set of security group rules and apply them to all specified resources across member accounts. The common security group policy deploys a primary security group to all in-scope resources. Firewall Manager also supports audit security group policies to check and remediate non-compliant security groups.
Why others are wrong:
B) WAF policy — WAF policies deploy Web ACL rules to CloudFront, ALB, and API Gateway resources, not security groups on EC2 instances.
C) Network Firewall policy — Network Firewall policies deploy AWS Network Firewall rules and endpoints, not security groups.
D) DNS Firewall policy — DNS Firewall policies manage Route 53 Resolver DNS Firewall rules for DNS query filtering, not security groups.
Q60.A company is designing a multi-tier web application architecture. Which TWO statements about using NACLs and Security Groups together are correct? (SELECT TWO)
✓ Correct: B, D. NACLs are evaluated first at the subnet level; Security Groups provide instance-level control.
Why B is correct: For inbound traffic, NACLs are evaluated first as traffic enters the subnet. If the NACL allows the traffic, it then reaches the instance where the Security Group is evaluated. Both must allow the traffic for it to reach the instance.
Why D is correct: NACLs operate at the subnet level and apply to all resources in the subnet, providing a broad first line of defense. Security Groups operate at the ENI/instance level, allowing fine-grained per-instance rules (e.g., web servers allow port 80, database servers allow port 3306).
Why others are wrong:
A) SG overrides NACL — NACLs and Security Groups are independent. If a NACL denies traffic, it never reaches the Security Group. Both must allow traffic.
C) Same layer and scope — NACLs operate at the subnet level; Security Groups operate at the instance/ENI level. They are complementary, not the same.
E) NACLs only without SGs — NACLs and Security Groups are complementary. Using both provides defense in depth. They serve different purposes and are recommended together.
Domain 4 — Identity and Access Management (60 Questions)
Q1.An IAM policy uses the NotAction element with an Allow effect. Which statement BEST describes the behavior of this policy?
✓ Correct: B. NotAction with Allow permits everything except the listed actions.
Why B is correct: When you use NotAction with an Allow effect, the policy allows all actions that are NOT in the NotAction list. This is useful when you want to grant broad access while excluding specific actions. It does NOT explicitly deny those excluded actions — they simply are not allowed by this policy.
Why others are wrong:
A) Denies listed actions — NotAction with Allow does not deny anything; it simply does not allow the listed actions. Another policy could still grant those actions.
C) Explicitly denies listed actions — There is no explicit deny here. Only a Deny effect creates an explicit deny. The excluded actions are just not covered by this policy.
D) Allows only listed actions — This describes the behavior of a regular Action element with Allow, not NotAction.
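A minimal sketch of the pattern: the statement below allows every action except IAM actions. Note that it does not deny iam:*; a second policy could still grant it:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "NotAction": "iam:*",
          "Resource": "*"
        }
      ]
    }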
Q2.A company wants to ensure that IAM users can only make API calls from the corporate network (IP range 203.0.113.0/24). Which IAM policy condition key should they use?
✓ Correct: C. aws:SourceIp restricts API calls based on the caller's public IP address.
Why C is correct: The aws:SourceIp condition key checks the IP address of the requester. You can use it in a Deny policy with a NotIpAddress condition to block requests from outside your corporate CIDR range. This works for IAM users making direct API calls.
Why others are wrong:
A) aws:SourceVpc — This condition restricts access based on the VPC ID, not a public IP range. It is used with VPC endpoints.
B) aws:SourceVpce — This restricts access based on a specific VPC endpoint ID, not an IP range.
D) aws:VpcSourceIp — This is not a valid IAM condition key.
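A sketch of the deny-outside-the-CIDR pattern described above (keep in mind that aws:SourceIp sees the caller's public IP, so requests arriving through VPC endpoints or made by AWS services on your behalf may need additional conditions):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "DenyOutsideCorpNetwork",
          "Effect": "Deny",
          "Action": "*",
          "Resource": "*",
          "Condition": {
            "NotIpAddress": { "aws:SourceIp": "203.0.113.0/24" }
          }
        }
      ]
    }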
Q3.A security engineer wants to restrict EC2 actions so users can only start and stop instances tagged with their own username. Which IAM policy condition key enables this?
✓ Correct: A. ec2:ResourceTag checks tags on EC2 resources and can be combined with policy variables.
Why A is correct: The ec2:ResourceTag condition key lets you check tags on the EC2 instance being acted upon. By using the policy variable ${aws:username} as the tag value, you can enforce that users can only manage instances tagged with their own username. This is a form of attribute-based access control (ABAC).
Why others are wrong:
B) aws:PrincipalTag — This checks tags on the IAM principal (user/role), not on the EC2 resource being accessed.
C) ec2:InstanceTag — This is not a valid condition key. The correct key is ec2:ResourceTag.
D) aws:ResourceTag — While aws:ResourceTag exists for some services, EC2 uses the service-specific ec2:ResourceTag condition key.
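A minimal sketch, assuming instances carry an Owner tag whose value is the IAM username:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": ["ec2:StartInstances", "ec2:StopInstances"],
          "Resource": "arn:aws:ec2:*:*:instance/*",
          "Condition": {
            "StringEquals": { "ec2:ResourceTag/Owner": "${aws:username}" }
          }
        }
      ]
    }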
Q4.What happens when an IAM permissions boundary is applied to an IAM user?
✓ Correct: D. Permissions boundaries limit the maximum permissions; effective permissions are the intersection.
Why D is correct: A permissions boundary sets the maximum permissions that an identity-based policy can grant. The user's effective permissions are only those that appear in BOTH the identity-based policy AND the permissions boundary. If a permission is in the identity policy but not in the boundary, it is not granted.
Why others are wrong:
A) Replaces identity-based policies — Permissions boundaries do not replace policies. They work alongside identity-based policies to limit effective permissions.
B) Grants additional permissions — Permissions boundaries never grant permissions. They only restrict what identity-based policies can grant.
C) Only applies to console access — Permissions boundaries apply to all actions regardless of whether they are made via the console, CLI, or API.
Q5.A company uses AWS Organizations. They want to prevent any account in the organization from leaving the organization. Which type of policy should they use?
✓ Correct: B. SCPs can deny the organizations:LeaveOrganization action across all accounts.
Why B is correct: Service Control Policies (SCPs) are applied at the AWS Organizations level and restrict what actions member accounts can perform. By creating an SCP that denies organizations:LeaveOrganization, you prevent any account in the OU from leaving. SCPs apply to all users and roles in the affected accounts (except the management account).
Why others are wrong:
A) Permissions boundary — Permissions boundaries are set per IAM user or role and cannot be centrally enforced across all accounts in an organization.
C) Resource-based policy — Resource-based policies are attached to resources like S3 buckets, not used for organization-level governance.
D) IAM identity-based policy — Identity-based policies would need to be applied in every account individually and could be removed by account administrators.
Q6.An administrator needs to grant a user in Account A access to an S3 bucket in Account B. Which approach uses resource-based policies?
✓ Correct: C. S3 bucket policies are resource-based policies that can grant cross-account access.
Why C is correct: A resource-based policy is attached directly to the resource (the S3 bucket). By specifying the Account A principal's ARN in the bucket policy, you grant cross-account access without requiring the user to assume a role. The user retains their original permissions from Account A while also gaining the permissions granted by the bucket policy.
Why others are wrong:
A) IAM role assumption — This is a valid cross-account approach, but it uses IAM roles, not resource-based policies. The user must give up their Account A permissions while assuming the role.
B) IAM Identity Center — This is a centralized SSO solution, not a resource-based policy approach.
D) Duplicate IAM user — Creating duplicate credentials across accounts is a poor security practice and does not use resource-based policies.
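A minimal bucket policy sketch for this scenario (the account ID, user name, and bucket name are placeholders). For cross-account access, the caller's identity-based policy in Account A must also allow these S3 actions:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "AllowAccountAUser",
          "Effect": "Allow",
          "Principal": { "AWS": "arn:aws:iam::111122223333:user/alice" },
          "Action": ["s3:GetObject", "s3:PutObject"],
          "Resource": "arn:aws:s3:::example-bucket/*"
        }
      ]
    }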
Q7.Which STS API call should an application use when it needs to assume an IAM role in another AWS account?
✓ Correct: A. AssumeRole is the standard API for assuming a role in the same or another account.
Why A is correct: sts:AssumeRole allows an IAM user or role to assume another role, including roles in different AWS accounts. The caller must be granted permission to call sts:AssumeRole, and the target role's trust policy must allow the caller. It returns temporary security credentials.
Why others are wrong:
B) GetSessionToken — This is used to get temporary credentials for an IAM user, typically for MFA-protected API access. It does not assume a different role.
C) GetFederationToken — This is used by an IAM user to create temporary credentials for a federated user. It does not assume a role in another account.
D) AssumeRoleWithSAML — This is specifically for users authenticated via a SAML identity provider, not for application-to-role assumption.
Q8.What is the purpose of the ExternalId parameter when assuming an IAM role?
✓ Correct: D. The ExternalId mitigates the confused deputy problem in cross-account role assumption.
Why D is correct: The confused deputy problem occurs when a third-party service is tricked into acting on a role it should not access. The ExternalId is a secret value shared between the trusting account and the third party. The role's trust policy requires this ExternalId, so only the legitimate third party can assume the role.
Why others are wrong:
A) Encrypts credentials — ExternalId has no encryption function. STS credentials are delivered over TLS regardless.
B) Session duration — The DurationSeconds parameter controls session duration, not ExternalId.
C) Identifies the account — The role ARN identifies the account. ExternalId is a verification secret, not an account identifier.
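A trust-policy sketch showing where ExternalId fits (the account ID and the shared secret are placeholders):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": { "AWS": "arn:aws:iam::999988887777:root" },
          "Action": "sts:AssumeRole",
          "Condition": {
            "StringEquals": { "sts:ExternalId": "unique-shared-secret-123" }
          }
        }
      ]
    }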
Q9.A company is implementing SAML 2.0 federation with AWS. Which TWO components are required in this architecture? (SELECT TWO)
✓ Correct: B, D. SAML federation requires both an IdP and an IAM SAML provider entity.
Why B is correct: The corporate Identity Provider (such as ADFS, Okta, or OneLogin) authenticates the user and generates the SAML assertion that AWS trusts. Without a SAML-capable IdP, federation cannot work.
Why D is correct: You must create an IAM SAML identity provider entity in your AWS account and upload the IdP's metadata document. This establishes the trust relationship between AWS and your IdP.
Why others are wrong:
A) Cognito User Pool — Cognito is a separate identity solution. SAML 2.0 federation with IAM does not require Cognito.
C) AWS Directory Service — Directory Service is optional. You may use it with AD, but it is not required for SAML federation.
E) IAM Identity Center — Identity Center is a separate SSO solution. Direct SAML federation with IAM does not require Identity Center.
Q10.Which Amazon Cognito component is responsible for authenticating users and issuing JSON Web Tokens (JWTs)?
✓ Correct: B. Cognito User Pools handle user authentication and issue JWTs.
Why B is correct: Cognito User Pools are user directories that provide sign-up, sign-in, and user management. After successful authentication, the User Pool issues JWTs (ID token, access token, and refresh token). These tokens can be used to authorize API calls via API Gateway or other services.
Why others are wrong:
A) Cognito Identity Pool — Identity Pools (Federated Identities) exchange tokens for temporary AWS credentials. They do not authenticate users or issue JWTs.
C) Cognito Sync — Cognito Sync is a deprecated service for syncing user data across devices, not for authentication.
D) Cognito Federated Identities — This is another name for Identity Pools, which provide AWS credentials, not JWTs.
Q11.A mobile app needs to allow users to sign in with Google and then access an S3 bucket. Which Cognito component provides the temporary AWS credentials for S3 access?
✓ Correct: C. Cognito Identity Pools exchange third-party tokens for temporary AWS credentials.
Why C is correct: Cognito Identity Pools (Federated Identities) accept tokens from identity providers like Google, Facebook, or Cognito User Pools and exchange them for temporary AWS credentials (access key, secret key, session token). These credentials are scoped by an IAM role and allow direct access to AWS services like S3.
Why others are wrong:
A) Cognito User Pool — User Pools authenticate users and issue JWTs, but do not provide AWS credentials for accessing S3 directly.
B) Cognito Hosted UI — The Hosted UI is a sign-in web page provided by Cognito. It facilitates authentication but does not issue AWS credentials.
D) Lambda Triggers — Lambda triggers customize the authentication flow (e.g., pre-sign-up validation) but do not issue AWS credentials.
Q12.Which IAM policy condition key allows you to restrict API calls to specific AWS regions?
✓ Correct: A. aws:RequestedRegion checks which region the API call targets.
Why A is correct: The aws:RequestedRegion condition key represents the AWS region that the API call is directed to. You can use it in a Deny policy to prevent users from creating resources or performing actions in unauthorized regions, such as restricting all activity to us-east-1 and eu-west-1 only.
Why others are wrong:
B) aws:SourceRegion — This is not a standard IAM global condition key for restricting region access.
C) aws:CurrentRegion — This is not a valid IAM condition key.
D) aws:ResourceRegion — This is not the standard condition key for restricting the region of API calls. aws:RequestedRegion is the correct key.
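A sketch of the region-restriction pattern; a production version would typically exempt global services (IAM, CloudFront, Route 53, and others) via NotAction, omitted here for brevity:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "DenyOutsideApprovedRegions",
          "Effect": "Deny",
          "Action": "*",
          "Resource": "*",
          "Condition": {
            "StringNotEquals": {
              "aws:RequestedRegion": ["us-east-1", "eu-west-1"]
            }
          }
        }
      ]
    }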
Q13.An organization wants to enforce MFA for all API calls that terminate EC2 instances. Which condition key should be used in the IAM policy?
✓ Correct: D. aws:MultiFactorAuthPresent checks whether MFA was used for the request.
Why D is correct: The aws:MultiFactorAuthPresent condition key is a boolean that is true when the request was authenticated with MFA. You can create a Deny policy for ec2:TerminateInstances with condition "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"} to block termination without MFA (see the sketch below).
Why others are wrong:
A) aws:MFAAuthenticated — This is not a valid IAM condition key. The correct key is aws:MultiFactorAuthPresent.
B) aws:TokenIssueTime — This checks when temporary credentials were issued, not whether MFA was used.
C) aws:SecureTransport — This checks whether the request was made over HTTPS (TLS), not whether MFA was used.
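A minimal sketch of the Deny statement described above (the Sid is a placeholder):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyTerminateWithoutMFA",
      "Effect": "Deny",
      "Action": "ec2:TerminateInstances",
      "Resource": "*",
      "Condition": {
        "BoolIfExists": {
          "aws:MultiFactorAuthPresent": "false"
        }
      }
    }
  ]
}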
Q14.What is Attribute-Based Access Control (ABAC) in AWS IAM?
✓ Correct: B. ABAC uses tags on principals and resources to control access.
Why B is correct: Attribute-Based Access Control (ABAC) uses tags as attributes to make authorization decisions. For example, you can write a single policy that allows users with tag Department=Finance to access resources also tagged Department=Finance. This uses aws:PrincipalTag and aws:ResourceTag condition keys and scales well because you do not need to update policies when new resources are created (see the sketch below).
Why others are wrong:
A) Organizational hierarchy — This describes organizational unit-based access or RBAC (Role-Based Access Control), not ABAC.
C) IP and time of day — While these are conditions, ABAC specifically refers to tag-based access control in AWS.
D) Resource-based policies only — ABAC can be implemented with identity-based or resource-based policies. It is about the use of tags, not the policy type.
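A sketch of such an ABAC policy using the Department tag from the example; the EC2 actions and Sid are illustrative assumptions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowMatchingDepartment",
      "Effect": "Allow",
      "Action": ["ec2:StartInstances", "ec2:StopInstances"],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/Department": "${aws:PrincipalTag/Department}"
        }
      }
    }
  ]
}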
Q15.Which AWS service analyzes resource-based policies to identify resources shared with external entities?
✓ Correct: A. IAM Access Analyzer identifies resources shared with external principals.
Why A is correct: IAM Access Analyzer uses automated reasoning to analyze resource-based policies for S3 buckets, IAM roles, KMS keys, Lambda functions, SQS queues, and more. It generates findings when resources are accessible from outside your account or organization (the zone of trust). This helps you identify unintended public or cross-account access.
Why others are wrong:
B) AWS Config — AWS Config evaluates resource configurations against rules but does not perform policy analysis to identify external sharing.
C) CloudTrail — CloudTrail logs API calls but does not analyze policies to find external access.
D) Amazon Inspector — Inspector assesses EC2 instances and container images for vulnerabilities, not IAM policy analysis.
Q16.In AWS Organizations, which account is NOT affected by Service Control Policies (SCPs)?
✓ Correct: C. The management account is never affected by SCPs.
Why C is correct: SCPs do not restrict the management account (formerly called the master account). Even if an SCP is applied at the root level, the management account retains full access to all AWS services. This is by design, so the management account can always manage the organization. This is why it is critical to limit the use of the management account.
Why others are wrong:
A) Member accounts in root OU — SCPs applied to the root OU affect all member accounts, including those directly in the root.
B) Member accounts in child OU — All member accounts in child OUs are subject to SCPs inherited from parent OUs and those directly attached.
D) Recently invited accounts — Invited accounts are affected by SCPs as soon as they accept the invitation and the SCP is applied to their OU.
Q17.What is the key difference between an IAM role and a resource-based policy for cross-account access?
✓ Correct: B. Role assumption means giving up your original permissions; resource-based policies let you keep them.
Why B is correct: When a principal assumes a cross-account IAM role, they temporarily give up their original permissions and get only the permissions of the assumed role. With resource-based policies, the principal keeps their original permissions AND gains the permissions specified in the resource policy. This is an important distinction for use cases like copying between S3 buckets in different accounts.
Why others are wrong:
A) More secure — Neither approach is inherently more secure. The choice depends on the use case, not security level.
C) Resource-based policies do not support cross-account — Many AWS resources (S3, SQS, SNS, Lambda, KMS) support cross-account access through resource-based policies.
D) Permanent vs temporary credentials — Resource-based policies do not provide any credentials. They grant access based on the caller's existing identity.
Q18.A Cognito User Pool is configured for a web application. Which Lambda trigger executes AFTER a user successfully authenticates but BEFORE the token is generated?
✓ Correct: D. The pre-token generation trigger runs after authentication and before tokens are issued.
Why D is correct: The pre-token generation trigger is invoked after the user is authenticated but before the tokens (ID token, access token) are created. This allows you to customize the token claims — for example, adding, removing, or modifying claims in the ID token. This is useful for adding custom attributes or group information to JWTs.
Why others are wrong:
A) Pre-signup trigger — This runs before a user is registered, allowing you to validate or reject sign-up requests. It has nothing to do with token generation.
B) Custom message trigger — This customizes verification or confirmation messages (email/SMS). It does not affect token generation.
C) Post-authentication trigger — This runs after authentication for logging or analytics purposes but does not allow modifying the generated tokens.
Q19.Which AWS service enables you to share resources like VPC subnets, Transit Gateways, and License Manager configurations across AWS accounts?
✓ Correct: C. AWS RAM enables resource sharing across accounts and within an organization.
Why C is correct: AWS Resource Access Manager (RAM) allows you to share AWS resources with other AWS accounts or within your AWS Organization. Supported resources include VPC subnets, Transit Gateways, Route 53 Resolver rules, License Manager configurations, and more. This avoids the need to duplicate resources across accounts.
Why others are wrong:
A) AWS Organizations — Organizations provides account management and policy-based governance but does not directly share resources like subnets.
B) IAM Identity Center — Identity Center provides SSO access to accounts and applications, not resource sharing.
D) AWS Service Catalog — Service Catalog lets you create and manage approved IT products (CloudFormation templates), not share existing resources.
Q20.Which AWS Directory Service option establishes a trust relationship between your on-premises Active Directory and AWS?
✓ Correct: A. AWS Managed Microsoft AD supports trust relationships with on-premises AD.
Why A is correct: AWS Managed Microsoft AD runs a full Microsoft Active Directory in the AWS Cloud. It supports establishing a two-way forest trust with your on-premises AD, enabling users from either directory to access resources in the other. It also supports MFA, Group Policies, and other native AD features.
Why others are wrong:
B) AD Connector — AD Connector is a proxy that redirects directory requests to your on-premises AD. It does not create a trust relationship or run its own AD.
C) Simple AD — Simple AD is a Samba-based directory for basic AD features. It does not support trust relationships with on-premises AD.
D) Amazon Cloud Directory — Cloud Directory is a fully managed, non-AD-compatible directory for hierarchical data. It has no relationship with Microsoft AD.
Q21.An IAM policy has the following statement:
"Effect": "Allow", "NotAction": "iam:*", "Resource": "*". What does this statement do?
✓ Correct: B. NotAction with Allow allows everything except the specified actions, without denying them.
Why B is correct: This statement allows all actions on all resources EXCEPT IAM actions. However, it does not explicitly deny IAM actions. If another policy in the account grants IAM permissions, those could still apply. To truly block IAM, you would need a separate Deny statement (see the sketch below).
Why others are wrong:
A) Denies all IAM actions — There is no Deny effect here. NotAction with Allow simply excludes IAM from what is allowed, but it does not deny it.
C) Allows only IAM actions — This is the opposite of what NotAction does. A regular "Action": "iam:*" with Allow would do this.
D) Explicitly denies IAM — There is no explicit deny. The IAM actions are simply not covered by this Allow statement. Another policy could still grant them.
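A sketch combining the quoted statement with the separate Deny statement mentioned above (Sids are placeholders):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowEverythingExceptIAM",
      "Effect": "Allow",
      "NotAction": "iam:*",
      "Resource": "*"
    },
    {
      "Sid": "ExplicitlyDenyIAM",
      "Effect": "Deny",
      "Action": "iam:*",
      "Resource": "*"
    }
  ]
}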
Q22.Which TWO features are provided by Cognito User Pools? (SELECT TWO)
✓ Correct: A, E. Cognito User Pools provide adaptive authentication and a hosted UI.
Why A is correct: Cognito User Pools support adaptive authentication, which analyzes sign-in attempts for risk factors (new device, unusual location) and can automatically require MFA or block sign-in when risk is detected.
Why E is correct: Cognito User Pools provide a customizable hosted UI that handles sign-up, sign-in, and password reset flows. It supports OAuth 2.0 and can integrate with social identity providers (Google, Facebook, Apple) and SAML providers.
Why others are wrong:
B) Temporary AWS credentials — This is a feature of Cognito Identity Pools, not User Pools. User Pools issue JWTs, not AWS credentials.
C) IAM role assumption — Role assumption for federated users is handled by Cognito Identity Pools or STS, not User Pools.
D) Guest access — Unauthenticated (guest) access is a feature of Cognito Identity Pools, not User Pools.
Q23.AWS Control Tower uses guardrails to govern accounts. What are the two types of guardrails?
✓ Correct: D. Control Tower uses preventive and detective guardrails.
Why D is correct: Preventive guardrails use SCPs to prevent actions that violate policies (e.g., disallow creation of access keys for the root user). Detective guardrails use AWS Config rules to detect noncompliant resources (e.g., detect whether MFA is enabled for the root user). Together, they provide governance for multi-account environments.
Why others are wrong:
A) Identity and resource guardrails — This is not how Control Tower categorizes its guardrails.
B) Network and security guardrails — These are not the two types of Control Tower guardrails.
C) Mandatory and optional — While guardrails can be mandatory, strongly recommended, or elective, the two functional types are preventive and detective.
Q24.AWS IAM Identity Center (formerly AWS SSO) integrates with which service to manage permission sets across multiple AWS accounts?
✓ Correct: C. IAM Identity Center integrates with AWS Organizations for multi-account SSO.
Why C is correct: AWS IAM Identity Center must be enabled in the management account and integrates directly with AWS Organizations. This integration allows you to create permission sets and assign them to users/groups across all member accounts in the organization. Users get a single sign-on portal to access all their assigned accounts.
Why others are wrong:
A) CloudFormation — CloudFormation is an infrastructure-as-code service, not the service that Identity Center integrates with for multi-account management.
B) Amazon Cognito — Cognito is for application-level identity management, not for managing AWS account access across an organization.
D) AWS Directory Service — While Identity Center can use Directory Service as an identity source, the multi-account permission management comes from the Organizations integration.
Q25.What is the purpose of the aws:PrincipalOrgID condition key in a resource-based policy?
✓ Correct: A. aws:PrincipalOrgID restricts access to members of a specific organization.
Why A is correct: The aws:PrincipalOrgID condition key checks whether the calling principal belongs to the specified AWS Organization. This is extremely useful in resource-based policies (e.g., S3 bucket policies) to ensure that only accounts within your organization can access the resource, without having to list every account individually (see the sketch below).
Why others are wrong:
B) Identifies the OU — aws:PrincipalOrgID identifies the organization, not the specific OU. There is a separate aws:PrincipalOrgPaths key for OU-level checks.
C) AWS account ID — The account ID is checked using the Principal element or aws:SourceAccount condition, not aws:PrincipalOrgID.
D) Requires MFA — MFA enforcement uses aws:MultiFactorAuthPresent, not the organization ID condition key.
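A sketch of this pattern as an S3 bucket policy; the bucket name and organization ID are hypothetical placeholders:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAccessOutsideOrg",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:PrincipalOrgID": "o-exampleorgid"
        }
      }
    }
  ]
}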
Q26.Which STS API should an IAM user call to obtain temporary credentials that include MFA verification?
✓ Correct: B. GetSessionToken is used by IAM users to get MFA-verified temporary credentials.
Why B is correct: sts:GetSessionToken is the appropriate API call for an IAM user who wants to obtain temporary credentials that reflect MFA authentication. You pass the MFA device serial number and the current MFA code. API calls made with the returned temporary credentials have aws:MultiFactorAuthPresent set to true, enabling access to MFA-protected resources.
Why others are wrong:
A) AssumeRole — While AssumeRole can include MFA parameters, GetSessionToken is the primary API for IAM users to obtain MFA-verified session credentials without assuming a different role.
C) GetFederationToken — This creates credentials for federated users and does not support MFA parameters.
D) AssumeRoleWithWebIdentity — This is for web identity federation (e.g., Google, Facebook tokens), not for MFA verification of IAM users.
Q27.In a Cognito Identity Pool, what allows unauthenticated users to access AWS resources?
✓ Correct: A. Cognito Identity Pools support guest access with a dedicated unauthenticated IAM role.
Why A is correct: Cognito Identity Pools can be configured to allow unauthenticated (guest) access. When enabled, the Identity Pool assigns temporary AWS credentials scoped to a separate IAM role designated for unauthenticated users. This role should have very limited permissions, allowing guest users to access only specific resources.
Why others are wrong:
B) Cognito User Pool anonymous sign-in — User Pools do not have an anonymous sign-in feature. Guest access is a feature of Identity Pools.
C) Public access policy — There is no concept of a public access policy on an Identity Pool. Access is controlled through IAM roles.
D) Disabling authentication on the resource — Making resources publicly accessible is not the same as guest access through Cognito and would be a security risk.
Q28.Which type of policy in AWS Organizations controls access to resources based on who is accessing them, rather than controlling what actions principals can take?
✓ Correct: D. Resource Control Policies (RCPs) control who can access resources in your organization.
Why D is correct: Resource Control Policies (RCPs) are a type of authorization policy in AWS Organizations that set the maximum permissions on resources in your organization. While SCPs restrict what actions principals can perform, RCPs restrict who can access your resources. RCPs are applied to resources and can prevent external principals from accessing them.
Why others are wrong:
A) SCP — SCPs restrict the maximum permissions for principals (users and roles) in member accounts. They control what actions can be taken, not who can access resources.
B) Tag Policy — Tag policies enforce standardized tags across resources in the organization. They do not control access.
C) Backup Policy — Backup policies configure AWS Backup plans across the organization. They do not control resource access.
Q29.Which TWO statements about IAM Access Analyzer are correct? (SELECT TWO)
✓ Correct: B, D. Access Analyzer generates policies from activity and validates policy grammar.
Why B is correct: IAM Access Analyzer can analyze CloudTrail logs to determine which AWS services and actions were actually used, and then generate a least-privilege IAM policy based on that access activity. This helps you right-size permissions.
Why D is correct: Access Analyzer includes policy validation that checks IAM policies against IAM best practices, grammar errors, and security warnings. It provides actionable recommendations to fix policy issues before deployment.
Why others are wrong:
A) Automatically remediates — Access Analyzer generates findings and recommendations, but it does not automatically change or remediate policies. You must take action manually.
C) Only S3 policies — Access Analyzer supports multiple resource types including S3 buckets, IAM roles, KMS keys, Lambda functions, SQS queues, and Secrets Manager secrets.
E) Requires AWS Config — Access Analyzer works independently and does not require AWS Config to be enabled.
Q30.A company uses a custom identity broker for federation. What must the broker do to provide users with AWS console access?
✓ Correct: C. A custom identity broker obtains temporary credentials and constructs a console sign-in URL.
Why C is correct: With a custom identity broker, the broker authenticates the user against your corporate directory, then calls sts:GetFederationToken or sts:AssumeRole to get temporary credentials. It then constructs a sign-in URL using the AWS federation endpoint (signin.aws.amazon.com/federation) and redirects the user to the AWS console.
Why others are wrong:
A) AssumeRoleWithSAML — This API is for SAML 2.0 federation, not for a custom identity broker. A custom broker does not use SAML assertions.
B) Create IAM users — Creating IAM users for federated users defeats the purpose of federation and does not scale.
D) Cognito Identity Pools — A custom identity broker manages its own authentication flow and does not require Cognito Identity Pools.
Q31.In AWS Organizations, what is the effect of the default FullAWSAccess SCP that is attached to every OU?
✓ Correct: B. The default FullAWSAccess SCP allows all actions, meaning no restrictions by default.
Why B is correct: When you enable SCPs in AWS Organizations, every OU and account gets a default FullAWSAccess SCP that allows "Action": "*" on "Resource": "*" (see below). This means SCPs do not restrict anything until you attach additional deny SCPs or remove the FullAWSAccess policy and replace it with more restrictive allow SCPs. Remember, SCPs are filters — they do not grant permissions.
Why others are wrong:
A) Grants all permissions — SCPs do not grant permissions. They only set the maximum allowed permissions. The actual permissions come from IAM policies within each account.
C) Disables SCP restrictions — The FullAWSAccess SCP does not disable the SCP feature. Additional restrictive SCPs can still be applied alongside it.
D) Prevents additional SCPs — Multiple SCPs can be attached to the same OU. The FullAWSAccess SCP does not prevent others.
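For reference, the FullAWSAccess SCP body is simply an allow-all statement:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*"
    }
  ]
}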
Q32.A developer needs to use policy variables in an IAM policy to grant each IAM user access to their own folder in an S3 bucket. Which variable represents the IAM user's name?
✓ Correct: A. ${aws:username} is the policy variable for the IAM user's friendly name.
Why A is correct: The ${aws:username} policy variable resolves to the friendly name of the IAM user making the request. You can use it in the Resource element of an S3 policy, like arn:aws:s3:::my-bucket/${aws:username}/*, so each user can only access their own folder (see the sketch below).
Why others are wrong:
B) ${aws:userid} — This resolves to the unique ID of the principal (e.g., AIDAEXAMPLE for users or AROAEXAMPLE:session-name for roles), not the friendly name.
C) ${aws:PrincipalTag/username} — This references a tag called "username" on the principal, which may not exist. It does not automatically resolve to the IAM username.
D) ${iam:username} — This is not a valid IAM policy variable. The correct prefix is aws:.
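A sketch of the per-user folder policy, reusing the my-bucket example from the explanation. The ListBucket statement with an s3:prefix condition is a common companion and is included here as an assumption:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListOwnFolder",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-bucket",
      "Condition": {
        "StringLike": {
          "s3:prefix": ["${aws:username}/*"]
        }
      }
    },
    {
      "Sid": "ReadWriteOwnFolder",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::my-bucket/${aws:username}/*"
    }
  ]
}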
Q33.AD Connector in AWS Directory Services is best described as:
✓ Correct: D. AD Connector is a proxy that redirects requests to your on-premises AD.
Why D is correct: AD Connector is a directory gateway that forwards authentication requests to your existing on-premises Microsoft Active Directory without caching any information in the cloud. It does not store any directory data in AWS and requires a VPN or Direct Connect connection to your on-premises network.
Why others are wrong:
A) Full Microsoft AD — This describes AWS Managed Microsoft AD, which runs a complete AD in the cloud.
B) Samba-compatible directory — This describes Simple AD, which is a standalone Samba-based directory.
C) Replicates on-premises AD — AD Connector does not replicate or store any directory data. It only proxies requests.
Q34.Which TWO of the following are valid identity sources for AWS IAM Identity Center? (SELECT TWO)
✓ Correct: A, C. Identity Center supports its built-in store, Active Directory, and external SAML IdPs.
Why A is correct: IAM Identity Center has a built-in identity store where you can create and manage users and groups directly. This is the simplest option for organizations that do not have an existing identity provider.
Why C is correct: Identity Center supports connecting to an external SAML 2.0 identity provider such as Okta, Azure AD, or OneLogin. This allows organizations to use their existing corporate IdP for AWS SSO access.
Why others are wrong:
B) Cognito User Pool — Cognito User Pools are for application user management. They are not supported as an identity source for IAM Identity Center.
D) DynamoDB user table — DynamoDB is a database, not an identity provider. It cannot serve as an identity source for Identity Center.
E) Lambda custom authenticator — Identity Center does not support Lambda functions as an identity source.
Q35.An organization wants to enforce that all EC2 instances must have a CostCenter tag with specific allowed values. Which AWS Organizations policy type should they use?
✓ Correct: C. Tag policies standardize tag keys and allowed values across the organization.
Why C is correct: Tag policies in AWS Organizations allow you to define rules for tag keys and their allowed values across member accounts. You can specify that the CostCenter tag must use specific values (e.g., "Marketing", "Engineering") and apply the policy to all or specific resource types like EC2 instances (see the sketch below). Tag policies help maintain consistent tagging for cost allocation and governance.
Why others are wrong:
A) SCP — SCPs restrict which API actions principals can perform. While you could use an SCP to deny resource creation without tags, SCPs do not enforce specific tag values in the way tag policies do.
B) RCP — Resource Control Policies restrict who can access resources. They are not designed for tag enforcement.
D) AI services opt-out policy — This policy controls whether AWS AI services can store and use your content for service improvement. It has nothing to do with tagging.
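A sketch of such a tag policy, using the example values from the explanation; the enforced_for resource type is an assumption for illustration:
{
  "tags": {
    "costcenter": {
      "tag_key": {
        "@@assign": "CostCenter"
      },
      "tag_value": {
        "@@assign": ["Marketing", "Engineering"]
      },
      "enforced_for": {
        "@@assign": ["ec2:instance"]
      }
    }
  }
}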
Q36.A user assumes an IAM role and receives temporary credentials. The user then tries to access an S3 bucket. Which principal does the S3 bucket policy evaluate?
✓ Correct: B. When using assumed role credentials, the principal is the role, not the original user.
Why B is correct: When a user assumes a role, they give up their original identity and take on the role's identity. The S3 bucket policy evaluates the principal as the IAM role ARN (specifically the role session ARN). This means the bucket policy must allow the role, not the original user. This is a key distinction between role-based and resource-based policy cross-account access.
Why others are wrong:
A) Original IAM user — When assuming a role, the original user's identity is not used for policy evaluation. The role's identity takes over.
C) Both principals — Only the assumed role's identity is used. The original user's identity is not evaluated by the bucket policy.
D) Root user ARN — The root user is only evaluated when the root user itself makes the request.
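For illustration, a bucket policy that trusts the assumed role names the role ARN as the principal, which covers all sessions of that role. The account ID, role name, and bucket below are hypothetical:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowRoleSessions",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:role/CrossAccountReader"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}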
Q37.Which feature of AWS Control Tower automates the creation of new AWS accounts with pre-configured governance?
✓ Correct: A. Account Factory automates account provisioning with governance controls.
Why A is correct: Account Factory in AWS Control Tower is an automated account provisioning tool. It uses AWS Service Catalog to create new AWS accounts with pre-configured settings, including VPC configurations, guardrails, and organizational policies. This ensures every new account meets your governance and compliance requirements from day one.
Why others are wrong:
B) Landing Zone — The Landing Zone is the overall multi-account environment that Control Tower sets up, including the management, log archive, and audit accounts. It is not specifically the account creation feature.
C) Guardrail Manager — This is not an actual Control Tower feature name. Guardrails are managed through the Control Tower console, not a separate component.
D) Organizational Blueprint — This is not an actual Control Tower feature.
Q38.A company uses Cognito Identity Pools and wants to restrict each authenticated user to read only their own items in a DynamoDB table. The table's partition key is userId. Which approach should they use?
✓ Correct: D. Use Cognito identity policy variables to restrict DynamoDB access per user.
Why D is correct: In the IAM role policy attached to the Cognito Identity Pool, you can use the ${cognito-identity.amazonaws.com:sub} policy variable as a DynamoDB LeadingKeys condition (see the sketch below). This restricts each user to only access items where the partition key matches their Cognito identity ID. This is called fine-grained access control for DynamoDB.
Why others are wrong:
A) Separate tables — Creating a table per user is not scalable and adds unnecessary management overhead. Policy variables solve this efficiently.
B) Lambda authorizer — Lambda authorizers are used with API Gateway, not for direct DynamoDB access control. Even with API Gateway, IAM policy conditions are the correct approach.
C) Encryption at rest — Encryption protects data at rest but does not control which items a user can read.
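A sketch of the fine-grained access control policy; the region, account ID, and table name are hypothetical, and the pattern assumes the userId partition key stores the Cognito identity ID:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOwnItemsOnly",
      "Effect": "Allow",
      "Action": ["dynamodb:GetItem", "dynamodb:Query"],
      "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/UserData",
      "Condition": {
        "ForAllValues:StringEquals": {
          "dynamodb:LeadingKeys": ["${cognito-identity.amazonaws.com:sub}"]
        }
      }
    }
  ]
}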
Q39.What is a permission set in AWS IAM Identity Center?
✓ Correct: C. A permission set is a collection of policies that defines the level of access to an AWS account.
Why C is correct: In AWS IAM Identity Center, a permission set is a template that defines the permissions users and groups have when they access an AWS account. It can include AWS managed policies, custom policies, and inline policies. When assigned to a user/group and an account, Identity Center automatically creates an IAM role in that account with the specified permissions.
Why others are wrong:
A) Group of IAM users — This describes an IAM group, not a permission set. Permission sets define permissions, not groups of users.
B) Policy attached to an account — Permission sets are managed in Identity Center, not directly attached to accounts as IAM policies.
D) Role trust policy — A trust policy is part of an IAM role that specifies who can assume it. Permission sets create roles, but they are not trust policies themselves.
Q40.A company wants to use SAML 2.0 federation to allow employees to access the AWS Management Console. Which flow correctly describes the process?
✓ Correct: B. The user authenticates with the IdP, receives a SAML assertion, and posts it to AWS.
Why B is correct: In SAML 2.0 federation for console access, the user first authenticates with the corporate Identity Provider (IdP). The IdP generates a SAML assertion containing the user's attributes and the IAM role to assume. The user's browser posts this assertion to the AWS sign-in endpoint, which calls sts:AssumeRoleWithSAML and redirects the user to the AWS Console.
Why others are wrong:
A) User signs in to AWS first — In IdP-initiated SSO (the standard flow), the user authenticates with the IdP first, not with AWS. AWS does not send SAML requests in this flow.
C) IdP creates IAM user — The IdP never creates IAM users. Federation eliminates the need for IAM users by using temporary role-based access.
D) Permanent credentials — STS always returns temporary credentials, never permanent ones.
Q41.An SCP in AWS Organizations has the following structure:
"Effect": "Deny", "Action": "ec2:RunInstances", "Resource": "*", "Condition": {"StringNotEquals": {"ec2:InstanceType": ["t3.micro", "t3.small"]}}. What does this SCP do?
✓ Correct: A. The Deny applies when the instance type is NOT t3.micro or t3.small.
Why A is correct: The StringNotEquals condition means the Deny effect triggers when the instance type does NOT match t3.micro or t3.small. So if someone tries to launch a t3.large, the condition is true and the action is denied. Only t3.micro and t3.small are allowed because the Deny does not trigger for those types (the full document is assembled below).
Why others are wrong:
B) Allows launching — SCPs do not grant permissions. This SCP denies non-matching instance types, but actual permissions must still be granted by IAM policies.
C) Denies t3.micro and t3.small — The StringNotEquals condition means the opposite — the deny applies when the type is NOT one of these values.
D) Allows all except those types — This misreads the condition. The deny blocks everything except t3.micro and t3.small.
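The fragment from the question, assembled into a complete SCP document (the Sid is added for readability):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyNonApprovedInstanceTypes",
      "Effect": "Deny",
      "Action": "ec2:RunInstances",
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "ec2:InstanceType": ["t3.micro", "t3.small"]
        }
      }
    }
  ]
}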
Q42.Which TWO types of principals can be specified in the Principal element of an IAM role trust policy? (SELECT TWO)
✓ Correct: B, E. Trust policies can specify AWS services and account IDs as principals.
Why B is correct: AWS services like ec2.amazonaws.com, lambda.amazonaws.com, or ecs-tasks.amazonaws.com can be specified as principals in a trust policy. This allows the service to assume the role on behalf of your resources.
Why E is correct: You can specify an AWS account ID (e.g., "AWS": "arn:aws:iam::123456789012:root") to allow any principal in that account to assume the role, provided they also have sts:AssumeRole permission in their own account. A trust policy naming both principal types is sketched below.
Why others are wrong:
A) IAM groups — IAM groups cannot be specified as principals in any policy, including trust policies. Only users, roles, accounts, and services can be principals.
C) IAM policies — Policies are not principals. They define permissions but cannot assume roles.
D) Security groups — Security groups are network-level constructs for EC2. They are not IAM principals.
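A sketch of a trust policy naming both principal types, reusing the identifiers from the explanation (Sids are placeholders):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "TrustLambdaService",
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    },
    {
      "Sid": "TrustAccount",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:root"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}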
Q43.A company wants to use Cognito User Pools to require MFA only when a sign-in attempt appears risky (e.g., new device or unusual location). Which feature should they enable?
✓ Correct: C. Adaptive authentication automatically applies risk-based MFA challenges.
Why C is correct: Adaptive authentication in Cognito User Pools uses machine learning to assess the risk of each sign-in attempt. Based on factors like device familiarity, IP reputation, and location, it assigns a risk level (low, medium, high) and can automatically require MFA only for medium or high-risk attempts. This balances security with user experience.
Why others are wrong:
A) Mandatory MFA — This requires MFA for every sign-in, regardless of risk level. It does not provide risk-based conditional MFA.
B) Custom authentication Lambda — While you could build a custom solution, adaptive authentication is the built-in feature designed for this exact purpose.
D) Device tracking without MFA — Device tracking alone only remembers devices. Without adaptive authentication enabled, it does not trigger risk-based MFA.
Q44.Which AWS service or feature should you use to implement ABAC (Attribute-Based Access Control) in AWS IAM Identity Center?
✓ Correct: B. Identity Center supports ABAC by passing user attributes as session tags.
Why B is correct: AWS IAM Identity Center supports ABAC by mapping user attributes from the identity source (such as department, cost center, or project) to session tags. These session tags are then available as aws:PrincipalTag conditions in IAM policies, enabling you to write a single policy that grants access based on matching tags between the user and the resource.
Why others are wrong:
A) SCPs with tag conditions — SCPs restrict maximum permissions but are not the mechanism for implementing ABAC in Identity Center.
C) Permissions boundaries — Permissions boundaries limit maximum permissions for a role but are not related to ABAC attribute passing.
D) Resource-based policies — While resource-based policies can use tag conditions, the question asks about the Identity Center mechanism for ABAC, which is session tag propagation.
Q45.In a NotAction with Deny statement, what is the effect?
✓ Correct: A. NotAction with Deny denies everything except the specified actions.
Why A is correct: When you combine NotAction with a Deny effect, the policy explicitly denies all actions EXCEPT those listed. This is commonly used to restrict users to specific services only. For example, denying everything except IAM and S3 actions ensures users can only use those two services (see the sketch below). The listed actions are the only ones NOT denied.
Why others are wrong:
B) Allows all except listed — This describes NotAction with Allow, not with Deny.
C) Denies only listed actions — This describes a regular Action with Deny. NotAction inverts the selection.
D) Same as NotAction with Allow — These have very different effects. Deny explicitly blocks; Allow simply does not grant.
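A sketch of the deny-everything-except pattern from the example above (the Sid is a placeholder):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAllExceptIAMAndS3",
      "Effect": "Deny",
      "NotAction": ["iam:*", "s3:*"],
      "Resource": "*"
    }
  ]
}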
Q46.An application running on an EC2 instance needs to call the STS API in the ap-southeast-1 region. Where should the STS endpoint be configured?
✓ Correct: D. Regional STS endpoints must be activated and the SDK configured accordingly.
Why D is correct: By default, STS uses the global endpoint in us-east-1. To use a regional endpoint like sts.ap-southeast-1.amazonaws.com, you must first activate regional STS endpoints in the IAM console and then configure your SDK or CLI to use the regional endpoint. Regional endpoints reduce latency and improve resilience.
Why others are wrong:
A) Only us-east-1 — STS has regional endpoints in most AWS regions, but they must be activated. The global endpoint is in us-east-1 by default.
B) Automatically selected — The SDK does not automatically select the regional STS endpoint. You must configure it explicitly.
C) Management account only — STS regional endpoints are activated per account in the IAM console, not only in the management account.
Q47.Which TWO statements about SCPs (Service Control Policies) are correct? (SELECT TWO)
✓ Correct: A, D. SCPs do not affect service-linked roles and they apply to all principals including root in member accounts.
Why A is correct: SCPs do not restrict service-linked roles. Service-linked roles are used by AWS services to perform actions on your behalf, and SCPs are designed not to interfere with their operation.
Why D is correct: SCPs apply to all IAM principals in member accounts, including the root user of those accounts. This is a powerful governance mechanism — even the root user of a member account cannot perform actions denied by an SCP.
Why others are wrong:
B) SCPs grant permissions — SCPs never grant permissions. They only define the maximum permissions that IAM policies in member accounts can grant.
C) Apply to management account — SCPs do NOT affect the management account, even if attached to the root OU.
E) Only at OU level — SCPs can be attached to both OUs and individual member accounts.
Q48.A web application uses sts:AssumeRoleWithWebIdentity to allow Google-authenticated users to access AWS resources. Which AWS service is the RECOMMENDED replacement for this direct federation approach?
✓ Correct: C. Cognito Identity Pools is the recommended replacement for direct web identity federation.
Why C is correct: AWS recommends using Amazon Cognito Identity Pools instead of calling sts:AssumeRoleWithWebIdentity directly. Cognito provides additional features like anonymous access, data synchronization, and MFA. It also handles the complexity of token exchange and credential management, making it easier to implement web identity federation securely.
Why others are wrong:
A) IAM Identity Center — Identity Center is designed for workforce users accessing AWS accounts, not for application users authenticated with Google or social providers.
B) SAML 2.0 federation — SAML is for enterprise federation with corporate IdPs, not for social identity providers like Google.
D) AWS Directory Service — Directory Service is for Microsoft Active Directory integration, not web identity federation.
Q49.In IAM policy evaluation logic, what happens when there is both an explicit Allow and an explicit Deny for the same action?
✓ Correct: B. An explicit Deny always wins over any Allow in IAM policy evaluation.
Why B is correct: In IAM policy evaluation, the order of precedence is: (1) explicit Deny, (2) explicit Allow, (3) implicit Deny (default). If any policy contains an explicit Deny for an action, that action is denied regardless of any Allow statements in other policies. This is a fundamental IAM concept.
Why others are wrong:
A) Allow takes precedence — Allow never overrides an explicit Deny. The evaluation order does not matter; all policies are evaluated together.
C) Most recent policy — Policy creation time has no effect on evaluation. All applicable policies are evaluated simultaneously.
D) Allow is default — The default is implicit Deny, not Allow. If there is no explicit Allow, the action is denied.
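You can observe the explicit-Deny precedence directly with the IAM policy simulator — a minimal sketch, assuming credentials that allow iam:SimulateCustomPolicy:

    import json
    import boto3

    allow = {"Version": "2012-10-17", "Statement": [
        {"Effect": "Allow", "Action": "s3:GetObject", "Resource": "*"}]}
    deny = {"Version": "2012-10-17", "Statement": [
        {"Effect": "Deny", "Action": "s3:GetObject", "Resource": "*"}]}

    iam = boto3.client("iam")
    result = iam.simulate_custom_policy(
        PolicyInputList=[json.dumps(allow), json.dumps(deny)],
        ActionNames=["s3:GetObject"],
    )
    # Prints "explicitDeny": the Deny wins even though an Allow is present.
    print(result["EvaluationResults"][0]["EvalDecision"])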
Q50.Which AWS Organizations policy type allows you to opt out of having your content used by AWS AI services for service improvement?
✓ Correct: A. AI services opt-out policies control whether AWS AI services can use your content.
Why A is correct: AI services opt-out policies in AWS Organizations allow you to opt out of having your content used by AWS AI services for developing and improving their AI/ML models. This can be applied across all member accounts in the organization, ensuring consistent data privacy controls. Services affected include Amazon Lex, Amazon Comprehend, and others.
Why others are wrong:
B) SCP — SCPs restrict API actions but cannot control how AWS internally uses your data for service improvement.
C) Data protection policy — This is not a specific policy type in AWS Organizations for AI opt-out purposes.
D) Privacy Control Policy — This is not an actual AWS Organizations policy type.
Q51.Which feature of IAM Access Analyzer helps you define a zone of trust?
✓ Correct: D. When creating an Access Analyzer, you define the zone of trust as your account or organization.
Why D is correct: When you create an IAM Access Analyzer, you specify a zone of trust, which is either your AWS account or your AWS Organization. The analyzer then generates findings for any resource that is accessible from outside this zone of trust. For example, an organization-level analyzer flags resources shared with any principal outside the organization.
Why others are wrong:
A) VPC endpoint policy — VPC endpoint policies control access through VPC endpoints, not related to Access Analyzer's zone of trust.
B) Permissions boundaries — Permissions boundaries limit individual user/role permissions, not the Access Analyzer zone of trust.
C) CloudTrail — CloudTrail provides API logging but is not used to define the zone of trust for Access Analyzer.
Q52.How does the SCP inheritance model work in AWS Organizations when an account sits beneath multiple levels of nested OUs?
✓ Correct: C. The effective SCP permissions are the intersection of all inherited SCPs.
Why C is correct: SCPs follow an inheritance model where the effective permissions for an account are the intersection of all SCPs applied from the organization root down to the account's OU and the account itself. An action must be allowed at every level of the hierarchy. If a parent OU denies an action, no child OU or account can allow it.
Why others are wrong:
A) Only directly attached — SCPs are inherited from parent OUs. Both directly attached and inherited SCPs apply.
B) OR logic — SCPs use AND (intersection) logic, not OR. An action must be permitted at every level in the hierarchy.
D) Child overrides parent — Child SCPs cannot override parent SCPs. If a parent denies something, it is denied regardless of child SCPs.
Q53.A security team wants to allow an administrator to create IAM users and roles, but prevent the administrator from granting permissions they do not have themselves. Which IAM feature achieves this?
✓ Correct: B. Permissions boundaries prevent privilege escalation when delegating IAM administration.
Why B is correct: Permissions boundaries are designed for this exact use case. You can allow an administrator to create IAM users and roles but require them to attach a permissions boundary to every entity they create. The boundary limits the maximum permissions of the new entity, preventing the administrator from creating users or roles that have more permissions than intended. This prevents privilege escalation.
Why others are wrong:
A) SCP — SCPs restrict what actions can be performed in an account but do not control which permissions an administrator can grant to other IAM entities.
C) IAM Access Analyzer — Access Analyzer detects external access but does not prevent privilege escalation during IAM administration.
D) Policy condition keys — While you could use a condition key like iam:PermissionsBoundary to require that a boundary is attached, the feature that actually limits the delegated permissions is permissions boundaries.
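A minimal sketch of the pattern — the boundary policy ARN and role name are placeholders; in practice the delegated administrator's own policy would also use the iam:PermissionsBoundary condition key to require this boundary on every CreateRole/CreateUser call:

    import json
    import boto3

    iam = boto3.client("iam")
    boundary_arn = "arn:aws:iam::111122223333:policy/DeveloperBoundary"  # placeholder

    # The new role can never exceed the permissions allowed by the boundary,
    # regardless of what identity policies are attached to it later.
    iam.create_role(
        RoleName="app-role",
        AssumeRolePolicyDocument=json.dumps({
            "Version": "2012-10-17",
            "Statement": [{"Effect": "Allow",
                           "Principal": {"Service": "ec2.amazonaws.com"},
                           "Action": "sts:AssumeRole"}],
        }),
        PermissionsBoundary=boundary_arn,
    )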
Q54.Which TWO resources can be shared using AWS Resource Access Manager (RAM)? (SELECT TWO)
✓ Correct: C, D. RAM can share VPC subnets and Transit Gateways across accounts.
Why C is correct: VPC subnets are one of the most commonly shared resources through RAM. This allows multiple accounts to launch resources (like EC2 instances) into a shared subnet, enabling centralized network management.
Why D is correct: Transit Gateways can be shared via RAM, allowing multiple accounts to attach their VPCs to a centrally managed Transit Gateway. This simplifies multi-account network architecture.
Why others are wrong:
A) IAM roles — IAM roles cannot be shared through RAM. Cross-account role access is managed through trust policies and STS.
B) S3 buckets — S3 cross-account access is managed through bucket policies (resource-based policies), not RAM.
E) CloudFormation stacks — CloudFormation stacks are not shared through RAM. Cross-account deployments use StackSets.
Q55.Which of the following is a valid use case for sts:GetFederationToken?
✓ Correct: A. GetFederationToken is used by IAM users to create temporary credentials for federated users.
Why A is correct: sts:GetFederationToken is called by an IAM user (not a role) to create temporary credentials for a federated user. The caller can specify a policy that scopes down the permissions. This is commonly used by proxy applications or server-based systems that need to distribute limited credentials to client applications or users.
Why others are wrong:
B) EC2 assuming a role — EC2 cross-account role assumption uses sts:AssumeRole, not GetFederationToken.
C) SAML authentication — SAML federation uses sts:AssumeRoleWithSAML, not GetFederationToken.
D) Permanent access keys — STS only generates temporary credentials. No STS API creates permanent access keys.
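A minimal sketch of the proxy-application pattern — the federated user name, bucket, and duration are placeholders, and the call must be made with IAM user (long-term) credentials:

    import json
    import boto3

    sts = boto3.client("sts")

    # Session policy that scopes the federated user down to read-only access.
    scope_down = {"Version": "2012-10-17", "Statement": [
        {"Effect": "Allow", "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::example-bucket/*"}]}

    resp = sts.get_federation_token(
        Name="report-viewer",           # appears in CloudTrail for auditing
        Policy=json.dumps(scope_down),
        DurationSeconds=3600,
    )
    creds = resp["Credentials"]  # temporary AccessKeyId / SecretAccessKey / SessionToken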
Q56.When using IAM Identity Center, what happens when a user clicks on an AWS account in the SSO portal?
✓ Correct: D. Identity Center assumes a role in the target account to provide access.
Why D is correct: When a user selects an account and permission set in the IAM Identity Center portal, Identity Center assumes the corresponding IAM role that was created in the target account (based on the permission set). The user receives temporary credentials scoped to that role, which give them the permissions defined in the permission set for that account.
Why others are wrong:
A) IAM user created — Identity Center uses roles, not IAM users. No IAM users are created in target accounts.
B) Credentials forwarded — Corporate credentials are never forwarded to AWS accounts. Authentication happens at the Identity Center level.
C) Permanent access keys — Identity Center always provides temporary credentials through role assumption, never permanent keys.
Q57.Simple AD in AWS Directory Services is BEST suited for which scenario?
✓ Correct: C. Simple AD is a low-cost standalone directory for basic AD features.
Why C is correct: Simple AD is a Samba-based, Microsoft AD-compatible directory that provides basic features like user accounts, group memberships, and Kerberos-based SSO. It is the least expensive option and is suitable for small organizations that need basic directory services without on-premises AD integration or advanced features.
Why others are wrong:
A) Trust relationship — Simple AD does not support trust relationships with on-premises AD. AWS Managed Microsoft AD supports this.
B) Proxying to on-premises AD — This is the function of AD Connector, not Simple AD.
D) Advanced Group Policy — Simple AD has limited Group Policy support and no multi-region replication. AWS Managed Microsoft AD supports these features.
Q58.Which TWO Cognito User Pool Lambda triggers can modify the authentication flow BEFORE the user is authenticated? (SELECT TWO)
✓ Correct: B, E. Pre-authentication and pre-sign-up triggers execute before user authentication.
Why B is correct: The pre-authentication trigger executes just before Cognito authenticates a user. You can use it to implement custom validation logic, check external systems, or deny sign-in attempts based on custom criteria.
Why E is correct: The pre-sign-up trigger executes before a new user is registered. You can use it to validate registration data, auto-confirm users, or reject sign-up attempts based on custom logic. This runs before the user exists in the pool.
Why others are wrong:
A) Post-confirmation — This trigger runs AFTER a user confirms their account, not before authentication.
C) Post-authentication — This trigger runs AFTER successful authentication, for logging or external integration.
D) Pre-token generation — This runs after authentication but before token issuance. The user is already authenticated at this point.
Q59.A company wants to share AWS resources within their organization WITHOUT requiring accounts to accept an invitation. How can they achieve this with RAM?
✓ Correct: B. Enabling RAM sharing with Organizations removes the invitation requirement.
Why B is correct: When you enable RAM sharing with AWS Organizations in the organization's management account, resources shared within the organization are automatically available to member accounts without requiring them to accept an invitation. This simplifies resource sharing and makes it seamless across the organization.
Why others are wrong:
A) IAM roles instead — IAM roles provide cross-account access differently than RAM and cannot share resources like subnets or Transit Gateways.
C) Resource-based policy — While resource-based policies can grant cross-account access for some resources, RAM is the service for sharing subnets, Transit Gateways, and other resources not supported by resource-based policies.
D) Always requires invitation — This is false. When RAM is integrated with Organizations, invitations are not required for sharing within the organization.
Q60.Which TWO statements about the aws:PrincipalTag condition key are correct? (SELECT TWO)
✓ Correct: A, C. aws:PrincipalTag checks the calling principal's tags and is key to ABAC.
Why A is correct: The aws:PrincipalTag condition key evaluates the tags attached to the IAM principal (user, role, or federated session) making the API request. For example, aws:PrincipalTag/Department would check the value of the "Department" tag on the caller.
Why C is correct: aws:PrincipalTag is a fundamental building block of ABAC in AWS. A common pattern is to compare aws:PrincipalTag/Project against aws:ResourceTag/Project to ensure users can only access resources that belong to their project.
Why others are wrong:
B) Only identity-based policies — aws:PrincipalTag can be used in both identity-based and resource-based policies.
D) Checks resource tags — aws:PrincipalTag checks the principal's tags. aws:ResourceTag checks the resource's tags.
E) Only IAM users — aws:PrincipalTag works with IAM users, roles, and session tags from federated sessions.
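A sketch of the classic ABAC pattern described above — the action is illustrative; the condition compares the resource's Project tag against the caller's Project tag via a policy variable:

    # Identity-based ABAC policy: the caller may only start EC2 instances
    # whose Project tag matches the Project tag on the calling principal.
    abac_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "ec2:StartInstances",
            "Resource": "*",
            "Condition": {"StringEquals": {
                "aws:ResourceTag/Project": "${aws:PrincipalTag/Project}"}},
        }],
    }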
Domain 5 — Data Protection (60 Questions)
Q1.A security engineer needs to encrypt a 4 KB secret using AWS KMS. Which API call should they use to directly encrypt this small piece of data?
✓ Correct: B. The KMS Encrypt API is used to directly encrypt small amounts of data (up to 4 KB).
Why B is correct: The Encrypt API call sends data directly to KMS, which encrypts it using the specified CMK and returns the ciphertext. This is designed for small data payloads up to 4 KB in size.
Why others are wrong:
A) GenerateDataKey — This generates a data encryption key (DEK) for envelope encryption of larger data sets, not for directly encrypting small secrets.
C) GenerateDataKeyWithoutPlaintext — This generates an encrypted DEK without returning the plaintext version. It is used in deferred encryption scenarios, not direct encryption.
D) ReEncrypt — This decrypts ciphertext and re-encrypts it under a different CMK without exposing plaintext. It does not perform initial encryption.
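A minimal boto3 sketch, assuming a key alias of alias/app-secrets:

    import boto3

    kms = boto3.client("kms")

    # Direct encryption is limited to 4 KB of plaintext.
    resp = kms.encrypt(KeyId="alias/app-secrets", Plaintext=b"db-password-123")
    ciphertext = resp["CiphertextBlob"]

    # For symmetric keys, Decrypt needs no KeyId; KMS derives it from the ciphertext.
    plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]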
Q2.A company wants to encrypt large files stored in S3 using KMS. The security team wants to minimize the number of API calls to KMS. Which encryption approach should they use?
✓ Correct: C. Envelope encryption uses GenerateDataKey to get a data key, encrypts data locally, and only calls KMS once per key generation.
Why C is correct: With envelope encryption, you call GenerateDataKey once to receive a plaintext data key and an encrypted copy. You encrypt the large file locally using the plaintext key, then store the encrypted data key alongside the ciphertext. This minimizes KMS API calls.
Why others are wrong:
A) Encrypt API for each file — The Encrypt API only supports data up to 4 KB and would require one API call per file, not reducing calls.
B) Hardcoded key — This is a severe security anti-pattern and does not use KMS at all.
D) ReEncrypt API — ReEncrypt is for changing the CMK on already-encrypted data, not for initial encryption of files.
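A sketch of the envelope-encryption flow — the key alias and file name are placeholders, and the local encryption here uses the third-party cryptography package rather than any particular AWS SDK helper:

    import os
    import boto3
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    kms = boto3.client("kms")

    # One KMS call returns a plaintext data key plus its KMS-encrypted copy.
    dk = kms.generate_data_key(KeyId="alias/app-data", KeySpec="AES_256")

    # Encrypt the large file locally; no further KMS calls are needed.
    nonce = os.urandom(12)
    with open("big-file.bin", "rb") as f:
        ciphertext = AESGCM(dk["Plaintext"]).encrypt(nonce, f.read(), None)

    # Store ciphertext, nonce, and dk["CiphertextBlob"] together;
    # discard dk["Plaintext"] from memory as soon as possible.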
Q3.Which type of KMS key is fully managed by AWS, used by AWS services on your behalf, and cannot be viewed, rotated, or managed by the customer?
✓ Correct: A. AWS owned keys are managed entirely by AWS and are not visible in your account.
Why A is correct: AWS owned keys are a collection of CMKs that an AWS service owns and manages for use across multiple AWS accounts. You cannot view, use, track, or audit them. They are free and require no management from customers.
Why others are wrong:
B) AWS managed keys — These are visible in your account (prefixed with aws/), automatically rotated every year, and you can view their policies, though you cannot manage them directly.
C) Customer managed keys — These are created, owned, and managed by the customer with full control over key policies, rotation, and deletion.
D) Imported key material keys — These are customer managed keys where the key material is imported from outside AWS, giving the customer even more control.
Q4.An organization requires that the encryption key material be generated outside of AWS and imported into KMS. After import, they want the ability to set an expiration date on the key material. Which key material origin should they choose?
✓ Correct: D. The EXTERNAL origin means the key material is imported from outside AWS, and it supports setting an expiration date.
Why D is correct: When you create a KMS key with EXTERNAL origin, you import your own key material. This allows you to set an optional expiration date, after which AWS deletes the key material. You are responsible for the security and durability of the key material outside of AWS.
Why others are wrong:
A) AWS_KMS — This is the default origin where AWS generates and manages the key material within KMS. You cannot set expiration dates on the key material.
B) AWS_CLOUDHSM — This origin stores key material in a CloudHSM cluster you control, but the material is generated within the HSM, not imported externally.
C) EXTERNAL_KEY_STORE — This is for keys managed in an external key manager outside AWS (XKS), not for importing key material into KMS.
Q5.A company has enabled automatic key rotation for a customer managed KMS key. How often does AWS automatically rotate the key material, and what happens to the old key material?
✓ Correct: B. Automatic rotation happens every year by default (now configurable) and old key material is retained so previously encrypted data can still be decrypted.
Why B is correct: When automatic key rotation is enabled, KMS generates new cryptographic material every year (the rotation period is now configurable from 90 days to 2560 days). The key ID and ARN remain the same. KMS retains all previous versions of the key material to decrypt data that was encrypted with older versions.
Why others are wrong:
A) 90 days, deleted — While 90 days is now the minimum configurable rotation period, old key material is never deleted—it is preserved for decryption of older ciphertext.
C) 3 years, archived — The default period is one year, and old key material stays within KMS, not archived to S3.
D) 6 months, manual delete — The rotation is fully automatic and old key material is managed by KMS without manual intervention.
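A minimal sketch of enabling rotation with a custom period — the key ID is a placeholder, and RotationPeriodInDays is the newer configurable-rotation parameter:

    import boto3

    kms = boto3.client("kms")

    # Default period is 365 days; configurable between 90 and 2560 days.
    kms.enable_key_rotation(
        KeyId="1234abcd-12ab-34cd-56ef-1234567890ab",
        RotationPeriodInDays=180,
    )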
Q6.A developer needs to perform manual key rotation for a KMS key with imported key material. What is the correct approach?
✓ Correct: C. Manual rotation requires creating a new key and using a KMS alias to transparently point applications to the new key.
Why C is correct: For keys with imported key material, automatic rotation is not supported. You must create a new KMS key, import new key material, and then update the KMS alias to point to the new key. Applications referencing the alias will automatically use the new key without code changes. The old key should be kept for decrypting old data.
Why others are wrong:
A) Enable automatic rotation — Automatic rotation is not supported for KMS keys with imported key material (EXTERNAL origin).
B) RotateKeyMaterial API — There is no such API in AWS KMS. Rotation of imported key material must be done manually.
D) Delete and reimport — Deleting the key would make all data encrypted with it permanently unrecoverable. The old key must be retained.
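A minimal sketch of the alias flip, assuming an existing alias alias/app-data and a placeholder new key ID:

    import boto3

    kms = boto3.client("kms")

    # Repoint the alias; applications that reference alias/app-data pick up the
    # new key automatically. Keep the old key enabled so old ciphertext decrypts.
    kms.update_alias(
        AliasName="alias/app-data",
        TargetKeyId="NEW-KEY-ID",  # placeholder
    )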
Q7.An application in Account A needs to use a KMS key in Account B to encrypt data. Which two configurations are required for cross-account KMS access?
✓ Correct: A. Cross-account KMS access requires both a key policy in the target account and an IAM policy in the calling account.
Why A is correct: For cross-account access, the KMS key policy in Account B must include a statement allowing the principal (role/user) or root account from Account A. Additionally, the IAM policy attached to the principal in Account A must grant the relevant KMS actions on the key ARN in Account B. Both sides are required.
Why others are wrong:
B) Only key policy — The key policy alone is not sufficient unless it directly names a specific IAM principal in Account A (not just the root). Best practice requires both key policy and IAM policy.
C) KMS grant alone — Grants are an alternative to key policy statements but still require proper IAM permissions in the calling account.
D) CreateReplica — There is no CreateReplica for standard KMS keys. Multi-region keys use a different mechanism and are not required for cross-account access.
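A sketch of the two required halves — the account IDs and key ARN are placeholders:

    # In Account B (key owner): key policy statement trusting Account A.
    key_policy_statement = {
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111111111111:root"},  # Account A
        "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey*"],
        "Resource": "*",
    }

    # In Account A (caller): IAM policy on the role/user, naming the key in Account B.
    iam_policy_statement = {
        "Effect": "Allow",
        "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey*"],
        "Resource": "arn:aws:kms:us-east-1:222222222222:key/KEY-ID",
    }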
Q8.A security administrator wants to restrict a KMS key so it can only be used by Amazon S3 and no other AWS service. Which KMS condition key should they use in the key policy?
✓ Correct: D. The kms:ViaService condition key restricts a KMS key so it can only be used when the request comes through a specified AWS service.
Why D is correct: The kms:ViaService condition key allows you to limit KMS key usage to requests made by specific AWS services (e.g., s3.us-east-1.amazonaws.com). This ensures the key cannot be called directly by users or by other services.
Why others are wrong:
A) kms:CallerAccount — This condition restricts which AWS account can use the key, not which service. It is useful for cross-account restrictions.
B) kms:KeyOrigin — This is not a valid condition key for restricting service usage. Key origin refers to where the key material was generated.
C) aws:SourceService — This is not the correct condition key for KMS. The kms:ViaService key is the KMS-specific condition for service-based restrictions.
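A sketch of the key policy statement — the account ID and region are placeholders:

    # The key is usable only when S3 in us-east-1 makes the KMS request
    # on the principal's behalf.
    via_service_statement = {
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
        "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey*"],
        "Resource": "*",
        "Condition": {"StringEquals": {
            "kms:ViaService": "s3.us-east-1.amazonaws.com"}},
    }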
Q9.Which S3 server-side encryption method requires the customer to provide the encryption key with every request and AWS does NOT store the key?
✓ Correct: B. SSE-C (Server-Side Encryption with Customer-Provided Keys) requires the customer to send the encryption key with every PUT and GET request.
Why B is correct: With SSE-C, the customer manages the encryption keys and must include the key in every HTTPS request. AWS uses the provided key to encrypt/decrypt the object but does not store the key. HTTPS is mandatory for SSE-C because the key travels in the request headers.
Why others are wrong:
A) SSE-S3 — AWS fully manages the keys using AES-256. No customer key is needed in requests.
C) SSE-KMS — AWS KMS manages the keys. You specify which KMS key to use, but don't provide raw key material in each request.
D) DSSE-KMS — This is dual-layer server-side encryption with KMS. KMS manages the keys, not the customer.
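A minimal boto3 sketch of SSE-C — bucket and object names are placeholders; boto3 sends the key and its MD5 in the request headers over HTTPS:

    import os
    import boto3

    s3 = boto3.client("s3")
    key = os.urandom(32)  # 256-bit key; the caller is responsible for storing it

    # The same key must accompany the upload and every later download.
    s3.put_object(Bucket="example-bucket", Key="secret.txt", Body=b"data",
                  SSECustomerAlgorithm="AES256", SSECustomerKey=key)
    obj = s3.get_object(Bucket="example-bucket", Key="secret.txt",
                        SSECustomerAlgorithm="AES256", SSECustomerKey=key)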
Q10.A company wants to enforce that all objects uploaded to an S3 bucket must use SSE-KMS encryption. Which bucket policy condition should they use to deny unencrypted uploads?
✓ Correct: C. To enforce SSE-KMS, deny PutObject requests where the s3:x-amz-server-side-encryption header is not aws:kms.
Why C is correct: The condition "StringNotEquals": {"s3:x-amz-server-side-encryption": "aws:kms"} on a Deny statement ensures that any upload not specifying SSE-KMS encryption is rejected. This forces all uploads to use KMS-managed keys.
Why others are wrong:
A) AES256 — The value AES256 corresponds to SSE-S3 encryption, not SSE-KMS. This would enforce S3-managed keys instead.
B) s3:x-amz-acl — This condition relates to access control lists, not encryption. It has nothing to do with server-side encryption.
D) s3:x-amz-storage-class — This condition controls the storage class (Standard, IA, Glacier, etc.), not encryption settings.
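A sketch of the bucket policy — the bucket name is a placeholder:

    # Reject any PutObject that does not request SSE-KMS.
    enforce_sse_kms = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyNonKmsUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::example-bucket/*",
            "Condition": {"StringNotEquals": {
                "s3:x-amz-server-side-encryption": "aws:kms"}},
        }],
    }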
Q11.A company wants to protect S3 objects from accidental deletion and overwriting. Which two S3 features can help achieve this? (SELECT TWO)
✓ Correct: B, D. S3 Object Lock and Versioning protect objects from deletion and overwriting.
Why B is correct: S3 Object Lock enables a Write Once Read Many (WORM) model, preventing objects from being deleted or overwritten for a specified retention period. It requires versioning to be enabled.
Why D is correct: S3 Versioning preserves every version of every object. When an object is deleted, a delete marker is placed, but previous versions remain and can be restored.
Why others are wrong:
A) Transfer Acceleration — This speeds up uploads using CloudFront edge locations. It has no data protection capability.
C) Intelligent-Tiering — This automatically moves objects between access tiers to optimize cost. It does not protect against deletion.
E) Lifecycle Policies — These automate transitioning or expiring objects, which could actually cause deletion, not prevent it.
Q12.A compliance team requires that certain S3 objects cannot be deleted or modified by anyone, including the root user, until a specific date. Which S3 Object Lock mode should they use?
✓ Correct: A. Compliance mode prevents any user, including root, from deleting or overwriting an object version during the retention period.
Why A is correct: In Compliance mode, the retention period cannot be shortened, and the object version cannot be deleted by any user, including the root user. This provides the strongest protection for regulatory compliance requirements.
Why others are wrong:
B) Governance mode — Governance mode protects objects from most users, but users with the s3:BypassGovernanceRetention permission can override the lock and delete objects.
C) Legal hold — Legal hold prevents deletion indefinitely but has no expiration date. It can be removed by users with the s3:PutObjectLegalHold permission at any time.
D) Glacier Vault Lock — This applies WORM policies to Glacier vaults, not S3 objects. It is a different service feature.
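A minimal sketch of applying Compliance-mode retention to an object version — bucket and key are placeholders, and the bucket must have Object Lock enabled:

    from datetime import datetime, timezone
    import boto3

    s3 = boto3.client("s3")

    # Compliance mode: no one, including root, can delete this version
    # before the retain-until date.
    s3.put_object_retention(
        Bucket="audit-bucket",
        Key="records/2024/report.pdf",
        Retention={"Mode": "COMPLIANCE",
                   "RetainUntilDate": datetime(2030, 1, 1, tzinfo=timezone.utc)},
    )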
Q13.A security engineer wants to ensure all traffic to an S3 bucket uses HTTPS (TLS). What should they add to the bucket policy?
✓ Correct: D. Denying requests where aws:SecureTransport is false ensures only HTTPS traffic is allowed.
Why D is correct: The condition "Bool": {"aws:SecureTransport": "false"} on a Deny statement blocks any request made over HTTP (unencrypted). This is the AWS-recommended approach to enforce HTTPS/TLS for S3 bucket access.
Why others are wrong:
A) SourceIp condition — Restricting by IP address does not enforce encryption in transit. HTTP requests from allowed IPs would still succeed.
B) Encryption header null — This enforces server-side encryption at rest, not encryption in transit (HTTPS).
C) Allow where SecureTransport is true — An Allow statement alone is insufficient because it does not explicitly deny HTTP. Other policies could still permit unencrypted access. A Deny statement is the correct approach.
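A sketch of the bucket policy — the bucket name is a placeholder:

    # Deny every request that arrives over plain HTTP.
    require_tls = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": ["arn:aws:s3:::example-bucket",
                         "arn:aws:s3:::example-bucket/*"],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }],
    }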
Q14.What is the primary benefit of enabling S3 Bucket Keys when using SSE-KMS encryption?
✓ Correct: B. S3 Bucket Keys reduce the number of KMS API calls by generating a bucket-level key, significantly lowering costs.
Why B is correct: S3 Bucket Keys work by generating a short-lived bucket-level key from KMS, which is then used to create data keys for objects within the bucket. This reduces calls to KMS by up to 99%, saving on KMS request costs which can be significant for high-volume workloads.
Why others are wrong:
A) Stronger encryption — AES-512 does not exist in this context. S3 Bucket Keys still use AES-256 encryption. The benefit is cost, not strength.
C) Key rotation every 30 days — S3 Bucket Keys do not change the KMS key rotation schedule. Automatic rotation is a separate KMS feature.
D) Cross-region replication — S3 Bucket Keys do not enable CRR. Cross-region replication of encrypted objects has its own configuration requirements.
Q15.A developer generates a pre-signed URL for an S3 object using their IAM user credentials with a 1-hour expiration. What happens if the IAM user's permissions are revoked 30 minutes later?
✓ Correct: C. A pre-signed URL inherits the permissions of the identity that created it, so revoking those permissions invalidates the URL.
Why C is correct: Pre-signed URLs use the permissions of the IAM identity that generated them. When S3 receives a request with a pre-signed URL, it checks whether the creator still has the required permissions. If the permissions have been revoked, the request is denied even if the URL has not yet expired.
Why others are wrong:
A) Continues to work — The URL does not carry independent permissions. It is evaluated against the creator's current permissions at the time of each request.
B) Expires immediately — The URL is valid when generated and works as long as the creator has permissions. It does not expire at generation time.
D) Cannot be invalidated — Pre-signed URLs can be effectively invalidated by removing the generating identity's permissions or deleting the identity.
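A minimal boto3 sketch — bucket and key are placeholders:

    import boto3

    s3 = boto3.client("s3")

    # The URL is signed with the caller's credentials; S3 re-evaluates the
    # creator's permissions on every use, so revoking them invalidates the URL.
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "example-bucket", "Key": "report.pdf"},
        ExpiresIn=3600,  # seconds
    )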
Q16.What is the purpose of an S3 Glacier Vault Lock policy?
✓ Correct: A. Glacier Vault Lock enforces an immutable WORM compliance policy on a vault.
Why A is correct: Glacier Vault Lock allows you to deploy and enforce compliance controls on a vault with a Vault Lock policy. Once the policy is locked, it becomes immutable and cannot be changed or deleted—not even by the root user. This is commonly used for regulatory compliance (e.g., SEC Rule 17a-4).
Why others are wrong:
B) Encryption with CMK — Vault Lock is about retention and deletion policies, not encryption. Glacier encryption is configured separately.
C) VPC restriction — VPC-based access control uses vault access policies or VPC endpoints, not Vault Lock.
D) Prevent region moves — Glacier vaults are regional by nature. Vault Lock does not control cross-region behavior.
Q17.Which feature distinguishes AWS CloudHSM from AWS KMS?
✓ Correct: D. CloudHSM provides dedicated, single-tenant hardware security modules, unlike the shared infrastructure of KMS.
Why D is correct: AWS CloudHSM gives you dedicated HSM instances in your VPC. You have exclusive access to the hardware, full control over key management, and the keys never leave the HSM. This meets compliance requirements that mandate single-tenant hardware (e.g., FIPS 140-2 Level 3).
Why others are wrong:
A) Automatic key rotation — CloudHSM does not provide automatic key rotation. You must manage rotation yourself. KMS supports automatic rotation for customer managed keys.
B) Shared multi-tenant — CloudHSM is single-tenant, not shared. KMS uses shared (multi-tenant) infrastructure with logical separation.
C) Free to use — CloudHSM is expensive (charged per HSM instance per hour). KMS has a much lower cost model.
Q18.A company needs to deploy CloudHSM for high availability. Which two steps are required? (SELECT TWO)
✓ Correct: A, E. High availability for CloudHSM requires multiple HSM instances across Availability Zones.
Why A is correct: Deploying HSM instances in different Availability Zones ensures that if one AZ fails, the other HSM instances remain available. AWS recommends at least two AZs for HA.
Why E is correct: A CloudHSM cluster must have at least two HSM instances for high availability. The cluster automatically synchronizes keys across all HSM instances in the cluster.
Why others are wrong:
B) Multi-region replication — CloudHSM does not have a built-in multi-region replication feature. HA is achieved within a single region using multiple AZs.
C) Public subnet — CloudHSM instances should be deployed in private subnets for security, not public subnets.
D) AWS Backup for CloudHSM — CloudHSM has its own backup mechanism. AWS Backup does not directly support CloudHSM clusters.
Q19.In AWS CloudHSM, who is responsible for managing the encryption keys?
✓ Correct: B. With CloudHSM, the customer has full control over key management while AWS manages the underlying hardware.
Why B is correct: CloudHSM follows a shared responsibility model where AWS provisions, maintains, and patches the HSM hardware, but the customer is fully responsible for creating, managing, and deleting encryption keys. AWS has no access to the keys stored in CloudHSM.
Why others are wrong:
A) AWS manages keys — This is backwards. AWS cannot access or manage the keys in CloudHSM. Only the customer's crypto officer (CO) can manage keys.
C) Shared key management — Key management is entirely the customer's responsibility. AWS has zero access to the cryptographic key material.
D) AWS manages everything — This describes a fully managed service like KMS with AWS owned keys, not CloudHSM.
Q20.A development team needs to store database credentials and automatically rotate them every 30 days. The credentials are for an Amazon RDS MySQL instance. Which AWS service is BEST suited for this?
✓ Correct: C. AWS Secrets Manager is purpose-built for storing and automatically rotating database credentials.
Why C is correct: Secrets Manager has native integration with Amazon RDS, Aurora, and Redshift for automatic credential rotation. It uses a Lambda function (provided by AWS) to rotate credentials on a configurable schedule. It is the recommended service for managing database secrets with automatic rotation.
Why others are wrong:
A) SSM Parameter Store — Parameter Store can store secrets as SecureString but does not have built-in automatic rotation. You would need to build custom rotation logic.
B) KMS with Lambda — KMS is for encryption key management, not credential storage. While you could build a custom solution, Secrets Manager provides this out of the box.
D) CloudHSM — CloudHSM is for hardware-based key management, not for storing and rotating database credentials.
Q21.How does AWS Secrets Manager rotate secrets for a supported Amazon RDS database?
✓ Correct: A. Secrets Manager uses an AWS Lambda function to rotate the secret value and update the database credentials simultaneously.
Why A is correct: When rotation is triggered, Secrets Manager invokes a Lambda function that generates a new password, updates the secret in Secrets Manager, and then updates the password on the RDS database. AWS provides pre-built Lambda rotation templates for supported databases (RDS, Aurora, Redshift).
Why others are wrong:
B) SNS notification — Rotation is fully automated. There is no manual step requiring a DBA to change passwords.
C) Direct RDS API — Secrets Manager does not directly call the RDS API. It uses a Lambda function as an intermediary to handle the rotation logic.
D) New RDS instance — Rotation changes the credentials on the existing instance. It does not create new instances or migrate data.
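A minimal sketch of attaching rotation to an existing secret — the secret name and Lambda ARN are placeholders for the AWS-provided rotation function deployed in your account:

    import boto3

    sm = boto3.client("secretsmanager")

    sm.rotate_secret(
        SecretId="prod/mysql/app",
        RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:SecretsManagerRotation",
        RotationRules={"AutomaticallyAfterDays": 30},
    )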
Q22.A company has a multi-region application and needs the same database secret available in multiple AWS regions with automatic replication. Which feature should they use?
✓ Correct: D. Secrets Manager multi-region secrets replicate secrets across multiple AWS regions automatically.
Why D is correct: Secrets Manager supports multi-region secrets, which replicate a secret to multiple AWS regions. The replicas are read-only copies kept in sync with the primary secret. This enables multi-region applications to access secrets locally with low latency and supports disaster recovery scenarios.
Why others are wrong:
A) SSM Parameter Store cross-region sync — Parameter Store does not have a built-in cross-region replication feature for parameters.
B) KMS multi-region keys — Multi-region KMS keys replicate encryption keys across regions, but they do not replicate secret values like database credentials.
C) CloudFormation StackSets — While StackSets can deploy resources across regions, they don't provide automatic synchronization of secret values when they change.
Q23.Which SSM Parameter Store parameter type should be used to store a database password securely?
✓ Correct: B. SecureString parameters are encrypted using AWS KMS, making them suitable for sensitive data like passwords.
Why B is correct: The SecureString parameter type encrypts the parameter value using a KMS key (either the default aws/ssm key or a customer managed key). This ensures that sensitive values like passwords, API keys, and connection strings are encrypted at rest in Parameter Store.
Why others are wrong:
A) String — The String type stores values as plain text without encryption. It is not suitable for sensitive data like passwords.
C) StringList — StringList stores comma-separated values in plain text. It does not support encryption and is not designed for sensitive data.
D) EncryptedList — This parameter type does not exist in SSM Parameter Store. The only types are String, StringList, and SecureString.
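A minimal boto3 sketch of the Q23 pattern — writing and then reading a SecureString parameter. The parameter name and value are hypothetical; omitting KeyId falls back to the default aws/ssm key.

import boto3

ssm = boto3.client("ssm")

# Store the password encrypted at rest with KMS.
ssm.put_parameter(
    Name="/app/prod/db/password",  # hypothetical parameter name
    Value="s3cr3t-value",          # hypothetical value
    Type="SecureString",
    Overwrite=True,
)

# Read it back; WithDecryption=True returns the plaintext value.
resp = ssm.get_parameter(Name="/app/prod/db/password", WithDecryption=True)
print(resp["Parameter"]["Value"])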
Q24.Which two features are available in SSM Parameter Store Advanced tier but NOT in the Standard tier? (Select TWO)
✓ Correct: B, E. Advanced tier adds parameter policies and larger parameter sizes (up to 8 KB).
Why B is correct: Parameter policies are only available in the Advanced tier. They allow you to assign expiration dates to parameters and set up EventBridge notifications before a parameter expires, enabling automated credential rotation workflows.
Why E is correct: The Advanced tier supports parameter values up to 8 KB in size, compared to the Standard tier's limit of 4 KB. This is useful for storing larger configuration data or certificates.
Why others are wrong:
A) SecureString encryption — SecureString encryption with KMS is available in both Standard and Advanced tiers.
C) CloudFormation integration — Both Standard and Advanced tier parameters can be referenced in CloudFormation templates.
D) Hierarchical organization — Both tiers support hierarchical parameter paths (e.g., /app/prod/db/password).
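To illustrate Q24's Advanced-tier features, this hedged boto3 sketch stores a parameter in the Advanced tier with an expiration parameter policy; the name, value, and timestamp are hypothetical.

import json
import boto3

ssm = boto3.client("ssm")

# Advanced tier unlocks parameter policies (and the 8 KB size limit);
# the Standard tier would reject the Policies argument.
ssm.put_parameter(
    Name="/app/prod/api/key",      # hypothetical parameter name
    Value="example-api-key",       # hypothetical value
    Type="SecureString",
    Tier="Advanced",
    Policies=json.dumps([
        {
            "Type": "Expiration",
            "Version": "1.0",
            "Attributes": {"Timestamp": "2026-12-31T23:59:59.000Z"},
        }
    ]),
    Overwrite=True,
)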
Q25.A team is deciding between AWS Secrets Manager and SSM Parameter Store for storing application secrets. The secrets require automatic rotation. Which statement is correct?
✓ Correct: C. Secrets Manager has built-in automatic rotation; Parameter Store does not.
Why C is correct: AWS Secrets Manager natively supports automatic rotation of secrets for RDS, Aurora, Redshift, and other services using built-in Lambda rotation functions. SSM Parameter Store does not have built-in rotation capability—you would need to create custom automation using Lambda and EventBridge.
Why others are wrong:
A) Both provide rotation — Only Secrets Manager has built-in automatic rotation. Parameter Store requires custom implementation.
B) Parameter Store rotates automatically — This is the opposite of reality. Parameter Store has no built-in rotation capability.
D) Neither supports rotation — Secrets Manager explicitly supports and is designed for automatic secret rotation.
Q26.A company wants to use an SSL/TLS certificate with their Application Load Balancer. They want AWS to handle certificate renewal automatically. What should they use?
✓ Correct: A. ACM provides free public certificates that are automatically renewed when used with supported AWS services like ALB.
Why A is correct: AWS Certificate Manager can provision, manage, and deploy public SSL/TLS certificates for use with AWS services. Certificates requested through ACM are free and are automatically renewed before expiration. ACM integrates natively with ALB, CloudFront, and API Gateway.
Why others are wrong:
B) Self-signed certificate — Self-signed certificates are not trusted by browsers, and uploading to IAM does not provide automatic renewal.
C) Import into ACM — Imported certificates are not automatically renewed by ACM. You must manually replace them before they expire.
D) CloudHSM — CloudHSM is for key management, not for certificate provisioning and renewal. It does not provide TLS certificate lifecycle management.
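For Q26, requesting an auto-renewing public certificate is a single call. A minimal boto3 sketch, with a hypothetical domain name:

import boto3

acm = boto3.client("acm")

# Request a public certificate with DNS validation; once the validation
# CNAME record is in place, ACM issues and then renews it automatically.
response = acm.request_certificate(
    DomainName="www.example.com",            # hypothetical domain
    ValidationMethod="DNS",
    SubjectAlternativeNames=["example.com"],
)
print(response["CertificateArn"])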
Q27.A company imports a third-party TLS certificate into AWS Certificate Manager. What is their responsibility regarding this certificate?
✓ Correct: D. Imported certificates are not automatically renewed. The customer must track expiration and reimport new certificates.
Why D is correct: When you import a certificate into ACM, AWS does not manage its renewal. You are responsible for monitoring the certificate's expiration date and importing a new certificate before the old one expires. ACM can send expiration notifications via EventBridge and AWS Health events to help track this.
Why others are wrong:
A) Automatic renewal — Automatic renewal only applies to certificates that were requested through ACM, not imported ones.
B) Converted to ACM-managed — Imported certificates remain imported. ACM does not convert them to managed certificates.
C) Rotated by KMS — KMS manages encryption keys, not TLS certificates. Certificate rotation is not a KMS function.
Q28.Which AWS service should be used to issue private certificates for internal applications and microservices?
✓ Correct: B. ACM Private CA allows you to create a private certificate authority to issue certificates for internal use.
Why B is correct: ACM Private CA enables you to create a private subordinate CA to issue private certificates for internal applications, IoT devices, and microservices. These certificates are trusted within your organization's private PKI infrastructure and are not publicly trusted by browsers.
Why others are wrong:
A) ACM public certificates — Public ACM certificates are for public-facing websites and services. They are issued by Amazon's public CA and are not suitable for internal-only PKI.
C) CloudHSM — CloudHSM provides hardware for key storage and cryptographic operations but does not issue certificates or act as a certificate authority.
D) IAM Server Certificates — IAM can store certificates but cannot issue them. It is a legacy approach largely replaced by ACM.
Q29.A company has an EBS-backed EC2 instance with an unencrypted root volume. They want to encrypt this volume. What is the correct approach?
✓ Correct: C. You cannot encrypt an existing unencrypted EBS volume in-place. You must snapshot, copy with encryption, and create a new volume.
Why C is correct: The process to encrypt an unencrypted EBS volume is: (1) Create a snapshot of the unencrypted volume, (2) Copy the snapshot and enable encryption during the copy, (3) Create a new encrypted volume from the encrypted snapshot, (4) Replace the original volume with the new encrypted one.
Why others are wrong:
A) Enable encryption directly — You cannot toggle encryption on an existing EBS volume. Encryption must be set at volume creation time.
B) ModifyVolume API — The ModifyVolume API can change volume type, size, and IOPS, but it cannot change the encryption state of an existing volume.
D) dd at block level — While technically possible, this is error-prone, requires downtime, and is not the recommended AWS approach. The snapshot-copy method is the standard practice.
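Sketching Q29's snapshot-copy method with boto3 — the volume ID, key alias, and Availability Zone are hypothetical placeholders, and swapping the new volume in for the old one is omitted:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Snapshot the unencrypted volume.
snap = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0")  # hypothetical
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# 2. Copy the snapshot with encryption enabled.
copy = ec2.copy_snapshot(
    SourceSnapshotId=snap["SnapshotId"],
    SourceRegion="us-east-1",
    Encrypted=True,
    KmsKeyId="alias/ebs-key",  # hypothetical customer managed key
)
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[copy["SnapshotId"]])

# 3. Create the encrypted replacement volume from the encrypted snapshot.
ec2.create_volume(SnapshotId=copy["SnapshotId"], AvailabilityZone="us-east-1a")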
Q30.When you create an encrypted EBS snapshot and share it with another AWS account, what must you also share?
✓ Correct: A. You must grant the target account access to the KMS key that was used to encrypt the snapshot.
Why A is correct: Encrypted EBS snapshots can only be decrypted with the KMS key that encrypted them. When sharing across accounts, you must update the KMS key policy to allow the target account to use the key (specifically kms:Decrypt and kms:CreateGrant permissions). Note that snapshots encrypted with the default AWS managed key cannot be shared.
Why others are wrong:
B) IAM role — IAM roles are account-specific and cannot be shared. The target account uses its own IAM principals with permissions granted through the KMS key policy.
C) VPC — VPC configuration is unrelated to snapshot sharing. EBS snapshots are regional resources independent of VPC.
D) Nothing additional — Without access to the encryption key, the target account cannot decrypt or use the shared encrypted snapshot.
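A short boto3 sketch of the Q30 sharing step — the snapshot ID and account ID are hypothetical, and updating the KMS key policy is a separate, required step:

import boto3

ec2 = boto3.client("ec2")

# Grant the target account permission to create volumes from the snapshot.
# The KMS key policy must separately allow 111122223333 to use the key.
ec2.modify_snapshot_attribute(
    SnapshotId="snap-0123456789abcdef0",   # hypothetical snapshot ID
    Attribute="createVolumePermission",
    OperationType="add",
    UserIds=["111122223333"],              # hypothetical account ID
)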
Q31.Which KMS key type allows the same key to be replicated across multiple AWS regions, with each replica capable of being used independently for encrypt and decrypt operations?
✓ Correct: D. Multi-region keys are KMS keys that are replicated across regions with the same key material.
Why D is correct: KMS multi-region keys have the same key ID and key material in multiple AWS regions. Data encrypted with a multi-region key in one region can be decrypted with the corresponding replica key in another region without cross-region API calls. They share a common key ID prefix of mrk-.
Why others are wrong:
A) AWS managed key — AWS managed keys are regional and cannot be replicated to other regions. Each region has its own separate AWS managed keys.
B) Imported key material — While you could import the same material in multiple regions, this is not an automatic replication feature and requires manual management.
C) Symmetric CMK with alias — An alias is just a friendly name. Standard symmetric CMKs are regional and cannot be replicated across regions.
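To make Q31 concrete, a minimal boto3 sketch that creates a multi-region primary key and replicates it; the region choices are illustrative:

import boto3

kms = boto3.client("kms", region_name="us-east-1")

# Create a multi-region primary key (its key ID will start with "mrk-") ...
key = kms.create_key(MultiRegion=True, Description="example multi-region key")
key_id = key["KeyMetadata"]["KeyId"]

# ... then replicate it; the replica shares the same key ID and key material
# and can encrypt/decrypt independently in its own region.
kms.replicate_key(KeyId=key_id, ReplicaRegion="us-west-2")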
Q32.What is the default encryption method applied to all new objects uploaded to an S3 bucket if no encryption is specified?
✓ Correct: B. As of January 2023, Amazon S3 automatically encrypts all new objects using SSE-S3 (AES-256) by default.
Why B is correct: AWS enabled default encryption for all new S3 objects starting January 5, 2023. All new objects are automatically encrypted with SSE-S3 using AES-256 at no additional cost. This applies even if no encryption header is specified in the upload request.
Why others are wrong:
A) No encryption — This was the previous default behavior but is no longer the case since January 2023. All objects are now encrypted by default.
C) SSE-KMS default key — SSE-KMS is not the default. You must explicitly configure it if you want KMS encryption. SSE-S3 is the default.
D) SSE-C auto-generated — SSE-C requires the customer to provide the key with each request. It cannot be used as a default because there is no auto-generated customer key.
Q33.A company is configuring S3 cross-region replication between two buckets. The source bucket uses SSE-KMS encryption. What must be configured for the encrypted objects to be replicated successfully?
✓ Correct: C. For S3 CRR with SSE-KMS, you must specify a destination KMS key and grant the IAM replication role proper decrypt/encrypt permissions.
Why C is correct: When replicating SSE-KMS encrypted objects, the replication configuration must specify a KMS key in the destination region for encrypting replicated objects. The IAM role for replication needs kms:Decrypt permission on the source key and kms:Encrypt permission on the destination key. You can also use multi-region keys to simplify this.
Why others are wrong:
A) Disable encryption — You do not need to disable encryption. S3 CRR fully supports replicating SSE-KMS encrypted objects with proper configuration.
B) Same key in both regions — Standard KMS keys are regional and cannot exist in multiple regions. Multi-region keys can be used but are not required.
D) Convert to SSE-S3 — Converting encryption type is unnecessary. SSE-KMS encrypted objects can be replicated with the correct KMS configuration.
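A hedged boto3 sketch of a Q33-style replication rule — the bucket names, replication role, and destination key ARN are hypothetical placeholders:

import boto3

s3 = boto3.client("s3")

# Rule that picks up SSE-KMS objects from the source bucket and
# re-encrypts replicas with a key in the destination region.
s3.put_bucket_replication(
    Bucket="source-bucket",  # hypothetical
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-replication-role",  # hypothetical
        "Rules": [
            {
                "ID": "replicate-kms-objects",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "SourceSelectionCriteria": {
                    "SseKmsEncryptedObjects": {"Status": "Enabled"}
                },
                "Destination": {
                    "Bucket": "arn:aws:s3:::destination-bucket",  # hypothetical
                    "EncryptionConfiguration": {
                        "ReplicaKmsKeyID": "arn:aws:kms:us-west-2:111122223333:key/hypothetical-key-id"
                    },
                },
            }
        ],
    },
)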
Q34.A KMS key policy contains the following statement: "Principal": {"AWS": "arn:aws:iam::123456789012:root"}. What does this allow?
✓ Correct: A. Granting access to the account root principal enables IAM policies within that account to manage access to the KMS key.
Why A is correct: When the key policy grants access to the account root (arn:aws:iam::123456789012:root), it delegates permission management to IAM. This means IAM policies attached to users and roles in that account can grant KMS permissions. This is the default key policy statement and is essential for IAM-based access control to work.
Why others are wrong:
B) Only root user — The root principal in a key policy does not mean only the root user. It means the entire account, enabling IAM policy-based access.
C) Denies all except root — An Allow statement for the root principal does not deny anyone. It enables the IAM permission system for the key.
D) Enables key rotation — Key rotation is a separate configuration. The root principal in the key policy has nothing to do with rotation settings.
Q35.What is a KMS Grant, and when would you use one?
✓ Correct: B. A KMS Grant is a way to programmatically delegate temporary use of a KMS key to a specific principal.
Why B is correct: KMS Grants allow you to grant temporary access to KMS keys without modifying the key policy or IAM policies. They are commonly used by AWS services (like EBS) to encrypt/decrypt data on your behalf. Grants can be created programmatically and can be revoked or retired when no longer needed.
Why others are wrong:
A) Permanent policy replacing key policy — Grants do not replace key policies. They are supplementary, temporary permissions that work alongside key policies and IAM policies.
C) Financial grant — KMS Grants have nothing to do with cost or billing. They are an access control mechanism.
D) Transfer key ownership — Grants do not transfer ownership. They only delegate usage permissions. Key ownership remains with the account that created it.
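A minimal boto3 sketch of Q35's grant lifecycle, with a hypothetical key ID and grantee role:

import boto3

kms = boto3.client("kms")

# Delegate temporary decrypt access without touching the key policy
# or any IAM policy.
grant = kms.create_grant(
    KeyId="1234abcd-12ab-34cd-56ef-1234567890ab",            # hypothetical key ID
    GranteePrincipal="arn:aws:iam::111122223333:role/app-role",  # hypothetical role
    Operations=["Decrypt"],
)

# Retire the grant when it is no longer needed.
kms.retire_grant(GrantToken=grant["GrantToken"])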
Q36.Which two services can natively integrate with ACM certificates for TLS termination? (Select TWO)
✓ Correct: A, D. ALB and CloudFront natively integrate with ACM for TLS termination.
Why A is correct: Application Load Balancers integrate directly with ACM. You can select an ACM certificate in the ALB HTTPS listener configuration for TLS termination at the load balancer.
Why D is correct: CloudFront integrates with ACM for custom domain HTTPS. Note that CloudFront requires ACM certificates to be provisioned in the us-east-1 region specifically.
Why others are wrong:
B) EC2 instances directly — ACM certificates cannot be exported or installed directly on EC2 instances. You must use a load balancer or other supported service in front.
C) Amazon S3 — S3 does not use ACM certificates for TLS. S3 endpoints use AWS-managed certificates.
E) Lambda functions — Lambda does not directly integrate with ACM. Lambda functions behind API Gateway or ALB can benefit from ACM certificates on those services.
Q37.An organization needs to process sensitive data (like PII) on EC2 instances in a way that prevents even privileged users and the host operating system from accessing the data during processing. Which AWS technology should they use?
✓ Correct: C. AWS Nitro Enclaves provides isolated compute environments that prevent anyone, including root users and administrators, from accessing the data being processed.
Why C is correct: Nitro Enclaves create isolated virtual machines with their own kernel, memory, and CPU. They have no persistent storage, no external networking, and no interactive access—even the parent EC2 instance's root user cannot SSH into or access the enclave's memory. This is ideal for processing highly sensitive data like PII, healthcare data, and financial data.
Why others are wrong:
A) CloudHSM — CloudHSM protects cryptographic keys in hardware, but it does not provide an isolated compute environment for processing sensitive data.
B) KMS with CMK — KMS provides encryption key management. It protects data at rest and in transit but does not protect data during processing in memory.
D) Trusted Advisor — Trusted Advisor provides recommendations for cost optimization, security, and performance. It has no data protection processing capabilities.
Q38.Which S3 encryption option provides two layers of server-side encryption for objects?
✓ Correct: A. DSSE-KMS (Dual-layer Server-Side Encryption with KMS) applies two layers of encryption to S3 objects.
Why A is correct: DSSE-KMS applies two distinct layers of encryption to objects stored in S3. Each layer uses a different implementation of AES-256-GCM encryption, both managed through KMS keys. This meets compliance requirements like CNSSP 15 that mandate dual-layer encryption.
Why others are wrong:
B) SSE-KMS — SSE-KMS applies a single layer of server-side encryption using a KMS key, not dual-layer.
C) SSE-S3 — SSE-S3 applies a single layer of AES-256 encryption using S3-managed keys.
D) SSE-C — SSE-C applies a single layer of encryption using customer-provided keys.
Q39.A company wants to give specific users in a department access to a subset of S3 objects in a large shared bucket. They want to simplify permission management by creating a specific endpoint with its own access policy. Which feature should they use?
✓ Correct: B. S3 Access Points simplify data access management by creating dedicated access endpoints with their own access policies.
Why B is correct: S3 Access Points provide named network endpoints with individual access point policies. Each access point can restrict access to a specific prefix, have its own DNS name, and enforce different policies for different user groups. This avoids complex, monolithic bucket policies.
Why others are wrong:
A) Bucket policy with IAM conditions — While functional, bucket policies can become very complex with many users and prefixes. Access Points provide a cleaner solution at scale.
C) S3 Object Lambda — Object Lambda transforms data as it is retrieved from S3. It is for data transformation, not access management.
D) S3 Batch Operations — Batch Operations perform bulk operations on existing objects (copy, tag, restore). It is not an access management feature.
Q40.A company needs to access S3 data from multiple regions with automatic routing to the nearest S3 bucket to minimize latency. Which feature should they use?
✓ Correct: D. S3 Multi-Region Access Points provide a single global endpoint that routes requests to the closest S3 bucket.
Why D is correct: S3 Multi-Region Access Points give you a global endpoint that automatically routes S3 requests to the nearest bucket based on network latency. They work with S3 Cross-Region Replication to keep data in sync across regions. This provides both low-latency access and data redundancy.
Why others are wrong:
A) Transfer Acceleration — Transfer Acceleration speeds up uploads to a single S3 bucket using CloudFront edge locations. It does not route between multiple buckets in different regions.
B) CloudFront with S3 — CloudFront caches content at edge locations but serves from a single origin bucket. It does not dynamically route to the nearest of multiple buckets.
C) S3 Access Points — Standard S3 Access Points are regional and provide access to a single bucket. They do not provide multi-region routing.
Q41.An RDS MySQL database is running without encryption. The security team mandates that it must be encrypted. What is the correct approach?
✓ Correct: C. You cannot enable encryption on an existing unencrypted RDS instance. You must use the snapshot-copy-restore method.
Why C is correct: To encrypt an unencrypted RDS database: (1) Create a snapshot of the unencrypted instance, (2) Copy the snapshot and enable encryption with a KMS key during the copy, (3) Restore a new RDS instance from the encrypted snapshot, (4) Update the application to point to the new encrypted instance.
Why others are wrong:
A) ModifyDBInstance — You cannot change the encryption state of an existing RDS instance. Encryption must be configured at creation time.
B) Parameter group — Encryption at rest is not controlled through RDS parameter groups. It is a property set during instance creation.
D) Attach encrypted EBS — RDS manages its own storage. You cannot directly attach or manage EBS volumes for an RDS instance.
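Sketching Q41's snapshot-copy-restore procedure in boto3; all identifiers and the key alias are hypothetical:

import boto3

rds = boto3.client("rds")

# 1. Snapshot the unencrypted instance.
rds.create_db_snapshot(
    DBInstanceIdentifier="legacy-mysql",              # hypothetical
    DBSnapshotIdentifier="legacy-mysql-unencrypted",  # hypothetical
)
rds.get_waiter("db_snapshot_available").wait(
    DBSnapshotIdentifier="legacy-mysql-unencrypted"
)

# 2. Copy the snapshot; supplying KmsKeyId encrypts the copy.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="legacy-mysql-unencrypted",
    TargetDBSnapshotIdentifier="legacy-mysql-encrypted",
    KmsKeyId="alias/rds-key",  # hypothetical customer managed key
)
rds.get_waiter("db_snapshot_available").wait(
    DBSnapshotIdentifier="legacy-mysql-encrypted"
)

# 3. Restore a new, encrypted instance and repoint the application to it.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="prod-mysql-encrypted",  # hypothetical
    DBSnapshotIdentifier="legacy-mysql-encrypted",
)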
Q42.A DynamoDB table contains sensitive customer data. The security team wants the data encrypted at rest using a key they can manage and audit through CloudTrail. Which encryption option should they choose?
✓ Correct: A. A customer managed KMS key gives full control over the key and provides CloudTrail audit logging of key usage.
Why A is correct: With a customer managed KMS key, the security team can define the key policy, enable/disable the key, set up key rotation, and audit all usage through CloudTrail. Every DynamoDB read/write that uses the key generates a CloudTrail event, providing complete visibility into data access patterns.
Why others are wrong:
B) AWS owned key — AWS owned keys are free but provide no visibility or control. Usage is not logged in CloudTrail, and you cannot manage the key policy.
C) AWS managed key — The aws/dynamodb key provides CloudTrail logging but you cannot manage the key policy, enable/disable it, or control rotation beyond the default.
D) Client-side encryption only — Client-side encryption is the application's responsibility and is not a DynamoDB encryption-at-rest option. It adds complexity without providing DynamoDB-native key management.
Q43.Which two statements about Amazon EFS encryption are correct? (Select TWO)
✓ Correct: B, D. EFS encryption at rest is set at creation, and encryption in transit uses the mount helper with TLS.
Why B is correct: EFS encryption at rest must be enabled when the file system is created. You cannot enable encryption at rest on an existing unencrypted EFS file system. If you need encryption, you must create a new encrypted file system and migrate your data.
Why D is correct: EFS encryption in transit is enabled by using the Amazon EFS mount helper (amazon-efs-utils) with the -o tls mount option. This establishes a TLS-encrypted connection between the EC2 instance and the EFS mount target.
Why others are wrong:
A) Enabled after creation — This is incorrect. Encryption at rest cannot be changed after the file system is created.
C) In transit by default — Encryption in transit is not enabled by default. You must explicitly use the TLS mount option.
E) No KMS support — EFS supports KMS keys for encryption at rest. You can use the default aws/elasticfilesystem key or a customer managed key.
Q44.A security engineer needs to ensure that a KMS key can only be used by principals within a specific AWS account, regardless of what the IAM policies in other accounts might allow. Which condition key should they use?
✓ Correct: D. The kms:CallerAccount condition restricts key usage to principals from a specific AWS account.
Why D is correct: The kms:CallerAccount condition key filters requests based on the AWS account ID of the caller. By adding this condition to an Allow statement in the key policy, you ensure that only principals from the specified account can use the key, providing a strong account-level boundary.
Why others are wrong:
A) kms:ViaService — This restricts which AWS service can use the key, not which account. It filters by the service making the KMS request.
B) aws:SourceAccount — This global condition is used for service-to-service access scenarios (like S3 event notifications), not for KMS key policies to restrict caller accounts.
C) aws:PrincipalOrgID — This restricts access to principals within an AWS Organization, not a specific account. It is broader than account-level restriction.
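As a rough illustration of Q44, here is a key policy statement using kms:CallerAccount, applied with put_key_policy. The account ID and key ID are hypothetical, and a production policy would also retain the usual key-administration statements so the key stays manageable:

import json
import boto3

kms = boto3.client("kms")

# Allow statement that only matches callers from account 111122223333;
# IAM policies in that account still decide which principals get access.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowUseFromThisAccountOnly",
            "Effect": "Allow",
            "Principal": {"AWS": "*"},
            "Action": "kms:*",
            "Resource": "*",
            "Condition": {"StringEquals": {"kms:CallerAccount": "111122223333"}},
        }
    ],
}

kms.put_key_policy(
    KeyId="1234abcd-12ab-34cd-56ef-1234567890ab",  # hypothetical key ID
    PolicyName="default",
    Policy=json.dumps(policy),
)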
Q45.When using SSE-KMS encryption with S3, which operation requires a call to the KMS Decrypt API?
✓ Correct: B. Downloading (reading) an SSE-KMS encrypted object requires S3 to call kms:Decrypt to decrypt the data key.
Why B is correct: When an SSE-KMS encrypted object is downloaded, S3 must decrypt the encrypted data key stored with the object by calling kms:Decrypt. The decrypted data key is then used to decrypt the object data. The caller must have kms:Decrypt permission on the KMS key.
Why others are wrong:
A) Uploading an object — Upload operations call kms:GenerateDataKey to create a new data key for encrypting the object, not kms:Decrypt.
C) Listing objects — ListObjects only returns metadata (keys, sizes, dates) and does not access object content, so no KMS call is needed.
D) Deleting an object — Deleting an object does not require decryption. The DeleteObject operation does not interact with KMS.
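To make Q45's API split visible, a minimal boto3 sketch of envelope encryption: generate_data_key on the write path, decrypt on the read path. The key alias is hypothetical.

import boto3

kms = boto3.client("kms")

# Write path: get a fresh data key; the plaintext key encrypts the object,
# and the encrypted copy of the key is stored alongside the ciphertext.
dk = kms.generate_data_key(
    KeyId="alias/s3-key",  # hypothetical key alias
    KeySpec="AES_256",
)
plaintext_key, encrypted_key = dk["Plaintext"], dk["CiphertextBlob"]

# Read path: the stored encrypted data key goes back to KMS.
# This is the kms:Decrypt call the question refers to.
restored = kms.decrypt(CiphertextBlob=encrypted_key)
assert restored["Plaintext"] == plaintext_key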
Q46.A security team wants to store AWS CodeBuild build artifacts in S3 with encryption. They also want to ensure that environment variables containing secrets are encrypted. How should CodeBuild secrets be protected?
✓ Correct: A. CodeBuild should reference secrets stored in SSM Parameter Store or Secrets Manager rather than storing them as plaintext environment variables.
Why A is correct: CodeBuild supports referencing secrets from SSM Parameter Store (SecureString) and Secrets Manager directly in the buildspec file or project environment variables. This keeps secrets encrypted and centrally managed. CodeBuild resolves the references at build time without exposing the values in logs or configuration.
Why others are wrong:
B) Plaintext environment variables — Storing secrets as plaintext environment variables is insecure. They can be exposed in build logs and are visible in the CodeBuild console.
C) Public S3 bucket — Storing secrets in a public bucket is a critical security vulnerability and violates security best practices.
D) Source code repository — Embedding secrets in source code is a major security risk. Anyone with repository access can see the secrets, and they may leak through version history.
Q47.Which S3 Object Lock retention mode allows users with special permissions to delete objects before the retention period expires?
✓ Correct: C. Governance mode allows users with the s3:BypassGovernanceRetention permission to override or delete objects before retention expires.
Why C is correct: In Governance mode, users with the s3:BypassGovernanceRetention permission can bypass the retention settings and delete or modify object versions. This mode is suitable for testing retention policies or for scenarios where certain administrators need override capabilities.
Why others are wrong:
A) Compliance mode — Compliance mode does not allow anyone, including the root user, to delete objects before the retention period expires. It is the strictest mode.
B) Legal hold — Legal hold is not a retention mode. It is a separate flag that prevents deletion regardless of retention settings and has no expiration date.
D) Vault Lock mode — This is a Glacier feature, not an S3 Object Lock mode.
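A short boto3 sketch of Q47's Governance mode, assuming a bucket with Object Lock enabled; the bucket, key, version ID, and date are hypothetical:

from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")

# Apply a governance-mode retention date to an object version.
s3.put_object_retention(
    Bucket="records-bucket",  # hypothetical
    Key="report.pdf",         # hypothetical
    Retention={
        "Mode": "GOVERNANCE",
        "RetainUntilDate": datetime(2026, 1, 1, tzinfo=timezone.utc),
    },
)

# A principal holding s3:BypassGovernanceRetention can still delete the
# version early, but only by passing the bypass flag explicitly.
s3.delete_object(
    Bucket="records-bucket",
    Key="report.pdf",
    VersionId="example-version-id",  # hypothetical version ID
    BypassGovernanceRetention=True,
)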
Q48.A company has a legal investigation and needs to prevent specific S3 objects from being deleted for an indefinite period, regardless of any retention period settings. Which S3 feature should they use?
✓ Correct: B. Legal Hold prevents object deletion indefinitely until explicitly removed, independent of retention periods.
Why B is correct: S3 Object Lock Legal Hold is designed for legal preservation scenarios. It prevents an object version from being deleted or overwritten, and unlike retention modes, it has no expiration date. It can be applied and removed by any user with the s3:PutObjectLegalHold permission, making it ideal for legal investigations with uncertain timelines.
Why others are wrong:
A) Compliance mode with 100 years — While this would prevent deletion, it is impractical. Legal Hold is specifically designed for this use case and can be removed when the investigation ends.
C) Glacier Vault Lock — This applies WORM policies to Glacier vaults, not individual S3 objects. It is a different mechanism for a different storage tier.
D) Versioning with MFA Delete — MFA Delete requires MFA to delete versions, but it does not prevent deletion by authorized users. It adds friction, not a legal hold.
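For Q48, applying and releasing a legal hold is symmetric. A minimal boto3 sketch with hypothetical bucket and key names:

import boto3

s3 = boto3.client("s3")

# Place a legal hold; it persists until explicitly turned OFF,
# independent of any retention date.
s3.put_object_legal_hold(
    Bucket="records-bucket",  # hypothetical
    Key="evidence.pdf",       # hypothetical
    LegalHold={"Status": "ON"},
)

# When the investigation ends, remove the hold.
s3.put_object_legal_hold(
    Bucket="records-bucket",
    Key="evidence.pdf",
    LegalHold={"Status": "OFF"},
)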
Q49.Which AWS service provides centralized, policy-based backup management across multiple AWS services including EBS, RDS, DynamoDB, and EFS?
✓ Correct: D. AWS Backup is a centralized service for managing backups across multiple AWS services.
Why D is correct: AWS Backup provides a single place to configure and manage backups across AWS services including EBS, RDS, Aurora, DynamoDB, EFS, S3, and more. You create backup plans with schedules, retention policies, and lifecycle rules. It supports cross-region and cross-account backup for disaster recovery.
Why others are wrong:
A) DataSync — AWS DataSync is for transferring data between on-premises storage and AWS, not for managing backup policies across AWS services.
B) Storage Gateway — Storage Gateway provides hybrid cloud storage, connecting on-premises environments to AWS storage. It is not a backup management service.
C) S3 Lifecycle Policies — These manage the lifecycle of S3 objects (transition, expiration) but cannot manage backups for EBS, RDS, DynamoDB, or other services.
Q50.A developer needs to decrypt data that was encrypted under a KMS key in another AWS account without exposing the plaintext to their application. They want to re-encrypt it under a different key. Which KMS API should they use?
✓ Correct: A. The ReEncrypt API decrypts ciphertext and re-encrypts it under a different key, all within KMS without exposing plaintext.
Why A is correct: ReEncrypt performs decryption and encryption in a single API call within KMS. The plaintext is never returned to the caller. This is ideal for re-encrypting data under a new key, such as during key rotation or when migrating data between accounts. The caller needs decrypt permission on the source key and encrypt permission on the destination key.
Why others are wrong:
B) Decrypt then Encrypt — While this achieves the same result, it exposes the plaintext to the application between the two API calls. ReEncrypt is more secure.
C) GenerateDataKey — This generates a new data key for envelope encryption. It does not re-encrypt existing ciphertext under a different CMK.
D) CreateAlias — This creates a friendly name for a KMS key. It has no encryption or decryption functionality.
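A hedged boto3 sketch of Q50's ReEncrypt call; the ciphertext placeholder stands in for the output of an earlier encrypt or generate_data_key call, and the destination key ARN is hypothetical:

import boto3

kms = boto3.client("kms")

# Placeholder for ciphertext previously returned by kms.encrypt or
# kms.generate_data_key under the source key.
ciphertext_blob = b"..."

# The decrypt-and-re-encrypt happens entirely inside KMS; the plaintext
# never reaches the caller.
resp = kms.re_encrypt(
    CiphertextBlob=ciphertext_blob,
    DestinationKeyId="arn:aws:kms:us-east-1:111122223333:key/hypothetical-key-id",
)
new_ciphertext = resp["CiphertextBlob"]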
Q51.Which two statements correctly describe the differences between CloudHSM and KMS? (Select TWO)
✓ Correct: C, E. CloudHSM supports more key types and has a higher FIPS validation level than KMS.
Why C is correct: CloudHSM supports symmetric keys, asymmetric keys (RSA, ECC), and hashing. It also integrates with KMS as a custom key store, allowing you to use CloudHSM-backed keys through the KMS API.
Why E is correct: CloudHSM hardware is validated at FIPS 140-2 Level 3, which provides physical tamper resistance. KMS uses HSMs validated at FIPS 140-2 Level 2 (with Level 3 in some areas), which is a lower overall certification level.
Why others are wrong:
A) KMS single-tenant — This is backwards. CloudHSM is single-tenant; KMS is multi-tenant with logical isolation between customers.
B) CloudHSM only symmetric — CloudHSM supports both symmetric and asymmetric encryption, digital signing, and hashing.
D) KMS keys exportable — KMS keys cannot be exported. They are designed to never leave the KMS service boundary.
Q52.A company needs to restrict S3 bucket access so that objects can only be accessed through a specific VPC endpoint. Which bucket policy condition should they use?
✓ Correct: B. Use aws:sourceVpce to restrict bucket access to a specific VPC endpoint.
Why B is correct: The aws:sourceVpce condition key allows you to restrict S3 bucket access to requests that come through a specific VPC endpoint (e.g., vpce-1234567890abcdef0). You create a Deny statement that denies all access unless the request comes from the specified VPC endpoint.
Why others are wrong:
A) aws:SourceIp with VPC CIDR — aws:SourceIp does not work with VPC endpoints because requests through VPC endpoints use private IPs that are not visible in this condition. Use aws:sourceVpce or aws:sourceVpc instead.
C) aws:SecureTransport — This condition enforces HTTPS but does not restrict access to a specific VPC endpoint.
D) Encryption header — This condition enforces server-side encryption, which is unrelated to VPC endpoint access control.
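To illustrate Q52, a deny-unless-endpoint bucket policy applied with boto3; the bucket name and endpoint ID are hypothetical:

import json
import boto3

s3 = boto3.client("s3")

# Deny every request that does not arrive through the approved VPC endpoint.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideVpcEndpoint",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::internal-bucket",      # hypothetical bucket
                "arn:aws:s3:::internal-bucket/*",
            ],
            "Condition": {
                "StringNotEquals": {"aws:sourceVpce": "vpce-1234567890abcdef0"}
            },
        }
    ],
}

s3.put_bucket_policy(Bucket="internal-bucket", Policy=json.dumps(policy))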
Q53.An Aurora cluster is encrypted with a KMS key. A read replica needs to be created in a different AWS region. Which statement about the encryption of the cross-region read replica is correct?
✓ Correct: C. Cross-region read replicas are encrypted using a KMS key in the destination region.
Why C is correct: When creating a cross-region read replica for an encrypted Aurora cluster, you specify a KMS key in the destination region for encryption. Standard KMS keys are regional, so a different key must be used in each region. The data is decrypted in the source region and re-encrypted in the destination region during replication.
Why others are wrong:
A) Replica unencrypted — If the source is encrypted, all read replicas (including cross-region) must also be encrypted. Unencrypted replicas from encrypted sources are not allowed.
B) Same KMS key — Standard KMS keys are regional and cannot be used in other regions. A different key in the destination region is required (unless using multi-region keys).
D) Not supported — Cross-region read replicas are fully supported for encrypted Aurora clusters.
Q54.What happens when you schedule a KMS customer managed key for deletion?
✓ Correct: A. KMS key deletion has a mandatory waiting period of 7-30 days during which the deletion can be cancelled.
Why A is correct: When you schedule a KMS key for deletion, it enters a pending deletion state for a configurable period between 7 and 30 days (default is 30 days). During this period, the key cannot be used for cryptographic operations but the deletion can be cancelled. This safeguard prevents accidental permanent data loss.
Why others are wrong:
B) Immediately deleted — KMS does not allow immediate key deletion. The waiting period is a mandatory safety feature to prevent accidental data loss.
C) Archived in Glacier — KMS keys are not archived to Glacier. After the waiting period, the key material is permanently and irreversibly deleted.
D) Material rotated — Scheduling deletion does not rotate the key. It disables the key and starts the countdown to permanent deletion.
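A minimal boto3 sketch of Q54's deletion window — scheduling, cancelling, and re-enabling a key (the key ID is hypothetical):

import boto3

kms = boto3.client("kms")

key_id = "1234abcd-12ab-34cd-56ef-1234567890ab"  # hypothetical key ID

# Start the countdown with the minimum 7-day window (the default is 30).
kms.schedule_key_deletion(KeyId=key_id, PendingWindowInDays=7)

# Any time before the window elapses, the deletion can be cancelled;
# the key comes back disabled and must be re-enabled before use.
kms.cancel_key_deletion(KeyId=key_id)
kms.enable_key(KeyId=key_id)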
Q55.Which two encryption-related features apply to Amazon RDS instances? (Select TWO)
✓ Correct: A, C. RDS supports KMS encryption at rest (covering all storage) and SSL/TLS for encryption in transit.
Why A is correct: When RDS encryption at rest is enabled, it encrypts the underlying EBS storage, automated backups, snapshots, and read replicas using AES-256 with KMS keys. All data written to disk is encrypted.
Why C is correct: RDS supports SSL/TLS connections to encrypt data in transit between the application and the database. You can force SSL connections by setting the rds.force_ssl parameter (for PostgreSQL) or using REQUIRE SSL grants (for MySQL).
Why others are wrong:
B) Enable/disable anytime — Encryption at rest must be configured at instance creation. You cannot toggle it on an existing instance.
D) Client-side encryption by default — RDS does not perform client-side encryption. It provides server-side encryption at rest and SSL/TLS for transit.
E) AES-128 — RDS uses AES-256 encryption, not AES-128.
Q56.A KMS key has the following condition in its policy:
"StringEquals": {"kms:ViaService": "ec2.us-east-1.amazonaws.com"}. What does this condition do?
✓ Correct: D. The kms:ViaService condition restricts key usage to requests made through the EC2 service in us-east-1.
Why D is correct: The kms:ViaService condition ensures that the KMS key can only be used when the KMS API call originates from the specified AWS service (EC2 in us-east-1). For example, when EBS encryption calls KMS on behalf of EC2. Direct KMS API calls from users would be denied.
Why others are wrong:
A) EC2 instances calling KMS directly — The condition restricts to requests made through the EC2 service, not from EC2 instances. Applications on EC2 calling KMS directly would be denied by this condition.
B) Only EC2 AMIs — The condition applies to all EC2-related KMS operations (EBS volumes, snapshots, etc.), not just AMIs.
C) Any region — The condition specifies us-east-1 specifically. Requests from EC2 in other regions would not match this condition.
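A sketch of a full key policy statement using this condition (the account ID and key ID are placeholders, and a real key policy would keep its administrative statements alongside this one):

    import json
    import boto3

    statement = {
        "Sid": "AllowUseOnlyThroughEC2InUsEast1",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
        "Action": [
            "kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey*",
            "kms:CreateGrant", "kms:DescribeKey",
        ],
        "Resource": "*",
        "Condition": {
            "StringEquals": {"kms:ViaService": "ec2.us-east-1.amazonaws.com"}
        },
    }

    boto3.client("kms", region_name="us-east-1").put_key_policy(
        KeyId="1234abcd-12ab-34cd-56ef-1234567890ab",
        PolicyName="default",  # the only policy name KMS accepts
        Policy=json.dumps({"Version": "2012-10-17", "Statement": [statement]}),
    )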
Q57.Which statement about AWS Nitro Enclaves is correct?
✓ Correct: B. Nitro Enclaves are isolated with no storage, no networking, and only communicate via vsock.
Why B is correct: Nitro Enclaves are completely isolated virtual machines with no persistent storage, no external network connectivity, and no interactive access (SSH, console, etc.). The only communication channel is a local vsock (virtual socket) connection between the enclave and its parent EC2 instance. This extreme isolation protects sensitive data processing.
Why others are wrong:
A) Persistent storage and network — Enclaves intentionally have no persistent storage or network interfaces to maintain isolation.
C) SSH access by root — No one, including root, can SSH into a Nitro Enclave. There is no interactive access mechanism.
D) Separate physical hardware — Enclaves run on the same physical Nitro host as the parent EC2 instance, but in an isolated partition of the Nitro hypervisor.
Q58.A security team needs to ensure that EBS volumes attached to EC2 instances are always encrypted, even if developers forget to specify encryption. Which account-level setting should they enable?
✓ Correct: A. EBS encryption by default is an account-level setting that ensures all new EBS volumes and snapshots are automatically encrypted.
Why A is correct: When you enable EBS encryption by default in an AWS account (on a per-region basis), all newly created EBS volumes and snapshot copies are encrypted automatically, even if the user does not specify encryption. It uses the default EBS KMS key unless a specific key is specified.
Why others are wrong:
B) AWS Config rule — AWS Config can detect unencrypted volumes and alert or remediate, but it does not prevent creation of unencrypted volumes. It is detective, not preventive.
C) S3 default encryption — This applies to S3 objects, not EBS volumes. They are separate services with separate encryption settings.
D) KMS automatic key rotation — Key rotation changes the key material on a schedule. It does not enforce encryption on new EBS volumes.
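A minimal boto3 sketch; note the setting is per-region, so the region list below (a placeholder) should cover every region in use:

    import boto3

    for region in ["us-east-1", "us-west-2"]:
        ec2 = boto3.client("ec2", region_name=region)
        ec2.enable_ebs_encryption_by_default()
        # Optionally pin a customer managed key instead of the default aws/ebs key:
        # ec2.modify_ebs_default_kms_key_id(KmsKeyId="alias/my-ebs-key")
        status = ec2.get_ebs_encryption_by_default()
        print(region, status["EbsEncryptionByDefault"])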
Q59.A company wants to use AWS Backup for their disaster recovery strategy. Which two capabilities does AWS Backup provide? (SELECT TWO)
✓ Correct: B, E. AWS Backup supports cross-region and cross-account backup copies.
Why B is correct: AWS Backup can copy backups to different AWS regions as part of a backup plan. This ensures that if a primary region has an outage, backups are available in another region for recovery.
Why E is correct: AWS Backup supports cross-account backup, allowing you to copy backups to a different AWS account. Combined with AWS Organizations, this protects against account-level compromise by storing backups in a separate, secure account.
Why others are wrong:
A) Real-time replication — AWS Backup creates point-in-time backups on a schedule, not real-time replication. Services like DynamoDB Global Tables or S3 CRR handle real-time replication.
C) Automatic failover — AWS Backup stores backups but does not handle automatic failover. Failover is managed by services like Route 53, RDS Multi-AZ, or Aurora Global Database.
D) Load balancing — AWS Backup has no load balancing functionality. This is the domain of ELB/ALB.
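A sketch of a backup plan rule with a cross-region copy action; vault names and the ARN are placeholders, and a cross-account copy works the same way with a vault ARN in the other account:

    import boto3

    backup = boto3.client("backup")

    backup.create_backup_plan(
        BackupPlan={
            "BackupPlanName": "dr-daily",
            "Rules": [{
                "RuleName": "daily-with-dr-copy",
                "TargetBackupVaultName": "primary-vault",
                "ScheduleExpression": "cron(0 3 * * ? *)",  # 03:00 UTC daily
                "CopyActions": [{
                    "DestinationBackupVaultArn":
                        "arn:aws:backup:us-west-2:111122223333:backup-vault:dr-vault",
                }],
            }],
        }
    )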
Q60.A company is using the GenerateDataKeyWithoutPlaintext KMS API. In which scenario is this API most useful?
✓ Correct: C. GenerateDataKeyWithoutPlaintext returns only the encrypted data key, useful when you need the key later but not immediately.
Why C is correct: This API generates a data encryption key and returns only the encrypted (ciphertext) copy. It is useful in scenarios where you need to pre-generate data keys for future encryption operations. When you are ready to encrypt, you call Decrypt on the encrypted key to get the plaintext version. This avoids having the plaintext key in memory before it is needed.
Why others are wrong:
A) Immediately encrypt a file — To encrypt immediately, you need the plaintext key, so you would use GenerateDataKey (which returns both plaintext and encrypted copies).
B) Decrypt existing ciphertext — Decryption is done with the Decrypt API, not with key generation APIs.
D) Manual key rotation — Manual key rotation involves creating a new KMS key and updating aliases, not generating data keys.
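A minimal boto3 sketch of this pre-generation pattern (the key alias is a placeholder):

    import boto3

    kms = boto3.client("kms")
    KEY_ID = "alias/my-app-key"

    # Step 1 (now): only the ciphertext copy of the data key is returned,
    # so no plaintext key material sits in memory or in storage.
    encrypted_key = kms.generate_data_key_without_plaintext(
        KeyId=KEY_ID, KeySpec="AES_256"
    )["CiphertextBlob"]

    # ... store encrypted_key alongside the object it will protect ...

    # Step 2 (later, when ready to encrypt): briefly recover the plaintext.
    plaintext_key = kms.decrypt(CiphertextBlob=encrypted_key)["Plaintext"]
    # Use plaintext_key with a local cipher (e.g. AES-GCM), then discard it.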
Domain 6 — Security Foundations and Governance (60 Questions)
Q1.A company is deploying Amazon EC2 instances in AWS. The security team wants to understand which security responsibilities belong to AWS and which belong to the customer. Under the AWS Shared Responsibility Model, who is responsible for patching the guest operating system on EC2 instances?
✓ Correct: B. The customer is responsible for patching the guest operating system on EC2 instances.
Why B is correct: Under the Shared Responsibility Model, AWS manages security of the cloud (hardware, hypervisor, network infrastructure), while the customer manages security in the cloud. For EC2, the guest OS, applications, and data are the customer's responsibility.
Why others are wrong:
A) AWS patches the guest OS — AWS only patches the underlying host infrastructure and hypervisor, not the guest OS running on EC2 instances.
C) Shared equally — OS patching is entirely the customer's responsibility for EC2. There is no shared component for guest OS management.
D) Depends on instance type — The responsibility model for EC2 guest OS patching does not change based on instance type. It is always the customer's responsibility.
Q2.Which of the following is an AWS responsibility under the Shared Responsibility Model when a customer uses Amazon RDS?
✓ Correct: C. AWS is responsible for patching the underlying OS of managed services like RDS.
Why C is correct: Amazon RDS is a managed service. Under the Shared Responsibility Model, AWS handles the underlying infrastructure including OS patching, database engine patching, and hardware maintenance. The customer cannot access the underlying OS of an RDS instance.
Why others are wrong:
A) Database user accounts — Managing database users, roles, and permissions is the customer's responsibility within the RDS instance.
B) Encrypting data at rest — While AWS provides the encryption mechanism, the customer decides whether to enable encryption and manages their KMS keys.
D) Security groups — Configuring network security groups to control access to the RDS instance is the customer's responsibility.
Q3.A security administrator is reviewing the AWS Acceptable Use Policy. Which of the following activities would violate this policy?
✓ Correct: A. Sending unsolicited bulk emails (spam) violates the AWS Acceptable Use Policy.
Why A is correct: The AWS Acceptable Use Policy explicitly prohibits using AWS services for illegal, harmful, or offensive activities. This includes distributing spam, hosting malicious content, and conducting network abuse. Sending unsolicited bulk email is a direct violation.
Why others are wrong:
B) Penetration testing own instances — AWS permits penetration testing on many of its services (including EC2) without prior approval, as long as it targets your own resources.
C) Public web application — Hosting legitimate web applications is a standard and permitted use of AWS services.
D) Encrypted data in S3 — Storing encrypted sensitive data in S3 is a best practice and fully compliant with the Acceptable Use Policy.
Q4.A company just created a new AWS account. The security team wants to follow best practices for the root user. Which action should they take FIRST?
✓ Correct: D. Enabling MFA on the root user is one of the first security steps for a new AWS account.
Why D is correct: The root user has unrestricted access to all AWS resources. Enabling MFA adds an extra layer of protection beyond just a password. AWS best practices recommend enabling MFA on the root account immediately and then creating IAM users for daily tasks.
Why others are wrong:
A) Create root access keys — Creating access keys for the root user is strongly discouraged. If access keys are compromised, an attacker gains full account access.
B) Use root for daily tasks — The root user should only be used for tasks that specifically require root privileges. IAM users should be created for daily operations.
C) Share root credentials — Root credentials should never be shared. Individual IAM users with appropriate permissions should be created for each team member.
Q5.Which of the following tasks can ONLY be performed by the AWS account root user?
✓ Correct: B. Closing an AWS account is a task that requires root user credentials.
Why B is correct: Certain actions are restricted to the root user only, including: closing the AWS account, changing the account name, changing root user credentials, restoring IAM permissions, changing the AWS Support plan, and registering as a seller in the Reserved Instance Marketplace.
Why others are wrong:
A) Creating IAM users — Any IAM user or role with the appropriate IAM permissions can create other IAM users and groups.
C) Launching EC2 instances — Any IAM principal with EC2 permissions can launch instances. This does not require root access.
D) Creating S3 buckets — Any IAM principal with S3 permissions can create buckets. This is a standard operation.
Q6.A company wants to follow AWS best practices for securing their AWS account. Which TWO of the following are recommended security practices for the root user? (SELECT TWO)
✓ Correct: B, D. Enable hardware MFA and delete root access keys.
Why B is correct: A hardware MFA device provides the strongest level of multi-factor authentication for the root user. AWS recommends using hardware tokens or FIDO security keys for root account protection.
Why D is correct: Root user access keys should be deleted if they exist. If root access keys are compromised, an attacker would have unrestricted access to the entire AWS account through the API.
Why others are wrong:
A) Store root access keys in Secrets Manager — The best practice is to not have root access keys at all, not to store them somewhere. Delete them entirely.
C) Use root for all admin tasks — The root user should only be used for tasks that specifically require root privileges. Create IAM users for daily operations.
E) Share root credentials — Root credentials should never be shared. Use IAM roles and break-glass procedures for emergency access.
Q7.A solutions architect is reviewing the AWS Well-Architected Framework for a security audit. Which design principle belongs to the Security Pillar?
✓ Correct: C. "Apply security at all layers" is a design principle of the Security Pillar.
Why C is correct: The Security Pillar of the AWS Well-Architected Framework includes principles such as: implement a strong identity foundation, enable traceability, apply security at all layers, automate security best practices, protect data in transit and at rest, keep people away from data, and prepare for security events.
Why others are wrong:
A) Test at production scale — This is a design principle of the Performance Efficiency pillar, not the Security pillar.
B) Serverless to reduce costs — This relates to the Cost Optimization pillar. Serverless can also help security but is not a stated Security pillar principle.
D) Auto-scaling for capacity — This is a Reliability or Performance Efficiency principle, not specific to the Security pillar.
Q8.A compliance officer needs to download AWS SOC reports and ISO certifications for an audit. Which AWS service provides access to these compliance documents?
✓ Correct: A. AWS Artifact provides on-demand access to AWS compliance reports and agreements.
Why A is correct: AWS Artifact is a self-service portal that provides access to AWS security and compliance documents such as SOC reports, PCI-DSS reports, ISO certifications, and other third-party audit reports. It also allows you to review and accept agreements like the BAA (Business Associate Addendum).
Why others are wrong:
B) AWS Config — AWS Config tracks resource configuration changes and compliance with rules, but does not provide downloadable compliance certifications.
C) Trusted Advisor — Trusted Advisor provides recommendations for cost, performance, security, and fault tolerance, not compliance documents.
D) Security Hub — Security Hub aggregates security findings from multiple AWS services but does not host compliance certification documents.
Q9.A company needs to continuously assess their AWS environment against compliance frameworks like PCI-DSS and GDPR and collect evidence automatically. Which AWS service should they use?
✓ Correct: B. AWS Audit Manager continuously collects evidence to help with compliance audits.
Why B is correct: AWS Audit Manager helps you continuously audit your AWS usage to simplify risk and compliance assessment. It automates evidence collection against prebuilt frameworks (PCI-DSS, GDPR, SOC 2, etc.) and maps AWS usage data to compliance requirements.
Why others are wrong:
A) AWS Artifact — Artifact provides downloadable compliance reports from AWS itself, not continuous assessment of your own environment.
C) AWS Inspector — Inspector scans EC2 instances and container images for software vulnerabilities, not compliance framework assessments.
D) AWS CloudTrail — CloudTrail logs API calls for auditing purposes but does not map evidence to compliance frameworks or generate audit reports.
Q10.A finance team wants to receive an alert when their monthly AWS spending is projected to exceed $10,000. Which AWS service should they use?
✓ Correct: D. AWS Budgets allows you to set custom budgets and receive alerts when thresholds are exceeded.
Why D is correct: AWS Budgets lets you set custom cost and usage budgets. You can configure alerts based on actual or forecasted spending and receive notifications via SNS or email when thresholds are breached. It supports cost, usage, reservation, and savings plan budget types.
Why others are wrong:
A) Cost Explorer — Cost Explorer is a visualization tool for analyzing past and forecasted costs. It does not send proactive spending alerts based on thresholds.
B) Trusted Advisor — Trusted Advisor checks for cost optimization opportunities (like idle resources) but does not provide budget threshold alerts.
C) CloudWatch Metrics — While you can set billing alarms in CloudWatch, AWS Budgets is the purpose-built service for budget management with forecasted spend alerts.
Q11.A security engineer wants to analyze historical AWS spending patterns and identify cost anomalies. Which service provides interactive graphs and filtering to explore cost data?
✓ Correct: A. AWS Cost Explorer provides interactive visualizations of cost and usage data.
Why A is correct: AWS Cost Explorer allows you to visualize, understand, and manage your AWS costs and usage over time. It provides interactive charts, filtering by service/region/tag, forecasting, and the ability to identify spending anomalies through its anomaly detection feature.
Why others are wrong:
B) AWS Budgets — Budgets is for setting spending thresholds and alerts, not for interactive exploration and analysis of historical cost data.
C) Billing Dashboard — The Billing Dashboard provides a summary of current charges but lacks the detailed interactive analysis capabilities of Cost Explorer.
D) Pricing Calculator — The Pricing Calculator estimates costs for planned AWS deployments. It does not analyze historical spending data.
Q12.Which AWS Trusted Advisor check is available to ALL AWS customers regardless of their support plan?
✓ Correct: C. The S3 bucket permissions check is available in the free tier of Trusted Advisor.
Why C is correct: AWS Trusted Advisor provides a core set of checks to all customers for free, including: S3 bucket permissions, security groups (unrestricted ports), IAM use, MFA on root account, EBS public snapshots, and RDS public snapshots. The S3 open access check is one of these free core security checks.
Why others are wrong:
A) EBS public snapshots — This is also one of the free core checks, so the option is not strictly wrong; the intended answer, however, is the S3 bucket permissions check, the most commonly cited of the free core security checks.
B) Underutilized EC2 instances — Cost optimization checks like underutilized EC2 instances require a Business or Enterprise Support plan.
D) RDS idle DB instances — Checking for idle RDS instances is a cost optimization check that requires Business or Enterprise Support.
Q13.A company needs access to ALL AWS Trusted Advisor checks, including cost optimization and fault tolerance recommendations. What is the minimum AWS Support plan required?
✓ Correct: B. Business Support plan is the minimum to access all Trusted Advisor checks.
Why B is correct: The Business Support plan is the minimum tier that unlocks the full set of Trusted Advisor checks across all five categories: cost optimization, performance, security, fault tolerance, and service limits. Basic and Developer plans only get the core security checks.
Why others are wrong:
A) Developer — The Developer plan only provides access to the core (free) Trusted Advisor checks, same as Basic support.
C) Enterprise On-Ramp — While this plan includes all Trusted Advisor checks, it is not the minimum required. Business is sufficient.
D) Enterprise — Enterprise includes all checks but is more expensive than necessary. Business is the minimum plan needed.
Q14.A company requires a designated Technical Account Manager (TAM) and concierge support. Which AWS Support plan provides these features?
✓ Correct: D. The Enterprise Support plan provides a designated TAM and concierge support.
Why D is correct: The Enterprise Support plan is the only plan that includes a designated Technical Account Manager (TAM) who provides proactive guidance and advocacy. It also includes concierge support for billing and account inquiries, and infrastructure event management.
Why others are wrong:
A) Basic — Basic support only includes customer service for billing questions, documentation, and forums. No technical support cases.
B) Developer — Developer provides email-based technical support during business hours but no TAM or concierge.
C) Business — Business provides 24/7 phone, email, and chat support but does not include a designated TAM or concierge support.
Q15.A DevOps team has deployed infrastructure using AWS CloudFormation. After deployment, a team member manually modified a security group through the AWS Console. How can the team detect this unauthorized change?
✓ Correct: C. CloudFormation drift detection identifies resources that have been modified outside of CloudFormation.
Why C is correct: CloudFormation drift detection compares the current state of stack resources against their expected template configuration. When a resource is manually modified (drifted), drift detection reports the differences, allowing the team to identify and remediate unauthorized changes.
Why others are wrong:
A) AWS Config rules — Config rules can detect non-compliant configurations but do not specifically tell you if a resource has drifted from its CloudFormation template.
B) CloudTrail logs — CloudTrail would show who made the change and when, but it does not compare the current state against the CloudFormation template definition.
D) Delete and recreate — Deleting the stack would cause downtime and is not a detection mechanism. It is a destructive remediation, not a detection approach.
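A minimal boto3 sketch of running drift detection and listing drifted resources (the stack name is a placeholder):

    import time
    import boto3

    cfn = boto3.client("cloudformation")

    # Kick off drift detection (asynchronous) and wait for it to finish.
    detection_id = cfn.detect_stack_drift(StackName="prod-network")["StackDriftDetectionId"]
    while True:
        status = cfn.describe_stack_drift_detection_status(
            StackDriftDetectionId=detection_id
        )
        if status["DetectionStatus"] != "DETECTION_IN_PROGRESS":
            break
        time.sleep(5)

    # List only the resources that no longer match the template.
    drifted = cfn.describe_stack_resource_drifts(
        StackName="prod-network",
        StackResourceDriftStatusFilters=["MODIFIED", "DELETED"],
    )
    for r in drifted["StackResourceDrifts"]:
        print(r["LogicalResourceId"], r["StackResourceDriftStatus"])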
Q16.A company wants to ensure that when a CloudFormation stack is deleted, the RDS database is preserved. Which CloudFormation feature should they use?
✓ Correct: A. The DeletionPolicy: Retain attribute preserves a resource when its stack is deleted.
Why A is correct: The DeletionPolicy attribute in CloudFormation controls what happens to a resource when the stack is deleted. Setting it to Retain keeps the resource intact. Other options include Snapshot (creates a snapshot before deletion, for supported resources like RDS) and Delete (the default).
Why others are wrong:
B) Termination protection — Termination protection prevents the entire stack from being deleted, but it does not selectively preserve individual resources if the stack is eventually deleted.
C) Stack policy — Stack policies control which resources can be updated during stack updates, not what happens during stack deletion.
D) Tags — Tags have no effect on CloudFormation deletion behavior. They are metadata only and do not control resource lifecycle.
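A sketch of a template carrying this attribute, deployed via boto3; resource names and property values are illustrative only:

    import boto3

    TEMPLATE = """
    AWSTemplateFormatVersion: '2010-09-09'
    Resources:
      AppDatabase:
        Type: AWS::RDS::DBInstance
        DeletionPolicy: Retain        # survives stack deletion
        # DeletionPolicy: Snapshot    # alternative: snapshot before removal
        Properties:
          Engine: mysql
          DBInstanceClass: db.t3.micro
          AllocatedStorage: '20'
          MasterUsername: admin
          MasterUserPassword: '{{resolve:secretsmanager:app/db:SecretString:password}}'
    """

    boto3.client("cloudformation").create_stack(
        StackName="app-with-retained-db", TemplateBody=TEMPLATE
    )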
Q17.An organization with multiple AWS accounts wants to deploy a standardized CloudFormation stack across all accounts and regions simultaneously. Which feature should they use?
✓ Correct: B. CloudFormation StackSets deploy stacks across multiple accounts and regions from a single template.
Why B is correct: CloudFormation StackSets extend stack functionality by enabling you to create, update, or delete stacks across multiple accounts and regions with a single operation. This is ideal for deploying standardized security baselines, compliance rules, or governance configurations across an organization.
Why others are wrong:
A) Nested stacks — Nested stacks are stacks within stacks that help organize complex templates. They operate within a single account and region.
C) Change sets — Change sets preview the impact of proposed changes to a stack before execution. They do not deploy across multiple accounts.
D) Drift detection — Drift detection identifies configuration changes made outside CloudFormation. It does not deploy resources.
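A minimal boto3 sketch; the template URL, OU ID, and regions are placeholders, and the SERVICE_MANAGED permission model assumes the call is made from the Organizations management (or delegated admin) account:

    import boto3

    cfn = boto3.client("cloudformation")

    cfn.create_stack_set(
        StackSetName="security-baseline",
        TemplateURL="https://s3.amazonaws.com/example-bucket/baseline.yaml",
        PermissionModel="SERVICE_MANAGED",  # Organizations manages the roles
        AutoDeployment={"Enabled": True, "RetainStacksOnAccountRemoval": False},
    )

    # One operation fans the stack out to every account in the OU, per region.
    cfn.create_stack_instances(
        StackSetName="security-baseline",
        DeploymentTargets={"OrganizationalUnitIds": ["ou-abcd-11112222"]},
        Regions=["us-east-1", "eu-west-1"],
    )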
Q18.An enterprise wants to provide pre-approved, self-service IT products (such as approved AMIs and CloudFormation templates) to development teams while maintaining governance. Which AWS service should they use?
✓ Correct: C. AWS Service Catalog allows organizations to create and manage catalogs of approved IT services.
Why C is correct: AWS Service Catalog enables organizations to create and manage catalogs of IT services (called products) that are approved for use. Administrators define portfolios of products using CloudFormation templates, and end users can launch these pre-approved resources through a self-service portal while maintaining compliance.
Why others are wrong:
A) Systems Manager — SSM manages operational tasks like patching and configuration, but does not provide a self-service product catalog for end users.
B) CloudFormation — CloudFormation is the underlying IaC tool, but it does not provide a governed self-service portal or product catalog functionality.
D) AWS Config — Config monitors and records resource configurations for compliance, but does not offer a self-service catalog of approved products.
Q19.A security team wants to automatically respond when an IAM access key is created by triggering a Lambda function. Which AWS service provides event-driven automation for this use case?
✓ Correct: B. Amazon EventBridge enables event-driven automation by routing events to targets like Lambda.
Why B is correct: Amazon EventBridge (formerly CloudWatch Events) is an event bus service that can match events from AWS services (including CloudTrail API calls) and route them to targets like Lambda functions. You can create a rule that matches the CreateAccessKey API call and triggers a Lambda function for automated response.
Why others are wrong:
A) CloudTrail — CloudTrail records the API call but does not trigger automated actions by itself. It is a logging service, not an automation service.
C) SNS — SNS sends notifications but does not match and filter events. EventBridge is needed to detect the event and route it to targets.
D) AWS Config — Config can detect configuration changes and trigger remediations via Config rules, but EventBridge is the more direct and flexible approach for API-event-driven automation.
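A minimal boto3 sketch of such a rule and target; names and ARNs are placeholders, and the Lambda function additionally needs a resource-based policy allowing events.amazonaws.com to invoke it:

    import json
    import boto3

    # IAM is a global service; its CloudTrail events surface in us-east-1,
    # so create the rule there.
    events = boto3.client("events", region_name="us-east-1")

    events.put_rule(
        Name="on-iam-access-key-created",
        EventPattern=json.dumps({
            "source": ["aws.iam"],
            "detail-type": ["AWS API Call via CloudTrail"],
            "detail": {
                "eventSource": ["iam.amazonaws.com"],
                "eventName": ["CreateAccessKey"],
            },
        }),
    )

    events.put_targets(
        Rule="on-iam-access-key-created",
        Targets=[{
            "Id": "respond-lambda",
            "Arn": "arn:aws:lambda:us-east-1:111122223333:function:key-created-responder",
        }],
    )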
Q20.A company wants to schedule an EventBridge rule to run a security scan Lambda function every day at midnight UTC. Which EventBridge feature supports this?
✓ Correct: A. EventBridge supports scheduled rules using cron or rate expressions.
Why A is correct: Amazon EventBridge supports two types of rules: event pattern rules (that match incoming events) and scheduled rules (that trigger on a time-based schedule using cron or rate expressions). A cron expression like cron(0 0 * * ? *) would trigger every day at midnight UTC.
Why others are wrong:
B) Event pattern matching — Event pattern rules react to specific events from AWS services, not time-based schedules.
C) Schema registry — The schema registry discovers and stores event schemas for development purposes. It does not trigger scheduled executions.
D) Event replay — Event replay replays archived events for debugging or reprocessing, not for scheduling regular executions.
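A minimal boto3 sketch (the rule name and target ARN are placeholders):

    import boto3

    events = boto3.client("events")

    events.put_rule(
        Name="nightly-security-scan",
        ScheduleExpression="cron(0 0 * * ? *)",  # 00:00 UTC every day
    )
    events.put_targets(
        Rule="nightly-security-scan",
        Targets=[{
            "Id": "scan-lambda",
            "Arn": "arn:aws:lambda:us-east-1:111122223333:function:security-scan",
        }],
    )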
Q21.A systems administrator needs to run a shell script on 500 EC2 instances simultaneously to update a configuration file. Which AWS Systems Manager feature should they use?
✓ Correct: D. SSM Run Command executes commands across multiple instances without SSH.
Why D is correct: SSM Run Command allows you to remotely execute commands (scripts, shell commands) on managed instances at scale without needing SSH access. It uses SSM Documents to define the commands and can target instances by tags, resource groups, or instance IDs. It supports rate control and error thresholds.
Why others are wrong:
A) Session Manager — Session Manager provides interactive shell access to individual instances. It is not designed for running commands across hundreds of instances simultaneously.
B) Patch Manager — Patch Manager automates the patching process for OS and application patches, not arbitrary shell scripts.
C) Parameter Store — Parameter Store securely stores configuration data and secrets. It does not execute commands on instances.
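A minimal boto3 sketch with rate control; the tag, command, and thresholds are illustrative:

    import boto3

    ssm = boto3.client("ssm")

    resp = ssm.send_command(
        DocumentName="AWS-RunShellScript",
        Targets=[{"Key": "tag:Fleet", "Values": ["web"]}],
        Parameters={"commands": ["sudo sed -i 's/old/new/' /etc/app/app.conf"]},
        MaxConcurrency="10%",  # roll out gradually across the 500 instances
        MaxErrors="5",         # stop dispatching if more than 5 instances fail
    )
    print(resp["Command"]["CommandId"])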
Q22.A security team wants to provide secure shell access to EC2 instances without opening SSH port 22 in security groups and without managing SSH keys. Which AWS service feature should they use?
✓ Correct: C. SSM Session Manager provides secure shell access without opening inbound ports or managing keys.
Why C is correct: SSM Session Manager provides browser-based or CLI shell access to EC2 instances through the SSM agent. It does not require inbound security group rules for SSH (port 22), bastion hosts, or SSH key management. All sessions are logged and can be audited through CloudTrail and S3.
Why others are wrong:
A) Run Command — Run Command executes predefined commands but does not provide interactive shell sessions to instances.
B) EC2 Instance Connect — Instance Connect provides temporary SSH access but still requires port 22 to be open in the security group.
D) Direct Connect — Direct Connect is a dedicated network connection from on-premises to AWS. It is not related to instance shell access.
Q23.A company needs to automate the patching of their EC2 fleet and ensure all instances are compliant with their patching policy. Which SSM feature should they use?
✓ Correct: B. SSM Patch Manager automates the patching process for managed instances.
Why B is correct: SSM Patch Manager automates the process of patching managed instances with both security-related and other types of updates. It uses patch baselines to define which patches to approve, maintenance windows to schedule patching, and provides compliance reporting to show which instances are out of compliance.
Why others are wrong:
A) SSM Inventory — Inventory collects metadata about instances (installed software, configurations) but does not apply patches.
C) SSM State Manager — State Manager ensures instances maintain a defined configuration state, but Patch Manager is the specific tool for patch automation.
D) SSM Automation — Automation runs predefined runbooks for common tasks but is not specifically designed for patch management workflows.
Q24.A security team needs to collect information about all installed applications, network configurations, and Windows updates across their EC2 fleet. Which SSM feature provides this visibility?
✓ Correct: A. SSM Inventory collects metadata about managed instances including installed software and configurations.
Why A is correct: SSM Inventory collects metadata from your managed instances, including information about installed applications, OS details, network configurations, Windows updates, running services, and more. This data can be aggregated and queried to gain visibility across your entire fleet.
Why others are wrong:
B) Patch Manager — Patch Manager manages patching operations and compliance, but does not provide a broad inventory of installed applications and configurations.
C) Run Command — Run Command could execute scripts to gather this data, but SSM Inventory is the purpose-built feature for automated metadata collection.
D) AWS Config — AWS Config tracks AWS resource configurations (like EC2 settings), but does not collect instance-level details like installed software.
Q25.A company is implementing AWS Systems Manager across their environment. Which TWO of the following are capabilities of SSM State Manager? (SELECT TWO)
✓ Correct: A, D. State Manager maintains desired configuration state through scheduled document application.
Why A is correct: SSM State Manager is designed to keep managed instances in a defined, consistent state. It creates associations between SSM documents and target instances to enforce desired configurations.
Why D is correct: State Manager works by automatically applying SSM documents (such as running scripts or configuring settings) to instances on a defined schedule to maintain the desired state.
Why others are wrong:
B) Interactive shell sessions — This is the function of SSM Session Manager, not State Manager.
C) Software inventory — Collecting installed software metadata is the function of SSM Inventory, not State Manager.
E) Patch baselines — Creating and managing patch baselines is the function of SSM Patch Manager, not State Manager.
Q26.A DevOps engineer wants to automate a multi-step workflow that includes stopping an EC2 instance, creating an AMI, and then restarting the instance. Which SSM feature is best suited for this?
✓ Correct: D. SSM Automation executes multi-step runbooks for common operational tasks.
Why D is correct: SSM Automation simplifies common maintenance and deployment tasks by using predefined or custom runbooks (Automation documents). It supports multi-step workflows with approvals, conditional logic, and error handling. AWS provides many pre-built runbooks such as AWS-RestartEC2Instance and AWS-CreateImage.
Why others are wrong:
A) Run Command — Run Command executes single commands or scripts on instances but is not designed for orchestrating multi-step workflows with dependencies.
B) State Manager — State Manager maintains desired configuration state but is not designed for complex multi-step operational workflows.
C) Patch Manager — Patch Manager is specifically for managing OS and application patches, not general-purpose workflow automation.
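A minimal boto3 sketch of starting a pre-built runbook; the instance ID is a placeholder, and parameter names should be checked against the runbook's published schema before use:

    import boto3

    ssm = boto3.client("ssm")

    execution = ssm.start_automation_execution(
        DocumentName="AWS-CreateImage",  # handles the stop/image/restart steps
        Parameters={"InstanceId": ["i-0123456789abcdef0"]},
    )
    print(execution["AutomationExecutionId"])
    # Progress can be polled with get_automation_execution(...).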
Q27.A security team needs a centralized dashboard to aggregate, organize, and prioritize operational issues and remediation actions from multiple AWS services. Which SSM feature provides this?
✓ Correct: C. SSM OpsCenter aggregates operational issues (OpsItems) from multiple sources for centralized management.
Why C is correct: SSM OpsCenter provides a central location to view, investigate, and resolve operational issues. It aggregates OpsItems from sources like CloudWatch alarms, Config rules, CloudFormation events, and other AWS services. Each OpsItem includes relevant data, related resources, and recommended runbooks for remediation.
Why others are wrong:
A) SSM Inventory — Inventory collects instance metadata, not operational issues or remediation actions.
B) SSM Automation — Automation runs runbooks for remediation but does not aggregate and prioritize issues from multiple sources.
D) Security Hub — Security Hub aggregates security findings specifically, while OpsCenter handles broader operational issues including non-security items.
Q28.A company wants to create a standardized, automated pipeline to build, test, and distribute Amazon Machine Images (AMIs) for their EC2 fleet. Which AWS service should they use?
✓ Correct: B. EC2 Image Builder automates the creation, testing, and distribution of AMIs.
Why B is correct: EC2 Image Builder is a fully managed service that simplifies the creation, maintenance, and deployment of customized, secure, and up-to-date server images. It provides an automated pipeline with components for building, validating (testing), and distributing AMIs across regions and accounts.
Why others are wrong:
A) CodePipeline — CodePipeline is a CI/CD service for application deployments, not specifically designed for AMI creation pipelines.
C) Lambda — While Lambda could script parts of an AMI creation process, it is not a purpose-built image building pipeline service.
D) CloudFormation — CloudFormation provisions infrastructure but does not provide an automated image building, testing, and distribution pipeline.
Q29.An EC2 Image Builder pipeline has been configured. In which order do the pipeline stages execute?
✓ Correct: A. Image Builder follows the Build, Test, then Distribute pipeline order.
Why A is correct: EC2 Image Builder pipelines follow a logical sequence: first build the image (apply components like software installations and configurations), then validate/test the image (run test components to ensure compliance), and finally distribute the image (copy to specified regions and share with accounts).
Why others are wrong:
B) Distribute first — You cannot distribute an image before it has been built and tested. Distribution is always the final step.
C) Test first — Testing cannot occur before the image is built. The build phase creates the image that will be validated.
D) Distribute before test — Distributing before testing would risk pushing untested images to production. Testing must occur before distribution.
Q30.A company has created a custom AMI and wants to share it with another AWS account. The AMI contains an encrypted EBS snapshot using a customer-managed KMS key. What additional step is required for the other account to use this AMI?
✓ Correct: C. The KMS key used to encrypt the AMI's snapshot must be shared with the target account.
Why C is correct: When sharing an AMI with encrypted EBS snapshots, the recipient account needs access to the KMS key used for encryption. The key policy must be updated to grant the target account kms:Decrypt and kms:CreateGrant permissions. The recipient can then launch instances or re-encrypt with their own key.
Why others are wrong:
A) Recreate the key — KMS keys cannot be duplicated with the same key material across accounts. The correct approach is to share access to the existing key.
B) Decrypt before sharing — You cannot share an unencrypted copy of an encrypted AMI. The encryption is tied to the snapshot, and sharing the key permission is the correct approach.
D) Copy to S3 — AMI sharing is done through the EC2 AMI sharing mechanism, not through S3 presigned URLs.
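A minimal sketch of the two steps, assuming a hypothetical target account 111122223333 and placeholder AMI/key IDs. kms:DescribeKey is included here as a commonly added permission (an assumption beyond the two listed above), and the statement is merged into the key's existing policy rather than replacing it:

import boto3, json

ec2 = boto3.client("ec2")
kms = boto3.client("kms")
key_id = "1234abcd-12ab-34cd-56ef-1234567890ab"  # placeholder key ID

# Step 1: share the AMI itself with the target account.
ec2.modify_image_attribute(
    ImageId="ami-0123456789abcdef0",                 # placeholder AMI ID
    LaunchPermission={"Add": [{"UserId": "111122223333"}]},
)

# Step 2: append a statement so the target account can use the key that
# encrypted the AMI's snapshots.
statement = {
    "Sid": "AllowAmiRecipient",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
    "Action": ["kms:Decrypt", "kms:DescribeKey", "kms:CreateGrant"],
    "Resource": "*",
}
policy = json.loads(kms.get_key_policy(KeyId=key_id, PolicyName="default")["Policy"])
policy["Statement"].append(statement)
kms.put_key_policy(KeyId=key_id, PolicyName="default", Policy=json.dumps(policy))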
Q31.A security engineer wants to automatically deprecate old AMIs so that new instances cannot be launched from them after a specified date. Which AMI lifecycle feature supports this?
✓ Correct: D. AMI deprecation allows setting a date after which the AMI is marked as deprecated.
Why D is correct: AWS supports setting a deprecation date on AMIs. After this date, the AMI is marked as deprecated and will not appear in standard AMI listings. Existing instances are unaffected, but new launches from deprecated AMIs will show a warning. This is different from deregistration, which completely removes the AMI.
Why others are wrong:
A) AMI deregistration — Deregistration permanently removes the AMI so it cannot be used at all. Deprecation is a softer approach that warns but does not fully prevent use.
B) EBS snapshot archival — Snapshot archival moves snapshots to cheaper storage but does not manage AMI lifecycle or deprecation.
C) Tagging with expiration dates — Tags are metadata and have no enforcement mechanism. They do not automatically deprecate or restrict AMI usage.
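Setting the deprecation date is a single call; a minimal sketch with a placeholder AMI ID and an arbitrary date:

import boto3
from datetime import datetime, timezone

ec2 = boto3.client("ec2")

# After DeprecateAt, the AMI stops appearing in standard DescribeImages
# listings, but existing instances are unaffected.
ec2.enable_image_deprecation(
    ImageId="ami-0123456789abcdef0",                     # placeholder ID
    DeprecateAt=datetime(2026, 1, 1, tzinfo=timezone.utc),
)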
Q32.An organization wants to manage and track software licenses from vendors like Microsoft and Oracle across their AWS environment. Which AWS service helps with license compliance management?
✓ Correct: B. AWS License Manager helps manage and track software licenses across AWS and on-premises.
Why B is correct: AWS License Manager simplifies the process of managing software licenses from vendors like Microsoft, SAP, and Oracle. It allows you to create licensing rules, track license consumption, enforce license limits, and reduce the risk of non-compliance. It works across AWS accounts and on-premises environments.
Why others are wrong:
A) Service Catalog — Service Catalog manages approved IT product offerings, not software license tracking and compliance.
C) Systems Manager — SSM manages instance operations and configurations but does not provide license tracking or compliance enforcement.
D) AWS Marketplace — Marketplace is where you purchase software, but License Manager is the tool for tracking and managing license usage.
Q33.A company wants to receive recommendations for right-sizing their EC2 instances based on actual utilization metrics. Which AWS service analyzes workloads and provides these recommendations?
✓ Correct: A. AWS Compute Optimizer analyzes utilization and recommends optimal instance types.
Why A is correct: AWS Compute Optimizer uses machine learning to analyze historical utilization metrics from CloudWatch and recommends optimal AWS resource configurations. It covers EC2 instances, Auto Scaling groups, EBS volumes, and Lambda functions, helping you identify over-provisioned or under-provisioned resources.
Why others are wrong:
B) Cost Explorer — Cost Explorer provides cost analysis and basic right-sizing recommendations, but Compute Optimizer provides more detailed, ML-driven instance type recommendations.
C) Trusted Advisor — Trusted Advisor identifies underutilized instances but does not provide the same depth of ML-based right-sizing recommendations as Compute Optimizer.
D) Auto Scaling — Auto Scaling adjusts capacity dynamically but does not recommend optimal instance types based on utilization analysis.
Q34.A company wants to assess the resilience of their application against potential disruptions and validate their recovery procedures. Which AWS service helps define and validate application resiliency?
✓ Correct: C. AWS Resilience Hub helps define, validate, and track application resiliency.
Why C is correct: AWS Resilience Hub provides a central place to define your resiliency policy (RTO and RPO targets), assess your application architecture against those targets, identify gaps, and provide actionable recommendations. It also helps validate recovery procedures and track resiliency posture over time.
Why others are wrong:
A) AWS Backup — AWS Backup manages backup policies and schedules, but does not assess application resiliency or validate recovery against RTO/RPO targets.
B) Fault Injection Simulator — FIS runs chaos engineering experiments to test resilience, but Resilience Hub is the assessment and recommendation service for resiliency posture.
D) Well-Architected Tool — The Well-Architected Tool reviews workloads against all pillars, but Resilience Hub provides deeper, automated resiliency assessment with specific RTO/RPO validation.
Q35.Which TWO of the following are design principles of the Security Pillar in the AWS Well-Architected Framework? (SELECT TWO)
✓ Correct: B, E. Enable traceability and automate security best practices are Security Pillar principles.
Why B is correct: Enable traceability means logging and monitoring all actions and changes to your environment in real time. This supports audit, incident response, and compliance requirements.
Why E is correct: Automate security best practices means using software-based security mechanisms to scale quickly and cost-effectively. Automated processes are more reliable and consistent than manual security procedures.
Why others are wrong:
A) Consumption model — This is a design principle of the Cost Optimization pillar, not the Security pillar.
C) Multiple AZs — This is a best practice for the Reliability pillar, not specifically a Security pillar design principle.
D) Stop guessing capacity — This is a design principle of the Reliability pillar, not the Security pillar.
Q36.Under the AWS Shared Responsibility Model, which of the following is the customer's responsibility when using AWS Lambda?
✓ Correct: D. The customer is responsible for configuring IAM permissions for their Lambda functions.
Why D is correct: Lambda is a serverless service where AWS manages the infrastructure, OS, and runtime. The customer's responsibilities are limited to: writing the function code, configuring IAM execution roles and resource policies, managing environment variables and secrets, and ensuring code security.
Why others are wrong:
A) Patching the OS — AWS manages the underlying operating system for Lambda. The customer has no access to it.
B) Runtime environment — AWS manages and updates the Lambda execution environment. Customers choose the runtime but do not maintain it.
C) Physical infrastructure — Physical security is always AWS's responsibility, regardless of the service used.
Q37.A company using AWS Artifact needs to accept a Business Associate Addendum (BAA) to handle protected health information (PHI) under HIPAA. Where in AWS can they accept this agreement?
✓ Correct: B. AWS Artifact Agreements allows you to review and accept compliance agreements like the BAA.
Why B is correct: AWS Artifact has two main sections: Artifact Reports (for downloading compliance documents) and Artifact Agreements (for reviewing and accepting agreements). The BAA for HIPAA compliance can be accepted directly through the Artifact Agreements section, both for individual accounts and across an AWS Organization.
Why others are wrong:
A) Support Center — While support can help with compliance questions, the BAA is self-service through Artifact Agreements, not through support tickets.
C) SCPs — Service Control Policies restrict permissions in an organization. They do not manage compliance agreements.
D) Config dashboard — AWS Config monitors resource compliance against rules, but does not handle legal compliance agreements.
Q38.An operations team is using SSM Patch Manager and wants to define which patches should be automatically approved for installation on their EC2 instances. Which Patch Manager concept allows them to define these rules?
✓ Correct: A. Patch baselines define which patches are approved or rejected for installation.
Why A is correct: A patch baseline in SSM Patch Manager defines rules for auto-approving patches (based on severity, classification, and days after release) and lists of explicitly approved or rejected patches. AWS provides default baselines for each OS, and you can create custom baselines for your specific requirements.
Why others are wrong:
B) Maintenance window — Maintenance windows define when patching occurs (the schedule), not which patches are approved. They set the time window for the operation.
C) Patch group — Patch groups are tags used to associate instances with specific patch baselines. They organize which instances get which baseline, not which patches are approved.
D) SSM Document — SSM Documents define the actions to execute (like AWS-RunPatchBaseline), but the approval rules are defined in the patch baseline itself.
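A minimal boto3 sketch of a custom baseline that auto-approves critical security patches seven days after release and then links it to a patch group; names are illustrative:

import boto3

ssm = boto3.client("ssm")

baseline = ssm.create_patch_baseline(
    Name="prod-linux-baseline",                  # illustrative name
    OperatingSystem="AMAZON_LINUX_2",
    ApprovalRules={
        "PatchRules": [
            {
                "PatchFilterGroup": {
                    "PatchFilters": [
                        {"Key": "CLASSIFICATION", "Values": ["Security"]},
                        {"Key": "SEVERITY", "Values": ["Critical", "Important"]},
                    ]
                },
                "ApproveAfterDays": 7,  # auto-approve 7 days after release
            }
        ]
    },
)

# Instances tagged with Patch Group = "prod" will use this baseline.
ssm.register_patch_baseline_for_patch_group(
    BaselineId=baseline["BaselineId"], PatchGroup="prod"
)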
Q39.A company wants to use CloudFormation to deploy the same security baseline stack to 15 different AWS accounts in their organization. They want to manage this from a single administrator account. Which approach should they use?
✓ Correct: C. StackSets with service-managed permissions can deploy across organization accounts automatically.
Why C is correct: CloudFormation StackSets with service-managed permissions (integrated with AWS Organizations) allows you to deploy stacks to all accounts in an OU or organization automatically. When new accounts are added, the stack is deployed automatically. This is the most efficient and scalable approach for multi-account deployments.
Why others are wrong:
A) Manual stacks — Creating stacks manually in each account is time-consuming, error-prone, and does not scale. It requires individual account access.
B) Nested stacks — Nested stacks work within a single account and region. They cannot deploy across multiple accounts.
D) CodePipeline — While possible, CodePipeline adds unnecessary complexity for this use case. StackSets is the purpose-built solution for multi-account stack deployment.
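A sketch of the service-managed flow, assuming the template body is loaded from a local file and the OU ID is a placeholder:

import boto3

cfn = boto3.client("cloudformation")

with open("security-baseline.yaml") as f:        # hypothetical template file
    template = f.read()

# Service-managed permissions: CloudFormation works through AWS Organizations
# and auto-deploys the stack to accounts added to the OU later.
cfn.create_stack_set(
    StackSetName="security-baseline",
    TemplateBody=template,
    PermissionModel="SERVICE_MANAGED",
    AutoDeployment={"Enabled": True, "RetainStacksOnAccountRemoval": False},
    Capabilities=["CAPABILITY_NAMED_IAM"],
)

cfn.create_stack_instances(
    StackSetName="security-baseline",
    DeploymentTargets={"OrganizationalUnitIds": ["ou-abcd-11111111"]},  # placeholder OU
    Regions=["us-east-1"],
)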
Q40.A security team wants to use Amazon EventBridge for security automation. Which TWO of the following are valid event sources that EventBridge can capture? (SELECT TWO)
✓ Correct: A, C. CloudTrail API calls and Security Hub findings are valid EventBridge event sources.
Why A is correct: CloudTrail integration with EventBridge allows you to create rules that match specific API calls (like CreateAccessKey, StopLogging) and trigger automated responses.
Why C is correct: AWS Security Hub sends findings to EventBridge, enabling automated responses to security issues such as triggering Lambda functions for remediation when critical findings are detected.
Why others are wrong:
B) SSH login attempts — SSH login attempts are OS-level events and are not directly captured by EventBridge. They would need to be processed through CloudWatch Logs or another mechanism first.
D) VPC flow logs in real time — VPC flow logs are delivered to CloudWatch Logs or S3, not directly to EventBridge as real-time events.
E) Packet captures — Individual packet captures are not an EventBridge event source. This would require VPC Traffic Mirroring or similar tools.
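A sketch of an EventBridge rule matching those CloudTrail-recorded API calls and routing them to a hypothetical remediation Lambda (granting EventBridge permission to invoke the function is a separate step, omitted here):

import boto3, json

events = boto3.client("events")

# Match CreateAccessKey and StopLogging calls recorded by CloudTrail,
# regardless of which service emitted them.
pattern = {
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {"eventName": ["CreateAccessKey", "StopLogging"]},
}

events.put_rule(Name="sensitive-api-calls", EventPattern=json.dumps(pattern))

events.put_targets(
    Rule="sensitive-api-calls",
    Targets=[{
        "Id": "remediate",
        "Arn": "arn:aws:lambda:us-east-1:111122223333:function:remediate",  # placeholder
    }],
)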
Q41.A company is concerned about someone accidentally deleting their CloudFormation stack that contains critical production infrastructure. How can they prevent accidental stack deletion?
✓ Correct: D. Termination protection prevents a CloudFormation stack from being accidentally deleted.
Why D is correct: Termination protection is a CloudFormation feature that prevents a stack from being deleted. When enabled, any attempt to delete the stack will fail until termination protection is explicitly disabled. This is the direct mechanism to prevent accidental deletion.
Why others are wrong:
A) DeletionPolicy Retain — This preserves individual resources when a stack is deleted, but it does not prevent the stack itself from being deleted. The stack would still be removed.
B) Deny all CloudFormation actions — This would prevent all CloudFormation operations including updates and creation, which is too restrictive for normal operations.
C) Stack policy preventing updates — Stack policies control resource updates during stack updates, not stack deletion. They do not prevent stack deletion.
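Enabling it is a single call; a minimal sketch with a placeholder stack name:

import boto3

cfn = boto3.client("cloudformation")

# Delete attempts on this stack now fail until protection is disabled again.
cfn.update_termination_protection(
    StackName="prod-core-infrastructure",        # placeholder name
    EnableTerminationProtection=True,
)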
Q42.Which AWS service provides a set of prebuilt compliance framework templates including CIS Benchmarks, SOC 2, and PCI-DSS to automate evidence collection?
✓ Correct: B. AWS Audit Manager provides prebuilt frameworks for automated compliance evidence collection.
Why B is correct: AWS Audit Manager offers prebuilt and custom framework templates for standards like CIS Benchmarks, SOC 2, PCI-DSS, GDPR, and HIPAA. It automatically collects evidence from AWS services and maps it to the relevant framework controls, simplifying audit preparation.
Why others are wrong:
A) Config conformance packs — Conformance packs bundle AWS Config rules for compliance checks but do not provide the full audit management lifecycle with evidence collection and reporting.
C) Security Hub — Security Hub checks resources against security standards but focuses on findings, not comprehensive audit evidence collection and management.
D) Artifact — Artifact provides AWS's own compliance reports and certifications, not tools for assessing your own environment against frameworks.
Q43.A security engineer notices that AWS Trusted Advisor is flagging security groups with unrestricted access (0.0.0.0/0) on specific ports. This check is available on which support plan?
✓ Correct: A. Security group unrestricted access checks are part of the free core Trusted Advisor checks.
Why A is correct: Trusted Advisor provides a set of core security checks to all AWS customers for free, regardless of their support plan. These include: security groups with unrestricted access (specific ports), IAM use, MFA on root account, S3 bucket permissions, EBS public snapshots, and RDS public snapshots.
Why others are wrong:
B) Developer and above — Developer plan does not unlock additional Trusted Advisor checks beyond the free core checks.
C) Business and above — Business unlocks all Trusted Advisor checks, but the security group check is already free.
D) Enterprise only — This check is not restricted to Enterprise. It is available to all customers.
Q44.Which TWO of the following are valid DeletionPolicy values in AWS CloudFormation? (SELECT TWO)
✓ Correct: C, E. Snapshot and Retain are valid DeletionPolicy values.
Why C is correct: Snapshot creates a snapshot of the resource before it is deleted. This is supported for resources like RDS instances, EBS volumes, and ElastiCache clusters. It preserves the data even after the resource is removed.
Why E is correct: Retain keeps the resource intact when the stack is deleted. The resource is preserved but is no longer managed by CloudFormation.
Why others are wrong:
A) Archive — Archive is not a valid DeletionPolicy value in CloudFormation.
B) Protect — Protect is not a valid DeletionPolicy value. The correct term is Retain.
D) Backup — Backup is not a valid DeletionPolicy value. The valid options are Delete (default), Retain, and Snapshot.
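A minimal sketch showing both values in one template, here built as a JSON-format CloudFormation body from Python; resource names and sizes are illustrative:

import boto3, json

# Retain keeps the bucket after stack deletion; Snapshot captures the
# volume's data as an EBS snapshot before the volume is removed.
template = {
    "Resources": {
        "AuditBucket": {
            "Type": "AWS::S3::Bucket",
            "DeletionPolicy": "Retain",
        },
        "DataVolume": {
            "Type": "AWS::EC2::Volume",
            "DeletionPolicy": "Snapshot",
            "Properties": {"AvailabilityZone": "us-east-1a", "Size": 100},
        },
    }
}

boto3.client("cloudformation").create_stack(
    StackName="deletion-policy-demo",            # illustrative name
    TemplateBody=json.dumps(template),
)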
Q45.A company is using EC2 Image Builder. They want to ensure that every AMI produced by the pipeline meets their security hardening standards before distribution. Which Image Builder component should they configure?
✓ Correct: C. Test components validate that the built image meets security and compliance standards.
Why C is correct: In EC2 Image Builder, test components run after the build phase to validate the image. You can create custom test components that check for security hardening requirements (CIS benchmarks, required software versions, security configurations). If tests fail, the pipeline stops and the AMI is not distributed.
Why others are wrong:
A) Build components only — Build components install and configure software but do not validate that the final image meets standards. Testing is a separate phase.
B) Distribution settings — Distribution settings define where the AMI is copied and shared, not whether it meets security standards.
D) Infrastructure configuration — Infrastructure configuration defines the instance type, VPC, and security group used during the build process, not image validation criteria.
Q46.A company wants to track whether their AWS environment meets the security requirements defined in the Security Pillar of the Well-Architected Framework. Which AWS tool allows them to perform this review?
✓ Correct: D. The AWS Well-Architected Tool helps review workloads against the Well-Architected Framework pillars.
Why D is correct: The AWS Well-Architected Tool is a service in the AWS Console that helps you review your workloads against the best practices defined in the Well-Architected Framework. It asks a series of questions for each pillar (including Security) and generates a report with identified risks and improvement recommendations.
Why others are wrong:
A) Trusted Advisor — Trusted Advisor provides automated checks for common best practices, but it is not the formal Well-Architected Framework review tool.
B) Audit Manager — Audit Manager focuses on compliance framework evidence collection, not Well-Architected Framework reviews.
C) Security Hub — Security Hub aggregates security findings but does not perform a structured Well-Architected Framework review.
Q47.A DevOps engineer is using SSM Run Command and wants to control the execution speed so that only 20% of target instances run the command at a time. Which Run Command parameter should they configure?
✓ Correct: B. Max concurrency controls how many instances execute the command simultaneously.
Why B is correct: SSM Run Command supports rate control through two parameters: max concurrency (controls how many targets run the command at the same time, as a number or percentage) and max errors (stops execution if errors exceed a threshold). Setting max concurrency to 20% would run the command on 20% of targets at a time.
Why others are wrong:
A) Error threshold — The error threshold (max errors) stops execution when too many instances fail, but it does not control execution speed or concurrency.
C) Timeout seconds — Timeout controls how long the command can run on each instance, not how many instances run simultaneously.
D) Output S3 bucket prefix — This controls where command output is stored, not execution speed.
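A sketch of rate control with tag-based targeting; the tag and command are illustrative:

import boto3

ssm = boto3.client("ssm")

# Run on 20% of targets at a time; abort if more than 10% of targets fail.
ssm.send_command(
    Targets=[{"Key": "tag:Environment", "Values": ["prod"]}],  # illustrative tag
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": ["yum -y update openssl"]},
    MaxConcurrency="20%",
    MaxErrors="10%",
)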
Q48.A company has an on-premises server that they want to manage using AWS Systems Manager. What is required for SSM to manage this on-premises server?
✓ Correct: A. On-premises servers need the SSM Agent and a hybrid activation to be managed by SSM.
Why A is correct: To manage on-premises servers with SSM, you must install the SSM Agent on the server and register it using a hybrid activation. This creates a managed instance (with an mi- prefix) that can use SSM features like Run Command, Patch Manager, and Session Manager. The server needs outbound internet access to reach SSM endpoints.
Why others are wrong:
B) Direct Connect — Direct Connect provides a dedicated network link but is not required for SSM management. Standard internet access is sufficient.
C) Migrate to EC2 — SSM supports managing on-premises servers directly without migration. Migration is unnecessary for SSM functionality.
D) AWS Outposts — Outposts extends AWS infrastructure on-premises, but SSM can manage existing on-premises servers without Outposts.
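A sketch of creating the hybrid activation on the AWS side; the IAM role name is a hypothetical service role that trusts ssm.amazonaws.com:

import boto3

ssm = boto3.client("ssm")

activation = ssm.create_activation(
    Description="On-prem web servers",           # illustrative
    DefaultInstanceName="onprem-web",
    IamRole="SSMServiceRole",                    # hypothetical role name
    RegistrationLimit=10,
)

# The returned code and ID are used on each server to register the local
# SSM Agent, after which it appears as an mi-* managed instance.
print(activation["ActivationId"], activation["ActivationCode"])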
Q49.Which AWS Support plan offers the fastest response time of less than 15 minutes for business-critical system-down cases?
✓ Correct: C. Enterprise Support provides a less than 15-minute response time for critical cases.
Why C is correct: The Enterprise Support plan offers the fastest response times: less than 15 minutes for business/mission-critical system-down cases. The response time tiers are: General guidance (24 hours), System impaired (12 hours), Production system impaired (4 hours), Production system down (1 hour), and Business/mission-critical system down (15 minutes).
Why others are wrong:
A) Developer — The Developer plan offers a 12-hour response for impaired systems and 24 hours for general guidance; it has no business-critical case tier.
B) Business — Business plan offers less than 1 hour for production system down, but does not have the 15-minute business-critical tier.
D) Enterprise On-Ramp — Enterprise On-Ramp offers less than 30 minutes for business-critical cases, not 15 minutes.
Q50.A security architect is implementing the AWS Well-Architected Security Pillar best practices. Which TWO practices align with the Security Pillar? (SELECT TWO)
✓ Correct: B, D. Protecting data and implementing strong identity foundations are Security Pillar practices.
Why B is correct: Protect data in transit and at rest is a core Security Pillar principle. This includes using encryption, tokenization, and access controls to protect data throughout its lifecycle.
Why D is correct: Implement a strong identity foundation with the principle of least privilege is a foundational Security Pillar design principle. This means using IAM with fine-grained permissions and avoiding long-term credentials.
Why others are wrong:
A) Loose coupling — This is a Reliability pillar best practice, not specifically a Security pillar principle.
C) Horizontal scaling — This is a Performance Efficiency or Reliability consideration, not a Security pillar principle.
E) Managed services — While managed services can improve security, this principle is more associated with Operational Excellence.
Q51.A company is using AWS Service Catalog. The IT administrator wants to control which users can access which products. How does Service Catalog manage access?
✓ Correct: B. Service Catalog uses portfolios to organize and control access to products.
Why B is correct: In AWS Service Catalog, products are organized into portfolios. Administrators grant IAM users, groups, or roles access to specific portfolios. Users can then browse and launch any product within the portfolios they have access to. This provides a clean governance model for self-service IT.
Why others are wrong:
A) Assign products to users — Products are not directly assigned to users. Access is managed through portfolio-level permissions.
C) Tag products with user IDs — Tags do not control access to Service Catalog products. Portfolio-based access control is the mechanism.
D) S3 bucket policies — While CloudFormation templates may be stored in S3, access to Service Catalog products is controlled through portfolios and IAM, not S3 policies.
Q52.An organization wants to use AWS Resilience Hub to assess an application. They need to define acceptable downtime of 1 hour and data loss of 15 minutes. What do these values represent?
✓ Correct: A. RTO is the acceptable downtime and RPO is the acceptable data loss window.
Why A is correct: RTO (Recovery Time Objective) defines the maximum acceptable time to restore service after a disruption — in this case, 1 hour. RPO (Recovery Point Objective) defines the maximum acceptable amount of data loss measured in time — in this case, 15 minutes. AWS Resilience Hub uses these values to assess if your architecture can meet these targets.
Why others are wrong:
B) RPO and RTO reversed — RPO measures data loss (not downtime) and RTO measures recovery time (not data loss). This reverses the definitions.
C) MTTR and MTBF — MTTR (Mean Time To Repair) and MTBF (Mean Time Between Failures) are reliability metrics, not the policy targets set in Resilience Hub.
D) SLA and SLO — SLAs and SLOs are service-level agreements/objectives, not the specific recovery time and data loss targets used in Resilience Hub.
Q53.A security team wants to use AWS Budgets to track not just cost thresholds but also to automatically take action when a budget is exceeded. Which AWS Budgets feature allows automated responses?
✓ Correct: D. Budget actions can automatically execute responses when budget thresholds are exceeded.
Why D is correct: AWS Budgets actions allow you to define automated responses when budget thresholds are breached. Actions can include applying an IAM policy to restrict further spending, applying an SCP to an organizational unit, or stopping specific EC2 or RDS instances. Actions can run automatically or require approval.
Why others are wrong:
A) Budget reports — Budget reports deliver scheduled summaries of budget performance via email. They are informational, not automated response mechanisms.
B) Budget forecasts — Forecasts predict future spending based on trends but do not take automated actions when thresholds are exceeded.
C) Cost allocation tags — Tags help organize and categorize costs but do not trigger automated responses to budget breaches.
Q54.Which of the following is a prerequisite for an EC2 instance to be managed by AWS Systems Manager?
✓ Correct: C. SSM Agent and appropriate IAM permissions are required for SSM management.
Why C is correct: For an EC2 instance to be managed by SSM, it needs: the SSM Agent installed and running (pre-installed on many Amazon Linux and Windows AMIs), an IAM instance profile with the AmazonSSMManagedInstanceCore policy, and network connectivity to SSM endpoints (via internet or VPC endpoints).
Why others are wrong:
A) Public IP address — A public IP is not required. Instances in private subnets can use VPC endpoints to communicate with SSM.
B) Public subnet — Instances in private subnets can use SSM via VPC endpoints. A public subnet is not required.
D) SSH port 22 open — SSM does not use SSH. One of the key benefits of SSM Session Manager is that it eliminates the need for open SSH ports.
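To verify the prerequisites are met, you can list the instances whose agents have successfully registered; a minimal sketch:

import boto3

ssm = boto3.client("ssm")

# Only instances with a working agent, instance profile, and network path
# to the SSM endpoints report in as managed instances.
resp = ssm.describe_instance_information(
    Filters=[{"Key": "PingStatus", "Values": ["Online"]}]
)
for info in resp["InstanceInformationList"]:
    print(info["InstanceId"], info["PlatformName"], info["AgentVersion"])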
Q55.Which TWO of the following are features provided by the AWS Enterprise Support plan but NOT by the Business Support plan? (SELECT TWO)
✓ Correct: A, E. TAM and Concierge Support are exclusive to Enterprise plans.
Why A is correct: A designated Technical Account Manager (TAM) is exclusive to the Enterprise Support plan. The TAM provides proactive guidance, architectural reviews, and serves as the primary point of contact within AWS.
Why E is correct: The Concierge Support Team is exclusive to Enterprise Support. They assist with billing questions, account best practices, and help navigate AWS services for non-technical inquiries.
Why others are wrong:
B) Full Trusted Advisor checks — Both Business and Enterprise plans provide access to the full set of Trusted Advisor checks.
C) 24/7 phone support — Both Business and Enterprise plans include 24/7 phone, email, and chat support for technical issues.
D) Personal Health Dashboard — The Personal Health Dashboard (now AWS Health) is available to all AWS customers regardless of support plan.
Q56.A security engineer wants to create a golden AMI that automatically includes the latest security patches. They want this AMI to be rebuilt weekly. Which combination of services provides this automated pipeline?
✓ Correct: B. EC2 Image Builder supports scheduled pipelines to automatically rebuild AMIs.
Why B is correct: EC2 Image Builder pipelines can be configured with a schedule (using cron expressions) to automatically rebuild AMIs at regular intervals. The pipeline can include build components that apply security patches, test components that validate the image, and distribution settings to share the AMI across accounts and regions.
Why others are wrong:
A) Lambda with cron — While possible, this would require custom code to manage the entire AMI creation process. Image Builder provides this as a managed service.
C) Patch Manager — Patch Manager patches running instances, but does not build new AMIs with patches baked in.
D) CodeBuild — CodeBuild is designed for building application code, not specifically for creating and managing AMI pipelines.
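A sketch of the scheduling piece only, assuming an image recipe and infrastructure configuration already exist (both ARNs are placeholders):

import boto3

ib = boto3.client("imagebuilder")

# Rebuild every Monday at 06:00 UTC, but only when the recipe's underlying
# dependencies (such as the base image) have updates available.
ib.create_image_pipeline(
    name="golden-ami-weekly",                    # illustrative name
    imageRecipeArn="arn:aws:imagebuilder:us-east-1:111122223333:image-recipe/golden/1.0.0",  # placeholder
    infrastructureConfigurationArn="arn:aws:imagebuilder:us-east-1:111122223333:infrastructure-configuration/build-infra",  # placeholder
    schedule={
        "scheduleExpression": "cron(0 6 ? * mon *)",
        "pipelineExecutionStartCondition": "EXPRESSION_MATCH_AND_DEPENDENCY_UPDATES_AVAILABLE",
    },
)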
Q57.Under the Shared Responsibility Model, which of the following is AWS's responsibility for Amazon S3?
✓ Correct: A. AWS is responsible for the underlying S3 infrastructure, including durability and availability.
Why A is correct: Under the Shared Responsibility Model, AWS is responsible for the infrastructure that runs S3, ensuring 99.999999999% (11 nines) durability and high availability across multiple AZs. The customer is responsible for how they configure and use S3.
Why others are wrong:
B) Bucket policies — Configuring bucket policies, ACLs, and access controls is the customer's responsibility to control who can access their data.
C) Enabling encryption — While AWS provides encryption mechanisms, the customer decides whether to enable server-side encryption and which encryption method to use.
D) Data classification — Classifying and categorizing data is entirely the customer's responsibility. AWS does not know what data customers store.
Q58.A company wants to detect when an EventBridge rule fails to deliver an event to its target. What EventBridge feature helps with monitoring delivery failures?
✓ Correct: D. A dead-letter queue captures events that could not be delivered to the target.
Why D is correct: EventBridge supports configuring a dead-letter queue (DLQ) using an SQS queue for each rule target. When event delivery fails after all retry attempts, the failed event is sent to the DLQ. This allows you to investigate and reprocess failed events without data loss.
Why others are wrong:
A) Event replay — Event replay replays previously archived events. It does not specifically capture or monitor delivery failures.
B) Event archive — Archives store events for later replay but do not handle or flag delivery failures to targets.
C) Schema registry — The schema registry discovers and stores the structure of events. It does not monitor delivery success or failure.
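A sketch of attaching a DLQ to a rule target (rule name and ARNs are placeholders); separately, the SQS queue policy must allow events.amazonaws.com to send messages:

import boto3

events = boto3.client("events")

events.put_targets(
    Rule="sensitive-api-calls",                  # illustrative existing rule
    Targets=[{
        "Id": "remediate",
        "Arn": "arn:aws:lambda:us-east-1:111122223333:function:remediate",  # placeholder
        "RetryPolicy": {"MaximumRetryAttempts": 4, "MaximumEventAgeInSeconds": 3600},
        # Events that still fail after all retries land in this SQS queue.
        "DeadLetterConfig": {"Arn": "arn:aws:sqs:us-east-1:111122223333:event-dlq"},  # placeholder
    }],
)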
Q59.Which TWO of the following are benefits of using SSM Session Manager instead of traditional SSH access to EC2 instances? (SELECT TWO)
✓ Correct: B, C. No open ports required and full session logging are key Session Manager benefits.
Why B is correct: SSM Session Manager communicates through the SSM Agent, which uses outbound HTTPS connections. There is no need to open inbound port 22 in security groups, which eliminates a common attack vector and shrinks the instance's attack surface.
Why C is correct: Session Manager provides comprehensive session logging. All session activity (commands typed and output) can be logged to S3 and CloudWatch Logs, and session metadata is recorded in CloudTrail for complete auditability.
Why others are wrong:
A) Faster throughput — Session Manager is not designed for faster file transfer speeds. It provides shell access, not optimized data transfer.
D) Graphical desktop — Session Manager provides command-line shell access, not graphical desktop sessions. Use RDP or NICE DCV for graphical access.
E) Automatic patching — Session Manager does not automatically apply patches. That is the role of SSM Patch Manager.
Q60.A company wants to ensure that all AWS resources deployed by developers conform to approved configurations. The IT team creates standardized CloudFormation templates and wants developers to launch resources only from these templates. Which AWS service enforces this governance model?
✓ Correct: B. AWS Service Catalog enforces that developers can only launch pre-approved products.
Why B is correct: AWS Service Catalog allows IT administrators to create portfolios of approved products (built from CloudFormation templates) and grant developers access only to these products. Developers can self-service launch resources, but only from the approved catalog. Combined with IAM restrictions, this ensures developers cannot deploy non-compliant resources.
Why others are wrong:
A) AWS Config — Config detects non-compliant resources after deployment but does not prevent developers from deploying unapproved configurations in the first place.
C) StackSets — StackSets deploy stacks across accounts and regions but do not provide a self-service governance model for developers.
D) Organizations — Organizations with SCPs can restrict AWS actions broadly, but Service Catalog provides the specific self-service product catalog governance model described.
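A sketch of granting a developer role access to a portfolio (the portfolio ID and role ARN are placeholders):

import boto3

sc = boto3.client("servicecatalog")

# Developers assuming this role can now browse and launch only the
# products contained in this portfolio.
sc.associate_principal_with_portfolio(
    PortfolioId="port-abcd1234efgh",             # placeholder portfolio ID
    PrincipalARN="arn:aws:iam::111122223333:role/DeveloperRole",  # placeholder
    PrincipalType="IAM",
)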