Q1. After a security incident, you want to store management events and data events from CloudTrail in separate S3 buckets. How do you achieve this?
✓ Correct: B. Create a new trail for management events only and send to a different S3 bucket.
Key Concepts:
Management Events (Control Plane) — Operations that manage your AWS resources. Examples: CreateBucket, CreateTrail, RunInstances, AttachRolePolicy, IAM operations, console sign-in events. These tell you who did what to your infrastructure.
Data Events (Data Plane) — Operations performed on data within resources. Examples: S3 GetObject/PutObject, Lambda Invoke, DynamoDB GetItem/PutItem. These are high-volume and tell you what data was accessed or modified.
Security Best Practice:
Separating management and data events into different S3 buckets provides: (1) Separation of concerns — security teams can monitor control-plane changes without sifting through high-volume data events; (2) Cost optimization — data events generate far more log volume, so you can apply different lifecycle/storage policies; (3) Access control — apply different bucket policies so only certain teams access each log type.
Why B is correct: AWS CloudTrail supports up to 5 trails per region. You can create a second trail configured to capture only management events and direct it to a separate S3 bucket, while the original trail continues logging data events to its own bucket. This cleanly separates the two event types.
Why others are wrong:
A) Modify the existing trail to store data events in a new S3 bucket — A single trail sends all its events to one S3 bucket. You cannot configure one trail to split different event types across different buckets.
C) Deleting the existing trail is necessary — Deleting the trail is unnecessary and dangerous — it would create a gap in logging during recreation, potentially missing critical security events.
D) Create a new trail inheriting from the original — CloudTrail trails do not support inheritance. Each trail is independently configured; there is no parent-child relationship between trails.
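To make answer B concrete, here is a minimal boto3 sketch of creating a second trail that captures only management events and delivers to its own bucket. The trail name, bucket name, and region are hypothetical placeholders, and the destination bucket is assumed to already carry the required CloudTrail delivery policy.

import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

# Hypothetical names: adjust to your environment. The original data-event
# trail keeps logging to its existing bucket and is not touched here.
cloudtrail.create_trail(
    Name="management-events-trail",
    S3BucketName="security-mgmt-events-logs",  # separate bucket for management events
    IsMultiRegionTrail=True,
)

# Restrict this trail to management events only (no data events).
cloudtrail.put_event_selectors(
    TrailName="management-events-trail",
    EventSelectors=[{
        "ReadWriteType": "All",
        "IncludeManagementEvents": True,
        "DataResources": [],  # empty list = no data events on this trail
    }],
)

cloudtrail.start_logging(Name="management-events-trail")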
Q2. After an accidental credential leak, your Head of Security wants you to consolidate CloudTrail logs and make them queryable through standard SQL. What should you do?
✓ Correct: B. Direct CloudTrail logs to a single S3 bucket and use Athena for querying.
Key Concepts:
AWS CloudTrail — Records all API calls and account activity across your AWS environment. By default, CloudTrail delivers log files as compressed JSON to an S3 bucket. In a multi-account or multi-region setup, you can consolidate all trails into a single S3 bucket (via an organization trail or cross-account bucket policies).
Amazon Athena — A serverless, interactive query service that lets you analyze data directly in S3 using standard SQL. No infrastructure to manage — you simply define a table schema over your S3 data and start querying. Athena has built-in support for CloudTrail log format with pre-built table definitions.
Security Best Practice:
After a credential leak, you need to investigate: which APIs were called, from what IP addresses, and what resources were accessed. The fastest path is: (1) Consolidate all CloudTrail logs into one S3 bucket, (2) Create an Athena table pointing to that bucket, (3) Run SQL queries like SELECT * FROM cloudtrail_logs WHERE useridentity.accesskeyid = 'AKIA...' to trace the compromised credential's activity. This is the AWS-recommended approach for forensic investigation.
Why B is correct: CloudTrail natively logs to S3 in JSON format. Athena can query S3 data directly using standard SQL — no ETL, no data movement, no servers to manage. AWS even provides a ready-made Athena table definition for CloudTrail logs, making this the simplest and most cost-effective solution.
Why others are wrong:
A) Use DynamoDB to store CloudTrail logs and query using Athena — DynamoDB is a NoSQL key-value database, not designed for log storage or SQL analytics. Moving logs from S3 to DynamoDB adds unnecessary complexity and cost. Also, Athena queries S3, not DynamoDB.
C) Store CloudTrail logs in an RDS database and query using Athena — RDS is a relational database (MySQL, PostgreSQL, etc.). Loading massive log files into RDS is expensive and slow. Athena does not query RDS directly — it queries S3. This adds unnecessary infrastructure.
D) Use Macie to query logs — Amazon Macie is a data security service that discovers and protects sensitive data (PII, credentials) in S3. It does not provide SQL query capabilities for CloudTrail logs. Macie is for data classification, not log analysis.
E) Use DynamoDB to query logs in S3 — DynamoDB is a database, not a query engine. It cannot query data stored in S3. DynamoDB stores and retrieves its own data using key-value lookups, not SQL queries over external files.
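As a sketch of answer B, once a CloudTrail table exists in Athena (AWS publishes a ready-made CREATE TABLE definition for CloudTrail logs), the forensic query can be run through the Athena API. The database name, table name, access key ID, and results bucket below are hypothetical placeholders.

import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Hypothetical database/table created from the AWS-provided CloudTrail table definition.
query = """
SELECT eventtime, eventname, sourceipaddress, eventsource
FROM cloudtrail_logs
WHERE useridentity.accesskeyid = 'AKIAEXAMPLE'
ORDER BY eventtime
"""

response = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "security_audit"},
    ResultConfiguration={"OutputLocation": "s3://athena-query-results-example/"},
)
print("Query execution ID:", response["QueryExecutionId"])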
Q3. An external auditor needs to inspect API events across your 7 AWS accounts for the coming fortnight. How should you set this up?
✓ Correct: C. Enable CloudTrail in each account, send all logs to one centralized S3 bucket, and update bucket policies for cross-account access.
How to Think About This:
When a question mentions "multiple AWS accounts" + "auditor/compliance/central visibility", your mental model should be: Centralized Logging. The pattern is always: (1) each account runs its own CloudTrail, (2) all trails deliver to one central S3 bucket in a designated logging/security account, (3) bucket policy grants cross-account s3:PutObject permission. This is the AWS Well-Architected security pillar pattern for multi-account audit.
Key Concepts:
Cross-Account CloudTrail Logging — Each AWS account enables its own CloudTrail trail but configures the same destination S3 bucket (owned by a central account). The central bucket's policy must explicitly allow cloudtrail.amazonaws.com to write from each account using conditions like "aws:SourceArn". This gives auditors a single location to review all API activity.
S3 Bucket Policy for Cross-Account — The key mechanism. The bucket owner adds a policy allowing CloudTrail from accounts 2-7 to deliver logs. Each account's logs land in a prefix like AWSLogs/<account-id>/ so they remain organized and identifiable (a sample policy sketch follows this question's explanation).
Security Best Practice:
For auditor access: (1) Centralize all logs into one bucket — auditors review one location, not 7. (2) Grant the auditor read-only IAM access to that single bucket. (3) Enable S3 Object Lock or versioning so logs cannot be tampered with. (4) Use a dedicated security/logging account to own the bucket — separate from workload accounts — so no one can delete or modify logs from their own account.
Why C is correct: CloudTrail in each account captures that account's API events. Configuring all trails to deliver to one centralized S3 bucket with cross-account bucket policies gives the auditor a single place to inspect all 7 accounts' activity. This is the standard AWS multi-account logging architecture.
Why others are wrong:
A) Create an Organization, enable CloudTrail only for the master account — While AWS Organizations supports organization trails, this answer says "only for the master account." An organization trail from the management account does capture all member accounts, but the phrasing "enable CloudTrail only for the master account" implies the other accounts wouldn't be logging — which is misleading. Also, creating an entire AWS Organization just for a 2-week audit is overkill if you don't already have one.
B) Send logs to separate S3 buckets — This defeats the purpose of "consolidation." The auditor would have to check 7 different buckets across 7 accounts, each requiring separate access permissions. Not practical for a fortnight audit.
D) Enable CloudWatch in every account — CloudWatch is for metrics, alarms, and application logs, not API event auditing. CloudTrail is the service that records API calls. Also, sending to "individual S3 buckets" again fails the consolidation requirement.
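The sketch below shows what the central bucket policy for answer C could look like, applied from the logging account. The bucket name and account IDs are hypothetical, and in practice you would also add aws:SourceArn conditions for each trail.

import json
import boto3

s3 = boto3.client("s3")

BUCKET = "central-cloudtrail-logs-example"          # hypothetical central bucket
MEMBER_ACCOUNTS = ["111111111111", "222222222222"]  # repeat for all 7 accounts

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # CloudTrail must be able to check the bucket ACL
            "Sid": "AWSCloudTrailAclCheck",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:GetBucketAcl",
            "Resource": f"arn:aws:s3:::{BUCKET}",
        },
        {   # Each account's trail writes under AWSLogs/<account-id>/
            "Sid": "AWSCloudTrailWrite",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:PutObject",
            "Resource": [f"arn:aws:s3:::{BUCKET}/AWSLogs/{acct}/*" for acct in MEMBER_ACCOUNTS],
            "Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}},
        },
    ],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))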
Q4. During testing of your pet supply website, customers can't update their contact details. What could be the issue?
✓ Correct: A. The Lambda execution role lacks permission to write to the DynamoDB table.
Why is Lambda in the picture? (The Architecture Concept)
In modern web applications, customers never talk directly to the database. The architecture looks like this:
Customer Browser → API Gateway → Lambda Function → DynamoDB
Think of it like a restaurant: customers (browser) don't walk into the kitchen (database) themselves. They tell the waiter (API Gateway), who passes the order to the chef (Lambda), who actually cooks (reads/writes DynamoDB). This is called a serverless architecture. Customers interact with a website frontend that calls an API — the API triggers a Lambda function — and Lambda performs the database operation on their behalf.
Key Concepts:
Lambda Execution Role — Every Lambda function runs under an IAM role (called its execution role). This role defines what AWS services the function can access. If the role doesn't include dynamodb:PutItem or dynamodb:UpdateItem permissions, the function will get an AccessDeniedException when trying to write customer data — even though the code is correct.
Why customers don't have IAM roles — End users (website customers) are not IAM users. They don't have AWS credentials. They authenticate through your application (login form, Cognito, etc.), not through AWS IAM. Only the backend Lambda function needs AWS permissions because it's the one making AWS API calls.
Principle of Least Privilege — Lambda execution roles should only have the minimum permissions needed. For this use case: dynamodb:UpdateItem on the specific table ARN — nothing more (a policy sketch follows this question's explanation).
How to Think About This:
When a question says "users can't do X on a website" and the options mention Lambda + a database, think: the Lambda execution role is the bottleneck. The chain is User → API → Lambda → AWS Service. If anything fails, check Lambda's IAM execution role first — it's the most common permission issue in serverless apps.
Why others are wrong:
B) IAM role for customers can't write to DynamoDB — Customers of a website are not IAM users. They don't have IAM roles or AWS credentials. They interact through the application frontend, which calls Lambda. The Lambda execution role handles all AWS permissions, not the customer.
C) The Lambda function is not triggering correctly — While possible, the question asks about a permission issue ("can't update"). If Lambda wasn't triggering at all, the error would be different (timeout, 404, no response). The specific symptom of "can't update" points to a write permission denial, not a trigger failure.
D) The Lambda execution role can't write to S3 — The question is about updating contact details (structured data like name, email, phone). This is stored in a database (DynamoDB), not in S3 file storage. S3 is for objects/files, not transactional record updates.
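A least-privilege sketch of the fix implied by answer A: attach an inline policy to the function's execution role that allows only UpdateItem on the one table. The role name, table ARN, and policy name below are hypothetical placeholders.

import json
import boto3

iam = boto3.client("iam")

TABLE_ARN = "arn:aws:dynamodb:us-east-1:123456789012:table/CustomerContacts"  # hypothetical

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:UpdateItem"],  # only what the contact-update flow needs
        "Resource": TABLE_ARN,
    }],
}

# Attach to the Lambda function's execution role (hypothetical role name).
iam.put_role_policy(
    RoleName="pet-supply-contact-update-lambda-role",
    PolicyName="AllowUpdateCustomerContacts",
    PolicyDocument=json.dumps(policy),
)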
Q5. For real-time network and application layer protection against vulnerability exploits and brute force attacks, which AWS service should be used?
✓ Correct: D. AWS Network Firewall provides real-time network and application layer filtering with IPS capabilities.
How to Think About This:
AWS security services fall into 4 distinct categories. When you see a question, first classify what it's asking for:
• PROTECT (block/filter traffic) → Network Firewall, WAF, Security Groups, NACLs
• DETECT (find threats) → GuardDuty, Security Hub
• ASSESS (scan for vulnerabilities) → Inspector
• LOG (record for analysis) → VPC Flow Logs, CloudTrail
This question says "real-time protection" and "against attacks" — that's the PROTECT category. Only a firewall actively blocks traffic in real-time.
Key Concepts:
AWS Network Firewall — A managed firewall service deployed inside your VPC that inspects traffic in real-time. Key capabilities: (1) Intrusion Prevention System (IPS) — detects and blocks known exploit signatures, (2) Stateful packet inspection — tracks connection state for protocol-level filtering, (3) Application-layer filtering — inspects HTTP/TLS traffic, blocks malicious payloads, (4) Suricata-compatible rules — uses industry-standard rule format to match brute force patterns, SQL injection, port scans, etc. It sits in a dedicated firewall subnet and all traffic routes through it via VPC route tables.
Network Layer vs Application Layer — Network layer (Layer 3/4) filters by IP, port, protocol. Application layer (Layer 7) inspects actual content — HTTP headers, request bodies, TLS metadata. AWS Network Firewall handles both layers, which is why it's the answer when the question mentions both.
Why D is correct: The question asks for three things: (1) real-time, (2) network AND application layer, (3) protection against exploits and brute force. AWS Network Firewall is the only service that actively blocks traffic at both layers in real-time using IPS rules. It doesn't just detect or log — it prevents the attack from reaching your resources.
Why others are wrong:
A) AWS Inspector — Inspector is an assessment tool, not protection. It scans EC2 instances, Lambda functions, and container images for known vulnerabilities (CVEs) and network exposure. It produces a report saying "you have vulnerability X" — but it does not block any traffic. Think of it as a health checkup, not a bodyguard.
B) GuardDuty — GuardDuty is a threat detection service. It analyzes CloudTrail logs, VPC Flow Logs, and DNS logs using machine learning to detect suspicious activity (like unusual API calls or communication with known malicious IPs). It generates findings (alerts) but does not block anything. It tells you "someone is attacking" — but doesn't stop them.
C) VPC Flow Logs — Flow Logs are purely a logging mechanism. They record metadata about traffic (source/dest IP, port, protocol, accept/reject) flowing through network interfaces. They are used after the fact for analysis and troubleshooting. They do not inspect content, do not block traffic, and have no real-time protection capability.
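As an illustration only (not a full deployment), a stateful Network Firewall rule group can carry Suricata-style rules, for example a basic SSH brute-force threshold. The rule group name, capacity, and the rule itself are hypothetical examples; a firewall policy, a firewall endpoint, and VPC route-table changes are still required before any traffic is inspected.

import boto3

nfw = boto3.client("network-firewall", region_name="us-east-1")

# Hypothetical Suricata rule: drop a source that opens many SSH connections quickly.
suricata_rules = (
    'drop tcp any any -> any 22 (msg:"Possible SSH brute force"; '
    "flow:to_server; threshold:type both, track by_src, count 10, seconds 60; "
    "sid:1000001; rev:1;)"
)

nfw.create_rule_group(
    RuleGroupName="brute-force-ips-rules",
    Type="STATEFUL",
    Capacity=100,
    RuleGroup={"RulesSource": {"RulesString": suricata_rules}},
    Description="Example IPS rule group for brute force protection",
)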
Q6. For real-time network and application layer protection against vulnerability exploits and brute force attacks, which AWS service should be used?
✓ Correct: D. AWS Network Firewall provides real-time network and application layer filtering with IPS capabilities.
How to Think About This:
This is a duplicate of Q5 — and the exam may repeat similar questions with slightly different wording. The same mental model applies: classify what the question is asking for — PROTECT, DETECT, ASSESS, or LOG. "Real-time protection against attacks" = PROTECT category = firewall/filtering service.
Key Concepts:
AWS Network Firewall — A managed, stateful firewall deployed inside your VPC. It provides: (1) Intrusion Prevention System (IPS) that detects and blocks known exploit patterns, (2) Deep packet inspection at both network (Layer 3/4) and application (Layer 7) layers, (3) Suricata-compatible rules for matching brute force attempts, SQL injection payloads, port scans, and more. Traffic routes through a dedicated firewall subnet via VPC route tables.
Protection vs Detection vs Assessment vs Logging — The four pillars of AWS security services:
• PROTECT: Network Firewall, WAF, Security Groups, NACLs — actively block traffic
• DETECT: GuardDuty, Security Hub — find threats and alert
• ASSESS: Inspector — scan for vulnerabilities
• LOG: VPC Flow Logs, CloudTrail — record for analysis
Why D is correct: Only AWS Network Firewall provides real-time, inline traffic filtering with IPS capabilities at both network and application layers. It actively blocks malicious traffic before it reaches your resources — the only service in the options that does this.
Why others are wrong:
A) AWS Inspector — Assessment tool that scans for vulnerabilities (CVEs) in EC2, Lambda, and containers. It produces reports but does not block traffic. It's a health checkup, not a bodyguard.
B) GuardDuty — Threat detection service analyzing CloudTrail, VPC Flow Logs, and DNS logs. It generates findings (alerts) but does not block anything. Detective, not preventive.
C) VPC Flow Logs — Passive logging of network traffic metadata (source/dest IP, port, accept/reject). No inspection, no blocking, no real-time protection capability.
Q7. For strict compliance with the Principle of Least Privilege, how should you secure a decryption service in a healthcare workflow?
✓ Correct: D. Programmatically grant and revoke access to the CMK just before and after decryption.
How to Think About This:
When a question mentions "Principle of Least Privilege" + "encryption/decryption", think about time-based access. The strictest form of least privilege isn't just limiting who can access a key — it's limiting when they can access it. The gold standard: grant permission → perform the operation → immediately revoke permission. This is called Just-In-Time (JIT) access.
Key Concepts:
CMK (Customer Master Key) — A key in AWS KMS (Key Management Service) used to encrypt and decrypt data. In healthcare, patient data (PHI — Protected Health Information) is encrypted at rest using a CMK. To read the data, a service must call kms:Decrypt with that CMK — but who gets permission to call kms:Decrypt and for how long is the security question.
KMS Grants — A lightweight, temporary permission mechanism in KMS. Unlike IAM policies (which are persistent), grants can be created and retired programmatically in milliseconds. The workflow: (1) CreateGrant — gives a service permission to use the CMK for decryption, (2) Service performs Decrypt, (3) RetireGrant — immediately removes the permission. The access window is seconds, not permanent (see the sketch after this question's explanation).
Why Healthcare Demands This — HIPAA compliance requires strict access controls over PHI. If a decryption key is permanently accessible, a compromised service could decrypt all patient records at any time. With JIT grants, even a compromised service can only decrypt during the brief window the grant is active — massively reducing the blast radius.
Why D is correct: Using KMS grants to programmatically grant access just before decryption and revoke immediately after is the strictest implementation of least privilege. The access exists only for the exact duration needed — typically seconds. This is the AWS-recommended pattern for sensitive workloads like healthcare, finance, and government.
Why others are wrong:
A) Use a grant constraint and enable CloudTrail alerts — Grant constraints (like EncryptionContextEquals) add conditions to grants, which is good but incomplete. CloudTrail alerts are detective controls — they notify you after unauthorized access happens. The question asks for prevention (least privilege), not detection. Monitoring alone doesn't enforce least privilege.
B) Manipulate IAM policies programmatically — While technically possible, IAM policy changes are slow (eventual consistency, can take seconds to minutes to propagate). They're also heavy-weight — IAM policies are meant to be relatively stable, not toggled on/off per request. KMS grants are purpose-built for temporary, per-operation access and propagate immediately.
C) Consider a more monolithic architecture — A monolithic architecture is the opposite of a security best practice. It concentrates all services into one, meaning one compromise exposes everything. Microservices with isolated permissions are more secure. This answer doesn't address the encryption access problem at all.
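A minimal sketch of the just-in-time pattern in answer D: create a grant, decrypt, then retire the grant immediately. The key ARN, grantee role ARN, and ciphertext are hypothetical placeholders, and in a real workflow the Decrypt call is made with the grantee service's credentials rather than the administrator's.

import boto3

kms = boto3.client("kms", region_name="us-east-1")

KEY_ID = "arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab"  # hypothetical
GRANTEE = "arn:aws:iam::123456789012:role/decryption-service-role"                      # hypothetical

# 1) Grant decrypt access only for this operation.
grant = kms.create_grant(
    KeyId=KEY_ID,
    GranteePrincipal=GRANTEE,
    Operations=["Decrypt"],
)

try:
    # 2) The service decrypts while the grant is active; the grant token
    #    avoids waiting for grant propagation.
    plaintext = kms.decrypt(
        KeyId=KEY_ID,
        CiphertextBlob=b"...",  # ciphertext of the patient record (placeholder)
        GrantTokens=[grant["GrantToken"]],
    )["Plaintext"]
finally:
    # 3) Close the access window immediately after use.
    kms.retire_grant(KeyId=KEY_ID, GrantId=grant["GrantId"])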
Q8. For which DNS configurations can you enable Route 53 DNS Query Logging? (Select all that apply)
✓ Correct: C, D. Route 53 DNS Query Logging works for public hosted zones and private hosted zones — but only when resolved by Route 53's own DNS servers.
How to Think About This:
The key rule is simple: Route 53 can only log queries that Route 53 itself handles. If DNS resolution goes through an external DNS server (like GoDaddy, Cloudflare, or on-premises DNS), Route 53 never sees those queries, so it can't log them. Think of it like a security camera — it can only record what happens in its own building, not someone else's.
Key Concepts:
DNS Hosted Zone — A container for DNS records (A, CNAME, MX, etc.) for a domain. Two types:
• Public Hosted Zone — Resolves domain names on the public internet (e.g., globesec.ai → 54.x.x.x). When you create one in Route 53, AWS assigns 4 name servers (e.g., ns-123.awsdns-45.com). Your domain registrar must point to these Route 53 name servers for Route 53 to handle queries.
• Private Hosted Zone — Resolves domain names only within your VPC (e.g., db.internal → 10.0.1.50). Uses Route 53 Resolver, which automatically handles DNS for resources inside the VPC.
Route 53 DNS Query Logging — Logs every DNS query that Route 53 resolves, including: domain name queried, query type (A, AAAA, CNAME), response code, timestamp. Logs are sent to CloudWatch Logs. This is essential for security monitoring — detecting DNS exfiltration, C2 callbacks, or unauthorized domain lookups.
External DNS Servers — If your domain's name servers point to a third-party provider (not Route 53), those queries never touch Route 53 infrastructure. Route 53 has no visibility into queries handled by external servers, so query logging is impossible for those configurations.
Why C and D are correct:
C) Public hosted zone with Route 53 servers — Route 53 handles the DNS resolution, so it can log every query. This is the standard setup when your domain's NS records point to Route 53's name servers.
D) Private hosted zone with Route 53 servers — Route 53 Resolver handles DNS for your VPC internally. All private zone queries go through Route 53 Resolver, so they can be logged.
Why others are wrong:
A) Public hosted zone with external DNS servers — If you created a public hosted zone in Route 53 but your domain's NS records still point to an external DNS provider (e.g., GoDaddy), queries go to that external provider, not Route 53. Route 53 never sees the traffic, so it cannot log anything. The hosted zone exists in Route 53 but is effectively unused.
B) Private hosted zone with external DNS servers — Private hosted zones are designed to work with Route 53 Resolver inside a VPC. If DNS queries are being forwarded to external DNS servers (via Route 53 Resolver outbound endpoints), the resolution happens externally. Route 53 cannot log queries it forwards to third-party resolvers — it only logs queries it resolves itself.
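For a public hosted zone served by Route 53's name servers (answer C), query logging is enabled by pointing the zone at a CloudWatch Logs log group, as sketched below. The hosted zone ID and log group ARN are hypothetical; the log group must live in us-east-1 and have a resource policy that allows Route 53 to write to it.

import boto3

route53 = boto3.client("route53")

route53.create_query_logging_config(
    HostedZoneId="Z0EXAMPLE12345",  # hypothetical public hosted zone ID
    CloudWatchLogsLogGroupArn=(
        "arn:aws:logs:us-east-1:123456789012:log-group:/aws/route53/globesec.ai"  # hypothetical
    ),
)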
Q9. How can Route 53 help your application handle a sudden spike in traffic? (Select all that apply)
✓ Correct: B, C, D. These three routing policies actively distribute traffic across multiple endpoints, which helps absorb a sudden spike.
How to Think About This:
When a question says "handle a traffic spike", think: distribute the load across multiple endpoints. The key word is "spike" — you need to spread traffic out, not just switch from one endpoint to another. Ask yourself: does this routing policy send traffic to multiple destinations simultaneously? If yes, it helps with spikes.
Key Concepts — Route 53 Routing Policies:
• Weighted Routing / Multi-AZ Distribution — Splits traffic across multiple resources by percentage (e.g., 70% to AZ-A, 30% to AZ-B). Directly spreads the load across availability zones. Best for spike handling.
• Latency-Based Routing — Routes each user to the region with lowest latency. If you have endpoints in us-east-1, eu-west-1, and ap-southeast-1, traffic is naturally distributed across all three regions based on user location. More endpoints = more capacity to absorb spikes.
• Geographic (Geolocation) Routing — Routes based on user's country/continent. US users go to US servers, EU users to EU servers. Like latency-based, this distributes traffic across multiple regional endpoints, spreading the load geographically.
Failover Routing — An active-passive pattern. Only ONE endpoint receives traffic at a time. The secondary sits idle until the primary fails a health check. During a traffic spike, failover does not help because: (1) all traffic still hits a single endpoint, (2) the secondary only activates if the primary goes DOWN, not if it's just busy, (3) it doesn't distribute — it switches. Failover is for disaster recovery, not load distribution.
Why B, C, D are correct: All three send traffic to multiple endpoints simultaneously, effectively spreading the spike across several resources that share the load.
Why A is wrong:
A) Failover routing — Failover is active-passive, meaning only one endpoint handles traffic at any time. Think of it as having a backup generator — it only turns on when the power goes out, not when you need more power. A traffic spike needs more capacity (distribute), not a backup (switch). Even if the spike crashes the primary and failover kicks in, you've just moved the entire spike to the secondary, which may also be overwhelmed.
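A sketch of the weighted routing pattern from answer B: two records for the same name split traffic 70/30 across endpoints. The hosted zone ID, record name, and target IPs are hypothetical placeholders.

import boto3

route53 = boto3.client("route53")

def weighted_record(ip, identifier, weight):
    # One weighted A record; records sharing the same name split traffic by weight.
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.globesec.ai",  # hypothetical record name
            "Type": "A",
            "SetIdentifier": identifier,
            "Weight": weight,
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        },
    }

route53.change_resource_record_sets(
    HostedZoneId="Z0EXAMPLE12345",  # hypothetical hosted zone
    ChangeBatch={"Changes": [
        weighted_record("203.0.113.10", "az-a", 70),
        weighted_record("203.0.113.20", "az-b", 30),
    ]},
)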
Q10. How can a consultant review API events across multiple AWS accounts?
✓ Correct: C. Create an AWS Organization and use a single CloudTrail and S3 bucket.
How to Think About This:
Compare this with Q3 (similar topic, different answer). The key difference:
• Q3: "7 accounts, external auditor, fortnight" — no mention of AWS Organizations. Answer: cross-account bucket policies (manual setup).
• Q10: Answer option explicitly says "Create an AWS Organization" — this unlocks the Organization Trail feature, the cleanest and most scalable approach.
When "AWS Organizations" is an option and the question asks for centralized logging, that's the best answer because it's a single configuration that automatically covers all member accounts.
Key Concepts:
AWS Organizations — A service that lets you centrally manage multiple AWS accounts under one umbrella. The management account (formerly "master") can enforce policies and create shared resources across all member accounts.
Organization Trail — A special CloudTrail trail created from the management account that automatically logs API events from ALL member accounts into a single S3 bucket. No need to configure CloudTrail individually in each account — the organization trail handles it. Key benefits: (1) one trail covers all accounts, (2) one S3 bucket for all logs, (3) member accounts cannot disable or modify the trail, (4) new accounts added to the org are automatically included.
Consultant Access Pattern — For a consultant to review: create an IAM role in the logging account with read-only access to the S3 bucket, then give the consultant cross-account access to assume that role. One bucket, one role, complete visibility.
Why C is correct: An Organization Trail is the AWS-recommended approach for centralized, multi-account logging. One CloudTrail configuration in the management account + one S3 bucket = complete API event visibility across all accounts. It's the simplest, most scalable, and most secure option because member accounts cannot tamper with the trail.
Why others are wrong:
A) Separate CloudTrail and S3 buckets in each account — This is the opposite of centralization. The consultant would need access to each account's S3 bucket individually — impractical to manage and review. If you have 20 accounts, that's 20 separate buckets and 20 sets of permissions to configure.
B) Use CloudWatch Logs and a single S3 bucket — CloudWatch Logs is for application logs, metrics, and alarms — not for API event auditing. CloudTrail is the service that records API calls. You can't replace CloudTrail with CloudWatch for this use case. Also, CloudWatch doesn't natively export to S3 across accounts in a centralized way.
D) Create an Organization and enable individual CloudTrails — If you already have AWS Organizations, why configure CloudTrail individually in each account? The entire point of an Organization Trail is to avoid this. Individual trails mean individual management, individual S3 buckets (or manual cross-account bucket policies), and member accounts could disable their own trails. This defeats the purpose of having Organizations.
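A sketch of answer C, run from the Organizations management account. The trail and bucket names are hypothetical, and the central bucket policy plus AWS Organizations trusted access for CloudTrail are assumed to be in place already.

import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

# Run from the Organizations management account.
cloudtrail.create_trail(
    Name="org-wide-audit-trail",
    S3BucketName="org-cloudtrail-logs-example",  # single central bucket
    IsOrganizationTrail=True,                    # automatically covers every member account
    IsMultiRegionTrail=True,
)
cloudtrail.start_logging(Name="org-wide-audit-trail")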
Q11. How can you acquire AWS's PCI DSS Attestation of Compliance and Responsibility for an external audit?
✓ Correct: B. AWS Artifact is the self-service portal for on-demand access to AWS compliance reports.
How to Think About This:
When a question mentions "compliance documents", "attestation", "audit reports", or "certifications" from AWS, the answer is always AWS Artifact. Think of Artifact as AWS's document library — it's the one place to download all of AWS's compliance and security paperwork. No support tickets, no emails, no legal websites — just log in and download.
Key Concepts:
AWS Artifact — A free, self-service portal in the AWS Console that provides on-demand access to AWS's security and compliance documents. Two main sections:
• Artifact Reports — Download AWS compliance reports: PCI DSS Attestation of Compliance (AOC), SOC 1/2/3 reports, ISO 27001 certification, HIPAA documentation, FedRAMP reports, and more.
• Artifact Agreements — Review and accept agreements like the Business Associate Agreement (BAA) for HIPAA or the GDPR Data Processing Addendum (DPA).
PCI DSS — Payment Card Industry Data Security Standard. Any business that processes, stores, or transmits credit card data must comply. AWS is PCI DSS Level 1 certified (the highest level). The Attestation of Compliance (AOC) is the document proving AWS passed its PCI DSS audit — auditors need this to verify that the infrastructure layer meets compliance. The Responsibility Summary clarifies which controls AWS handles vs. what the customer is responsible for (shared responsibility model).
Why This Matters for Audits — When an external auditor asks "prove your cloud provider is PCI compliant," you don't need to audit AWS yourself. You download the AOC from Artifact and hand it over. This is part of the Shared Responsibility Model — AWS is responsible for security of the cloud (and proves it via Artifact documents), while you're responsible for security in the cloud.
Why B is correct: AWS Artifact is specifically designed for this purpose — instant, self-service access to compliance documents including PCI DSS AOC. No waiting, no approvals needed. Available 24/7 in the AWS Console under Artifact.
Why others are wrong:
A) AWS Macie — Macie is a data security service that uses machine learning to discover and protect sensitive data (like credit card numbers, PII) stored in S3. It helps you achieve PCI compliance by finding exposed card data, but it does not provide compliance documents.
C) AWS IAM Console — IAM manages users, roles, and permissions. It has nothing to do with compliance documentation. You use IAM to control who can access what, not to download audit reports.
D) AWS WorkDocs — WorkDocs is a document collaboration and storage service (like Google Drive or SharePoint). It's for your own documents, not AWS compliance reports.
E) Submit a Support Case — Before Artifact existed, this was how you obtained compliance documents. Now it's unnecessary — Artifact provides instant self-service access. No need to wait for a support response.
F) AWS Legal Services website — There is no "AWS Legal Services website" that distributes compliance documents. Legal matters like agreements are handled through Artifact Agreements, not a separate website.
Q12.How can you best defend against SQL injection attacks as per the latest Security Penetration Test Audit?
✓ Correct: D. Implement AWS WAF to block SQL code in requests.
How to Think About This:
AWS has two services that sound similar but protect against completely different attack types. Memorize this:
• AWS WAF → Layer 7 (Application) → Protects against: SQL injection, XSS, bad bots, request flooding. Inspects HTTP request content (URL, body, headers, cookies).
• AWS Shield → Layer 3/4 (Network) → Protects against: DDoS attacks only (SYN floods, UDP reflection, volumetric attacks). Does NOT inspect request content.
Simple rule: if the question mentions SQL injection, XSS, or request content → WAF. If it mentions DDoS → Shield.
Key Concepts:
AWS WAF (Web Application Firewall) — Sits in front of CloudFront, ALB, or API Gateway and inspects every HTTP/HTTPS request. You create rules that examine request content:
• SQL Injection Match — Built-in rule that detects SQL patterns like ' OR 1=1 --, UNION SELECT, DROP TABLE in query strings, body, or headers. Uses pattern matching and transformation functions (URL decode, HTML decode) to catch encoded attacks.
• AWS Managed Rule Groups — Pre-built rulesets maintained by AWS, including AWSManagedRulesSQLiRuleSet specifically for SQL injection. Just enable it — no need to write custom rules.
• Actions: Block (reject the request), Allow, or Count (log but don't block, useful for testing).
AWS Shield — Protects against Distributed Denial of Service (DDoS) attacks. Shield Standard is free and automatic on all AWS resources. Shield Advanced adds real-time metrics, 24/7 DDoS Response Team, and cost protection. But Shield operates at the network/transport layer — it sees packet volumes, connection rates, and protocol anomalies. It has zero visibility into HTTP request content, so it cannot detect SQL injection. A SQL injection is a single, normal-looking HTTP request — Shield wouldn't even notice it.
SQL Injection Explained — An attacker inserts malicious SQL code into input fields (login forms, search boxes, URLs). Example: a login form where the attacker types admin' OR '1'='1 as the username. If the backend directly concatenates this into a SQL query, it becomes SELECT * FROM users WHERE username='admin' OR '1'='1' — which returns all users. WAF catches this pattern before it reaches your application.
Why D is correct: AWS WAF inspects HTTP request content and has purpose-built SQL injection detection rules. It can block requests containing SQL patterns before they ever reach your application. This is exactly what a penetration test audit would recommend.
Why others are wrong:
A) Custom NACL filter with Lambda — NACLs (Network Access Control Lists) operate at Layer 3/4 — they filter by IP address, port, and protocol. They cannot inspect HTTP request content or detect SQL patterns. You can't write a NACL rule that says "block requests containing SQL code." Lambda could theoretically process requests, but this is a custom, fragile solution when WAF does it natively.
B) AWS Shield — Shield protects against DDoS only. A SQL injection is a single, well-formed HTTP request — not a flood of traffic. Shield sees it as normal traffic and lets it through. Shield and WAF complement each other but solve completely different problems.
C) CloudFormation for SQL restrictions on CloudFront — CloudFormation is an infrastructure-as-code tool that deploys resources from templates. It doesn't filter or inspect traffic. You could use CloudFormation to deploy a WAF configuration on CloudFront, but CloudFormation itself provides zero security filtering.
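For illustration, a minimal sketch in Python (boto3) of attaching the AWS-managed SQLi rule group to a web ACL for CloudFront; the ACL and metric names are placeholder assumptions, not part of the question.

import boto3

# Web ACLs for CloudFront must be created in us-east-1 with Scope='CLOUDFRONT'.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

response = wafv2.create_web_acl(
    Name="block-sqli",                       # placeholder name
    Scope="CLOUDFRONT",
    DefaultAction={"Allow": {}},             # allow by default, block only on rule matches
    Rules=[
        {
            "Name": "aws-managed-sqli",
            "Priority": 0,
            "Statement": {
                "ManagedRuleGroupStatement": {
                    "VendorName": "AWS",
                    "Name": "AWSManagedRulesSQLiRuleSet",
                }
            },
            # Managed rule groups take OverrideAction rather than Action.
            "OverrideAction": {"None": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "sqli-rule-group",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "block-sqli-acl",
    },
)
print(response["Summary"]["ARN"])  # associate this ARN with the CloudFront distribution

A common rollout pattern is to start with the rule group in Count mode, review the CloudWatch metrics, and only then switch it to blocking.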
Q13.How can you centrally manage permissions across multiple AWS accounts to ensure nobody can disable AWS CloudTrail?
✓ Correct: D. Consolidate accounts under one AWS Organization and employ a service control policy (SCP) that restricts disabling of AWS CloudTrail.
How to Think About This:
When you see "centrally manage" + "multiple AWS accounts", immediately think AWS Organizations + SCPs. The keyword "centrally" means a single control plane. IAM policies are per-account (decentralized), Lambda is reactive (not preventive), and SCPs only exist at the Organization level — not inside individual accounts.
Key Concepts:
• AWS Organizations — A service that lets you consolidate multiple AWS accounts into a single management hierarchy. It provides a management account (formerly master) and member accounts organized into Organizational Units (OUs).
• Service Control Policies (SCPs) — Guardrails attached to the Organization root, OUs, or individual accounts. SCPs define the maximum permissions available — they act as permission boundaries. Even if an IAM user has AdministratorAccess, an SCP can override and deny specific actions.
• Key distinction: SCPs are preventive controls (stop actions before they happen), not detective controls (find actions after they happen). SCPs do not grant permissions — they only restrict what is allowed.
Example SCP to protect CloudTrail:
{"Effect":"Deny","Action":["cloudtrail:StopLogging","cloudtrail:DeleteTrail"],"Resource":"*"}
Why D is correct: AWS Organizations is the only way to centrally manage policies across multiple accounts from a single point. An SCP attached at the Organization root applies to every member account automatically. Nobody in any member account — not even account administrators — can override it. This is the AWS-recommended approach for enforcing non-negotiable security controls like CloudTrail logging.
Why others are wrong:
A) Create an IAM policy in each account — This is decentralized, not central. You'd have to manage policies in every account individually. Worse, an account administrator could simply remove or modify the IAM policy. There's no enforcement mechanism from above.
B) Create a service control policy in each AWS account — SCPs do not exist inside individual accounts. SCPs are a feature of AWS Organizations and can only be created from the management account. This answer reveals a misunderstanding of where SCPs live. Also, the phrasing "limit write access" is vague — you want to specifically deny StopLogging and DeleteTrail.
C) Lambda function triggered by CloudWatch Events — This is a reactive/detective approach, not preventive. CloudTrail would be disabled first, then the Lambda would try to re-enable it. During that gap (even seconds), audit logging is lost. Also, this introduces operational complexity and potential failure points (Lambda throttling, permission issues, regional deployment). Prevention is always better than reaction for compliance.
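As a hedged sketch, the example SCP above could be created and attached at the Organization root roughly like this (Python/boto3); the root ID, policy name, and the extra cloudtrail:UpdateTrail action are assumptions, and the call must run from the management account.

import json
import boto3

org = boto3.client("organizations")

# The condensed statement above, wrapped in the standard policy skeleton.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyCloudTrailTampering",
            "Effect": "Deny",
            "Action": [
                "cloudtrail:StopLogging",
                "cloudtrail:DeleteTrail",
                "cloudtrail:UpdateTrail",   # assumption: also block weakening the trail config
            ],
            "Resource": "*",
        }
    ],
}

policy = org.create_policy(
    Name="protect-cloudtrail",
    Description="Prevent member accounts from disabling CloudTrail",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)

# "r-xxxx" is a placeholder root ID; attaching at the root covers every member account.
org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"], TargetId="r-xxxx")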
Q14.How can you detect unauthorized port scanning activities from your EC2 instances? Select All
✓ Correct: A, B, D. GuardDuty, VPC Flow Logs, and VPC Traffic Mirroring can all detect unauthorized port scanning from EC2 instances.
How to Think About This:
When you see "detect" + "port scanning" + "EC2 instances", think about network-level monitoring tools. Port scanning is outbound network behavior from your instances, so you need tools that monitor network traffic originating from EC2. WAF operates at Layer 7 (HTTP) in front of web endpoints — it never sees outbound EC2 traffic.
Key Concepts:
• Amazon GuardDuty — An intelligent threat detection service that continuously monitors for malicious activity. It analyzes VPC Flow Logs, DNS logs, and CloudTrail events. It has a specific finding type called Recon:EC2/Portscan that fires when an EC2 instance is probing ports on other hosts. Zero setup needed — just enable it.
• VPC Flow Logs — Capture metadata about IP traffic flowing through your VPC network interfaces: source/destination IPs, ports, protocol, packet/byte counts, and accept/reject status. By analyzing flow logs, you can identify an instance making connection attempts to many different ports on a target — the signature pattern of a port scan.
• VPC Traffic Mirroring — Copies actual network packets (not just metadata) from an ENI and sends them to a target for deep packet inspection. You route mirrored traffic to an IDS/IPS (like Suricata or Snort) which can detect port scan patterns, payload anomalies, and other threats from full packet analysis.
• AWS WAF — A web application firewall that inspects HTTP/HTTPS requests arriving at CloudFront, ALB, or API Gateway. It operates at Layer 7 only and only on inbound web traffic. It has no visibility into outbound EC2 network behavior.
Why A, B, D are correct:
A) GuardDuty — Purpose-built for this exact scenario. The Recon:EC2/Portscan finding type automatically detects when an EC2 instance is scanning ports on remote hosts.
B) VPC Flow Logs — Flow log records showing the same source instance connecting to dozens or hundreds of different destination ports in a short time window is a clear indicator of port scanning. Tools like Athena or CloudWatch Insights can query these patterns.
D) VPC Traffic Mirroring — Provides the deepest visibility by capturing full packets. An IDS/IPS analyzing this data can detect port scans using signature-based or anomaly-based detection.
Why others are wrong:
C) Use AWS WAF logs — AWS WAF only processes inbound HTTP/HTTPS requests destined for web endpoints behind CloudFront, ALB, or API Gateway. Port scanning is outbound network traffic from EC2 instances at the network/transport layer (TCP SYN packets to various ports). WAF never sees this traffic — it's the wrong tool entirely. Think of WAF as a bouncer at the front door of a web server; it has no idea what people inside the building are doing.
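A small self-contained Python sketch of the flow-log analysis idea: flag any source that touches many distinct destination ports, which is the signature described above. Field names follow the default flow-log format; the sample records and threshold are assumptions.

from collections import defaultdict

# Each record represents a parsed VPC Flow Log line (only relevant fields shown).
records = [
    {"srcaddr": "10.0.1.23", "dstaddr": "10.0.2.5", "dstport": p, "action": "REJECT"}
    for p in range(20, 120)          # one instance probing 100 different ports
] + [
    {"srcaddr": "10.0.1.9", "dstaddr": "10.0.2.5", "dstport": 443, "action": "ACCEPT"},
]

def detect_port_scans(flow_records, threshold=50):
    """Return sources that touched more than `threshold` distinct destination ports."""
    ports_by_source = defaultdict(set)
    for rec in flow_records:
        ports_by_source[rec["srcaddr"]].add(rec["dstport"])
    return {src: len(ports) for src, ports in ports_by_source.items() if len(ports) > threshold}

print(detect_port_scans(records))    # {'10.0.1.23': 100}

In practice you would run the same aggregation with Athena or CloudWatch Logs Insights against the delivered flow logs rather than in-memory Python.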
Q15.How can you ensure that clients only access your S3-stored files via CloudFront?
✓ Correct: C. Create an Origin Access Identity (OAI) to restrict S3 access exclusively through CloudFront.
How to Think About This:
When you see "only via CloudFront" + "S3", the answer is always Origin Access Identity (OAI) or the newer Origin Access Control (OAC). The question is about restricting the access path to S3, not about cross-origin headers or bucket permissions alone. The OAI acts as a special CloudFront identity that S3 trusts — and you configure S3 to trust only that identity.
Key Concepts:
• Origin Access Identity (OAI) — A special CloudFront identity that you create and associate with your distribution. CloudFront uses this identity when fetching objects from S3. You then update the S3 bucket policy to only allow the OAI principal, and remove all public access. Result: direct S3 URLs return 403 Forbidden, but CloudFront URLs work.
• The flow:
User → CloudFront URL → CloudFront (presents OAI) → S3 (allows OAI) → Content delivered
User → Direct S3 URL → S3 (no OAI, no public access) → 403 Forbidden
• Origin Access Control (OAC) — The newer, recommended replacement for OAI that supports additional features like SSE-KMS encrypted objects and all S3 regions. Same concept, improved implementation.
Why C is correct: Creating an OAI is the first and essential step. Without creating the OAI, you cannot reference it in the S3 bucket policy. Once created, CloudFront is associated with this identity, and S3 grants access only to this identity. The combination of OAI + bucket policy ensures all access must flow through CloudFront.
Why others are wrong:
A) Enable Cross-Origin Resource Sharing (CORS) — CORS is about allowing web browsers to make cross-domain requests (e.g., JavaScript on domain-A fetching resources from domain-B). It has nothing to do with restricting which service can access S3. CORS headers control browser behavior, not server-side access paths.
B) Only allow read permission to the origin access identity in S3 — This describes configuring the S3 bucket policy after creating the OAI. It's the second step, not the first. You cannot grant permissions to an OAI that doesn't exist yet. The question asks how to ensure CloudFront-only access — the answer is creating the OAI, which is the prerequisite to everything else.
D) Select the 'Restrict Access To CloudFront Only' option in S3 — This option does not exist in S3. There is no checkbox or setting in S3 that magically restricts access to CloudFront. The restriction is achieved through the OAI mechanism and bucket policy configuration, not through an S3 console option.
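The bucket-policy half of the OAI setup might look like the following hedged Python (boto3) sketch; the bucket name and OAI ID are placeholders, and new distributions should prefer OAC as noted above.

import json
import boto3

s3 = boto3.client("s3")

bucket = "my-cdn-origin-bucket"      # placeholder
oai_id = "E2EXAMPLE123456"           # placeholder OAI ID from the CloudFront distribution

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontOAIReadOnly",
            "Effect": "Allow",
            # The OAI is referenced as a special CloudFront IAM "user" principal.
            "Principal": {
                "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {oai_id}"
            },
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }
    ],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
# With Block Public Access enabled and no other principals granted,
# only requests arriving via the CloudFront distribution succeed.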
Q16.How can you restrict the use of your KMS master key so that it only services requests coming from S3?
✓ Correct: B. The kms:ViaService condition key restricts a KMS key so it can only be used when the request comes through a specific AWS service.
How to Think About This:
When you see "restrict KMS key" + "only from [specific service]", think kms:ViaService. The keyword is "via" — you want requests to come via (through) a particular service. Remember:
Key Concepts:
• kms:ViaService — A KMS condition key that restricts key usage to requests made on behalf of a specific AWS service. The value is in the format servicename.region.amazonaws.com (e.g., s3.us-east-1.amazonaws.com). When set, the KMS key can only be used when S3 (or the specified service) makes the KMS API call on the user's behalf — direct KMS API calls are denied.
• kms:CallerAccount — Restricts key usage to requests from a specific AWS account ID. Useful for cross-account key sharing, but doesn't restrict which service can use the key.
• kms:KeyOrigin — A condition based on where the key material came from: AWS_KMS (generated by KMS), EXTERNAL (imported), or AWS_CLOUDHSM (custom key store). This describes the key itself, not who's calling it.
• kms:GrantIsForAWSResource — A boolean condition that checks whether a grant was created by an AWS service. It doesn't restrict which service uses the key.
Example policy condition:
"Condition": {"StringEquals": {"kms:ViaService": "s3.us-east-1.amazonaws.com"}}
Why B is correct: kms:ViaService is the only condition key that filters by which AWS service is making the KMS request. When S3 encrypts or decrypts an object using SSE-KMS, S3 calls KMS on the user's behalf. The kms:ViaService condition ensures the key only responds to those S3-initiated calls, blocking any direct usage of the key from the CLI, SDK, or other services.
Why others are wrong:
A) kms:CallerAccount — This restricts by account, not by service. It answers "which account can use this key?" not "which service can use this key?" You could restrict the key to account 123456789012, but users in that account could still use it from any service — EC2, Lambda, S3, or directly.
C) kms:KeyOrigin — This describes where the key material was generated, not who is using the key. It's about the key's provenance (AWS-managed vs. imported vs. CloudHSM-backed), which has nothing to do with restricting usage to S3.
D) kms:GrantIsForAWSResource — This is a boolean condition for grant operations. It checks if a grant is being created for use by an AWS service integration. It doesn't let you specify which service, and it only applies to grant creation, not to regular encrypt/decrypt operations.
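A hedged sketch of a key policy that uses kms:ViaService, applied with Python (boto3); the key ID, account ID, role name, and the admin statement are placeholder assumptions, and a production key policy needs additional care.

import json
import boto3

kms = boto3.client("kms", region_name="us-east-1")

key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowKeyAdministration",            # placeholder admin statement
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:root"},
            "Action": "kms:*",
            "Resource": "*",
        },
        {
            "Sid": "AllowUseOnlyViaS3",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:role/app-role"},
            "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey"],
            "Resource": "*",
            # Requests are honored only when S3 calls KMS on the principal's behalf.
            "Condition": {"StringEquals": {"kms:ViaService": "s3.us-east-1.amazonaws.com"}},
        },
    ],
}

kms.put_key_policy(
    KeyId="1234abcd-12ab-34cd-56ef-1234567890ab",       # placeholder key ID
    PolicyName="default",
    Policy=json.dumps(key_policy),
)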
Q17.How can you segregate resources between Development, Testing, and Production environments in a rapidly growing organization?
✓ Correct: C. Create a separate AWS account for each environment (Production, Development, Testing) for the strongest isolation.
How to Think About This:
When you see "segregate" + "environments" + "rapidly growing", think separate AWS accounts. The words "rapidly growing" are a clue — a single account will become increasingly difficult to manage as the organization scales. AWS accounts provide the hardest security boundary available. IAM and VPCs within a single account can be misconfigured, but account-level separation is absolute.
Key Concepts:
• AWS Account as a security boundary — Each AWS account is a completely isolated container. Resources in one account cannot access resources in another account by default. This is the strongest form of isolation AWS offers — stronger than IAM policies, VPCs, or any other mechanism within a single account.
• AWS Organizations multi-account strategy — AWS recommends using AWS Organizations to manage multiple accounts. The typical structure is:
Management Account
→ Prod OU → Prod Account
→ Dev OU → Dev Account
→ Test OU → Test Account
• Benefits of multi-account: independent billing, separate IAM namespaces, blast radius containment (a breach in Dev cannot reach Prod), separate service limits, and compliance isolation.
Why C is correct: Separate accounts per environment is the AWS Well-Architected Framework best practice. Each environment gets its own account with its own IAM users, roles, VPCs, and resources. A developer with full admin access in the Dev account has zero access to Production. Billing is automatically separated. Service limits in one environment don't affect another. For a rapidly growing organization, this approach scales cleanly with AWS Organizations managing everything centrally.
Why others are wrong:
A) IAM Users, Groups, and Roles in a single account — IAM provides identity-based isolation, but everything lives in the same account. A misconfigured IAM policy or an overly broad role could grant Dev users access to Prod resources. As the organization grows, IAM policies become increasingly complex and error-prone. One credential compromise potentially affects all environments. This provides the weakest isolation of all options.
B) Separate VPCs in a single account — VPCs provide network isolation only. Resources in different VPCs cannot communicate by default (network level), but they still share the same IAM namespace, same billing, same service limits, and same account-level configurations. An IAM user with ec2:* permission can manage instances in all VPCs. Network isolation alone is insufficient for environment segregation.
D) Single account for common resources + separate accounts for projects — This is a valid multi-account pattern, but it doesn't match the question. The question asks about environment segregation (Dev/Test/Prod), not project segregation. Separating by project but keeping all environments in one project account still mixes Dev and Prod resources together. The correct approach is to separate by environment, not by project.
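A rough, hedged Python (boto3) sketch of building that structure from the management account; the OU names, account names, and emails are assumptions, and account creation is asynchronous.

import boto3

org = boto3.client("organizations")

# The organization root is the parent of the top-level OUs.
root_id = org.list_roots()["Roots"][0]["Id"]

for env in ("Production", "Development", "Testing"):
    ou = org.create_organizational_unit(ParentId=root_id, Name=env)
    # CreateAccount returns a status object to poll; once the account exists,
    # a follow-up move_account call places it under its OU.
    status = org.create_account(
        Email=f"aws+{env.lower()}@example.com",   # placeholder, must be unique per account
        AccountName=f"{env} Account",
    )
    print(env, ou["OrganizationalUnit"]["Id"], status["CreateAccountStatus"]["State"])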
Q18.How do you enforce the use of SSE-S3 encryption in an S3 bucket policy?
✓ Correct: C. The condition value AES256 corresponds to SSE-S3 encryption in the x-amz-server-side-encryption header.
How to Think About This:
When you see "enforce SSE-S3" in a bucket policy, you need to know the header value mapping. There are only two valid values for the
Key Concepts:
• S3 Server-Side Encryption Types and Header Values:
SSE-S3 → header value: AES256 — S3 manages the keys entirely
SSE-KMS → header value: aws:kms — AWS KMS manages the keys
SSE-C → no server-side-encryption header (customer provides key in separate headers)
• Bucket policy enforcement pattern: You create a Deny policy on s3:PutObject where the condition checks that the header is NOT the desired value. This rejects any upload that doesn't include the correct encryption header:
"Condition":{"StringNotEquals":{"s3:x-amz-server-side-encryption":"AES256"}}
• Important distinction: x-amz-server-side-encryption is an HTTP request header sent with the PutObject API call. In the bucket policy condition, you reference it as s3:x-amz-server-side-encryption (with the s3: prefix).
Why C is correct: AES256 is the exact string value that maps to SSE-S3 in the x-amz-server-side-encryption header. When you put this in a bucket policy condition with StringEquals, you're saying "only allow uploads that specify SSE-S3 encryption." This is how AWS defines the header value — it's not configurable or negotiable.
Why others are wrong:
A) aws:s3 — This is a fabricated value. There is no aws:s3 option for the server-side-encryption header. The only valid values are AES256 and aws:kms. This looks plausible because of the "aws:" prefix pattern, but it's not a real value.
B) CMK — CMK stands for "Customer Master Key" (now called KMS key), which is a KMS concept. It's not a valid value for the encryption header. You might think of CMK because SSE-KMS uses KMS keys, but even SSE-KMS uses the header value aws:kms, not CMK. And the question asks about SSE-S3, not SSE-KMS.
D) aws:kms — This is a valid value, but it corresponds to SSE-KMS, not SSE-S3. If you used this condition, you would be enforcing SSE-KMS encryption (where AWS KMS manages the encryption keys), which is a different encryption type than what the question asks for. Know the difference: AES256 = SSE-S3, aws:kms = SSE-KMS.
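Written out in full, the enforcement pattern looks roughly like this hedged Python (boto3) sketch; the bucket name is a placeholder.

import json
import boto3

s3 = boto3.client("s3")
bucket = "my-encrypted-bucket"       # placeholder

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUploadsWithoutSSES3",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            # Reject any PutObject whose encryption header is not AES256 (SSE-S3).
            "Condition": {
                "StringNotEquals": {"s3:x-amz-server-side-encryption": "AES256"}
            },
        }
        # A companion Deny using the Null condition on the same header is often
        # added to reject uploads that omit the header entirely.
    ],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))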
Q19.How do you limit access to a particular S3 bucket so that only your Lambda function can read from it? Select All
✓ Correct: A, C. Use both an IAM execution role policy on Lambda and an S3 bucket policy to create a two-sided access control.
How to Think About This:
When you see "only [specific resource] can access S3", think about both sides of the access equation: (1) the identity policy (IAM role attached to the Lambda function granting it permission to read S3), and (2) the resource policy (S3 bucket policy restricting who can access the bucket). Best practice is to lock down both sides — the caller needs permission to call, AND the resource needs to accept the caller.
Key Concepts:
• Lambda Execution Role — Every Lambda function runs with an IAM role (called the execution role). This role's policy determines what AWS services the function can access. To read from S3, the role needs s3:GetObject permission on the bucket/objects.
• S3 Bucket Policy — A resource-based policy attached to the bucket itself. You can specify a Principal (the Lambda execution role ARN) and grant s3:GetObject. To ensure only the Lambda can read, you also add a Deny statement for all principals except the Lambda role.
• Defense in depth: Using both policies together means if either side is misconfigured, the other still provides protection.
Access flow:
Lambda function → Assumes Execution Role → IAM policy allows s3:GetObject
S3 Bucket Policy → Only allows Principal: Lambda-Role-ARN → All others denied
Why A and C are correct:
A) IAM policy on Lambda for S3 read access — This is the identity-side control. The Lambda execution role gets a policy granting s3:GetObject on the target bucket. Without this, Lambda cannot make S3 API calls regardless of the bucket policy.
C) S3 bucket policy granting read to Lambda's IAM role — This is the resource-side control. The bucket policy uses the Lambda execution role ARN as the Principal and grants read access. Combined with removing all other access (no public access, no other principals), this ensures only the Lambda function's role can read from the bucket.
Why others are wrong:
B) Make the bucket public and restrict via Security Groups — Two problems. First, S3 does not use Security Groups. Security Groups are a VPC-level network firewall for EC2 instances, ENIs, and similar resources. S3 is a global service accessed via API calls, not network connections to an instance. Second, making a bucket public is the opposite of restricting access — it opens the bucket to the entire internet.
D) Use AWS Organization SCPs — SCPs can only deny or restrict permissions; they cannot grant permissions. An SCP saying "only this role can access the bucket" doesn't work because SCPs set maximum permission boundaries — they don't provide the actual access. You still need IAM policies and bucket policies to grant the access. Also, SCPs apply to entire accounts, not individual resources like a specific S3 bucket.
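A hedged Python sketch of the two policy documents side by side; the bucket, account ID, and role ARN are placeholders, and the deny condition on aws:PrincipalArn is one common way to say "everyone except this role".

import json

bucket = "my-algorithm-data"                                    # placeholder
lambda_role_arn = "arn:aws:iam::123456789012:role/reader-fn"    # placeholder execution role

# Identity side: attach to the Lambda execution role.
execution_role_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}

# Resource side: bucket policy that denies object reads to every other principal.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAllButLambdaRole",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
        "Condition": {"StringNotEquals": {"aws:PrincipalArn": lambda_role_arn}},
    }],
}

print(json.dumps(execution_role_policy, indent=2))
print(json.dumps(bucket_policy, indent=2))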
Q20.How do you secure end-to-end HTTPS traffic between users and an S3 bucket using a custom domain and CloudFront?
✓ Correct: B. Use a custom SSL certificate in CloudFront for the custom domain, and configure HTTPS for the origin fetch to S3 for true end-to-end encryption.
How to Think About This:
When you see "end-to-end HTTPS" + "custom domain" + "CloudFront" + "S3", break the connection into two separate legs:
Both legs must be HTTPS for "end-to-end" encryption. A custom domain requires a custom SSL certificate (the default
Key Concepts:
• Two legs of CloudFront traffic: User →[HTTPS]→ CloudFront →[HTTPS]→ S3 Origin
Leg 1: Viewer protocol — requires SSL cert matching your custom domain
Leg 2: Origin protocol — must be configured as "HTTPS Only" or "Match Viewer"
• Custom SSL Certificate — When using a custom domain (e.g., cdn.example.com), the default CloudFront certificate (*.cloudfront.net) does not match your domain name. Browsers will show a certificate mismatch error. You must import or request a certificate in AWS Certificate Manager (ACM) in us-east-1 for your custom domain and attach it to the CloudFront distribution.
• HTTPS Origin Fetch — CloudFront's "Origin Protocol Policy" setting. Set to HTTPS Only to ensure CloudFront fetches from S3 over HTTPS, encrypting the connection between CloudFront edge and S3.
• Encryption in transit vs. at rest — HTTPS = encryption in transit (data moving over the network). SSE-S3/SSE-KMS = encryption at rest (data stored on disk). End-to-end HTTPS has nothing to do with server-side encryption.
Why B is correct: This option correctly addresses both legs of the end-to-end connection. The custom SSL certificate handles the viewer-to-CloudFront leg (matching your custom domain). The HTTPS origin fetch handles the CloudFront-to-S3 leg. Together, data is encrypted from the user's browser all the way to S3, with no unencrypted segment.
Why others are wrong:
A) Default CloudFront certificate + SSE-KMS — The default certificate only covers *.cloudfront.net domains. If you're using a custom domain like cdn.example.com, browsers will reject the connection with a certificate mismatch error. Also, SSE-KMS is encryption at rest, not in transit — it doesn't help with HTTPS between CloudFront and S3.
C) Custom SSL certificate + SSE-S3 — Gets the first leg right (custom cert for custom domain) but fails on the second leg. SSE-S3 encrypts data at rest on disk, not in transit over the network. The origin fetch to S3 could still happen over HTTP if the origin protocol policy isn't set to HTTPS. End-to-end encryption requires HTTPS configuration, not at-rest encryption.
D) Default CloudFront certificate + SSE-C — Same certificate problem as option A (default cert doesn't cover custom domains). SSE-C (customer-provided keys) is also encryption at rest, not in transit. This option fails on both legs.
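A hedged Python fragment of the CloudFront DistributionConfig settings behind both legs; the ARN, domains, and origin values are placeholders, and the origin protocol setting shown applies to a custom-origin configuration.

# Fragment of a CloudFront DistributionConfig covering both HTTPS legs.
distribution_config_fragment = {
    "Aliases": {"Quantity": 1, "Items": ["cdn.example.com"]},           # custom domain
    "ViewerCertificate": {
        # ACM certificate for the custom domain; must live in us-east-1 for CloudFront.
        "ACMCertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/placeholder",
        "SSLSupportMethod": "sni-only",
        "MinimumProtocolVersion": "TLSv1.2_2021",
    },
    "DefaultCacheBehavior": {
        # Leg 1: force HTTPS between viewers and CloudFront.
        "ViewerProtocolPolicy": "redirect-to-https",
        "TargetOriginId": "s3-origin",
    },
    "Origins": {
        "Quantity": 1,
        "Items": [{
            "Id": "s3-origin",
            "DomainName": "my-bucket.s3.us-east-1.amazonaws.com",       # placeholder
            # Leg 2: force HTTPS on the origin fetch.
            "CustomOriginConfig": {
                "HTTPPort": 80,
                "HTTPSPort": 443,
                "OriginProtocolPolicy": "https-only",
            },
        }],
    },
}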
Q21.How does the hypervisor secure data in host memory and EBS volumes against unauthorized access?
✓ Correct: B. The AWS hypervisor immediately scrubs (zeroes out) memory after it is freed, preventing data leakage between instances.
How to Think About This:
When you see "hypervisor" + "memory" + "security", remember the core principle: memory is zeroed immediately upon deallocation. This is critical in a multi-tenant environment where the same physical server hosts multiple customers' instances. The keyword "immediately" matters — there's no window where stale data is accessible. For EBS, the mechanism is different — zeroing happens before reuse, not immediately after deletion.
Key Concepts:
• AWS Nitro Hypervisor — A purpose-built, lightweight hypervisor that manages EC2 instances. One of its core security responsibilities is memory isolation between instances. When an instance releases memory (terminates or frees pages), the hypervisor immediately overwrites that memory with zeros before it can be allocated to any other instance.
• Memory scrubbing — The process of writing zeros to every byte of a memory region after deallocation. This prevents a data remanence attack where a new instance on the same host could read leftover data from a previous instance's memory.
• EBS volume data wiping — EBS uses a different approach. Volumes are logically zeroed before being made available to a new customer (lazy zeroing / zero-on-read). The data is not immediately zeroed upon deletion — instead, the blocks are marked as unavailable and zeroed before any future allocation. This is a key distinction from memory handling.
Why B is correct: AWS documentation confirms that the hypervisor zeroes out memory immediately after it is freed. This happens at the hypervisor level, which is below the guest OS, making it impossible for any instance to bypass. The immediacy is critical — there is no time gap during which another instance could read stale data. This is a fundamental security guarantee of the AWS shared responsibility model at the infrastructure layer.
Why others are wrong:
A) Memory is not immediately zeroed out after deallocation — This is the exact opposite of what AWS does. If memory were not zeroed, it would be a catastrophic security vulnerability in a multi-tenant cloud. Previous instance data (passwords, keys, application data) could leak to new instances on the same host. AWS specifically guarantees immediate zeroing.
C) EBS volumes are immediately zeroed out after deletion — EBS volumes are not immediately zeroed after deletion. They use a lazy/deferred approach where blocks are zeroed before being reallocated to a new volume. The distinction matters: "immediately after deletion" vs. "before reuse" are different timing guarantees. The word "immediately" is what makes this wrong for EBS (but correct for memory in option B).
D) Disk virtualization layer zeroes out EBS blocks after deletion — Similar to C, this implies immediate zeroing of EBS blocks upon deletion. The actual process uses a zero-on-read approach where previously used blocks return zeros when read by a new volume, and physical overwriting happens asynchronously. The question specifically asks about the hypervisor securing data, and the hypervisor's primary mechanism is memory scrubbing, not EBS block management.
Q22.How does the user authentication process work when AWS is federated with a corporate Active Directory?
✓ Correct: A. The user authenticates via the ADFS portal, which validates against Active Directory, then sends a SAML assertion to AWS STS, which returns temporary credentials.
How to Think About This:
When you see "federated" + "Active Directory" + "AWS", trace the authentication flow step by step. The critical rule is: the user never talks to AWS STS directly. The user authenticates with ADFS (the identity provider), ADFS talks to AD, and then ADFS provides the SAML token. The user's browser then posts the SAML assertion to AWS. STS never reaches back to AD — it trusts the SAML assertion.
Key Concepts:
• SAML 2.0 Federation — An industry standard protocol for exchanging authentication and authorization data between an Identity Provider (IdP) and a Service Provider (SP). AWS acts as the SP; ADFS acts as the IdP.
• ADFS (Active Directory Federation Services) — A Microsoft service that acts as the bridge between Active Directory and external applications. It authenticates users against AD and issues SAML assertions.
• AWS STS (Security Token Service) — Accepts SAML assertions via AssumeRoleWithSAML and returns temporary AWS credentials (access key, secret key, session token).
The complete flow:
1. User → ADFS Portal (enters AD credentials)
2. ADFS → Active Directory (validates username/password)
3. AD → ADFS (authentication success + user attributes)
4. ADFS → User's Browser (SAML assertion with roles)
5. Browser → AWS STS AssumeRoleWithSAML (posts SAML assertion)
6. STS → User (temporary credentials: AccessKey + SecretKey + Token)
7. User → AWS Console or API (uses temporary credentials)
Why A is correct: This option accurately describes the SAML federation flow. The user starts at the ADFS portal (not at AWS). ADFS authenticates against AD. ADFS generates a SAML assertion containing the user's identity and authorized AWS roles. This assertion is sent to AWS STS, which validates it and returns temporary credentials. The key insight is that ADFS is the intermediary — it authenticates the user and vouches for them to AWS.
Why others are wrong:
B) User authenticates with STS, which then authenticates against AD — The flow is backwards. AWS STS does not authenticate users and does not contact Active Directory. STS is a token-issuing service — it trusts the SAML assertion it receives and issues credentials based on that trust. STS has no mechanism to reach into your corporate AD. The whole point of federation is that authentication happens on your side (ADFS/AD), and AWS trusts the assertion.
C) User authenticates directly against AD, then forwards SAML token to STS — Users do not authenticate directly against AD in a SAML federation flow. They authenticate through ADFS, which acts as the broker. ADFS is the service that generates the SAML assertion — AD itself doesn't produce SAML tokens. AD only handles username/password validation. Without ADFS, there's no SAML assertion to send to STS. The user interacts with ADFS, ADFS interacts with AD.
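Step 5 of the flow is a single STS call; here is a hedged Python (boto3) sketch with placeholder ARNs (the SAML assertion is the base64-encoded response the browser obtains from ADFS in step 4).

import boto3

# SAML-federated calls to STS are unsigned; no AWS credentials are needed yet.
sts = boto3.client("sts")

response = sts.assume_role_with_saml(
    RoleArn="arn:aws:iam::123456789012:role/ADFS-Production",         # placeholder
    PrincipalArn="arn:aws:iam::123456789012:saml-provider/ADFS",      # placeholder IdP
    SAMLAssertion="base64-encoded-assertion-from-adfs",               # from step 4
    DurationSeconds=3600,
)

creds = response["Credentials"]
print(creds["AccessKeyId"], creds["SecretAccessKey"], creds["SessionToken"])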
Q23.How would you best protect your fleet of EC2 instances running proprietary data algorithms from unauthorized access and activity?
✓ Correct: C. CloudTrail monitors API activity (who did what), and Amazon Inspector scans EC2 instances for vulnerabilities and security issues.
How to Think About This:
When you see "protect EC2 instances" + "unauthorized access and activity", you need two capabilities: (1) monitoring who is doing what to your instances (API-level auditing), and (2) scanning the instances themselves for vulnerabilities that could enable unauthorized access. CloudTrail handles the first; Inspector handles the second. WAF/Shield are for web attacks, not EC2 protection. GuardDuty detects threats but doesn't assess vulnerabilities on the instance.
Key Concepts:
• AWS CloudTrail — Records every API call made in your AWS account: who called it, when, from what IP, what resource was affected. For EC2, this captures actions like RunInstances, StopInstances, ModifySecurityGroup, etc. Essential for detecting unauthorized activity (e.g., someone launching instances they shouldn't, changing security groups).
• Amazon Inspector — An automated vulnerability assessment service that scans EC2 instances for: known software vulnerabilities (CVEs), unintended network exposure, CIS benchmark deviations, and security best practice violations. It runs an agent on the instance and checks for misconfigurations that could lead to unauthorized access.
• Together: CloudTrail answers "is anyone doing something suspicious with our instances?" while Inspector answers "are our instances secure enough to resist unauthorized access?"
Why C is correct: The question asks about protecting EC2 instances from unauthorized access AND activity. CloudTrail provides the activity monitoring — any API call to manage, access, or modify the EC2 fleet is logged and auditable. Inspector provides the access protection — by identifying vulnerabilities (unpatched software, open ports, weak configurations) on the instances themselves, you can fix them before they're exploited. This combination covers both the proactive (vulnerability scanning) and detective (activity monitoring) sides.
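To make the detective half concrete, the following is a hedged boto3 sketch that queries CloudTrail for recent security-group changes. The event name and one-day window are illustrative assumptions, not requirements from the question.

import boto3
from datetime import datetime, timedelta, timezone

cloudtrail = boto3.client("cloudtrail")

# Look for recent security-group changes against the fleet (activity monitoring).
resp = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "AuthorizeSecurityGroupIngress"}
    ],
    StartTime=datetime.now(timezone.utc) - timedelta(days=1),
    EndTime=datetime.now(timezone.utc),
    MaxResults=50,
)
for event in resp["Events"]:
    print(event["EventTime"], event.get("Username"), event["EventName"])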
Why others are wrong:
A) CloudTrail and Trusted Advisor — CloudTrail is correct, but Trusted Advisor is a cost optimization and general best-practice tool. It checks for things like underutilized instances, open security groups, and S3 permissions, but it does not perform vulnerability assessments on EC2 instances. It provides high-level recommendations, not the deep security scanning that Inspector offers. It's too broad and not focused on EC2 security.
B) WAF and Shield — AWS WAF is a web application firewall that protects web-facing endpoints (CloudFront, ALB, API Gateway) from HTTP-layer attacks like SQL injection and XSS. AWS Shield protects against DDoS attacks. Neither of these protects EC2 instances from unauthorized access or monitors API activity. They're focused on perimeter web defense, not instance-level security. The question mentions "proprietary data algorithms" running on EC2, not a web application.
D) GuardDuty — GuardDuty is an excellent threat detection service, but the question asks about protecting instances, which requires both monitoring AND vulnerability assessment. GuardDuty detects threats (e.g., compromised instances, unusual API calls) but does not scan instances for vulnerabilities. It also doesn't provide the same API activity auditing that CloudTrail offers. GuardDuty consumes CloudTrail data — it doesn't replace it. Additionally, a single service cannot cover both needs as well as the CloudTrail + Inspector combination.
Q24.How would you contain and investigate a compromised EC2 instance initiating a DoS attack?
✓ Correct: A. Restrict all outbound traffic (stops the DoS attack) and allow inbound SSH on port 22 from internal IPs only (enables forensic investigation).
How to Think About This:
When you see "compromised instance" + "contain AND investigate", you need to satisfy two goals simultaneously: (1) Stop the attack — block all outbound traffic so the instance can't send DoS traffic, and (2) Enable investigation — allow your security team to SSH into the instance for forensics. The key is a quarantine security group: block everything going out, allow only SSH coming in from your internal network.
Key Concepts:
• Incident Response — Containment — The immediate priority is to stop the damage (the outgoing DoS attack). You do this by restricting all outbound traffic in the security group. With no outbound rules, the instance cannot send traffic to any external target.
• Incident Response — Investigation — After containment, you need to analyze the instance to understand the compromise: check logs, examine processes, review file changes. This requires inbound SSH access (port 22) from your internal/trusted IP addresses only.
• Quarantine Security Group pattern:
Inbound: Allow TCP 22 from 10.0.0.0/8 (internal network only)
Outbound: DENY ALL (no outbound rules = no outbound traffic)
• Port 22 vs. Port 3389: Port 22 = SSH (Linux), Port 3389 = RDP (Windows). For investigation, SSH is standard for Linux instances. RDP is for Windows.
Why A is correct: This option perfectly addresses both goals. Restricting all outbound traffic immediately stops the DoS attack because the instance can no longer send packets to external targets. Allowing inbound on port 22 (SSH) from internal IPs only lets your incident response team log into the instance for forensic analysis while keeping it completely isolated from the internet. The "internal IPs only" restriction prevents the attacker from using the open SSH port to further compromise the instance.
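A minimal boto3 sketch of the quarantine pattern described above, assuming placeholder VPC, instance, and internal CIDR values:

import boto3

ec2 = boto3.client("ec2")

# Placeholder identifiers; substitute your own VPC, instance, and internal CIDR.
vpc_id = "vpc-0123456789abcdef0"
instance_id = "i-0123456789abcdef0"
internal_cidr = "10.0.0.0/8"

# 1. Create the quarantine security group.
sg = ec2.create_security_group(
    GroupName="quarantine-sg",
    Description="Isolate compromised instance: SSH from internal only, no egress",
    VpcId=vpc_id,
)
sg_id = sg["GroupId"]

# 2. Allow inbound SSH from the internal network only.
ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
        "IpRanges": [{"CidrIp": internal_cidr}],
    }],
)

# 3. Remove the default allow-all egress rule so no outbound traffic is possible.
ec2.revoke_security_group_egress(
    GroupId=sg_id,
    IpPermissions=[{"IpProtocol": "-1", "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}],
)

# 4. Replace the instance's security groups with the quarantine group.
ec2.modify_instance_attribute(InstanceId=instance_id, Groups=[sg_id])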
Why others are wrong:
B) Allow outbound on port 3389 only — This fails at containment. Allowing any outbound traffic means the instance can still communicate externally. Port 3389 (RDP) outbound means the instance could potentially connect to other Windows machines. Also, this option says nothing about inbound access for investigation. You need inbound SSH/RDP to investigate, not outbound.
C) Restrict all outbound + allow outbound on port 22 only — Contradictory and wrong direction. If you restrict all outbound but then allow outbound port 22, you're allowing the instance to make outgoing SSH connections to other servers, which could be used for lateral movement by the attacker. For investigation, you need inbound SSH (your team connecting TO the instance), not outbound SSH (the instance connecting to other systems).
D) Allow inbound on port 3389 only from internal IPs — This addresses investigation (RDP access from internal IPs) but completely ignores containment. There's no mention of restricting outbound traffic, so the DoS attack continues unabated. The instance keeps flooding the target while you're trying to investigate. Containment must come first.
Q25.How would you prevent SQL injection attacks originating from a specific IP range on your web application behind an ELB? Select All
✓ Correct: C, D. AWS WAF blocks SQL injection patterns at the application layer, and Network ACLs block the specific malicious IP range at the network layer.
How to Think About This:
The keyword here is "prevent" — not detect, not log, but actively stop the attacks. This eliminates monitoring-only tools. You also have two aspects to address: (1) the SQL injection attack pattern (application layer) and (2) the specific IP range (network layer). WAF handles the pattern matching; NACLs handle the IP blocking. Think of it as defense in depth — two layers of prevention.
Key Concepts:
• AWS WAF — A web application firewall that inspects HTTP/HTTPS requests and can block them based on rules. It has built-in SQL injection match conditions that detect SQL injection patterns in query strings, headers, body, and URI. WAF is the primary defense against application-layer attacks. It integrates with ALB (which is behind ELB in this scenario).
• Network ACLs (NACLs) — Stateless firewall rules at the subnet level. NACLs can deny traffic from specific IP ranges (CIDR blocks). Since the question specifies attacks from a "specific IP range," NACLs can block that entire range at the network layer before traffic even reaches your instances or load balancer.
• Preventive vs. Detective controls:
Preventive (stops attacks): WAF, NACLs, Security Groups
Detective (monitors/reports): VPC Flow Logs, GuardDuty, CloudTrail
Defense in depth:
Attacker IP → [NACL blocks IP range (Layer 3/4 block)] → [WAF blocks SQL injection (Layer 7 block)] → Application
Why C and D are correct:
C) AWS WAF — WAF is purpose-built for detecting and blocking SQL injection attacks. You create a SQL injection match condition, add it to a WAF rule, and associate the rule with a Web ACL attached to your ALB. WAF inspects every incoming request and blocks those matching SQL injection patterns. This is the primary prevention mechanism for application-layer attacks.
D) Network ACLs — Since the attacks originate from a specific IP range, you can add a DENY rule in the NACL for that CIDR block. NACLs operate at the subnet level and are stateless, meaning they evaluate both inbound and outbound traffic independently. This blocks all traffic from the malicious IP range before it reaches any resource in the subnet — a quick and effective IP-based block.
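For the IP-range block, a minimal boto3 sketch that adds the NACL deny entry; the NACL ID, rule number, and CIDR are placeholder assumptions:

import boto3

ec2 = boto3.client("ec2")

# Placeholder values: the NACL of the affected subnet and the malicious range.
nacl_id = "acl-0123456789abcdef0"
malicious_cidr = "198.51.100.0/24"

# Add an explicit DENY ahead of the allow rules (the lowest matching rule number wins).
ec2.create_network_acl_entry(
    NetworkAclId=nacl_id,
    RuleNumber=50,            # evaluated before higher-numbered allow rules
    Protocol="-1",            # all protocols
    RuleAction="deny",
    Egress=False,             # inbound rule
    CidrBlock=malicious_cidr,
)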
Why others are wrong:
A) VPC Flow Logs — Flow Logs are a detective/logging tool, not a preventive one. They record metadata about traffic (source IP, destination IP, port, protocol, accept/reject) but cannot block or modify traffic. You can analyze flow logs after the fact to identify attack patterns, but they do nothing to prevent the SQL injection from reaching your application. Logging an attack is not the same as stopping it.
B) GuardDuty — GuardDuty is a threat detection service that identifies suspicious activity by analyzing VPC Flow Logs, DNS logs, and CloudTrail events. It might generate a finding about unusual traffic patterns, but it cannot block traffic. GuardDuty is detective, not preventive. It tells you something is wrong; it doesn't stop the attack. You would need to take separate action (like updating WAF rules or NACLs) based on GuardDuty's findings.
Q26.One of your team members has inadvertently exposed their IAM User access keys on GitHub. What immediate steps should be taken? Select All
✓ Correct: A. The most critical immediate step is to disable and erase the compromised IAM access keys to prevent unauthorized use.
How to Think About This:
When you see "access keys exposed", think: disable the keys FIRST. This is an incident response question where the priority is to stop the bleeding. The compromised keys are the direct threat vector — everything else is secondary. Don't overthink it with drastic measures (shutting down all instances) or unrelated actions (changing root password). Focus on the specific credentials that were exposed.
Key Concepts:
• IAM Access Keys — Consist of an Access Key ID and Secret Access Key. They provide programmatic access to AWS services. If exposed, anyone with these keys can make API calls as the associated IAM user, with whatever permissions that user has.
• Incident Response Priority — The AWS security incident response process follows: (1) Contain — disable the compromised credentials, (2) Assess — review CloudTrail for unauthorized activity, (3) Remediate — create new keys, review permissions, rotate other credentials if needed.
• Key states in IAM: Active (can be used), Inactive (disabled but still exists), Deleted (permanently removed). First make the key inactive (instant containment), then delete it once you've confirmed the user has new keys.
Immediate response steps:
1. Disable the exposed access key (IAM Console or CLI)
2. Review CloudTrail for any unauthorized API calls using the key
3. Create a new access key for the user
4. Delete the old compromised key
5. Review and potentially tighten the user's IAM permissions
Why A is correct: Disabling and erasing the compromised access keys is the single most important immediate action. The moment the keys are disabled, they can no longer be used to authenticate API calls — the threat is neutralized. This is the fastest, most targeted, and least disruptive way to contain the incident. Every second the keys remain active, the attacker could be spinning up resources, accessing data, or creating backdoor accounts.
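A minimal boto3 sketch of steps 1, 3, and 4, assuming a placeholder user name and key ID:

import boto3

iam = boto3.client("iam")

# Placeholder values for the affected user and the exposed key ID.
user_name = "dev-user"
exposed_key_id = "AKIAEXAMPLEEXAMPLE"

# Step 1: containment - make the exposed key unusable immediately.
iam.update_access_key(UserName=user_name, AccessKeyId=exposed_key_id, Status="Inactive")

# Step 3: issue a replacement key for the user (deliver it out of band).
new_key = iam.create_access_key(UserName=user_name)["AccessKey"]
print("New key created:", new_key["AccessKeyId"])

# Step 4: once the user has switched to the new key, remove the compromised one.
iam.delete_access_key(UserName=user_name, AccessKeyId=exposed_key_id)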
Why others are wrong:
B) Halt all running EC2 instances — This is massively disproportionate and disruptive. Stopping all EC2 instances would cause a complete service outage for your entire organization to address a single compromised credential. The compromised access key might not even have EC2 permissions. Even if it does, disabling the key (option A) prevents the attacker from managing instances. Halting instances also doesn't prevent the attacker from launching new instances if the key is still active.
C) Update the root user password — The root user credentials were not compromised. An IAM user's access keys are completely separate from the root account password. Changing the root password is unnecessary and addresses a threat that doesn't exist. Unless there's evidence that root credentials were also exposed (which the question doesn't state), this action wastes valuable response time on the wrong credential.
D) Deactivate other potentially compromised IAM credentials — While reviewing other credentials is reasonable as a later step, the question asks about immediate steps. There's no indication other credentials were compromised — only one team member exposed their keys on GitHub. Preemptively deactivating other users' credentials without evidence would disrupt the entire team unnecessarily. Focus on the known compromised credential first.
Q27.To ensure encrypted connections to an S3 bucket, what key can be used in a bucket policy's conditional statement?
✓ Correct: D. The aws:SecureTransport condition key checks whether the API request was sent over an encrypted HTTPS connection.
How to Think About This:
When you see "encrypted connections" + "S3 bucket policy" + "condition key", the answer is always
Key Concepts:
• aws:SecureTransport — A global IAM condition key (available for all AWS services) that evaluates to true when the request is made over HTTPS, and false when made over plain HTTP. Used in S3 bucket policies to deny requests that don't use encryption in transit.
• Bucket policy pattern for enforcing HTTPS:
{
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:*",
  "Resource": "arn:aws:s3:::bucket-name/*",
  "Condition": {"Bool": {"aws:SecureTransport": "false"}}
}
• This policy says: deny all S3 operations when the request is NOT sent over HTTPS. Any HTTP (unencrypted) request is rejected. Only HTTPS requests are allowed through.
• Global condition keys use the aws: prefix. Service-specific condition keys use the service prefix (e.g., s3:, kms:). aws:SecureTransport is global because encrypted transport applies to all services, not just S3.
Why D is correct: aws:SecureTransport is the official, documented AWS condition key for checking whether a request uses SSL/TLS encryption. It's a boolean value — you use "Bool": {"aws:SecureTransport": "false"} in a Deny statement to block unencrypted connections, or "true" in an Allow statement to only permit encrypted ones. This is the standard way to enforce HTTPS-only access to S3 buckets.
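To show how the pattern is applied, here is a hedged boto3 sketch that attaches a deny-unencrypted-transport policy to a bucket; the bucket name and statement ID are placeholder assumptions.

import json
import boto3

s3 = boto3.client("s3")
bucket = "example-bucket"   # placeholder name

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}

# Attach the deny-unencrypted-transport policy to the bucket.
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))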
Why others are wrong:
A) aws:SecureHTTP — This condition key does not exist. There is no aws:SecureHTTP in IAM. The term "SecureHTTP" is itself contradictory — HTTP is by definition the insecure protocol; the secure version is HTTPS. AWS uses the term "SecureTransport" to refer to the underlying TLS/SSL transport layer, not the HTTP protocol name.
B) aws:SecureTLS — This condition key does not exist. While the concept is correct (TLS is what provides the security), AWS named the condition key aws:SecureTransport, not aws:SecureTLS. This is a plausible-sounding distractor designed to catch people who know the technology but haven't memorized the exact key name.
C) aws:SecureHTTPS — This condition key does not exist. Again, "HTTPS" is descriptively accurate (you want to enforce HTTPS), but the actual key name is aws:SecureTransport. AWS chose a more general name because the condition applies to the transport layer encryption broadly, not just HTTPS specifically.
Q28.To pass a compliance audit requiring all data to be encrypted at rest and securing against TLS certificate theft, what changes should the team make? Select All
✓ Correct: A. Deploy AWS CloudHSM and migrate TLS keys to it, providing tamper-resistant hardware-based key storage to protect against certificate/key theft.
How to Think About This:
This question has two requirements: (1) encryption at rest for all data, and (2) protection against TLS certificate theft. The key phrase is "securing against TLS certificate theft" — this tells you the current TLS key storage is vulnerable. When you need tamper-proof key storage that prevents extraction, think CloudHSM. HSMs (Hardware Security Modules) are specifically designed so that keys can never leave the device.
Key Concepts:
• AWS CloudHSM — Provides dedicated hardware security modules in the AWS cloud. Keys stored in an HSM are protected by FIPS 140-2 Level 3 validated hardware. The critical security property: private keys never leave the HSM in plaintext. Even AWS operators cannot extract your keys. All cryptographic operations (signing, decrypting) happen inside the HSM.
• TLS Certificate vs. TLS Private Key — The certificate itself is public; what needs protection is the private key. If an attacker steals the TLS private key, they can impersonate your server, decrypt recorded traffic (if not using perfect forward secrecy), and perform man-in-the-middle attacks. Storing the private key in CloudHSM means it physically cannot be stolen.
• ACM (AWS Certificate Manager) — Manages TLS certificates and integrates with CloudFront, ALB, etc. However, ACM stores keys in software-based key management. While secure for most use cases, compliance audits requiring protection against key theft may require the stronger guarantee of hardware-based storage (CloudHSM).
• CloudHSM vs. KMS for TLS: KMS does not support TLS/SSL operations directly. CloudHSM supports standard PKCS#11 and OpenSSL interfaces, making it suitable for TLS termination with hardware-protected keys.
Why A is correct: CloudHSM directly addresses the "securing against TLS certificate theft" requirement. When TLS private keys are stored in an HSM, they are physically protected by tamper-resistant hardware. Even if an attacker gains full control of the server, they cannot extract the private key — they can only use it for operations while they maintain access. Once the attacker is removed, they have no key material. This is the strongest possible protection for TLS keys and satisfies compliance audits that require hardware-level key protection.
Why others are wrong:
B) Leave the S3 objects alone — The audit requires all data encrypted at rest. If S3 objects are not already encrypted, leaving them alone fails the audit. Even if they are currently encrypted, this answer provides no assurance — it's passive inaction, not an affirmative security measure. The question asks what changes should be made, and "no change" is only valid if the current state already meets compliance, which this option doesn't verify.
C) Continue to use ACM for the TLS certificate — ACM stores keys in software, which may not satisfy the audit requirement of "securing against TLS certificate theft." While ACM keys are protected by AWS's infrastructure, they don't provide the same level of assurance as hardware-based CloudHSM. A compliance audit that specifically requires protection against key theft typically demands FIPS 140-2 Level 3 hardware, which only CloudHSM provides. Continuing with ACM doesn't address the upgraded security requirement.
D) Make no changes to the EBS volumes — Similar to option B — if EBS volumes are not already encrypted, this fails the encryption-at-rest audit requirement. EBS encryption should be enabled using KMS keys. "Make no changes" is an assumption that the current state is compliant, which the question doesn't confirm. The audit is requiring changes, implying the current configuration is insufficient.
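If you need to verify the encryption-at-rest half of the audit, a minimal boto3 sketch like the one below can surface unencrypted EBS volumes; it assumes default credentials and region and is only a starting point for the audit evidence.

import boto3

ec2 = boto3.client("ec2")

# Confirm account-level default encryption for new EBS volumes.
print("EBS encryption by default:",
      ec2.get_ebs_encryption_by_default()["EbsEncryptionByDefault"])

# Enumerate any existing volumes that are still unencrypted.
paginator = ec2.get_paginator("describe_volumes")
for page in paginator.paginate(Filters=[{"Name": "encrypted", "Values": ["false"]}]):
    for vol in page["Volumes"]:
        print("Unencrypted volume:", vol["VolumeId"])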
Q29.To protect your website using AWS WAF, how should the incoming traffic be configured? Select All
✓ Correct: B, C. AWS WAF can only inspect traffic routed through supported services: Application Load Balancer and CloudFront (plus API Gateway and AppSync).
How to Think About This:
When you see "AWS WAF" + "how to configure traffic", remember the WAF integration list. WAF is NOT a standalone firewall — it can only inspect traffic that flows through a supported AWS service. Memorize the supported services: ALB, CloudFront, API Gateway, AppSync, Cognito, App Runner, Verified Access. If traffic bypasses these services (direct to EC2, through CLB), WAF never sees it.
Key Concepts:
• AWS WAF Architecture — WAF attaches to a supported resource as a Web ACL. It inspects every request flowing through that resource and applies rules (allow, block, count, CAPTCHA). It cannot be attached to arbitrary endpoints.
• Supported WAF resources:
• CloudFront distribution — WAF inspects all requests at edge locations
• Application Load Balancer (ALB) — WAF inspects HTTP/HTTPS requests at the load balancer
• API Gateway REST API — WAF inspects API calls
• AppSync GraphQL API — WAF inspects GraphQL queries
• Cognito User Pool — WAF protects authentication endpoints
• Traffic flow with WAF protection:
User → CloudFront (WAF inspects) → ALB (WAF inspects) → EC2
User → ALB (WAF inspects) → EC2
User → Direct to EC2 (NO WAF possible)
Why B and C are correct:
B) Through an Application Load Balancer — ALB is a fully supported WAF integration point. You create a Web ACL and associate it with the ALB. All HTTP/HTTPS requests pass through the ALB, where WAF inspects and filters them before forwarding to your EC2 instances. This is the most common WAF deployment pattern for web applications.
C) Via CloudFront CDN — CloudFront is the other primary WAF integration point. Attaching a Web ACL to a CloudFront distribution means WAF inspects requests at the edge (closest to the user), which provides the earliest possible filtering. This also offers DDoS protection through CloudFront's distributed infrastructure.
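A minimal boto3 (WAFv2) sketch of the ALB association, assuming an existing regional Web ACL; both ARNs are placeholders:

import boto3

wafv2 = boto3.client("wafv2")

# Placeholder ARNs for an existing regional Web ACL and an ALB.
web_acl_arn = "arn:aws:wafv2:us-east-1:111122223333:regional/webacl/site-acl/aaaa1111-bbbb-2222-cccc-3333dddd4444"
alb_arn = "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/web-alb/0123456789abcdef"

# Attach the Web ACL so WAF inspects every HTTP/HTTPS request the ALB receives.
wafv2.associate_web_acl(WebACLArn=web_acl_arn, ResourceArn=alb_arn)

For CloudFront, the Web ACL is created with CLOUDFRONT scope and referenced from the distribution configuration rather than through this association call.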
Why others are wrong:
A) Direct HTTPS connection to the EC2 instance — If traffic goes directly from the user to an EC2 instance (via its public IP or Elastic IP), there is no WAF integration point. WAF cannot be attached to EC2 instances, security groups, or ENIs. It only works through supported services. Direct-to-EC2 traffic completely bypasses WAF. You would need to place an ALB or CloudFront in front of the EC2 instance to use WAF.
D) Using a Classic Load Balancer — Classic Load Balancers (CLB) are the older, legacy load balancer type. They do NOT support WAF integration. Only Application Load Balancers (ALBs) support WAF. This is a frequent exam trap — CLB and ALB are both "load balancers," but only ALB works with WAF. If you're using a CLB and need WAF, you must migrate to an ALB.
Q30.What AWS services can be integrated with AWS WAF for deployment? Select All
✓ Correct: C, E. AWS WAF integrates with Application Load Balancer and CloudFront (as well as API Gateway, AppSync, and Cognito).
How to Think About This:
This is a direct memorization question about WAF integration points. The exam loves testing whether you know the exact list of services that support WAF. The mental model: WAF works with services that process HTTP/HTTPS requests as a proxy or gateway. If the service acts as an intermediary that receives and forwards web requests, WAF can likely attach to it. If the service is a backend data store, compute engine, or network endpoint, it cannot.
Key Concepts:
• Complete WAF Integration List (memorize these):
• Amazon CloudFront — Content delivery, edge-level WAF inspection
• Application Load Balancer (ALB) — Layer 7 load balancing with WAF
• Amazon API Gateway — REST and HTTP API endpoints
• AWS AppSync — GraphQL API endpoints
• Amazon Cognito User Pools — Authentication service endpoints
• AWS App Runner — Containerized web app service
• AWS Verified Access — Zero-trust access service
• Common distractors that do NOT support WAF: Classic Load Balancer, Network Load Balancer, EC2 directly, S3, ElastiCache, Lambda (directly), DynamoDB, S3 Endpoints, NAT Gateway.
• Why these specific services? WAF operates at Layer 7 (HTTP/HTTPS). All supported services act as HTTP request processors — they receive web requests, and WAF can inspect them before the request is processed. Services that don't operate at Layer 7 or don't act as request proxies cannot host WAF.
Why C and E are correct:
C) Application Load Balancer — ALB operates at Layer 7 and processes every HTTP/HTTPS request before routing to targets. A WAF Web ACL can be directly associated with an ALB. This is the most common WAF deployment for applications running on EC2, ECS, or Lambda behind a load balancer.
E) CloudFront — CloudFront acts as a reverse proxy at AWS edge locations worldwide. A WAF Web ACL associated with a CloudFront distribution inspects requests at the edge, providing the earliest possible filtering and global protection. CloudFront + WAF is the standard architecture for protecting web applications.
Why others are wrong:
A) ElastiCache — ElastiCache is an in-memory data store (Redis or Memcached) used for caching. It does not receive or process HTTP requests from end users. There is no web request flow to inspect. ElastiCache is a backend service accessed by your application code, not a public-facing endpoint.
B) Lambda — Lambda functions are compute resources, not request-processing proxies. While Lambda can be invoked by many services, WAF cannot be attached to Lambda directly. However, if Lambda is behind API Gateway or ALB, WAF protects it indirectly through those integration points. The distinction is important: WAF protects the gateway, not the Lambda function itself.
D) S3 Endpoint — S3 VPC Endpoints (Gateway or Interface endpoints) are network-level constructs that allow private connectivity to S3 from within a VPC. They do not process HTTP requests in a way that WAF can inspect. VPC endpoints are transparent network tunnels, not application-layer proxies. Additionally, S3 itself does not support WAF — to protect S3 content with WAF, you place CloudFront in front of S3.
Q31.What AWS services should be utilized for advanced analysis of API activities? Select All
✓ Correct: A, B, C, F. CloudTrail, Athena, CloudWatch Logs, and GuardDuty all contribute to advanced API activity analysis.
How to Think About This:
When a question asks about "API activity analysis", think about the full pipeline: Record → Store → Query → Detect. CloudTrail records API calls, CloudWatch Logs stores them for real-time filtering, Athena queries them with SQL, and GuardDuty detects anomalies using ML. Any service that participates in this pipeline is a valid answer. Services that deal with network traffic (not API calls) or enforce rules (not analyze) are wrong.
Key Concepts:
AWS CloudTrail (F) — The foundation of API analysis. CloudTrail records every API call made in your AWS account: who called it, from what IP, at what time, what resource was affected, and whether it succeeded or failed. Without CloudTrail, there is no API activity data to analyze.
Amazon Athena (C) — Serverless SQL query engine that can analyze CloudTrail logs stored in S3. You define a table schema over the JSON log files and run queries like SELECT * FROM cloudtrail_logs WHERE eventName = 'DeleteBucket'. Perfect for ad-hoc forensic investigations.
Amazon CloudWatch Logs (A) — CloudTrail can deliver logs to CloudWatch Logs for real-time analysis. You can create metric filters (e.g., count all ConsoleLogin events with failed authentication) and trigger alarms. This provides near-instant alerting on suspicious API patterns.
Amazon GuardDuty (B) — Analyzes CloudTrail management and data events using machine learning and threat intelligence feeds. It automatically detects suspicious API patterns like: unusual credential usage, API calls from known malicious IPs, or data exfiltration patterns — without you writing any rules.
Why A, B, C, F are correct: Each service plays a distinct role in the API analysis pipeline. CloudTrail captures the raw data, CloudWatch Logs enables real-time filtering and alerting, Athena provides deep SQL-based investigation, and GuardDuty adds automated ML-driven threat detection. Together, they form a comprehensive API activity analysis stack.
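As a sketch of the Athena step, the query below assumes a CloudTrail table named cloudtrail_logs in a database named security_analysis and a placeholder results bucket; column names follow the standard CloudTrail table layout, so adjust them to your own table definition.

import boto3

athena = boto3.client("athena")

query = """
SELECT eventtime, useridentity.arn, eventname, sourceipaddress
FROM cloudtrail_logs
WHERE eventname = 'DeleteBucket'
ORDER BY eventtime DESC
LIMIT 50
"""

resp = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "security_analysis"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print("Query execution id:", resp["QueryExecutionId"])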
Why others are wrong:
D) VPC Flow Logs — Flow Logs capture network-level metadata: source/destination IPs, ports, protocols, and bytes transferred. They do not capture API calls or their parameters. If someone calls s3:DeleteObject, Flow Logs would only show a TCP connection to the S3 endpoint — not which API was called or what object was deleted. Flow Logs are for network troubleshooting, not API analysis.
E) Network ACLs — NACLs are firewall rules that allow or deny traffic at the subnet level. They have zero analysis capability — they don't log, query, or detect anything. They enforce network-level access control, which is a completely different function from API activity analysis.
Q32.What Security Group rules should be set up for secure bastion host access from corporate workstations?
✓ Correct: A. Security Groups are stateful — only an inbound rule on port 22 restricted to corporate workstation IPs is needed.
How to Think About This:
The key word in this question is "Security Group" — and the critical concept is stateful vs stateless. Security Groups are stateful: if you allow traffic in, the response traffic is automatically allowed out. This means you never need to create matching outbound rules for return traffic. If you see answer choices mentioning "inbound AND outbound rules" for Security Groups, that's usually wrong (it would be correct for NACLs, which are stateless).
Key Concepts:
Bastion Host — A hardened EC2 instance in a public subnet that serves as the single entry point for SSH access into your private network. All administrators connect to the bastion first, then hop to private instances. This creates a single chokepoint for auditing and security controls.
Stateful Security Groups — When you create an inbound rule allowing TCP port 22 from a specific IP range, the Security Group automatically tracks the connection state. The response packets (from the EC2 instance back to the administrator) are allowed through without any explicit outbound rule. This is because the Security Group remembers "this outbound packet is a response to an allowed inbound connection."
Port 22 (SSH) — The standard port for Secure Shell connections. For a bastion host, the inbound rule should restrict port 22 to only the corporate workstation IPs (e.g., 203.0.113.0/24), never 0.0.0.0/0 (the entire internet).
Why A is correct: A single inbound rule on port 22 restricted to corporate workstation IPs is sufficient. The Security Group's stateful nature handles return traffic automatically. This follows the principle of least privilege — only the minimum necessary access.
Why others are wrong:
B) Inbound on port 22 for 10.0.0.0/0 — 10.0.0.0/0 is a malformed CIDR: a /0 mask has no network bits, so it matches every IP address (effectively the same as 0.0.0.0/0), not just the private 10.x.x.x range. Either way it is far too permissive; a bastion host should only allow SSH from specific corporate workstation IPs.
C) Inbound and outbound on ephemeral ports — Ephemeral ports (1024-65535) are used for return traffic, but Security Groups are stateful — return traffic is automatically allowed. Explicitly defining ephemeral port rules is unnecessary and shows a misunderstanding of how Security Groups work. Also, SSH uses port 22, not ephemeral ports, for inbound connections.
D) Inbound and outbound on port 22 — The outbound rule on port 22 is unnecessary because Security Groups are stateful. Adding it doesn't cause harm but indicates a misunderstanding. The correct minimal configuration is inbound-only.
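As a minimal sketch of option A in practice, the boto3 call below adds the single inbound SSH rule; the security group ID and the corporate CIDR 203.0.113.0/24 are placeholder values.
import boto3

ec2 = boto3.client("ec2")

# One stateful inbound rule is enough; return traffic is allowed automatically.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder bastion security group ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "Corporate workstations"}],
    }],
)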
Q33.What are the common reasons for encountering a "Permission denied (publickey)" error when trying to SSH into an EC2 instance? Select All
✓ Correct: A, B, D. The three most common causes of SSH "Permission denied (publickey)" errors.
How to Think About This:
When troubleshooting SSH failures to EC2, think about the three layers that must all be correct: (1) Network access — can the traffic reach the instance? (Security Groups, NACLs), (2) Authentication — is the key file correct and does it match? (3) Username — each AMI has a different default user. If any of these three layers fail, you get a connection error. The question asks about "common" causes — AWS infrastructure failure is theoretically possible but extremely rare.
Key Concepts:
AMI Default Usernames (A) — Each Amazon Machine Image has a different default SSH username. Common ones:
• Amazon Linux / Amazon Linux 2: ec2-user
• Ubuntu: ubuntu
• CentOS: centos
• Debian: admin
• RHEL: ec2-user or root
Using the wrong username (e.g., ssh root@... on Ubuntu) will immediately fail with "Permission denied."
Security Group Misconfiguration (B) — If the instance's Security Group doesn't have an inbound rule allowing TCP port 22 from your IP address, the SSH connection will be blocked. This appears as a timeout or connection refused, which users often confuse with key authentication errors. The Security Group must explicitly allow your source IP on port 22.
Private Key File Issues (D) — The .pem file must match the key pair assigned to the instance at launch. Common issues: using a key file from a different key pair, corrupted download, wrong file permissions (chmod 400 required on Linux/Mac), or using the public key instead of the private key.
Why A, B, D are correct: These represent the three most frequent root causes that administrators encounter. Wrong username, wrong/misconfigured key, and network access blocked by Security Groups together account for the vast majority of SSH connection failures to EC2 instances.
Why C is wrong:
C) Issues with AWS infrastructure — While theoretically possible (hardware failure, hypervisor issue), AWS infrastructure problems are extremely rare and are not a "common" reason for SSH failures. AWS manages infrastructure reliability at massive scale with redundancy. If an instance has an underlying hardware issue, AWS typically auto-migrates it. When troubleshooting SSH, you should exhaust all user-side causes (username, key, Security Group, NACL, route table) before considering AWS infrastructure as the culprit.
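One way to rule out cause B quickly is to inspect the instance's security groups programmatically. The sketch below (Python/boto3, hypothetical instance ID) lists the inbound rules so you can confirm a TCP rule on port 22 covers your source IP.
import boto3

ec2 = boto3.client("ec2")

# Placeholder instance ID; substitute the instance you cannot reach.
reservations = ec2.describe_instances(InstanceIds=["i-0123456789abcdef0"])["Reservations"]
instance = reservations[0]["Instances"][0]

for sg in instance["SecurityGroups"]:
    permissions = ec2.describe_security_groups(GroupIds=[sg["GroupId"]])["SecurityGroups"][0]["IpPermissions"]
    for rule in permissions:
        # Look for an allowed TCP range that includes port 22 and your source CIDR.
        print(sg["GroupId"], rule.get("IpProtocol"), rule.get("FromPort"), rule.get("ToPort"), rule.get("IpRanges"))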
Q34.What is the best way to allow another AWS account to access your resources securely?
✓ Correct: C. Cross-account IAM roles with external IDs provide secure, auditable access without sharing credentials.
How to Think About This:
When a question mentions "another AWS account" + "access your resources", the answer pattern is almost always cross-account IAM roles. The key principle: never share credentials. Instead, create a role in your account that the other account can assume. The external ID prevents the "confused deputy" attack, and CloudTrail logs every AssumeRole call for full auditability.
Key Concepts:
Cross-Account IAM Roles — You create an IAM role in your account (Account A) with a trust policy that allows a specific principal in another account (Account B) to assume it. The trust policy specifies Account B's account ID and requires a unique external ID. When Account B needs access, they call sts:AssumeRole, providing the external ID, and receive temporary credentials that last 1-12 hours.
External ID — A secret string shared between you and the third party. It prevents the "confused deputy" problem: without an external ID, a malicious actor could trick a trusted service into assuming your role on their behalf. The external ID ensures only the intended third party can assume the role, because only they know the shared secret.
Temporary Credentials — When a role is assumed, STS returns temporary access key, secret key, and session token. These expire automatically — no long-lived credentials to manage, rotate, or potentially leak.
Why C is correct: Cross-account roles with external IDs follow every security best practice: no credential sharing, temporary access, least privilege (role policy controls exactly what they can do), full auditability via CloudTrail, and protection against confused deputy attacks. This is the AWS-recommended approach for all cross-account access scenarios.
Why others are wrong:
A) Share your AWS credentials — This is a critical security violation. Sharing access keys means: no auditability (you can't distinguish who did what), no easy revocation (you'd have to rotate your own keys), and permanent access until manually revoked. This violates every AWS security best practice.
B) Use Web Identity Federation — Web Identity Federation is for authenticating end users via social providers (Google, Facebook, Amazon) or OIDC providers. It's designed for consumer-facing applications, not for granting another AWS account access to your resources. The question specifically says "another AWS account," not external users.
D) Use Cognito for temporary guest access — Amazon Cognito is for mobile/web application user authentication. It handles user sign-up, sign-in, and temporary AWS credentials for app users. It's not designed for AWS account-to-account resource sharing — that's what IAM cross-account roles are for.
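For illustration, here is a minimal sketch of how the other account (Account B) would assume the cross-account role; the role ARN and external ID are hypothetical placeholders.
import boto3

sts = boto3.client("sts")

# Account B assumes the role created in Account A; the external ID must match
# the value required by the role's trust policy.
resp = sts.assume_role(
    RoleArn="arn:aws:iam::111111111111:role/PartnerAccessRole",  # placeholder
    RoleSessionName="partner-session",
    ExternalId="example-external-id",  # placeholder shared secret
    DurationSeconds=3600,
)

creds = resp["Credentials"]  # temporary AccessKeyId, SecretAccessKey, SessionToken
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)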
Q35.What is the best way to facilitate user sign-in and sign-up via Facebook in a mobile application?
✓ Correct: D. Amazon Cognito is purpose-built for mobile/web app authentication with social identity providers like Facebook.
How to Think About This:
When a question mentions "mobile app" + "social login (Facebook/Google/Amazon)", the answer is always Amazon Cognito. Cognito is the AWS-native identity broker specifically designed for this use case. It handles the entire OAuth/OIDC token exchange flow, so your app doesn't need to implement identity brokering logic yourself.
Key Concepts:
Amazon Cognito User Pools — Managed user directory that handles sign-up, sign-in, and account recovery. It supports federation with social identity providers (Facebook, Google, Apple, Amazon) and enterprise providers (SAML, OIDC). When a user signs in via Facebook, Cognito exchanges the Facebook token for Cognito tokens (ID token, access token, refresh token).
Amazon Cognito Identity Pools (Federated Identities) — After authentication, Identity Pools provide temporary AWS credentials (via STS) so the mobile app can directly access AWS services like S3 or DynamoDB. The flow: User signs in via Facebook → Cognito validates the Facebook token → Cognito issues temporary AWS credentials → App uses those credentials to call AWS APIs.
Identity Broker Pattern — Cognito acts as the broker between your app and the identity provider. Your app doesn't need to know how Facebook's OAuth works — it just calls Cognito's APIs. Cognito handles token validation, credential exchange, and session management transparently.
Why D is correct: Cognito is purpose-built for exactly this scenario. It natively integrates with Facebook (and other social providers), handles the OAuth token exchange, provides user management (sign-up, sign-in, password recovery), and issues temporary AWS credentials — all without writing custom identity brokering code.
Why others are wrong:
A) Use a Lambda function as an Identity Broker — While technically possible (you could write Lambda code to exchange Facebook tokens for AWS credentials via STS), this is reinventing the wheel. You'd have to implement OAuth flows, token validation, session management, and credential rotation manually. Cognito does all of this out of the box, securely and at scale.
B) Use IAM as an Identity Broker — IAM is not an identity broker. IAM manages AWS-internal identities (users, roles, policies). It doesn't handle social login flows, OAuth token exchanges, or mobile app authentication. IAM has no mechanism to accept a Facebook token and convert it to AWS credentials directly.
C) Embed encrypted AWS credentials in the application — This is a severe security anti-pattern. Even if encrypted, credentials embedded in a mobile app binary can be extracted through reverse engineering. Mobile apps are distributed to untrusted devices — any credentials in the app must be considered compromised. Always use temporary credentials obtained at runtime, never embedded long-lived keys.
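A minimal sketch of the Identity Pool credential exchange follows (Python/boto3); the identity pool ID and the Facebook access token are placeholders, and in a real mobile app this flow is normally handled on the device by the AWS SDK or Amplify.
import boto3

cognito = boto3.client("cognito-identity", region_name="us-east-1")

# Exchange a Facebook access token for a Cognito identity, then for temporary AWS credentials.
identity = cognito.get_id(
    IdentityPoolId="us-east-1:00000000-0000-0000-0000-000000000000",  # placeholder pool ID
    Logins={"graph.facebook.com": "FACEBOOK_ACCESS_TOKEN"},           # placeholder token
)

creds = cognito.get_credentials_for_identity(
    IdentityId=identity["IdentityId"],
    Logins={"graph.facebook.com": "FACEBOOK_ACCESS_TOKEN"},
)["Credentials"]  # temporary AccessKeyId, SecretKey, SessionToken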
Q36.What is the evaluation process AWS follows to authenticate a request in IAM?
✓ Correct: B. IAM evaluates requests by first authenticating the principal, then gathering all applicable policies, evaluating them, and making an allow/deny decision.
How to Think About This:
The IAM request evaluation process follows a strict 4-step sequence: Authenticate → Gather → Evaluate → Decide. Think of it like airport security: (1) Check your ID (authenticate), (2) Look up your flight and security clearance (gather policies), (3) Check each rule against your boarding pass (evaluate), (4) Allow or deny boarding (decision). The order matters — you can't evaluate policies if you don't know who is making the request.
Key Concepts:
Step 1: Authentication — AWS verifies the identity of the principal (user, role, or service) making the request. This involves validating the access key, secret key, session token, or signed request. If authentication fails, the request is denied immediately — no policy evaluation happens.
Step 2: Gather Applicable Policies — AWS collects ALL policies that apply to the request: identity-based policies (attached to the user/role), resource-based policies (attached to the target resource), permission boundaries, session policies, SCPs (Service Control Policies from Organizations), and VPC endpoint policies. All of these are gathered before any evaluation begins.
Step 3: Evaluate Policies — AWS evaluates all gathered policies in a specific order with a default deny starting position: (1) Explicit deny in any policy → DENY (final, cannot be overridden), (2) SCP/Permission Boundary allows? → Must be present or DENY, (3) Explicit allow in identity or resource policy → ALLOW, (4) No explicit allow → implicit DENY. An explicit deny always wins over any allow.
Step 4: Decision — Based on evaluation, the request is either allowed or denied. The decision is logged in CloudTrail with the complete evaluation context.
Why B is correct: It accurately describes the 4-step process in the correct order: authenticate the principal first, then gather all applicable policies, arrange/evaluate them according to the evaluation logic, and make an allow/deny decision.
Why others are wrong:
A) Checks authority first, evaluates policies, issues decision — "Checking authority" is vague and skips the explicit authentication step. Authentication (verifying identity) must happen before any authority check. This answer conflates authentication with authorization.
C) Evaluates permissions and request, considers STS and organizational boundaries — This gets the order wrong and is too vague. It doesn't mention the authentication step first. Also, STS is a service for issuing temporary credentials — it's not a step in the IAM evaluation process. STS is involved before the request is made, not during evaluation.
D) Examines organizational boundaries first — Organizational boundaries (SCPs) are evaluated after authentication, not first. You must authenticate the principal before you can determine which organization and which SCPs apply to them. This reverses the correct order.
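If you want to see this evaluation logic in action without making real requests, IAM's policy simulator exposes it. Below is a minimal sketch (Python/boto3, hypothetical ARNs) that asks whether a principal would be allowed to perform an action on a resource.
import boto3

iam = boto3.client("iam")

# Simulate the evaluation for a given principal, action, and resource (placeholder ARNs).
result = iam.simulate_principal_policy(
    PolicySourceArn="arn:aws:iam::123456789012:role/AppRole",
    ActionNames=["s3:GetObject"],
    ResourceArns=["arn:aws:s3:::example-bucket/key.txt"],
)

for evaluation in result["EvaluationResults"]:
    # EvalDecision is one of: allowed, explicitDeny, implicitDeny
    print(evaluation["EvalActionName"], evaluation["EvalDecision"])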
Q37.What is the most flexible method for managing access to encryption keys based on whether the user requires cryptographic or administrative permissions?
✓ Correct: A. KMS Encryption Context allows flexible, granular access control through key-value pairs that distinguish cryptographic from administrative permissions.
How to Think About This:
When a question asks about "flexible" or "granular" access control for encryption keys — especially distinguishing between cryptographic operations (encrypt, decrypt, generate data keys) vs administrative operations (create, enable, disable, rotate keys) — think Encryption Context. It adds conditional logic to key policies, allowing the same CMK to be used differently by different principals based on context.
Key Concepts:
KMS Encryption Context — A set of key-value pairs included with every encrypt/decrypt API call. For example: {"department": "finance", "project": "payroll"}. The encryption context serves dual purposes: (1) Access control — Key policies can use kms:EncryptionContext conditions to restrict which principals can perform cryptographic operations based on the context values. (2) Audit trail — The context is logged in CloudTrail, so you can see exactly what data was being encrypted/decrypted and by whom.
Cryptographic vs Administrative Permissions — KMS operations fall into two categories:
• Administrative: CreateKey, EnableKey, DisableKey, ScheduleKeyDeletion, PutKeyPolicy, DescribeKey
• Cryptographic: Encrypt, Decrypt, GenerateDataKey, ReEncrypt
Encryption Context allows you to create policies that grant cryptographic permissions only when specific context values are present, while administrative permissions have no context requirement.
Why A is correct: Encryption Context provides the most flexible method because you can define arbitrary key-value conditions. Different teams, projects, or environments can share the same CMK while having completely different access rights based on the context values they provide. This is more granular than just separating admin vs crypto permissions — it adds conditional, attribute-based access control.
Why others are wrong:
B) Data Encryption Key (DEK) — A DEK is a key used to encrypt actual data (envelope encryption). It's generated by KMS via GenerateDataKey. DEKs are about how data is encrypted, not about managing access control to the CMK. A DEK doesn't differentiate between cryptographic and administrative permissions.
C) AWS Managed CMK — AWS-managed CMKs are created and managed by AWS services (e.g., aws/s3, aws/ebs). You have limited control over their key policies — you can't customize access conditions or separate admin/crypto permissions. They're convenient but not flexible.
D) Customer Managed CMK — While customer-managed CMKs provide more control than AWS-managed ones (you can define custom key policies), the CMK itself is just the key. The flexibility in managing granular access comes from Encryption Context conditions within the key policy, not from the CMK type alone. A customer-managed CMK without Encryption Context still lacks the conditional granularity the question asks about.
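To make the mechanism concrete, here is a minimal sketch of supplying an encryption context on encrypt and decrypt calls (Python/boto3, hypothetical key alias and values). A key policy could then use a kms:EncryptionContext:department condition to control which principals may perform these cryptographic operations.
import boto3

kms = boto3.client("kms")
context = {"department": "finance", "project": "payroll"}

# The same context must be supplied on decrypt, and it is recorded in CloudTrail.
encrypted = kms.encrypt(
    KeyId="alias/payroll-key",          # placeholder customer managed key alias
    Plaintext=b"account-number-1234",
    EncryptionContext=context,
)

decrypted = kms.decrypt(
    CiphertextBlob=encrypted["CiphertextBlob"],
    EncryptionContext=context,
)
print(decrypted["Plaintext"])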
Q38.What is the most secure way to set up a policy for a Lambda function that needs to write to three specific DynamoDB tables?
✓ Correct: C. Following least privilege — specify each table ARN individually and only the required action (PutItem).
How to Think About This:
When a question asks for the "most secure" IAM policy, apply the Principle of Least Privilege at both the action level and the resource level. The most secure policy: (1) Lists only the specific API actions needed — dynamodb:PutItem, not dynamodb:*; (2) Lists only the specific resource ARNs — each table's full ARN, not * or a wildcard pattern. If an answer uses any wildcard (*) where a specific value is available, it's less secure.
Key Concepts:
IAM Policy Structure — Every IAM policy has three key elements: Effect (Allow/Deny), Action (what API calls), and Resource (which specific AWS resources). The tightest policy specifies exact values for all three.
DynamoDB ARN Format — A DynamoDB table ARN looks like: arn:aws:dynamodb:us-east-1:123456789012:table/MyTable. When writing a policy for 3 specific tables, you list all 3 ARNs in the Resource array. This ensures the Lambda function can only write to those 3 tables and nothing else.
Why Wildcards Are Dangerous — Using "Resource": "*" means the function can write to every DynamoDB table in the account. If the function is compromised, the blast radius is enormous. Using dynamodb:* for the action means the function can do anything — read, delete, create tables — not just write items.
Why C is correct: Listing each table ARN individually with only dynamodb:PutItem as the action is the most restrictive policy. The Lambda function can only write new items to exactly those 3 tables — nothing more. This is the textbook implementation of least privilege.
Why others are wrong:
A) Specify PutItem with arn:...table/ev — This specifies only a partial table name /ev, which likely uses a wildcard prefix pattern or is incomplete. It doesn't clearly specify all 3 tables individually. Even if it worked, matching a pattern like table/ev* could accidentally include future tables that start with "ev."
B) Use table/ with dynamodb:* — dynamodb:* grants all DynamoDB operations: PutItem, DeleteItem, DeleteTable, CreateTable, Scan, Query, etc. This massively violates least privilege. The Lambda only needs to write, not delete tables or scan all data.
D) Use "*" for resource with PutItem — While the action is correctly restricted to PutItem, using "*" for the resource means the function can write to any DynamoDB table in the account. If the account has sensitive tables (user credentials, financial data), the compromised function could write garbage data to all of them.
Q39.What is the most suitable approach to ensure a private and consistent network connection between your AWS application and on-premises data center?
✓ Correct: B. AWS Direct Connect + VPN provides a dedicated, consistent, and encrypted connection.
How to Think About This:
When a question mentions "private" + "consistent" connection to on-premises, think Direct Connect. When it also implies encryption, think Direct Connect + VPN. Important: Direct Connect by itself is private (dedicated fiber, not internet) but is NOT encrypted. Adding a VPN tunnel over the Direct Connect connection provides encryption. This combination gives you all three: private, consistent, and encrypted.
Key Concepts:
AWS Direct Connect — A dedicated, physical network connection between your data center and an AWS Direct Connect location (colocation facility). Unlike VPN, it does not traverse the public internet. Benefits: consistent latency (no internet routing variability), higher bandwidth (up to 100 Gbps), and reduced data transfer costs. However, Direct Connect traffic is not encrypted by default — it travels over a dedicated fiber link, which is private but not cryptographically protected.
Site-to-Site VPN over Direct Connect — By establishing an IPsec VPN tunnel over the Direct Connect link, you get encryption in transit. The VPN adds cryptographic protection to the already-private Direct Connect connection. This is the gold standard for hybrid connectivity when both privacy and encryption are required.
Consistency — The key differentiator from internet-based VPN. VPN over the internet shares bandwidth with all other internet traffic, leading to variable latency and potential packet loss during congestion. Direct Connect provides dedicated bandwidth that isn't affected by internet conditions, making it "consistent."
Why B is correct: The question asks for "private AND consistent." Only Direct Connect provides consistency (dedicated link, no internet variability). Adding VPN provides encryption. Together, this combination satisfies all requirements: private (dedicated fiber, not internet), consistent (guaranteed bandwidth), and secure (IPsec encryption).
Why others are wrong:
A) Use a site-to-site VPN — A standalone VPN goes over the public internet. While encrypted (IPsec), it is NOT consistent — internet routing varies, congestion causes latency spikes, and bandwidth is shared. For applications requiring predictable performance (database replication, real-time data sync), internet VPN is unreliable.
C) Use VPC peering — VPC peering connects two VPCs within AWS. It cannot connect a VPC to an on-premises data center. VPC peering is for AWS-to-AWS communication, not hybrid cloud connectivity.
D) Transfer over internet using CloudFront and HTTPS — CloudFront is a CDN for distributing content to end users. HTTPS provides encryption but the connection still goes over the public internet, making it inconsistent. CloudFront is designed for content delivery, not for private, persistent data center connectivity.
Q40.What is the most suitable way to manage console and service access across multiple AWS accounts for your client's expanding employee base?
✓ Correct: B. AWS Organizations + ADFS federation provides centralized, scalable access management across multiple accounts using existing corporate identities.
How to Think About This:
When a question mentions "multiple AWS accounts" + "expanding employee base" + "console and service access", think: AWS Organizations for account management + Federation for identity management. The key insight is that you want employees to use their existing corporate credentials (Active Directory) rather than creating separate IAM users in every AWS account. ADFS (Active Directory Federation Services) bridges corporate AD to AWS via SAML 2.0.
Key Concepts:
AWS Organizations — A service that lets you manage multiple AWS accounts as a single unit. You can create organizational units (OUs), apply Service Control Policies (SCPs) across accounts, and manage billing centrally. This is the foundation for multi-account management at scale.
ADFS (Active Directory Federation Services) — A Microsoft service that provides SAML 2.0 federation. When an employee logs in through ADFS with their corporate AD credentials, ADFS generates a SAML assertion. AWS receives this assertion and maps it to an IAM role via a trust policy. The employee gets temporary credentials to access the AWS console and services — without needing a separate IAM user in any AWS account.
Why Federation + Organizations Scale — As the company grows: (1) New employees get AD accounts through normal HR onboarding — no AWS-specific setup needed. (2) New AWS accounts are added to the Organization with consistent SCPs. (3) ADFS role mappings automatically apply to new accounts. (4) Offboarding is instant — disable the AD account, and all AWS access is revoked across every account simultaneously.
Why B is correct: This is the AWS-recommended architecture for enterprises with existing Active Directory infrastructure. Organizations provides centralized multi-account management, and ADFS provides single sign-on using existing corporate identities. It scales with both new accounts and new employees without creating IAM users.
Why others are wrong:
A) Deploy AD Connector and sync on-premises AD — AD Connector is a directory proxy that forwards authentication requests to on-premises AD. While useful for single-account scenarios, it doesn't provide the multi-account federation capabilities needed here. It also doesn't "sync" AD — that would be AWS Directory Service for Microsoft AD. AD Connector alone doesn't solve cross-account access management at scale.
C) Use OAuth Identity Provider in IAM with AD groups — IAM supports SAML 2.0 and OIDC federation, but configuring an "OAuth Identity Provider" per account doesn't scale well. Also, IAM identity providers are configured per-account, not centrally. This approach requires repetitive configuration in every account as new ones are added.
D) Install Active Directory Sync appliance in vCenter — vCenter is VMware's virtualization management platform. Installing an AD sync appliance there is an on-premises infrastructure concern that doesn't address AWS multi-account access management. This doesn't help employees access AWS console or services.
E) Use Cognito with web federation against AD — Cognito is designed for customer-facing applications (mobile apps, web apps), not for enterprise employee access to the AWS console. Cognito doesn't integrate with AWS Organizations or provide IAM role mapping for console access.
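Under the hood, the ADFS sign-in resolves to a single STS call. Below is a minimal sketch (Python/boto3, hypothetical ARNs); the base64 SAML assertion is the response ADFS posts back after the employee signs in with their AD credentials.
import boto3

sts = boto3.client("sts")

saml_assertion = "BASE64_SAML_RESPONSE_FROM_ADFS"  # placeholder assertion from the ADFS sign-in page

resp = sts.assume_role_with_saml(
    RoleArn="arn:aws:iam::123456789012:role/ADFS-Administrators",  # placeholder role mapped to an AD group
    PrincipalArn="arn:aws:iam::123456789012:saml-provider/ADFS",   # placeholder SAML provider in IAM
    SAMLAssertion=saml_assertion,
    DurationSeconds=3600,
)
print(resp["Credentials"]["Expiration"])  # temporary credentials for console or API access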
Q41.What method would you recommend for consolidating CloudTrail logs across multiple AWS accounts and querying them using SQL?
✓ Correct: B. CloudTrail logs natively to S3, and Athena provides serverless SQL querying directly against S3 data.
How to Think About This:
This is a repeat of the Q2 pattern: CloudTrail + S3 + Athena is the standard AWS architecture for log consolidation and SQL querying. When you see "consolidate CloudTrail logs" + "SQL query" in the same question, the answer is always: logs to one S3 bucket, query with Athena. No data movement, no additional infrastructure, no ETL pipeline needed.
Key Concepts:
Multi-Account CloudTrail to S3 — Each AWS account configures its CloudTrail to deliver logs to the same centralized S3 bucket. The bucket's policy allows cross-account writes from CloudTrail. Logs are organized by account ID prefix: s3://central-bucket/AWSLogs/111111111111/, s3://central-bucket/AWSLogs/222222222222/, etc.
Amazon Athena — A serverless query engine that reads data directly from S3 using standard SQL. You create a table definition that maps to the CloudTrail JSON log format, then run queries like SELECT * FROM cloudtrail_logs WHERE eventSource = 'iam.amazonaws.com'. No servers to manage, pay per query, instant setup.
Why This Pattern Wins — CloudTrail already writes JSON to S3 (no custom setup). Athena already understands CloudTrail's schema (AWS provides a pre-built table definition). The total infrastructure cost is: S3 storage + Athena per-query charges. No databases to provision, no ETL jobs to maintain, no data to move or transform.
Why B is correct: It's the simplest, most cost-effective, and AWS-recommended approach. CloudTrail's native output is S3, and Athena's native input is S3. The two services connect directly with zero data movement or transformation.
Why others are wrong:
A) Store in DynamoDB, query with Athena — DynamoDB is a key-value/document database, not designed for log storage or analytical queries. Loading CloudTrail logs into DynamoDB requires an ETL pipeline, adds cost, and is architecturally wrong. Also, Athena queries S3, not DynamoDB.
C) Store in RDS, query with Athena — RDS is a relational database. Loading millions of CloudTrail log entries into RDS is expensive (compute + storage), slow (ETL required), and unnecessary. Athena doesn't query RDS — it queries S3. This adds two layers of unnecessary infrastructure.
D) Log to S3, query with Macie — Amazon Macie discovers and classifies sensitive data (PII, credentials) in S3. It does not provide SQL query capabilities. Macie answers "is there sensitive data in this bucket?" — not "show me all API calls from IP 10.0.0.1."
E) Log to S3, query with DynamoDB — DynamoDB is a database engine, not a query tool for S3 data. It cannot read or query files stored in S3. DynamoDB stores and retrieves its own data using key-value lookups — it has no capability to analyze external data sources.
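For reference, a minimal sketch of the central bucket policy that lets CloudTrail in each member account write its logs; the bucket name and account IDs are placeholders, and the bucket-owner-full-control condition follows the documented CloudTrail delivery pattern.
central_bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AWSCloudTrailAclCheck",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:GetBucketAcl",
            "Resource": "arn:aws:s3:::central-bucket",
        },
        {
            "Sid": "AWSCloudTrailWrite",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:PutObject",
            "Resource": [
                "arn:aws:s3:::central-bucket/AWSLogs/111111111111/*",  # placeholder member accounts
                "arn:aws:s3:::central-bucket/AWSLogs/222222222222/*",
            ],
            "Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}},
        },
    ],
}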
Q42.What service would you recommend for protection against DDoS, SQL injection, and cross-site scripting? Select All
✓ Correct: A, B, C, E. WAF handles SQL injection, XSS, and application-layer DDoS. Shield handles network/transport-layer DDoS.
How to Think About This:
This question tests whether you understand the division of responsibility between WAF and Shield. The mental model:
• AWS WAF = Layer 7 (application) protection → SQL injection, XSS, rate-based DDoS mitigation
• AWS Shield = Layer 3/4 (network/transport) protection → volumetric DDoS, SYN floods, UDP reflection
Shield does NOT inspect HTTP request content, so it cannot detect SQL injection or XSS. WAF CAN help with application-layer DDoS through rate-based rules (blocking IPs that send too many requests).
Key Concepts:
AWS WAF for SQL Injection (A) — WAF has managed rule groups that inspect HTTP request parameters (query strings, form bodies, headers) for SQL injection patterns like ' OR 1=1 --. When detected, WAF blocks the request before it reaches your application.
AWS Shield for DDoS (B) — Shield Standard (free, automatic) protects against common Layer 3/4 DDoS attacks. Shield Advanced (paid) adds DDoS response team, advanced detection, and cost protection. Shield absorbs volumetric attacks at the network edge before they reach your infrastructure.
AWS WAF for DDoS (C) — WAF's rate-based rules can mitigate application-layer (Layer 7) DDoS — like HTTP flood attacks where attackers send massive numbers of legitimate-looking HTTP requests. WAF can automatically block IPs that exceed a request threshold (e.g., 2000 requests per 5 minutes).
AWS WAF for XSS (E) — WAF's XSS match rules inspect request parameters for cross-site scripting payloads like <script>alert(1)</script>. AWS provides managed rule groups specifically for XSS detection.
Why A, B, C, E are correct: WAF protects against application-layer threats (SQL injection, XSS) and can help with application-layer DDoS via rate limiting. Shield protects against network-layer DDoS. Together they cover all three attack types mentioned in the question, but each handles different layers.
Why others are wrong:
D) AWS Shield for SQL injection — Shield operates at Layers 3/4 (IP, TCP, UDP). It analyzes packet volumes, connection rates, and protocol patterns — but it does not inspect HTTP content. SQL injection is embedded in HTTP request parameters (Layer 7). Shield literally cannot see SQL injection payloads because they're inside application-layer data that Shield doesn't inspect.
F) AWS Shield for cross-site scripting — Same reason as D. XSS payloads are inside HTTP request/response bodies (Layer 7). Shield doesn't parse HTTP content. Only WAF, which operates at Layer 7 and inspects HTTP request components, can detect and block XSS patterns.
Q43.What steps should you take to change a vault lock policy that's in an in-progress state?
✓ Correct: D. An in-progress vault lock can be aborted, then a new lock policy can be initiated.
How to Think About This:
Vault Lock has two states: in-progress and locked (completed). The critical distinction: in-progress locks can be aborted (deleted), but completed locks are immutable forever — even AWS cannot change them. If you need to modify an in-progress lock policy, you must abort it first, then start fresh. You cannot edit it in place.
Key Concepts:
S3 Glacier Vault Lock — A compliance feature that enforces retention policies on Glacier vaults using an immutable policy. Common in regulated industries (healthcare, finance, legal) where records must be preserved for a specific period and cannot be deleted — even by administrators.
Vault Lock Lifecycle — The process has specific steps:
• Initiate Lock — Call InitiateVaultLock with your policy. This sets the lock to "in-progress" state and returns a lock ID. You have 24 hours to validate and complete the lock.
• In-Progress State — During this 24-hour window, you can test the policy. If it's wrong, you can abort it with AbortVaultLock and start over.
• Complete Lock — Call CompleteVaultLock to finalize. The policy becomes permanently immutable. No one — not even the root account or AWS support — can modify or delete it.
Why Immutability Matters — Regulatory compliance (SEC Rule 17a-4, HIPAA, WORM requirements) demands that certain data retention policies cannot be altered after being set. Vault Lock provides this legal guarantee.
Why D is correct: When a vault lock is in-progress and you need to change the policy, the only option is to abort the current lock (delete it entirely) and then initiate a new one with the corrected policy. You cannot modify an in-progress lock in place — it's all or nothing.
Why others are wrong:
A) Can't modify once in-progress — This is partially true (you can't edit it), but misleading because it implies you're stuck. You CAN abort an in-progress lock and start fresh. Only completed locks are truly unmodifiable.
B) Validate the lock, then update it — Validating (completing) the lock makes it permanently immutable. Once validated/completed, you can NEVER update it. This answer has the steps backwards and would result in a permanent, incorrect policy.
C) Update the in-progress lock, then validate — You cannot update an in-progress lock policy. There is no API to modify the policy text of an existing lock. The only options are: abort and restart, or complete it as-is.
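A minimal sketch of this abort-and-restart flow using boto3; the vault name and the replacement policy are hypothetical placeholders:

import json
import boto3

glacier = boto3.client("glacier")
vault = "compliance-vault"  # hypothetical vault name

# The in-progress lock has the wrong policy, so abort it entirely...
glacier.abort_vault_lock(accountId="-", vaultName=vault)

# ...then initiate a new lock with the corrected policy (starts a fresh 24-hour test window).
corrected_policy = {"Version": "2012-10-17", "Statement": []}  # fill in the real retention statement
resp = glacier.initiate_vault_lock(
    accountId="-",
    vaultName=vault,
    policy={"Policy": json.dumps(corrected_policy)},
)

# Only once the policy has been verified should it be made permanently immutable.
glacier.complete_vault_lock(accountId="-", vaultName=vault, lockId=resp["lockId"])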
Q44.What strategy will you employ to identify S3 buckets that contain personally identifiable information (PII)?
✓ Correct: C. Amazon Macie is specifically designed to discover, classify, and protect sensitive data including PII in S3 buckets.
How to Think About This:
When a question asks about "finding PII" or "sensitive data in S3", the answer is always Amazon Macie. Macie is the purpose-built service for this exact use case. It uses machine learning and pattern matching to automatically scan S3 buckets and identify data types like Social Security numbers, credit card numbers, passport numbers, email addresses, and other PII categories.
Key Concepts:
Amazon Macie — A fully managed data security and privacy service. Key capabilities:
• Automated Discovery — Scans S3 buckets automatically, identifying sensitive data types using built-in ML models and custom data identifiers
• PII Detection — Recognizes 100+ sensitive data types: SSN, credit cards, driver's licenses, health records, financial data, API keys, and more
• S3 Inventory — Provides a complete view of your S3 security posture: which buckets are public, unencrypted, or shared externally
• Findings — Generates detailed findings for each discovery, integrated with Security Hub and EventBridge for automated workflows
Macie vs Other Services — The exam frequently tests this distinction:
• Macie = "What sensitive data is IN my S3?" (content inspection)
• GuardDuty = "Who is accessing my S3 suspiciously?" (threat detection)
• Inspector = "Are my EC2/Lambda vulnerable?" (vulnerability scanning)
• Config = "Is my S3 bucket configured correctly?" (configuration compliance)
Why C is correct: Macie is the AWS-native, purpose-built solution for discovering PII in S3. It requires minimal setup (enable Macie, select buckets to scan), provides automated continuous scanning, and is the AWS-recommended approach for data classification and PII discovery.
Why others are wrong:
A) Use Athena to identify PII — Athena is a SQL query engine for structured data in S3. While you could theoretically write regex-based SQL queries to find patterns that look like SSNs or credit card numbers, this is manual, error-prone, doesn't scale, and misses many PII types. Athena is for data analysis, not security classification.
B) Lambda + Amazon Comprehend — Comprehend is an NLP service that can detect PII in text. While this could work, it requires custom development: writing Lambda code to iterate through S3 objects, read their contents, call Comprehend APIs, and process results. Macie does all of this out of the box with zero custom code. Building a custom solution when a managed service exists is not the "best strategy."
D) Amazon Inspector for PII — Inspector is a vulnerability scanner for EC2 instances, Lambda functions, and container images. It detects software vulnerabilities (CVEs), network exposure, and unpatched packages. It does not scan S3 data content and has no PII detection capability. Inspector answers "is my compute infrastructure vulnerable?" — not "does my data contain PII?"
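A minimal boto3 sketch of enabling Macie and running a one-time PII discovery job; the account ID and bucket names are placeholders:

import boto3

macie = boto3.client("macie2")

macie.enable_macie()  # one-time enablement per account and region

# One-time sensitive-data discovery job over two hypothetical buckets.
macie.create_classification_job(
    jobType="ONE_TIME",
    name="pii-discovery",                                # illustrative job name
    s3JobDefinition={
        "bucketDefinitions": [{
            "accountId": "111122223333",                 # placeholder account ID
            "buckets": ["customer-uploads", "app-exports"],  # placeholder buckets
        }]
    },
)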
Q45.What strategy would you recommend for logging all changes to the AWS infrastructure in a way that prevents tampering or deletion? Select All
✓ Correct: A, B, D, E. CloudTrail uses SHA-256 digest validation, not MD5. All other options are valid tamper-proof logging strategies.
How to Think About This:
The question asks about tamper-proof logging of infrastructure changes. Think about three layers: (1) What to log — CloudTrail for API calls, AWS Config for configuration state, CloudWatch Logs for application/service logs, (2) Where to store — dedicated S3 bucket in a separate security account, (3) How to protect — restrict access, enable integrity validation, use immutable storage. The trap answer is the MD5 checksum — CloudTrail uses SHA-256, not MD5.
Key Concepts:
CloudTrail in All Regions (E) — By default, a trail may only log events in its own region. To capture ALL API activity across your entire AWS footprint, enable a multi-region trail. Store logs in a dedicated S3 bucket in a centralized security account that is separate from workload accounts — this way, no one in a workload account can delete the logs.
Restrict CloudTrail Changes (B) — Use IAM policies and SCPs to ensure only the security team can modify CloudTrail configuration (disable logging, change S3 destination, delete trails). If a compromised administrator could disable CloudTrail, they could cover their tracks.
CloudWatch Logs (A) — CloudTrail can deliver logs to CloudWatch Logs for real-time monitoring. CloudWatch Logs can have resource policies that restrict who can delete log groups. Store in a dedicated log group with retention policies.
AWS Config (D) — AWS Config continuously records the configuration state of your resources (what changed, when, and to what value). CloudTrail tells you who made a change; Config tells you what the change was. Both are needed for complete audit coverage. Config snapshots stored in a dedicated S3 bucket provide a point-in-time record of infrastructure state.
CloudTrail Log File Integrity Validation — CloudTrail can generate SHA-256 digest files for log validation. These digests let you verify that log files haven't been modified or deleted after delivery to S3. This is cryptographic proof of log integrity — important for compliance and forensic investigations.
Why A, B, D, E are correct: Each addresses a different aspect of tamper-proof logging: CloudTrail captures API activity everywhere (E), restricted access prevents unauthorized changes (B), AWS Config captures configuration state (D), and CloudWatch Logs provides real-time analysis (A). Together, they create a comprehensive, tamper-resistant audit system.
Why C is wrong:
C) Validate MD5 checksum for log files — CloudTrail does NOT use MD5 for log file integrity validation. It uses SHA-256 with RSA signing. MD5 is considered cryptographically broken (collision attacks are feasible), so AWS doesn't rely on it for security-critical validation. If you see "MD5" in the context of CloudTrail integrity validation, it's always wrong. The correct mechanism is CloudTrail's built-in digest file validation using SHA-256.
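A minimal boto3 sketch, assuming an existing trail named "org-trail", of turning on the two settings discussed above (multi-region coverage and SHA-256 digest files):

import boto3

cloudtrail = boto3.client("cloudtrail")

# Assumed existing trail name; both flags are safe to re-apply.
cloudtrail.update_trail(
    Name="org-trail",                 # hypothetical trail name
    IsMultiRegionTrail=True,          # capture API activity in every region
    EnableLogFileValidation=True,     # deliver SHA-256 digest files alongside the logs
)

# The digest files are later verified from the CLI:
#   aws cloudtrail validate-logs --trail-arn <trail-arn> --start-time <timestamp>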
Q46.What tasks related to Customer Master Keys (CMKs) can a new system administrator perform if they don't have cryptographic operation permissions? Select All
✓ Correct: A, C, E. CreateKey, DescribeKey, and EnableKey are administrative operations that don't require cryptographic permissions.
How to Think About This:
KMS operations are divided into two distinct categories: administrative (managing keys) and cryptographic (using keys to encrypt/decrypt data). A system administrator without cryptographic permissions can manage the lifecycle of keys but cannot use them to process data. Think of it like a building manager who has keys to the key cabinet (admin) but doesn't have the keys themselves to open specific doors (crypto).
Key Concepts:
Administrative Operations — These manage the CMK itself without touching any data:
• CreateKey (A) — Creates a new CMK in KMS. Does not encrypt or decrypt anything.
• DescribeKey (C) — Returns metadata about a CMK (key ID, creation date, key state, key spec). Read-only, no cryptographic operation involved.
• EnableKey / DisableKey (E) — Toggles whether a CMK is available for use. Enabling a key doesn't perform any encryption — it just changes the key's state from "Disabled" to "Enabled."
• Other admin operations: PutKeyPolicy, TagResource, ScheduleKeyDeletion, ListKeys, GetKeyRotationStatus
Cryptographic Operations — These use the CMK to process data:
• GenerateDataKey (B) — Generates a data encryption key (DEK) encrypted under the CMK. This is a cryptographic operation because it uses the CMK's key material.
• Decrypt (D) — Decrypts data that was encrypted with the CMK. This directly uses the CMK's cryptographic capabilities.
• Other crypto operations: Encrypt, ReEncrypt, GenerateDataKeyWithoutPlaintext, Sign, Verify
Why A, C, E are correct: CreateKey, DescribeKey, and EnableKey are all administrative operations that manage the key lifecycle. A system administrator with only administrative permissions can create keys, inspect their properties, and enable/disable them — but cannot use them to encrypt, decrypt, or generate data keys.
Why others are wrong:
B) GenerateDataKey — This is a cryptographic operation. It uses the CMK's key material to generate and encrypt a data encryption key. Without cryptographic permissions, the administrator cannot call this API.
D) Decrypt — This is a cryptographic operation. It uses the CMK to decrypt ciphertext. This directly exercises the CMK's cryptographic capabilities, which requires explicit cryptographic permissions in the key policy.
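A minimal boto3 sketch contrasting the administrative calls an admin-only principal can make with a cryptographic call that the key policy would deny; the key description is illustrative:

import boto3

kms = boto3.client("kms")

# Administrative operations: allowed with key-administration permissions only.
key = kms.create_key(Description="app-data key")        # CreateKey
key_id = key["KeyMetadata"]["KeyId"]
kms.describe_key(KeyId=key_id)                          # DescribeKey (read-only metadata)
kms.enable_key(KeyId=key_id)                            # EnableKey (state change only)

# Cryptographic operation: denied unless the key policy grants kms:GenerateDataKey.
kms.generate_data_key(KeyId=key_id, KeySpec="AES_256")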
Q47.What tool should you use to capture IP traffic information to and from network interfaces in your VPC?
✓ Correct: B. VPC Flow Logs is the AWS-native service specifically designed to capture IP traffic metadata for network interfaces.
How to Think About This:
When a question asks about "IP traffic information" + "network interfaces" + "VPC", the answer is always VPC Flow Logs. This is a direct knowledge question. Flow Logs capture network-level metadata (not packet contents) — source/destination IPs, ports, protocols, packet counts, byte counts, and whether traffic was accepted or rejected.
Key Concepts:
VPC Flow Logs — A feature that captures information about IP traffic going to and from network interfaces in your VPC. Flow Logs can be configured at three levels:
• VPC level — Captures traffic for all ENIs in the VPC
• Subnet level — Captures traffic for all ENIs in a specific subnet
• ENI level — Captures traffic for a specific network interface
What Flow Logs Capture — Each flow log record includes: version, account ID, interface ID, source/destination IP, source/destination port, protocol number, packets, bytes, start/end time, action (ACCEPT/REJECT), and log status. Flow Logs do not capture packet payloads (content) — they only capture metadata.
Flow Log Destinations — Logs can be sent to: CloudWatch Logs (for real-time analysis and alarms), S3 (for long-term storage and Athena queries), or Kinesis Data Firehose (for streaming analytics).
Why B is correct: VPC Flow Logs is the purpose-built AWS service for capturing IP traffic information at network interfaces. No other AWS service provides this specific functionality. It's the standard tool for network troubleshooting, security analysis, and compliance monitoring in VPCs.
Why others are wrong:
A) Third-party packet sniffer — While third-party tools can capture network traffic, this is not the AWS-native approach. Packet sniffers capture full packet contents (deep packet inspection), which is different from Flow Logs' metadata-only approach. Also, in a cloud environment, installing packet sniffers adds complexity and may not have access to all network interfaces.
C) AWS WAF and CloudWatch Logs — WAF logs HTTP request details for web traffic passing through WAF-protected resources (ALB, CloudFront, API Gateway). It doesn't capture general IP traffic to/from all network interfaces — only HTTP traffic through WAF integration points.
D) Application logs in CloudWatch Logs — Application logs capture what your application generates (errors, transactions, debug info). They don't capture network-level IP traffic information. An application doesn't know about all traffic hitting its network interface — only the connections it handles.
E) All of the Above — Not correct because options A, C, and D don't specifically capture IP traffic information to/from network interfaces.
F) None of the Above — Incorrect because VPC Flow Logs (B) is the right answer.
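A minimal boto3 sketch of enabling Flow Logs at the VPC level with delivery to S3; the VPC ID and bucket ARN are placeholders:

import boto3

ec2 = boto3.client("ec2")

ec2.create_flow_logs(
    ResourceType="VPC",                              # or "Subnet" / "NetworkInterface"
    ResourceIds=["vpc-0123456789abcdef0"],           # placeholder VPC ID
    TrafficType="ALL",                               # ACCEPT, REJECT, or ALL
    LogDestinationType="s3",
    LogDestination="arn:aws:s3:::flow-log-archive",  # placeholder bucket ARN
)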
Q48.What's the best way to set up an Intrusion Detection and Prevention System (IDPS) for your AWS infrastructure? Select All
✓ Correct: C, D, E. GuardDuty (managed IDS), AWS Network Firewall (IPS with Suricata rules), and third-party Marketplace solutions provide comprehensive IDPS.
How to Think About This:
IDPS = Intrusion Detection AND Prevention. You need services that can both detect threats and block them. AWS provides this through a combination: GuardDuty for intelligent detection, Network Firewall for inline prevention, and the Marketplace for specialized third-party solutions. Services that only log (Flow Logs) or only filter at host level (iptables) don't qualify as proper IDPS for your AWS infrastructure.
Key Concepts:
GuardDuty as IDS (C) — GuardDuty is a managed Intrusion Detection System. It analyzes CloudTrail logs, VPC Flow Logs, and DNS logs using machine learning and threat intelligence feeds to detect: credential compromise, unauthorized access, cryptocurrency mining, command & control communication, and data exfiltration. It generates findings (alerts) but doesn't block traffic directly. For prevention, you pair GuardDuty findings with automated responses (Lambda → NACL updates, Security Group changes).
AWS Network Firewall as IPS (D) — Network Firewall provides Intrusion Prevention System capabilities through Suricata-compatible rules. It sits inline in your VPC (traffic routes through it) and can: detect and block exploit attempts, perform deep packet inspection, and enforce protocol compliance. The "P" in IPS means it actively prevents intrusions, not just detects them.
Third-Party Solutions (E) — AWS Marketplace offers numerous IDPS solutions (Trend Micro, Palo Alto, Fortinet, etc.) that run on EC2 instances. These provide additional capabilities like advanced threat intelligence, proprietary detection algorithms, and unified management across hybrid environments. AWS explicitly supports and recommends Marketplace solutions as valid IDPS options.
Why C, D, E are correct: Together, these three options provide a complete IDPS strategy: GuardDuty for cloud-native threat detection, Network Firewall for inline traffic inspection and prevention, and Marketplace solutions for specialized or vendor-specific capabilities. This layered approach covers both detection and prevention.
Why others are wrong:
A) iptables — iptables is a host-level Linux firewall. It runs on a single EC2 instance and only protects that specific instance's network traffic. It's not a VPC-wide IDPS solution — it doesn't provide centralized detection, can't inspect traffic for other instances, and requires manual configuration on every host. For "AWS infrastructure" IDPS, you need VPC-level solutions, not per-host firewalls.
B) VPC Flow Logs — Flow Logs are passive logging only. They record traffic metadata (IPs, ports, accept/reject) but do not inspect packet contents, detect intrusion patterns, or block anything. Flow Logs tell you "traffic happened" — they don't tell you "this traffic is malicious" or prevent it. They're an input for analysis tools, not an IDPS themselves.
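A minimal boto3 sketch of the detection half of this setup, enabling GuardDuty in the current region; prevention would then be wired up separately (for example, EventBridge rules driving Network Firewall or security-group updates):

import boto3

guardduty = boto3.client("guardduty")

# One detector per account per region; findings feed EventBridge for automated response.
detector = guardduty.create_detector(
    Enable=True,
    FindingPublishingFrequency="FIFTEEN_MINUTES",
)
print(detector["DetectorId"])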
Q49.Where can the administrator find documents confirming AWS's PCI-DSS 3.2 Level 1 Service Provider status?
✓ Correct: D. AWS Artifact provides on-demand access to AWS compliance documentation including PCI-DSS attestation reports.
How to Think About This:
When a question asks "where to find AWS compliance documents/certifications/audit reports", the answer is always AWS Artifact. Artifact is the self-service portal for downloading AWS's own compliance documentation. Think of it as the "document library" for all AWS certifications and attestation reports. Don't confuse it with services that help you become compliant (Config, Security Hub) — Artifact shows that AWS itself is compliant.
Key Concepts:
AWS Artifact — A free, self-service portal accessible from the AWS Management Console. It provides two types of documents:
• Artifact Reports — AWS's compliance reports from third-party auditors: PCI-DSS Attestation of Compliance (AOC), SOC 1/2/3 reports, ISO 27001 certification, FedRAMP authorizations, HIPAA documentation, and more.
• Artifact Agreements — Legal agreements between your organization and AWS: Business Associate Addendum (BAA) for HIPAA, Data Processing Agreement (DPA), etc.
PCI-DSS Level 1 — Payment Card Industry Data Security Standard. Level 1 is the highest level of compliance, required for organizations processing over 6 million card transactions per year. AWS maintains PCI-DSS Level 1 Service Provider status, meaning AWS's infrastructure meets the strictest payment card security standards. The attestation report proving this is available in Artifact.
Why D is correct: AWS Artifact is the only service designed to provide compliance documentation. An administrator needing to show an auditor that AWS is PCI-DSS compliant would download the Attestation of Compliance (AOC) from Artifact.
Why others are wrong:
A) Amazon Macie — Macie discovers sensitive data (PII) in S3 buckets. It helps your organization protect sensitive data but does not provide AWS's compliance certifications or audit reports.
B) Amazon DocumentDB — DocumentDB is a managed MongoDB-compatible database service. It has nothing to do with compliance documentation. The name "DocumentDB" refers to document-oriented databases (JSON documents), not compliance documents.
C) AWS Security Hub — Security Hub aggregates security findings from multiple AWS services (GuardDuty, Inspector, Macie) and checks your environment against compliance frameworks (CIS Benchmarks, PCI-DSS). However, it checks whether your resources are compliant — it doesn't provide AWS's own attestation reports. Security Hub helps you become compliant; Artifact proves AWS is compliant.
Q50.Which AWS managed service is specifically tailored for SQL injection and cross-site scripting protection?
✓ Correct: C. AWS WAF is specifically designed to protect web applications against SQL injection and XSS attacks.
How to Think About This:
When a question mentions "SQL injection" or "cross-site scripting (XSS)", the answer is AWS WAF. These are Layer 7 (application layer) attacks that target HTTP request content. WAF is the only AWS service that inspects HTTP request parameters and blocks malicious payloads before they reach your application. Memorize this mapping: SQL injection/XSS → WAF.
Key Concepts:
AWS WAF (Web Application Firewall) — A managed firewall that filters HTTP/HTTPS traffic. It works by defining Web ACL rules that inspect request components (query strings, headers, body, URI) for malicious patterns. AWS provides Managed Rule Groups — pre-built rule sets maintained by AWS and AWS Marketplace sellers:
• SQL Injection Rule Group — Detects common SQLi patterns like ' OR 1=1, UNION SELECT, ; DROP TABLE
• XSS Rule Group — Detects script injection like <script> tags, event handlers (onerror=), and JavaScript URIs
• Core Rule Set — Broad protection against OWASP Top 10 vulnerabilities
WAF Deployment Points — WAF integrates with CloudFront, ALB, API Gateway, AppSync, and Cognito. It inspects requests at these integration points before they reach your backend application.
Why C is correct: WAF is the AWS service purpose-built for protecting against application-layer web attacks. Its managed rule groups specifically target SQL injection and XSS patterns, and it integrates natively with common AWS web architecture services.
Why others are wrong:
A) Amazon Macie — Macie discovers sensitive data in S3 buckets (PII detection). It does not inspect web traffic, does not block HTTP requests, and has no SQL injection or XSS detection capability. Macie is about data classification, not web application protection.
B) AWS Shield — Shield protects against DDoS attacks at Layers 3/4 (network/transport). It handles volumetric attacks, SYN floods, and UDP reflection. Shield does not inspect HTTP request content, so it cannot detect SQL injection or XSS payloads embedded in HTTP parameters.
D) Amazon GuardDuty — GuardDuty is a threat detection service that analyzes CloudTrail, VPC Flow Logs, and DNS logs. It detects account compromise, credential abuse, and suspicious API activity — but it does not inspect web application traffic for SQL injection or XSS. GuardDuty operates at the infrastructure level, not the application level.
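A minimal boto3 sketch of the AWS-managed SQL injection rule group expressed as a Web ACL rule; the names are illustrative, and the same pattern applies to the XSS and core rule sets:

import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")  # CLOUDFRONT-scoped ACLs are managed from us-east-1

sqli_rule = {
    "Name": "aws-sqli-rules",
    "Priority": 0,
    "Statement": {"ManagedRuleGroupStatement": {
        "VendorName": "AWS",
        "Name": "AWSManagedRulesSQLiRuleSet",   # AWS-managed SQL injection detection
    }},
    "OverrideAction": {"None": {}},             # keep the rule group's own block actions
    "VisibilityConfig": {"SampledRequestsEnabled": True,
                         "CloudWatchMetricsEnabled": True,
                         "MetricName": "sqliRules"},
}

# sqli_rule would be included in the Rules list of wafv2.create_web_acl(...) or
# wafv2.update_web_acl(...) for the Web ACL protecting the application.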
Q51.Which AWS service should you leverage for threat detection that employs machine learning to protect your AWS accounts, workloads, and S3 data?
✓ Correct: B. AWS GuardDuty uses machine learning and threat intelligence to continuously detect threats across AWS accounts, workloads, and S3.
How to Think About This:
When a question mentions "threat detection" + "machine learning" + "AWS accounts/workloads/S3", the answer is GuardDuty. It's the managed, ML-powered threat detection service for AWS. The key distinguisher: GuardDuty is continuous, automated, and intelligent — it requires no rules, no custom code, and no manual analysis. It learns normal behavior patterns and flags anomalies.
Key Concepts:
Amazon GuardDuty — A managed threat detection service that continuously monitors your AWS environment. It analyzes multiple data sources:
• CloudTrail Management Events — Detects unusual API calls, credential abuse, unauthorized access patterns
• CloudTrail S3 Data Events — Detects suspicious S3 access patterns, data exfiltration
• VPC Flow Logs — Detects network anomalies, port scanning, communication with known malicious IPs
• DNS Logs — Detects DNS-based exfiltration, communication with C2 (command & control) domains
• EKS Audit Logs — Detects Kubernetes-specific threats
Machine Learning in GuardDuty — GuardDuty establishes a baseline of normal activity for your account (what APIs are typically called, from which IPs, at what times). It then uses ML models and threat intelligence feeds (AWS's own + CrowdStrike + Proofpoint) to identify deviations that indicate threats: impossible travel, credential stuffing, cryptocurrency mining, privilege escalation.
Custom Threat Lists — You can add custom blocklists (known malicious IPs, suspicious domains) and trusted IP lists (your corporate IPs to reduce false positives). This customizes GuardDuty's detection to your specific environment.
Why B is correct: GuardDuty is the only option that specifically uses ML for automated threat detection across the full AWS environment (accounts, workloads, S3). Enable it with one click, and it starts detecting threats immediately with no configuration needed.
Why others are wrong:
A) Lambda for reviewing CloudFront logs — This is a custom, manual solution requiring you to write and maintain Lambda code to parse logs and detect threats. It's not ML-powered, not managed, and only covers CloudFront (not accounts, workloads, or S3). Building custom detection when a managed service exists is the wrong approach.
C) Configure CloudWatch for event creation — CloudWatch creates events/alarms based on metrics you define. It doesn't use ML, doesn't analyze security patterns, and doesn't detect threats. CloudWatch tells you "CPU is high" or "error count exceeded threshold" — it doesn't tell you "someone is exfiltrating data."
D) Macie + Lambda for anomaly detection — Macie discovers sensitive data in S3 (PII classification), not threats. While Macie uses ML for data classification, it's not a threat detection service. Adding Lambda for anomaly detection makes this a custom solution, not a managed threat detection service.
E) VPC Flow Logs with ElasticSearch — This is an infrastructure monitoring approach. Flow Logs capture network metadata, and ElasticSearch can index/search it. But this lacks ML-based threat intelligence, requires manual setup and query writing, and only covers network traffic — not API calls, S3 access patterns, or DNS activity.
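A minimal boto3 sketch of the custom threat list and trusted IP list mentioned above; the detector ID and S3 locations are placeholders:

import boto3

guardduty = boto3.client("guardduty")
detector_id = "12abc34d567e8fa901bc2d34e56789f0"   # placeholder detector ID

# Custom blocklist of known-bad IPs (plain-text file stored in S3).
guardduty.create_threat_intel_set(
    DetectorId=detector_id,
    Name="known-bad-ips",
    Format="TXT",
    Location="https://s3.amazonaws.com/security-lists/bad-ips.txt",      # placeholder
    Activate=True,
)

# Trusted IP list (for example, corporate egress IPs) to reduce false positives.
guardduty.create_ip_set(
    DetectorId=detector_id,
    Name="corporate-egress",
    Format="TXT",
    Location="https://s3.amazonaws.com/security-lists/trusted-ips.txt",  # placeholder
    Activate=True,
)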
Q52.Which AWS services should be used to safeguard a web application from SQL injection and XSS attacks? Select All
✓ Correct: C, D. AWS WAF provides the actual SQL injection/XSS rules, and CloudFront serves as the WAF integration point for edge-level protection.
How to Think About This:
This question tests two concepts together: (1) What provides the protection? → WAF (the rule engine), (2) Where do you deploy it? → CloudFront (the integration point). WAF doesn't work in isolation — it must be attached to a supported AWS service. The standard architecture for web app protection is: CloudFront (CDN) + WAF (firewall rules). Users hit CloudFront → WAF inspects requests → clean traffic reaches your origin.
Key Concepts:
AWS WAF (C) — Provides Web ACL rules that inspect HTTP/HTTPS requests for SQL injection and XSS patterns. WAF has managed rule groups specifically for these attacks. However, WAF cannot operate standalone — it must be associated with a supported integration point (CloudFront, ALB, API Gateway).
Amazon CloudFront (D) — A CDN that serves as the first point of contact for web users. When WAF is associated with a CloudFront distribution, requests are inspected at AWS edge locations worldwide before they reach your origin. This means malicious traffic is blocked at the edge — geographically closest to the attacker — and never reaches your infrastructure. CloudFront + WAF is the most common deployment pattern for global web application protection.
Why C and D are correct: WAF provides the SQL injection and XSS detection/blocking rules. CloudFront provides the deployment point where WAF inspects traffic. Together, they form the standard AWS architecture for protecting web applications from application-layer attacks at the edge.
Why others are wrong:
A) AWS Shield — Shield protects against DDoS attacks (Layer 3/4 volumetric attacks). It does not inspect HTTP content for SQL injection or XSS. Shield and WAF are complementary services — Shield for DDoS, WAF for application-layer attacks — but only WAF handles SQLi/XSS.
B) Network Load Balancer — NLB operates at Layer 4 (TCP/UDP). It forwards raw TCP connections without inspecting HTTP content. Since SQL injection and XSS are embedded in HTTP request parameters (Layer 7), NLB cannot detect them. WAF does not integrate with NLB — only with ALB (which is Layer 7). This is a common exam trap.
E) CloudWatch — CloudWatch is a monitoring and alerting service. It collects metrics, logs, and events. While it can be used to monitor WAF metrics (blocked request counts), CloudWatch itself does not inspect or filter web traffic. It's an observability tool, not a protection tool.
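A minimal boto3 sketch of this deployment pattern, associating an already-created WAFv2 web ACL (Scope "CLOUDFRONT") with a CloudFront distribution; the distribution ID and ACL ARN are placeholders:

import boto3

cloudfront = boto3.client("cloudfront")
dist_id = "E1ABCDEFGHIJKL"  # placeholder distribution ID
web_acl_arn = "arn:aws:wafv2:us-east-1:111122223333:global/webacl/webapp-acl/EXAMPLE-ID"  # placeholder

# Fetch the current config, point it at the web ACL, and push it back with the ETag.
cfg = cloudfront.get_distribution_config(Id=dist_id)
dist_config = cfg["DistributionConfig"]
dist_config["WebACLId"] = web_acl_arn          # WAFv2 web ACLs are referenced by ARN here

cloudfront.update_distribution(
    Id=dist_id,
    DistributionConfig=dist_config,
    IfMatch=cfg["ETag"],
)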
Q53.Which encryption strategy is the most cost-effective while also meeting the criteria of not using third-party material for generating encryption keys?
✓ Correct: A. Customer-managed CMKs in KMS are cost-effective, use AWS-generated key material, and provide full control over policies and rotation.
How to Think About This:
The question has two requirements: (1) Cost-effective, (2) No third-party key material. This eliminates CloudHSM (expensive) and imported keys (third-party material). KMS customer-managed CMKs use key material generated by AWS's own hardware security modules — no external/third-party cryptographic material involved — at a fraction of CloudHSM's cost (~$1/month per key vs ~$1.50/hour for CloudHSM).
Key Concepts:
KMS Customer-Managed CMK (A) — A key you create in KMS where AWS generates and stores the key material in FIPS 140-2 validated hardware security modules. You control the key policy, rotation schedule, and lifecycle. Cost: ~$1/month per key + per-request charges. This is the most common and cost-effective approach for most encryption needs.
Key Material Origin — KMS supports three origins:
• AWS_KMS (default) — AWS generates key material internally. No third-party involvement.
• EXTERNAL — You import key material from your own systems. This IS third-party material (from your perspective as a customer, "third-party" means non-AWS in this context).
• AWS_CLOUDHSM — Key material generated and stored in your CloudHSM cluster. Technically AWS infrastructure, but much more expensive.
Why A is correct: A customer-managed CMK with AWS-generated key material (the default) satisfies both requirements: it's cost-effective ($1/month), and the key material is generated entirely within AWS's KMS infrastructure — no third-party cryptographic material is used.
Why others are wrong:
B) Initialize a CloudHSM instance — CloudHSM provides dedicated hardware security modules that you control. While it meets the "no third-party material" requirement (keys generated in your HSM), it is extremely expensive: ~$1.50/hour (~$1,100/month) per HSM cluster, plus you need at least 2 for high availability. This violates the "cost-effective" requirement.
C) Generate key pair via EC2 — Generating encryption keys on an EC2 instance means managing key storage, rotation, and security yourself. This is not using AWS KMS at all. The keys would be "third-party" in the sense that they're generated outside AWS's key management infrastructure, and you lose all the compliance benefits (FIPS 140-2, audit logging, access controls) that KMS provides.
D) Use AWS KMS to create a 256-bit key — This sounds similar to A but is more vague and potentially misleading. KMS doesn't directly expose a "create a 256-bit key" option in this generic way. Customer-managed CMKs (answer A) are the proper term and provide full control. Answer A is more precise and complete.
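A minimal boto3 sketch of creating such a key (the description and alias are placeholders); leaving Origin at its AWS_KMS default keeps all key material inside KMS:

```python
import boto3

kms = boto3.client("kms")

# Customer-managed symmetric key whose material is generated by AWS KMS HSMs:
# no imported or third-party cryptographic material.
key = kms.create_key(
    Description="app data encryption key",   # placeholder description
    KeyUsage="ENCRYPT_DECRYPT",
    Origin="AWS_KMS",                        # the default; shown explicitly for clarity
)
key_id = key["KeyMetadata"]["KeyId"]

# A friendly alias so applications don't hard-code the key ID.
kms.create_alias(AliasName="alias/app-data", TargetKeyId=key_id)
print(key_id)
```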
Q54.Which key type should be created in AWS KMS for automatic yearly rotation? Select All
✓ Correct: D. Only customer-managed CMKs with KMS-generated key material support configurable automatic yearly rotation.
How to Think About This:
KMS key rotation support depends on two factors: (1) Who manages the key? (AWS-owned, AWS-managed, customer-managed), and (2) What is the key material origin? (KMS-generated vs imported). Only customer-managed CMKs with KMS-generated key material give you the ability to configure automatic rotation. Memorize this rule for the exam.
Key Concepts:
Key Rotation in KMS — When a key is rotated, KMS generates new cryptographic key material while keeping the old material available for decrypting previously encrypted data. The CMK ID stays the same — your applications don't need any changes. KMS transparently uses the new key material for encryption and the old material for decryption.
Customer-Managed CMK with KMS Material (D) — You create the CMK, AWS generates the key material. You can enable automatic rotation, which rotates the key material annually (every 365 days). This is the only key type where you have control over the rotation schedule.
CMK with Imported Key Material (A) — If you import your own key material, KMS cannot automatically rotate it because KMS doesn't have the source material to generate a new version. You must manually rotate by creating a new CMK, importing new material, and updating key aliases. Auto-rotation is NOT supported.
AWS-Owned CMK (B) — Keys owned and managed entirely by AWS. You don't see them in your account, can't manage them, and can't configure rotation. AWS rotates them on their own schedule (often more frequently than annually), but you have zero control.
AWS-Managed CMK (C) — Keys created by AWS services in your account (e.g., aws/s3, aws/ebs). AWS automatically rotates these every year, but you cannot configure or disable this rotation. It's automatic and mandatory — not configurable by you.
Why D is correct: Customer-managed CMKs (with KMS-generated key material) are the only key type where you can configure automatic yearly rotation. You can enable it, disable it, and it rotates on your schedule. This gives you full control over the rotation lifecycle.
Why others are wrong:
A) CMK with imported key material — Imported key material does NOT support automatic rotation. KMS cannot generate new versions of externally sourced key material. You must handle rotation manually by creating new CMKs and rotating key aliases.
B) AWS-owned CMK — You have no visibility or control over AWS-owned keys. While AWS may rotate them internally, you cannot configure or manage this rotation. These keys aren't even visible in your KMS console.
C) AWS-managed CMK — These rotate automatically every year, but the rotation is mandatory and not configurable by you. The question asks which key type should be "created" for automatic yearly rotation — AWS-managed keys are created by AWS services, not by you. You cannot enable/disable their rotation.
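A minimal boto3 sketch (the key ID is a placeholder); this call succeeds only for customer-managed keys with KMS-generated material:

```python
import boto3

kms = boto3.client("kms")
key_id = "1234abcd-12ab-34cd-56ef-1234567890ab"   # placeholder customer-managed key ID

# Turn on automatic annual rotation; calling this on a key with imported
# (EXTERNAL) key material fails, which is exactly the distinction tested here.
kms.enable_key_rotation(KeyId=key_id)

status = kms.get_key_rotation_status(KeyId=key_id)
print(status["KeyRotationEnabled"])   # True once yearly rotation is configured
```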
Q55.Which of these statements accurately describe Amazon Macie's capabilities? Select All
✓ Correct: B. Macie's primary capability is identifying PII and sensitive data in S3 using ML and pattern matching.
How to Think About This:
This question tests whether you know Macie's actual capabilities vs. common misconceptions. Macie is a data discovery and classification service — it tells you what sensitive data exists in your S3 buckets. It does NOT: use NLP, prevent access, or monitor document sharing. Understanding what a service does not do is just as important on the exam as knowing what it does.
Key Concepts:
What Macie Actually Does (B) — Macie uses machine learning and pattern matching (not NLP) to scan S3 objects and identify sensitive data types: credit card numbers, SSNs, passport numbers, email addresses, AWS access keys, and 100+ other PII categories. It produces findings with details about what data was found, where, and how much. It integrates with Security Hub and EventBridge for automated workflows.
How Macie Works — Macie uses two detection methods:
• Managed Data Identifiers — Built-in patterns and ML models maintained by AWS that recognize common PII formats across multiple countries and formats
• Custom Data Identifiers — Regex patterns and keyword lists you define for organization-specific sensitive data (employee IDs, internal codes, proprietary formats)
Why B is correct: Identifying PII in S3 buckets is Macie's core, primary capability. It's the exact use case Macie was built for.
Why others are wrong:
A) Uses Natural Language Processing to comprehend data — Macie uses ML and pattern matching, not NLP. NLP is used by services like Amazon Comprehend (which understands language, sentiment, entities in text). Macie doesn't "comprehend" the meaning of text — it identifies structured patterns (regex for SSN format, credit card number format) and uses ML to classify data. The distinction matters: NLP understands language, Macie finds data patterns.
C) Can prevent users from accessing PII — Macie is a detective control, not a preventive one. It discovers and reports the existence of PII, but it does NOT block access or enforce permissions. To prevent access, you would need to use the Macie findings to trigger remediation actions (e.g., Lambda function that modifies S3 bucket policies) — but that's a separate workflow, not Macie's native capability.
D) Detects large-scale document sharing — Macie does not monitor document sharing patterns or collaboration activity. It scans data content, not access patterns or sharing behavior. Services like CloudTrail (for API-level access logging) or GuardDuty (for unusual access patterns) would be more relevant for detecting unusual sharing activity.
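Because Macie only reports findings, acting on them means wiring them into a downstream workflow. A hedged boto3 sketch of one such wiring, routing Macie findings to an SNS topic via EventBridge (the event source/detail-type strings and the topic ARN are assumptions for illustration):

```python
import json
import boto3

events = boto3.client("events")
topic_arn = "arn:aws:sns:us-east-1:111122223333:macie-findings"   # placeholder topic

# Match Macie finding events on the default event bus.
events.put_rule(
    Name="macie-finding-to-sns",
    EventPattern=json.dumps({
        "source": ["aws.macie"],          # assumed event source for Macie findings
        "detail-type": ["Macie Finding"], # assumed detail-type string
    }),
    State="ENABLED",
)

# Forward matching findings to the security team's SNS topic (or a remediation Lambda).
events.put_targets(
    Rule="macie-finding-to-sns",
    Targets=[{"Id": "notify-security-team", "Arn": topic_arn}],
)
```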
Q56.Why is your Lambda function, which writes metadata to a DynamoDB table from S3, not sending logs to CloudWatch?
✓ Correct: D. The Lambda execution role is missing CloudWatch Logs permissions.
How to Think About This:
When a question says "Lambda not sending logs to CloudWatch", the answer is almost always about the execution role missing CloudWatch Logs permissions. Lambda needs three specific permissions to write logs:
Key Concepts:
Lambda Execution Role — Every Lambda function has an IAM role that defines what AWS services it can access. This role needs separate permissions for each service: S3 permissions for reading files, DynamoDB permissions for writing data, AND CloudWatch Logs permissions for sending logs. These are independent — having S3 or DynamoDB permissions does NOT automatically grant logging permissions.
Required CloudWatch Logs Permissions:
• logs:CreateLogGroup — Creates the log group (/aws/lambda/function-name) on first invocation
• logs:CreateLogStream — Creates a log stream for each execution environment
• logs:PutLogEvents — Writes the actual log entries
AWS Managed Policy — The AWSLambdaBasicExecutionRole managed policy includes exactly these three permissions. It's the minimum policy every Lambda function should have attached to its execution role.
Why D is correct: The function writes metadata to DynamoDB from S3 (so S3 read and DynamoDB write work), but logs aren't appearing in CloudWatch. This isolates the problem to CloudWatch Logs permissions specifically. The execution role has S3 and DynamoDB permissions but is missing the CloudWatch Logs permissions.
Why others are wrong:
A) Incorrect S3 read permissions — If S3 read permissions were wrong, the function would throw an AccessDeniedException when trying to read the S3 object. The function itself would fail, which is a different symptom than "not sending logs." The question implies the function runs but logs don't appear.
B) Lambda hasn't been properly deployed — If Lambda wasn't properly deployed, it wouldn't execute at all. Since the function is writing to DynamoDB (processing data from S3), it's clearly deployed and running. The issue is specifically with logging output.
C) Insufficient DynamoDB write permissions — Similar to A — if DynamoDB write permissions were wrong, the function would fail with an error when writing metadata. This would be a function execution error, not a logging issue. The question specifically says logs aren't appearing, implying the function runs but logging doesn't work.
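A minimal boto3 sketch of the fix (the role name is a placeholder): either attach the managed policy or add an equivalent inline policy to the execution role:

```python
import json
import boto3

iam = boto3.client("iam")
role_name = "my-metadata-writer-role"   # placeholder execution role name

# Option 1: attach the AWS managed policy that contains exactly the three logging permissions.
iam.attach_role_policy(
    RoleName=role_name,
    PolicyArn="arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole",
)

# Option 2: an equivalent inline policy granting the same three actions.
logging_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "logs:CreateLogGroup",
            "logs:CreateLogStream",
            "logs:PutLogEvents",
        ],
        "Resource": "arn:aws:logs:*:*:*",
    }],
}
iam.put_role_policy(
    RoleName=role_name,
    PolicyName="lambda-cloudwatch-logging",
    PolicyDocument=json.dumps(logging_policy),
)
```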
Q57.You are notified of suspicious activity targeting your application servers in a specific subnet. The attacks are coming from a specific IP range. What should you do?
✓ Correct: D. Network ACLs support explicit deny rules at the subnet level — ideal for blocking specific IP ranges.
How to Think About This:
When a question says "block specific IPs" or "deny traffic from IP range", the answer is Network ACLs (NACLs). The critical distinction: Security Groups only support ALLOW rules — you cannot create a deny rule in a Security Group. NACLs support both ALLOW and DENY rules, making them the only VPC-native tool for explicitly blocking specific IP addresses or ranges.
Key Concepts:
Network ACLs — Stateless firewall rules at the subnet level. Key features:
• Support both ALLOW and DENY rules
• Rules are evaluated in number order (lowest first) — first match wins
• Stateless — you must create rules for both inbound AND outbound traffic
• Apply to ALL traffic entering/leaving the subnet
To block an IP range, add a DENY rule with a low number (e.g., Rule 50: DENY all traffic from 203.0.113.0/24) before any ALLOW rules.
Security Groups vs NACLs for Blocking:
• Security Groups: ALLOW only, stateful, instance-level → Cannot block specific IPs
• NACLs: ALLOW + DENY, stateless, subnet-level → Can block specific IPs
Why D is correct: NACLs are the correct tool for blocking a specific IP range targeting a subnet. You add a DENY rule for the malicious IP range, and all traffic from those IPs is dropped before reaching any instance in the subnet. This is immediate and requires no changes to individual instance configurations.
Why others are wrong:
A) GuardDuty Threat List to block traffic — GuardDuty is a detection service, not a blocking service. Adding IPs to a GuardDuty threat list makes GuardDuty generate findings when those IPs appear in logs — but it does NOT block any traffic. You'd still need NACLs or WAF rules to actually block the traffic.
B) VPC Flow Logs to monitor and stop traffic — Flow Logs are purely passive logging. They record traffic metadata but cannot stop, block, or modify any traffic. Flow Logs are for analysis after the fact, not for real-time prevention.
C) Security Group to deny traffic — Security Groups do not support deny rules. You can only create allow rules. There is no way to add a "deny from IP range X" rule in a Security Group. This is one of the most important distinctions on the exam.
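A minimal boto3 sketch of such a DENY entry (the NACL ID is a placeholder, and 203.0.113.0/24 is a documentation range standing in for the attacker's CIDR):

```python
import boto3

ec2 = boto3.client("ec2")
nacl_id = "acl-0123456789abcdef0"     # placeholder NACL associated with the targeted subnet

# Low-numbered inbound DENY so it's evaluated before the existing ALLOW rules.
ec2.create_network_acl_entry(
    NetworkAclId=nacl_id,
    RuleNumber=50,                    # lower than any ALLOW rule
    Protocol="-1",                    # all protocols
    RuleAction="deny",
    Egress=False,                     # inbound rule
    CidrBlock="203.0.113.0/24",       # the malicious IP range
)
```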
Q58.You encounter an error while trying to apply an S3 bucket policy that gives a user access to all items in 'mys3bucket'. How can you fix it?
✓ Correct: B. The correct ARN format for all objects in an S3 bucket is arn:aws:s3:::bucketname/*.
How to Think About This:
S3 ARN format is a common exam topic. The rules:
• S3 ARNs have no region and no account ID — just three colons: arn:aws:s3:::
• Bucket names are globally unique and always lowercase
• To reference all objects in a bucket, append /* after the bucket name
• To reference the bucket itself (for bucket-level operations), use just the bucket name without /*
Key Concepts:
S3 ARN Format:
• Bucket: arn:aws:s3:::mys3bucket — for operations like ListBucket, GetBucketPolicy
• Objects: arn:aws:s3:::mys3bucket/* — for operations like GetObject, PutObject, DeleteObject
• Specific object: arn:aws:s3:::mys3bucket/folder/file.txt
S3 Bucket Naming Rules — Bucket names must be 3-63 characters, lowercase only, no underscores, no uppercase, no special characters except hyphens. This means myS3bucket (with uppercase S) is an invalid bucket name — it must be mys3bucket.
Why B is correct: arn:aws:s3:::mys3bucket/* correctly references all objects in the bucket with the proper lowercase name and the /* wildcard for all objects.
Why others are wrong:
A) arn:aws:s3:::/S3/myS3bucket — S3 ARNs do not have a /S3/ path prefix. The bucket name follows directly after arn:aws:s3:::. Also, myS3bucket uses uppercase which is invalid.
C) arn:aws:s3:::myS3/bucket/* — This treats myS3 as the bucket name and bucket/* as a path prefix. The actual bucket name is mys3bucket (one word, lowercase), not split with a slash.
D) arn:aws:s3:::myS3bucket/* — Almost correct in structure, but uses myS3bucket with an uppercase "S". S3 bucket names are always lowercase. The original bucket is named mys3bucket, so the ARN must match exactly.
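A minimal sketch of such a policy applied with boto3 (the account ID and user name are placeholders); note the object-level Resource ARN with the /* suffix:

```python
import json
import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowUserReadAllObjects",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:user/analyst"},  # placeholder user
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::mys3bucket/*",   # all objects: lowercase bucket name plus /*
    }],
}

s3.put_bucket_policy(Bucket="mys3bucket", Policy=json.dumps(policy))
```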
Q59.You find that your Lambda function logs are not appearing in CloudWatch. What could be the issue?
✓ Correct: C. The Lambda execution role must have CloudWatch Logs write permissions.
How to Think About This:
This is a repeat of the Q56 pattern — "Lambda logs not appearing in CloudWatch" = missing CloudWatch Logs permissions in the execution role. The exam may present this scenario from different angles, but the root cause is the same. Lambda's ability to write logs is a separate IAM permission from its ability to do anything else.
Key Concepts:
Lambda Logging Pipeline — When a Lambda function runs, it automatically generates log output (your print() statements, errors, start/end markers). This output goes to CloudWatch Logs — but ONLY if the execution role grants these three permissions:
• logs:CreateLogGroup — Creates /aws/lambda/<function-name>
• logs:CreateLogStream — Creates a stream within the group
• logs:PutLogEvents — Writes actual log lines
If any are missing, Lambda runs fine but logs vanish silently — no error is thrown to the caller, the logs just don't appear.
Why C is correct: The Lambda execution role is missing CloudWatch Logs permissions. The function executes (no error to the caller), but its log output cannot be written to CloudWatch. Adding the AWSLambdaBasicExecutionRole managed policy to the execution role resolves this.
Why others are wrong:
A) CloudWatch is not enabled — CloudWatch Logs doesn't need to be "enabled" as a service. It's always available. Log groups are created automatically by Lambda (if the execution role has permission). There is no global on/off switch for CloudWatch.
B) CloudWatch doesn't have permission to trigger the function — This reverses the direction. The question is about Lambda writing TO CloudWatch (logs), not CloudWatch triggering Lambda. CloudWatch Events can trigger Lambda, but that's a separate, unrelated permission.
D) DynamoDB can't trigger the function — DynamoDB triggers are about invoking Lambda from DynamoDB Streams. This has nothing to do with Lambda's ability to write logs to CloudWatch. Even if there were a trigger issue, the symptom would be "function not running," not "logs not appearing."
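A small boto3 sketch that checks for the managed policy and attaches it if missing (the role name is a placeholder; only attached managed policies are inspected here, not inline policies):

```python
import boto3

iam = boto3.client("iam")
role_name = "my-function-execution-role"   # placeholder execution role name

# List the managed policies currently attached to the execution role.
attached = iam.list_attached_role_policies(RoleName=role_name)["AttachedPolicies"]
policy_names = {p["PolicyName"] for p in attached}

# Attach the basic logging policy if it isn't already there.
if "AWSLambdaBasicExecutionRole" not in policy_names:
    iam.attach_role_policy(
        RoleName=role_name,
        PolicyArn="arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole",
    )
```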
Q60.You suspect that your AWS account has been compromised. What immediate actions should you undertake? Select 2
✓ Correct: B, C. Rotate all credentials and delete unauthorized resources — the two critical immediate actions after account compromise.
How to Think About This:
AWS account compromise response follows a priority: Contain → Eradicate → Recover. The two most immediate actions are: (1) Rotate credentials (contain — revoke the attacker's access), (2) Delete unauthorized resources (eradicate — remove the attacker's infrastructure like rogue EC2 instances, IAM users, or Lambda functions). Avoid overly destructive actions (deleting ALL users) and non-urgent actions (CMK rotation).
Key Concepts:
Rotate Credentials (B) — Change ALL passwords and rotate ALL IAM access keys. This immediately invalidates any credentials the attacker may have stolen. This includes: root account password, IAM user passwords, IAM access keys, and any API keys stored in the environment. Also revoke all active sessions by attaching a deny-all inline policy temporarily.
Delete Unauthorized Resources (C) — Attackers often create resources for their purposes: EC2 instances for cryptocurrency mining, IAM users for persistent access, Lambda functions for data exfiltration, S3 buckets for staging. Identifying and removing these stops ongoing malicious activity and prevents further damage.
Why B and C are correct: These are the two highest-priority immediate actions. Rotating credentials cuts off the attacker's access. Deleting unauthorized resources stops ongoing malicious activity. Together, they contain and begin eradicating the compromise.
Why others are wrong:
A) Delete all IAM user accounts — This is too destructive. Deleting ALL IAM users would lock legitimate employees out of AWS, disrupting business operations. It's a scorched-earth approach that causes more harm than the compromise itself. Instead, rotate credentials for all users and delete only unauthorized accounts.
D) Rotate all CMKs — CMK rotation is not an immediate incident response step. CMKs are used for data encryption, and rotating them doesn't revoke attacker access. If the attacker has IAM credentials, rotating CMKs doesn't help — they'd still have permissions to use the new key material. CMK rotation may be appropriate later in the recovery phase, but credential rotation and resource cleanup are far more urgent.
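A hedged boto3 containment sketch that deactivates every IAM user's access keys (root credentials, console passwords, and active sessions still have to be handled separately):

```python
import boto3

iam = boto3.client("iam")

# Walk every IAM user and deactivate each access key; deactivation is reversible,
# so keys can be rotated or deleted once the investigation is complete.
for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
        for key in keys:
            iam.update_access_key(
                UserName=user["UserName"],
                AccessKeyId=key["AccessKeyId"],
                Status="Inactive",
            )
            print(f"Deactivated {key['AccessKeyId']} for {user['UserName']}")
```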
Q61.Your CTO has asked for an automated system that can proactively remediate security vulnerabilities. What approach?
✓ Correct: B. AWS Config rules + CloudWatch Events + Lambda = the standard automated remediation pipeline.
How to Think About This:
When a question asks for "automated remediation" of security vulnerabilities, the AWS-standard pattern is a three-service pipeline: Config (detect) → CloudWatch Events (trigger) → Lambda (remediate). Config continuously evaluates your resources against rules. When a rule is violated, it fires a CloudWatch Event. Lambda receives the event and executes remediation code — fully automated, no human intervention needed.
Key Concepts:
AWS Config Rules — Continuously evaluate whether your resources comply with desired configurations. Examples: "all S3 buckets must have encryption enabled," "all Security Groups must not allow 0.0.0.0/0 on port 22," "all EBS volumes must be encrypted." When a resource becomes non-compliant, Config records the deviation.
CloudWatch Events (EventBridge) — Acts as the event bus. Config publishes compliance change events, and CloudWatch Events routes them to targets. You create a rule: "When Config reports NON_COMPLIANT, trigger this Lambda function."
Lambda for Remediation — Lambda functions contain the actual fix logic: enable S3 encryption, remove overly permissive Security Group rules, encrypt unencrypted volumes, etc. Lambda is programmable, so you can implement any remediation logic needed — from simple fixes to complex multi-step workflows.
Why B is correct: This is the AWS-recommended architecture for automated security remediation. Config provides continuous compliance monitoring, CloudWatch Events provides the real-time trigger mechanism, and Lambda provides flexible, programmable remediation. It's fully automated, scalable, and requires no manual intervention.
Why others are wrong:
A) Chaos Monkey + CloudWatch Logs + Elastic Beanstalk — Chaos Monkey is a Netflix tool for resilience testing (randomly killing instances), not security vulnerability detection. CloudWatch Logs is for log storage, not event routing. Elastic Beanstalk is an application deployment service, not a remediation tool. None of these are designed for security compliance.
C) CloudTrail + GuardDuty + CloudWatch Logs + Lambda — CloudTrail and GuardDuty detect threats and suspicious activity (who did what, was it malicious?), not configuration drift. The question asks about "security vulnerabilities" (misconfigurations), which is Config's domain. Also, CloudWatch Logs (storage) is used instead of CloudWatch Events (routing), which is the wrong component.
D) Config + CloudWatch Events + CloudFormation — Config and CloudWatch Events are correct, but CloudFormation is for infrastructure provisioning (deploying stacks), not for real-time remediation of individual resource misconfigurations. Lambda provides the programmable, granular remediation logic needed. CloudFormation would re-deploy entire stacks, which is too heavy-weight for fixing a single misconfigured Security Group.
E) All of the above — Not all approaches are correct, so this is wrong.
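A hedged sketch of the Lambda remediation stage, assuming the CloudWatch Events/EventBridge rule forwards Config compliance-change events for a rule that flags Security Groups open to the world on port 22; the event field names below reflect that assumed shape:

```python
import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    """Remediate a Security Group that Config reported as NON_COMPLIANT."""
    detail = event.get("detail", {})

    # Only act on non-compliant evaluations (field names assumed for this event type).
    if detail.get("newEvaluationResult", {}).get("complianceType") != "NON_COMPLIANT":
        return

    group_id = detail.get("resourceId")   # the offending Security Group
    if not group_id:
        return

    # Remediation: remove the world-open SSH ingress rule from the group.
    ec2.revoke_security_group_ingress(
        GroupId=group_id,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }],
    )
    print(f"Revoked 0.0.0.0/0:22 ingress on {group_id}")
```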
Q62.Your CTO wants automatic notifications for any unencrypted EBS volumes on EC2 instances. What's the best method?
✓ Correct: C. AWS Config's managed rule encrypted-volumes detects unencrypted EBS volumes and triggers SNS alerts via CloudWatch Events.
How to Think About This:
When a question asks about "automatic detection of misconfigured resources" + "notifications", the pattern is: AWS Config (detect) → CloudWatch Events (route) → SNS (notify). Config has pre-built managed rules for common compliance checks, including encrypted-volumes, which specifically checks whether EBS volumes attached to EC2 instances are encrypted.
Key Concepts:
AWS Config Managed Rules — Pre-built compliance rules maintained by AWS. encrypted-volumes evaluates whether all EBS volumes in use are encrypted. When an unencrypted volume is detected, Config marks it as NON_COMPLIANT. Other useful managed rules: s3-bucket-server-side-encryption-enabled, rds-storage-encrypted, iam-password-policy.
Notification Pipeline — Config compliance changes → CloudWatch Events rule matches NON_COMPLIANT → SNS topic sends email/SMS/webhook notification. This is fully automated and near real-time — the CTO gets notified within minutes of an unencrypted volume appearing.
Why C is correct: AWS Config with the encrypted-volumes managed rule is the AWS-native, purpose-built solution. It requires minimal setup (enable Config, activate the managed rule, create CloudWatch Events → SNS pipeline), continuously monitors, and automatically notifies on violations.
Why others are wrong:
A) Lambda function to scan + SNS alert — While technically possible, this is a custom solution requiring you to write, deploy, and maintain Lambda code. You'd need to schedule it (CloudWatch Events cron), handle pagination across all volumes, and manage errors. AWS Config already does this out of the box with a managed rule — no custom code needed.
B) Trusted Advisor for detection + SNS — Trusted Advisor does check for some security issues, but its EBS encryption check requires Business or Enterprise support plans. Also, Trusted Advisor runs periodic checks (not continuous monitoring) and doesn't integrate as cleanly with CloudWatch Events for real-time notifications. Config is the preferred tool for continuous compliance monitoring.
D) GuardDuty for scanning unencrypted volumes — GuardDuty is a threat detection service, not a configuration compliance tool. It detects suspicious activity (unusual API calls, credential compromise) but does NOT check whether EBS volumes are encrypted. GuardDuty looks for attackers; Config looks for misconfigurations.
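A hedged boto3 sketch of the detect, route, notify pipeline (the topic ARN is a placeholder and the EventBridge pattern fields are assumptions for illustration):

```python
import json
import boto3

config = boto3.client("config")
events = boto3.client("events")
topic_arn = "arn:aws:sns:us-east-1:111122223333:unencrypted-ebs-alerts"   # placeholder

# 1) Detect: activate the AWS managed rule that marks unencrypted EBS volumes NON_COMPLIANT.
config.put_config_rule(ConfigRule={
    "ConfigRuleName": "encrypted-volumes",
    "Source": {"Owner": "AWS", "SourceIdentifier": "ENCRYPTED_VOLUMES"},
})

# 2) Route: match non-compliant evaluations of that rule on the default event bus.
events.put_rule(
    Name="unencrypted-volume-alert",
    EventPattern=json.dumps({
        "source": ["aws.config"],
        "detail-type": ["Config Rules Compliance Change"],
        "detail": {
            "configRuleName": ["encrypted-volumes"],
            "newEvaluationResult": {"complianceType": ["NON_COMPLIANT"]},
        },
    }),
    State="ENABLED",
)

# 3) Notify: send matching events to the SNS topic the CTO subscribes to.
events.put_targets(
    Rule="unencrypted-volume-alert",
    Targets=[{"Id": "cto-notification", "Arn": topic_arn}],
)
```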
Q63.Your EC2 instances in a private subnet can't access S3, even after creating a Gateway endpoint. How do you resolve this?
✓ Correct: C. After creating a Gateway VPC endpoint, add a route in the private subnet's route table directing S3 traffic to the endpoint.
How to Think About This:
Gateway VPC endpoints (for S3 and DynamoDB) work through route tables, not DNS or network interfaces. After creating the endpoint, you must add a route in the associated subnet's route table that sends S3-destined traffic to the endpoint. If you create the endpoint but forget the route (or associate it with the wrong route table), traffic still tries to go through the NAT Gateway or internet — and private subnets without a NAT won't reach S3.
Key Concepts:
Gateway VPC Endpoint — A VPC component for S3 and DynamoDB that provides private connectivity without traversing the internet. When you create a Gateway endpoint, AWS creates a prefix list (e.g., pl-63a5400a) containing S3's IP ranges. You then add a route: Destination: pl-63a5400a → Target: vpce-xxxxx.
Route Table Association — The endpoint must be associated with the correct route table(s) — specifically the route table used by your private subnets. During endpoint creation, you select which route tables to associate. AWS automatically adds the route. If your private subnet uses a different route table than the one you associated, traffic won't route through the endpoint.
Gateway vs Interface Endpoints — Gateway endpoints use route tables (S3, DynamoDB only). Interface endpoints use DNS and ENIs (all other services). Don't confuse them — they work differently.
Why C is correct: The route table entry directing S3 traffic through the endpoint is the missing piece. Without this route, S3 requests from the private subnet have no path to reach S3 (since there's no NAT Gateway or internet gateway for private subnets).
Why others are wrong:
A) Direct S3 traffic via PrivateLink — PrivateLink is the technology behind Interface VPC endpoints, not Gateway endpoints. S3 supports both types, but the question specifically mentions a Gateway endpoint was created. PrivateLink/Interface endpoints use DNS and ENIs — a different mechanism than route tables. Switching to PrivateLink would mean abandoning the Gateway endpoint already created.
B) Route S3 requests via the private subnet — This is vague and doesn't describe a specific solution. Traffic already originates from the private subnet. The issue is that the subnet's route table doesn't have a route to the S3 endpoint. "Routing via the private subnet" doesn't address the missing route table entry.
D) Direct S3 requests via VPN gateway — Sending S3 traffic through a VPN gateway would route it to your on-premises network, not to S3. This would add latency, cost, and complexity. The VPC endpoint was created specifically to avoid external routing — using a VPN gateway defeats the purpose entirely.
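A minimal boto3 sketch (all IDs and the region in the service name are placeholders): create the endpoint associated with the private subnet's route table, or add that route table to an existing endpoint instead of recreating it:

```python
import boto3

ec2 = boto3.client("ec2")

# Creating the Gateway endpoint with RouteTableIds makes AWS add the prefix-list route
# to those route tables automatically; without that association, private-subnet traffic
# has no path to S3.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",        # region-specific S3 service name
    RouteTableIds=["rtb-0aaaabbbbccccdddd"],         # the PRIVATE subnet's route table
)

# If the endpoint already exists but was associated with the wrong route table,
# attach the private subnet's route table to it.
ec2.modify_vpc_endpoint(
    VpcEndpointId="vpce-0123456789abcdef0",
    AddRouteTableIds=["rtb-0aaaabbbbccccdddd"],
)
```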