GlobeSec SCS-C03 Pre-Test Assessment

AWS Security Pre-Test Assessment

All domains — assess your readiness before deep study

Q1. A company uses AWS Organizations and generates an AMI of an Amazon EC2 instance. The company needs to share the AMI with other accounts in the same AWS Region.

How can the company share the AMI?
A) Create IAM roles in each destination account with EC2 permissions. Create an RCP that automatically shares AMIs with member accounts based on tags.
B) Use AWS Resource Access Manager (AWS RAM). Specify the AMI and the account IDs.
C) Copy the AMI to a new AMI. Specify the account IDs when configuring the copy.
D) Modify the AMI permissions. Indicate the private AMI availability and add the account IDs.
✓ Correct: D. Modify the AMI permissions and add the destination account IDs.

How to Think About This:
When you see "share an AMI with other accounts in the same region", the mechanism is AMI launch permissions. You modify the AMI's permissions to add specific account IDs, which grants those accounts the ability to launch instances from your AMI. The AMI itself stays in your account — you're just granting others permission to use it.

Key Concepts:
AMI Launch Permissions — Every AMI has a launch permission attribute that controls who can use it:
Private (default) — Only your account can use the AMI
Shared with specific accounts — You add account IDs that can launch instances from the AMI
Public — Anyone can use the AMI (not recommended for custom images)

How to Share:
• Console: EC2 → AMIs → Select AMI → Actions → Edit AMI Permissions → Add Account IDs
• CLI: aws ec2 modify-image-attribute --image-id ami-xxx --launch-permission "Add=[{UserId=123456789012}]"

Important Constraints:
• An AMI can be launched only in the Region where it resides, so the target accounts must use it in that same Region (satisfied in this question)
• If the AMI is encrypted with a custom KMS key, you must also share the KMS key with the target accounts
• The AMI stays in the owner's account — recipients can use it but cannot modify or re-share it

Why D is correct: Modifying AMI launch permissions is the native, direct mechanism for sharing AMIs with specific accounts in the same region. It requires no additional services, no copying, and works immediately.
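
For reference, a minimal CLI sketch of the sharing steps (the image ID, snapshot ID, and account ID below are placeholders):

# Grant launch permission on the AMI to the destination account
aws ec2 modify-image-attribute \
  --image-id ami-0abc1234example \
  --launch-permission "Add=[{UserId=111122223333}]"

# Verify the current launch permissions
aws ec2 describe-image-attribute \
  --image-id ami-0abc1234example \
  --attribute launchPermission

# If the AMI's backing snapshot is encrypted with a customer managed KMS key,
# the snapshot (and the key) must also be shared with the destination account
aws ec2 modify-snapshot-attribute \
  --snapshot-id snap-0abc1234example \
  --attribute createVolumePermission \
  --operation-type add \
  --user-ids 111122223333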

Why others are wrong:
A) IAM roles + RCP — Resource Control Policies (RCPs) are guardrails that restrict permissions across an organization — they do not share resources. IAM roles in destination accounts don't control AMI sharing either. AMI sharing is managed through the AMI's own launch permissions, not through IAM or organizational policies.
B) AWS Resource Access Manager (RAM) — RAM is used to share resources like Subnets, Transit Gateways, License Manager configurations, and Route 53 Resolver rules. AMIs are not a supported resource type in RAM. AMI sharing uses EC2's native launch permission mechanism instead.
C) Copy the AMI and specify account IDs — The AMI copy operation (CopyImage) creates a copy of an AMI in the same or different region within your own account. It does not accept destination account IDs — you cannot copy an AMI directly into another account. To give another account their own copy, they must first have launch permissions (option D), then they can copy it themselves.
Q2. A company uses AWS Systems Manager Session Manager to connect to Amazon EC2 instances. The solution must log and store all session activity in Amazon S3 for security auditing and archival purposes. The logged session activity must include commands that are executed on the instances.

Which solution will meet these requirements with the LEAST operational overhead?
A) Configure Session Manager to send session output logs to Amazon CloudWatch Logs. Create a subscription filter to stream the logs to an Amazon Data Firehose delivery stream. Configure the delivery stream to deliver the logs to an S3 bucket.
B) Configure Session Manager to log session activity to an S3 bucket by enabling session logging. Specify the S3 bucket name in the Session Manager preferences.
C) Enable AWS CloudTrail data event logging for the EC2 instances. Configure CloudTrail to deliver the logs to an S3 bucket.
D) Use EC2 instance-level shell scripts to capture command history and periodically upload the log files to Amazon S3 by using a Systems Manager Automation document.
✓ Correct: B. Configure Session Manager to log session activity directly to an S3 bucket via Session Manager preferences.

How to Think About This:
When the question asks for "least operational overhead", always look for the native, built-in feature first. Session Manager has built-in S3 logging — you just enable it and specify a bucket name. No additional services, no pipelines, no custom scripts.

Key Concepts:
Session Manager Logging — Session Manager natively supports logging session activity (including all commands typed and their output) to two destinations:
Amazon S3 — Session output is written directly to an S3 bucket you specify
Amazon CloudWatch Logs — Session output is streamed to a log group
Both options are configured in Session Manager Preferences with just a bucket name or log group — no additional infrastructure required.
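
As a rough sketch (the bucket name is a placeholder), the preferences live in the SSM-SessionManagerRunShell document and can also be updated from the CLI instead of the console:

aws ssm update-document \
  --name "SSM-SessionManagerRunShell" \
  --document-version '$LATEST' \
  --content '{
    "schemaVersion": "1.0",
    "description": "Session Manager preferences",
    "sessionType": "Standard_Stream",
    "inputs": {
      "s3BucketName": "example-session-audit-logs",
      "s3EncryptionEnabled": true
    }
  }'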

What Gets Logged: Every command executed during the session, the command output, timestamps, and session metadata. This satisfies the "commands executed on the instances" requirement.

Why B is correct: Enabling S3 logging in Session Manager preferences is a single configuration change. It captures all session activity including commands, writes directly to S3, and requires zero additional services. This is the absolute minimum operational overhead.

Why others are wrong:
A) CloudWatch Logs → Data Firehose → S3 — This achieves the same result but adds two unnecessary intermediary services (CloudWatch Logs and Data Firehose). More services = more configuration, more cost, more failure points. Since Session Manager can write directly to S3, this pipeline is unnecessary overhead.
C) CloudTrail data events — CloudTrail logs API calls (e.g., StartSession, TerminateSession) but does not capture the commands typed during a session. CloudTrail records who started a session and when, but not what they did inside the session. This fails the "commands executed" requirement.
D) Shell scripts + Automation documents — Writing custom scripts to capture command history, scheduling periodic uploads, and maintaining Automation documents is the highest operational overhead option. It requires development, testing, scheduling, and ongoing maintenance. Session Manager provides this functionality out of the box.
Q3. A security engineer is responsible for maintaining consistent security controls across a company's AWS infrastructure. The engineer implemented an AWS CloudFormation StackSets stack set across all of the company's AWS accounts in multiple AWS Regions. The stack set includes the creation of:

• Amazon SQS queues with dead-letter queues
• IAM roles and policies
• AWS KMS keys
• Amazon EventBridge rules
• AWS Systems Manager Parameter Store parameters

When the stack set is created, stack instances fail to create in approximately one-third of the accounts. In the failed accounts, the stack set stopped at the Amazon SQS stage with the error message "Access is denied".

What is the cause of this issue?
A) The KMS keys are missing the kms:GenerateDataKey permission and are failing on accounts that need additional service key operations.
B) The stack set tried to create an Amazon SQS resource that already has the same name as the one that the stack set is trying to create.
C) The target roles that are assumed by the execution role for the stack set are missing the proper permissions in the failed accounts.
D) The organization has an SCP attached that is preventing the stack sets from deploying to certain Regions.
✓ Correct: C. The execution roles in the failed accounts are missing the necessary permissions.

How to Think About This:
Two key clues: (1) "one-third of accounts" fail, not all — this means the issue is account-specific, not template-wide or organization-wide. (2) The error is "Access is denied" — a permissions problem. When you combine "some accounts fail" + "Access Denied," the answer is always inconsistent IAM roles/permissions across accounts.

Key Concepts:
CloudFormation StackSets Role Architecture — StackSets uses two roles:
Administration Role (in the management/admin account) — AWSCloudFormationStackSetAdministrationRole. This role orchestrates the stack set deployment.
Execution Role (in each target account) — AWSCloudFormationStackSetExecutionRole. This role is assumed by the admin role and actually creates the resources in the target account. It needs permissions for every service being created (SQS, IAM, KMS, EventBridge, SSM).

Why One-Third Fail: If the execution role was set up manually (self-managed StackSets) or was modified in some accounts, those accounts may have execution roles with insufficient permissions. Accounts where the role has full permissions succeed; accounts where the role is missing SQS permissions fail at the SQS stage with "Access Denied."

Service-Managed vs. Self-Managed: With service-managed StackSets (using Organizations), AWS automatically creates and manages the execution roles with appropriate permissions. With self-managed StackSets, you must ensure the execution role exists and has correct permissions in every target account — which is where inconsistencies arise.

Why C is correct: The failure pattern (some accounts fail, others succeed) combined with "Access is denied" points directly to the execution role in the failed accounts lacking sqs:CreateQueue and related SQS permissions. The fix is to update the execution roles in the failed accounts to include all necessary service permissions.
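
As an illustration (not the exact policy from the question), a self-managed execution role in each target account needs a permission policy that covers every resource type in the template, roughly along these lines (scope the actions down in practice):

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "sqs:*",
      "iam:*",
      "kms:*",
      "events:*",
      "ssm:*",
      "cloudformation:*"
    ],
    "Resource": "*"
  }]
}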

Why others are wrong:
A) KMS missing kms:GenerateDataKey — The error occurs at the SQS stage, which is processed before KMS keys in the stack. If KMS were the issue, SQS would have been created successfully and the stack would fail at the KMS resource. Also, kms:GenerateDataKey is a data-plane operation, not needed for key creation.
B) SQS name conflict — A naming conflict would produce a "resource already exists" or "QueueAlreadyExists" error, not "Access is denied." These are fundamentally different error types. Also, it's unlikely that exactly one-third of accounts happen to have the same queue name.
D) SCP blocking certain Regions — An SCP region restriction would affect all resources in those regions, not just SQS. More importantly, it would cause failures based on region, not on which account. The question states failures occur in one-third of accounts (across multiple regions), not in specific regions. This pattern points to per-account issues, not region-based SCPs.
Q4. A company uses AWS Organizations to create and manage multiple AWS accounts. The company uses its management account only for consolidated billing and wants to use a dedicated non-management account to perform account management tasks across the organization. A security team has developed a set of custom AWS Config rules. The company wants to manage and deploy these rules as a package. (SELECT TWO)

Which combination of steps will meet these requirements?
A) Register a delegated admin account.
B) Create an OrganizationAccountAccessRole IAM role.
C) Deploy AWS Config rules by using the put-organization-conformance-pack CLI command.
D) Deploy AWS Config rules by using the put-organization-config-rule CLI command.
E) Deploy AWS Config rules by using AWS Security Hub.
✓ Correct: A and C. Register a delegated admin account and deploy rules using put-organization-conformance-pack.

How to Think About This:
Two requirements to match: (1) "non-management account to perform management tasks" = delegated administrator. (2) "deploy rules as a package" = conformance pack (not individual rules).

Key Concepts:
Delegated Administrator — AWS Organizations allows you to designate a member account as a delegated administrator for specific AWS services (Config, GuardDuty, Security Hub, etc.). This lets you keep the management account clean (billing only) while a dedicated security account manages services across the organization. You register a delegated admin using:
aws organizations register-delegated-administrator --account-id 123456789012 --service-principal config.amazonaws.com

Conformance Pack vs. Individual Config Rule:
put-organization-config-rule — Deploys a single Config rule across the organization
put-organization-conformance-pack — Deploys a package (collection) of Config rules and remediation actions across the organization using a YAML template
The question explicitly says "as a package," which maps to conformance packs.

Why A is correct: The requirement states the company wants a non-management account to manage Config rules across the organization. Registering a delegated administrator for AWS Config enables this — the delegated account can deploy organization-level Config rules and conformance packs without using the management account.

Why C is correct: The question asks to deploy custom Config rules "as a package." A conformance pack bundles multiple Config rules together in a single template. put-organization-conformance-pack deploys this package across all member accounts in the organization.
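
A minimal sketch of the deployment command, run from the delegated administrator account (the pack name and template location are placeholders):

aws configservice put-organization-conformance-pack \
  --organization-conformance-pack-name custom-security-rules \
  --template-s3-uri "s3://example-config-templates/custom-security-rules.yaml"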

Why others are wrong:
B) OrganizationAccountAccessRole — This IAM role is automatically created in member accounts when they are created through Organizations. It grants the management account cross-account admin access. It does not help a non-management account manage services across the org — that requires delegated administrator registration.
D) put-organization-config-rule — This deploys individual Config rules, not a package. The question specifically says "as a package," which requires a conformance pack. While this command works for single rules, it doesn't meet the packaging requirement.
E) AWS Security Hub — Security Hub deploys predefined security standards (CIS, FSBP, PCI DSS) that use Config rules under the hood. However, it cannot deploy custom Config rule packages. For custom rules bundled together, you need conformance packs.
Q5. An application running on Amazon EC2 instances must use a user name and password to access a legacy application. A developer has stored those secrets in AWS Systems Manager Parameter Store with type SecureString using the default AWS KMS key. (SELECT TWO)

Which combination of configuration steps will allow the application to access the secrets through the API?
A) Add the EC2 instance role as a trusted service to the Systems Manager service role.
B) Add a permission to the Systems Manager service role that allows it to decrypt the KMS encryption key.
C) Add a permission to the EC2 instance role that allows it to read the Systems Manager parameter.
D) Add a permission to the EC2 instance role that allows it to decrypt the KMS encryption key.
E) Add the SSM service role as a trusted service to the EC2 instance role.
✓ Correct: C and D. The EC2 instance role needs both ssm:GetParameter permission and kms:Decrypt permission.

How to Think About This:
When you see "EC2 needs to read a SecureString from Parameter Store", think about two layers of access: (1) permission to read the parameter itself, and (2) permission to decrypt the KMS-encrypted value. Both permissions must be on the caller's role (the EC2 instance role), not the SSM service role.

Key Concepts:
How SecureString Retrieval Works:
1. The application on EC2 calls ssm:GetParameter using the instance role's credentials.
2. Parameter Store receives the request and checks if the caller has ssm:GetParameter permission → Needs permission C.
3. Since the parameter is a SecureString, Parameter Store calls KMS to decrypt the value on behalf of the caller.
4. KMS checks if the caller (EC2 instance role) has kms:Decrypt permission on the KMS key → Needs permission D.
5. If both checks pass, the decrypted value is returned to the application.

Key Insight: Parameter Store does NOT use its own service role to decrypt. It passes the decryption request to KMS using the caller's identity. This means the EC2 instance role must have both ssm:GetParameter AND kms:Decrypt.

Default KMS Key: The question mentions the "default AWS KMS key" — this is the AWS-managed key aws/ssm. Even with the default key, the caller still needs kms:Decrypt permission on that key.

Why C is correct: The EC2 application calls the Parameter Store API directly. The instance role must include ssm:GetParameter (or ssm:GetParameters) with the resource scoped to the specific parameter ARN. Without this, the API call is rejected before decryption even happens.

Why D is correct: Since the parameter is encrypted as SecureString, KMS decryption occurs during retrieval. KMS evaluates the caller's permissions (the EC2 instance role), not the SSM service role's permissions. The instance role must include kms:Decrypt on the KMS key used to encrypt the parameter.
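
A sketch of the two statements the instance role's policy needs (the Region, account ID, parameter name, and key ID are placeholders):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ssm:GetParameter",
      "Resource": "arn:aws:ssm:us-east-1:123456789012:parameter/legacy-app/credentials"
    },
    {
      "Effect": "Allow",
      "Action": "kms:Decrypt",
      "Resource": "arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab"
    }
  ]
}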

Why others are wrong:
A) EC2 role as trusted service to SSM service role — The EC2 instance does not assume the SSM service role. The instance role calls the Parameter Store API directly using its own credentials. There is no trust relationship needed between the EC2 role and the SSM service role for parameter retrieval.
B) SSM service role needs kms:Decrypt — KMS permission is checked against the caller (EC2 instance role), not the SSM service role. Parameter Store acts as a pass-through — it calls KMS on behalf of the caller, and KMS evaluates the caller's identity for authorization.
E) SSM service role as trusted service to EC2 role — This relationship is backwards and unnecessary. The EC2 instance doesn't need SSM to assume its role. The application simply makes API calls using the instance role's credentials directly.
Q6. A company wants to deploy an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. The cluster must have multiple Amazon EC2 instances as nodes in the same VPC and AWS Region. All node-to-node network traffic within the cluster must have automatic encryption in transit with no management overhead.

Which solution will meet this requirement?
A) Use EC2 instances based on the hardware Nitro v3 or above.
B) Deploy an interface VPC endpoint in the subnet where the EC2 nodes are located.
C) Enable VPC Flow Logs. Implement AWS Network Firewall with TLS inspection.
D) Implement an Application Load Balancer (ALB) with an HTTPS listener.
✓ Correct: A. Use EC2 instances based on the Nitro System v3 or above.

How to Think About This:
When you see "automatic encryption in transit between EC2 instances" + "no management overhead", the answer is Nitro System hardware-level encryption. Nitro v3+ instances automatically encrypt all traffic between instances in the same VPC at the hardware level — no certificates, no application changes, no configuration required.

Key Concepts:
AWS Nitro System Encryption — EC2 instances built on the Nitro System v3 and later automatically encrypt network traffic between instances. Key details:
• Encryption happens at the Nitro card level (hardware), not in software
• It is always on for supported instance types — no configuration needed
• Uses AEAD (Authenticated Encryption with Associated Data) with 256-bit keys
• Zero performance overhead because encryption is offloaded to dedicated hardware
• Covers all traffic between Nitro v3+ instances in the same VPC, including EKS node-to-node communication

Supported Instance Types — Instance families like C6i, M6i, R6i, C7g, M7g, and newer are Nitro v3+. Older generations (C5, M5) use earlier Nitro versions that do not have automatic encryption.
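
If you want to confirm whether a given instance type supports this, the DescribeInstanceTypes API exposes an encryption-in-transit flag; for example:

aws ec2 describe-instance-types \
  --instance-types m6i.large c6i.large \
  --query "InstanceTypes[].{Type:InstanceType,EncryptionInTransit:NetworkInfo.EncryptionInTransitSupported}"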

Why A is correct: Nitro v3+ provides exactly what the question asks: automatic encryption of all node-to-node traffic with zero management overhead. No TLS certificates to manage, no application code changes, no sidecar proxies. The hardware handles everything transparently.

Why others are wrong:
B) Interface VPC endpoint — VPC endpoints provide private connectivity to AWS services (like the EKS API, ECR, or CloudWatch) without traversing the public internet. They do not encrypt traffic between EC2 instances within the cluster. Node-to-node communication (kubelet, pod-to-pod, etcd) doesn't flow through VPC endpoints.
C) VPC Flow Logs + Network Firewall with TLS inspection — VPC Flow Logs are a monitoring tool that captures metadata about network traffic (source, destination, ports, bytes). They do not encrypt anything. Network Firewall with TLS inspection inspects already-encrypted traffic — it doesn't encrypt unencrypted traffic. Also, both involve significant management overhead, which violates the requirement.
D) ALB with HTTPS listener — An ALB terminates TLS for client-to-load-balancer traffic. EKS node-to-node communication (kubelet API, pod networking, DNS) does not pass through an ALB. An ALB handles ingress traffic from external clients, not lateral cluster traffic between nodes.
Q7. A security engineer must configure an Application Load Balancer (ALB) for a critical web application. All client-to-ALB communication must enforce the use of TLS 1.2 or higher. Specific deprecated cipher suites must be disallowed.

Which solution will meet these requirements?
A) Configure a custom security policy on the ALB listener to explicitly list only TLS 1.2 or higher and to exclude all deprecated cipher suites.
B) Implement an AWS WAF web ACL on the ALB to block requests that use TLS versions lower than 1.2 or that contain deprecated cipher suites.
C) Select an appropriate predefined ELBSecurityPolicy for the HTTPS listener on the ALB that enforces TLS 1.2 or higher and strong cipher suites.
D) Deploy a Network Load Balancer (NLB) in front of the ALB. Configure the NLB to enforce TLS 1.2 or higher and strong cipher suites.
✓ Correct: C. Select a predefined ELBSecurityPolicy that enforces TLS 1.2+ and strong cipher suites.

How to Think About This:
When you see "ALB" + "enforce TLS version" + "cipher suites", the answer is always ELB Security Policy. ALBs control TLS negotiation through predefined security policies that you select on the HTTPS listener. AWS provides several policies with different TLS version minimums and cipher suite combinations.

Key Concepts:
ELB Security Policies — When you create an HTTPS listener on an ALB, you select a security policy that determines:
• The minimum TLS version the ALB will accept from clients
• Which cipher suites are available for TLS negotiation

Common Policies:
ELBSecurityPolicy-TLS13-1-2-2021-06 — TLS 1.2 minimum, supports TLS 1.3, strong ciphers only
ELBSecurityPolicy-TLS-1-2-2017-01 — TLS 1.2 minimum, excludes older weak ciphers
ELBSecurityPolicy-2016-08 — Default policy, allows TLS 1.0+ (not suitable for compliance)

By selecting a policy that enforces TLS 1.2 minimum, the ALB will reject any client connection attempting TLS 1.0 or 1.1. The policy also defines exactly which cipher suites are offered during the TLS handshake — deprecated ciphers are simply not included.

Why C is correct: AWS provides predefined security policies specifically designed for this purpose. You select the appropriate policy (e.g., ELBSecurityPolicy-TLS13-1-2-2021-06) on the HTTPS listener. The ALB enforces TLS 1.2+, only offers strong cipher suites, and rejects non-compliant connections at the TLS handshake level. No additional services needed.
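
A minimal sketch of applying the policy to an existing HTTPS listener (the listener ARN is a placeholder):

aws elbv2 modify-listener \
  --listener-arn <https-listener-arn> \
  --ssl-policy ELBSecurityPolicy-TLS13-1-2-2021-06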

Why others are wrong:
A) Custom security policy on ALB — ALBs do not support custom security policies. You must choose from AWS's predefined ELBSecurityPolicy-* options. Only Classic Load Balancers (legacy) supported custom SSL negotiation configurations. For ALBs, you select a predefined policy that matches your requirements.
B) AWS WAF to block TLS versions — WAF operates at Layer 7 (HTTP), which is after the TLS handshake has already completed. By the time WAF sees the request, TLS negotiation is done. WAF cannot inspect or block based on TLS version or cipher suite — it works on HTTP headers, body, URI, and IP addresses. TLS enforcement must happen at the listener level, not the WAF level.
D) NLB in front of ALB — Adding an NLB in front of an ALB is an unnecessary and overly complex architecture. The ALB itself handles TLS termination through its security policy — there's no need for a second load balancer. Also, NLBs performing TLS termination would require managing certificates on the NLB as well, adding operational overhead for no benefit.
Q8. A security team audits a role with the following identity policy attached:
{ "Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Action": [ "s3:PutObject", "s3:GetObjectAcl", "s3:GetObject", "s3:CreateBucket", "rds:CreateDBSnapshot", "rds:CopyDBSnapshot", "ec2:CreateImage" ], "Resource": "*" }] }
The security team then attaches the following permissions boundary to the role:
{ "Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Action": [ "rds:*", "iam:*" ], "Resource": "*" }] }
Which actions will the updated permissions allow?
A) rds:CreateDBSnapshot and rds:CopyDBSnapshot only.
B) rds:*, iam:*, s3:PutObject, s3:GetObjectAcl, s3:GetObject, s3:CreateBucket, and ec2:CreateImage.
C) rds:* and iam:* only.
D) s3:PutObject, s3:GetObjectAcl, s3:GetObject, s3:CreateBucket, and ec2:CreateImage only.
✓ Correct: A. Only rds:CreateDBSnapshot and rds:CopyDBSnapshot.

How to Think About This:
Permission Boundaries set the maximum ceiling. Effective permissions are the intersection (overlap) of the identity policy AND the boundary. Think of a Venn diagram — only actions that appear in BOTH circles are allowed.

Step-by-Step Evaluation:
Identity Policy allows: s3:PutObject, s3:GetObjectAcl, s3:GetObject, s3:CreateBucket, rds:CreateDBSnapshot, rds:CopyDBSnapshot, ec2:CreateImage

Permissions Boundary allows: rds:* (all RDS actions), iam:* (all IAM actions)

Intersection (what's in BOTH):
s3:PutObject — in identity policy but NOT in boundary → BLOCKED
s3:GetObjectAcl — in identity policy but NOT in boundary → BLOCKED
s3:GetObject — in identity policy but NOT in boundary → BLOCKED
s3:CreateBucket — in identity policy but NOT in boundary → BLOCKED
rds:CreateDBSnapshot — in identity policy AND in boundary (rds:*) → ALLOWED
rds:CopyDBSnapshot — in identity policy AND in boundary (rds:*) → ALLOWED
ec2:CreateImage — in identity policy but NOT in boundary → BLOCKED
iam:* — in boundary but NOT in identity policy → NOT GRANTED (boundaries don't grant permissions)

Key Concepts:
Effective Permissions = Identity Policy ∩ Permissions Boundary
• The boundary cannot grant permissions — it only sets the ceiling
• The identity policy cannot exceed the boundary — even if it allows an action, the boundary must also allow it
• iam:* in the boundary means IAM actions are allowed to be granted, but since the identity policy doesn't include any IAM actions, none are effective

Why A is correct: The only actions that exist in BOTH the identity policy and the permissions boundary are the two RDS snapshot actions. Everything else is either outside the boundary (S3, EC2) or outside the identity policy (IAM).
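
You can check this kind of intersection yourself with the IAM policy simulator, which takes permissions boundaries into account (the role ARN is a placeholder):

aws iam simulate-principal-policy \
  --policy-source-arn arn:aws:iam::123456789012:role/audited-role \
  --action-names rds:CreateDBSnapshot s3:PutObject ec2:CreateImage \
  --query "EvaluationResults[].{Action:EvalActionName,Decision:EvalDecision}"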

Why others are wrong:
B) All actions combined — This treats the boundary as additive (union), which is wrong. Boundaries are restrictive (intersection). The boundary doesn't add iam:* to the identity policy — it only caps what the identity policy can grant.
C) rds:* and iam:* — This assumes the boundary replaces the identity policy. Boundaries don't grant permissions. iam:* is in the boundary but not in the identity policy, so no IAM actions are effective. Also, rds:* in the boundary doesn't mean all RDS actions are granted — only the ones also in the identity policy (the two snapshot actions).
D) S3 and EC2 actions only — This is the inverse of the correct answer — these are exactly the actions that are blocked by the boundary. S3 and EC2 are in the identity policy but NOT in the boundary, so they are denied.
Q9. A security engineer is copying an encrypted Amazon RDS snapshot from one account to another within the same AWS Region in AWS Organizations. The role has the AWS managed AdministratorAccess policy, an inline policy, and a permissions boundary attached. The KMS resource policy has the correct permissions.

When attempting to copy the snapshot, the engineer receives an access denied error indicating the identity is missing the kms:ListAliases permission.

Which action will remediate this issue?
A) Review the KMS resource policy. Add the required permissions assigned for the role that is being used to perform the snapshot copy.
B) Review the SCP to verify if the failing permissions are included in the policy. Modify the SCP as needed.
C) Edit the permissions boundary policy with KMS permissions to allow the current failing permissions.
D) Change the KMS key used to encrypt the RDS instance to an enabled state.
✓ Correct: C. Edit the permissions boundary to include kms:ListAliases.

How to Think About This:
Use the process of elimination based on what the question tells you:
• Role has AdministratorAccess → identity policy is NOT the issue (it allows everything)
• KMS resource policy has correct permissions → resource policy is NOT the issue
• Role has a permissions boundary attached → this is the remaining suspect

Since effective permissions = Identity Policy ∩ Permissions Boundary, even AdministratorAccess (which allows *) is capped by the boundary. If the boundary doesn't include kms:ListAliases, the action is blocked regardless of the identity policy.

Key Concepts:
Access Denied Troubleshooting Checklist — When you get Access Denied in an AWS Organizations environment, check these in order:
1. Identity Policy — Does the role/user have the permission? (Yes — AdministratorAccess)
2. Resource Policy — Does the resource allow the caller? (Yes — stated in question)
3. Permissions Boundary — Does the boundary allow the action? (This is the gap)
4. SCP — Does the organization allow the action?
5. Session Policy — If using temporary credentials with a session policy

The question deliberately tells you #1 and #2 are fine, and explicitly mentions a permissions boundary is attached. This narrows the answer to the boundary.

Permissions Boundary Recap:
• Boundaries set the maximum permissions an IAM entity can have
• Even AdministratorAccess is bounded — the boundary always wins
• Effective = Identity Policy ∩ Boundary ∩ SCP (for org accounts)
• If ANY layer is missing the permission, the action is denied

Why C is correct: The permissions boundary is the only policy layer not confirmed as correct. Adding kms:ListAliases to the boundary will allow the action to pass through all policy evaluation layers: the identity policy allows it (AdministratorAccess), the resource policy allows it (stated), and now the boundary will also allow it.
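
The question doesn't show the boundary's contents, but conceptually the fix is to add the failing action to the boundary's Allow statement, for example:

{
  "Effect": "Allow",
  "Action": [
    "rds:*",
    "kms:ListAliases"
  ],
  "Resource": "*"
}

In practice, other KMS actions (such as kms:DescribeKey and kms:CreateGrant) are commonly needed for cross-account encrypted snapshot copies as well, so expect to add those to the boundary if further errors appear.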

Why others are wrong:
A) Review KMS resource policy — The question explicitly states "the AWS KMS resource policy has the correct permissions." This option is already ruled out by the question itself. Always read the question carefully — when it tells you something is correct, don't pick an answer that fixes that thing.
B) Review the SCP — While SCPs can theoretically block KMS actions, the question is specifically testing your understanding of permissions boundaries. The clue is that the question mentions a permissions boundary is attached and the role has AdministratorAccess. The boundary is the most direct explanation for why a role with full admin access is being denied a specific action.
D) Enable the KMS key — A disabled KMS key would produce a "KMS key is disabled" or DisabledException error, not a "missing kms:ListAliases permission" error. The error message explicitly says the permission is missing, not that the key is in an invalid state. Always match the error message to the answer category.
Q10. A company is building a mobile app for users. The app must allow users to sign up and sign in with a password or social media login. Authenticated users should be able to upload files to Amazon S3.

Which authentication method will meet these requirements with the LEAST operational overhead?
A) Implement Amazon Cognito user pools. Use an Amazon Cognito identity pool to provide temporary access to Amazon S3 for authenticated users.
B) Generate Amazon S3 presigned URLs on the backend after verifying user authentication. Return the S3 presigned URLs to the client to upload files.
C) Use AWS IAM Identity Center and configure an identity provider (IdP). Configure the app as a customer managed application.
D) Configure an IAM OpenID Connect (OIDC) identity provider (IdP) for social media logins. Use AWS Security Token Service (AWS STS) AssumeRoleWithWebIdentity to exchange tokens for temporary credentials.
✓ Correct: A. Amazon Cognito User Pools + Identity Pools.

How to Think About This:
When you see "mobile app" + "sign up/sign in" + "social login" + "access AWS services" + "least overhead", the answer is always Amazon Cognito. Cognito is purpose-built for exactly this use case — it handles user registration, authentication, social federation, AND temporary AWS credential vending in a single managed service.

Key Concepts:
Cognito Has Two Components:
User Pools — A managed user directory. Handles: sign-up, sign-in, password policies, MFA, email/phone verification, and social identity federation (Google, Facebook, Apple, SAML). Think of it as authentication — "who is this user?"
Identity Pools (Federated Identities) — Exchanges authentication tokens (from User Pools or social providers) for temporary AWS credentials. Think of it as authorization — "what can this user do in AWS?"

The Complete Flow:
1. User signs up / signs in via Cognito User Pool (or social login)
2. User Pool returns a JWT token
3. App sends token to Cognito Identity Pool
4. Identity Pool calls sts:AssumeRoleWithWebIdentity behind the scenes
5. App receives temporary AWS credentials (scoped to an IAM role)
6. App uploads directly to S3 using those credentials

All of this is fully managed — no servers, no custom auth code, no token management. This is the least operational overhead possible.
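
The mobile SDKs wrap steps 3-5 for you, but the underlying Identity Pool calls look roughly like this (the pool IDs and token are placeholders):

# Exchange the User Pool ID token for an identity ID
aws cognito-identity get-id \
  --identity-pool-id us-east-1:11111111-2222-3333-4444-555555555555 \
  --logins cognito-idp.us-east-1.amazonaws.com/us-east-1_EXAMPLE=<id-token>

# Exchange the identity ID (plus the token) for temporary AWS credentials
aws cognito-identity get-credentials-for-identity \
  --identity-id <identity-id-from-previous-call> \
  --logins cognito-idp.us-east-1.amazonaws.com/us-east-1_EXAMPLE=<id-token>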

Why A is correct: Cognito handles every requirement in one service: user sign-up/sign-in (User Pool), social media login (User Pool federation), and temporary S3 access (Identity Pool). No backend servers to manage, no custom authentication logic, no manual STS calls. Maximum functionality with minimum overhead.

Why others are wrong:
B) S3 presigned URLs from backend — This requires you to build, deploy, and maintain a backend server that authenticates users and generates presigned URLs. You'd also need to build the user sign-up/sign-in system yourself (or use another service for it). More operational overhead than Cognito, which handles everything out of the box.
C) IAM Identity Center — Identity Center (formerly AWS SSO) is designed for workforce identity — employees accessing the AWS Console and business applications. It is NOT designed for consumer-facing mobile apps with public sign-up. It doesn't support social media login for end users or mobile app authentication flows.
D) IAM OIDC provider + STS AssumeRoleWithWebIdentity — This works technically but requires significantly more operational overhead. You must: (1) configure OIDC providers in IAM for each social platform, (2) manage token validation yourself, (3) call AssumeRoleWithWebIdentity manually in your app code, (4) handle token refresh and credential caching. Cognito Identity Pools do all of this automatically behind the scenes. Also, this doesn't provide a built-in sign-up/sign-in flow with passwords — you'd need to build that separately.
Q11. A company wants to deploy a legacy application on Amazon EC2 instances. The legacy application outputs all logs to a local text file. The company must continually monitor application logs for security-related messages.

Which solution will meet these requirements with the LEAST operational overhead?
A) Install the Amazon CloudWatch agent on the EC2 instances. Configure the CloudWatch agent to stream the log files to CloudWatch Logs. Create CloudWatch metric filters to detect security patterns. Set up CloudWatch alarms based on the metrics.
B) Install the Amazon CloudWatch agent on the EC2 instances. Create an AWS Lambda function to periodically scan the local log files. Configure Amazon EventBridge rules to trigger the Lambda function every 5 minutes to send findings to Amazon SNS.
C) Install the Amazon CloudWatch agent and the AWS Systems Manager Agent (SSM Agent) on the EC2 instances. Use Systems Manager Run Command to periodically collect logs and send the logs to CloudWatch Logs. Create CloudWatch Logs Insights queries to search for security events.
D) Install the Amazon CloudWatch agent on the EC2 instances and configure AWS Security Hub integration. Create custom log parsers in Security Hub to analyze the streamed logs and generate security findings based on detected patterns.
✓ Correct: A. CloudWatch agent → CloudWatch Logs → metric filters → alarms.

How to Think About This:
When you see "EC2 local log files" + "continuous monitoring" + "least overhead", the answer is the standard CloudWatch pipeline: agent streams logs → metric filters detect patterns → alarms notify. This is the native, fully managed approach with zero custom code.

Key Concepts:
CloudWatch Agent Log Streaming — The CloudWatch agent can be configured to tail local log files (just like tail -f) and continuously stream new log lines to CloudWatch Logs. You specify the file path in the agent config:
"file_path": "/var/log/app/application.log"
The agent handles buffering, retries, and delivery automatically. The legacy app doesn't need any changes — it keeps writing to its text file as normal.

CloudWatch Metric Filters — Pattern-matching rules applied to log streams. You define a filter pattern (e.g., "ERROR", "unauthorized", "failed login") and CloudWatch counts occurrences as a custom metric. Example: every time "SECURITY_VIOLATION" appears in the log, the metric increments by 1.

CloudWatch Alarms — Threshold-based alerts on metrics. When the security pattern metric exceeds a threshold (e.g., > 5 occurrences in 5 minutes), the alarm triggers an SNS notification or auto-remediation action.

The Complete Pipeline:
Log file → CW Agent → CW Logs → Metric Filter → Custom Metric → Alarm → SNS/Action
All fully managed, continuous, no custom code, no servers to maintain.
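
A sketch of the metric filter and alarm pieces (the log group, filter pattern, namespace, and SNS topic are placeholders):

aws logs put-metric-filter \
  --log-group-name /legacy-app/application \
  --filter-name security-violations \
  --filter-pattern "SECURITY_VIOLATION" \
  --metric-transformations metricName=SecurityViolations,metricNamespace=LegacyApp,metricValue=1

aws cloudwatch put-metric-alarm \
  --alarm-name legacy-app-security-violations \
  --namespace LegacyApp \
  --metric-name SecurityViolations \
  --statistic Sum \
  --period 300 \
  --evaluation-periods 1 \
  --threshold 5 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:security-alerts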

Why A is correct: This is the standard AWS pattern for monitoring application logs from EC2. Every component is fully managed: the agent streams logs continuously, metric filters detect patterns in real time, and alarms provide automated notifications. No Lambda functions, no Run Commands, no custom parsers — just configuration.

Why others are wrong:
B) Lambda scanning local log files every 5 minutes — Two problems: (1) Lambda cannot access local files on EC2 instances — Lambda runs in its own isolated environment, not on the EC2 instance. (2) Even if it could, polling every 5 minutes is not "continuous" monitoring and adds a Lambda function to build and maintain. The CloudWatch agent already handles log streaming natively.
C) SSM Run Command to periodically collect logs — Run Command executes commands on instances on-demand or on a schedule. Using it to periodically collect logs is a polling approach (not continuous), requires scheduling and managing the command, and adds SSM as an unnecessary dependency. The CloudWatch agent already streams logs continuously without needing Run Command.
D) Security Hub custom log parsers — Security Hub aggregates security findings from AWS services (GuardDuty, Inspector, Config, etc.). It does not support custom log parsers or direct log analysis. Security Hub doesn't ingest raw application logs — it works with structured security findings in the AWS Security Finding Format (ASFF). This capability simply doesn't exist.
Q12. An AWS Lambda function reads metadata from an Amazon S3 object and stores the metadata in an Amazon DynamoDB table. The function runs whenever an object is stored within the S3 bucket.

How should a security engineer give the Lambda function access to the DynamoDB table?
A) Create a VPC endpoint for DynamoDB within a VPC. Configure the Lambda function to access resources in the VPC.
B) Create a resource policy that grants the Lambda function permissions to write to the DynamoDB table. Attach the resource policy to the Lambda function.
C) Create an IAM user with permissions to write to the DynamoDB table. Store an access key for that user in the Lambda environment variables.
D) Create an IAM service role with permissions to write to the DynamoDB table. Associate that role with the Lambda function.
✓ Correct: D. Create an IAM service role (Lambda execution role) with DynamoDB write permissions.

How to Think About This:
When you see "Lambda needs access to another AWS service", the answer is always IAM execution role. This is the same pattern as EC2 instance roles — AWS services get permissions through IAM roles, never through embedded credentials or IAM users.

Key Concepts:
Lambda Execution Role — Every Lambda function has an execution role that defines what AWS services and resources the function can access. When Lambda runs your function, it assumes this role and receives temporary credentials automatically. The role needs:
• A trust policy allowing lambda.amazonaws.com to assume it
• A permission policy with the actions needed (in this case dynamodb:PutItem, s3:GetObject)

How It Works:
1. S3 event triggers Lambda (S3 notification configuration handles this)
2. Lambda assumes the execution role automatically
3. Function reads the S3 object metadata using the role's s3:GetObject permission
4. Function writes to DynamoDB using the role's dynamodb:PutItem permission
5. Temporary credentials are managed entirely by AWS — no secrets in code

Why D is correct: An IAM service role is the AWS-recommended and most secure way to grant Lambda access to other services. Credentials are temporary, auto-rotated, and never exposed. The role can be scoped to the exact DynamoDB table and actions needed (least privilege).
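
A sketch of the two policies that make up the execution role (the table, bucket, Region, and account ID are placeholders):

Trust policy (lets Lambda assume the role):
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "lambda.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}

Permission policy (scoped to the exact table and bucket):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "dynamodb:PutItem",
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/ObjectMetadata"
    },
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}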

Why others are wrong:
A) VPC endpoint for DynamoDB — A VPC endpoint controls network routing (keeping traffic within the AWS network instead of the internet). It does NOT grant IAM permissions. Even with a VPC endpoint, Lambda still needs an execution role with dynamodb:PutItem permission. Also, Lambda doesn't need to be in a VPC to access DynamoDB — both are accessible over the AWS network by default. Adding VPC configuration to Lambda actually increases complexity and cold start times.
B) Resource policy on Lambda function — Lambda resource policies control who can invoke the function (e.g., allowing S3 to trigger it). They do NOT grant the function permissions to access other services. Also, DynamoDB does not support resource-based policies — access is controlled entirely through IAM identity policies and roles.
C) IAM user with access key in environment variables — This is a critical security anti-pattern. Storing access keys in environment variables means: (1) long-term credentials that don't auto-rotate, (2) credentials visible in the Lambda console and potentially in logs, (3) credentials that could be leaked through environment variable exposure. IAM roles provide the same access with temporary, auto-rotating credentials and zero secrets to manage.
Q13. A company uses AWS Organizations to manage its multi-account environment. A security team enabled AWS Security Hub in one account but discovers it only shows alerts for that account. The team needs Security Hub to:

• Automatically receive security alerts from all enabled Regions of all existing and new accounts in Organizations
• Initiate automated remediation of security findings
(SELECT THREE)

Which combination of steps will meet these requirements?
A) Create a Security Hub custom insight.
B) Configure the AWS account to be the delegated Security Hub administrator.
C) Enable cross-Region aggregation in Security Hub.
D) Enable the Security Hub integration with Amazon Inspector.
E) Implement Security Hub custom actions.
F) Attach the AWSServiceRoleForSecurityHub service-linked role to the security team's IAM group.
✓ Correct: B, C, and E. Delegated admin + cross-Region aggregation + custom actions for remediation.

How to Think About This:
Map each requirement to a solution:
"All existing and new accounts" → Delegated administrator (auto-enables Security Hub for all org accounts)
"All enabled Regions" → Cross-Region aggregation
"Automated remediation" → Custom actions (send findings to EventBridge → Lambda)

Key Concepts:
Delegated Security Hub Administrator (B) — In an Organizations environment, you designate a member account as the delegated administrator for Security Hub. This account can then automatically enable Security Hub for all existing and future accounts in the organization. Without this, you'd have to manually enable Security Hub in each account individually. This follows the AWS best practice of keeping the management account clean — delegate service administration to a dedicated security account.

Cross-Region Aggregation (C) — By default, Security Hub findings are regional — you only see findings from the Region you're viewing. Cross-Region aggregation collects findings from all enabled Regions into a single aggregation Region. This gives you a single-pane-of-glass view across all Regions without switching between Region consoles. You configure one Region as the aggregation Region and link all other Regions to it.

Security Hub Custom Actions (E) — Custom actions create a bridge between Security Hub and automation. The flow:
1. Define a custom action in Security Hub (appears as a button in the findings console)
2. When triggered (manually or via EventBridge rule), it sends the finding to Amazon EventBridge
3. EventBridge routes the event to a Lambda function
4. Lambda performs the remediation (e.g., revoke security group rule, disable access key, isolate instance)
This is how you achieve the "automated remediation" requirement. The pipeline: Security Hub Finding → Custom Action → EventBridge → Lambda → Remediation
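
For illustration, the EventBridge rule that routes a custom action to Lambda matches on an event pattern roughly like this (the custom action ARN is a placeholder):

{
  "source": ["aws.securityhub"],
  "detail-type": ["Security Hub Findings - Custom Action"],
  "resources": ["arn:aws:securityhub:us-east-1:123456789012:action/custom/remediate-finding"]
}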

Why others are wrong:
A) Security Hub custom insight — Insights are saved groups of related findings — essentially saved filters/views. They help you organize and visualize findings (e.g., "show me all critical S3 findings"). They do NOT aggregate across accounts, aggregate across Regions, or perform remediation. Insights are a reporting feature, not a configuration or automation feature.
D) Security Hub integration with Amazon Inspector — This integration allows Inspector to send its vulnerability findings to Security Hub. While useful, it doesn't address the three requirements: it doesn't enable cross-account aggregation, cross-Region aggregation, or automated remediation. It just adds one more finding source.
F) Attach service-linked role to IAM group — AWSServiceRoleForSecurityHub is a service-linked role: a special IAM role that is directly linked to the Security Hub service itself. Service-linked roles are managed by AWS and cannot be attached to users, groups, or other entities. They allow the AWS service to perform actions on your behalf (e.g., Security Hub reading Config data). You cannot and should not manually attach them to IAM groups.
Q14. A financial company is improving its security posture for applications on AWS. A security engineer must implement a strategy to:

• Aggregate security findings in a central location
• Detect and evaluate potential security anomalies across multiple applications
• Search and correlate security threats in an automated manner
(SELECT THREE)

Which combination of solutions should the security engineer use with the LEAST operational overhead?
A) Use AWS Security Hub to aggregate security findings.
B) Use Amazon Detective to evaluate, search, and correlate security threats.
C) Aggregate security findings in a central Amazon S3 bucket.
D) Create dashboards for each application in Amazon CloudWatch to detect anomalous activity.
E) Use Amazon Athena to evaluate, search, and correlate security threats.
F) Use Amazon GuardDuty to detect anomalies across multiple applications.
✓ Correct: A, B, and F. Security Hub (aggregate) + GuardDuty (detect) + Detective (investigate).

How to Think About This:
Map each requirement to the purpose-built AWS service:
"Aggregate findings in central location" → Security Hub
"Detect anomalies" → GuardDuty
"Search and correlate threats" → Detective

These three services form the AWS security trifecta — they work together as a pipeline and are the "least overhead" answer for any question about centralized security monitoring.

Key Concepts:
AWS Security Hub (A) — The Aggregator
Security Hub is a central dashboard that collects and normalizes security findings from multiple AWS services (GuardDuty, Inspector, Macie, Config, Firewall Manager, IAM Access Analyzer) and third-party tools. It uses the AWS Security Finding Format (ASFF) to standardize all findings. Think of it as the single pane of glass for all security alerts.

Amazon GuardDuty (F) — The Detector
GuardDuty is a threat detection service that continuously monitors and analyzes multiple data sources:
• CloudTrail management and data events (API call anomalies)
• VPC Flow Logs (network anomalies)
• DNS logs (DNS query anomalies)
• EKS audit logs, S3 data events, RDS login activity
GuardDuty uses machine learning, anomaly detection, and threat intelligence to automatically detect suspicious activity like compromised instances, reconnaissance, credential theft, and cryptocurrency mining. It is fully managed — no rules to write, no agents to install.

Amazon Detective (B) — The Investigator
Detective helps you analyze, investigate, and determine the root cause of security findings. When GuardDuty flags a suspicious finding, Detective allows you to dive deep: visualize resource behaviors over time, trace API calls, map relationships between entities (users, IPs, instances), and correlate events across multiple data sources. It automatically builds a behavior graph from CloudTrail, VPC Flow Logs, and GuardDuty findings.

How They Work Together:
GuardDuty (detects) → Security Hub (aggregates) → Detective (investigates)

Why others are wrong:
C) Aggregate findings in S3 — While you could store findings in S3, it would require configuration, management, cataloging, and custom querying to be useful. Security Hub provides all of this out of the box with dashboards, compliance checks, and integrations. S3 is raw storage — Security Hub is a purpose-built security aggregation platform. Much more operational overhead.
D) CloudWatch dashboards for each application — CloudWatch dashboards display metrics (CPU, memory, latency) and require you to visually inspect them. This is not automated anomaly detection — it's manual monitoring. You'd need a human watching dashboards to spot anomalies. GuardDuty detects anomalies automatically using ML with zero human intervention.
E) Amazon Athena to search and correlate — Athena is a SQL query engine for data in S3. You could query security logs with Athena, but you'd need to: set up S3 data ingestion, define table schemas, write SQL queries, and manage the entire pipeline. Amazon Detective does all of this automatically with built-in visualizations and entity graphs. Athena works but requires significantly more operational overhead.
Q15. A company is concerned that its network security configuration might prevent legitimate IPv4 traffic from reaching resources in one of several VPCs. The company has configured VPC Flow Logs with the default flow log record format. The company wants to quickly identify excessive requests that have the type IPv4 and where the action was REJECT.

Which solution will meet these requirements with the LEAST effort?
A) Customize the format of the existing flow log to include the type field. Configure the flow log to publish data to an Amazon S3 bucket. Create and schedule an AWS Lambda function to run an Amazon Athena query against the logs. Invoke an Amazon SNS notification based on the REJECT count.
B) Customize the format of the existing flow log to include the type field. Configure the flow log to publish data directly to an Amazon CloudWatch Logs log group. Create a custom CloudWatch metric from the flow log records by filtering on action = REJECT and type = IPv4. Create an alarm based on the custom metric.
C) Create a new flow log with a custom log format that includes the type field. Configure the flow log to publish data to an Amazon S3 bucket. Create and schedule an AWS Lambda function to run an Amazon Athena query against the logs. Invoke an Amazon SNS notification based on the REJECT count.
D) Create a new flow log with a custom log format that includes the type field. Configure the flow log to publish data directly to an Amazon CloudWatch Logs log group. Create a custom CloudWatch metric from the flow log records by filtering on action = REJECT and type = IPv4. Create an alarm based on the custom metric.
✓ Correct: D. Create a new flow log with custom format → CloudWatch Logs → metric filter → alarm.

How to Think About This:
This question tests two concepts simultaneously:
1. Can you modify an existing flow log's format? No — you must create a new one.
2. Least effort for alerting? CloudWatch metric filters + alarms (not S3 + Athena + Lambda + SNS).

This eliminates A and B (can't modify existing flow logs) and C (Athena pipeline is more effort).

Key Concepts:
VPC Flow Log Formats — The default flow log format captures: version, account-id, interface-id, srcaddr, dstaddr, srcport, dstport, protocol, packets, bytes, start, end, action, log-status. Critically, the default format does NOT include the type field (IPv4 vs IPv6). To filter by type, you need a custom format.

You Cannot Modify an Existing Flow Log — Once a VPC Flow Log is created, its record format is immutable. You cannot add or remove fields from an existing flow log. To change the format, you must create a new flow log with the desired custom format. This immediately eliminates options A and B which say "customize the format of the existing flow log."

CloudWatch Metric Filters vs. Athena — For real-time alerting with least effort:
CloudWatch Logs + Metric Filter + Alarm: Flow logs stream directly to CloudWatch. A metric filter pattern matches REJECT + IPv4 and increments a custom metric. An alarm fires when the count exceeds a threshold. All configuration, no code.
S3 + Athena + Lambda + SNS: Requires setting up an S3 bucket, creating Athena table schemas, writing SQL queries, building a Lambda function, scheduling it, and configuring SNS. Far more components and effort.

Why D is correct: It correctly identifies that a new flow log is needed (can't modify existing) and uses the least-effort alerting pipeline (CloudWatch metric filter + alarm). No Lambda functions, no Athena queries, no custom code — just configuration.
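
A sketch of the two key steps (the VPC ID, IAM role, log group, and metric names are placeholders; the custom format is trimmed to the fields used here):

aws ec2 create-flow-log \
  --resource-type VPC \
  --resource-ids vpc-0abc1234example \
  --traffic-type ALL \
  --log-destination-type cloud-watch-logs \
  --log-group-name vpc-flow-logs \
  --deliver-logs-permission-arn arn:aws:iam::123456789012:role/flow-logs-role \
  --log-format '${version} ${type} ${srcaddr} ${dstaddr} ${action}'

aws logs put-metric-filter \
  --log-group-name vpc-flow-logs \
  --filter-name ipv4-rejects \
  --filter-pattern '[version, type=IPv4, srcaddr, dstaddr, action=REJECT]' \
  --metric-transformations metricName=IPv4Rejects,metricNamespace=VPCFlowLogs,metricValue=1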

Why others are wrong:
A) Customize existing flow log + S3/Athena/Lambda — Two problems: (1) You cannot modify an existing flow log's format — you must create a new one. (2) The S3 → Athena → Lambda → SNS pipeline requires significantly more effort than CloudWatch metric filters.
B) Customize existing flow log + CloudWatch — Uses the right alerting approach (CloudWatch metric filters) but fails on the first step: you cannot customize an existing flow log. A new flow log is required.
C) New flow log + S3/Athena/Lambda — Correctly creates a new flow log, but uses the more complex S3 → Athena → Lambda pipeline. Scheduling Lambda functions, writing Athena queries, and managing SNS notifications is far more effort than a simple CloudWatch metric filter and alarm.
Q16. A company has a web application that uses an Application Load Balancer (ALB) and an Auto Scaling group with multiple Amazon EC2 instances as the target group. A security engineer needs to analyze the EC2 instance logs and determine the real IP address of the originating client for specific requests.

Which solution will meet these requirements?
A) Analyze the EC2 instance logs to locate the client IP address under the "Client IP" field in the ALB settings.
B) Configure Amazon CloudWatch and enable detailed monitoring.
C) Modify the ALB listener rules to add forward actions with "X-Forwarded-For: Client IP" in the header.
D) Enable VPC Flow Logs for the subnets that contain the EC2 instances. Use Amazon Athena to query the logs for client IP addresses and request patterns.
✓ Correct: C. Modify ALB listener rules to add the X-Forwarded-For header with the client IP.

How to Think About This:
When you see "ALB" + "real client IP" + "EC2 instance logs", the answer is X-Forwarded-For header. This is a fundamental concept: when traffic passes through a load balancer, the EC2 instance sees the ALB's IP as the source, not the client's. The X-Forwarded-For header preserves the original client IP through the proxy chain.

Key Concepts:
The Problem — When a client connects to an ALB, the ALB terminates the client's TCP connection and opens a new connection to the EC2 instance. From the EC2 instance's perspective, the source IP is the ALB's private IP — the real client IP is lost. This is the fundamental challenge with any reverse proxy or load balancer.

X-Forwarded-For Header — An HTTP header that ALBs add to requests before forwarding them to targets. It contains the original client IP address. Format:
X-Forwarded-For: client-ip, proxy1-ip, proxy2-ip
The leftmost IP is the original client. By default, ALBs add this header, but you can explicitly configure it in listener rules to ensure consistent, standardized formatting.

Related ALB Headers:
X-Forwarded-For — Original client IP address
X-Forwarded-Proto — Original protocol (HTTP or HTTPS)
X-Forwarded-Port — Original port used by the client

Your application on EC2 must be configured to read the X-Forwarded-For header instead of the TCP source IP to get the real client address. Most web servers (Apache, Nginx) have modules for this.

Why C is correct: Modifying ALB listener rules to explicitly add the X-Forwarded-For header ensures the original client IP is consistently available in a standardized format. The EC2 instances can then parse this header from their application logs to determine the real client IP for any request.
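A hedged CLI sketch: the ALB attribute routing.http.xff_header_processing.mode controls how the load balancer handles this header, and append (the default) adds the client IP before forwarding. The load balancer ARN below is a placeholder.
# Check how the ALB currently treats X-Forwarded-For
aws elbv2 describe-load-balancer-attributes \
  --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/web-alb/0123456789abcdef
# Ensure the client IP is appended to X-Forwarded-For before requests reach the targets
aws elbv2 modify-load-balancer-attributes \
  --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/web-alb/0123456789abcdef \
  --attributes Key=routing.http.xff_header_processing.mode,Value=append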

Why others are wrong:
A) "Client IP" field in EC2 instance logs — EC2 instance logs do not automatically contain a "Client IP" field from the ALB. The ALB doesn't inject this into EC2 system logs. The client IP information must be carried via the X-Forwarded-For HTTP header, which the application (not the OS) needs to parse from incoming requests.
B) CloudWatch detailed monitoring — Detailed monitoring provides metrics (CPU, network, disk) at 1-minute intervals instead of 5-minute. It does not provide application-level information like client IP addresses. CloudWatch metrics are aggregate numbers, not per-request data. You cannot determine which client IP made which request from CloudWatch metrics.
D) VPC Flow Logs + Athena — VPC Flow Logs capture Layer 3/4 network data: source IP, destination IP, ports, protocol, bytes. However, Flow Logs for the EC2 instances would show the ALB's IP as the source — not the original client IP. The original client IP is only visible in the ALB's Flow Logs (or ALB access logs), not the EC2 instance's Flow Logs. Additionally, Flow Logs cannot correlate client IPs with specific application-level HTTP requests.
Q17.A company configured AWS CloudTrail to send logs to an Amazon S3 bucket. The company determined that IAM users created unapproved AWS resources. A security engineer must design a solution to analyze all future CloudTrail logs to determine which API calls users made to create resources. The solution must use a single query to determine:

• The user name of the IAM user
• The IAM ARN that called the API
• The name of the API event
• The time at which the event occurred

Which solution will meet these requirements?
AConfigure Amazon CloudWatch Insights in the account. Query CloudWatch Insights to look for resource creation events.
BUse CloudTrail to send logs directly into an Amazon Athena table. Use an Athena query that can access nested fields within the CloudTrail event.
CUse CloudTrail to send logs directly into an external Amazon Redshift Spectrum table. Use a Redshift Spectrum query that can access nested fields within the CloudTrail event.
DConfigure CloudTrail Lake with an event data store. Create a CloudTrail Lake SQL query that can access nested fields within the CloudTrail event.
✓ Correct: D. Configure CloudTrail Lake with an event data store and use SQL queries.

How to Think About This:
When you see "analyze CloudTrail logs" + "single query" + "nested fields" + "future logs", the answer is CloudTrail Lake. It's the native, purpose-built query engine for CloudTrail events with built-in SQL support for nested event fields like userIdentity.userName and userIdentity.arn.

Key Concepts:
CloudTrail Lake — A managed data lake within CloudTrail that lets you run SQL queries directly on CloudTrail events. Key features:
Event Data Store — A collection of CloudTrail events that you can query. You create one and it automatically collects future events.
SQL Queries — Write standard SQL with support for nested fields. Example:
SELECT userIdentity.userName, userIdentity.arn, eventName, eventTime FROM event_data_store WHERE eventName LIKE 'Create%'
Single location — No need to set up S3, Athena tables, or external schemas. Query directly within CloudTrail.
Future logs — a new event data store collects events from the point it is created onward; it does not automatically cover the historical logs already delivered to S3. The question specifically asks about "all future CloudTrail logs," which aligns perfectly.

CloudTrail Lake vs. Athena:
CloudTrail Lake: Zero setup — create event data store, start querying. Built-in nested field support. No S3 bucket or table schema needed.
Athena: Requires creating a table definition, pointing to S3 bucket, managing partitions. Works on historical logs in S3 but requires more setup.

Why D is correct: CloudTrail Lake provides the simplest path: create an event data store, and immediately run SQL queries against future events. It natively supports nested fields (userIdentity.userName, userIdentity.arn, eventName, eventTime) in a single query. No additional services, no table schemas, no data pipelines.
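A hedged CLI sketch of the CloudTrail Lake setup and the single query (the data store name and the returned IDs are placeholders):
# Create an event data store that collects management events from this point onward
aws cloudtrail create-event-data-store --name api-investigation --retention-period 90
# Run the single query, referencing the event data store ID returned above
aws cloudtrail start-query --query-statement \
  "SELECT userIdentity.userName, userIdentity.arn, eventName, eventTime FROM <event-data-store-id> WHERE eventName LIKE 'Create%'"
# Fetch the results once the query completes
aws cloudtrail get-query-results --query-id <query-id>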

Why others are wrong:
A) CloudWatch Insights — CloudWatch Logs Insights can query logs, but CloudTrail logs must first be configured to send to CloudWatch Logs — they don't go there by default. The question says logs go to S3. Also, CloudWatch Insights cannot query CloudTrail logs directly from S3. This adds an extra configuration step (setting up CloudTrail → CloudWatch Logs integration) which is more operational overhead.
B) CloudTrail to Athena table directly — CloudTrail cannot send logs directly into an Athena table. Athena is a query engine that reads data from S3. You must first create an Athena table definition that points to your CloudTrail S3 bucket, define the schema, and manage partitions. This requires additional setup steps beyond what CloudTrail Lake needs.
C) CloudTrail to Redshift Spectrum directly — CloudTrail cannot send logs directly into a Redshift Spectrum table. Redshift Spectrum queries data in S3 through an external schema. You must create an external schema, define the table structure, and configure a Redshift cluster. This is the most complex option with the highest operational overhead.
Q18.A company has an AWS Lambda function that processes sensitive financial data from an Amazon S3 bucket. The security team notices suspicious patterns where miscalculations benefit one particular customer. The Lambda function has logging enabled at INFO level. The logs show successful execution but no details about data transformations:
[INFO] Processing batch of 100 records
[INFO] Successfully processed records
[INFO] Processing completed
REPORT RequestId: c2d62... Duration: 3287.49 ms
Despite successful execution messages, some processed records show unexpected modifications. A security engineer must provide a more thorough investigation of this issue.

Which action will meet this requirement?
AConfigure AWS Shield Advanced on the Lambda function. Enable Shield logging to analyze Lambda function behavior and potential security threats.
BChange the Lambda function's logging level from INFO to DEBUG. Configure AWS X-Ray tracing in the Lambda function. Use Amazon CloudWatch Insights queries to analyze and discover patterns in the data transformations.
CMove the Lambda function to a private VPC and attach a Network Load Balancer (NLB). Configure the NLB access logs and VPC Flow Logs to monitor function behavior and data transformations.
DEnable Amazon GuardDuty on the S3 bucket and set up alerting rules. Configure GuardDuty findings to monitor for suspicious data access patterns and potential malicious activities.
✓ Correct: B. Change logging to DEBUG + enable X-Ray tracing + use CloudWatch Insights queries.

How to Think About This:
The clue is in the logs: [INFO] level shows "successfully processed" but gives zero detail about what actually happened to the data. The problem is inside the function (data transformations), not outside it (network, access patterns). You need application-level visibility, not network-level or threat-detection-level monitoring.

Key Concepts:
Lambda Logging Levels — Lambda supports standard logging levels with increasing verbosity:
ERROR — Only errors
WARN — Warnings and errors
INFO — General operational messages (what's happening at a high level)
DEBUG — Detailed diagnostic information including variable values, calculation steps, and data transformations

Switching from INFO to DEBUG reveals the actual processing logic: which records were modified, what calculations were performed, what values were used. This is exactly what's needed to investigate suspicious data modifications.

AWS X-Ray Tracing — X-Ray traces individual requests through your application. For Lambda, it shows:
• How long each segment of processing took
• Which downstream services were called (S3, DynamoDB, etc.)
• Where errors or anomalies occurred in the transaction flow
X-Ray helps you trace specific transactions (e.g., the records for the suspicious customer) through the function to see exactly what happened.

CloudWatch Logs Insights — Once DEBUG logs are flowing, Insights lets you run queries to find patterns across thousands of log entries. Example: filter @message like /customer_id=12345/ | stats count(*) by calculation_result. This helps identify if certain records are being treated differently.

The Investigation Pipeline:
DEBUG logs (see what happened) + X-Ray (trace specific transactions) + Insights (find patterns)

Why B is correct: This is the only option that provides application-level visibility into the Lambda function's internal operations. DEBUG logs show the actual data transformations, X-Ray traces specific transactions, and CloudWatch Insights identifies patterns across all executions. Together, they can reveal exactly how and why certain records are being modified differently.
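A hedged CLI sketch (the function name is a placeholder; ApplicationLogLevel filtering assumes the function is switched to Lambda's JSON log format):
# Raise application logs to DEBUG and enable active X-Ray tracing
aws lambda update-function-configuration \
  --function-name financial-batch-processor \
  --logging-config LogFormat=JSON,ApplicationLogLevel=DEBUG,SystemLogLevel=INFO \
  --tracing-config Mode=Active
# Example Logs Insights query over the DEBUG output for one customer (times are epoch placeholders)
aws logs start-query \
  --log-group-name /aws/lambda/financial-batch-processor \
  --start-time 1700000000 --end-time 1700003600 \
  --query-string 'filter @message like /customer_id=12345/ | stats count(*) by calculation_result'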

Why others are wrong:
A) AWS Shield Advanced — Shield Advanced protects against DDoS attacks (network layer). It monitors network traffic volume and patterns to detect and mitigate distributed attacks. It has zero visibility into a Lambda function's internal data processing logic. You can't investigate suspicious calculations with a DDoS protection service.
C) VPC + NLB + Flow Logs — NLB access logs and VPC Flow Logs capture network-level traffic (IPs, ports, bytes). They cannot see inside the Lambda function — they only see that a connection happened, not what the function did with the data. The problem is in the application logic, not in the network traffic. Moving Lambda to a VPC adds complexity without adding the needed visibility.
D) GuardDuty on S3 — GuardDuty monitors for suspicious access patterns at the AWS account and service level (e.g., "who accessed the bucket from an unusual IP?"). It cannot see how the Lambda function processes data after retrieving it from S3. GuardDuty would detect if someone unauthorized accessed the bucket, but the issue here is what happens to the data inside the function, which GuardDuty has no visibility into.
Q19.A security engineer must troubleshoot S3 access logging. An existing source bucket is configured to write access logs to a destination bucket. However, the destination bucket does not receive the access logs. (SELECT TWO)

Which combination of steps will validate the setup to troubleshoot the issue?
AValidate that access logs are disabled for the destination bucket.
BValidate that the destination bucket does not have a default retention period configured.
CValidate that the source bucket policy grants access to the logging service principal.
DValidate that the destination bucket policy grants access to the logging service principal.
EValidate that the source bucket and destination bucket have versioning enabled.
✓ Correct: B and D. Check that the destination bucket has no default retention period AND that its bucket policy grants access to the logging service principal.

How to Think About This:
S3 access logs are written to the destination bucket, so focus on what could prevent writes to the destination bucket. Two things block writes: (1) missing permissions for the logging service, and (2) bucket features that interfere with log delivery (like Object Lock retention).

Key Concepts:
S3 Server Access Logging — When enabled, S3 records detailed information about requests made to a source bucket and delivers those logs to a destination bucket. The logging is performed by the S3 logging service principal (logging.s3.amazonaws.com), not by your IAM user or role.

Destination Bucket Requirements:
Bucket policy must grant write access to logging.s3.amazonaws.com — Without this, the logging service cannot write log files to the destination bucket. This is the most common cause of missing access logs.
No default retention period (Object Lock) — If the destination bucket has a default retention period configured via S3 Object Lock, it can interfere with log delivery. The logging service may not be able to write objects that comply with the retention settings.
• Must be in the same AWS Region as the source bucket
• Must not be the same bucket as the source (to avoid infinite loops, though technically possible)

The Logging Flow:
Client request → Source bucket → S3 logging service (logging.s3.amazonaws.com) → Writes to Destination bucket

Why B is correct: If the destination bucket has a default retention period configured (via S3 Object Lock), it can prevent the logging service from delivering log files. The retention period creates constraints on how objects can be written, which can block log delivery. Removing the default retention period resolves this issue.

Why D is correct: The logging service principal (logging.s3.amazonaws.com) must have s3:PutObject permission on the destination bucket. Without this bucket policy, logs cannot be written. Example policy:
{"Effect":"Allow","Principal":{"Service":"logging.s3.amazonaws.com"},"Action":"s3:PutObject","Resource":"arn:aws:s3:::destination-bucket/*"}

Why others are wrong:
A) Disable access logs on the destination bucket — AWS recommends disabling logging on the destination bucket to avoid infinite loops (destination logs its own writes, which generates more logs, etc.). However, having logging enabled on the destination bucket would NOT prevent logs from arriving — they would still be delivered. It's a best practice issue, not a blocking issue.
C) Source bucket policy grants logging access — The logging service writes to the destination bucket, not the source bucket. The source bucket doesn't need to grant the logging service any permissions. The source bucket only needs logging enabled (which the question confirms is already done).
E) Versioning enabled on both buckets — S3 server access logging does not require versioning. Versioning is an independent feature for maintaining multiple versions of objects. It has no impact on whether access logs are delivered.
Q20.A global company uses AWS Organizations for its multi-account environment. The company needs an incident response framework that:

• Automatically quarantines compromised resources across accounts
• Preserves forensic evidence in an immutable format
• Provides detailed analysis of the attack path
• Maintains regulatory compliance

Which incident response framework configuration will meet these requirements?
AConfigure AWS Config with multi-account aggregators to detect noncompliant resources. Use AWS Control Tower to enforce controls during incidents. Store forensic evidence in Amazon S3 Glacier Flexible Retrieval with Vault Lock policies. Configure AWS X-Ray for attack path visualization.
BDeploy AWS Security Hub as the central aggregation point with Organizations integration. Configure Security Hub automated response and remediation by using built-in AWS Systems Manager Automation runbooks. Store forensic data in Amazon S3 buckets with S3 Object Lock enabled. Implement Amazon Detective for attack path analysis.
CImplement centralized Amazon CloudWatch Logs with subscription filters to detect incidents. Create an Amazon EventBridge rule to trigger AWS Step Functions workflows that orchestrate containment by using SCPs. Use AWS Backup with legal hold settings for forensic preservation.
DDeploy Automated Forensics Orchestrator for Amazon EC2 with cross-account IAM roles. Use Amazon GuardDuty with delegated administrator for detection. Use Amazon EventBridge to trigger AWS Step Functions workflows that isolate affected resources by using VPC endpoint policies.
✓ Correct: B. Security Hub + SSM Automation runbooks + S3 Object Lock + Amazon Detective.

How to Think About This:
Map each requirement to the correct service:
"Automatically quarantine across accounts" → Security Hub automated response with SSM Automation runbooks
"Immutable forensic evidence" → S3 Object Lock (WORM storage)
"Attack path analysis" → Amazon Detective
"Central aggregation" → Security Hub with Organizations integration

Key Concepts:
Security Hub Automated Response & Remediation — Security Hub can trigger SSM Automation runbooks in response to findings. These runbooks can perform cross-account containment actions like isolating EC2 instances, revoking security group rules, or disabling access keys. Because SSM Automation supports cross-account execution, quarantine actions work consistently across all member accounts in the organization.

S3 Object Lock (WORM) — Provides Write Once, Read Many immutable storage. Two modes:
Governance mode — Protected but users with special permissions can override
Compliance mode — Nobody can delete or modify, not even root — for true forensic evidence preservation
Object Lock ensures forensic evidence (memory dumps, disk snapshots, log copies) cannot be tampered with, maintaining chain of custody for regulatory compliance.
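A hedged CLI sketch of a compliance-mode default retention on an evidence bucket (bucket name and retention period are placeholders; the bucket must have versioning and Object Lock enabled):
aws s3api put-object-lock-configuration \
  --bucket forensic-evidence-bucket \
  --object-lock-configuration '{"ObjectLockEnabled":"Enabled","Rule":{"DefaultRetention":{"Mode":"COMPLIANCE","Days":365}}}'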

Amazon Detective — Purpose-built for security investigation and attack path analysis. Detective automatically builds a behavior graph from CloudTrail logs, VPC Flow Logs, and GuardDuty findings. It visualizes relationships between users, IP addresses, and resources over time, making it possible to trace exactly how an attacker moved through your environment.

Why others are wrong:
A) Config + Control Tower + S3 Glacier + X-Ray — Multiple issues: (1) AWS Config monitors compliance but is not designed for real-time incident detection. (2) Control Tower guardrails are preventive/detective governance controls, not responsive incident handling tools. (3) S3 Glacier Flexible Retrieval has retrieval delays (minutes to hours) which impede timely forensic investigation. (4) X-Ray is for application performance tracing, NOT security attack path analysis. X-Ray traces API calls through microservices — it doesn't map attacker movement across accounts.
C) CloudWatch Logs + SCPs + AWS Backup — (1) Subscription filters alone could miss sophisticated attacks that don't match specific log patterns. (2) SCPs for containment are too broad — they apply at the account or OU level, potentially disrupting all legitimate operations in the account during an incident, not just the compromised resource. (3) AWS Backup with legal hold is for backup compliance, not forensic preservation — it lacks chain-of-custody features needed for proper digital forensics.
D) Forensics Orchestrator + GuardDuty + VPC endpoint policies — (1) Automated Forensics Orchestrator for EC2 only handles EC2 instances, not other compromised resources (Lambda, S3, IAM). (2) VPC endpoint policies control access to AWS services, not to individual resources — they are not an effective isolation mechanism during incidents. (3) This configuration lacks immutable evidence storage, which is essential for forensic data integrity.
Q21.A security engineer is preparing for a potential security incident involving unauthorized access to an Amazon EC2 instance hosting sensitive customer data. The engineer developed a response plan and needs to test and validate the plan without impacting production data.

Which solution will validate the incident response plan with the LEAST operational overhead?
AReplicate the EC2 environment in a testing account. Use sanitized data and simulate the incident by using AWS Fault Injection Service.
BUse AWS Config to capture configuration changes in the production environment during an actual incident. Apply the changes in a test environment.
CEnable AWS CloudTrail in a testing account. Use production CloudTrail logs to simulate incident patterns in an AWS Lambda function that triggers the response plan.
DDeploy an AWS CloudFormation stack in a testing account that simulates a security incident by exposing EC2 metadata. Observe the results and detection outcomes.
✓ Correct: A. Replicate in a testing account with sanitized data and use AWS Fault Injection Service.

How to Think About This:
When you see "test incident response" + "without impacting production" + "least overhead", the answer needs three things: (1) isolated environment (separate account), (2) safe data (sanitized, not production), and (3) managed simulation tool (not custom scripts). AWS Fault Injection Service (FIS) is the purpose-built tool for controlled fault injection experiments.

Key Concepts:
AWS Fault Injection Service (FIS) — A fully managed service for running controlled fault injection experiments on AWS resources. Key capabilities:
Experiment templates — Define what disruptions to inject (e.g., terminate instances, block network, stress CPU)
Stop conditions — Automatically halt experiments if metrics exceed safe thresholds
Controlled blast radius — Target specific resources with tags or filters
Pre-built actions — AWS provides actions for EC2, ECS, EKS, RDS, and more

For incident response testing, FIS can simulate disruptions that trigger your detection and response workflows, validating that your plan works end-to-end without building custom simulation code.
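A hedged CLI sketch, assuming an experiment template for the drill already exists in the testing account (IDs are placeholders):
# Find the experiment template built for the response-plan drill
aws fis list-experiment-templates
# Run it; configured stop conditions halt the experiment automatically if alarms fire
aws fis start-experiment --experiment-template-id EXTexample123
# End the experiment manually once validation is complete
aws fis stop-experiment --id EXPexample456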

AWS Best Practice for IR Testing:
1. Create an isolated testing account (separate from production)
2. Replicate the environment architecture
3. Use sanitized data (no real customer data)
4. Use controlled simulations to trigger response workflows
5. Validate detection, containment, eradication, and recovery steps

Why A is correct: This follows the AWS-recommended approach: separate testing account provides isolation, sanitized data prevents real data exposure, and FIS provides a managed, repeatable way to inject faults and validate detection/response workflows. No custom code to maintain, no production risk, least operational overhead.

Why others are wrong:
B) AWS Config during an actual incident — This requires waiting for an actual incident to test the plan, which defeats the purpose of proactive testing. Config tracks configuration changes but cannot simulate incidents. You cannot validate a response plan by passively observing changes during a real breach — by then it's too late to fix gaps in the plan.
C) Production CloudTrail logs in a Lambda function — Using production logs to simulate incidents lacks the control and isolation needed for safe testing. Production logs may contain sensitive information, and replaying them through a Lambda function requires building and maintaining custom simulation code. This is not a controlled sandbox validation — it's a fragile custom solution with more operational overhead.
D) CloudFormation stack that exposes EC2 metadata — This actually creates vulnerable resources rather than simulating a controlled experiment. You'd need to develop and maintain custom CloudFormation templates, and the deployed resources are genuinely exposed — creating real security risk even in a test account. Also requires managing the lifecycle of these intentionally vulnerable resources. More overhead and more risk than using FIS.
Q22.A company needs to test an application's availability across multiple Availability Zones. The architecture uses a three-AZ deployment of Amazon EC2 instances in Auto Scaling groups behind an ALB, with an Amazon RDS MySQL database and Amazon EFS. The security team needs to perform controlled testing to verify high availability even if traffic is disrupted in one AZ.

Which solution will meet these requirements with the LEAST operational effort?
ACreate two AWS Lambda functions. Configure the first to select an AZ, back up its network ACL to DynamoDB, and modify the NACL to simulate isolation. Configure the second to restore the original NACL from DynamoDB after testing.
BCreate an AWS Step Functions state machine to read and store ALB, EC2, and RDS security group rules for the target AZ. Remove all rules, set a 15-minute wait state, then restore the rules after testing.
CCreate an applicable tag to assign to your resources in the target AZ. Configure an AWS Fault Injection Service (FIS) "AZ Availability: Power Interruption" scenario using the defined tag. Execute and end the scenario after testing.
DCreate an AWS Transit Gateway. Configure VPC routes through the Transit Gateway. Update the Transit Gateway route table for the target AZ to send all traffic to a blackhole route. Restore after testing.
✓ Correct: C. Use AWS Fault Injection Service (FIS) with the built-in AZ power interruption scenario.

How to Think About This:
When you see "test availability" + "simulate AZ disruption" + "least effort", the answer is AWS Fault Injection Service. FIS has built-in AZ-level scenarios specifically designed for this exact use case. No custom code, no manual infrastructure changes. Tag your resources, pick the scenario, run it, stop it.

Key Concepts:
AWS FIS AZ Scenarios — FIS includes pre-built scenarios for Availability Zone testing:
"AZ Availability: Power Interruption" — Simulates a power disruption to an entire AZ, affecting tagged resources
• Resources are targeted via tags — you assign a specific tag to resources in the AZ you want to disrupt
Automatic recovery — when the scenario is halted, resources recover to their original state
Stop conditions — CloudWatch alarms can automatically stop the experiment if critical metrics breach thresholds

FIS Workflow:
1. Tag resources in the target AZ
2. Select the "AZ Availability: Power Interruption" scenario
3. Execute → FIS simulates the power disruption
4. Observe: Does the ALB route traffic to healthy AZs? Does RDS failover? Does EFS remain accessible?
5. End scenario → resources recover automatically

This is the AWS-recommended approach for resilience testing and requires the absolute minimum operational effort.
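A hedged CLI sketch of steps 1 and 3 of the workflow above (the tag key, instance IDs, and the template ID created from the scenario are placeholders):
# 1. Tag the resources in the target AZ so the scenario can find them
aws ec2 create-tags --resources i-0abc123example i-0def456example --tags Key=FIS-AZ-Test,Value=true
# 3. Start the experiment created from the "AZ Availability: Power Interruption" scenario
aws fis start-experiment --experiment-template-id EXTazPowerExample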

Why others are wrong:
A) Two Lambda functions + DynamoDB — This requires: (1) writing two Lambda functions in a programming language, (2) maintaining a DynamoDB table for NACL backup/restore, (3) handling error scenarios and edge cases in code. All of this is custom development and maintenance. FIS does the same thing with zero code using a built-in scenario.
B) Step Functions + security group manipulation — Multiple problems: (1) Step Functions would need Lambda functions or other compute to make the actual API calls for security group modifications. (2) Removing ALL security group rules is dangerous — you lose network communication to the ALB, EC2 instances, AND the RDS database, not just in the target AZ. (3) Significant operational effort to build and maintain this workflow.
D) Transit Gateway + blackhole routes — This requires creating an entirely new Transit Gateway, reconfiguring all VPC routes to flow through it, and then managing blackhole route changes. This is a major infrastructure change with significant operational overhead. A Transit Gateway also adds cost. FIS provides the same AZ disruption simulation without any infrastructure changes.
Q23.A security engineer receives findings from AWS Security Hub showing suspicious port scanning activity across multiple Amazon EC2 instances in several AWS accounts in AWS Organizations. The engineer needs to:

• Quickly determine the scope of the potential security event
• Identify all affected resources
• Understand the timeline of the activity

Which solution will meet these requirements?
AUse AWS Config to query resource configuration history and create rules to detect instances with open ports. Correlate the findings with AWS CloudTrail logs to identify unauthorized API calls.
BUse AWS Systems Manager to run compliance scans against all EC2 instances and report on instances with noncompliant security groups. Use Amazon EventBridge to monitor for subsequent port scanning alerts.
CUse the Amazon GuardDuty dedicated dashboard to filter findings by type. Export all findings to an Amazon S3 bucket. Use Amazon Athena to run queries to identify patterns and affected resources.
DUse Amazon Detective to create a behavior graph. Analyze relationships between the affected resources and investigate the timeline of activities across the involved accounts.
✓ Correct: D. Use Amazon Detective to create a behavior graph for cross-account investigation.

How to Think About This:
When you see "investigate" + "scope" + "timeline" + "relationships between resources" + "cross-account", the answer is always Amazon Detective. Detective is purpose-built for security investigations. It automatically maps relationships, builds timelines, and visualizes attack paths.

Remember the security trifecta:
GuardDuty = "Something suspicious happened" (detection)
Security Hub = "Here's everything in one place" (aggregation)
Detective = "Let me trace exactly what happened" (investigation)

Key Concepts:
Amazon Detective Behavior Graphs — Detective automatically collects and analyzes data from three sources:
CloudTrail logs — API call history (who did what)
VPC Flow Logs — Network traffic patterns (who connected to whom)
GuardDuty findings — Threat intelligence correlations

From these sources, Detective builds a behavior graph that shows:
Entity relationships — connections between EC2 instances, IAM users, IP addresses, and accounts
Timeline of events — when each activity occurred and how events relate to each other
Anomalous patterns — deviations from normal behavior for each entity
Cross-account scope — traces activity across multiple accounts in an organization

For this port scanning scenario, Detective would show: which instances were scanned, from which source IPs, at what times, whether the scanning IPs connected to other resources, and if any instances exhibited unusual behavior after being scanned.

Why others are wrong:
A) AWS Config + CloudTrail correlation — Config tracks resource configuration state (e.g., "this security group has port 22 open") but doesn't provide security intelligence or visual relationship mapping. Manually correlating Config history with CloudTrail logs is time-consuming and doesn't provide the timeline-based analysis and entity visualization needed for rapid incident investigation.
B) Systems Manager compliance scans — Systems Manager manages infrastructure and can check compliance, but it's not designed for security incident investigation. It cannot map relationships between resources involved in a security event or provide timeline analysis. Compliance scans tell you "is this configured correctly?" not "what happened during this attack?"
C) GuardDuty dashboard + S3 + Athena — GuardDuty detects threats but its dashboard is not designed for complex cross-account investigations with relationship mapping. Exporting to S3 and querying with Athena introduces delays and requires writing custom SQL queries. Detective does all of this automatically with prebuilt visualizations — no data export, no custom queries needed.
Q24.The access key of an IAM user is compromised. Multiple new Amazon EC2 instances have been launched unexpectedly. A security engineer must immediately minimize the impact and isolate affected resources for offline forensic analysis. (SELECT THREE)

Which combination of actions should the security engineer take, based on AWS recommendations?
AUse SSH to access the new instances. Copy all logs to an Amazon S3 bucket. Terminate the instances.
BDeactivate the compromised IAM access key for the IAM user.
CDelete the compromised IAM access key for the IAM user.
DCreate Amazon EBS snapshots for the EBS volumes attached to the new instances. Isolate the instances.
EUse AWS Config to evaluate the actions the compromised IAM user took under its resource timeline.
FUse AWS CloudTrail to discover any additional actions that were performed by using the compromised access key.
✓ Correct: B, D, and F. Deactivate the key + EBS snapshots & isolate + CloudTrail investigation.

How to Think About This:
AWS incident response follows three phases: Contain → Preserve → Investigate. Map each correct answer to a phase:
Contain: Deactivate the access key (B) — stop the attacker immediately
Preserve: EBS snapshots + isolate (D) — save forensic evidence
Investigate: CloudTrail analysis (F) — determine full scope of compromise

Key Concepts:
Deactivate vs. Delete Access Keys (B vs. C) — This is a critical distinction:
Deactivate (correct) — The key is disabled immediately but still exists. If applications break because they were using this key, you can temporarily reactivate it to remediate. AWS recommends deactivate first, test, then delete later.
Delete (incorrect) — Permanent and irreversible. If any application was legitimately using this key, it breaks with no rollback option. In the chaos of incident response, you don't want to add more fires.

EBS Snapshots for Forensics (D) — Creating EBS snapshots captures a point-in-time copy of all disk data. You can then:
1. Create new EBS volumes from the snapshots
2. Attach those volumes to a forensic EC2 instance in an isolated VPC
3. Analyze malware, log files, and attacker artifacts offline
The original instances are isolated (e.g., move to a security group with no inbound/outbound rules) to prevent further damage while preserving the running state.

CloudTrail for Investigation (F) — CloudTrail records all API calls. By filtering event history for the compromised access key ID, you can discover:
• What other resources the attacker created, modified, or accessed
• Whether they created additional access keys or IAM users (persistence)
• Whether they accessed S3 data, modified security groups, or changed IAM policies
CloudTrail maintains 90 days of management event history by default.
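A hedged CLI sketch of the contain, preserve, and investigate steps (user name, key ID, volume and instance IDs, and the quarantine security group are placeholders):
# Contain (B): deactivate, don't delete, the compromised key
aws iam update-access-key --user-name compromised-user --access-key-id AKIAEXAMPLEKEYID --status Inactive
# Preserve (D): snapshot the volumes and isolate the instance in a no-rules security group
aws ec2 create-snapshot --volume-id vol-0abc123example --description "Forensic copy - compromised instance"
aws ec2 modify-instance-attribute --instance-id i-0abc123example --groups sg-0quarantine0example
# Investigate (F): list every API call made with the compromised key
aws cloudtrail lookup-events --lookup-attributes AttributeKey=AccessKeyId,AttributeValue=AKIAEXAMPLEKEYID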

Why others are wrong:
A) SSH in, copy logs, terminate — Two problems: (1) Manual SSH access during incident response is not a best practice — AWS recommends programmatic automation tools (SSM, Lambda) to avoid accidentally contaminating evidence or triggering attacker tripwires. (2) Terminating instances destroys volatile memory evidence (RAM, running processes, network connections). You should isolate, not terminate.
C) Delete the access key — Deleting is irreversible. If legitimate applications depend on this key, they break permanently with no rollback. AWS explicitly recommends deactivating first, testing impact on applications, then deleting after confirming no legitimate dependencies.
E) AWS Config resource timeline — Config tracks resource configuration changes (e.g., "security group rules changed at 3pm"). It does NOT track actions taken by IAM users. The resource timeline shows what happened to a resource, not what a user did. For tracking user actions, you need CloudTrail, not Config.
Q25.After enabling the restricted-common-ports AWS Config rule, a security engineer receives a noncompliant finding for a security group listening on port 3389 from 0.0.0.0/0. The security group is on a production EC2 instance with an instance profile (IAM role with PowerUserAccess) in a public subnet with a public IPv4 address.

The engineer needs to determine the scope: when the changes were made, and whether any additional resources were created, modified, or deleted. (SELECT TWO)

Which combination of steps will meet these requirements?
AUse AWS Security Hub to identify when critical changes were made. Identify the principals that made the changes and all API calls that the instance profile made.
BUse the AWS Config resource timeline to analyze the resource configuration for the EC2 instance, security group, and IAM role. Note when critical changes were made.
CUse AWS Systems Manager Compliance to analyze the resource configuration for the EC2 instance, security group, and IAM role. Note when critical changes were made.
DUse AWS X-Ray traces to analyze the resource configuration for the EC2 instance, security group, and IAM role. Note when critical changes were made.
EUse AWS CloudTrail to identify when critical changes were made. Identify the principals that made the changes, any other changes they made, and all API calls that the instance profile made.
✓ Correct: B and E. AWS Config resource timeline (when did resources change?) + CloudTrail (who changed them and what else did they do?).

How to Think About This:
This question tests the Config + CloudTrail partnership. They answer different questions and work together:
Config → "What changed on this resource and when?" (resource-centric timeline)
CloudTrail → "Who made the change and what else did they do?" (principal-centric audit)

Use Config to identify the time frame, then use CloudTrail to identify the who and what else.

Key Concepts:
AWS Config Resource Timeline (B) — Provides a visual history of configuration changes for a specific resource over time. For each change, it shows:
What changed — e.g., security group rule added allowing port 3389 from 0.0.0.0/0
When it changed — timestamp of the configuration change
Compliance status — was the resource compliant before and after the change?
Config can also pull the related CloudTrail event directly from the timeline, linking the resource change to the API call that caused it. This is the fastest way to establish the time frame for the investigation.

AWS CloudTrail (E) — Once you know the time frame from Config, CloudTrail lets you dig deeper:
Who made the changes (IAM principal, access key, source IP)
What else they did (filter by the same principal to see all their API calls)
Instance profile activity — What API calls did the EC2 instance's role make? Did it exfiltrate data to S3? Create new IAM users? Modify other security groups?
CloudTrail retains 90 days of management events by default, and indefinitely with a custom trail to S3.

The Investigation Flow:
Config (identify WHEN resources changed) → CloudTrail (identify WHO changed them and WHAT ELSE they did)
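A hedged CLI sketch of that flow (the security group ID and principal name are placeholders):
# Config: when did the security group's configuration change?
aws configservice get-resource-config-history \
  --resource-type AWS::EC2::SecurityGroup --resource-id sg-0abc123example
# CloudTrail: who made the change, and what else did that principal do?
aws cloudtrail lookup-events --lookup-attributes AttributeKey=ResourceName,AttributeValue=sg-0abc123example
aws cloudtrail lookup-events --lookup-attributes AttributeKey=Username,AttributeValue=suspected-principal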

Why others are wrong:
A) AWS Security Hub — Security Hub aggregates findings from other services (GuardDuty, Config, Inspector) but does NOT provide resource change history or API audit trails. It can tell you "there's a noncompliant security group" but cannot tell you when it was changed, who changed it, or what else that principal did. Security Hub is for alerting, not investigation.
C) Systems Manager Compliance — SSM Compliance checks whether managed instances meet compliance standards (patches, configurations). It does not audit configuration changes over time. It can tell you "this instance is non-compliant now" but not "when did it become non-compliant and who caused it."
D) AWS X-Ray traces — X-Ray is a distributed application tracing tool for debugging performance issues in microservices. It traces HTTP requests through application components. It has zero security audit capability — it cannot track IAM changes, security group modifications, or API calls. X-Ray answers "why is my app slow?" not "who modified my security group."
Q26.A security engineer discovers suspicious API calls from a compromised IAM role. The engineer must immediately stop all current activity that uses the IAM role.

Which solution provides the QUICKEST way to stop all current activity from the compromised IAM role?
AUpdate the trust policy for the compromised IAM role to include a condition that denies access based on the source IP address.
BApply a deny all IAM policy to the compromised IAM role. Revoke all temporary security credentials. Block the source IP address by using a network ACL.
CUse IAM Access Analyzer to analyze the trust and permission policies for the compromised IAM role to identify potential misconfigurations.
DCreate a new IAM role with the same permissions. Instruct all legitimate users to switch to the new role immediately.
✓ Correct: B. Apply deny-all policy + revoke temporary credentials + block source IP via NACL.

How to Think About This:
The key phrase is "immediately stop ALL CURRENT activity." This means you need to kill existing active sessions, not just prevent new ones. This requires a three-pronged approach:
1. Deny all — blocks any further API calls from the role
2. Revoke credentials — invalidates existing temporary sessions
3. Block IP via NACL — network-level block prevents any access from the attacker's IP

Key Concepts:
Why Deny-All Policy Alone Isn't Enough — When someone assumes an IAM role, they receive temporary security credentials (access key + secret key + session token) that are valid for the session duration (up to 12 hours). Even if you attach a deny-all policy, previously issued temporary credentials may still be cached and usable until they expire. That's why you must also revoke the credentials.

Revoking Temporary Credentials — AWS provides a way to revoke all active sessions for a role. In the IAM console: Role → "Revoke active sessions." This adds an inline policy with an aws:TokenIssueTime condition that denies all actions for tokens issued before a specific timestamp. Any existing session tokens become immediately invalid.
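The "Revoke active sessions" action attaches an inline policy along these lines (a sketch; the timestamp shown is an example and would be set to the moment of revocation):
{"Version":"2012-10-17","Statement":[{"Effect":"Deny","Action":["*"],"Resource":["*"],"Condition":{"DateLessThan":{"aws:TokenIssueTime":"2024-01-15T00:00:00Z"}}}]}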

Network ACL for IP Blocking — NACLs are stateless and take effect immediately. By adding a deny rule for the attacker's source IP, you block all network traffic from that IP to any resource in the subnet — even if they somehow obtain new credentials.

The Complete Containment:
Deny-all policy (block API permissions) + Revoke sessions (kill active tokens) + NACL (block network access)

Why others are wrong:
A) Update trust policy to deny source IP — Trust policies control who can assume the role (new sessions). They do NOT affect existing active sessions. An attacker with already-issued temporary credentials continues to have access. Modifying the trust policy only prevents new role assumptions — it doesn't revoke current ones.
C) IAM Access Analyzer — Access Analyzer is a detective/advisory tool that identifies potential misconfigurations. It cannot stop active sessions or revoke credentials. During an active compromise, you need immediate containment, not analysis. Access Analyzer is useful after containment to understand how the misconfiguration occurred.
D) Create a new role and switch users — This does nothing to stop the attacker. The compromised role still exists with active sessions. Creating a replacement role helps legitimate users transition later, but it's a recovery step, not a containment step. The malicious actor maintains full access through the original role's active sessions.
Q27.A company is deploying a new web application on Amazon EC2 instances. Based on its other web applications, the company anticipates frequent DDoS attacks. (SELECT TWO)

Which solutions can the company use to protect its application?
AAssociate the EC2 instances with a security group that blocks traffic from malicious IP addresses.
BUse an Elastic Load Balancing (ELB) Application Load Balancer and Auto Scaling group to scale to absorb application layer traffic.
CUse Amazon Inspector on the EC2 instances to examine incoming traffic and discard malicious traffic.
DUse Amazon CloudFront and AWS WAF to prevent malicious traffic from reaching the application.
EEnable Amazon GuardDuty to block malicious traffic from reaching the application.
✓ Correct: B and D. ALB + Auto Scaling (absorb Layer 3/4 attacks) + CloudFront + WAF (block Layer 7 attacks).

How to Think About This:
DDoS protection requires defense at two layers:
Infrastructure layer (Layer 3/4) — SYN floods, UDP reflection → ALB auto-scales to absorb
Application layer (Layer 7) — HTTP floods, slow loris → CloudFront + WAF filter and block

Together, B and D cover both layers of DDoS defense.

Key Concepts:
ALB + Auto Scaling (B) — ALBs provide built-in DDoS protection at the infrastructure layer:
• ALBs only accept well-formed HTTP/HTTPS requests — SYN floods and UDP reflection attacks are automatically dropped because they aren't valid HTTP
• When an ALB detects attack traffic, it automatically scales to absorb the load — and this scaling does NOT affect your bill
• Auto Scaling behind the ALB ensures your EC2 instances also scale to handle legitimate traffic during an attack
• ALBs distribute traffic across multiple AZs, making single-point-of-failure attacks ineffective

CloudFront + AWS WAF (D) — Edge-level protection for application layer attacks:
CloudFront operates from 400+ edge locations worldwide. DDoS traffic is absorbed at the edge before it reaches your origin servers. CloudFront also caches content, so many attack requests never reach your infrastructure.
AWS WAF filters malicious requests based on rules: rate limiting (block IPs exceeding X requests/second), IP reputation lists, SQL injection patterns, and custom rules. WAF stops application-layer attacks (HTTP floods, bot traffic, scraping) that the ALB can't distinguish from legitimate requests.
• Together, they form the AWS-recommended "edge defense" for web applications.

AWS DDoS Defense Architecture:
Client → CloudFront (edge caching + absorption) → WAF (Layer 7 filtering) → ALB (Layer 3/4 protection + routing) → Auto Scaling EC2 (elastic capacity)

Why others are wrong:
A) Security groups blocking malicious IPs — Security groups are allow-list based — they specify what traffic IS allowed, and everything else is denied by default. You cannot add "deny" rules to a security group. During a DDoS attack, the volume and source of malicious IPs changes constantly — security groups cannot dynamically respond to this. Also, security groups don't protect against volumetric attacks that overwhelm the network before reaching the instance.
C) Amazon Inspector to examine traffic — Inspector is a vulnerability scanning service that assesses EC2 instances and container images for software vulnerabilities (CVEs) and unintended network exposure. It does NOT examine or filter live network traffic. Inspector answers "does this instance have known vulnerabilities?" not "is this traffic malicious?"
E) GuardDuty to block traffic — GuardDuty is a detection service, not a prevention service. It can detect suspicious activity (port scanning, credential compromise, unusual API calls) but it cannot block traffic. GuardDuty generates findings that you must act on — it doesn't take action itself. For blocking, you need WAF, NACLs, or security groups.
Q28.A security engineer is designing proactive edge security controls. Match each anticipated threat to the correct security strategy. (SELECT ALL FOUR)

A) Brute force attacks on a login page → Set up rate-based rules in AWS WAF
B) Man-in-the-middle attacks intercepting data in transit → Enforce HTTPS-only with ACM
C) DDoS attacks targeting network/transport layers → Implement AWS Shield Advanced
D) SQL injection and cross-site scripting → Configure AWS WAF with managed rule groups

Select all four to confirm you understand each mapping.
ABrute force login attacks → WAF rate-based rules (limit requests per IP)
BMan-in-the-middle attacks → HTTPS-only with ACM (encrypt data in transit)
CDDoS (Layer 3/4) attacks → AWS Shield Advanced (enhanced DDoS protection)
DSQLi and XSS attacks → WAF managed rule groups (preconfigured OWASP rules)
✓ All four mappings are correct. Each threat maps to a specific edge security strategy.

The Edge Security Matrix — Memorize This:

1. Brute Force Attacks → WAF Rate-Based Rules
What it does: Automatically counts requests from each IP address and blocks IPs that exceed a threshold you define (e.g., 100 login requests per 5 minutes). When the request rate drops below the threshold, the IP is automatically unblocked.
Why it works: Brute force attacks generate high-volume requests from individual IPs. Rate-based rules throttle these automated attacks while allowing legitimate users through. You can scope rules to specific URLs (like /login) for precision.
Key detail: Rate-based rules are different from regular WAF rules. Regular rules match patterns (like SQL injection strings). Rate-based rules count request volume per IP — they're specifically designed for volumetric abuse from single sources.
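A hedged sketch of a rate-based rule scoped to a hypothetical /login path (name, priority, limit, and metric name are illustrative):
{"Name":"login-rate-limit","Priority":1,"Action":{"Block":{}},
"Statement":{"RateBasedStatement":{"Limit":100,"AggregateKeyType":"IP",
  "ScopeDownStatement":{"ByteMatchStatement":{"SearchString":"/login","FieldToMatch":{"UriPath":{}},
    "PositionalConstraint":"STARTS_WITH","TextTransformations":[{"Priority":0,"Type":"NONE"}]}}}},
"VisibilityConfig":{"SampledRequestsEnabled":true,"CloudWatchMetricsEnabled":true,"MetricName":"login-rate-limit"}}
The limit of 100 mirrors the "100 login requests per 5 minutes" example above; an IP exceeding it is blocked until its request rate drops back under the threshold.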

2. Man-in-the-Middle (MITM) → HTTPS with ACM
What it does: AWS Certificate Manager (ACM) provides free SSL/TLS certificates with automatic renewal. Enforcing HTTPS-only ensures all client-server communication is encrypted in transit.
Why it works: MITM attacks intercept unencrypted traffic between client and server to steal credentials, session tokens, or sensitive data. TLS encryption makes intercepted data unreadable. Even if an attacker captures the packets, they can't decrypt the content without the private key.
Key detail: ACM handles certificate provisioning and renewal automatically — no manual certificate management, no expired certs causing outages.

3. DDoS (Layer 3/4) → AWS Shield Advanced
What it does: Provides enhanced DDoS protection beyond the free Shield Standard. Includes: real-time attack visibility, 24/7 DDoS Response Team (DRT) access, cost protection (credits for scaling during attacks), and advanced detection for volumetric, state-exhaustion, and application-layer attacks.
Why it works: Layer 3/4 DDoS attacks (SYN floods, UDP amplification, DNS reflection) require infrastructure-level mitigation that goes beyond basic ALB/CloudFront protection. Shield Advanced provides dedicated mitigation capacity and expert support.
Key detail: Shield Standard is free and automatic for all AWS customers. Shield Advanced is paid ($3,000/month) and provides the enhanced protection, DRT access, and cost protection.

4. SQLi & XSS → WAF Managed Rule Groups
What it does: Pre-built rule sets that detect and block common web vulnerabilities including the OWASP Top 10: SQL injection, cross-site scripting, local file inclusion, remote code execution, and more. AWS and third-party vendors provide managed rule groups that are regularly updated for emerging threats.
Why it works: SQL injection and XSS exploit predictable patterns in HTTP requests (e.g., ' OR 1=1-- in form fields, <script> tags in input). Managed rule groups contain regex patterns that match these attack signatures and block the requests before they reach your application.
Key detail: You don't write these rules yourself. AWS provides the AWS Managed Rules (free with WAF) and marketplace vendors offer specialized rule groups. They update automatically as new attack vectors are discovered.
Q29.An ecommerce company identifies significant bot traffic on its website, specifically on the login and checkout pages. Some bot traffic comes from a specific country. The website is served by an Amazon CloudFront distribution. The company wants to implement controls to detect and mitigate advanced bots.

Which solution will meet this requirement?
AEnable geo-restriction in CloudFront. Use CloudFront Functions to add additional header checks for the remaining traffic.
BSet up an AWS WAF web ACL with rate-based rules. Enable geo-restriction in CloudFront. Associate the WAF web ACL with the CloudFront distribution.
CConfigure Amazon CloudWatch to monitor CloudFront access logs for bot patterns. Create an AWS Lambda function to analyze patterns and dynamically add IP addresses to AWS WAF block lists.
DSet up an AWS WAF web ACL with an AWS managed rule for bot control for targeted bots. Associate the WAF web ACL with the CloudFront distribution.
✓ Correct: D. AWS WAF Bot Control managed rule group for targeted bots.

How to Think About This:
When you see "advanced bots" + "login/checkout pages" + "detect and mitigate", the answer is AWS WAF Bot Control. Key word is "advanced" — simple rate limiting won't work against sophisticated bots that rotate IPs and mimic human behavior. You need ML-based behavioral detection.

Key Concepts:
AWS WAF Bot Control — A specialized managed rule group with two tiers:
Common bots — Detects self-identifying bots (scrapers, crawlers) using request signatures
Targeted bots (what this question asks for) — Uses advanced techniques to detect sophisticated bots that actively try to evade detection:
  — Browser fingerprinting — verifies the client is a real browser
  — Client-side JavaScript interrogation — challenges the client to execute JS (bots often can't)
  — Machine learning — analyzes behavioral patterns across requests
  — AWS global threat intelligence — automatically updated with new bot signatures

Bot Control is specifically designed to protect login pages (credential stuffing, account takeover) and checkout pages (inventory hoarding, card testing). It allows legitimate users through while blocking automated threats.
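A hedged sketch of the web ACL rule that turns on the targeted inspection level (rule name, priority, and metric name are placeholders):
{"Name":"bot-control-targeted","Priority":0,"OverrideAction":{"None":{}},
"Statement":{"ManagedRuleGroupStatement":{"VendorName":"AWS","Name":"AWSManagedRulesBotControlRuleSet",
  "ManagedRuleGroupConfigs":[{"AWSManagedRulesBotControlRuleSet":{"InspectionLevel":"TARGETED"}}]}},
"VisibilityConfig":{"SampledRequestsEnabled":true,"CloudWatchMetricsEnabled":true,"MetricName":"bot-control-targeted"}}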

Why Other Approaches Fail Against Advanced Bots:
Geo-restriction — Blocks ALL traffic from a country, including legitimate customers. Bots can use VPNs to appear from other countries. Too broad, too easy to circumvent.
Rate-based rules — Advanced bots use distributed IP rotation (thousands of IPs, each making few requests). No single IP exceeds the rate threshold, so rate limiting is ineffective.
IP block lists — Reactive, not proactive. There's always a delay between detection and blocking. Sophisticated bots rotate IPs faster than you can block them.

Why others are wrong:
A) Geo-restriction + CloudFront Functions — Geo-restriction blocks all traffic from a country — including legitimate customers, which hurts business. CloudFront Functions can do basic header checks but cannot perform sophisticated bot detection (no ML, no browser fingerprinting, no JS challenges). Bots using VPNs bypass geo-restriction entirely.
B) WAF rate-based rules + geo-restriction — Rate-based rules count requests per IP. Advanced bots distribute attacks across thousands of IPs, each staying below the rate threshold. Combined with geo-restriction blocking legitimate users, this solution both misses sophisticated bots and impacts real customers.
C) CloudWatch + Lambda + dynamic IP blocking — This is a reactive system with significant delay: logs must be generated, collected, analyzed, and then WAF updated. During that delay, bots continue operating. IP-based blocking is also ineffective against bots that rotate IPs frequently. WAF Bot Control provides real-time, proactive detection with zero delay.
Q30.A company handling sensitive information is targeted by network-based attacks aimed at stealing data and disrupting internet services. The security team needs to:

• Block automated external scripts (bots) accessing the websites
• Block IP addresses from a specific part of a country (not the whole country). (SELECT TWO)

Which combination of steps will meet these requirements with the LEAST administrative overhead?
ASubscribe to AWS Shield Advanced. Configure Shield Advanced to protect the company's API Gateway API and ALBs.
BAdd the AWS managed WAF IP reputation rule group. Add the source IP addresses of the part of the country that needs to be blocked.
CAdd the AWS managed AWS WAF Bot Control rule group to a web ACL. Configure the WAF web ACL to protect the CloudFront distribution used for the API Gateway API and public websites.
DCreate firewall rules in AWS Firewall Manager. Add identifiers to the firewall geolocation rule for the parts of the country that need to be blocked.
ECreate an Amazon CloudFront distribution. Use the CloudFront geolocation header to invoke a Lambda@Edge function to drop traffic from that part of the country.
✓ Correct: C and E. WAF Bot Control (block bots) + CloudFront geolocation headers with Lambda@Edge (block specific region within a country).

How to Think About This:
Two separate requirements need two separate solutions:
"Block automated scripts" → WAF Bot Control (C) — same concept as Q29
"Block specific PART of a country" → CloudFront granular geolocation headers (E) — this is the tricky part

The key insight: standard geo-restriction (CloudFront or WAF) works at the country level only. To block a specific region within a country, you need CloudFront's granular geolocation headers which include sub-country detail.

Key Concepts:
WAF Bot Control (C) — The managed rule group detects and blocks automated scripts (bots) that try to access websites. Bot Control uses browser fingerprinting, JS challenges, and ML-based behavioral analysis. This directly addresses the "block automated external scripts" requirement. Associate the WAF web ACL with the CloudFront distribution to protect both the API Gateway and public websites.

CloudFront Granular Geolocation Headers (E) — CloudFront adds geolocation headers to requests before forwarding to the origin. The key headers:
CloudFront-Viewer-Country — Country code (e.g., US, DE, JP)
CloudFront-Viewer-Country-Region — Sub-country region (e.g., US-TX for Texas, US-CA for California)
CloudFront-Viewer-City — City name

By using CloudFront-Viewer-Country-Region in a Lambda@Edge function, you can drop requests from a specific state/province/region without blocking the entire country. This is the only AWS-native way to do sub-country geo-blocking.

Lambda@Edge Flow:
Client request → CloudFront adds geo headers → Lambda@Edge checks CloudFront-Viewer-Country-Region → If matches blocked region, return 403 → Otherwise, forward to origin

Why others are wrong:
A) Shield Advanced — Shield Advanced provides enhanced DDoS protection, but the question describes bot attacks and regional blocking — not DDoS. Shield Standard (free, automatic) already provides basic DDoS protection. Shield Advanced at $3,000/month is overkill for this scenario and doesn't address bot detection or sub-country geo-blocking.
B) WAF IP reputation rule group + manual IPs — The AWS managed IP reputation rule group is managed by AWS — you cannot modify it or add your own IPs to it. It contains known malicious IPs curated by AWS threat intelligence. You cannot use it to block IPs from a specific geographic region.
D) Firewall Manager geolocation rules — Firewall Manager manages other firewall services (WAF, Network Firewall, Shield) — it doesn't create its own firewall rules independently. Also, the services it manages do not natively support sub-country geolocation. Standard geo-blocking only works at the country level, which is too broad for this requirement.
Q31.A security team needs to improve a daily compliance reporting process. The report must show which on-premises servers and EC2 instances are missing the latest security patches. All servers and instances have agents for CloudWatch, Systems Manager, and Inspector. The team must bring all into compliance within 24 hours.

Which solution will meet these requirements with the LEAST operational overhead?
AUse Amazon QuickSight and AWS CloudTrail to generate the report. Redeploy all noncompliant instances and servers by using an AMI with the latest patches.
BUse Systems Manager Patch Manager to generate the report of all noncompliant servers and instances. Use Patch Manager to install the missing patches.
CUse Systems Manager Patch Manager to generate the report. Redeploy all noncompliant instances and servers by using an AMI with the latest patches.
DUse AWS Trusted Advisor to generate the report. Use Systems Manager Patch Manager to install the missing patches.
✓ Correct: B. Systems Manager Patch Manager for both reporting AND patching.

How to Think About This:
When you see "patch compliance" + "report" + "install patches" + "on-premises + EC2" + "least overhead", the answer is SSM Patch Manager for everything. It's a single service that does both: scan/report AND remediate. No additional services needed.

Key Concepts:
Systems Manager Patch Manager — A fully managed patching service that handles the complete patch lifecycle:
Scan — Scans instances against patch baselines to identify missing patches
Report — Generates compliance reports showing which instances are noncompliant
Install — Automatically installs missing patches on demand or on a schedule
Supports both EC2 AND on-premises — Any server with the SSM Agent installed (managed node) can be patched, regardless of where it runs

Patch Manager Components:
Patch Baselines — Define which patches are approved/rejected (e.g., "auto-approve critical patches after 7 days")
Patch Groups — Group instances by tag for targeted patching (e.g., "production" vs "development")
Maintenance Windows — Schedule when patching occurs (e.g., "every Sunday 2-4 AM")

One service handles scan, report, and remediate — that's least operational overhead.
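A minimal CLI sketch of that flow (the tag key/value and instance targeting are placeholders, not from the question; in practice the install step would run inside a maintenance window):
# Summarize patch compliance across all managed nodes (EC2 and on-premises)
aws ssm list-compliance-summaries
# Install missing patches on every managed node carrying the placeholder tag
aws ssm send-command --document-name "AWS-RunPatchBaseline" --parameters '{"Operation":["Install"]}' --targets Key=tag:Environment,Values=production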

Why others are wrong:
A) QuickSight + CloudTrail + AMI redeployment — Three problems: (1) CloudTrail logs API calls, not patch compliance status — it can't report which servers are missing patches. (2) QuickSight is a BI dashboarding tool — it can visualize data but doesn't have built-in patch compliance scanning. (3) AMI redeployment is slow for EC2 and doesn't work for on-premises servers at all. You can't re-image an on-premises server with an AMI.
C) Patch Manager report + AMI redeployment — Correctly uses Patch Manager for reporting, but AMI redeployment for remediation is problematic: it's slow (requires instance replacement), doesn't work for on-premises servers, and is far more operational overhead than simply having Patch Manager install the patches directly. Why redeploy when you can patch in place?
D) Trusted Advisor + Patch Manager — Trusted Advisor provides high-level best practice recommendations (cost, performance, security, fault tolerance). It cannot generate patch compliance reports for specific servers and instances. Trusted Advisor doesn't scan for missing OS patches — that's Patch Manager's job. The remediation half (Patch Manager) is correct, but the reporting half (Trusted Advisor) is wrong.
Q32.A company does not allow the permanent storage of SSH keys on Amazon Linux 2 EC2 instances. Three employees with IAM user accounts require interactive command line access to make critical updates to an EC2 instance.

How can a security engineer provide the appropriate access?
AUse AWS Systems Manager Inventory to select and connect to the EC2 instance. Provide the IAM user accounts with permissions to use Inventory.
BUse AWS Systems Manager Run Command to open an SSH connection to the EC2 instance. Provide the IAM user accounts with permissions to use Run Command.
CUse AWS Systems Manager Session Manager. Provide the IAM user accounts with the permissions to use Session Manager.
DUse Amazon EC2 Instance Connect to connect to the EC2 instance. Provide the IAM user accounts with access to EC2 Instance Connect.
✓ Correct: C. AWS Systems Manager Session Manager.

How to Think About This:
Two key requirements: (1) no SSH keys and (2) interactive command line access. Session Manager is the only SSM feature that provides both — a browser-based interactive shell without any SSH keys, controlled entirely through IAM policies.

Key Concepts:
Session Manager — Provides secure, interactive command line access to EC2 instances without:
• SSH keys on the instance
• Open inbound ports (no port 22 needed)
• Bastion hosts or jump boxes

How It Works:
• SSM Agent (pre-installed on Amazon Linux 2) communicates outbound to the SSM service
• Users authenticate via IAM — no SSH keys or passwords needed
• Access is granted through IAM policies (e.g., ssm:StartSession)
• Sessions are interactive — full shell access, just like SSH
• All session activity can be logged to S3 or CloudWatch (as we learned in Q2)

SSM Feature Comparison:
Feature | Interactive? | Purpose
Session Manager | Yes | Interactive shell access
Run Command | No | Execute commands remotely (fire-and-forget)
Inventory | No | Collect metadata (installed software, patches)
Patch Manager | No | Scan and install patches

Why C is correct: Session Manager provides interactive command line access controlled by IAM policies. No SSH keys are stored on the instance. SSM Agent is pre-installed on Amazon Linux 2, so no additional setup is needed. The three employees just need IAM permissions for ssm:StartSession.
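A sketch of the day-to-day workflow, assuming the Session Manager plugin is installed on the workstation and the instance ID is a placeholder:
# Start an interactive shell on the instance; allowed only if the caller's IAM policy grants ssm:StartSession
aws ssm start-session --target i-0123456789abcdef0
# The IAM policy can scope ssm:StartSession to specific instance ARNs or tags for least privilege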

Why others are wrong:
A) SSM Inventory — Inventory collects metadata from managed nodes (installed applications, OS version, network config). It does NOT provide any kind of connection or command line access. It's a data collection tool, not an access tool.
B) SSM Run Command — Run Command executes commands remotely but is not interactive. You submit a command, it runs, and you get the output back. You cannot type commands in real-time or interact with the shell. The question specifically requires "interactive command line access."
D) EC2 Instance Connect — Instance Connect pushes a temporary SSH public key to the instance metadata for 60 seconds, then you SSH using that key. However, this still requires SSH key infrastructure on the instance. The question says the company "does not allow the permanent storage of SSH keys" — while Instance Connect keys are temporary, the mechanism still relies on SSH key-based authentication on the instance.
Q33.A company needs to implement an IPS within a VPC to protect against malicious traffic. Application servers are in public subnets across each Availability Zone. The security team must implement north-south traffic inspection to detect and prevent threats from internet traffic.

Which solution will meet these requirements?
ACreate Gateway Load Balancer (GWLB) endpoints in public subnets and connect to a third-party IPS service. Update public subnet route tables to send traffic through the GWLB endpoints.
BDeploy Application Load Balancers (ALBs) in public subnets. Configure AWS WAF with IPS rules. Place application servers behind the ALBs.
CConfigure AWS Network Firewall endpoints with IPS rules in each public subnet. Update public subnet route tables to send traffic through firewall endpoints.
DConfigure AWS Network Firewall endpoints with IPS rules in new dedicated subnets for each Availability Zone. Update public subnet route tables to send traffic through firewall endpoints. Configure new subnet route tables to route traffic locally and 0.0.0.0/0 to the internet gateway.
✓ Correct: D. AWS Network Firewall endpoints in dedicated subnets (not in the public subnets).

How to Think About This:
When you see "IPS" + "north-south traffic" + "VPC", the answer is AWS Network Firewall. The critical detail is where the firewall endpoints go: they must be in dedicated subnets, never in the same subnets as the application servers or directly in public subnets. This provides proper security isolation and symmetric routing.

Key Concepts:
North-South Traffic — Traffic flowing between the internet and your VPC (in/out). This is distinct from east-west traffic (traffic between resources within the VPC). IPS for north-south traffic inspects everything entering and leaving your VPC through the internet gateway.

AWS Network Firewall — A managed IPS/IDS service for VPCs that provides:
Stateful packet inspection — deep packet inspection beyond Layer 4
Intrusion Prevention (IPS) — detect and block known attack signatures
Protocol detection — identify protocols regardless of port
Suricata-compatible rules — use open-source IPS rule format

Why Dedicated Subnets Are Required:
• Firewall endpoints must be in their own subnets with dedicated route tables
• This ensures symmetric routing — both inbound and outbound traffic pass through the firewall
• Placing endpoints in public subnets creates routing loops and exposes the IPS directly to internet traffic
• The dedicated subnet route table routes local VPC traffic normally and 0.0.0.0/0 to the internet gateway
• The public subnet route table routes 0.0.0.0/0 through the firewall endpoints

Correct Architecture:
Internet → IGW → Firewall Subnet (Network Firewall endpoint) → Public Subnet (application servers)
Application servers → Firewall Subnet (inspected) → IGW → Internet
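A sketch of the outbound routing half described above (route table, endpoint, and gateway IDs are placeholders; symmetric inbound inspection additionally relies on an ingress route table associated with the internet gateway):
# Public (application) subnet: send internet-bound traffic to the firewall endpoint
aws ec2 create-route --route-table-id rtb-0publicsubnet --destination-cidr-block 0.0.0.0/0 --vpc-endpoint-id vpce-0firewallendpoint
# Dedicated firewall subnet: send inspected traffic on to the internet gateway
aws ec2 create-route --route-table-id rtb-0firewallsubnet --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0example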

Why others are wrong:
A) GWLB endpoints in public subnets — GWLB can deploy third-party IPS appliances, but placing endpoints in public subnets creates routing loops and security vulnerabilities. GWLB endpoints should be in dedicated subnets. Also, using a third-party IPS via GWLB adds more operational overhead than the managed Network Firewall service.
B) ALB + AWS WAF — WAF operates at Layer 7 (HTTP/HTTPS only). It is NOT a full IPS solution. WAF cannot inspect non-HTTP traffic (SSH, DNS, database protocols, etc.). A true IPS needs deep packet inspection across all protocols and layers. WAF protects web applications; Network Firewall protects the entire network.
C) Network Firewall in public subnets — Correct service (Network Firewall) but wrong subnet placement. Placing firewall endpoints directly in the public subnets violates security best practices: exposes the IPS to direct internet traffic, creates potential routing issues, and doesn't ensure symmetric traffic inspection for both inbound and outbound flows. Dedicated subnets are required.
Q34.A company needs a dedicated and isolated internet channel to connect an on-premises site to Amazon S3 buckets. The company's compliance policies require data in transit to be encrypted with IPsec.

Which solution will meet these requirements?
AUse an AWS Direct Connect link configured with a public virtual interface (VIF). Use an AWS Site-to-Site VPN tunnel with the Direct Connect link to comply with IPsec data encryption.
BUse an AWS Direct Connect link configured with a public virtual interface (VIF). No additional configuration is needed because IPsec encryption is already supported by default by Direct Connect.
CUse an AWS Direct Connect link configured with a private virtual interface (VIF). Use an AWS Site-to-Site VPN tunnel with the Direct Connect link to comply with IPsec data encryption.
DUse an AWS Direct Connect link configured with a private virtual interface (VIF). No additional configuration is needed because IPsec encryption is already supported by default by Direct Connect.
✓ Correct: A. Direct Connect with public VIF + Site-to-Site VPN for IPsec encryption.

How to Think About This:
This question tests two concepts simultaneously:
1. Public VIF vs. Private VIF — S3 is a public AWS service, so you need a public VIF
2. Direct Connect + IPsec — Direct Connect does NOT encrypt by default, so you must add a VPN tunnel

Key Concepts:
Direct Connect Virtual Interfaces (VIFs):
Public VIF — Connects to public AWS services (S3, DynamoDB, STS, etc.) over Direct Connect. Uses public IP addresses and BGP. Required when accessing AWS services that have public endpoints.
Private VIF — Connects to resources inside a VPC (EC2 instances, RDS databases, etc.) using private IP addresses. Cannot reach public AWS service endpoints like S3 directly.
Transit VIF — Connects to a Transit Gateway for multi-VPC connectivity.

Since S3 is a public service (not inside a VPC), you must use a public VIF. This eliminates options C and D.

Direct Connect Does NOT Encrypt by Default:
Direct Connect provides a dedicated physical connection between your data center and AWS — it's isolated from the public internet. However, the traffic over that physical link is NOT encrypted. Direct Connect provides network isolation, not encryption.

To add IPsec encryption, you layer an AWS Site-to-Site VPN tunnel over the Direct Connect link. This gives you both:
Dedicated/isolated connection (from Direct Connect)
IPsec encryption (from the VPN tunnel)

The Complete Solution:
On-premises → Direct Connect (dedicated link) → Public VIF (reaches S3) → VPN tunnel overlay (IPsec encryption) → S3
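A rough sketch of the VPN half of that solution, assuming a virtual private gateway terminates the tunnel; all IDs, IPs, and the ASN are placeholders:
# On-premises VPN device, reachable over the Direct Connect public VIF
aws ec2 create-customer-gateway --type ipsec.1 --public-ip 203.0.113.10 --bgp-asn 65000
# IPsec Site-to-Site VPN connection layered over the Direct Connect link
aws ec2 create-vpn-connection --type ipsec.1 --customer-gateway-id cgw-0example --vpn-gateway-id vgw-0example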

Why others are wrong:
B) Public VIF, no additional config — Correct VIF type (public for S3), but wrong about encryption. Direct Connect does NOT support IPsec by default. The link is dedicated and isolated, but the data is unencrypted. You must add a Site-to-Site VPN tunnel for IPsec compliance.
C) Private VIF + VPN — Has the right idea about adding VPN for encryption, but wrong VIF type. Private VIFs connect to VPC resources (EC2, RDS), not public AWS services like S3. You cannot reach S3 directly through a private VIF — you need a public VIF for public service endpoints.
D) Private VIF, no additional config — Wrong on both counts: wrong VIF type (private can't reach S3) and wrong about encryption (Direct Connect doesn't encrypt by default).
Q35.A company policy requires that no insecure server protocols (FTP, Telnet, HTTP) are used on its Amazon EC2 instances. The security team wants to evaluate compliance using a scheduled Amazon EventBridge event to review infrastructure and create regular reports.

Which process will check the compliance of the company's EC2 instances?
AInvoke an AWS Config rules evaluation of the restricted-common-ports rule against every EC2 instance.
BQuery the AWS Trusted Advisor API for all AWS best practice security checks. Check for the "Action recommended" status.
CEnable an Amazon GuardDuty threat detection analysis that targets the port configuration on every EC2 instance.
DRun an Amazon Inspector assessment by using the Network Reachability rules package against the instances.
✓ Correct: D. Amazon Inspector with the Network Reachability rules package.

How to Think About This:
The key distinction: the question asks which protocols are actively running on the EC2 instances, not what the AWS configuration (security groups, NACLs) allows. This requires instance-level scanning, not configuration-level checking. Only Amazon Inspector actually scans what's running on the instance.

Key Concepts:
Amazon Inspector — A vulnerability assessment service that scans EC2 instances for:
Software vulnerabilities (CVEs) — checks installed packages against vulnerability databases
Network Reachability — analyzes network configurations AND what ports/protocols are actually accessible on the instance

Network Reachability Rules Package — Specifically designed to find security vulnerabilities in EC2 network configurations. It checks:
• Which TCP/UDP ports are listening on the instance (actual running services)
• Whether those ports are reachable from the internet through security groups, NACLs, and route tables
• Whether insecure protocols (FTP port 21, Telnet port 23, HTTP port 80) are exposed

This is exactly what the question asks: "are insecure protocols running on EC2 instances?"

Configuration Checking vs. Instance Scanning:
AWS Config — checks the AWS-level configuration (security group rules, NACL rules). Knows "port 21 is allowed in the security group" but doesn't know if FTP is actually running on the instance.
Amazon Inspector — scans the actual instance. Knows "FTP server is running and listening on port 21 AND it's reachable from the internet." This is a deeper, more accurate check.

Why D is correct: Inspector's Network Reachability package scans EC2 instances to determine which protocols are actively running and whether they're network-accessible. This directly answers "are insecure protocols running on our instances?" It can be triggered on a schedule via EventBridge for regular compliance reporting.
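A sketch of the scheduled evaluation, assuming an existing Inspector Classic assessment template (the rule name and ARN are placeholders):
# Daily EventBridge schedule; its target would be the Inspector assessment template
aws events put-rule --name daily-protocol-compliance --schedule-expression "rate(1 day)"
# Run the Network Reachability assessment on demand
aws inspector start-assessment-run --assessment-template-arn arn:aws:inspector:us-east-1:123456789012:target/0-EXAMPLE/template/0-EXAMPLE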

Why others are wrong:
A) AWS Config restricted-common-ports — Config checks security group rules (AWS configuration), not what's actually running on the instance. An instance could have FTP running even if the security group blocks port 21 (the service is still installed and running, just not reachable). Conversely, a security group might allow port 21 but no FTP server is running. Config checks the door, Inspector checks what's inside.
B) Trusted Advisor — Provides high-level best practice recommendations about AWS configuration (open security groups, exposed access keys, MFA status). It cannot scan EC2 instances to determine which protocols are running. Trusted Advisor checks AWS configuration, not instance-level state.
C) Amazon GuardDuty — Analyzes VPC Flow Logs, CloudTrail, and DNS logs to detect threats and anomalies (port scanning, credential abuse, C2 communication). GuardDuty does not scan instances to check which TCP ports are active. It detects suspicious behavior, not running services.
Q36.A web application runs on EC2 instances behind an ALB, with an RDS MySQL database and a Linux bastion host for schema updates. Admins SSH to the bastion from corporate workstations. The following security groups are applied:

sgLB — Associated with the ALB
sgWeb — Associated with the EC2 instances
sgDB — Associated with the database
sgBastion — Associated with the bastion host

Which security group configuration is both secure and functional?
AsgLB: 80/443 from 0.0.0.0/0 | sgWeb: 80/443 from 0.0.0.0/0 | sgDB: 3306 from sgWeb, sgBastion | sgBastion: 22 from corporate IP range
BsgLB: 80/443 from 0.0.0.0/0 | sgWeb: 80/443 from sgLB | sgDB: 3306 from sgWeb, sgLB | sgBastion: 22 from VPC IP range
CsgLB: 80/443 from 0.0.0.0/0 | sgWeb: 80/443 from sgLB | sgDB: 3306 from sgWeb, sgBastion | sgBastion: 22 from VPC IP range
DsgLB: 80/443 from 0.0.0.0/0 | sgWeb: 80/443 from sgLB | sgDB: 3306 from sgWeb, sgBastion | sgBastion: 22 from corporate IP range
✓ Correct: D. The only configuration where every security group follows least privilege correctly.

How to Think About This:
Evaluate each security group independently. For each one, ask: "What is the minimum source that needs to reach this resource?" The correct answer restricts each layer to accept traffic only from the layer directly in front of it.

The Correct Traffic Flow:
Internet → ALB (sgLB) → EC2 (sgWeb) → RDS (sgDB)
Corporate → Bastion (sgBastion) → RDS (sgDB)

Security Group Breakdown:

sgLB: Port 80/443 from 0.0.0.0/0
The ALB is the public entry point. It must accept HTTP (80) and HTTPS (443) from the entire internet. This is correct for a public-facing load balancer.

sgWeb: Port 80/443 from sgLB
EC2 instances are behind the ALB. They should only accept traffic from the ALB, not directly from the internet. Using the security group ID (sgLB) as the source is the best practice — it means only traffic forwarded by the ALB can reach the instances. Opening 80/443 from 0.0.0.0/0 (option A) would bypass the ALB entirely, defeating its purpose.

sgDB: Port 3306 from sgWeb and sgBastion
The MySQL database (port 3306) should only accept connections from:
sgWeb — the application servers that read/write data
sgBastion — the bastion host for schema updates
Allowing sgLB to reach the database (option B) is a security violation — the ALB should never talk directly to the database.

sgBastion: Port 22 from corporate IP range
Admins SSH from corporate workstations. The bastion should only accept SSH (port 22) from the corporate IP address range. Using the VPC IP range (options B, C) is wrong because corporate workstations are on-premises, not inside the VPC. The corporate IP range refers to the company's public IP addresses that the workstations use to reach AWS over the internet.
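A sketch of the key ingress rules from option D (security group IDs and the corporate CIDR 203.0.113.0/24 are placeholders; port 80 rules would follow the same pattern):
# sgWeb: HTTPS only from the load balancer's security group
aws ec2 authorize-security-group-ingress --group-id sg-0web --protocol tcp --port 443 --source-group sg-0lb
# sgDB: MySQL only from the application servers and the bastion
aws ec2 authorize-security-group-ingress --group-id sg-0db --protocol tcp --port 3306 --source-group sg-0web
aws ec2 authorize-security-group-ingress --group-id sg-0db --protocol tcp --port 3306 --source-group sg-0bastion
# sgBastion: SSH only from the corporate IP range
aws ec2 authorize-security-group-ingress --group-id sg-0bastion --protocol tcp --port 22 --cidr 203.0.113.0/24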

Why others are wrong — one error each:
A) sgWeb allows 80/443 from 0.0.0.0/0 — EC2 instances behind an ALB should only accept traffic from the ALB, not the entire internet.
B) sgDB allows 3306 from sgLB — The ALB should never have direct database access. Only the app servers and bastion need database access. Also, sgBastion allows SSH from VPC range (wrong — admins are on-premises).
C) sgBastion allows 22 from VPC IP range — Corporate workstations are on-premises, not in the VPC. SSH connections come from the corporate public IP range, not VPC internal IPs.
Q37.A company wants to enable SSO so that employees can sign in to the AWS Management Console by using the company's SAML identity provider.

Which combination of steps are required as part of the process? (Select TWO)
ACreate an AWS Direct Connect connection between the corporate network and the AWS Region with the company's infrastructure.
BCreate IAM policies that can be mapped to group memberships in the corporate directory.
CCreate an AWS Lambda function to assign IAM roles to the temporary security tokens provided to the users.
DCreate IAM users that can be mapped to the employees' corporate identities.
ECreate an IAM role that establishes a trust relationship between IAM and the corporate SAML identity provider (IdP).
✓ Correct: B and E. Create IAM policies mapped to directory groups + Create an IAM role trusting the SAML IdP.

How to Think About This:
SAML federation to the AWS Console requires two things:
Trust — An IAM role that trusts the corporate SAML IdP (E)
Permissions — IAM policies on that role, mapped to corporate directory groups (B)

No IAM users, no Lambda functions, no Direct Connect needed.

Key Concepts:
SAML Federation Flow:
1. Employee goes to the corporate SSO portal (AD FS, Okta, etc.)
2. IdP authenticates against Active Directory
3. IdP returns a SAML assertion containing the user's identity and group memberships
4. Browser posts the SAML assertion to the AWS Sign-In endpoint
5. AWS calls sts:AssumeRoleWithSAML to assume the IAM role
6. Employee gets a temporary session in the AWS Console

IAM Role with SAML Trust (E) — The role's trust policy specifies the SAML IdP as a trusted principal:
"Principal": {"Federated": "arn:aws:iam::123456789012:saml-provider/CorporateIdP"}
This tells AWS: "If a SAML assertion comes from this IdP, allow the user to assume this role."

IAM Policies Mapped to Groups (B) — Different corporate directory groups (Developers, DBAs, Security) map to different IAM roles with different policies. The SAML assertion includes the user's group membership, which determines which IAM role they assume and what permissions they get. Example: AD group "Developers" → IAM role "DeveloperRole" with EC2 + S3 permissions.
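A sketch of the two required steps (provider name, role name, file names, and the attached managed policy are placeholders; saml-trust.json would hold the Federated principal and sts:AssumeRoleWithSAML action shown above):
# E: register the corporate IdP's SAML metadata and create a role that trusts it
aws iam create-saml-provider --name CorporateIdP --saml-metadata-document file://idp-metadata.xml
aws iam create-role --role-name DeveloperRole --assume-role-policy-document file://saml-trust.json
# B: attach the permissions that the mapped directory group should receive
aws iam attach-role-policy --role-name DeveloperRole --policy-arn arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess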

Why others are wrong:
A) AWS Direct Connect — Direct Connect provides a private network connection between on-premises and AWS. It is not required for SAML federation. The SAML assertion is sent over HTTPS through the regular internet. Direct Connect is for data transfer and private connectivity, not authentication.
C) Lambda function for token assignment — No Lambda function is needed. The SAML assertion is processed by AWS STS automatically. Users don't receive security tokens directly — the IdP provides assertions, and AWS STS creates the session based on the role and policies. The entire flow is built-in; no custom code required.
D) Create IAM users mapped to employees — This is the opposite of federation. The whole point of SAML federation is to avoid creating IAM users for each employee. Instead, employees assume IAM roles through federation. With 1,000 employees, you'd need 1,000 IAM users — federation lets you use 3-5 IAM roles mapped to directory groups.
Q38.A security engineer is developing a mobile app requiring user authentication. The app must support:

• User sign-up with email verification
• Social identity provider (IdP) authentication
• Multi-factor authentication (MFA)
• Secure storage of user attributes
• Custom authentication logic for business rules
• Access to AWS services (S3, DynamoDB) without storing credentials in the app

Which solution will meet these requirements with the LEAST operational overhead?
ACreate an Amazon Cognito user pool with app clients. Configure email verification, MFA, and social IdP federation. Use an AWS Lambda function for custom authentication logic. Create an identity pool linked to the user pool.
BCreate IAM users for each app user with access keys stored in the mobile app. Configure IAM roles for social federation. Use Amazon SNS for email verification and MFA. Use a Lambda function for custom authentication.
CUse Amazon API Gateway with a Lambda function authorizer for authentication. Store user credentials in DynamoDB with server-side encryption. Implement OAuth flows for social login. Use AWS STS for temporary credentials.
DSet up AWS IAM Identity Center for user management. Configure social IdP federation. Use Amazon SES for email verification and Amazon SNS for MFA. Create permission sets for AWS service access.
✓ Correct: A. Cognito User Pool (authentication) + Lambda trigger (custom logic) + Identity Pool (AWS credentials).

How to Think About This:
This is an expanded version of Q10. When you see "mobile app" + "sign-up" + "social login" + "MFA" + "custom auth logic" + "access AWS services", the answer is always Cognito User Pools + Identity Pools. The addition of "custom authentication logic" maps to Lambda triggers — a built-in Cognito feature.

Key Concepts:
Cognito User Pool Features — A complete managed user directory:
Sign-up/sign-in — built-in user registration with customizable fields
Email/phone verification — built-in, no SES/SNS setup needed
MFA — built-in TOTP and SMS support
Social IdP federation — Google, Facebook, Apple, SAML, OIDC
User attribute storage — securely stores custom user attributes
Lambda triggers — hook custom code into the authentication flow

Cognito Lambda Triggers — Let you inject custom business logic at various points:
Pre sign-up — validate/reject registrations based on business rules
Pre authentication — run checks before allowing sign-in
Post authentication — log events, trigger workflows after sign-in
Custom message — customize verification emails/SMS
Pre token generation — add/modify claims in the JWT token
This addresses the "custom authentication logic" requirement without building a custom auth system.

Cognito Identity Pool — Links to the User Pool and exchanges authentication tokens for temporary AWS credentials. Users get scoped access to S3, DynamoDB, etc. without any credentials stored in the app. Credentials auto-rotate and expire.

Complete Architecture:
User signs up/in → User Pool (auth + MFA + email verify) → Lambda trigger (custom rules) → JWT token → Identity Pool → Temporary AWS credentials → S3/DynamoDB
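A sketch of the user pool half of that architecture (pool name, Lambda ARN, and client name are placeholders; the identity pool would then be created to reference this pool and client):
# User pool with email verification, optional MFA, and a pre sign-up trigger for custom business rules
aws cognito-idp create-user-pool --pool-name MobileAppUsers --auto-verified-attributes email --mfa-configuration OPTIONAL --lambda-config PreSignUp=arn:aws:lambda:us-east-1:123456789012:function:preSignUpCheck
# App client that the mobile app uses to sign users in
aws cognito-idp create-user-pool-client --user-pool-id us-east-1_EXAMPLE --client-name mobile-client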

Why others are wrong:
B) IAM users + access keys in the app — Multiple critical failures: (1) IAM users are not designed for end-user auth at scale — IAM has a 5,000 user limit per account. (2) Storing access keys in a mobile app is a critical security vulnerability — keys can be extracted by decompiling the app. (3) Building email verification and MFA with SNS requires custom development. Cognito does all of this out of the box.
C) API Gateway + Lambda authorizer + DynamoDB for credentials — This requires building an entire authentication system from scratch: OAuth flow implementation, credential storage, token management, MFA logic. Storing user credentials in DynamoDB (even encrypted) is less secure than Cognito's purpose-built credential storage. Massive operational overhead compared to using Cognito's managed service.
D) IAM Identity Center — Identity Center is for workforce identity (employees accessing AWS accounts and business applications). It is NOT designed for consumer-facing mobile apps. You cannot use Identity Center to provide sign-up, social login, or mobile app authentication for end users. It's the wrong tool entirely.
Q39.A company uses an external identity provider (IdP) for employee logins. Developers need access to multiple AWS accounts using their company credentials. The company needs a centralized SSO solution that integrates with AWS, avoids IAM user management, and follows best practices for multi-account access.

Which authentication strategy will meet these requirements with the LEAST operational overhead?
AUse AWS IAM Identity Center and integrate with the external IdP. Configure permission sets in IAM Identity Center. Assign the permission sets to AWS accounts for the user groups.
BConfigure a SAML IdP with the external IdP. Create IAM roles to allow sts:AssumeRole for SAML principals. Instruct developers to log in to each account by using role switching.
CCreate Amazon Cognito user pools. Configure the user pools to use the IdP as an external OIDC provider for developers to sign in to each account console.
DSet up AWS Directory Service. Create a trust with the company Active Directory to grant console access by using IAM roles federated with Active Directory.
✓ Correct: A. AWS IAM Identity Center with external IdP integration and permission sets.

How to Think About This:
When you see "employees" + "multiple AWS accounts" + "SSO" + "centralized" + "company credentials", the answer is AWS IAM Identity Center. It's the purpose-built service for workforce SSO across multiple AWS accounts. Permission sets define what developers can do in each account.

Notice the contrast with Q38 (mobile app consumers = Cognito) vs. Q39 (employees accessing AWS Console = Identity Center). The exam frequently tests whether you can distinguish between these two services.

Key Concepts:
AWS IAM Identity Center (formerly AWS SSO) — A centralized service for managing workforce access to:
Multiple AWS accounts in your organization
Business applications (Salesforce, Slack, custom SAML apps)
AWS CLI/SDK access with temporary credentials

How It Works:
1. Integrate with external IdP — Connect to Okta, Azure AD, or any SAML 2.0/SCIM provider. Employees use existing company credentials.
2. Create permission sets — Define policies (e.g., "DeveloperAccess" = EC2 + S3 + Lambda). These are reusable across accounts.
3. Assign to accounts + groups — Map permission sets to AWS accounts for specific user groups (e.g., "Dev team gets DeveloperAccess in dev-account and staging-account").
4. Single portal — Developers log in once at the SSO portal and see all accounts/roles they have access to. Click to assume a role in any account.

No IAM users created. No IAM roles to manage per account. No SAML providers to configure per account. Everything is centralized.
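A sketch of the permission-set workflow (the instance ARN, permission set ARN, group ID, and account ID are placeholders; the external IdP connection itself is configured in the Identity Center console):
# Reusable permission set that defines what developers can do
aws sso-admin create-permission-set --instance-arn arn:aws:sso:::instance/ssoins-EXAMPLE --name DeveloperAccess
# Assign the permission set to a directory group for one member account
aws sso-admin create-account-assignment --instance-arn arn:aws:sso:::instance/ssoins-EXAMPLE --permission-set-arn arn:aws:sso:::permissionSet/ssoins-EXAMPLE/ps-EXAMPLE --principal-type GROUP --principal-id 11111111-2222-3333-4444-555555555555 --target-type AWS_ACCOUNT --target-id 111122223333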

Why others are wrong:
B) SAML IdP + IAM roles per account — This works technically but is not centralized. You must: (1) create a SAML provider in each AWS account, (2) create IAM roles in each account, (3) developers must log in to each account individually through separate role switching. With 20 accounts, that's 20 SAML providers + 20+ IAM roles to manage. Identity Center does this with one configuration and one portal.
C) Cognito user pools — Cognito is for consumer/customer-facing applications (mobile apps, web apps). Cognito user pools issue tokens for applications — they do not provide AWS Management Console access. You cannot use Cognito to give employees SSO access to multiple AWS accounts.
D) AWS Directory Service + AD trust — This creates a managed Active Directory in AWS and establishes a trust with on-premises AD. While it can enable console access via IAM roles, it requires significant setup overhead: deploying Directory Service, configuring trust relationships, managing IAM federation per account. Identity Center provides built-in IdP integration without needing to deploy a directory service.
Q40.A company used AWS IAM Roles Anywhere to grant access to a third-party security scanner. Now that scanning is complete, the company needs to revoke all access for the scanner.

What is the MOST secure way to remove this access?
ARevoke the certificate issued to the security scanner.
BDelete the role that is associated with the security scanner.
CRemove the trust anchor for the company.
DAdd a condition to the trust policy that prevents the certificate by CN value.
✓ Correct: A. Revoke the certificate issued to the security scanner.

How to Think About This:
IAM Roles Anywhere uses X.509 certificates for authentication instead of access keys. When you revoke access, you must invalidate the certificate itself — not just remove the role or modify policies. A revoked certificate cannot be used to authenticate, period. Other approaches leave the certificate active, which means it could potentially be reused.

Key Concepts:
IAM Roles Anywhere — Extends IAM roles to workloads outside of AWS (on-premises servers, third-party tools, IoT devices). Instead of using long-term access keys, external workloads use X.509 certificates issued by a Certificate Authority (CA) to authenticate and receive temporary AWS credentials.

How It Works:
1. A trust anchor is created pointing to your CA (or AWS Private CA)
2. A profile specifies which IAM role to assume and session policies
3. The external workload presents its X.509 certificate
4. IAM Roles Anywhere validates the certificate against the trust anchor
5. If valid, it returns temporary AWS credentials (like AssumeRole)

Certificate Revocation List (CRL) — The standard mechanism to invalidate certificates. When you add a certificate to the CRL, any authentication attempt using that certificate is immediately rejected. The certificate becomes permanently unusable for IAM Roles Anywhere, regardless of which role or policy it was associated with.
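If AWS Private CA issued the scanner's certificate, the revocation flow looks roughly like this (ARNs, the serial number, and the CRL file name are placeholders):
# Revoke the scanner's certificate at the issuing CA
aws acm-pca revoke-certificate --certificate-authority-arn arn:aws:acm-pca:us-east-1:123456789012:certificate-authority/EXAMPLE --certificate-serial 0123456789abcdef --revocation-reason CESSATION_OF_OPERATION
# Publish the updated CRL to IAM Roles Anywhere so the certificate is rejected at authentication time
aws rolesanywhere import-crl --name scanner-crl --trust-anchor-arn arn:aws:rolesanywhere:us-east-1:123456789012:trust-anchor/EXAMPLE --crl-data fileb://crl.der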

Why A is correct: Revoking the certificate is the most targeted and secure approach. It invalidates the specific certificate issued to the scanner. The certificate can never be used again for authentication, even if it's presented with a different role or policy. This follows the principle of revoking the authentication credential itself.

Why others are wrong:
B) Delete the associated role — Deleting the role removes the permissions, but the certificate remains active and valid. If the certificate is later associated with a different role (or a new role with the same trust anchor), the scanner could regain access. The authentication credential (certificate) is not invalidated — only the authorization target (role) is removed.
C) Remove the trust anchor — A trust anchor references the Certificate Authority that issued the certificate. Removing it would invalidate ALL certificates issued by that CA — not just the scanner's. If other legitimate services use certificates from the same CA, they would all lose access. This is like revoking everyone's ID badge because one person's badge needs to be cancelled.
D) Add a condition to block by CN value — Adding a trust policy condition to block a specific Common Name (CN) prevents that certificate from assuming this specific role. However, the certificate remains active and could be referenced in other trust policies or roles. Certificate revocation is more secure because it invalidates the credential at the authentication layer, not just the authorization layer.
Q41.A company builds a web application using Amazon API Gateway with microservices on Amazon ECS. The company uses an external IdP and AWS IAM Identity Center for authentication. Microservices need to access Amazon S3 on behalf of authenticated users.

The solution must:
• Preserve end user identity across service layers
• Enforce least privilege access
• No custom authorization logic

Which solution will meet these requirements?
AConfigure IAM Identity Center Active Directory sync. Configure ECS tasks with a static IAM role that grants access to the S3 bucket. Enforce user-level access in the application code.
BAttach a user identity token as a custom header in each request. Configure an AWS Lambda authorizer in API Gateway to validate the token and map permissions.
CConfigure each ECS task with an IAM role that uses attribute-based access control (ABAC). Include user attributes in request headers that pass from API Gateway.
DConfigure trusted identity propagation in IAM Identity Center. Configure ECS tasks to assume a user-specific role derived from the IAM Identity Center session to access Amazon S3.
✓ Correct: D. IAM Identity Center trusted identity propagation.

How to Think About This:
Three requirements narrow the answer:
"Preserve end user identity across service layers" → identity propagation (not static roles)
"Least privilege" → per-user permissions (not one shared role for all users)
"No custom authorization logic" → built-in feature (not Lambda authorizers or app code)

Only trusted identity propagation meets all three. It's a built-in IAM Identity Center feature that passes the authenticated user's identity through the entire service chain without custom code.

Key Concepts:
Trusted Identity Propagation — A feature of IAM Identity Center that adds identity metadata to IAM role sessions. This means when an ECS task accesses S3, AWS knows which end user initiated the request — not just which service role. Key benefits:
User identity flows end-to-end: API Gateway → ECS → S3 all see the original user
S3 access policies can be user-specific: grant Alice access to her folder, Bob to his
No custom code needed: the identity propagation is handled by IAM Identity Center
Auditing shows the real user: CloudTrail logs show which user accessed S3, not just "ECS task role"

The Flow:
User authenticates via IdP → IAM Identity Center session created → API Gateway receives request → ECS task assumes user-specific role from Identity Center session → S3 access is scoped to that user

Without trusted identity propagation, ECS tasks use a shared static IAM role. All users appear as the same principal. You'd need custom application code to enforce per-user access. CloudTrail can't distinguish which user made which S3 call.

Why others are wrong:
A) Static IAM role + application code — A static role grants the same permissions to all users — violates least privilege. Enforcing user-level access "in the application code" means writing custom authorization logic — which the question explicitly prohibits. Also, AD sync is specific to Microsoft AD, not a generic external IdP solution.
B) Lambda authorizer with token validation — A Lambda authorizer requires writing custom logic to validate tokens and map permissions. The question says "does not want to implement custom authorization logic." Lambda authorizers are useful but add operational overhead and custom code to maintain.
C) ABAC with user attributes in headers — ABAC uses tags/attributes to make authorization decisions. However, passing user attributes in request headers from API Gateway to ECS doesn't preserve the user's identity at the AWS level. The ECS task still uses a shared IAM role — AWS services like S3 don't see the individual user. ABAC defines permissions based on resource tags, but it doesn't propagate end-user identity through service layers.
Q42.A company uses AWS Organizations and wants to secure root access to all member accounts. Requirements:

• Prevent root credential recovery by unauthorized users
• No long-term credentials for the root user
• Credentials cannot be leaked
• Root actions are controlled and restricted
• Must be able to perform specific root user functions quickly

Which solution will meet these requirements?
ARemove existing root access keys. Update root passwords. Ensure only authorized users have root passwords. Create an SCP that denies all actions by the root user in all accounts.
BUpdate root user email to a limited access mailbox in all accounts. Remove existing credentials. Update root passwords. Ensure only authorized users have mailbox and password access.
CApply hardware MFA to root user in all accounts. Remove existing credentials. Update root passwords. Ensure only authorized users have MFA tokens and passwords.
DEnable root access management in the organization management account. Remove any existing credentials. Ensure only authorized users have access to the management account.
✓ Correct: D. Enable centralized root access management in the organization management account.

How to Think About This:
The question has a critical combination: "no long-term credentials" + "credentials cannot be leaked" + "perform root functions quickly." Options A, B, and C all involve root passwords — which are long-term credentials that can be leaked. Only option D eliminates root passwords entirely through centralized root management.

Key Concepts:
Centralized Root Access Management — An AWS Organizations feature that allows the management account to:
Remove all root credentials from member accounts (passwords, access keys, MFA) — no long-term credentials exist
Disable account recovery for member accounts — no one can reset the root password from the member account
Perform specific root actions directly from IAM in the management account — without ever logging in as root in the member account
No SCP changes needed — the management account can perform root-privileged actions through IAM, not through the root user itself

What Root Actions Can Be Performed:
From the management account, authorized IAM users can perform limited root-only actions on member accounts:
• Delete S3 bucket policies that deny all access (lockout recovery)
• Delete SQS resource policies that deny all access
• Other emergency root-required operations
These actions are quick — no MFA token retrieval, no SCP removal, no password lookup needed.

Why This Is More Secure:
• No root password exists in member accounts → nothing to leak
• No root login possible in member accounts → no credential stuffing attacks
• No account recovery possible from member accounts → attackers can't reset root password via email
• All root-like actions go through IAM in the management account → governed by IAM policies, logged in CloudTrail

Why others are wrong:
A) SCP denying all root actions — The SCP blocks root from acting, but root passwords still exist (can be leaked). When you need to perform a root action, you must remove the SCP first — temporarily exposing ALL member accounts in that OU. Also, removing and reapplying SCPs is slow, error-prone, and not "quick."
B) Limited access mailbox + root passwords — Root passwords still exist and can be leaked. Anyone with mailbox access can recover root credentials. This doesn't prevent root actions — just limits who knows the password. Doesn't meet "no long-term credentials" or "credentials cannot be leaked."
C) Hardware MFA + root passwords — More secure than B (requires physical MFA device), but root passwords still exist. Retrieving the physical MFA token is time-consuming (stored in a safe, requires physical access) — doesn't meet "quickly." Also doesn't limit which actions root can take once logged in — full unrestricted root access.
Q43.A DevOps team has pipelines across several AWS accounts. A security team created a central identity account with IAM roles — each role manages a specific service (EC2, Lambda, API Gateway) in a specific account. The DevOps team finds switching roles cumbersome.

The team wants one role that can manage multiple services in a project, but is limited to only objects belonging to that project.

Which solution resolves this while following AWS best practices?
AImplement an attribute-based access control (ABAC) authorization strategy with a role policy that allows the management of objects with a project-specific tag.
BImplement an SCP that allows the use of the services the DevOps team uses. Allow the team to use a single role with the managed AdminAccess policy.
CCreate a new role with the AmazonDevOpsGuruFullAccess managed policy attached. Set a permissions boundary to allow only the services the DevOps team uses.
DCreate an IAM group with permissions that allow the use of services the DevOps team uses. Add the DevOps users to the group.
✓ Correct: A. Attribute-Based Access Control (ABAC) with project-specific tags.

How to Think About This:
When you see "one role" + "multiple services" + "limited to specific project resources", the answer is ABAC. ABAC uses tags to scope permissions: "you can manage any EC2 instance, Lambda function, or API Gateway — but ONLY if it has the tag Project=MyProject." One role, multiple services, project-scoped.

Key Concepts:
ABAC (Attribute-Based Access Control) — An authorization strategy where permissions are based on tags (attributes) rather than explicit resource ARNs. The core idea:
Tag resources with project identifiers: Project=AlphaProject
Tag the IAM principal (user/role) with the same project: Project=AlphaProject
Policy condition allows actions only when the principal's tag matches the resource's tag

Example ABAC Policy:
"Condition": {"StringEquals": {"aws:ResourceTag/Project": "${aws:PrincipalTag/Project}"}}
This says: "Allow this action only if the resource's Project tag matches the caller's Project tag."
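A sketch of wiring this up (role name, policy name, and tag value are placeholders; abac-policy.json would carry the service actions plus the condition shown above):
# Tag the shared role with the team's project
aws iam tag-role --role-name DevOpsProjectRole --tags Key=Project,Value=AlphaProject
# Attach a policy that allows ec2/lambda/apigateway actions only on resources with a matching Project tag
aws iam create-policy --policy-name ProjectScopedAccess --policy-document file://abac-policy.json
aws iam attach-role-policy --role-name DevOpsProjectRole --policy-arn arn:aws:iam::123456789012:policy/ProjectScopedAccess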

Why ABAC Solves This Problem:
One role, multiple services — The policy can allow ec2:*, lambda:*, apigateway:* all in one role
Project-scoped — The tag condition ensures the role can only touch resources tagged with the team's project
Scales automatically — New resources tagged with the project are automatically accessible; no policy updates needed
Least privilege — Cannot access resources from other projects, even if they use the same services

ABAC vs. RBAC:
RBAC (Role-Based) — One role per service per account. 5 services × 10 accounts = 50 roles to manage. Cumbersome.
ABAC (Attribute-Based) — One role per project. Tag-based conditions scope access. Scales without role explosion.

Why others are wrong:
B) SCP + AdminAccess role — AdminAccess grants access to ALL resources, not just project-specific ones. SCPs can restrict which services are allowed, but they cannot scope access to specific project resources by tag. The DevOps team could access resources belonging to other projects. Violates least privilege.
C) DevOps Guru + permissions boundary — AmazonDevOpsGuruFullAccess is for the Amazon DevOps Guru service (an ML-powered operations service). It's NOT a general-purpose DevOps policy for managing EC2, Lambda, and API Gateway. This is a naming trap — "DevOps" in the policy name doesn't mean it's for DevOps teams.
D) IAM group with service permissions — An IAM group can grant permissions for EC2, Lambda, and API Gateway. However, without tag-based conditions, the group grants access to ALL EC2 instances, ALL Lambda functions, etc. — not just those belonging to the team's project. Violates "limited to only objects belonging to the project."
Q44.A company uses a SAML identity provider (IdP) to allow external users to access an Amazon S3 bucket. The security team suspects that a role's credentials have been compromised. The team must immediately stop users with active sessions to ensure compromised credentials are not in use.

Which step should the security team take?
AAttach an S3 bucket policy with Effect "Deny", Action "s3:*", and Principal set to the role associated with the IdP.
BRemove the IAM policy that is attached to the role associated with the IdP.
CDelete the compromised role's access key and secret access key.
DUse the AWSRevokeOlderSessions policy to revoke active sessions for the role associated with the IdP.
✓ Correct: D. Revoke active sessions using the AWSRevokeOlderSessions policy.

How to Think About This:
The key phrase is "immediately stop users with active sessions." This means you need to invalidate existing temporary credentials that were already issued. Changing policies or bucket policies affects future API calls but doesn't invalidate the session tokens themselves. Only revoking sessions kills active tokens immediately.

This is the same concept as Q26 (compromised IAM role) but specifically for SAML-federated roles. The mechanism is identical: revoke active sessions.

Key Concepts:
AWSRevokeOlderSessions — When you "Revoke active sessions" for an IAM role (via console or API), AWS automatically attaches a policy called AWSRevokeOlderSessions to the role. This policy:
• Contains a Deny all with a condition: aws:TokenIssueTime is before the revocation timestamp
• Any temporary credentials issued before the revocation are immediately denied for all actions
• Any credentials issued after the revocation work normally
• Takes effect immediately — no propagation delay

How It Works:
Revoke sessions at 3:00 PM → All tokens issued before 3:00 PM are denied → New AssumeRoleWithSAML calls after 3:00 PM get valid tokens

This is the only way to immediately stop active sessions. All other approaches affect future access but leave existing session tokens valid until they naturally expire (up to 12 hours).
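The console does this for you, but the equivalent CLI call looks roughly like this (the role name and cutoff timestamp are placeholders):
# Deny every action for sessions whose tokens were issued before the cutoff
aws iam put-role-policy --role-name FederatedS3Role --policy-name AWSRevokeOlderSessions --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Deny","Action":"*","Resource":"*","Condition":{"DateLessThan":{"aws:TokenIssueTime":"2024-06-01T15:00:00Z"}}}]}'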

Why others are wrong:
A) S3 bucket policy with Deny — Adding a Deny to the bucket policy blocks future S3 API calls from the role. However, this is not "immediate" in the same way — the compromised credentials could still be used for other AWS services (if the role has broader permissions). Also, bucket policies don't invalidate the session tokens — they block specific actions on one resource. Revoking sessions blocks all actions everywhere for compromised tokens.
B) Remove the IAM policy from the role — Removing the policy removes all future permissions for the role — including for legitimate users who assume the role after the incident. This is too broad (affects everyone, not just compromised sessions) and doesn't immediately invalidate existing session tokens. Cached temporary credentials may still work until they expire.
C) Delete access key and secret key — IAM roles don't have permanent access keys. Access keys belong to IAM users. SAML-federated users receive temporary session credentials via AssumeRoleWithSAML, not permanent access keys. There's nothing to delete. This option fundamentally misunderstands how SAML federation works.
Q45.A company has an IAM user group with a small set of IAM users. A security administrator plans to attach a new IAM policy to the group for restricted access to an Amazon S3 bucket. Before applying, the administrator wants to verify the overall effective access — including existing policies on each user.

Which solution will meet this requirement?
AUse the IAM policy simulator in Existing Policies mode. Select all existing policies for an IAM user in the group. Create the new policy in the simulator. Run the simulation with all S3 actions. Repeat for each IAM user.
BUse the IAM policy simulator in Existing Policies mode. Select all existing policies for the IAM user group. Create the new policy in the simulator. Run the simulation with all S3 actions.
CUse the IAM policy simulator in New Policy mode. Select all existing policies for an IAM user in the group. Create the new policy. Run the simulation with all S3 actions. Repeat for each user.
DUse the IAM policy simulator in New Policy mode. Select all existing policies for the IAM user group. Create the new policy. Run the simulation with all S3 actions.
✓ Correct: A. IAM Policy Simulator in Existing Policies mode, testing each user individually.

How to Think About This:
Two decisions to make:
1. Existing Policies mode vs. New Policy mode → Existing Policies mode lets you select specific IAM users and their attached policies. New Policy mode does not.
2. Per-user vs. per-group testing → Each IAM user may have different directly attached policies and permissions boundaries beyond the group policy. You must test each user individually to get the full effective access picture.

Key Concepts:
IAM Policy Simulator — A tool that lets you test and validate IAM policies before applying them. It answers: "If this user tries to do X, will it be allowed or denied?" Two modes:
Existing Policies mode — Select an existing IAM user/group/role and see their current policies. You can add a new policy to simulate the combined effect. This is what you need to verify "overall effective access."
New Policy mode — Write and test a standalone policy in isolation. You cannot select existing users or their attached policies. Only tests the new policy by itself.

Why Test Each User Individually:
IAM users can have permissions from multiple sources:
Group policies — shared by all group members
Directly attached policies — unique to each user
Permissions boundaries — may differ per user
Inline policies — may differ per user

User Alice might have a directly attached Deny for S3, while User Bob might have a broader Allow. Testing only the group policy misses these user-specific overrides. You must test each user to see their complete effective permissions.
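A sketch of testing one user at a time (the user ARN and new-policy.json are placeholders; --policy-input-list layers the proposed policy on top of the user's existing policies):
# Simulate existing policies plus the proposed policy against S3 actions for one user
aws iam simulate-principal-policy --policy-source-arn arn:aws:iam::123456789012:user/alice --policy-input-list file://new-policy.json --action-names s3:GetObject s3:PutObject s3:DeleteObject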

Policy Simulator vs. Access Advisor vs. Access Analyzer:
Policy Simulator — "Will this action be allowed?" (test before applying)
Access Advisor — "Which services were actually used?" (review after the fact)
Access Analyzer — "Is this resource shared externally?" (detect unintended access)

Why others are wrong:
B) Existing Policies mode, group only — Testing only the group's policies misses policies attached directly to individual users. A user might have a deny policy or permissions boundary that changes the effective access. You must test per-user to get accurate results.
C) New Policy mode, per user — New Policy mode does not allow you to select existing IAM users or their attached policies. You can only write and test a new policy in isolation. It cannot show the combined effect of existing + new policies.
D) New Policy mode, group only — Same problem as C (can't select existing users) plus same problem as B (misses user-specific policies). Wrong on both counts.
Q46.A company will deploy an application on Amazon EC2 instances in private subnets. The application will transfer sensitive data to and from an Amazon S3 bucket. According to compliance requirements, the data must not traverse the public internet.

Which solution meets the compliance requirement?
AAccess the S3 bucket through a proxy server.
BAccess the S3 bucket through a NAT gateway.
CAccess the S3 bucket through a VPC endpoint for S3.
DAccess the S3 bucket through the SSL-protected S3 endpoint.
✓ Correct: C. Use a VPC endpoint for S3.

How to Think About This:
When you see "private subnets" + "must not traverse public internet" + "access S3", the answer is always VPC endpoint. A VPC endpoint keeps traffic entirely within the AWS private network — it never touches the public internet.

Key Concepts:
VPC Endpoints for S3 — Two types exist:
Gateway endpoint (most common for S3) — Free, adds a route to your subnet route table that directs S3 traffic through the endpoint. Traffic stays on the AWS private network. No internet gateway, NAT gateway, or VPN needed.
Interface endpoint (powered by PrivateLink) — Creates an ENI with a private IP in your subnet. Costs per hour + per GB. Useful when you need DNS resolution from on-premises or specific security group controls.

Gateway Endpoint Flow:
EC2 (private subnet) → Route table entry for S3 prefix list → Gateway Endpoint → S3 (all within AWS network)

No internet gateway, no NAT gateway, no public IP needed. The traffic never leaves the AWS private network.
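As a sketch, a gateway endpoint for S3 can be created with a single call; the VPC ID, route table ID, and Region below are placeholders:
aws ec2 create-vpc-endpoint --vpc-id vpc-0abc1234 --vpc-endpoint-type Gateway --service-name com.amazonaws.us-east-1.s3 --route-table-ids rtb-0abc1234
This adds the S3 prefix-list route to the specified route table, so instances in the associated subnets reach S3 privately.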

What Traverses the Internet vs. What Doesn't:
NAT gateway → Routes traffic TO the internet (public) — data traverses the internet
Internet gateway → Direct internet access — data traverses the internet
SSL/HTTPS endpoint → Encrypted but still goes over the internet — data traverses the internet
VPC endpoint → Private AWS network only — data does NOT traverse the internet
Direct Connect → Dedicated private link — data does NOT traverse the internet

Why C is correct: A VPC endpoint for S3 provides a private connection between your VPC and S3 without requiring an internet gateway, NAT device, VPN, or Direct Connect. Traffic between the VPC and S3 stays entirely within the Amazon network. This meets the compliance requirement that data must not traverse the public internet.

Why others are wrong:
A) Proxy server — A proxy server forwards requests on behalf of clients. Unless the proxy connects to S3 through a VPC endpoint, it would route traffic through the internet. A proxy alone doesn't guarantee private connectivity — it just adds an intermediary. The proxy itself would need a VPC endpoint to keep traffic private.
B) NAT gateway — A NAT gateway enables instances in private subnets to reach the public internet. Traffic from the instance goes through the NAT gateway, then the internet gateway, then the public internet to reach S3. This directly violates the "must not traverse the public internet" requirement.
D) SSL-protected S3 endpoint — SSL/TLS encrypts data in transit, but the traffic still travels over the public internet. Encryption protects confidentiality (no one can read the data), but it doesn't prevent the data from traversing the internet. The compliance requirement is about the network path, not just encryption.
Q47.A company wants to use AWS Network Firewall to inspect encrypted inbound and outbound SSL/TLS traffic. The company provisions certificates outside of AWS. The company wants to make the certificates available to Network Firewall.

Which solution will meet these requirements?
AUpload the certificates to an Amazon S3 bucket. Create a TLS inspection configuration that references the certificates. Associate the configuration with a Network Firewall policy.
BImport the certificates directly into Network Firewall. Create a TLS inspection configuration that references the certificates. Associate the configuration with a Network Firewall policy.
CImport the certificates to AWS Certificate Manager (ACM). Create a TLS inspection configuration that references the certificates. Associate the configuration with a Network Firewall policy.
DCreate a connector for Simple Certificate Enrollment Protocol (SCEP) and designate the private CA. Configure a Network Firewall policy to automatically request certificates from the private CA.
✓ Correct: C. Import certificates to ACM, then reference them in Network Firewall's TLS inspection configuration.

How to Think About This:
When you see "external certificates" + "Network Firewall TLS inspection", the path is always: Import to ACM first, then reference from Network Firewall. ACM is the central certificate store for AWS services — Network Firewall cannot use certificates from S3 or directly imported.

Key Concepts:
Network Firewall TLS Inspection — Allows Network Firewall to decrypt, inspect, and re-encrypt SSL/TLS traffic. This is how you inspect encrypted traffic for malware, data exfiltration, or policy violations. The firewall acts as a man-in-the-middle (for your own traffic) by using certificates you provide.

Certificate Flow:
External CA issues certificate → Import to ACM → Network Firewall TLS inspection config references ACM certificate → Associate config with firewall policy

Why ACM Is Required:
• ACM is the central certificate management service for AWS
• Network Firewall integrates only with ACM for TLS inspection certificates
• ACM supports importing third-party certificates (not just AWS-issued ones)
• When you import to ACM, you provide: the certificate, private key, and certificate chain

ACM Import Process:
aws acm import-certificate --certificate file://cert.pem --private-key file://key.pem --certificate-chain file://chain.pem

Why C is correct: Importing certificates to ACM and referencing them in the TLS inspection configuration is the only supported method. Network Firewall requires certificates to be stored in ACM — it cannot read from S3, accept direct imports, or auto-request from CAs via SCEP.

Why others are wrong:
A) Upload to S3 — S3 is object storage, not a certificate management service. Network Firewall's TLS inspection configuration cannot reference certificates in S3. Certificates must be in ACM for Network Firewall to use them.
B) Import directly into Network Firewall — Network Firewall does not support direct certificate import. It can only reference certificates stored in ACM. There is no "import certificate" feature in Network Firewall itself.
D) SCEP connector with private CA — The Connector for SCEP works with AWS Private CA to issue certificates to SCEP-enabled devices (routers, mobile devices). It is not compatible with Network Firewall. Network Firewall doesn't automatically request certificates — you must manually provide them via ACM.
Q48.A company maintains records in many Amazon S3 buckets and is currently involved in litigation. The company must preserve relevant data until the litigation has concluded (unknown duration). The data must be protected from deletion or overwriting. After the litigation, the company can remove the data.

Which solution will meet these requirements?
ACreate a new S3 bucket to store litigation data. Place a legal hold on the new S3 bucket. Move all litigation data to the new bucket.
BCreate a new S3 bucket. Configure the bucket to use S3 Object Lock compliance mode retention. Move all litigation data to the new bucket.
CEnable versioning and S3 Object Lock for the existing S3 buckets that store litigation data. Use a manifest in S3 Batch Operations to add legal holds to the objects.
DCreate a new S3 bucket. Enable versioning and S3 Object Lock for the new bucket. Move all litigation data to the new bucket. Use a manifest in S3 Batch Operations to add legal holds to the objects.
✓ Correct: D. New bucket with versioning + Object Lock enabled, consolidate data, then apply legal holds via S3 Batch Operations.

How to Think About This:
Three requirements narrow the answer:
1. "Unknown duration" → Legal hold (indefinite), NOT compliance mode (fixed retention period)
2. "Many S3 buckets" → S3 Batch Operations manifest requires all objects in the same bucket — must consolidate first
3. "Protected from deletion" → Requires versioning + Object Lock enabled on the bucket

Key Concepts:
S3 Object Lock — Legal Hold vs. Retention:
Legal Hold — Protects an object for an indefinite period. No expiration date. Stays in effect until manually removed. Perfect for litigation where the end date is unknown.
Compliance Mode Retention — Protects for a fixed period (e.g., 365 days). Nobody can delete or modify — not even root. Requires knowing the retention duration upfront.
Governance Mode Retention — Protects for a fixed period, but users with special permissions can override.

Prerequisites for Object Lock:
Versioning must be enabled on the bucket
Object Lock must be enabled when the bucket is created (cannot be added later to existing buckets)
• Legal holds are applied per-object, not per-bucket

S3 Batch Operations:
• Performs operations on large numbers of objects at once
• Uses a manifest file listing all target objects
Constraint: All objects in the manifest must be in the same bucket
• This is why you must consolidate litigation data from multiple buckets into one bucket first

The Complete Process:
Create new bucket (versioning + Object Lock) → Move litigation data from all buckets → Create manifest of all objects → Run S3 Batch Operations to apply legal holds → After litigation: remove legal holds → Delete data
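For a single object, the legal hold call looks like this (bucket and key names are placeholders); S3 Batch Operations applies this same legal-hold operation to every object listed in the manifest:
aws s3api put-object-legal-hold --bucket litigation-records --key cases/2024/file-001.csv --legal-hold Status=ON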

Why others are wrong:
A) Legal hold on the bucket — Legal holds are applied to individual objects, not to the bucket itself. You cannot place a legal hold on a bucket. Also, this option doesn't mention enabling versioning or Object Lock — both are prerequisites for legal holds to work.
B) Compliance mode retention — Compliance mode requires a fixed retention period (e.g., 1 year, 7 years). The question says the duration is unknown. If you set 1 year and the litigation lasts 3 years, the data becomes deletable after year 1. If you set 10 years and it ends in 6 months, you're stuck with undeletable data for 9.5 years (compliance mode cannot be shortened, even by root). Legal hold is the correct choice for unknown durations.
C) Enable Object Lock on existing buckets + Batch Operations across multiple buckets — Two problems: (1) Object Lock cannot be enabled on existing buckets — it must be enabled at bucket creation time. (2) S3 Batch Operations manifest requires all objects in the same bucket — you can't run a single batch operation across multiple buckets.
Q49.A company wants to configure an S3 bucket to store logs from a sensitive application. Requirements:

• Logs must be retained until the retention period expires
• Users should not be able to delete or modify logs
• After settings are configured, no user should be able to modify the configuration (SELECT TWO)

Which combination of steps will meet these requirements?
AEnable multi-factor authentication (MFA) delete on the S3 bucket.
BEnable S3 Versioning and S3 Object Lock on the S3 bucket.
CPlace a legal hold on the S3 folder location that will store the application logs.
DEnable governance retention mode. Set a retention period on the S3 bucket.
EEnable compliance retention mode. Set a retention period on the S3 bucket.
✓ Correct: B and E. Enable Versioning + Object Lock (prerequisite) AND enable Compliance retention mode (immutable protection).

How to Think About This:
The critical phrase is "no user should be able to modify the configuration" — this means even the root user can't change it. Only compliance mode provides this level of immutability. Governance mode allows users with special permissions to override. This question pairs with Q48's concepts, but here the retention period IS known and the requirement is absolute immutability.

Key Concepts:
S3 Object Lock Prerequisites:
Versioning must be enabled (Object Lock requires versioning)
Object Lock must be enabled on the bucket (at creation time)
Both are prerequisites before you can configure any retention mode. That's why B is required.

Compliance Mode vs. Governance Mode:
Feature | Compliance Mode | Governance Mode
Root user can delete? | No | Yes (with bypass permission)
Can shorten retention? | No | Yes (with bypass permission)
Can modify config? | No — nobody can | Yes (with special perms)
Only way to delete early | Delete the AWS account | Use bypass header
Use case | Regulatory/legal compliance | Internal policy protection

The question says "no user should be able to modify the configuration" — only compliance mode guarantees this.

Why B is correct: Versioning and Object Lock are prerequisites for any retention mode. You cannot configure compliance or governance mode without first enabling both. Versioning maintains object versions (so deletes create delete markers, not actual deletions). Object Lock provides the framework for retention and legal holds.

Why E is correct: Compliance mode provides the strongest immutability: no user (including root) can delete objects, modify objects, or change the retention settings until the retention period expires. This is the only mode that meets "no user should be able to modify the configuration." After the retention period expires, objects can be deleted normally.
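A sketch of setting a default compliance-mode retention on the bucket (the bucket name and period are placeholders; Object Lock must already be enabled on the bucket):
aws s3api put-object-lock-configuration --bucket sensitive-app-logs --object-lock-configuration '{"ObjectLockEnabled":"Enabled","Rule":{"DefaultRetention":{"Mode":"COMPLIANCE","Days":365}}}'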

Why others are wrong:
A) MFA delete — MFA delete requires MFA authentication to delete object versions or change versioning. It adds a security layer but does not enforce a retention period and does not prevent all users from deleting — anyone with MFA and the right permissions can still delete. It's a speed bump, not a wall.
C) Legal hold on folder — Two problems: (1) Legal holds have no expiration date — they remain until manually removed, which doesn't match "until the retention period expires." (2) Legal holds are applied to individual objects, not to "folders" (S3 doesn't have real folders, just key prefixes).
D) Governance retention mode — Governance mode prevents most users from deleting/modifying. However, users with s3:BypassGovernanceRetention permission (including root) CAN override it. The question explicitly says "no user should be able to modify the configuration" — governance mode fails this requirement.
Q50.A financial services company needs to implement encryption for sensitive customer transaction data on AWS. Requirements:

• Maintain control of encryption keys in FIPS 140-2 Level 3 validated hardware
• Support automated key rotation
• Ability to import custom key material (SELECT ALL FOUR, correct order)

Select all four steps in the correct implementation order:
1Set up an AWS CloudHSM cluster.
2Create a custom key store in AWS KMS and import custom key material into AWS CloudHSM.
3Generate data encryption keys by using the custom key store.
4Encrypt the data by using the generated keys.
✓ Correct order: 1 → 2 → 3 → 4. CloudHSM cluster → Custom key store + import key material → Generate data keys → Encrypt data.

How to Think About This:
This is a dependency chain — each step requires the previous step to be complete. Think of it as building layers: hardware first, then key management, then keys, then encryption.

Step-by-Step Breakdown:

Step 1: Set up an AWS CloudHSM cluster
Why first? CloudHSM provides the FIPS 140-2 Level 3 validated hardware security module. This is the physical foundation — the tamper-resistant hardware where keys are generated and stored. Everything else depends on having an active HSM cluster. Without it, you can't create a custom key store or generate any keys.

FIPS 140-2 Levels (exam favorite):
Level 2 — AWS KMS standard (software-based key management)
Level 3 — AWS CloudHSM (dedicated hardware, tamper-resistant, tamper-evident)
When a question mentions "FIPS 140-2 Level 3," the answer always involves CloudHSM.

Step 2: Create a custom key store in KMS + import key material into CloudHSM
Why second? A custom key store bridges KMS and CloudHSM. Instead of KMS managing keys in its default key store, it delegates key operations to your CloudHSM cluster. Importing custom key material gives the company full control over the cryptographic keys. This step requires the CloudHSM cluster to be active (Step 1).

Custom Key Store Architecture:
Application → KMS API (familiar interface) → Custom Key Store → CloudHSM (hardware-backed)
You get the convenience of KMS APIs with the security of dedicated HSM hardware.

Step 3: Generate data encryption keys using the custom key store
Why third? Now that the custom key store is connected to CloudHSM with imported key material, you can use kms:GenerateDataKey to create data encryption keys. These keys are generated inside the HSM hardware and wrapped by your imported master key material. This is envelope encryption: the data key encrypts your data, and the master key (in CloudHSM) encrypts the data key.

Step 4: Encrypt the data using the generated keys
Why last? With data encryption keys generated, you can finally encrypt the customer transaction data. The application uses the plaintext data key to encrypt data, then discards the plaintext key and stores only the encrypted data key alongside the encrypted data. To decrypt later, KMS uses CloudHSM to unwrap the data key.

The Complete Architecture:
CloudHSM (FIPS 140-2 L3 hardware) → Custom Key Store (KMS ↔ CloudHSM bridge) → KMS GenerateDataKey (envelope encryption) → Encrypt transaction data
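A sketch of Step 3 in practice (the key alias is a placeholder; the key must live in the custom key store):
aws kms generate-data-key --key-id alias/transaction-data-key --key-spec AES_256
The call returns both a plaintext data key (use it to encrypt, then discard it) and a CiphertextBlob (store it alongside the encrypted data for later decryption).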

Key Exam Concepts:
"FIPS 140-2 Level 3" = CloudHSM (always)
"FIPS 140-2 Level 2" = Standard KMS
"Custom key material" = Import into CloudHSM via custom key store
"Maintain control of keys" = CloudHSM (you own the hardware, AWS can't access keys)
"KMS convenience + HSM security" = Custom key store
Q51.A company has an unencrypted Amazon EFS file system with many files. The company must encrypt all existing and future files. The files must be encrypted with a key that is rotated every 90 days.

Which solution will meet these requirements?
ACreate a KMS customer managed key with automatic rotation. Modify the existing EFS file system to turn on encryption. Use a Lambda function to copy existing files in place to encrypt them.
BModify the existing EFS file system to turn on encryption using the default KMS service key for EFS. Configure automatic rotation. Use a Lambda function to copy existing files in place.
CCreate a KMS customer managed key with automatic rotation. Create a new EFS file system with encryption enabled using the KMS key. Use AWS DataSync to transfer existing files to the new encrypted file system.
DCreate a new EFS file system with encryption enabled using the default KMS service key for EFS. Configure automatic rotation. Use AWS DataSync to transfer existing files to the new file system.
✓ Correct: C. New EFS file system with customer managed KMS key (auto-rotation) + DataSync to migrate files.

How to Think About This:
Three rules eliminate the wrong answers:
1. Cannot modify an existing EFS to add encryption → must create a NEW file system (eliminates A and B)
2. "Rotated every 90 days" → must be a customer managed key, not the default service key (eliminates D)
3. Move existing files → DataSync is the managed migration tool (not Lambda copy-in-place)

Key Concepts:
EFS Encryption Rules:
• Encryption must be enabled at file system creation time
• You cannot enable encryption on an existing unencrypted EFS file system
• To encrypt existing data, you must: create a new encrypted EFS → migrate data → switch applications to the new file system

KMS Key Types and Rotation:
AWS-managed key (aws/elasticfilesystem) — AWS manages it. You cannot configure custom rotation periods. AWS rotates it automatically every year (365 days), and you cannot change this to 90 days.
Customer managed key — You create and control it. You can configure automatic rotation with a custom period (e.g., 90 days). This meets the "rotated every 90 days" requirement.
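As a sketch, assuming the configurable rotation-period option for customer managed keys, the 90-day rotation would be set like this (the key ID is a placeholder):
aws kms enable-key-rotation --key-id 1234abcd-12ab-34cd-56ef-1234567890ab --rotation-period-in-days 90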

AWS DataSync:
• A managed data transfer service for moving files between storage systems
• Supports EFS-to-EFS, on-premises-to-EFS, S3-to-EFS, and more
• Handles encryption, integrity verification, and scheduling automatically
• The correct tool for migrating data between EFS file systems

Why C is correct: Creates a new EFS (required since you can't encrypt an existing one), uses a customer managed KMS key (supports custom 90-day rotation), and uses DataSync (managed migration tool) to transfer existing files. Every component is correct.

Why others are wrong:
A) Modify existing EFS + Lambda copy — You cannot modify an existing EFS to enable encryption. This is a hard AWS limitation. You must create a new file system. The customer managed key with rotation is correct, but the approach of modifying the existing file system is impossible.
B) Modify existing EFS + default service key — Two problems: (1) Cannot modify existing EFS to add encryption. (2) The default service key (aws/elasticfilesystem) does not support custom rotation periods — it rotates annually and you can't change it to 90 days.
D) New EFS + default service key — Correctly creates a new EFS and uses DataSync. However, the default service key cannot be configured for 90-day rotation. You must use a customer managed key to set custom rotation intervals.
Q52.A financial services company uses an Amazon SNS topic to provide market updates to business partners. A security engineer discovers that some messages contain personally identifiable information (PII). The engineer must ensure that any new PII is not made available to SNS subscribers.

Which solution will meet this requirement with the LEAST operational overhead?
ASubscribe a Lambda function to the SNS topic to write all messages to an S3 bucket. Enable Amazon Macie on the S3 bucket. Upload sanitized information to a new SNS topic for business partners.
BCreate an SNS access policy. Add a condition to deny subscriptions to PII. Attach the access policy to all SNS subscriptions.
CCreate an SNS access policy. Add a condition to deny the publication of PII. Attach the access policy to the SNS topic.
DCreate a message data protection policy in Amazon SNS. Add a rule that denies the publication of PII. Attach the data protection policy to the SNS topic.
✓ Correct: D. SNS message data protection policy to block PII in messages.

How to Think About This:
When you see "SNS" + "PII in messages" + "block/filter content" + "least overhead", the answer is SNS message data protection policy. This is a built-in SNS feature that inspects message content — no Lambda, no S3, no additional services needed.

The key distinction: access policies control WHO can use the topic. Data protection policies control WHAT content flows through the topic.

Key Concepts:
SNS Message Data Protection Policies — A built-in Amazon SNS feature that can:
Audit — Log when PII is detected in messages (to CloudWatch)
Deny — Block messages containing PII from being published entirely
De-identify (mask) — Redact PII before delivering to subscribers (e.g., replace SSN with ***)

SNS uses managed data identifiers to detect PII types including: names, email addresses, phone numbers, SSNs, credit card numbers, passport numbers, and more. No custom regex needed — AWS maintains the detection patterns.

Example Data Protection Policy (abbreviated statement; a full policy also includes Name, Version, DataDirection, and Principal fields):
{"Operation": {"Deny": {}}, "DataIdentifier": ["arn:aws:dataprotection::aws:data-identifier/EmailAddress", "arn:aws:dataprotection::aws:data-identifier/CreditCardNumber"]}

This blocks any message containing email addresses or credit card numbers from being published. Completely built-in, zero custom code.

Why D is correct: SNS data protection policies are a native feature that inspects message content for PII before publication. You attach the policy to the topic, define which PII types to block, and SNS handles the rest. No Lambda functions, no S3 buckets, no additional services — least operational overhead possible.

Why others are wrong:
A) Lambda + S3 + Macie + new topic — This creates an entire pipeline: Lambda to capture messages, S3 to store them, Macie to scan for PII, then republish to a new topic. That's four additional services to build and maintain. Macie is designed for S3 data discovery, not real-time message filtering. Massive operational overhead compared to a single data protection policy.
B) SNS access policy to deny subscriptions to PII — Access policies control who can perform actions (publish, subscribe) on the topic. They operate on principals and resources, not on message content. You cannot write an access policy condition that inspects the actual content of messages for PII. Access policies don't see message payloads.
C) SNS access policy to deny publication of PII — Same problem as B. Access policies are not designed to filter message content. They control who can call sns:Publish, not what the message contains. An access policy can say "only this IAM role can publish" but cannot say "block messages containing SSNs." Content inspection requires a data protection policy, not an access policy.
Q53.A company deploys an application on Amazon EC2 instances with a load balancer. A security team must implement end-to-end encryption of traffic in transit from the public internet to the EC2 instances.

Which solution will meet these requirements?
AGenerate an SSL/TLS certificate using ACM. Create an Application Load Balancer (ALB) with an HTTPS listener. Import the certificate into the EC2 instances.
BGenerate an SSL/TLS certificate using ACM. Create a Network Load Balancer (NLB) with a TLS listener. Import the certificate into the EC2 instances.
CImport a third-party SSL/TLS certificate into ACM. Create an Application Load Balancer (ALB) with an HTTPS listener. Import the certificate into the EC2 instances.
DImport a third-party SSL/TLS certificate into ACM. Create a Network Load Balancer (NLB) with a TCP listener. Import the certificate into the EC2 instances.
✓ Correct: D. Third-party certificate + NLB with TCP listener (not TLS listener) + certificate on EC2 instances.

How to Think About This:
"End-to-end encryption" means the traffic stays encrypted from client all the way to the EC2 instance. The load balancer must NOT decrypt the traffic. Two decisions:
1. Listener type: TCP (passes encrypted traffic through) vs. HTTPS/TLS (terminates and decrypts at LB)
2. Certificate type: ACM-generated (cannot export private key) vs. third-party (can install on EC2)

Only NLB with TCP listener passes traffic through without decrypting. And only third-party certificates can be installed on EC2 instances (ACM won't export private keys).

Key Concepts:
TLS Termination vs. TLS Passthrough:
HTTPS/TLS listener (ALB or NLB) — The load balancer decrypts the traffic, inspects it, then optionally re-encrypts to backend. This is TLS termination. The traffic is decrypted at the LB — NOT end-to-end encrypted.
TCP listener (NLB only) — The load balancer treats the traffic as raw TCP. It passes encrypted traffic through without decrypting. The EC2 instance handles TLS termination directly. This IS end-to-end encrypted.

Note: ALBs only support HTTP/HTTPS listeners — they always terminate TLS. Only NLBs support TCP listeners for passthrough.

ACM Certificates vs. Third-Party Certificates:
ACM-generated certificates — AWS manages the private key. You cannot export the private key. You cannot install ACM-generated certificates on EC2 instances. ACM certificates can only be used with AWS services (ALB, NLB, CloudFront, API Gateway).
Third-party certificates — You own the private key. You can install the certificate on EC2 instances (Apache, Nginx, etc.). You can also import them into ACM for use with AWS services.

For end-to-end encryption, the EC2 instance must have the certificate installed. Since ACM won't export private keys, you need a third-party certificate.

The Complete Flow:
Client (HTTPS) → NLB TCP:443 (passes through, no decryption) → EC2 (TLS termination with third-party cert)
Traffic is encrypted the entire way. The NLB never sees the plaintext.
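A sketch of the passthrough listener (the load balancer and target group ARNs are placeholders); note the protocol is TCP, not TLS:
aws elbv2 create-listener --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/my-nlb/abc123 --protocol TCP --port 443 --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web-tls/def456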

Why others are wrong:
A) ACM cert + ALB HTTPS listener — An ALB HTTPS listener terminates TLS at the ALB. Traffic between ALB and EC2 is decrypted (or re-encrypted, but that's a separate connection). This is NOT end-to-end encryption. Also, ACM certificates cannot be exported to EC2.
B) ACM cert + NLB TLS listener — An NLB TLS listener also terminates TLS at the NLB, just like an ALB HTTPS listener. The traffic is decrypted before reaching EC2. A TLS listener ≠ passthrough. Only a TCP listener passes traffic through. Also, ACM certificates cannot be exported to EC2.
C) Third-party cert + ALB HTTPS listener — Correct certificate type (third-party, can install on EC2). But ALB HTTPS still terminates TLS at the ALB. Even if the certificate is on both the ALB and EC2, the ALB decrypts and re-encrypts — two separate TLS sessions, not true end-to-end encryption.
Q54.A company requires all departments to use custom encryption keys generated by an internal HSM to encrypt objects in Amazon S3. There are multiple AWS accounts per department. The security team must distribute and enforce the use of department-specific keys.

Which solution meets these requirements in the MOST secure way?
ACreate an SOP for SSE-C encryption with department keys. Distribute a copy of the department key to each member. Create a bucket policy that allows only encrypted uploads.
BGenerate new encryption keys in AWS KMS for each department with unique aliases. Create key policies limiting use to department accounts. Create bucket and IAM policies enforcing encryption with the department's KMS key.
CCreate an SOP stating users can only store encrypted objects. Enable SSE-C on existing buckets. Create a bucket policy that allows only encrypted uploads.
DUpload each department's HSM-generated key to AWS KMS with a unique alias. Create key policies allowing only department accounts to use their key. Create bucket and IAM policies enforcing encryption with the department's KMS key.
✓ Correct: D. Import HSM-generated keys into KMS + key policies + bucket/IAM policies for enforcement.

How to Think About This:
Three requirements narrow the answer:
1. "Custom keys from internal HSM" → Must use the company's own keys, not AWS-generated (eliminates B)
2. "Enforce" → Must use technical controls (policies), not SOPs/procedures (eliminates A and C)
3. "Most secure" → Don't distribute keys to individuals; use centralized KMS management

Key Concepts:
Importing Custom Key Material into KMS (BYOK) — KMS supports "Bring Your Own Key": you can import key material generated by your own HSM into a KMS key. This gives you:
Your HSM-generated key — meets the custom key requirement
KMS management — key policies, aliases, audit via CloudTrail, integration with S3/EBS/etc.
Centralized distribution — departments use KMS key IDs/aliases in their API calls, never handling raw key material
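As a rough sketch of the BYOK flow per department key (key IDs and file names are placeholders; the HSM-generated key material is wrapped offline with the downloaded wrapping key before import):
aws kms create-key --origin EXTERNAL --description "dept-engineering S3 key"
aws kms get-parameters-for-import --key-id 1234abcd-12ab-34cd-56ef-1234567890ab --wrapping-algorithm RSAES_OAEP_SHA_256 --wrapping-key-spec RSA_2048
aws kms import-key-material --key-id 1234abcd-12ab-34cd-56ef-1234567890ab --encrypted-key-material fileb://wrapped-key.bin --import-token fileb://import-token.bin --expiration-model KEY_MATERIAL_DOES_NOT_EXPIRE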

Three Layers of Enforcement:
1. KMS Key Policy — Controls WHO can use the key. Only department accounts are allowed to use their respective department key. Other departments can't even call kms:Encrypt with the wrong key.
2. S3 Bucket Policy — Denies s3:PutObject unless the request includes encryption. Ensures nothing goes unencrypted.
3. IAM Policy — Allows s3:PutObject only when the encryption header specifies the correct department KMS key ARN. Ensures the RIGHT key is used, not just any encryption.

Example IAM Policy Condition:
"Condition": {"StringEquals": {"s3:x-amz-server-side-encryption-aws-kms-key-id": "arn:aws:kms:us-east-1:123456789012:key/dept-key-id"}}

Why D is correct: Uploading the HSM-generated keys to KMS preserves the company's custom key material while leveraging KMS for secure distribution, access control, and policy enforcement. Key policies restrict usage to authorized departments. Bucket and IAM policies enforce that the correct department key is used for every upload. This is the most secure approach — centralized, auditable, and technically enforced.

Why others are wrong:
A) SOP + distribute keys to members + SSE-C — Multiple security issues: (1) Distributing raw encryption keys to individuals increases exposure risk (keys on laptops, email, etc.). (2) SOPs are procedural controls, not technical controls — there's no enforcement mechanism. A user can ignore the SOP and upload unencrypted objects or use the wrong key. (3) The bucket policy ensures encryption but doesn't enforce WHICH key is used.
B) Generate NEW keys in KMS — Technically sound enforcement (key policies + IAM policies), but uses AWS-generated keys instead of the company's HSM-generated keys. The question specifically requires "custom encryption keys" generated by the company's "internal HSM." Generating new keys in KMS doesn't use the company's existing keys.
C) SOP + SSE-C on existing buckets — Same SOP weakness as A (procedural, not enforceable). Also, SSE-C doesn't enforce which key is used — any customer-provided key would be accepted. You cannot "enable SSE-C on a bucket" as a default — SSE-C must be provided per-request. This solution has no mechanism to ensure department-specific keys are used.
Q55.A company uses AWS Organizations and must establish controls across the organization. Match each use case to the correct organization policy type. (SELECT ALL FIVE)

A) Prevent any account from deleting CloudTrail trails → SCP
B) Enforce standardized billing codes and allowed values for all resources → Tag policy
C) Automatically share KMS keys with specific OUs → RCP
D) Enable cross-account access to specific S3 buckets → RCP
E) Restrict the maximum EBS volume size that can be created → SCP

Select all five to confirm you understand each mapping.
APrevent deleting CloudTrail trails → SCP (deny cloudtrail:DeleteTrail across all accounts)
BEnforce billing code tags → Tag policy (standardize tag keys/values across org)
CShare KMS keys with OUs → RCP (manage cross-account resource access for KMS)
DCross-account S3 access → RCP (manage cross-account resource access for S3)
ERestrict EBS volume size → SCP (limit resource configurations across accounts)
✓ All five mappings are correct.

The Organization Policy Matrix — Memorize This:

SCP (Service Control Policy) — Controls what IAM principals CAN DO
• Restricts the maximum permissions for all accounts in an OU
Deny specific API actions: "No one can delete CloudTrail trails"
Limit resource configurations: "No EBS volumes larger than 100GB"
• Applies to IAM users and roles (the principals making API calls)
• Does NOT affect the management account

Use cases: Block dangerous actions (delete trails, disable GuardDuty), restrict regions, limit instance types, enforce encryption.
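For example, a minimal SCP statement for the CloudTrail case (illustrative only):
{"Version": "2012-10-17", "Statement": [{"Effect": "Deny", "Action": ["cloudtrail:DeleteTrail", "cloudtrail:StopLogging"], "Resource": "*"}]}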

RCP (Resource Control Policy) — Controls how resources are accessed cross-account
• Manages cross-account access for specific services: S3, STS, KMS, SQS, Secrets Manager
Share KMS keys with specific OUs automatically
Enable cross-account S3 access within the organization
• Applies to the resources themselves, not the principals
• Think: "Which OUs/accounts can access this shared resource?"

Key distinction: SCPs control what principals can do. RCPs control how resources are shared.

Tag Policy — Controls how resources are tagged
• Enforce standardized tag keys (e.g., must use "CostCenter" not "cost-center" or "costcenter")
• Enforce allowed tag values (e.g., CostCenter must be one of: "Engineering", "Marketing", "Finance")
• Applies to tagging operations on resources
• Perfect for billing code standardization across the organization

Use cases: Consistent billing allocation, resource ownership tracking, compliance reporting.

Quick Reference Table:
Policy | Controls | Example
SCP | What principals can DO | Deny DeleteTrail, limit EBS size
RCP | How resources are SHARED | Share KMS keys, S3 cross-account
Tag policy | How resources are TAGGED | Enforce billing codes
Backup policy | How resources are BACKED UP | Enforce backup schedules
AI opt-out policy | AI service data usage | Opt out of AI training
Q56.A large company uses AWS Organizations to manage over 1,200 member accounts. The company needs to eliminate long-term root credentials while maintaining the ability to perform necessary privileged actions.

Which solution will meet these requirements?
AUse AWS IAM Identity Center to create delegated administrative roles with elevated permissions. Configure Organizations to restrict root account access to emergency break-glass procedures.
BImplement central management of root credentials combined with temporary root sessions.
CConfigure Organizations to create administrative IAM roles and automate root credential rotation.
DCreate a shared password management system by using AWS Secrets Manager to store root credentials. Implement mandatory monthly rotation schedules.
✓ Correct: B. Central management of root credentials with temporary root sessions.

How to Think About This:
This is the same concept as Q42 but from a different angle. When you see "eliminate long-term root credentials" + "maintain privileged actions" + "many accounts", the answer is centralized root access management with temporary sessions. This is the Organizations feature that removes permanent root credentials from member accounts while allowing time-limited root actions from the management account.

Key Concepts:
Centralized Root Access Management — An AWS Organizations feature (covered in Q42) that:
Removes all permanent root credentials from member accounts (passwords, access keys)
Disables account recovery from member accounts
• Provides temporary root sessions — time-limited, on-demand root-level access from the management account
• Root actions are performed through IAM in the management account, not by logging in as root

Temporary Root Sessions — The key differentiator. When a root-level action is needed (e.g., deleting a locked S3 bucket policy), an authorized user in the management account can initiate a temporary session that has root-equivalent privileges for a limited time. After the session expires, the privileges are gone. No permanent credentials exist.

Why This Scales to 1,200 Accounts:
• No root passwords to manage across 1,200 accounts
• No MFA tokens to track for 1,200 root users
• No credential rotation for 1,200 root accounts
• One management account controls everything centrally

Why B is correct: Central root management eliminates permanent root credentials (meets "eliminate long-term credentials") while temporary sessions maintain the ability to perform root-level actions when needed (meets "maintain privileged actions"). Scalable across 1,200+ accounts with minimal overhead.

Why others are wrong:
A) IAM Identity Center delegated admin roles — Identity Center can create roles with elevated permissions. However, some AWS operations can ONLY be performed by the root user (e.g., closing an account, changing account settings, restoring certain locked resources). Delegated admin roles, no matter how elevated, cannot replace root for these operations. This solution doesn't actually eliminate root credentials.
C) Administrative IAM roles + root credential rotation — Automating root credential rotation still maintains long-term credentials — they just change periodically. Rotation reduces risk but doesn't eliminate the credentials. Also, IAM roles cannot perform root-only operations, so you'd still need root credentials for some tasks.
D) Secrets Manager for root passwords + monthly rotation — Storing root passwords in Secrets Manager is centralized management, and monthly rotation is better than no rotation. However, root long-term credentials still exist (they're just stored in Secrets Manager). Someone could still retrieve and use them. This doesn't eliminate credentials — it just manages them better.
Q57.A security team wants to more efficiently manage security findings across multiple AWS accounts in AWS Organizations. Currently they must assume IAM roles in each account to view findings. They want a central way to view and manage compliance with industry and security standards.

Which solution meets these requirements?
AConfigure AWS CloudTrail in each account to deliver logs to a central S3 bucket. Create an Amazon Athena table to query and analyze the logs from all accounts.
BEnable AWS Security Hub. Designate a delegated administrator account. Ensure all other accounts are added as Organization members. View and manage all findings in the delegated admin account.
CEnable Amazon Detective. Designate a delegated administrator account. Ensure all other accounts are added as Organization members. View and manage all findings in the delegated admin account.
DConfigure CloudTrail to deliver logs to CloudWatch log groups. Stream to a central Amazon Data Firehose. Deliver to Amazon Redshift. Query and analyze findings from all accounts.
✓ Correct: B. AWS Security Hub with delegated administrator for centralized finding management.

How to Think About This:
When you see "central view" + "manage findings" + "compliance standards" + "multiple accounts", the answer is Security Hub. This is Security Hub's core purpose. The key words are "view AND manage" + "compliance standards" — only Security Hub does both.

Remember the security trifecta roles:
Security Hub = aggregate + manage + compliance standards (CIS, PCI, NIST)
Detective = investigate + correlate + attack paths
GuardDuty = detect threats

Key Concepts:
Security Hub Delegated Administrator — When enabled with Organizations:
Automatic member enrollment — All existing and new accounts automatically become members
Centralized findings — The delegated admin sees findings from ALL member accounts in one dashboard
Compliance management — Built-in security standards (CIS Benchmarks, PCI DSS, NIST 800-53, AWS FSBP) are evaluated across all accounts
Finding management — The admin can update finding status, add notes, trigger automated remediation, and manage findings across all accounts from one place
Cross-Region aggregation — Can aggregate findings from all Regions into one view (as we learned in Q13)
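A sketch of the setup (the first call runs in the management account, the second in the delegated admin account; the account ID is a placeholder):
aws securityhub enable-organization-admin-account --admin-account-id 123456789012
aws securityhub update-organization-configuration --auto-enable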

What Security Hub Aggregates:
Findings from GuardDuty, Inspector, Macie, Config, Firewall Manager, IAM Access Analyzer, and 60+ third-party integrations — all normalized into the AWS Security Finding Format (ASFF).

Why B is correct: Security Hub is purpose-built for centralized security finding management with compliance standards. The delegated admin model allows a security account to view and manage findings across all member accounts without assuming roles in each. Built-in compliance standards (CIS, PCI, NIST) meet the "industry standards" requirement.

Why others are wrong:
A) CloudTrail + S3 + Athena — CloudTrail logs API calls, not security findings. The logs are not normalized — you'd need to write custom SQL queries to extract security-relevant information. You cannot manage findings (update status, remediate) from Athena — it's a read-only query tool. Significant operational overhead compared to Security Hub's built-in dashboards.
C) Amazon Detective delegated admin — Detective is for investigation, not for centralized finding management or compliance. Detective can aggregate findings but it requires Security Hub as a source first. You cannot "manage" findings in Detective (update status, trigger remediation). Detective answers "what happened?" not "are we compliant?"
D) CloudTrail + CloudWatch + Firehose + Redshift — This builds a complex 4-service pipeline to achieve what Security Hub does out of the box. CloudTrail logs are not normalized security findings. Custom Redshift queries are needed. You cannot manage findings from Redshift. Massive operational overhead for an inferior result.
Q58.A company has Developer, Test, and Production OUs in AWS Organizations. Developers have full administrative privileges in their accounts. The company wants to allow only certain EC2 instance types in the Developer OU only.

How can the company prevent developer accounts from launching unapproved EC2 instance types?
ACreate a launch template in each Developer OU account to deny ec2:RunInstances for unapproved instance types. Associate the templates with all IAM principals.
BCreate an IAM policy to deny ec2:RunInstances for unapproved instance types. Attach the policy to all IAM principals in all Developer OU accounts.
CUse a managed SCP attached to the organization's root account to deny ec2:RunInstances for unapproved instance types.
DCreate an SCP to deny ec2:RunInstances for unapproved instance types. Attach the policy to the Developer OU.
✓ Correct: D. SCP attached to the Developer OU (not the root).

How to Think About This:
Two key requirements:
1. "Full administrative privileges" — Developers can remove any IAM policy, so IAM-level controls won't work (eliminates B)
2. "Developer OU only" — Restriction must apply only to the Developer OU, not Test or Production (eliminates C)

Only SCPs provide guardrails that developers with admin access cannot bypass, and SCPs can be scoped to a specific OU.

Key Concepts:
Why SCPs Over IAM Policies?
Developers have AdministratorAccess. They can:
• Detach any IAM policy from their own user/role
• Delete IAM policies
• Modify permissions boundaries

But they cannot modify or detach SCPs. SCPs are managed at the organization level, not the account level. Only someone with access to the management account can change SCPs. This makes SCPs the only effective guardrail when users have admin privileges.

SCP Attachment Scope:
Attached to Root — Affects ALL OUs and ALL accounts in the organization
Attached to specific OU — Affects only accounts in that OU and child OUs
Attached to specific account — Affects only that account

SCPs inherit downward. An SCP on the Developer OU affects all accounts in the Developer OU but NOT accounts in the Test or Production OUs.

Example SCP:
"Effect": "Deny", "Action": "ec2:RunInstances", "Resource": "*", "Condition": {"StringNotEquals": {"ec2:InstanceType": ["t3.micro", "t3.small", "t3.medium"]}}
This denies RunInstances unless the instance type is in the approved list.

Why D is correct: An SCP on the Developer OU creates an unbypassable guardrail that restricts instance types only for developer accounts. Developers cannot remove or modify the SCP regardless of their admin privileges. Test and Production OUs are unaffected.

Why others are wrong:
A) Launch templates — Launch templates define instance configurations for launching, primarily used with Auto Scaling groups and EC2 launch workflows. They cannot deny API calls — they're templates, not policies. You cannot attach launch templates to IAM principals to enforce restrictions.
B) IAM policy on all principals — Since developers have full administrative privileges, any developer can simply detach the IAM policy from their own user/role. IAM policies are ineffective when users have admin access. The restriction must come from above the account level (SCPs).
C) SCP on organization root — Attaching to the root affects ALL OUs: Developer, Test, AND Production. The requirement is to restrict only the Developer OU. Production accounts may need to use larger instance types. Attaching to the root is too broad.
Q59.A security team must allocate EC2 instance charges to specific cost centers for all accounts in the Production OU. They want to use a tag named costcenter with a list of allowed values. If the tag is not set with an appropriate value, resource creation of EC2 instances must be prevented regardless of the provisioning method.

What should the security engineer do?
AUse Amazon EventBridge to detect EC2 instance launches. Configure a Lambda function to verify the costcenter tag. If invalid, terminate the instance.
BUse AWS Config with the "required-tags" managed rule for costcenter. Deploy across Production OU with put-organization-config-rule. Create an SCP that denies EC2 creation if the costcenter tag is not present.
CCreate a tag policy in AWS Organizations defining the costcenter tag with allowed values in enforce mode. Attach to the Production OU. Create an SCP that denies EC2 creation if the costcenter tag is not present. Attach the SCP to the Production OU.
DUse AWS CloudFormation Guard to write a policy verifying costcenter tags. Use the preCreate CloudFormation hook in each production account with FailureMode set to FAIL.
✓ Correct: C. Tag policy (enforce allowed values) + SCP (deny creation if tag is missing). Both attached to Production OU.

How to Think About This:
This question requires two controls working together because no single policy handles both requirements:
"Allowed values only" → Tag policy in enforce mode (controls what values are acceptable)
"Tag must be present" → SCP denying creation without the tag (controls that the tag exists)

Neither control alone is sufficient. Tag policies enforce values but not presence. SCPs enforce presence but not specific values.

Key Concepts:
Tag Policy + SCP Combination — The two-layer approach:
Layer 1: Tag Policy (enforce mode)
• Defines acceptable values for costcenter tag (e.g., "Engineering", "Marketing", "Finance")
• In enforce mode, if a user tries to tag an EC2 instance with costcenter=FreeStuff, it's rejected
Limitation: Tag policies enforce the VALUE when a tag is applied, but they don't force the tag to be present. An instance can still be created without any costcenter tag at all.

Layer 2: SCP (deny without tag)
• Denies ec2:RunInstances if the costcenter tag is not present in the request
• Uses condition: "Condition": {"Null": {"aws:RequestTag/costcenter": "true"}} (deny if tag is null/missing)
Limitation: The SCP ensures the tag exists but doesn't validate the value — any value would pass

Together: SCP forces the tag to exist + Tag policy forces the value to be valid = complete enforcement.
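For illustration, a tag policy fragment for Layer 1 using the tag policy @@assign syntax (the department values are placeholders):
{"tags": {"costcenter": {"tag_key": {"@@assign": "costcenter"}, "tag_value": {"@@assign": ["Engineering", "Marketing", "Finance"]}, "enforced_for": {"@@assign": ["ec2:instance"]}}}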

Proactive vs. Reactive:
Proactive (prevent creation) — SCP + Tag policy. The resource is never created if non-compliant.
Reactive (detect after creation) — EventBridge + Lambda, Config rules. The resource is created first, then checked.
The question says "prevent resource creation" — must be proactive.

Why C is correct: The tag policy enforces that costcenter can only use approved values. The SCP ensures that costcenter must be present on all EC2 instances. Both are attached to the Production OU, so they only affect production accounts. Together they provide complete proactive enforcement regardless of provisioning method (console, CLI, SDK, CloudFormation).

Why others are wrong:
A) EventBridge + Lambda to terminate — This is reactive. The instance is created first, THEN the Lambda function checks the tag and terminates it. There's a window where a non-compliant instance exists. The question requires preventing creation, not detecting and cleaning up after.
B) Config rule + SCP — The SCP denies creation without the tag — but only checks tag presence, not value. An instance with costcenter=AnythingAtAll would pass the SCP. Config would detect the invalid value, but that's reactive (the resource already exists). Config alerts on non-compliance; it doesn't prevent creation.
D) CloudFormation Guard hook — This works ONLY for CloudFormation deployments. Users can still create instances via the AWS Console, CLI, SDK, or Terraform and bypass the CloudFormation hook entirely. The question says "regardless of the provisioning method" — CloudFormation Guard doesn't cover all methods.
Q60.A company needs to deploy all new Amazon EC2 instances with consistent security configurations across AWS accounts. They want a standardized deployment process that minimizes ongoing effort. All accounts are in AWS Organizations with AWS Control Tower.

Which solution meets these requirements in the MOST operationally efficient way?
ACreate CloudFormation templates for EC2 instances with security configs. Use StackSets to deploy across accounts. Use IAM policies to restrict EC2 creation. Monitor compliance with AWS Config rules.
BCreate baseline AMIs with EC2 Image Builder. Create CloudFormation templates using the AMIs. Publish to AWS Service Catalog. Configure Service Catalog portfolios to share across the organization. Enforce deployment only through Service Catalog via IAM policies.
CCreate AWS Config rules to check EC2 instance configurations. Deploy rules via a conformance pack across the organization. Use Systems Manager Automation runbooks to remediate noncompliant instances.
DCreate a Lambda function to check EC2 configs at launch. Use EventBridge rules to trigger the function. Use IAM policies and roles to control launch permissions based on configurations.
✓ Correct: B. EC2 Image Builder + Service Catalog portfolios shared across the organization.

How to Think About This:
When you see "standardized deployment" + "consistent configurations" + "across accounts" + "most efficient", the answer is Service Catalog. Service Catalog lets you create a "menu" of pre-approved, pre-configured products that users can deploy. Combined with Image Builder for automated AMI creation, this is the most hands-off approach.

The key distinction is proactive standardization (give users only approved options) vs. reactive compliance (let users create anything, then detect/fix non-compliance).

Key Concepts:
EC2 Image Builder — Automates the entire AMI lifecycle:
Build — Creates AMIs from base images with security configurations (hardening, patches, agents)
Test — Runs automated tests on built images
Distribute — Shares AMIs across accounts and Regions
Maintain — Automatically rebuilds when new patches are available
No ongoing manual effort for AMI maintenance.

AWS Service Catalog — A self-service portal for approved resources:
Products — CloudFormation templates that define approved configurations (e.g., "Secure Web Server" using the hardened AMI)
Portfolios — Collections of products shared across accounts in the organization
Constraints — Control how products can be launched (launch constraints, template constraints)
IAM enforcement — Restrict EC2 creation to Service Catalog only, so users can't launch unapproved instances

The Pipeline:
Image Builder (automated AMI) → CloudFormation template (uses AMI) → Service Catalog product → Shared portfolio → Users self-service deploy
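As a sketch, sharing a portfolio with the entire organization can be done in one call (the portfolio and organization IDs are placeholders):
aws servicecatalog create-portfolio-share --portfolio-id port-abc123 --organization-node Type=ORGANIZATION,Value=o-abc123def4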

Why B is correct: Image Builder automates AMI maintenance (minimal ongoing effort). Service Catalog provides centralized governance with pre-approved configurations shared across all accounts. Users can only deploy from approved products. This is proactive standardization with the least operational overhead.

Why others are wrong:
A) CloudFormation StackSets + IAM + Config — StackSets deploy templates across accounts, but you must maintain templates, manage IAM policies, and monitor Config rules across all accounts. More operational pieces to manage. StackSets deploy infrastructure; Service Catalog provides a self-service portal with governance — it's a higher-level abstraction with less overhead.
C) Config + Conformance Pack + SSM remediation — This is a reactive approach: let users create whatever they want, then detect non-compliance and auto-remediate. The instance exists in a non-compliant state between creation and remediation. The question asks for standardized deployment (proactive), not detect-and-fix (reactive).
D) Lambda + EventBridge + IAM — Custom Lambda code requires building, testing, deploying, and maintaining across accounts. Managing Lambda functions and EventBridge rules at scale is labor-intensive. Using purpose-built services (Image Builder + Service Catalog) designed for this exact use case is far more efficient.
Q61.A company uses AWS Private Certificate Authority (CA) to manage internal certificates. A development team in a separate AWS account needs to request and issue certificates from the company's CA, following security best practices.

Which solution meets these requirements with the LEAST operational overhead?
ACreate a cross-account IAM role in the CA account with certificate issuance permissions. Allow the development account to assume the role. Configure AWS Private CA to require MFA for certificate issuance.
BShare the AWS Private CA resource using AWS Resource Access Manager (AWS RAM) with the development team's account. Grant appropriate certificate issuance permissions using IAM policies in the development account.
CExport the AWS Private CA root certificate and private key to the development team's account. Allow them to issue certificates locally.
DCreate a public ACM certificate with AWS Private CA as the root authority. Import the certificate into the development account. Allow the team to create certificate requests from the imported certificate.
✓ Correct: B. Share AWS Private CA via AWS RAM + IAM policies in the development account.

How to Think About This:
When you see "share an AWS resource cross-account" + "least operational overhead", check if AWS RAM supports that resource type. AWS Private CA is a RAM-shareable resource. RAM provides controlled sharing without complex cross-account role setups.

Remember from Q1: AMIs are NOT shareable via RAM. But Private CA IS. Know which resources RAM supports.

Key Concepts:
AWS Resource Access Manager (RAM) — Lets you share AWS resources across accounts within or outside your organization. For AWS Private CA:
• The CA owner shares the Private CA via RAM with the development account
• The development team can then request certificates directly from the shared CA
• IAM policies in the development account control who can issue certificates and with what parameters
• No cross-account role assumption needed for each request
• The CA owner maintains full control — can revoke sharing at any time

RAM-Shareable Resources (exam favorites):
AWS Private CA — Share certificate authorities
VPC Subnets — Share subnets for VPC sharing
Transit Gateway — Share for multi-VPC connectivity
Route 53 Resolver rules — Share DNS resolution
License Manager configs — Share license configurations
NOT shareable via RAM: AMIs, S3 buckets, KMS keys (use native mechanisms)

The Flow:
CA Account creates RAM share (Private CA) → Dev Account accepts share → Dev team uses IAM policy to request certs → CA issues certs centrally
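A hedged CLI sketch of that flow (the CA ARN, account IDs, and domain name are placeholders):
• CA account: aws ram create-resource-share --name private-ca-share --resource-arns arn:aws:acm-pca:us-east-1:111111111111:certificate-authority/example-ca-id --principals 222222222222
• Dev account (after the share is accepted): aws acm request-certificate --domain-name app.internal.example.com --certificate-authority-arn arn:aws:acm-pca:us-east-1:111111111111:certificate-authority/example-ca-id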

Why B is correct: RAM provides native, secure cross-account resource sharing for Private CA. The development team can request certificates without assuming roles each time. IAM policies in the dev account precisely control permissions. Centralized CA management is maintained. Least operational overhead — no custom roles, no key export, no complex setups.

Why others are wrong:
A) Cross-account IAM role + MFA — Two issues: (1) Developers must assume a role for each certificate request — more operational overhead than RAM sharing. (2) AWS Private CA does not directly support MFA requirements for certificate issuance. This solution doesn't follow AWS best practices for resource sharing.
C) Export root certificate and private key — This is a critical security violation. Exporting the CA's private key creates copies of the most sensitive cryptographic material. If the private key is compromised, the entire certificate chain is compromised. You lose centralized control, audit capability, and the ability to revoke access. Never export CA private keys.
D) Public ACM certificate from Private CA — This is technically incorrect. Public ACM certificates are issued by public CAs (Amazon Trust Services), not private CAs. You cannot "create certificate requests from an imported certificate" — imported certificates are end-entity certificates, not CA certificates. The trust chain doesn't work this way.
Q62.A security team needs a comprehensive IAM audit across a multi-account organization. They must:

• Identify all IAM users and their group memberships
• Detect users with direct policy attachments bypassing group-based access
• Identify groups with outdated managed policies
• Generate compliance reports
• Automate the process and store results for historical tracking

Which solution meets these requirements with the LEAST operational overhead?
ACreate a Lambda function using iam list-users and list-groups-for-user commands. Store results in S3. Use Amazon QuickSight for visualization. Run across all accounts via Organizations.
BUse CloudFormation StackSets to deploy Config rules for IAM configurations. Configure Config aggregator for multi-account visibility. Use Parameter Store for policy version tracking. Use Security Hub for analysis.
CEnable IAM Access Analyzer across the organization. Create a Lambda function using iam get-account-authorization-details for detailed IAM analysis. Use Amazon EventBridge to schedule regular scans. Use Amazon Athena to query historical data in S3.
DCreate a Systems Manager Automation runbook using iam list-policies and get-group commands. Store results in DynamoDB. Use AWS Glue for ETL processing into analysis-ready formats.
✓ Correct: C. IAM Access Analyzer + get-account-authorization-details + EventBridge scheduling + Athena for historical queries.

How to Think About This:
The question asks for comprehensive IAM data + automation + historical tracking. The key API is get-account-authorization-details — a single API call that returns EVERYTHING about IAM in an account (users, groups, roles, policies, relationships). Combined with IAM Access Analyzer for security analysis, EventBridge for scheduling, and Athena+S3 for historical querying, this is the most efficient pipeline.

Key Concepts:
iam:GetAccountAuthorizationDetails — The "one API to rule them all" for IAM auditing. A single call returns:
• All IAM users and their group memberships
• All policies attached to users (direct attachments)
• All policies attached to groups
• All IAM roles and their trust policies
• All policy versions (to detect outdated policies)
• All inline and managed policy documents

Compare this to option A, which uses list-users + list-groups-for-user (separate calls per user that still miss policy details), or to option D, which uses list-policies + get-group (less efficient and misses user-policy relationships).

The Complete Audit Pipeline:
EventBridge (scheduled trigger) → Lambda (calls get-account-authorization-details across accounts) → S3 (stores results as JSON) → IAM Access Analyzer (identifies security issues) → Athena (queries historical data for compliance reports)
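A minimal sketch of the collection step, shown as CLI calls for clarity (inside the scheduled Lambda these would be the equivalent SDK calls; the bucket name and date partition are placeholders):
• CLI: aws iam get-account-authorization-details > iam-details.json   # one call per account, typically made via an assumed audit role
• CLI: aws s3 cp iam-details.json s3://example-iam-audit-results/account=111111111111/dt=2024-01-01/iam-details.json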

IAM Access Analyzer — Adds automated security analysis on top of the raw data:
• Identifies resources shared with external entities
• Detects unused permissions (for least privilege)
• Validates policies against best practices
• Works at the organization level
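Turning the analyzer on at the organization level is a one-time step (analyzer names are placeholders; the unused-access analyzer is a separate, newer analyzer type):
• CLI: aws accessanalyzer create-analyzer --analyzer-name org-external-access --type ORGANIZATION
• CLI: aws accessanalyzer create-analyzer --analyzer-name org-unused-access --type ORGANIZATION_UNUSED_ACCESS   # flags unused roles and permissions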

Athena for Historical Tracking — S3 stores each scan's results as timestamped files. Athena queries them with SQL to compare IAM configurations over time, generate compliance reports, and detect changes.
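A hedged example of a reporting query, assuming the JSON scans have been cataloged as an Athena table named iam_audit with columns dt, user_name, and attached_policies (all placeholder names):
• CLI: aws athena start-query-execution --work-group primary --result-configuration OutputLocation=s3://example-iam-audit-results/athena/ --query-string "SELECT dt, user_name FROM iam_audit WHERE cardinality(attached_policies) > 0"   # users with direct policy attachments, per scan date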

Why C is correct: get-account-authorization-details provides the most comprehensive IAM data in a single API call. Access Analyzer identifies security issues automatically. EventBridge automates the scheduling. S3 + Athena provides historical tracking and compliance reporting. Each component is purpose-built for its role with minimal operational overhead.

Why others are wrong:
A) list-users + list-groups-for-user + QuickSight — These individual API calls provide incomplete data — no policy analysis, no policy versions, no inline policies. May hit API throttling limits across many accounts (each user requires a separate list-groups call). QuickSight is a BI dashboarding tool, not designed for detailed IAM compliance reporting.
B) StackSets + Config + Parameter Store + Security Hub — Too many moving parts: (1) StackSets to deploy Config rules = operational overhead. (2) Parameter Store is not designed for policy version tracking at scale. (3) Security Hub aggregates security findings but is not designed for detailed IAM user/group/policy analysis or historical audit tracking.
D) SSM Automation + list-policies + get-group + DynamoDB + Glue — Individual API calls (list-policies, get-group) are less efficient than get-account-authorization-details. AWS Glue for ETL adds significant operational overhead (ETL jobs to build, manage, and monitor). DynamoDB is not ideal for historical analysis compared to S3 + Athena.
Q63.A security architect must confirm that subnets in Amazon VPC are not assigned a public IP address before the subnets are provisioned. The architect also wants to automatically detect and centralize security findings.

What should the security architect do with the FEWEST configuration steps?
AUse AWS Config with detective evaluation triggered by VPC configuration changes. Use AWS Lambda to send Config rule evaluations to AWS Security Hub.
BUse AWS Config with detective evaluation running periodically. Use Amazon EventBridge to send Config rule evaluations to AWS Security Hub.
CWrite code in AWS Lambda to invoke when a VPC is launched. Use AWS Step Functions to coordinate sending Config rule evaluations to AWS Security Hub.
DUse AWS Config with proactive evaluation. Use AWS Security Hub to view managed and custom rule evaluation results from AWS Config.
✓ Correct: D. AWS Config proactive evaluation + Security Hub (automatic integration).

How to Think About This:
Two key phrases:
1. "Before the subnets are provisioned" → Proactive evaluation (not detective — detective evaluates AFTER provisioning)
2. "Fewest configuration steps" → Use built-in integration (Config → Security Hub is automatic, no Lambda/EventBridge needed)

Key Concepts:
AWS Config Evaluation Modes — This is a critical distinction:
Detective evaluation — Evaluates resources AFTER they are provisioned. Triggered by configuration changes or runs periodically. It tells you "this existing resource is non-compliant." Reactive.
Proactive evaluation — Evaluates resources BEFORE they are provisioned. Checks resource configurations against rules before CloudFormation creates them. It tells you "this resource WOULD BE non-compliant if created." Preventive.

The question says "before the subnets are provisioned" — only proactive evaluation does this. Detective evaluation (options A and B) checks resources after they already exist.

Config → Security Hub Integration — This is automatic and built-in. When Config evaluates a rule (whether proactive or detective), the results are automatically sent to Security Hub as security findings. No Lambda function, no EventBridge rule, no Step Functions needed. Just enable both services.

This means:
• Option A is wrong: Lambda to send Config results to Security Hub is unnecessary (it's automatic)
• Option B is wrong: EventBridge to send Config results to Security Hub is unnecessary (it's automatic)
• Option C is wrong: Lambda + Step Functions is a completely custom solution for something that's built-in

Proactive Evaluation with CloudFormation:
Proactive Config rules work with CloudFormation hooks. Before CloudFormation creates a resource, the hook triggers the Config rule to evaluate the proposed configuration. If the resource would be non-compliant (e.g., subnet with public IP), the creation can be blocked.
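A hedged sketch of the setup, assuming the managed rule subnet-auto-assign-public-ip-disabled supports proactive mode (the rule name and subnet configuration below are examples):
• CLI: aws configservice put-config-rule --config-rule '{"ConfigRuleName":"subnet-no-public-ip","Source":{"Owner":"AWS","SourceIdentifier":"SUBNET_AUTO_ASSIGN_PUBLIC_IP_DISABLED"},"EvaluationModes":[{"Mode":"PROACTIVE"}]}'
• CLI: aws configservice start-resource-evaluation --evaluation-mode PROACTIVE --resource-details ResourceId=subnet-proposed,ResourceType=AWS::EC2::Subnet,ResourceConfiguration='{"MapPublicIpOnLaunch":true}',ResourceConfigurationSchemaType=CFN_RESOURCE_SCHEMA   # evaluate a proposed configuration before provisioning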

Why D is correct: Proactive evaluation checks resources before provisioning (meets "before provisioned"). Security Hub automatically receives Config results (meets "centralize findings" with fewest steps). Two services, zero custom code, zero middleware.

Why others are wrong:
A) Detective evaluation + Lambda to Security Hub — Two problems: (1) Detective evaluation checks resources AFTER provisioning, not before. (2) Lambda is unnecessary — Config sends results to Security Hub automatically. More steps, wrong evaluation mode.
B) Detective evaluation (periodic) + EventBridge to Security Hub — Same two problems: (1) Periodic detective evaluation only checks existing resources, not resources being created. (2) EventBridge is unnecessary — the Config-to-Security Hub integration is built-in. Also, periodic means there's a delay between checks.
C) Lambda + Step Functions — Building a custom solution with Lambda code and Step Functions orchestration when Config proactive evaluation + Security Hub does it natively. Maximum configuration steps for a problem that has a two-service built-in solution.