Q1. Matthew is reviewing the following IAM policy that has been attached to a developer role in his organization. He wants to understand what this policy enforces.
{
"Version": "2012-10-17",
"Statement": [{
"Sid": "RegionLock",
"Effect": "Deny",
"NotAction": ["aws-portal:*", "iam:*", "organizations:*", "support:*", "sts:*"],
"Resource": "*",
"Condition": {
"StringNotEquals": {
"aws:RequestedRegion": ["eu-central-1", "eu-west-1"]
}
}
}]
}
What does this policy do?
✓ Correct: C. It denies all actions outside eu-central-1 and eu-west-1, except for billing, IAM, Organizations, Support, and STS which remain usable in any region.
How to Think About This:
This policy stacks three negations: Deny + NotAction + StringNotEquals. The key to reading it is to resolve each negation step by step rather than trying to parse it all at once.
Key Concepts:
Effect: Deny — This policy blocks actions. It does not grant any permissions. A separate Allow policy is still needed for users to actually do anything.
NotAction — This is the inverse of Action. Instead of specifying which actions to deny, it specifies which actions are exempt from the deny. The five exempted services (aws-portal, iam, organizations, support, sts) are all global services that do not operate in a specific region. Restricting them by region would break core account functionality.
StringNotEquals on aws:RequestedRegion — This condition triggers the deny only when the request is made outside the listed regions. If the region is eu-central-1 or eu-west-1, the condition does not match and the deny does not fire.
Resolving the logic:
1. Deny applies to everything...
2. ...except the five global services (NotAction exempts them)...
3. ...but only when the region is NOT eu-central-1 or eu-west-1 (Condition).
Result: Regional services (EC2, S3, Lambda, etc.) are blocked outside the two EU regions. Global services work everywhere.
Why C is correct: This accurately describes the three-negation logic. Actions outside the two EU regions are denied, but the five global services are carved out so they continue to work regardless of region. This is a standard region-restriction policy used in AWS to enforce data residency or compliance requirements.
Why others are wrong:
A) It allows all actions only in eu-central-1 and eu-west-1 — This policy has Effect: Deny, not Allow. It does not grant any permissions. A deny-only policy acts as a guardrail; separate Allow policies are still required. Also, it does not "explicitly allow" the five services — it merely exempts them from the deny.
B) It denies billing, IAM, Organizations, Support, and STS unless in EU regions — This is backwards. NotAction means those five services are excluded from the deny. Everything else is denied outside the EU regions, not those five.
D) It denies all actions in eu-central-1 and eu-west-1 — This reverses the region condition. StringNotEquals means the deny fires when the region is NOT in the list. Actions inside eu-central-1 and eu-west-1 are unaffected by this policy.
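The three-negation logic can be traced with a small, self-contained Python sketch. This models only the single Deny statement above (real IAM evaluation considers many more inputs), and the function name is ours, not an AWS API:

```python
# Illustrative model of how the RegionLock Deny statement evaluates a request.
# This is a teaching sketch, not a real IAM policy evaluator.

EXEMPT_PREFIXES = ("aws-portal:", "iam:", "organizations:", "support:", "sts:")
ALLOWED_REGIONS = {"eu-central-1", "eu-west-1"}

def statement_denies(action: str, region: str) -> bool:
    """Return True if the RegionLock Deny statement blocks this request."""
    # NotAction: the statement simply does not apply to the five exempted
    # global services, so they are never denied by it.
    if action.startswith(EXEMPT_PREFIXES):
        return False
    # StringNotEquals on aws:RequestedRegion: the deny fires only when the
    # request is made OUTSIDE the listed regions.
    return region not in ALLOWED_REGIONS

# Regional service outside the EU regions: denied.
print(statement_denies("ec2:RunInstances", "us-east-1"))   # True
# Regional service inside an allowed region: not denied by this statement.
print(statement_denies("ec2:RunInstances", "eu-west-1"))   # False
# Global service in any region: exempted by NotAction.
print(statement_denies("iam:CreateRole", "us-east-1"))     # False
```

Note that `False` here only means "this statement does not deny the request"; a separate Allow policy is still required before the action actually succeeds.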
Q2. A company wants to restrict all AWS usage to the us-east-1 and us-west-2 regions. A junior administrator creates the following SCP and attaches it to an organizational unit (OU):
{
"Version": "2012-10-17",
"Statement": [{
"Sid": "RegionRestrict",
"Effect": "Deny",
"Action": "*",
"Resource": "*",
"Condition": {
"StringNotEquals": {
"aws:RequestedRegion": ["us-east-1", "us-west-2"]
}
}
}]
}
After applying this policy, developers report that they can launch EC2 instances in us-east-1 but they cannot create IAM roles or view their AWS billing dashboard. What is the most likely cause and the correct fix?
✓ Correct: B. The policy uses "Action": "*" instead of "NotAction", which denies global services outside the listed regions. The fix is to use "NotAction" to exempt global services.
How to Think About This:
When you see a region-restriction policy that breaks global services, the root cause is almost always that Action: * was used instead of NotAction. Global services like IAM, STS, Billing, and Organizations do not have a regional endpoint — their API calls are not considered to be "in" us-east-1 or us-west-2, so a region-based deny blocks them entirely.
Key Concepts:
Global Services vs. Regional Services — Most AWS services (EC2, S3, Lambda, RDS) operate within specific regions. However, several services are global — they don't belong to any particular region:
• IAM (Identity and Access Management) — roles, users, policies are global
• aws-portal / billing — cost management is account-wide
• STS (Security Token Service) — has global and regional endpoints
• AWS Organizations — account management is global
• AWS Support — support cases are not region-specific
Action vs. NotAction — "Action": "*" matches every AWS API call, including global services. "NotAction": ["iam:*", "aws-portal:*", ...] matches every API call except the listed ones, effectively carving out exemptions. In region-restriction policies, NotAction is critical for keeping global services functional.
The Correct Pattern:
Replace "Action": "*" with:
"NotAction": ["iam:*", "aws-portal:*", "sts:*", "support:*", "organizations:*"]
This denies all regional services outside us-east-1 and us-west-2, while leaving global services unaffected by the region condition.
Why B is correct: This correctly identifies both the root cause (global services blocked by Action: * combined with a region condition they can't satisfy) and the correct fix (NotAction to exempt global services). This is the AWS-recommended pattern for region-restriction SCPs.
Why others are wrong:
A) Replace with specific service actions — While listing specific services would technically work for those services, it misses the fundamental issue: global services need to be exempted, not just omitted. Also, maintaining a list of every regional service action is impractical and error-prone. NotAction is the correct mechanism because it automatically covers all current and future regional services while explicitly exempting only the small set of global ones.
C) Add a separate Allow statement in the SCP — SCPs work differently from IAM policies. An explicit Deny in an SCP always overrides any Allow — even within the same SCP. Adding an Allow for IAM and Billing would have no effect because the Deny with Action: * still matches those services and takes precedence. The deny must be scoped to not match global services in the first place.
D) Use aws:PrincipalOrgID condition — aws:PrincipalOrgID is a condition key that checks whether the calling principal belongs to a specific AWS Organization. It has nothing to do with region restrictions or global service access. This would not solve the problem of global services being denied by the region condition.
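Putting the fix together, the corrected SCP keeps the same Sid, regions, and condition; only the Action line changes to NotAction:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "RegionRestrict",
    "Effect": "Deny",
    "NotAction": ["iam:*", "aws-portal:*", "sts:*", "support:*", "organizations:*"],
    "Resource": "*",
    "Condition": {
      "StringNotEquals": {
        "aws:RequestedRegion": ["us-east-1", "us-west-2"]
      }
    }
  }]
}
```

With this version, EC2 in eu-west-1 is still denied, but creating IAM roles and viewing the billing dashboard work again because those calls never match the statement.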
Q3. StreamWave Inc. runs a music streaming platform that serves audio files from an Amazon S3 bucket. The S3 bucket currently allows public read access so users can stream tracks directly. The engineering team discovers that dozens of third-party websites are embedding direct links to StreamWave's S3 objects, causing significant bandwidth costs without generating any ad revenue for the company.
Which approach would most effectively prevent unauthorized hotlinking while continuing to serve content to legitimate users?
✓ Correct: C. Remove public read access and use pre-signed URLs with short expiry times.
How to Think About This:
When you see "external sites linking to your S3 content" + "unauthorized usage / bandwidth costs", the problem is hotlinking. The mental model is: your bucket is publicly readable, so anyone with the URL can embed it. The fix must make the URL useless without authorization. Pre-signed URLs achieve this because they are cryptographically signed and expire after a set duration (e.g., 1 hour), meaning any link shared by a third-party site quickly becomes a dead link.
Key Concepts:
S3 Pre-Signed URLs — A pre-signed URL is generated server-side using your AWS credentials. It encodes the bucket, object key, expiration time, and a cryptographic signature into the URL itself. Anyone with the URL can access the object, but only until it expires. Your application generates a fresh pre-signed URL each time a legitimate user requests a track, so your own site always works while hotlinked URLs quickly die.
How the attack works (Hotlinking) — If your S3 bucket allows public read, the object URL is permanent and predictable (e.g., https://bucket.s3.amazonaws.com/song.mp3). Any website can embed this URL in an <audio> tag. Every play costs you S3 data transfer fees, but the third-party site earns the ad revenue from their visitors.
Why C is correct: By making the bucket private and serving content only through pre-signed URLs with short TTLs, you guarantee that: (1) Direct S3 URLs return 403 Forbidden. (2) Only your application can generate valid access URLs. (3) Hotlinked URLs expire and become useless within minutes or hours. This is the industry-standard approach for protecting private content on S3.
Why others are wrong:
A) CloudFront distribution for caching — CloudFront improves performance and reduces S3 egress costs through caching, but it does not solve the access control problem. If the origin bucket is still publicly readable (or CloudFront is configured with public access), external sites can hotlink through the CloudFront URL just as easily. CloudFront can help when paired with signed URLs or signed cookies, but on its own it does not prevent hotlinking.
B) Migrate to EBS volume — Moving audio files from S3 to an EBS volume is a step backwards in every dimension: EBS is single-AZ (no built-in redundancy), not designed for serving web content at scale, more expensive per GB, and requires managing an EC2 instance. It also does not inherently prevent hotlinking — the web server would still need access controls.
D) Block IPs in Security Groups — Security Groups are attached to EC2 instances and ENIs, not to S3 buckets. S3 is a fully managed service with no Security Group support. Even if you used S3 bucket policies to deny specific IPs, it would be a game of whack-a-mole — new offending sites appear constantly, and they may share IPs with legitimate users (CDNs, shared hosting).
Q4. A development team has implemented pre-signed URLs for their S3-hosted media files to prevent hotlinking. However, a security engineer notices that some pre-signed URLs are being shared on social media and reused by unauthorized users before they expire. The current expiry time is set to 24 hours.
The team wants to tighten security while minimizing impact on the user experience for their paying subscribers. Which combination of actions best addresses this concern?
✓ Correct: B. Reduce the pre-signed URL expiry and serve through CloudFront with Origin Access Control (OAC).
How to Think About This:
This question builds on the hotlinking concept from Q3 but adds a twist: pre-signed URLs are already in place but the expiry window is too long (24 hours), giving shared links enough time to be reused. The fix requires two things: (1) shrink the time window, and (2) add a layer that prevents direct S3 access entirely.
Key Concepts:
Pre-Signed URL Expiry Tuning — A 24-hour expiry means a leaked URL works for an entire day. Reducing it to 5–15 minutes drastically limits the reuse window. Your application simply generates a fresh URL each time a subscriber clicks play — legitimate users never notice the difference, but shared links become worthless within minutes.
CloudFront + Origin Access Control (OAC) — OAC is the modern replacement for Origin Access Identity (OAI). When configured, the S3 bucket policy is set to only allow requests from your CloudFront distribution. This means: (1) Direct S3 URLs always return 403 — even if someone constructs a valid pre-signed URL to S3 directly. (2) All content must flow through CloudFront, where you can add additional controls like signed URLs, signed cookies, geo-restrictions, and WAF rules. (3) CloudFront caching reduces S3 egress costs.
Defense in Depth — The combination of short-lived pre-signed URLs + CloudFront OAC creates two layers of protection: the URL expires quickly (time-based control) and only CloudFront can reach S3 (network-based control).
Why B is correct: Shortening the expiry window directly addresses the "shared URL reuse" problem, while CloudFront with OAC locks down S3 so that even if someone obtains a direct S3 URL, it is useless. Together they provide defense in depth without impacting legitimate subscribers (who get fresh URLs on every request).
Why others are wrong:
A) Public bucket with aws:Referer condition — This is actually a downgrade in security. The Referer header is sent by the browser and can be easily spoofed. Any user can set a custom Referer header using browser extensions or command-line tools like curl -H "Referer: https://yoursite.com". Moving from pre-signed URLs (cryptographic) to Referer checks (spoofable header) weakens the security posture significantly.
C) S3 server-side encryption with KMS — Server-side encryption (SSE-KMS) protects data at rest on S3's storage disks. It does not control who can download the file. When a user has s3:GetObject permission (or a valid pre-signed URL), S3 automatically decrypts the data before serving it. Encryption does not prevent hotlinking or URL sharing — it protects against a different threat (physical disk theft, unauthorized AWS account access to raw storage).
D) S3 Object Lock in Governance mode — Object Lock prevents objects from being deleted or overwritten (write protection / immutability). It is designed for compliance and data retention (e.g., WORM — Write Once Read Many). It has absolutely nothing to do with controlling read access or preventing unauthorized downloads.
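As a sketch of the OAC half of the fix, the S3 bucket policy that restricts reads to a single CloudFront distribution follows this pattern (bucket name, account ID, and distribution ID below are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowCloudFrontOACOnly",
    "Effect": "Allow",
    "Principal": { "Service": "cloudfront.amazonaws.com" },
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::streamwave-audio/*",
    "Condition": {
      "StringEquals": {
        "AWS:SourceArn": "arn:aws:cloudfront::111122223333:distribution/EXAMPLEID"
      }
    }
  }]
}
```

With Block Public Access enabled on the bucket, this is the only read grant, so every request must arrive through that one distribution; direct S3 URLs return 403.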
Q5. NovaTech Solutions is migrating its development and QA environments to AWS using separate accounts for each environment. Both accounts are linked to a central management account through AWS Organizations with Consolidated Billing enabled.
To control costs, NovaTech wants administrators in the management account to be able to stop, terminate, or delete resources running in the Dev and QA accounts when spending thresholds are exceeded.
What is the best approach to grant this access?
✓ Correct: C. Create IAM users with sts:AssumeRole permission in the management account, and create scoped cross-account roles in the target accounts.
How to Think About This:
When you see "management account needs to manage resources in member accounts", the pattern is always cross-account IAM roles. The key principle is least privilege: the roles in the target accounts should only have the specific permissions needed (stop, delete, terminate) — not full admin. The role lives in the target account (where the resources are), and the trust policy on that role specifies who can assume it.
Key Concepts:
Cross-Account Role Assumption (sts:AssumeRole) — This is the standard AWS mechanism for granting access across accounts. It involves three pieces:
• Trust policy (on the role in the target account) — specifies which AWS account or principal is allowed to assume the role. Example: "Principal": {"AWS": "arn:aws:iam::123456789012:root"}
• Permission policy (on the role in the target account) — defines what the role can do once assumed. In this case: ec2:StopInstances, ec2:TerminateInstances, etc.
• IAM policy (on the user in the management account) — grants sts:AssumeRole for the target role ARN.
Least Privilege — The question asks administrators to stop, delete, and terminate resources — not to do everything. The cross-account role should be scoped to exactly those actions. Giving full Administrator access violates least privilege and creates unnecessary risk.
Consolidated Billing vs. IAM Access — These are completely separate concepts. Consolidated Billing groups multiple accounts under one payer for billing purposes (volume discounts, single invoice). It grants zero IAM permissions — no cross-account resource access whatsoever.
Why C is correct: This follows the AWS-recommended cross-account access pattern with least privilege. The roles in Dev and QA are scoped to only stop/delete/terminate permissions (not full admin). The management account users have only sts:AssumeRole permission. The trust policy explicitly grants the management account access. Every component follows least privilege.
Why others are wrong:
A) Full Admin + inheriting permissions from management account — Two problems: (1) Full Administrator permissions in the management account violates least privilege. (2) Cross-account roles do not inherit permissions from the source account. When you assume a role, you temporarily give up your original permissions and operate with only the permissions attached to the assumed role. There is no inheritance mechanism.
B) Cross-account role in the management account with full Admin — The role must be created in the target accounts (Dev and QA), not the management account. A role in the management account cannot grant access to resources in other accounts. Also, full Admin access violates least privilege — the requirement is only to stop, delete, and terminate.
D) Consolidated Billing grants resource access — This is a common misconception. Consolidated Billing is purely a billing feature. It combines usage across accounts for a single invoice and volume discounts. It does not create any IAM trust relationships, roles, or permissions. An IAM user in the management account has zero ability to manage resources in member accounts through billing alone.
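The three pieces can be sketched concretely. Account IDs, the role name, and the exact action list below are illustrative placeholders:

```json
Trust policy (on the role in the Dev/QA account):
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::123456789012:root" },
    "Action": "sts:AssumeRole"
  }]
}

Permission policy (on the same role, scoped to the cost-control actions):
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["ec2:StopInstances", "ec2:TerminateInstances", "rds:DeleteDBInstance"],
    "Resource": "*"
  }]
}

Policy on the management-account IAM user:
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "sts:AssumeRole",
    "Resource": "arn:aws:iam::222233334444:role/CostControlRole"
  }]
}
```

Note how each piece is minimal: the role grants only the destructive actions required, and the user is granted nothing except the ability to assume that one role.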
Q6. A security architect is setting up cross-account access so that a security team in Account A (111111111111) can investigate incidents in Account B (222222222222). The following IAM role trust policy is configured in Account B:
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::111111111111:root"
},
"Action": "sts:AssumeRole",
"Condition": {
"Bool": {
"aws:MultiFactorAuthPresent": "true"
}
}
}]
}
A security analyst in Account A tries to assume this role but receives an Access Denied error. The analyst has a valid IAM user with an sts:AssumeRole policy that targets the correct role ARN. What is the most likely cause?
✓ Correct: D. The analyst has not authenticated with MFA before attempting to assume the role.
How to Think About This:
When you see Access Denied on sts:AssumeRole and the trust policy looks correct, check the Conditions. A condition like
Key Concepts:
Trust Policy Conditions — A trust policy can include conditions that must be met in addition to the principal match.
How MFA Works with AssumeRole — To satisfy this condition, the analyst must:
1. Call
2. Use the temporary credentials returned by GetSessionToken to call
If using the AWS Console, the console handles MFA at login time. If using the CLI, the analyst must explicitly pass
"root" in a Principal — In a trust policy,
Why D is correct: The trust policy explicitly requires
Why others are wrong:
A) "root" only allows the root user — This is a very common misconception. In a trust policy,
B) Accounts must be in the same Organization — Cross-account role assumption works between any two AWS accounts regardless of whether they are in the same Organization. You just need a matching trust policy in the target account and an
C) Empty permission policy — An empty permission policy would cause Access Denied when the analyst tries to do something after assuming the role, not during the
How to Think About This:
When you see Access Denied on sts:AssumeRole and the trust policy looks correct, check the Conditions. A condition like aws:MultiFactorAuthPresent: true means the caller must have an active MFA session. If the analyst is using long-term access keys (programmatic access) without an MFA token, this condition fails and the assume-role call is denied.
Key Concepts:
Trust Policy Conditions — A trust policy can include conditions that must be met in addition to the principal match. aws:MultiFactorAuthPresent checks whether the API caller authenticated with a second factor. This is a common security hardening measure for cross-account roles that access sensitive environments.
How MFA Works with AssumeRole — To satisfy this condition, the analyst must:
1. Call sts:GetSessionToken with their MFA device serial number and current TOTP code.
2. Use the temporary credentials returned by GetSessionToken to call sts:AssumeRole.
If using the AWS Console, the console handles MFA at login time. If using the CLI, the analyst must explicitly pass --serial-number and --token-code.
"root" in a Principal — In a trust policy, arn:aws:iam::111111111111:root means the entire account — all IAM users and roles in Account A are eligible to assume this role (subject to their own IAM policies). It does not mean only the literal root user.
Why D is correct: The trust policy explicitly requires aws:MultiFactorAuthPresent: true. If the analyst calls sts:AssumeRole without an active MFA session (e.g., using plain access keys from the CLI), the condition evaluates to false and the request is denied — even though the principal and action match perfectly.
Why others are wrong:
A) "root" only allows the root user — This is a very common misconception. In a trust policy, :root refers to the entire AWS account as a principal, not the literal root user. Any IAM user or role in Account A that has an sts:AssumeRole policy can assume this role. If you wanted to restrict to a specific user, you would use arn:aws:iam::111111111111:user/username.
B) Accounts must be in the same Organization — Cross-account role assumption works between any two AWS accounts regardless of whether they are in the same Organization. You just need a matching trust policy in the target account and an sts:AssumeRole permission in the source account. Organizations enable additional features (SCPs, organization trails) but are not required for cross-account roles.
C) Empty permission policy — An empty permission policy would cause Access Denied when the analyst tries to do something after assuming the role, not during the sts:AssumeRole call itself. The error described in the question occurs when trying to assume the role, which means the trust policy evaluation is failing — pointing to the MFA condition.
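Putting the pieces together, a trust policy with this MFA requirement might look like the following sketch (the account ID matches the example above; the exact statement layout is illustrative):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
    "Action": "sts:AssumeRole",
    "Condition": {
      "Bool": { "aws:MultiFactorAuthPresent": "true" }
    }
  }]
}
```

Any principal in account 111111111111 may attempt the assume-role call, but the Bool condition rejects callers without an active MFA session.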
Q7.A developer needs to upload objects to a specific S3 bucket named company-data-backup but should not be able to delete any objects or access other buckets. Which policy configuration best follows AWS security best practices?
✓ Correct: C. Create a customer-managed policy allowing s3:PutObject on the specific bucket.
How to Think About This:
When you see "needs to do X but should NOT do Y", the answer is always least privilege: grant only the exact actions needed on the exact resources needed. Nothing more.
Key Concepts:
Principle of Least Privilege — Every IAM entity should have only the permissions required to perform their job function. This minimizes the blast radius if credentials are compromised. In this case, the developer needs one action (s3:PutObject) on one bucket (company-data-backup).
Customer-Managed vs. Inline vs. AWS-Managed Policies — Customer-managed policies are the best practice because they are reusable, versionable, and centrally managed. Inline policies are embedded directly in a user/role and can't be reused. AWS-managed policies (like AmazonS3FullAccess) are pre-built but often far too broad.
Why C is correct: It grants exactly one action (s3:PutObject) on exactly one resource (arn:aws:s3:::company-data-backup/*). The developer can upload files to that bucket and nothing else — no delete, no read, no access to other buckets. This is textbook least privilege.
Why others are wrong:
A) AmazonS3FullAccess — This grants s3:* on all S3 resources. The developer could delete objects, read any bucket, modify bucket policies — a massive over-grant of permissions and a security risk.
B) s3:* on the specific bucket — While this scopes to the correct bucket, s3:* includes s3:DeleteObject, s3:GetObject, s3:PutBucketPolicy, and dozens of other actions. The requirement explicitly states no delete access.
D) s3:ListBucket on all resources — ListBucket only lets you list objects in a bucket — it doesn't allow uploading. Also, "all resources" means every bucket in the account, violating least privilege.
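As a sketch, the customer-managed policy described in option C could look like this (the Sid is illustrative):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "UploadToBackupBucketOnly",
    "Effect": "Allow",
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::company-data-backup/*"
  }]
}
```

Note the /* in the resource ARN: s3:PutObject operates on object keys, so the policy targets objects within the bucket rather than the bucket itself.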
Q8.Your organization recently acquired a subsidiary that operates its own AWS account. A security auditor in the parent "Main" account needs to review IAM configurations in the "Subsidiary" account. What is the most secure way to implement this access?
✓ Correct: B. Create a cross-account IAM Role in the Subsidiary account with a trust policy for the Main account.
How to Think About This:
"Cross-account access" = IAM Role with a trust policy. Never share credentials (access keys) between accounts — that creates permanent, unauditable access that can't be revoked without rotating the keys.
Key Concepts:
Cross-Account Roles — The role is created in the target account (Subsidiary). Its trust policy specifies the Main account as a trusted principal. The auditor in the Main account calls sts:AssumeRole and receives temporary credentials with ReadOnlyAccess. The credentials expire automatically, and every assumption is logged in CloudTrail.
Why B is correct: This follows the AWS-recommended cross-account access pattern: temporary credentials, auditable via CloudTrail, revocable by modifying the trust policy, and scoped to ReadOnlyAccess (no write permissions). The auditor never needs a separate user or permanent credentials in the Subsidiary account.
Why others are wrong:
A) Create an IAM user and share access keys — Sharing long-term credentials is a serious security anti-pattern. Access keys don't expire, can be leaked, are difficult to audit (who is actually using them?), and you'd need to manage a separate user. Cross-account roles are always preferred over credential sharing.
C) Use an SCP to grant access — SCPs restrict permissions — they cannot grant them. An SCP sets the maximum permission boundary for accounts in an OU. It can deny actions but cannot add new allow permissions. You still need IAM policies or roles to grant actual access.
D) Bucket Policy for IAM logs — IAM configurations are not stored in S3. IAM is an API-based service — you view configurations through the IAM API/console, not by reading files from a bucket. A bucket policy cannot grant access to IAM service APIs.
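For illustration, the trust policy on the Subsidiary role might look like this sketch (222222222222 stands in as a placeholder for the Main account's ID):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::222222222222:root" },
    "Action": "sts:AssumeRole"
  }]
}
```

The role's actual permissions come from attaching the AWS-managed ReadOnlyAccess policy to it; the trust policy only controls who may assume the role.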
Q9.An IAM user has two policies attached:
• Policy A: Allows ec2:RunInstances on all resources.
• Policy B: Denies ec2:RunInstances if the region is us-east-1.
If the user attempts to launch an instance in us-east-1, what is the result?
✓ Correct: B. The request is denied because an explicit Deny always overrides an Allow.
How to Think About This:
AWS IAM policy evaluation follows one golden rule: Explicit Deny ALWAYS wins. No matter how many Allow statements exist, a single explicit Deny for the same action will override them all. The evaluation order of policies does not matter.
Key Concepts:
IAM Policy Evaluation Order — AWS evaluates all applicable policies and applies this logic:
1. Explicit Deny? → If any policy explicitly denies the action, the request is DENIED. Full stop.
2. Explicit Allow? → If no deny exists and any policy explicitly allows the action, the request is ALLOWED.
3. Neither? → The request is DENIED (implicit/default deny).
Think of it as: Deny > Allow > Default Deny
Why B is correct: Policy B explicitly denies ec2:RunInstances when the region is us-east-1. The user is trying to launch in us-east-1, so the deny condition matches. Even though Policy A allows RunInstances, the explicit deny in Policy B takes absolute precedence. The request is denied.
Why others are wrong:
A) Allow overrides Deny — This is backwards. In AWS IAM, Deny always overrides Allow. There is no exception to this rule.
C) Policy A is evaluated first — AWS does not evaluate policies sequentially. All applicable policies are collected and evaluated together. Order of attachment or creation is irrelevant. Even if AWS "saw" the Allow first, it would still check all policies and the Deny would win.
D) Default Deny — The default deny applies when there is no explicit allow or deny. In this case, there IS an explicit deny in Policy B. The request is denied because of the explicit deny, not the default deny. The distinction matters: explicit deny means "I specifically said no," while default deny means "nobody said yes."
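Policy B as described could be written roughly like this (the Sid is illustrative):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyRunInstancesUsEast1",
    "Effect": "Deny",
    "Action": "ec2:RunInstances",
    "Resource": "*",
    "Condition": {
      "StringEquals": { "aws:RequestedRegion": "us-east-1" }
    }
  }]
}
```

Whenever the request context's region is us-east-1, this statement matches and overrides Policy A's Allow.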
Q10.Which of the following is considered a critical security best practice for the AWS Account Root User?
✓ Correct: C. Enable MFA, delete root access keys, and use IAM users/roles for all work.
How to Think About This:
The root user has unrestricted access to everything in the AWS account — it cannot be limited by IAM policies or SCPs. This makes it the highest-value target for attackers. The security strategy is: lock it down, don't use it.
Key Concepts:
Root User — The root user is the identity that created the AWS account. It has complete, unrestricted access to all services and resources. It is the only identity that can: close the account, change the account name/email, modify certain billing settings, and restore IAM permissions.
AWS Root User Best Practices:
• Enable MFA — Use a hardware MFA device (YubiKey) or virtual MFA (authenticator app). This prevents account takeover even if the password is compromised.
• Delete access keys — Root access keys allow programmatic access without MFA. Delete them so root can only be used via the console (which requires MFA).
• Don't use for daily tasks — Create IAM users or roles with AdministratorAccess for daily work. The root user should only be used for the handful of tasks that specifically require it.
Why C is correct: This covers all three critical best practices: MFA (prevents unauthorized login), deleting access keys (prevents programmatic abuse), and using IAM for daily work (limits root exposure). This matches AWS's own security recommendations.
Why others are wrong:
A) Use root for daily tasks — This is the exact opposite of best practice. Using root daily exposes the most powerful credentials to phishing, session hijacking, and accidental damage. If root is compromised, the attacker owns everything with no way to restrict them.
B) Delete the root user — The root user cannot be deleted. It is permanently tied to the AWS account. You can only secure it by enabling MFA, removing access keys, and not using it.
D) Share the root password — Sharing credentials violates every security principle. It eliminates accountability (who logged in?), increases exposure surface, and makes it impossible to rotate credentials safely. For emergency access, use a sealed envelope in a physical safe with MFA device — never share the password.
Q11.An application running on an EC2 instance needs to read files from an S3 bucket. How should you provide the necessary credentials to the application?
✓ Correct: C. Assign an IAM Role to an Instance Profile and attach it to the EC2 instance.
How to Think About This:
When you see "EC2 needs access to another AWS service", the answer is always IAM Role + Instance Profile. Never embed credentials in code, config files, or repositories.
Key Concepts:
Instance Profile — An Instance Profile is a container for an IAM Role that is attached to an EC2 instance. When the instance launches, it can automatically assume the role and receive temporary credentials via the Instance Metadata Service (IMDS) at http://169.254.169.254. AWS SDKs automatically discover and use these credentials — no configuration needed in your code.
How It Works:
1. Create an IAM Role with a policy allowing s3:GetObject on the target bucket.
2. The role's trust policy allows ec2.amazonaws.com as the principal.
3. Attach the role to an Instance Profile and associate it with the EC2 instance.
4. AWS automatically rotates the temporary credentials (they refresh before expiry).
5. Your application code simply calls S3 — the SDK handles credential retrieval automatically.
Why C is correct: IAM Roles with Instance Profiles provide temporary, auto-rotating credentials with no secrets to manage. The credentials are never stored in code or on disk. If the instance is compromised, you revoke access by detaching the role. This is the only AWS-recommended approach.
Why others are wrong:
A) Hardcode access keys — This is a critical security violation. Hardcoded keys end up in version control, logs, backups, and AMI snapshots. They don't rotate automatically. If compromised, an attacker has permanent access until you manually rotate the keys. This is the #1 cause of AWS credential leaks.
B) Store credentials in GitHub — Even in a "private" repository, this is extremely dangerous. Private repos can be accidentally made public, accessed by all org members, exposed through GitHub API tokens, or leaked through CI/CD logs. AWS credential scanners regularly find leaked keys on GitHub within minutes.
D) Use Public IP in a Bucket Policy — Public IP addresses change when instances stop/start (unless using Elastic IPs). Multiple instances may share NAT Gateway IPs, granting unintended access. IP-based policies also don't provide identity — you can't audit which application accessed the data. This approach is fragile and insecure.
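Steps 1 and 2 above can be sketched as two short policy documents (the bucket name app-data-bucket is a placeholder). First, the role's permission policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::app-data-bucket/*"
  }]
}
```

And the role's trust policy, which lets the EC2 service assume it on the instance's behalf:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "ec2.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
```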
Q12.You need to create a single IAM policy that allows 100 different users to access only their own "folder" in an S3 bucket (e.g., s3://company-home/username/). Which IAM feature should you use?
✓ Correct: B. Use IAM Policy Variables like ${aws:username} to dynamically scope the resource.
How to Think About This:
When you see "one policy for many users, each with their own data", the answer is policy variables. They let you write a single policy that dynamically resolves to different values for each user.
Key Concepts:
IAM Policy Variables — AWS IAM supports variables in the Resource and Condition elements that are replaced at evaluation time with values from the request context. The most common variable is ${aws:username}, which resolves to the IAM user's name.
Example Policy:
"Resource": "arn:aws:s3:::company-home/${aws:username}/*"
When user "alice" makes a request, this resolves to arn:aws:s3:::company-home/alice/*. When user "bob" makes a request, it resolves to arn:aws:s3:::company-home/bob/*. One policy, 100 users, each scoped to their own prefix.
Why B is correct: Policy variables allow a single policy to dynamically scope access per user. You attach one policy to a group, add all 100 users to the group, and each user can only access their own folder. No need to create or maintain 100 separate policies.
Why others are wrong:
A) Individual resource-based policies for every user — This means creating and maintaining 100 separate policies (or 100 statements in one bucket policy). It's operationally impractical, error-prone, and hits the S3 bucket policy size limit (20KB). Policy variables solve this exact problem.
C) IAM Groups with specific prefix permissions — This means creating 100 groups, each with a policy hard-coding a specific user's prefix. Groups are meant for shared permissions, not per-user scoping. This defeats the purpose and is just as unmanageable as option A.
D) Permission Boundaries — Permission Boundaries set the maximum permissions an IAM entity can have. They are used to delegate safe IAM administration (e.g., allowing a junior admin to create users without escalating privileges). They are not designed for dynamic per-user resource scoping.
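A complete policy built around that resource line might look like this sketch (the allowed actions are an assumption; grant whatever access each user actually needs):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:PutObject"],
    "Resource": "arn:aws:s3:::company-home/${aws:username}/*"
  }]
}
```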
Q13.In an AWS Organization, a developer complains they cannot create an S3 bucket even though their IAM user has AdministratorAccess. You investigate and discover an SCP applied to their Organizational Unit (OU) that denies s3:CreateBucket. What is the result and why?
✓ Correct: C. The SCP denies the action regardless of IAM permissions.
How to Think About This:
Think of SCPs as a fence around a playground. IAM policies determine what you can do inside the fence, but the SCP determines the boundary of the playground. Even if your IAM policy says "you can do anything" (AdministratorAccess), you still can't go outside the fence.
Key Concepts:
Service Control Policies (SCPs) — SCPs are organization-level policies that set the maximum available permissions for accounts in an OU. Key facts:
• SCPs do NOT grant permissions — they only restrict them
• An explicit Deny in an SCP overrides ANY IAM Allow in the affected accounts
• SCPs apply to all IAM users and roles in the account (including admin users)
• SCPs do NOT affect the management account (only member accounts)
• For an action to be allowed, it must be permitted by BOTH the SCP AND the IAM policy
Effective Permissions = intersection of SCP + IAM Policy. If either says Deny, the action is denied.
Why C is correct: The SCP explicitly denies s3:CreateBucket. This creates an absolute boundary that no IAM policy within the account can override. Even AdministratorAccess (which allows * on *) is bounded by the SCP. The developer cannot create a bucket until the SCP is modified or the account is moved to a different OU.
Why others are wrong:
A) IAM takes precedence over SCPs — The opposite is true. SCPs act as a ceiling on permissions. IAM policies operate within the limits set by SCPs, not above them. Think: SCP is the constitution, IAM policies are local laws — local laws cannot override the constitution.
B) CLI vs. Console makes a difference — SCPs enforce at the API level, regardless of how the request is made. Whether you use the Console, CLI, SDK, or CloudFormation, the same SCP evaluation applies. The access method is irrelevant.
D) Only if bucket name contains "restricted" — The SCP described denies s3:CreateBucket unconditionally — there is no condition on the bucket name. SCPs can include conditions, but this one does not.
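The SCP described in the question would look roughly like this (the Sid is illustrative):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyBucketCreation",
    "Effect": "Deny",
    "Action": "s3:CreateBucket",
    "Resource": "*"
  }]
}
```

There is no Condition block, which is why the deny applies to every bucket name for every principal in the affected accounts.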
Q14.A mobile application needs to allow users to upload photos directly to S3. You do not want to embed long-term AWS credentials in the app code. Which AWS service should you use to provide temporary, limited-privilege credentials to the mobile app?
✓ Correct: B. Amazon Cognito Identity Pools.
How to Think About This:
When you see "mobile app" + "direct access to AWS service" + "no embedded credentials", the answer is Amazon Cognito Identity Pools. Cognito is specifically designed to give end users (mobile/web) temporary AWS credentials scoped to their identity.
Key Concepts:
Amazon Cognito Identity Pools (Federated Identities) — Cognito Identity Pools exchange authentication tokens (from Cognito User Pools, Google, Facebook, SAML, etc.) for temporary AWS credentials. The flow:
1. User authenticates (signs in via your app).
2. App sends the auth token to Cognito Identity Pool.
3. Cognito calls
sts:AssumeRoleWithWebIdentity behind the scenes.4. App receives temporary AWS credentials (access key, secret, session token).
5. App uses these credentials to upload directly to S3.
The IAM role attached to the Identity Pool determines what the user can do. You can use policy variables like
${cognito-identity.amazonaws.com:sub} to scope each user to their own S3 prefix.Why B is correct: Cognito Identity Pools are purpose-built for this exact scenario: providing temporary, scoped AWS credentials to mobile and web applications without embedding long-term access keys. It handles authentication, token exchange, and credential vending automatically.
Why others are wrong:
A) IAM User Groups — IAM User Groups organize IAM users for shared permissions within your AWS account. They are for your team members, not end users of a mobile app. You cannot create millions of IAM users for app users, and IAM users have long-term credentials — the opposite of what's needed.
C) AWS Directory Service — Directory Service provides managed Active Directory (Microsoft AD) or Simple AD for enterprise environments. It's used for corporate workstation authentication and on-premises integration, not for mobile app credential vending to consumers.
D) AWS Secrets Manager — Secrets Manager stores and rotates secrets (database passwords, API keys). If you stored AWS credentials in Secrets Manager and retrieved them from the mobile app, the app would still need credentials to access Secrets Manager — a chicken-and-egg problem. It doesn't solve the fundamental issue of how to authenticate anonymous/external users.
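The per-user prefix scoping described under Key Concepts can be sketched as a policy document. A minimal illustration in Python; the bucket name my-app-uploads is a placeholder, not part of the original scenario:

```python
import json

# Sketch (not an official template): an IAM policy for the Identity Pool's
# authenticated role. The Cognito identity-id policy variable scopes each
# user to their own prefix in a hypothetical bucket "my-app-uploads".
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:PutObject", "s3:GetObject"],
        "Resource": "arn:aws:s3:::my-app-uploads/${cognito-identity.amazonaws.com:sub}/*",
    }],
}
print(json.dumps(policy, indent=2))
```

At evaluation time, AWS substitutes the caller's Cognito identity ID for the variable, so user A physically cannot read or write user B's prefix.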
Q15.A junior administrator has been given the ability to create new IAM users and assign policies. How can you ensure the junior admin does not create a new user with more permissions than they themselves have?
✓ Correct: B. Apply a Permission Boundary and require it on all users the junior admin creates.
How to Think About This:
When you see "delegate IAM administration safely" or "prevent privilege escalation", the answer is Permission Boundaries. They let you say: "You can create users, but every user you create is capped at these maximum permissions."
Key Concepts:
Permission Boundaries — A Permission Boundary is an advanced IAM feature that sets the maximum permissions an IAM entity (user or role) can have. Even if someone attaches AdministratorAccess to a user, the user's effective permissions are the intersection of their identity policy AND the boundary.
How to Enforce: You add a condition to the junior admin's IAM policy:
"Condition": {"StringEquals": {"iam:PermissionsBoundary": "arn:aws:iam::123456789012:policy/JuniorBoundary"}}
This means the junior admin can only call iam:CreateUser if they also attach the specified Permission Boundary. They literally cannot create unbounded users.
Effective Permissions = Identity Policy ∩ Permission Boundary. If the boundary doesn't allow an action, it's denied — regardless of what the identity policy says.
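The intersection rule can be seen in a toy model. This sketch uses plain Python sets with illustrative action names; real IAM evaluation also handles wildcards and conditions:

```python
# Toy model of "Effective Permissions = Identity Policy ∩ Permission Boundary".
# Action names are illustrative; real IAM also evaluates wildcards and conditions.
identity_policy = {"s3:GetObject", "s3:DeleteBucket", "iam:CreateUser"}
permission_boundary = {"s3:GetObject", "s3:PutObject"}  # the maximum allowed

# Only actions present in BOTH sets are usable.
effective = identity_policy & permission_boundary
print(sorted(effective))  # ['s3:GetObject']
```

Note that iam:CreateUser and s3:DeleteBucket vanish even though the identity policy grants them: the boundary caps them out.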
Why B is correct: Permission Boundaries provide a preventive control. The junior admin is physically unable to create users that exceed the boundary. This is a real-time enforcement mechanism, not a detection-after-the-fact approach. It's the AWS-recommended solution for delegating IAM administration.
Why others are wrong:
A) IAM Access Analyzer — Access Analyzer identifies resources shared with external entities and unused permissions. It is an analysis/advisory tool, not an enforcement mechanism. It cannot automatically revoke permissions or prevent user creation in real time.
C) Review CloudTrail logs every 24 hours — This is a detective control, not a preventive one. By the time you review logs, a user with excessive permissions could have already been created and caused damage. You need prevention, not detection-after-the-fact.
D) AWS Config rules to delete users — While you could create a Config rule to detect non-compliant users, this is still reactive. There's a window between user creation and deletion where the over-privileged user exists. Also, automatically deleting users is risky and disruptive. Permission Boundaries prevent the problem from occurring in the first place.
Q16.Your security team wants to identify which IAM roles have not been used in the last 90 days and pinpoint unused permissions to help achieve least privilege. Which AWS tool is best suited for this?
✓ Correct: B. IAM Access Analyzer with Access Advisor / Last Accessed information.
How to Think About This:
When you see "unused roles/permissions" + "least privilege" + "audit", the answer is IAM Access Analyzer and its Access Advisor data. This is the only AWS tool that shows you which services and actions each role has actually used (and when they last used them).
Key Concepts:
IAM Access Analyzer — A tool with two major capabilities:
• External Access Findings — Identifies resources (S3 buckets, IAM roles, KMS keys, etc.) that are shared with external entities outside your account or organization.
• Unused Access Findings — Identifies IAM roles not used within a specified period, unused permissions, and unused access keys. This is the feature relevant to this question.
Access Advisor (Last Accessed Information) — Shows the last time each service was accessed by a role/user. If a role has s3:* permissions but has only ever used s3:GetObject, Access Advisor reveals that all other S3 permissions are unused and can be safely removed. Available in the IAM console under each role's "Access Advisor" tab.
Policy Generation — Access Analyzer can even generate a least-privilege policy based on actual usage from CloudTrail logs, replacing overly broad policies with ones that match real behavior.
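The unused-permission math that Access Advisor surfaces can be mimicked with a set difference. The action lists below are illustrative, not from the scenario:

```python
# Toy version of what Access Advisor surfaces: compare granted actions
# against actions actually observed in use (e.g., from CloudTrail).
granted = {"s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:ListBucket"}
last_used = {"s3:GetObject"}  # the only action this role has ever called

# Everything granted but never used is a candidate for removal.
unused = granted - last_used
print(sorted(unused))
```

Trimming the policy down to `last_used` (plus any known upcoming needs) is exactly the least-privilege outcome the question asks for.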
Why B is correct: IAM Access Analyzer with Access Advisor is the only tool that directly answers both questions: "which roles are unused?" (last activity timestamp) and "which permissions are unused?" (service-level last accessed data). It's purpose-built for achieving and maintaining least privilege.
Why others are wrong:
A) AWS Trusted Advisor — Trusted Advisor provides high-level best practice recommendations across cost, performance, security, fault tolerance, and service limits. It can flag some IAM issues (like MFA not enabled on root, or exposed access keys) but it does not provide granular last-accessed data or identify unused permissions per role.
C) Amazon Inspector — Inspector scans EC2 instances and container images for software vulnerabilities (CVEs) and unintended network exposure. It is a vulnerability management tool, not an IAM audit tool. It has nothing to do with IAM roles or permission analysis.
D) AWS Shield — Shield is a DDoS protection service. Shield Standard provides basic DDoS mitigation for all AWS accounts. Shield Advanced provides enhanced DDoS protection, cost protection, and 24/7 DDoS response team access. It has absolutely no IAM auditing capabilities.
Q17.A third-party SaaS security vendor requires access to your AWS account to monitor CloudTrail logs. They ask you to create an IAM role that their AWS account can assume. To prevent the "Confused Deputy" problem, which condition should you add to the Trust Policy?
✓ Correct: B. Use sts:ExternalId with a unique value provided by the vendor.
How to Think About This:
When you see "third-party access" + "cross-account role", immediately think Confused Deputy and ExternalId. The Confused Deputy is an attack where another customer of the same vendor tricks the vendor into assuming your role instead of their own.
Key Concepts:
The Confused Deputy Problem — Imagine the vendor (Account V) serves two customers: You (Account Y) and an Attacker (Account A). Both create roles trusting Account V. The attacker tells the vendor: "Please assume role in Account Y" instead of their own account. Without a distinguishing factor, the vendor happily assumes your role — the attacker has now accessed your data through the vendor.
ExternalId — A unique, secret string that acts as a shared password between you and the vendor. You add it to the trust policy condition:
"Condition": {"StringEquals": {"sts:ExternalId": "UniqueString12345"}}
When the vendor calls sts:AssumeRole, they must pass this ExternalId. Since the attacker doesn't know your ExternalId, they can't trick the vendor into assuming your role.
Why B is correct: sts:ExternalId is the AWS-recommended mechanism specifically designed to prevent the Confused Deputy problem. It ensures that only the intended vendor, using the correct ExternalId, can assume the role. AWS documentation explicitly recommends this for any third-party cross-account access.
Why others are wrong:
A) aws:SourceIp restricted to vendor's office IP — The vendor's service likely runs from AWS infrastructure (EC2, Lambda), not from office IPs. Also, IP-based restrictions don't prevent the Confused Deputy — the vendor's own service is making the call from their infrastructure, which is the same regardless of which customer triggered it.
C) aws:PrincipalArn restricted to vendor's root user — Restricting to the root user is overly broad (the entire account) and doesn't solve the Confused Deputy problem. The vendor's service uses the same principal ARN regardless of which customer triggered the action — that's the entire problem.
D) aws:RequestedRegion restricted to us-east-1 — Region restrictions are for limiting where services can be used. They have nothing to do with verifying the identity or intent of the calling party. The Confused Deputy attack works regardless of region.
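Putting the pieces together, the role's trust policy might look like the sketch below. The vendor account ID is a placeholder; the ExternalId value mirrors the example above:

```python
import json

# Sketch of a cross-account trust policy combining the vendor's account as
# principal with the ExternalId check. Account ID 999988887777 is a placeholder.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::999988887777:root"},  # vendor account
        "Action": "sts:AssumeRole",
        # The AssumeRole call succeeds only if the caller supplies this ExternalId.
        "Condition": {"StringEquals": {"sts:ExternalId": "UniqueString12345"}},
    }],
}
print(json.dumps(trust_policy, indent=2))
```

The Principal line says who may try to assume the role; the Condition line is what actually defeats the Confused Deputy, because each customer hands the vendor a different ExternalId.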
Q18.An organization uses AWS Organizations. A developer in a member account has an IAM policy granting
s3:* on all buckets. However, the developer receives an "Access Denied" error when trying to delete a bucket. You find an SCP attached to the member account that only allows s3:List* and s3:Get*. What is the cause?
✓ Correct: A. SCPs act as a filter — since the SCP doesn't allow s3:DeleteBucket, the action is implicitly denied.
How to Think About This:
SCPs use an allow-list or deny-list strategy. In this case, the SCP uses an allow-list: it explicitly allows only s3:List* and s3:Get*. Everything NOT in the allow-list is implicitly denied. Think of the SCP as a filter: the IAM policy says "allow s3:*", but the SCP only lets List and Get through the filter. DeleteBucket is blocked at the filter.
Key Concepts:
SCP as a Permission Ceiling — For an action to succeed in a member account, it must be allowed by BOTH the SCP AND the IAM policy. The effective permissions are the intersection:
• IAM allows: s3:* (everything)
• SCP allows: s3:List*, s3:Get* (read-only)
• Effective: s3:List*, s3:Get* (intersection = read-only)
Allow-List vs. Deny-List SCPs:
• Allow-List Strategy: Remove the default FullAWSAccess SCP, then add SCPs that allow only specific actions. Everything else is implicitly denied. (This is what's happening here.)
• Deny-List Strategy: Keep FullAWSAccess and add SCPs that explicitly deny specific actions. Everything else is allowed. (More common in practice.)
Why A is correct: The SCP only allows List and Get operations. Since s3:DeleteBucket is not in the allow-list, the SCP implicitly denies it. The IAM policy's s3:* is irrelevant — it cannot grant permissions beyond what the SCP allows.
Why others are wrong:
B) Convert to Resource-based policy — The type of IAM policy (identity-based vs. resource-based) doesn't matter here. SCPs act as a ceiling over the entire member account. Even resource-based policies in the account are bounded by SCPs for principals within that account.
C) SCPs only apply to Root user — SCPs apply to all IAM users and roles in member accounts. The only exception is the management account of the organization — SCPs never restrict the management account itself.
D) IAM takes precedence over SCP — The opposite is true. SCPs set the maximum boundary. IAM policies operate within that boundary. If the SCP doesn't allow an action, no IAM policy can grant it.
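The filter behavior can be demonstrated in a few lines. This toy evaluator uses Python's fnmatch for the wildcard patterns, which is an approximation of IAM's matching, not the real engine:

```python
from fnmatch import fnmatch

# Toy evaluation of the allow-list SCP as a filter over the IAM policy.
# IAM action patterns like "s3:List*" line up with shell-style wildcards,
# so fnmatch is a reasonable stand-in for this illustration.
scp_allows = ["s3:List*", "s3:Get*"]
iam_allows = ["s3:*"]

def permitted(action: str) -> bool:
    # The action must be allowed by BOTH the SCP and the IAM policy.
    return (any(fnmatch(action, p) for p in scp_allows)
            and any(fnmatch(action, p) for p in iam_allows))

print(permitted("s3:GetObject"))     # True: inside the SCP allow-list
print(permitted("s3:DeleteBucket"))  # False: IAM allows it, SCP filters it out
```

Swapping in a broader `scp_allows` list (or the default FullAWSAccess) is the only way to let DeleteBucket through; no change to `iam_allows` can do it.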
Q19.You want to allow a "Project Lead" to create IAM roles for their team's Lambda functions. However, you must ensure the Lead cannot create a role with
AdministratorAccess and then assume it themselves to escalate privileges. What is the best architectural solution?
✓ Correct: B. Attach a Permissions Boundary and require it on all roles the Lead creates.
How to Think About This:
This is a privilege escalation prevention scenario. The danger: if a user can create roles with arbitrary permissions AND can assume those roles, they can effectively give themselves any permission. Permission Boundaries break this chain by capping what any created role can do.
Key Concepts:
The Escalation Path:
1. Lead creates a role with AdministratorAccess.
2. Lead assumes that role via sts:AssumeRole.
3. Lead now has full admin — privilege escalation complete.
How Permissions Boundaries Block This:
1. The Lead's IAM policy includes a condition: "iam:PermissionsBoundary": "arn:aws:iam::123456789012:policy/ProjectBoundary".
2. The Lead can only call iam:CreateRole if they attach ProjectBoundary as the Permission Boundary on the new role.
3. Even if the Lead attaches AdministratorAccess to the new role, the effective permissions are Admin ∩ ProjectBoundary = ProjectBoundary.
4. The Lead assumes the role but only gets the bounded permissions — escalation is prevented.
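A fuller version of the Lead's permission statement might look like the following sketch. The boundary ARN matches the example above; including iam:AttachRolePolicy is an assumption about what the Lead also needs in practice:

```python
import json

# Sketch of the Lead's permission statement: role creation (and policy
# attachment) is allowed only when the new role carries ProjectBoundary.
# The account ID and boundary ARN are placeholders from the example above.
lead_statement = {
    "Effect": "Allow",
    "Action": ["iam:CreateRole", "iam:AttachRolePolicy"],
    "Resource": "*",
    "Condition": {
        "StringEquals": {
            "iam:PermissionsBoundary": "arn:aws:iam::123456789012:policy/ProjectBoundary"
        }
    },
}
print(json.dumps(lead_statement, indent=2))
```

If the Lead calls iam:CreateRole without setting that exact boundary on the new role, the condition fails and the call is denied.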
Why B is correct: Permissions Boundaries provide a preventive, real-time control that makes privilege escalation architecturally impossible. The Lead can create roles freely for their team's Lambda functions, but none of those roles can exceed the boundary. This is the AWS-recommended pattern for delegating IAM administration safely.
Why others are wrong:
A) Use Programmatic Access key for tracking — Tracking (via CloudTrail) is a detective control, not a preventive one. By the time you detect the escalation, the damage may already be done. An access key type has no bearing on what permissions the user has.
C) SCP to restrict iam:CreateRole — If you block iam:CreateRole entirely, the Lead cannot create roles for Lambda functions at all — which is their legitimate job. SCPs are too coarse for this use case. You need to allow role creation but limit what the created roles can do. That's exactly what Permission Boundaries achieve.
D) Lambda to delete over-privileged roles — This is a reactive control with a dangerous gap. Between when the role is created and when the Lambda triggers, the Lead could assume the over-privileged role and take destructive actions. Event-driven cleanup is useful as a secondary control but should never be the primary defense against privilege escalation.
Q20.Your company uses an on-premises Active Directory (AD). You need to allow employees to access the AWS Management Console using their AD credentials without creating individual IAM users for each employee. Which approach is the most secure?
✓ Correct: B. Use SAML 2.0 Identity Federation with AD FS and AssumeRoleWithSAML.
How to Think About This:
When you see "on-premises AD" + "console access" + "no IAM users", the answer is SAML 2.0 Federation. The key words are "without creating individual IAM users" — federation lets users authenticate through your existing identity provider and receive temporary AWS credentials mapped to IAM roles.
Key Concepts:
SAML 2.0 Federation Flow:
1. Employee navigates to the corporate SSO portal (AD FS).
2. AD FS authenticates the user against Active Directory.
3. AD FS returns a SAML assertion containing the user's identity and group memberships.
4. The browser posts the SAML assertion to the AWS Sign-In endpoint.
5. AWS calls sts:AssumeRoleWithSAML and returns temporary credentials.
6. Employee is redirected to the AWS Console with a time-limited session.
Role Mapping — AD groups map to IAM roles. For example: AD group "Developers" → IAM role "DeveloperRole"; AD group "SecurityTeam" → IAM role "SecurityAuditRole". Users get permissions based on their AD group membership.
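The group-to-role mapping can be pictured as a simple lookup. The sketch below is a toy model; real AD FS claim rules and SAML attribute mappings are richer, and the ARNs are placeholders:

```python
# Toy version of the AD-group-to-IAM-role mapping carried in the SAML assertion.
# Group and role names mirror the examples above; the account ID is a placeholder.
group_to_role = {
    "Developers": "arn:aws:iam::123456789012:role/DeveloperRole",
    "SecurityTeam": "arn:aws:iam::123456789012:role/SecurityAuditRole",
}

def role_for(ad_groups):
    # First matching mapping wins in this sketch; real claim rules can
    # present multiple roles and let the user pick one at sign-in.
    for g in ad_groups:
        if g in group_to_role:
            return group_to_role[g]
    return None

print(role_for(["Developers"]))
```

Because permissions hang off the role, onboarding or offboarding an employee is just an AD group change; nothing in IAM needs to be touched.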
Why B is correct: SAML 2.0 federation meets all requirements: employees use existing AD credentials (single sign-on), no IAM users are created, access is temporary (credentials expire), and permissions are managed through AD group-to-role mappings. This is the standard enterprise pattern for AWS access.
Why others are wrong:
A) Sync AD users to IAM — AWS Directory Service does not create IAM users from AD users. It provides managed Active Directory in AWS. Even if you could sync users, creating IAM users defeats the requirement of "without creating individual IAM users." Syncing also creates maintenance overhead and long-term credentials.
C) Single shared IAM user — Sharing credentials eliminates accountability (who did what?), violates least privilege (everyone has the same access), and is impossible to revoke for one person without affecting all. This is a critical security anti-pattern.
D) Secrets Manager for AD credentials — Secrets Manager stores secrets for applications. It doesn't help with user authentication or console access. Employees would still need a way to authenticate to AWS, and storing their AD passwords in Secrets Manager doesn't provide federated access — it just moves the credential storage problem.
Q21.A security engineer has an IAM policy allowing
s3:GetObject on arn:aws:s3:::sensitive-data/*. The S3 bucket has a Bucket Policy that explicitly denies s3:GetObject if the request does NOT come from a specific VPC Endpoint (vpce-abc123). The engineer makes the request from that exact VPC Endpoint. Can they access the data?
✓ Correct: B. Yes — the Deny condition is not met, so the IAM Allow takes effect.
How to Think About This:
Read the Deny carefully: it denies
GetObject when the request does NOT come from vpce-abc123. The engineer IS coming from vpce-abc123, so the condition does not match. The Deny statement does not fire. With no active Deny, the IAM policy's Allow is evaluated, and the request succeeds.Key Concepts:
Conditional Deny — A Deny with a condition is only active when the condition evaluates to true. The bucket policy likely uses:
"Condition": {"StringNotEquals": {"aws:SourceVpce": "vpce-abc123"}}
This means: "Deny if the VPC Endpoint is NOT vpce-abc123." When the engineer uses vpce-abc123, StringNotEquals evaluates to false → the Deny does not apply.
Policy Evaluation with Same-Account Requests — For same-account access to S3:
1. AWS collects ALL applicable policies (IAM + bucket policy).
2. Check for explicit Deny → The conditional Deny doesn't match, so no explicit Deny.
3. Check for explicit Allow → The IAM policy allows s3:GetObject.
4. Result: ALLOWED.
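The evaluation steps above can be traced in a toy function. This is a deliberate simplification of the real policy engine, covering only the single conditional Deny and single Allow described in this scenario:

```python
# Toy walkthrough of same-account evaluation: explicit Deny first, then Allow.
# The deny fires only when the request's VPC endpoint is NOT vpce-abc123.
REQUIRED_VPCE = "vpce-abc123"

def evaluate(request_vpce: str, iam_allows_getobject: bool = True) -> str:
    deny_fires = request_vpce != REQUIRED_VPCE  # the StringNotEquals condition
    if deny_fires:
        return "DENIED (explicit deny)"       # explicit Deny always wins
    if iam_allows_getobject:
        return "ALLOWED"                      # the IAM Allow takes effect
    return "DENIED (implicit)"                # default deny with no Allow

print(evaluate("vpce-abc123"))   # the engineer's request
print(evaluate("vpce-other999")) # the same call from anywhere else
```

The engineer's request through vpce-abc123 never triggers the Deny, so the explicit IAM Allow carries it; any other path hits the explicit Deny before the Allow is ever considered.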
Why B is correct: The Deny is conditional, and the condition is not satisfied because the engineer is using the correct VPC Endpoint. Without an active Deny, the IAM policy's explicit Allow is sufficient to grant access. This is a common pattern for restricting S3 access to specific VPC Endpoints while still allowing authorized users through that endpoint.
Why others are wrong:
A) Bucket Policy always overrides IAM — This is false. For same-account access, IAM policies and bucket policies are evaluated together. Neither automatically "overrides" the other. The standard Deny > Allow > Default Deny logic applies across all policies combined. (Note: for cross-account access, BOTH must explicitly allow.)
C) Other Deny statements in IAM — The question only describes the specific policies. If there were other Deny statements, they could change the outcome, but that's not what's described. The correct analysis is based on the policies given.
D) Always additive and never conflict — Policies can absolutely conflict. If the bucket policy had an unconditional Deny, it would override the IAM Allow. The fact that they "work together" doesn't mean they never conflict — the Deny > Allow rule resolves conflicts.
Q22.A developer is trying to associate an IAM role with an EC2 instance but receives an "Unauthorized" error for the iam:PassRole action. Why is this permission required?
✓ Correct: B. iam:PassRole ensures the user has permission to pass a specific role to an AWS service, preventing privilege escalation.
How to Think About This:
iam:PassRole is a privilege escalation guardrail. Without it, a user with limited permissions could create an EC2 instance with an AdministratorAccess role, SSH into it, and use the role's credentials to do anything — effectively escalating their own privileges through the service.
Key Concepts:
iam:PassRole — This permission controls whether a user can delegate a specific role to an AWS service. "Passing" a role means telling AWS: "Let this service (EC2, Lambda, ECS, etc.) operate with this role's permissions." It is NOT about the user assuming the role themselves — it's about handing the role to a service.
The Escalation Scenario Without PassRole:
1. Developer has limited permissions (e.g., only ec2:RunInstances).
2. Developer launches an EC2 instance with an AdministratorAccess role attached.
3. Developer SSHs into the instance and uses its role credentials.
4. Developer now has full admin access — privilege escalation complete.
How PassRole Prevents This:
The developer's IAM policy must explicitly include:
"Action": "iam:PassRole", "Resource": "arn:aws:iam::123456789012:role/LimitedLambdaRole"
The developer can only pass that specific role to a service. Attempting to pass the AdministratorAccess role fails with "Unauthorized."
Why B is correct: iam:PassRole is specifically designed to prevent privilege escalation by requiring explicit permission to delegate specific roles to AWS services. It's a critical security control that separates "what you can do" from "what you can assign to a service."
Why others are wrong:
A) Change the password of the IAM role — IAM roles do not have passwords. Roles use trust policies and temporary credentials. PassRole has nothing to do with passwords.
C) Allow EC2 to talk to IAM — EC2 communicates with IAM through the AWS control plane automatically. PassRole is about the user's permission to assign a role, not about the EC2 instance's network connectivity.
D) Legacy permission replaced by AssumeRole — PassRole and AssumeRole are different actions for different purposes. PassRole = handing a role to a service. AssumeRole = temporarily becoming the role yourself. Both are current and actively used.
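A developer policy implementing this guardrail might look like the sketch below; the account ID and role name echo the fragment quoted in the explanation, and both are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowLaunch",
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": "*"
    },
    {
      "Sid": "PassOnlyLimitedRole",
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::123456789012:role/LimitedLambdaRole"
    }
  ]
}
```

With this policy, launching an instance succeeds only when the instance profile wraps LimitedLambdaRole; attempting to launch with any other role (such as one carrying AdministratorAccess) fails the PassRole check with "Unauthorized."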
Q23.Your Security Operations Center (SOC) receives an alert from IAM Access Analyzer. It identifies that a role in your production account can be assumed by an entity in a personal, unrecognized AWS account. What is the immediate recommended action?
✓ Correct: B. Modify the Trust Policy to remove the unauthorized external Principal.
How to Think About This:
Access Analyzer found an unintended external trust. The fix is to remove the trust at the source — the role's trust policy. You control your role; you can't control external accounts.
Key Concepts:
IAM Access Analyzer External Access Findings — Access Analyzer continuously monitors resource policies to identify resources shared with external entities. When it finds a role whose trust policy allows an unknown external account, it generates a finding. This could indicate: a misconfiguration, a leftover from a vendor engagement, or a malicious backdoor.
Incident Response: For an unrecognized external principal:
1. Investigate: Check CloudTrail for AssumeRole events — has this role been assumed by the external account?
2. Remediate: Remove the external principal from the trust policy immediately.
3. Assess impact: If the role was assumed, review what actions were performed using the temporary credentials.
Why B is correct: The trust policy on your production role is the source of the vulnerability. Removing the external principal from the trust policy immediately prevents any future assumption of that role by the unauthorized account. This is the direct, immediate fix.
Why others are wrong:
A) Delete the personal AWS account — You cannot delete someone else's AWS account. Even if it were in your Organization, deleting an account is an extreme action. The fix is to remove the trust relationship, not to destroy accounts.
C) Archive the finding — Archiving a finding means marking it as "reviewed and accepted" — it tells Access Analyzer this external access is intentional. It silences the alert but does NOT fix the problem. The external account can still assume the role. This is the opposite of what you should do for an unrecognized entity.
D) Apply Permission Boundary to external account — You cannot apply Permission Boundaries to accounts you don't own. Permission Boundaries are applied to IAM entities within your own account. You have no control over an external personal account.
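Concretely, the remediation is an edit to the role's trust policy: delete the unrecognized ARN from the Principal element so only the intended principal remains. A sketch of a remediated trust policy follows, where 111111111111 is a placeholder for your legitimate account:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "AWS": "arn:aws:iam::111111111111:root"
    },
    "Action": "sts:AssumeRole"
  }]
}
```

Before the fix, the Principal element would also list the unrecognized external account's ARN; removing that entry is what closes the door, after which the Access Analyzer finding can be resolved rather than archived.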
Q24.You need to create an IAM policy that ensures users can only perform ec2:TerminateInstances if they have signed in using Multi-Factor Authentication (MFA). Which policy condition should you use?
✓ Correct: A. Use "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}}
How to Think About This:
When you see "only if MFA is used", the condition key is always aws:MultiFactorAuthPresent. This is a global condition key that evaluates to true when the request was authenticated with a second factor.
Key Concepts:
aws:MultiFactorAuthPresent — A Boolean condition key available in all AWS policies. It is true when the API caller authenticated with MFA. You can use it two ways:
Pattern 1 — Allow only with MFA:
"Effect": "Allow", "Action": "ec2:TerminateInstances", "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}}
Only allows the action if MFA was used.
Pattern 2 — Deny without MFA (more common):
"Effect": "Deny", "Action": "ec2:TerminateInstances", "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}}
Denies the action if MFA was NOT used. Using BoolIfExists handles cases where the key might not be present (e.g., programmatic access without MFA).
Why A is correct: aws:MultiFactorAuthPresent is the correct, official AWS condition key for MFA enforcement. The Bool operator checks its boolean value. This is the standard pattern used in AWS documentation and best practices.
Why others are wrong:
B) "iam:MFA": "active" — iam:MFA is not a valid condition key. The correct key is aws:MultiFactorAuthPresent. AWS condition keys follow the format aws: for global keys and service: for service-specific keys. There is no iam:MFA key.
C) Deny if user is in MFA-Only group — IAM group membership is not a condition you can evaluate in a policy. Groups are for organizing users and attaching shared policies, not for use as condition values. Also, the description is vague and doesn't specify actual policy syntax.
D) SCP to block all EC2 actions globally — This would block ALL EC2 actions for ALL users, not just TerminateInstances without MFA. It's far too broad and would break all EC2 workloads in the organization.
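Written out as a complete statement, Pattern 2 above might look like this sketch:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyTerminateWithoutMFA",
    "Effect": "Deny",
    "Action": "ec2:TerminateInstances",
    "Resource": "*",
    "Condition": {
      "BoolIfExists": {
        "aws:MultiFactorAuthPresent": "false"
      }
    }
  }]
}
```

Because this is a Deny, it acts as a guardrail on top of whatever Allow policies grant EC2 access: a session without MFA (including one signed with long-term access keys, where the condition key is absent and BoolIfExists treats it as false-matching) cannot terminate instances, while an MFA-authenticated session can.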
Q25.A security audit reveals several UnauthorizedOperation errors in CloudTrail logs related to iam:CreateAccessKey. You need to determine which IAM principal attempted these calls and from what IP address. Where do you find this information?
✓ Correct: A. The userIdentity and sourceIPAddress fields in the CloudTrail log entry.
How to Think About This:
When you need who + where + what for an API call, CloudTrail is always the answer. Every API call to AWS is recorded in CloudTrail with rich metadata. The log entry for each event contains everything you need for forensic investigation.
Key Concepts:
CloudTrail Log Entry Fields:
• userIdentity — WHO made the call. Contains the IAM user ARN, role ARN, account ID, access key ID, and session context. Example: "arn:aws:iam::123456789012:user/malicious-actor"
• sourceIPAddress — WHERE the call came from. The IP address of the caller. Can reveal if the call came from an unexpected location or known malicious IP.
• eventName — WHAT was attempted. In this case: CreateAccessKey.
• errorCode — The result. UnauthorizedOperation or AccessDenied indicates a failed attempt.
• eventTime — WHEN it happened. UTC timestamp.
• requestParameters — Details of the request (e.g., which user they tried to create a key for).
Why A is correct: CloudTrail is the authoritative source for API call forensics. The userIdentity field tells you exactly which principal (user, role, or federated identity) made the call, and sourceIPAddress tells you where the request originated. Together, they answer "who tried this and from where?"
Why others are wrong:
B) Trusted Advisor report — Trusted Advisor provides best practice recommendations, not forensic investigation data. It doesn't log individual API calls or show IP addresses of callers. It can flag exposed access keys but cannot tell you who made a specific API call.
C) IAM Credential Report — The Credential Report is a CSV export showing the status of all IAM users' credentials (password age, MFA status, access key last used). It shows aggregate credential metadata, not individual API call details. It cannot tell you who attempted CreateAccessKey or from what IP.
D) VPC Flow Logs — VPC Flow Logs capture network-level traffic (source IP, destination IP, ports, protocol) for ENIs. IAM is a global AWS service — API calls to IAM are made over HTTPS to the AWS control plane, not through your VPC. Even if captured, Flow Logs show only network tuples, not API-level details like the action attempted or the IAM principal.
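A failed CreateAccessKey attempt might be recorded along these lines. This is a trimmed sketch: every identifier, timestamp, and IP below is hypothetical, and real CloudTrail entries carry additional fields:

```json
{
  "eventTime": "2024-05-01T12:34:56Z",
  "eventSource": "iam.amazonaws.com",
  "eventName": "CreateAccessKey",
  "errorCode": "AccessDenied",
  "sourceIPAddress": "203.0.113.42",
  "userIdentity": {
    "type": "IAMUser",
    "accountId": "123456789012",
    "arn": "arn:aws:iam::123456789012:user/malicious-actor",
    "userName": "malicious-actor"
  },
  "requestParameters": {
    "userName": "malicious-actor"
  }
}
```

Reading the entry: userIdentity.arn names the principal, sourceIPAddress gives the origin, eventName plus errorCode confirm a denied CreateAccessKey call, and eventTime anchors the timeline for the investigation.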
Q26.You are tasked with reducing the permissions of an IAM group that has had PowerUserAccess for a year. You want to identify which specific AWS services the group members have actually used in the last 90 days before tightening the policy. Which tool provides this data?
✓ Correct: B. IAM Access Advisor (Service Last Accessed data).
How to Think About This:
When you see "which services were actually used" + "refine/reduce permissions" + "least privilege", the answer is Access Advisor. It's the only tool that shows service-by-service last accessed timestamps for IAM entities, letting you identify and remove unused permissions with confidence.
Key Concepts:
IAM Access Advisor — Available in the IAM console under each user, group, or role. It shows:
• Every AWS service the entity has permissions for.
• Whether the service was accessed, and when it was last accessed.
• Which specific Region the service was accessed from.
Example: The group has PowerUserAccess (grants access to ~200+ services). Access Advisor shows that in the last 90 days, they only used: S3, EC2, Lambda, DynamoDB, CloudWatch. You can now create a custom policy granting only those 5 services and remove PowerUserAccess.
Access Analyzer Policy Generation — IAM Access Analyzer can go further by analyzing CloudTrail logs and generating a least-privilege policy based on actual API calls (action-level, not just service-level). This complements Access Advisor for deeper refinement.
Why B is correct: Access Advisor directly answers the question: "Which services did this group actually use in the last 90 days?" It provides the data needed to make informed decisions about which permissions to keep and which to remove. This is the standard AWS workflow for achieving least privilege.
Why others are wrong:
A) IAM Policy Simulator — The Policy Simulator tests whether a specific action would be allowed or denied under current policies. It answers "what CAN this user do?" not "what DID this user do?" It's useful for testing policies before deployment, but it doesn't show historical usage data.
C) CloudWatch Logs Insights — CloudWatch Logs Insights queries log data stored in CloudWatch Log Groups. You could theoretically query CloudTrail logs (if sent to CloudWatch), but it requires writing custom queries and doesn't provide the clean service-by-service last accessed view that Access Advisor offers out of the box.
D) AWS Config managed rules — Config rules evaluate resource compliance (e.g., "are S3 buckets encrypted?"). They don't track which IAM permissions are being used. Config monitors resource configuration, not IAM access patterns.
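The refinement in the example above could end in a custom policy like this sketch, granting only the five observed services. It is still service-level (wildcard actions per service); Access Analyzer policy generation would be the next step to narrow it to specific actions:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "ObservedServicesOnly",
    "Effect": "Allow",
    "Action": [
      "s3:*",
      "ec2:*",
      "lambda:*",
      "dynamodb:*",
      "cloudwatch:*"
    ],
    "Resource": "*"
  }]
}
```

Attaching this policy to the group and detaching PowerUserAccess shrinks the blast radius from ~200+ services to the five the group demonstrably uses, without breaking any observed workflow.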