
AWS SAP-C02 Drill: Cross-Account VPC Sharing - The Centralized Control vs. Network Sprawl Trade-off

Jeff Taakey
21+ Year Enterprise Architect | Multi-Cloud Architect & Strategist.
Jeff's Architecture Insights
Go beyond static exam dumps. Jeff’s Insights is engineered to cultivate the mindset of a Production-Ready Architect. We move past ‘correct answers’ to dissect the strategic trade-offs and multi-cloud patterns required to balance reliability, security, and TCO in mission-critical environments.

While preparing for the AWS SAP-C02, many candidates get confused by multi-account network sharing strategies. In the real world, this is fundamentally a decision about centralized governance vs. operational autonomy vs. network complexity costs. Let’s drill into a simulated scenario.

The Scenario

GlobalTech Innovations operates a regulated SaaS platform across 47 AWS accounts organized under AWS Organizations. Their Cloud Governance Board has mandated a “network consolidation initiative” driven by three concerns:

  1. Compliance audit findings: 12 development accounts had overly permissive security groups due to decentralized network ownership
  2. Cost explosion: Monthly VPC endpoints and NAT Gateway costs reached $18,400 across duplicated resources
  3. Shadow IT risks: Application teams were creating VPCs with overlapping CIDR ranges, causing future merger conflicts

The Infrastructure Engineering team (operating from a dedicated “network-hub” AWS account) has been tasked with implementing a solution where:

  • All network infrastructure (VPCs, subnets, route tables, security groups) remains under the Infrastructure team’s exclusive control
  • Application teams in 45 separate AWS accounts can deploy EC2 instances, RDS databases, and Lambda functions into approved subnets
  • No application account should have permissions to modify routing, create internet gateways, or peer VPCs
  • The solution must support the existing organizational structure (3 OUs: Production, Development, Sandbox)

Key Requirements

Design a centralized network governance architecture that eliminates redundant network resources, enforces CIDR discipline, and maintains strict RBAC boundaries—while allowing application teams deployment autonomy within approved network segments.

The Options

  • A) Create a Transit Gateway in the network-hub account and attach all application account VPCs to it via Transit Gateway attachments
  • B) Enable AWS RAM resource sharing from the AWS Organizations management account
  • C) In each of the 45 application accounts, create VPCs with identical CIDR ranges and subnets matching the network-hub VPC; establish VPC peering connections from each application VPC to the central network-hub VPC
  • D) In the network-hub account, create an AWS Resource Access Manager (RAM) resource share; specify the target AWS Organizations OUs that require network access; associate the specific subnets to the resource share
  • E) In the network-hub account, create an AWS Resource Access Manager (RAM) resource share; specify the target AWS Organizations OUs; associate managed prefix lists to the resource share

(Select TWO options)

Correct Answer

B and D.


The Architect’s Analysis


Step-by-Step Winning Logic

This scenario requires a solution that satisfies three non-negotiable constraints:

  1. Centralized network control (no application account can modify routing/security)
  2. Resource deployment autonomy (application teams can launch EC2/RDS in shared subnets)
  3. Organizational scalability (must work across OUs without per-account configuration)

Why B is required:

AWS RAM resource sharing for VPC subnets requires organizational trust to be established. Option B—enabling resource sharing from the AWS Organizations management account—is the prerequisite configuration step that allows the network-hub account to share subnets with other accounts in the organization. Without this, RAM sharing would fail with permission errors.
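
To make the prerequisite concrete, here is a minimal sketch, assuming Python with boto3 and credentials for the management account; the region is a placeholder:

```python
# Run once, with credentials for the AWS Organizations management account.
# Minimal sketch -- region is a placeholder.
import boto3

ram = boto3.client("ram", region_name="us-east-1")

# Establishes organization-wide trust so member accounts (like network-hub)
# can share resources with OUs without per-account invitations.
response = ram.enable_sharing_with_aws_organization()
print(response["returnValue"])  # True if trust was enabled
```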

Why D is the implementation:

Option D leverages AWS RAM’s subnet sharing capability (a minimal API sketch follows this list):

  • Ownership boundary preservation: The network-hub account retains full ownership of the VPC, route tables, NACLs, and internet gateways
  • Granular delegation: Application accounts can call ec2:RunInstances and ec2:CreateNetworkInterface against shared subnets (their own IAM policies still apply), but they cannot modify route tables or the security group rules managed by the hub account
  • OU-level targeting: By associating the resource share with specific OUs (Production, Development, Sandbox), the solution auto-enrolls new accounts without manual intervention
  • No TGW data-processing costs: Unlike Transit Gateway (which charges $0.02/GB processed for inter-VPC traffic), shared subnets place all resources in the same VPC, so there is no per-GB processing fee; same-AZ traffic between participant accounts is free, though standard cross-AZ data transfer charges still apply

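Here is a minimal sketch of the Option D share itself, again assuming Python with boto3, run from the network-hub account; the subnet IDs, OU ARNs, and account IDs below are hypothetical placeholders, not values from the scenario:

```python
# Run from the network-hub account after org sharing is enabled.
# Minimal sketch -- subnet IDs, OU ARNs, and account IDs are placeholders.
import boto3

ram = boto3.client("ram", region_name="us-east-1")

HUB_ACCOUNT = "111122223333"
SUBNET_ARNS = [
    f"arn:aws:ec2:us-east-1:{HUB_ACCOUNT}:subnet/{sn}"
    for sn in ("subnet-0prod1a", "subnet-0prod1b", "subnet-0dev1a")
]
# OU principals auto-enroll every current and future account in the OU.
OU_PRINCIPALS = [
    "arn:aws:organizations::999988887777:ou/o-example/ou-prod1234",
    "arn:aws:organizations::999988887777:ou/o-example/ou-dev5678",
]

share = ram.create_resource_share(
    name="shared-vpc-subnets",
    resourceArns=SUBNET_ARNS,
    principals=OU_PRINCIPALS,
    allowExternalPrincipals=False,  # restrict sharing to the organization
)
print(share["resourceShare"]["resourceShareArn"])
```

Because the principals are OU ARNs rather than individual account IDs, an account moved into the Production or Development OU later picks up the share automatically, which is the auto-enrollment behavior described above.
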
Cost impact quantification (reproduced as a quick calculation below):

  • Before: 45 accounts × ($32/month NAT Gateway + $7/month S3 interface endpoint) = $1,755/month just for network infrastructure
  • After: 1 centralized VPC with 3 NAT Gateways (HA) = 3 × $32 = $96/month
  • Monthly savings: $1,659 (~94% reduction in network infrastructure costs)
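
The same arithmetic as a quick sanity check (the dollar figures are this article’s estimates, not current AWS list prices):

```python
# Reproduces the before/after arithmetic above. Prices are this article's
# per-month estimates, not authoritative AWS list prices.
ACCOUNTS = 45
NAT_PER_MONTH = 32.0        # one NAT Gateway, estimated
ENDPOINT_PER_MONTH = 7.0    # one S3 interface endpoint, estimated

before = ACCOUNTS * (NAT_PER_MONTH + ENDPOINT_PER_MONTH)  # 45 * 39 = 1755.0
after = 3 * NAT_PER_MONTH                                 # 3 HA NAT Gateways = 96.0
savings = before - after                                  # 1659.0
print(f"before=${before:,.0f} after=${after:,.0f} savings=${savings:,.0f} "
      f"({savings / before:.1%} reduction)")              # ~94% reduction
```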

The Traps (Distractor Analysis)

Why not A (Transit Gateway)?

  • Cost trap: Transit Gateway charges $0.05/hour per attachment ($36/month × 45 accounts = $1,620/month) plus $0.02/GB data transfer
  • Architectural mismatch: Transit Gateway is designed for routing between VPCs, not for sharing a single VPC’s subnets. The scenario requires application teams to deploy resources into the hub VPC itself, not into separate VPCs that route through TGW
  • Complexity overhead: Requires managing TGW route tables, associations, and propagations—unnecessary when subnet sharing achieves the goal directly

Why not C (VPC Peering Mesh)?

  • Governance violation: This creates 45 independent VPCs owned by application teams, directly contradicting the requirement that all network infrastructure remain under the Infrastructure team’s exclusive control
  • CIDR conflict guarantee: The question explicitly mentions past problems with overlapping CIDR ranges—this solution perpetuates that risk
  • Peering limits: AWS allows at most 125 peering connections per VPC (the default quota is 50); a hub-and-spoke with 45 peers nearly exhausts the default quota on day one and leaves little room for growth
  • Operational nightmare: Every new subnet in the hub VPC requires updating route tables in 45 peered VPCs

Why not E (Prefix Lists)?

  • Wrong resource type: AWS RAM can share managed prefix lists, but a prefix list is just a named set of CIDR blocks referenced in route tables and security group rules; sharing one does not grant cross-account subnet access
  • Doesn’t solve the core problem: Even if prefix lists were shared, application accounts would still need their own VPCs, failing the centralization requirement
  • Distractor design: This option tests whether you know which AWS RAM-shareable resources (subnets, Transit Gateways, Route 53 Resolver rules, prefix lists) actually delegate network access: only subnet shares let other accounts deploy into the hub VPC

The Architect Blueprint

```mermaid
graph TB
  subgraph "AWS Organizations"
    MgmtAcct[Management Account<br/>Enable RAM Sharing]
  end
  subgraph "Network-Hub Account (Infrastructure Team)"
    VPC[Shared VPC<br/>10.0.0.0/16]
    RAM[AWS RAM<br/>Resource Share]
    subgraph "Shared Subnets"
      ProdSubnet1[Prod-Subnet-1a<br/>10.0.1.0/24]
      ProdSubnet2[Prod-Subnet-1b<br/>10.0.2.0/24]
      DevSubnet[Dev-Subnet-1a<br/>10.0.10.0/24]
    end
    VPC --> ProdSubnet1
    VPC --> ProdSubnet2
    VPC --> DevSubnet
    RAM -.Share.-> ProdSubnet1
    RAM -.Share.-> ProdSubnet2
    RAM -.Share.-> DevSubnet
  end
  subgraph "Application Accounts (45 Accounts)"
    ProdOU[Production OU]
    DevOU[Development OU]
    ProdApp1[App Account 1<br/>EC2 in Prod-Subnet-1a]
    ProdApp2[App Account 2<br/>RDS in Prod-Subnet-1b]
    DevApp[App Account 3<br/>Lambda in Dev-Subnet-1a]
    ProdOU --> ProdApp1
    ProdOU --> ProdApp2
    DevOU --> DevApp
  end
  MgmtAcct -.Enable Trust.-> RAM
  RAM -.Grant Access.-> ProdOU
  RAM -.Grant Access.-> DevOU
  ProdApp1 -.Deploy Into.-> ProdSubnet1
  ProdApp2 -.Deploy Into.-> ProdSubnet2
  DevApp -.Deploy Into.-> DevSubnet
  style VPC fill:#FF9900,stroke:#232F3E,stroke-width:3px,color:#fff
  style RAM fill:#759C3E,stroke:#232F3E,stroke-width:2px
  style MgmtAcct fill:#D86613,stroke:#232F3E,stroke-width:2px
  style ProdSubnet1 fill:#527FFF,stroke:#232F3E,stroke-width:1px
  style ProdSubnet2 fill:#527FFF,stroke:#232F3E,stroke-width:1px
  style DevSubnet fill:#527FFF,stroke:#232F3E,stroke-width:1px
```

Diagram Note: The network-hub account creates a single VPC and shares specific subnets via AWS RAM to target OUs; application accounts deploy resources directly into shared subnets without owning any VPC infrastructure.

The Decision Matrix

| Option | Est. Complexity | Est. Monthly Cost | Pros | Cons |
| --- | --- | --- | --- | --- |
| B + D (RAM subnet sharing) | Medium (requires RAM config + IAM policies) | Low ($96 for centralized NAT; $0 RAM fees) | ✅ True centralized control<br/>✅ No TGW data-processing costs<br/>✅ OU-level automation<br/>✅ Eliminates VPC sprawl | ⚠️ Requires understanding of RAM permissions<br/>⚠️ Shared subnets cannot be deleted while in use |
| A (Transit Gateway) | High (TGW route management, attachments) | Very High ($1,620/mo attachments + $500+/mo data transfer) | ✅ Supports hybrid connectivity (VPN/DX)<br/>✅ Inter-VPC routing flexibility | ❌ Overkill for the subnet sharing use case<br/>❌ Data transfer costs scale with traffic<br/>❌ Doesn’t enforce centralized VPC ownership |
| C (VPC peering mesh) | Very High (45 peering connections + route management) | Medium ($1,755/mo duplicated NAT/endpoints) | ✅ Simple peer-to-peer connectivity | ❌ Violates governance requirement<br/>❌ CIDR overlap risks<br/>❌ Scaling limits (125 peers max)<br/>❌ Route table explosion |
| E (Prefix lists) | Low (simple RAM share) | Low ($0 for prefix lists) | ✅ Useful for firewall rule management | ❌ Does not grant subnet access<br/>❌ Doesn’t solve the network sharing problem<br/>❌ Red herring distractor |
| B only (RAM enabled, no share) | Low (one-time config) | Low ($0) | ✅ Prerequisite for RAM sharing | ❌ Incomplete without Option D<br/>❌ Doesn’t actually share subnets |

FinOps Deep Dive (a scaling model follows this list):

  • Transit Gateway hidden costs: At 500GB/day inter-account traffic, TGW data transfer adds $300/month on top of attachment fees
  • VPC Peering cost illusion: While peering itself is free, the duplicated NAT Gateways ($32/mo × 45 = $1,440/mo) and VPC endpoints ($7/mo × 45 = $315/mo) create $1,755/month in waste
  • RAM break-even analysis: AWS RAM sharing has zero per-account fees, making it cost-effective even at 1000+ accounts (unlike TGW, which scales linearly with attachment count)
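
The break-even claim as a toy scaling model, using this article’s estimate of roughly $36/month per TGW attachment and zero RAM per-account fees:

```python
# Simple scaling model: TGW attachment fees grow linearly with account count,
# while AWS RAM subnet sharing has no per-account charge. Figures are this
# article's estimates, not current AWS list prices.
TGW_ATTACHMENT_MO = 36.0   # ~ $0.05/hr per VPC attachment
RAM_SHARE_MO = 0.0         # AWS RAM itself is free

for accounts in (10, 45, 100, 1000):
    tgw = accounts * TGW_ATTACHMENT_MO
    print(f"{accounts:>5} accounts: TGW ${tgw:>8,.0f}/mo vs RAM ${RAM_SHARE_MO:,.0f}/mo")
```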

Real-World Practitioner Insight

Exam Rule

“For the SAP-C02 exam, when you see centralized network control + cross-account resource deployment + AWS Organizations, always select AWS RAM subnet sharing (Options B + D). If the question mentions routing between VPCs or hybrid connectivity, then consider Transit Gateway.”

Real World

In production environments, we typically implement a hybrid approach:

  1. Core shared services VPC (via RAM): Hosts shared Active Directory, DNS resolvers, and inspection appliances—shared to all accounts
  2. Application VPCs (via Transit Gateway): Production workloads use dedicated VPCs attached to TGW for traffic inspection and micro-segmentation
  3. Cost optimization trigger: We use RAM subnet sharing when accounts need identical network configuration (common in development/test environments). We switch to TGW when accounts need isolated failure domains or custom routing policies

Real-world gotchas not in the exam (a monitoring sketch follows this list):

  • Shared subnet IP exhaustion: If the hub VPC subnet runs out of IPs, all participant accounts are affected. We implement IPAM (IP Address Manager) to monitor subnet utilization and alert at 70% capacity
  • Security group complexity: Shared subnets still use per-account security groups, creating potential for “security group soup.” We enforce AWS Firewall Manager policies to standardize rules across accounts
  • Cross-account ENI limits: Each account has a default limit of 5,000 ENIs. In a shared subnet with 45 accounts, you could theoretically hit 225,000 ENIs—we request limit increases proactively for production OUs
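
As a rough sketch of the IP-exhaustion alerting described in the first gotcha, here is a minimal utilization check built on plain EC2 APIs rather than IPAM itself; the subnet IDs are hypothetical placeholders and the 70% threshold mirrors the list above:

```python
# Approximates the IPAM-style utilization alert described above using plain
# EC2 APIs: flag any shared subnet past 70% IP utilization.
# Sketch only -- subnet IDs are placeholders.
import ipaddress

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
ALERT_THRESHOLD = 0.70

resp = ec2.describe_subnets(SubnetIds=["subnet-0prod1a", "subnet-0prod1b"])
for subnet in resp["Subnets"]:
    # AWS reserves 5 addresses per subnet, so usable = total - 5.
    total = ipaddress.ip_network(subnet["CidrBlock"]).num_addresses - 5
    used = total - subnet["AvailableIpAddressCount"]
    utilization = used / total
    if utilization >= ALERT_THRESHOLD:
        print(f"ALERT {subnet['SubnetId']}: {utilization:.0%} of {total} IPs used")
```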