
GCP PCA Drill: VPC Networking - The Multi-VPC Connectivity Trade-off

Jeff Taakey
21+ Year Enterprise Architect | Multi-Cloud Architect & Strategist.
Jeff's Architecture Insights
Go beyond static exam dumps. Jeff’s Insights is engineered to cultivate the mindset of a Production-Ready Architect. We move past ‘correct answers’ to dissect the strategic trade-offs and multi-cloud patterns required to balance reliability, security, and TCO in mission-critical environments.

While preparing for the GCP Professional Cloud Architect (PCA) exam, many candidates struggle with multi-VPC networking and connectivity strategies. In real-world architectures, this challenge boils down to balancing network isolation, operational simplicity, latency, and cost. Let’s drill into a simulated scenario that captures these core trade-offs.

The Scenario

Acme Cloud Gaming is a fast-growing global gaming studio that runs separate Virtual Private Clouds (VPCs) for different parts of its infrastructure: development, analytics, and gaming backend services. Each VPC contains a Compute Engine instance dedicated to specific workloads. To maintain strict network isolation, their VPC subnets do not overlap and must remain segregated. However, their lead game server instance (Instance #1) must establish direct internal IP connectivity with the analytics instance (Instance #2) and the backend instance (Instance #3), located in separate VPCs, to enable low-latency data fetch and command-and-control exchange within the Google Cloud region. Achieving this connectivity securely and sustainably is critical for Acme’s SRE and FinOps goals.

Requirements

Enable Instance #1 to communicate directly with Instance #2 and Instance #3 via internal IPs. Ensure that the isolated, non-overlapping subnet design remains intact and security best practices are followed.

The Options

  • A) Create a Cloud Router to advertise subnet #2 and subnet #3 routes to subnet #1.
  • B) Add two additional network interfaces (NICs) to Instance #1 with the following configs: NIC1 attached to VPC #2 subnet, NIC2 attached to VPC #3 subnet. Update firewall rules accordingly.
  • C) Set up two VPN tunnels using Cloud VPN: one between VPC #1 and VPC #2, another between VPC #2 and VPC #3. Update firewall rules to permit inter-instance traffic.
  • D) Peer all three VPCs: Peer VPC #1 with VPC #2 and peer VPC #2 with VPC #3. Update firewall rules to enable traffic between instances.

Correct Answer

B.


The Architect’s Analysis

Step-by-Step Winning Logic

The requirement demands secure, direct internal-IP communication between isolated VPCs without overlapping subnet IPs. Attaching multiple NICs (network interfaces) to the single Compute Engine instance in VPC #1, one connected to VPC #2 and one to VPC #3, provides direct Layer 3 connectivity with full control over traffic flows. (In GCP, each NIC on a VM must attach to a different VPC, and interfaces are generally fixed at instance creation time.) This design respects the subnet non-overlap constraint and maintains strict network separation while allowing Instance #1 to act as a multi-homed bridge.
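As a sketch, the Option B instance might be created as follows. All resource names, the zone, and the subnets are hypothetical, and the command is printed for review rather than executed:

```shell
# Sketch only: composes the gcloud command for a three-NIC instance and prints it
# as a dry run instead of running it. All resource names are hypothetical.
# Note: each NIC must attach to a different VPC, and NICs are fixed at creation time.
create_cmd='gcloud compute instances create game-server-1 \
  --zone=us-central1-a \
  --network-interface=network=vpc-gaming,subnet=subnet-1 \
  --network-interface=network=vpc-analytics,subnet=subnet-2,no-address \
  --network-interface=network=vpc-backend,subnet=subnet-3,no-address'
printf '%s\n' "$create_cmd"
```

The `no-address` key keeps the secondary NICs internal-only, which matches the internal-IP requirement.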

Network peering (Option D) can enable connectivity, but as written it relies on transitive peering, which GCP does not support (VPC peering is non-transitive). Cloud VPN tunnels (Option C) introduce unnecessary operational toil, latency, and cost for intra-region connectivity. Cloud Router route advertisements (Option A) alone do not enable cross-VPC communication without peering or VPN, so Option A fails to fulfill the requirement.

From an SRE perspective, this multi-NIC approach minimizes additional networking components to monitor and maintain, while keeping latency low. FinOps benefits arise from avoiding the extra cost of VPN tunnels or multiple network appliance instances.
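Option B still requires ingress rules in the peer VPCs so the new NICs can reach Instances #2 and #3. A minimal sketch for the analytics side, with a hypothetical network name, source CIDR, and port:

```shell
# Sketch: an ingress rule letting Instance #1's analytics-facing NIC reach
# Instance #2. Network name, CIDR, and port are hypothetical; scope the rule
# to the narrowest ports and source ranges the application actually needs.
fw_cmd='gcloud compute firewall-rules create allow-game-to-analytics \
  --network=vpc-analytics \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:8443,icmp \
  --source-ranges=10.2.0.0/24'
printf '%s\n' "$fw_cmd"
```

A mirror rule would be needed in the backend VPC for the third NIC.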

The Traps (Distractor Analysis)

  • Why not A? Cloud Router advertises routes but does not establish cross-VPC connectivity without peering or VPN, so connectivity is incomplete.
  • Why not C? Cloud VPN adds high complexity, managed service cost, and latency overhead unsuitable for intra-region internal traffic.
  • Why not D? VPC peering is non-transitive—peering VPC #1 with #2 and #2 with #3 does not automatically connect #1 with #3. You’d need to peer #1 with #3 explicitly, which this option does not include.
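For contrast, here is a sketch of the explicit VPC #1 ↔ VPC #3 peering that Option D omits. Peering must be created from both networks before it becomes active; network names are hypothetical:

```shell
# Sketch: VPC peering is non-transitive, so VPC #1 <-> VPC #3 needs its own
# peering, created once from each side. Network names are hypothetical.
peer_cmds='gcloud compute networks peerings create gaming-to-backend \
  --network=vpc-gaming --peer-network=vpc-backend
gcloud compute networks peerings create backend-to-gaming \
  --network=vpc-backend --peer-network=vpc-gaming'
printf '%s\n' "$peer_cmds"
```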

The Architect Blueprint

Mermaid Diagram illustrating multi-NIC instance bridging VPCs:

graph TD
  subgraph VPC1
    VM1[Instance #1]
  end
  subgraph VPC2
    VM2[Instance #2]
  end
  subgraph VPC3
    VM3[Instance #3]
  end
  VM1 -->|NIC1| VM2
  VM1 -->|NIC2| VM3
  style VPC1 fill:#a2c4c9,stroke:#333
  style VPC2 fill:#f4cccc,stroke:#333
  style VPC3 fill:#d9ead3,stroke:#333

Diagram Note: Instance #1 is multi-homed with two NICs attached to different VPCs enabling direct internal IP communication across otherwise isolated networks.
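One guest-OS caveat worth knowing: on a multi-NIC Linux VM, the default route lives on the primary interface (nic0), and secondary NICs typically only get routes for their own subnet. Reaching other ranges through a secondary NIC may need an explicit route. A sketch, with a hypothetical interface name and destination CIDR, printed as a dry run:

```shell
# Sketch: on a multi-NIC Linux guest, secondary NICs get routes only for their
# own subnet by default; reaching other ranges through them needs explicit
# routing. Interface name and destination CIDR below are hypothetical.
route_cmd='sudo ip route add 10.3.0.0/16 dev ens6'
printf '%s\n' "$route_cmd"
```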

The Decision Matrix

| Option | Est. Complexity | Est. Monthly Cost | Pros | Cons |
| --- | --- | --- | --- | --- |
| A. Cloud Router advertise | Low | Low | Simple routing concept | Does not enable cross-VPC connectivity alone; needs peering/VPN |
| B. Multiple NICs on instance | Medium | Low–Moderate | Direct connectivity with granular control; low latency | Slightly more complex VM config; potential NIC limits per VM |
| C. VPN tunnels | High | High (VPN data + HA costs) | Secure encrypted links; familiar tech | Adds latency and operational overhead; expensive intra-region |
| D. VPC peering | Medium | Low | Low latency, no bandwidth charges | Non-transitive; peering #1<>#3 missing; not sufficient alone |

Real-World Practitioner Insight

Exam Rule

For the exam, always consider multiple NICs first when a single VM must connect directly and securely across isolated VPCs with non-overlapping IP spaces.

Real World

In production, multi-NIC VMs are invaluable for separating security zones, building hybrid network topologies, or acting as proxy/gateway nodes without incurring VPN or complex peering costs.

GCP Professional Cloud Architect Drills

Design, develop, and manage robust, secure, and scalable Google Cloud solutions.