
GCP ACE Drill: Private Connectivity - Securing Cloud Run to GKE API Access

Jeff Taakey
21+ Year Enterprise Architect | Multi-Cloud Architect & Strategist.
Jeff's Architecture Insights
Go beyond static exam dumps. Jeff’s Insights is engineered to cultivate the mindset of a Production-Ready Architect. We move past ‘correct answers’ to dissect the strategic trade-offs and multi-cloud patterns required to balance reliability, security, and TCO in mission-critical environments.

While preparing for the GCP Associate Cloud Engineer (ACE) exam, many candidates struggle with private service connectivity patterns. In the real world, this is fundamentally about achieving secure communication without sacrificing operational simplicity or incurring unnecessary cost. Let’s drill into a simulated scenario that reflects a common industry use case.

The Scenario

FinCred Solutions is a fast-growing fintech startup offering a secure loan approval platform. Their development team deploys customer-facing microservices on Cloud Run to rapidly scale during peak loan application periods. Internally, the core loan validation logic is hosted on Google Kubernetes Engine (GKE) with tightly controlled APIs.

To adhere to compliance and security standards, FinCred must ensure that the Cloud Run services can reach the GKE API privately, without exposing the API endpoints to the public internet, all while following Google-recommended practices that prioritize operational simplicity and strong security.

Key Requirements

Enable secure, private connectivity from the Cloud Run service to the API running in the GKE cluster, ensuring that no traffic traverses public IP addresses and leveraging Google Cloud’s managed networking features.

The Options

  • A) Deploy a public ingress resource on the GKE cluster to expose the API to the internet. Use Cloud Armor to whitelist the Cloud Run service’s IP address dynamically by having the application fetch its public IP and update the Cloud Armor policy on startup. Allow Cloud Run to call the API on ports 80 and 443.
  • B) Create a VPC ingress firewall rule to allow inbound connections from 0.0.0.0/0 on ports 80 and 443 to the GKE nodes.
  • C) Create a VPC egress firewall rule to allow outbound connections from the Cloud Run service to 0.0.0.0/0 on ports 80 and 443.
  • D) Deploy an internal Application Load Balancer (internal HTTP(S) load balancer) to expose the API privately within the VPC. Configure Cloud DNS with the internal load balancer’s IP address. Deploy a Serverless VPC Access Connector to enable the Cloud Run service to call the API via the internal load balancer’s fully qualified domain name.

Correct Answer

D.


The Architect’s Analysis


Step-by-Step Winning Logic

Google Cloud’s recommended pattern for private connectivity between Cloud Run and a GKE-hosted API is to deploy an internal Application Load Balancer that exposes the API on a private IP address within the VPC. Using Cloud DNS to resolve the load balancer’s address enables standard, name-based service discovery.
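As a concrete sketch of the Cloud DNS piece, the commands might look like the following, assuming the internal load balancer already exists; the zone name, domain, VPC name, and IP address are illustrative placeholders, not values given in the scenario.

    # Create a private DNS zone visible only inside the VPC.
    gcloud dns managed-zones create fincred-internal \
        --dns-name="internal.fincred.example." \
        --visibility=private \
        --networks="fincred-vpc" \
        --description="Private zone for internal APIs"

    # Point a record at the internal load balancer's private frontend IP
    # (10.0.0.10 is a placeholder for the ILB address).
    gcloud dns record-sets create api.internal.fincred.example. \
        --zone="fincred-internal" \
        --type="A" \
        --ttl=300 \
        --rrdatas="10.0.0.10"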

To allow Cloud Run services (which run in a managed environment outside the default VPC) to access this internal IP, you deploy a Serverless VPC Access Connector, a purpose-built managed service that provides private, secure egress from Cloud Run to resources inside the VPC.
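A minimal sketch of that wiring, assuming hypothetical resource names (fincred-connector, fincred-vpc, loan-frontend) and a region of us-central1:

    # Create a Serverless VPC Access connector in the service's region.
    # The /28 range must not overlap any existing subnet in the VPC.
    gcloud compute networks vpc-access connectors create fincred-connector \
        --region=us-central1 \
        --network=fincred-vpc \
        --range=10.8.0.0/28

    # Attach the connector to the Cloud Run service and route all egress
    # through the VPC so calls to the internal load balancer stay private.
    gcloud run deploy loan-frontend \
        --image=us-docker.pkg.dev/PROJECT_ID/repo/loan-frontend \
        --region=us-central1 \
        --vpc-connector=fincred-connector \
        --vpc-egress=all-traffic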

This approach aligns with several key principles:

  • Security: The internal API is never exposed to the public internet, eliminating that entire attack surface.
  • SRE Simplicity: Avoids brittle IP whitelisting on Cloud Armor (Option A) and overly permissive firewall rules (Options B & C).
  • Cost Efficiency: Serverless VPC Access and internal load balancers are managed services with predictable pricing and minimal operational overhead.
  • Google Best Practices: Matches Google’s documented patterns for private connectivity between Cloud Run and GKE.

The Trap (Distractor Analysis)

  • Why not A?
    This option exposes the API publicly and then tries to claw back security with IP whitelisting in Cloud Armor. Cloud Run does not provide stable public egress IPs, so the whitelist would drift constantly; the pattern is fragile, increases operational toil, and violates the principle of minimizing attack surface.

  • Why not B?
    Allowing ingress from 0.0.0.0/0 nullifies network security by exposing the GKE nodes to the entire internet, a major security risk.

  • Why not C?
    An egress rule only governs what traffic may leave; it neither creates a private path into the VPC nor restricts who can reach the GKE API. Moreover, Cloud Run runs outside the VPC, so a VPC firewall rule would not even apply to it without a connector. Opening outbound access to 0.0.0.0/0 does nothing to guarantee private connectivity (see the contrast sketch below).
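For contrast, a least-privilege version of the firewall thinking behind Options B and C would admit only the connector’s range rather than the whole internet. A sketch, with placeholder names and ranges:

    # Allow HTTPS to the GKE nodes only from the Serverless VPC Access
    # connector's /28 range, instead of 0.0.0.0/0.
    gcloud compute firewall-rules create allow-connector-to-gke \
        --network=fincred-vpc \
        --direction=INGRESS \
        --action=ALLOW \
        --rules=tcp:443 \
        --source-ranges=10.8.0.0/28 \
        --target-tags=gke-node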

The Architect Blueprint

Mermaid Diagram illustrating the flow of the correct solution:

graph TD
    CloudRunService[Cloud Run Service] -->|"Serverless VPC Access Connector"| VPC[VPC Network]
    VPC --> InternalLB[Internal Application Load Balancer]
    InternalLB -->|HTTPS| GKECluster[GKE API Pods]
    style CloudRunService fill:#4285F4,stroke:#333,color:#fff
    style InternalLB fill:#34A853,stroke:#333,color:#fff
    style GKECluster fill:#FBBC05,stroke:#333,color:#333

Diagram Note: The Cloud Run service routes secured API requests through a Serverless VPC Access Connector into the VPC, hitting the internal load balancer which forwards to the private GKE API pods.
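Once wired up, a quick smoke test from inside the Cloud Run container confirms the path end to end; the FQDN below is the hypothetical record from the DNS sketch above, and /healthz is a placeholder endpoint.

    # Resolves via the private Cloud DNS zone and rides the connector
    # to the internal load balancer (placeholder FQDN and path).
    curl -s https://api.internal.fincred.example/healthz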

Real-World Practitioner Insight

Exam Rule

For the GCP ACE exam, when asked about private connectivity between managed serverless (Cloud Run) and GKE-hosted APIs, always pick an internal load balancer combined with a Serverless VPC Access connector.

Real World

In production, this pattern minimizes risk and effort while providing clean separation of network zones. It also scales well and complies with both security and cost governance standards typically enforced by FinOps teams.
