While preparing for the AWS SAP-C02, many candidates struggle to decide when to choose serverless over container-based refactoring. In the real world, this is fundamentally a trade-off between eliminating operational overhead and taking on migration complexity. Let’s drill into a simulated scenario.
The Scenario #
StreamVibe, a digital media startup, operates a user-generated video platform that experiences unpredictable traffic with massive spikes during quarterly talent competitions. Their current architecture consists of:
- Web tier: EC2 instances in an Auto Scaling Group serving the upload interface
- Storage: Videos stored on EBS volumes attached to processing instances
- Processing tier: EC2 instances in a separate Auto Scaling Group consuming from an SQS queue
- Classification: A licensed third-party video recognition library requiring specialized installation and maintenance
- Content delivery: Static web assets served from the same EC2 fleet
The engineering VP has mandated a refactoring initiative with three non-negotiable goals:
- Minimize operational overhead by maximizing AWS managed services
- Eliminate third-party software licensing and maintenance burden
- Maintain cost efficiency during traffic variability
Key Requirements #
Redesign the architecture to maximize use of AWS managed services, eliminate custom software dependencies, and reduce operational complexity while handling variable traffic patterns cost-effectively.
The Options #
- A) Migrate the web application to Amazon ECS containers; use Spot Instances for the SQS processing Auto Scaling Group; replace custom software with Amazon Rekognition for video classification.
- B) Store uploaded videos in Amazon EFS and mount the file system to the web application EC2 instances; use AWS Lambda functions that invoke the Amazon Rekognition API to process the SQS queue for video classification.
- C) Host the web application’s static content in Amazon S3; store uploaded videos in Amazon S3; use S3 Event Notifications to publish to the SQS queue; use AWS Lambda functions that invoke the Amazon Rekognition API to process the SQS queue for video classification.
- D) Use AWS Elastic Beanstalk to launch the web application EC2 instances in an Auto Scaling Group and deploy a worker environment to process the SQS queue; replace custom software with Amazon Rekognition for video classification.
Correct Answer #
Option C - Full serverless refactoring with S3 static hosting and Lambda-based video processing.
The Architect’s Analysis #
Step-by-Step Winning Logic #
This solution represents the optimal trade-off for the stated requirements:
- Operational Overhead Minimization:
  - S3 static hosting eliminates web server patching, scaling, and monitoring
  - Lambda removes all compute infrastructure management
  - S3 Event Notifications provide native event routing with a single bucket-level configuration (see the sketch after this list)
  - No Auto Scaling Group tuning, no capacity planning
- Cost Optimization for Variable Traffic:
  - S3 hosting: pay only for storage + requests (typically $0.023/GB-month + $0.0004 per 1,000 GET requests)
  - Lambda: billed per millisecond of execution (no idle cost during low traffic)
  - Rekognition: pay per minute of video analyzed (usage-based pricing)
  - Contrast with EC2: even with Auto Scaling, the minimum instance count incurs a base cost during low-traffic periods
- Third-Party Dependency Elimination:
  - Rekognition is fully managed (no software installation, licensing, or version management)
  - Integrated with Lambda via the AWS SDK (no custom integration layers)
- Scalability Without Engineering:
  - S3 automatically handles request spikes
  - Lambda scales to 1,000 concurrent executions by default (a quota increase can be requested)
  - No Auto Scaling policy tuning or warm-up delays
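As referenced above, wiring S3 to the existing SQS queue is a single bucket-level notification configuration. Below is a minimal sketch in Python with boto3; the bucket name, queue ARN, and prefix/suffix filters are hypothetical placeholders, and the queue’s access policy must separately allow S3 to send messages.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical names for illustration; substitute your own bucket and queue.
BUCKET = "streamvibe-video-uploads"
QUEUE_ARN = "arn:aws:sqs:us-east-1:123456789012:video-classification-queue"

# Route new-object events for uploaded videos straight to the SQS queue.
s3.put_bucket_notification_configuration(
    Bucket=BUCKET,
    NotificationConfiguration={
        "QueueConfigurations": [
            {
                "QueueArn": QUEUE_ARN,
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {
                    "Key": {
                        "FilterRules": [
                            {"Name": "prefix", "Value": "uploads/"},
                            {"Name": "suffix", "Value": ".mp4"},
                        ]
                    }
                },
            }
        ]
    },
)
```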
The Traps (Distractor Analysis) #
- Why not Option A?
  - ECS still requires cluster management, task definition versioning, and container image maintenance
  - Spot Instances add operational complexity (interruption handling, mixed instance policies)
  - Does not address static content hosting - still serving from compute instances
  - Fails the “maximize managed services” requirement - you still own the orchestration layer and images, so containers are not fully managed
- Why not Option B?
  - EFS + EC2 architecture retains server management (patching, AMI updates, scaling policies)
  - EFS Standard (~$0.30/GB-month) is expensive for video storage compared to S3 Standard (~$0.023/GB-month)
  - Mounting EFS to multiple instances introduces potential I/O bottlenecks and throughput-mode tuning
  - Static content is still served from EC2 - misses the S3 hosting opportunity
- Why not Option D?
  - Elastic Beanstalk is Platform-as-a-Service, not serverless - it still runs on EC2 instances you manage
  - Worker environments still require capacity planning and instance management
  - Higher operational overhead than Lambda (platform updates, environment health monitoring)
  - Cost inefficiency - EB environments maintain minimum instance capacity even during idle periods
  - Partial compliance - reduces overhead compared to raw EC2, but doesn’t maximize managed services
The Architect Blueprint #
Diagram Note: S3 Event Notifications publish to the SQS queue; Lambda consumes the queue through an event source mapping and calls Rekognition, creating a fully serverless, event-driven pipeline with zero server management.
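To make the pipeline concrete, here is a minimal Lambda handler sketch, assuming the function is attached to the queue via an SQS event source mapping. The environment variable names are placeholders, and the optional SNS topic and IAM role for Rekognition’s asynchronous completion notifications are assumptions, not part of the original scenario.

```python
import json
import os
from urllib.parse import unquote_plus

import boto3

rekognition = boto3.client("rekognition")

# Hypothetical environment variables set on the function for illustration.
SNS_TOPIC_ARN = os.environ.get("REKOGNITION_SNS_TOPIC_ARN", "")
REKOGNITION_ROLE_ARN = os.environ.get("REKOGNITION_ROLE_ARN", "")


def handler(event, context):
    """Triggered by the SQS event source mapping; each SQS record wraps an S3 notification."""
    for sqs_record in event["Records"]:
        body = json.loads(sqs_record["body"])

        # S3 test events and malformed messages have no "Records" key; skip them.
        for s3_record in body.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = unquote_plus(s3_record["s3"]["object"]["key"])

            kwargs = {
                "Video": {"S3Object": {"Bucket": bucket, "Name": key}},
                "MinConfidence": 70,
            }
            # Optional: publish job completion to SNS so a downstream consumer
            # can fetch results with get_label_detection().
            if SNS_TOPIC_ARN and REKOGNITION_ROLE_ARN:
                kwargs["NotificationChannel"] = {
                    "SNSTopicArn": SNS_TOPIC_ARN,
                    "RoleArn": REKOGNITION_ROLE_ARN,
                }

            # Kick off asynchronous label detection on the uploaded video.
            response = rekognition.start_label_detection(**kwargs)
            print(f"Started Rekognition job {response['JobId']} for s3://{bucket}/{key}")
```

Because start_label_detection is asynchronous, the handler returns in well under a second per video; results are fetched later (for example, by a second function subscribed to the completion topic) with get_label_detection, which also sidesteps the 15-minute Lambda timeout noted in the matrix below.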
The Decision Matrix #
| Option | Est. Complexity | Est. Monthly Cost (1M videos, 500GB storage, 10TB egress) | Pros | Cons |
|---|---|---|---|---|
| A - ECS + Spot | High | $800-1,200 (ECS cluster ~$150, Spot ~$300-500, data transfer ~$350, Rekognition ~$200) | • Container portability • Spot pricing for batch | • ECS cluster management overhead • Spot interruption handling • Still serving static content from compute • Container image maintenance |
| B - EFS + Lambda | Medium-High | $1,400-1,800 (EC2 ~$300, EFS 500GB ~$150, Lambda ~$100, Rekognition ~$200, data transfer ~$650-850) | • Lambda for processing • Rekognition integration | • Expensive EFS storage for videos • EC2 operational overhead • EFS performance tuning complexity • High data transfer costs |
| C - Full Serverless ✅ | Low | $450-600 (S3 500GB ~$12, requests ~$40, Lambda ~$100, Rekognition ~$200, CloudFront optional ~$100-150) | • Zero server management • Pay-per-use pricing • Auto-scaling without configuration • Lowest operational cost • S3 durability (11 9’s) | • Lambda 15-minute timeout (large videos need chunking) • Cold start latency (mitigated with provisioned concurrency if needed) |
| D - Elastic Beanstalk | Medium | $700-950 (EB environment ~$250-400, worker ~$200-300, Rekognition ~$200, data transfer ~$50-150) | • Simplified EC2 deployment • Integrated monitoring | • Platform-level management still required • Minimum capacity costs • Not fully serverless • EB platform update windows |
FinOps Insight: Option C reduces monthly operational cost by 40-60% while eliminating infrastructure management. For 1M videos/month with variable traffic, Lambda’s pay-per-invocation model saves ~$200-400/month compared to maintaining minimum EC2/ECS capacity during low-traffic periods.
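The savings figure is easiest to sanity-check with a back-of-envelope model. The numbers below are illustrative assumptions (approximate on-demand m5.large pricing, 1M invocations at ~2 s and 1 GB of memory), not quoted rates, and they cover compute only.

```python
# Back-of-envelope comparison (illustrative numbers, not a price quote):
# two always-on m5.large instances vs. Lambda billed only while running.

HOURS_PER_MONTH = 730

# Baseline: 2 x m5.large (~$0.096/hr on-demand, approximate).
ec2_baseline = 2 * 0.096 * HOURS_PER_MONTH  # ~ $140/month before any spike capacity

# Lambda: 1M invocations, ~2 s average duration at 1024 MB.
invocations = 1_000_000
gb_seconds = invocations * 2 * 1.0  # duration (s) x memory (GB)
lambda_cost = invocations / 1e6 * 0.20 + gb_seconds * 0.0000166667  # requests + GB-seconds

print(f"EC2 baseline  : ${ec2_baseline:,.0f}/month")
print(f"Lambda compute: ${lambda_cost:,.0f}/month")
```

Even this rough model shows the always-on baseline costing several times the Lambda compute bill; the gap widens once you account for the second Auto Scaling Group and the idle headroom most teams keep for spikes.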
Real-World Practitioner Insight #
Exam Rule #
For the SAP-C02 exam, when you see “minimize operational overhead” + “variable traffic” + “replace third-party software”, always prioritize:
- S3 for static content and object storage
- Lambda for event-driven processing
- Managed AI/ML services (Rekognition, Comprehend, etc.) over self-hosted solutions
The exam favors maximum serverless adoption at the Professional level.
Real World #
In production, we would likely implement:
- CloudFront in front of S3 for global content delivery and DDoS protection (adds ~$100-150/month but reduces S3 request costs and improves UX)
- S3 Intelligent-Tiering for video storage to automatically move infrequently accessed videos to cheaper access tiers (saves ~30% on storage after 6 months; see the lifecycle sketch after this list)
- Step Functions to orchestrate multi-step video processing workflows (transcoding, thumbnail generation, Rekognition analysis) instead of single Lambda functions
- Volume or negotiated pricing for Rekognition if video processing volume is high and predictable (discount agreements can save 20-30% on Rekognition costs)
- Lambda Provisioned Concurrency for 10-50 warm instances during known peak hours (quarterly competitions) to eliminate cold starts
- S3 Transfer Acceleration if users upload from globally distributed locations
- Cost anomaly detection via AWS Cost Anomaly Detection to catch unexpected Rekognition or Lambda usage spikes
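For the Intelligent-Tiering item above, the transition is a single lifecycle rule. A sketch with boto3, reusing the same hypothetical bucket and prefix as earlier:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket name; the prefix mirrors the upload layout used earlier.
s3.put_bucket_lifecycle_configuration(
    Bucket="streamvibe-video-uploads",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "videos-to-intelligent-tiering",
                "Status": "Enabled",
                "Filter": {"Prefix": "uploads/"},
                # Move objects into Intelligent-Tiering right after upload;
                # S3 then shifts them between access tiers automatically.
                "Transitions": [
                    {"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}
                ],
            }
        ]
    },
)
```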
Trade-off Alert: The exam scenario doesn’t mention global users or latency requirements. In reality, if StreamVibe has international users, the architecture would need CloudFront + S3 Transfer Acceleration, increasing costs by ~15-20% but improving upload speeds by roughly 50-200% for distant users.
Hybrid Consideration: For enterprise customers with compliance requirements to keep video processing on-premises (e.g., studios handling unreleased films), we’d use S3 File Gateway to present S3 as NFS/SMB shares to on-prem processing systems while still using Lambda + Rekognition for the cloud-native workflow.