Control vs Data Plane

Why we split them

If you've used AWS, you've heard "control plane vs data plane" thrown around. Most explanations are garbage. Here's the real reason we split them:

Your 10MB file upload shouldn't block someone else's 5ms API key check.

That's it. That's the whole philosophy.

Control Plane: The fast stuff

Runs on: Cloudflare Workers (275+ edge locations)
Response time: 5-50ms (P95)
Handles:

  • API key validation — Check Redis, verify signature
  • Rate limiting — Memory guard → Redis check → DB fallback
  • Quota enforcement — Check Redis cache of usage
  • Signed URL generation — Pure crypto (AWS SDK v3), no network calls
  • Request routing — Which provider? Which bucket?
  • Analytics metadata — Queue background jobs, don't wait

What's NOT here: Files. Never files.
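
The rate-limiting bullet's "memory guard → Redis check → DB fallback" chain can be sketched in a few lines. This is a simplified illustration, not the production code: the in-memory tier is real, the Redis tier is a pluggable stub (class name and shape are made up for this sketch), and the DB fallback would live in the stub's failure path.

```javascript
// Tier 1 is a per-isolate memory guard: cheap local rejection before any I/O.
// Tier 2 (Redis) is injected as an async callback; tier 3 (DB) would be its
// error-handling path. Both are omitted here because their clients aren't shown.
class TieredRateLimiter {
  constructor(limit, windowMs, redisCheck = null) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.redisCheck = redisCheck; // async (key) => boolean, e.g. Redis INCR + EXPIRE
    this.hits = new Map();        // key -> array of recent hit timestamps
  }

  async allow(key) {
    const now = Date.now();
    // Keep only hits inside the sliding window.
    const recent = (this.hits.get(key) || []).filter(t => now - t < this.windowMs);
    if (recent.length >= this.limit) return false; // rejected without touching Redis
    recent.push(now);
    this.hits.set(key, recent);
    // Only consult the shared counter once the local guard passes.
    return this.redisCheck ? this.redisCheck(key) : true;
  }
}
```

The point of the ordering: the memory check costs microseconds and absorbs abusive bursts, so Redis only sees traffic that already looks legitimate.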

Data Plane: The heavy stuff

Runs on: Your storage provider (R2, Vercel Blob, Supabase, Uploadcare)
Response time: Depends on file size and provider
Handles:

  • Actual file uploads — Your browser → Storage (direct)
  • File downloads — Storage → Your users (direct)
  • File storage — Provider's infrastructure, not ours
  • Bandwidth costs — You pay your provider, not us

What's NOT here: Our API. We're not in the file path.

Example: Upload a 5MB image

Step 1: Control Plane (15ms)
POST /api/upload/r2/signed-url
{
  "filename": "photo.jpg",
  "contentType": "image/jpeg",
  "r2AccessKey": "...",
  "r2SecretKey": "...",
  "r2Bucket": "my-photos"
}

→ Validate API key (10ms)
→ Check rate limit (2ms)
→ Generate signed URL (5ms, pure crypto)
→ Return response:
{
  "uploadUrl": "https://r2.../photo.jpg?signature=...",
  "publicUrl": "https://pub-xxx.r2.dev/photo.jpg"
}
Step 2: Data Plane (2-5 seconds, depending on connection)
// In your browser/app:
const res = await fetch(uploadUrl, {
  method: 'PUT',
  body: file, // 5MB image
  headers: { 'Content-Type': 'image/jpeg' }
});
if (!res.ok) throw new Error(`Upload failed: ${res.status}`);

→ File goes directly to R2
→ We're not involved
→ No bandwidth cost to us
→ Your R2 bill: ~$0.00007/month (5MB × $0.015/GB-month storage)
Step 3: Control Plane again (5ms, optional)
POST /api/upload/complete
{
  "uploadId": "photo.jpg",
  "provider": "r2"
}

→ Log analytics (queued, non-blocking)
→ Update usage counter
→ Return 200 OK

Total time in our API: 20ms (15ms + 5ms)
Total time uploading: 2-5 seconds (depends on your internet, not us)
Bandwidth we paid for: 0 bytes
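
The three steps above, glued together as one client-side helper. Endpoint paths and field names follow the example; treat them as illustrative, not a published API reference:

```javascript
// Full flow: control plane (fast) -> data plane (slow) -> control plane (fast).
// fetchFn is injectable so the helper can be tested without a network.
async function uploadImage(file, apiBase, creds, fetchFn = fetch) {
  // Step 1: control plane. Trade credentials for a signed URL (~15ms).
  const res = await fetchFn(`${apiBase}/api/upload/r2/signed-url`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ filename: file.name, contentType: file.type, ...creds }),
  });
  const { uploadUrl, publicUrl } = await res.json();

  // Step 2: data plane. The bytes go straight to storage; this is the slow part.
  await fetchFn(uploadUrl, {
    method: 'PUT',
    body: file,
    headers: { 'Content-Type': file.type },
  });

  // Step 3: control plane. Optional completion ping for analytics (~5ms).
  await fetchFn(`${apiBase}/api/upload/complete`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ uploadId: file.name, provider: 'r2' }),
  });

  return publicUrl;
}
```

Notice the shape: two tiny JSON requests bracket one large direct transfer, and only the middle one touches the file.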

Why this matters for performance

Traditional architecture (bad):

Your browser → Upload API → Storage
              ↑
              Bottleneck: Server bandwidth, CPU, memory
              
- 10 users uploading 10MB files = 100MB through your server
- Server maxes out at 100Mbps = uploads slow down
- CPU busy processing uploads = API requests slow down
- One slow upload blocks other requests
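
The arithmetic behind that bottleneck, counting only the inbound leg (a proxy actually carries every byte twice, in from the browser and out to storage):

```javascript
// 10 concurrent 10MB uploads sharing one 100 Mbps server link.
const users = 10, fileMB = 10, linkMbps = 100;
const inboundMbit = users * fileMB * 8;          // 800 Mbit arriving at the server
const secondsSaturated = inboundMbit / linkMbps; // link is pinned for this long
console.log(secondsSaturated); // → 8
```

Eight seconds of a saturated NIC, during which every 5ms API request is queued behind file bytes.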

Our architecture (good):

Your browser → Storage (direct)
Your browser → Our API (for permissions only)
              ↑
              No bottleneck: We never see files
              
- 10 users uploading 10MB files = 0MB through our servers
- Our API: 10 requests × 15ms = 150ms of total work (handled in parallel at the edge)
- No CPU load from files
- Fast requests stay fast

Why this matters for cost

Let's say you have 1,000 users uploading 100MB/month each = 100GB total traffic.

Traditional service (proxying files):
Inbound traffic: 100GB × $0.01/GB = $1.00
Outbound traffic: 100GB × $0.09/GB = $9.00
Storage (if they host): 100GB × $0.02/GB = $2.00
CPU/processing: ~$5.00
Total cost: ~$17 per 100GB of traffic, and it grows linearly with usage
At a flat $24/mo, margin evaporates as soon as heavier users show up (a single 1TB/month customer costs ~$125 on their own)
Actual price needed: $99/month minimum (hello, Cloudinary)
ObitoX (signed URLs only):
Traffic through us: 0GB
API requests: 1,000 users × ~100 uploads each = 100k requests
Cloudflare cost: $0 (well within the Workers free tier)
Redis cost: $0 (free tier)
Supabase cost: $0 (free tier)
Total cost: ~$0/month
Revenue at $24/mo × 1,000 users: $24,000/month
Profit margin: ~100% 🔥

Your users' cost (they pay R2 directly):
100GB storage: 100 × $0.015 = $1.50/month
100GB bandwidth: 100 × $0.00 = $0/month (R2 has no egress fees)

Combined cost: $24 (us) + $1.50 (R2) = $25.50/month
vs Cloudinary: $99/month minimum

You save $73.50/month by bringing your own storage.
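
The pricing above as a tiny model, using the per-GB rates quoted in this section (not live price sheets):

```javascript
// Cost of proxying N GB through your own infrastructure, per the rates above.
function proxyingCostUSD(gb) {
  const inbound = gb * 0.01, egress = gb * 0.09, storage = gb * 0.02, cpu = 5;
  return inbound + egress + storage + cpu;
}

// Cost of the signed-URL model: flat plan price plus R2 storage (egress is $0).
function byoStorageCostUSD(gb, planUSD = 24) {
  return planUSD + gb * 0.015;
}

console.log(proxyingCostUSD(100).toFixed(2));           // "17.00"
console.log(byoStorageCostUSD(100).toFixed(2));         // "25.50"
console.log((99 - byoStorageCostUSD(100)).toFixed(2));  // "73.50" saved vs a $99/mo service
```

The structural difference: the first function's cost scales with every gigabyte, the second scales only with storage at $0.015/GB.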

The tradeoff (honesty required)

Pro: Insanely cheap, insanely fast, no vendor lock-in
Con: More setup (you manage storage credentials)

What this means:

  • You create an R2/Vercel/Supabase account
  • You get API keys from them
  • You pass those keys to us in API requests
  • We generate signed URLs using your keys
  • You rotate keys, manage buckets, handle billing with them

If you want "just upload files, we handle everything", use Cloudinary ($99/mo).
If you want "I'll manage storage, you handle the API layer", use us ($24/mo in the example above).

When the data plane is slow, it's not our fault

Real scenario: User uploads 50MB file, takes 10 seconds, complains "your API is slow."

Truth: Our API took 15ms to generate the signed URL. The 10 seconds was their internet uploading to R2. We're not in that path.

We can't make R2 faster. We can't make their internet faster. We can only make our part fast.

This is the control plane / data plane split in action. We optimize what we control. We don't pretend to control what we don't.

How to think about it

Control Plane: The thin, fast, cheap layer
  • Permissions, security, routing, metadata
  • Cost: Nearly $0
  • Speed: 5-50ms

Data Plane: The thick, heavy, expensive layer
  • Actual files, bandwidth, storage
  • Cost: You pay your provider
  • Speed: Depends on file size and network

Traditional services combine both layers → expensive, slow, locked in
ObitoX separates them → cheap, fast, modular

That's the architecture. No magic, no buzzwords, just don't put files in the critical path.