What we believe

Performance is a feature, not a footnote

We publish response times: 5-15ms for R2 signed URLs, 20-30ms with full caching, 400ms worst-case with all security checks.

Most companies say "blazing fast" and hide the numbers. We say "here's the benchmark, run it yourself." If something's slow, we fix the architecture, not the marketing copy.

Example: Our Supabase integration went from 3311ms to 848ms (a 74% reduction) by adding multi-layer caching (memory → Redis → database). The before/after metrics are documented in our GitHub repo.
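The lookup order behind that caching is simple to sketch. This is a minimal read-through illustration, not our actual code: the Redis layer is stubbed with a Map, and the names are illustrative.

```typescript
// Read-through cache sketch: memory -> Redis -> DB.
// The "redis" layer is stubbed with a Map here; a real deployment
// would use a Redis client and set a TTL on each key.
const memory = new Map<string, unknown>();
const redis = new Map<string, unknown>(); // stand-in for a Redis client

async function cachedGet<T>(key: string, fetchFromDb: () => Promise<T>): Promise<T> {
  // Layer 1: process memory. Fastest, but per-instance only.
  if (memory.has(key)) return memory.get(key) as T;

  // Layer 2: shared cache. Survives restarts, shared across instances.
  if (redis.has(key)) {
    const value = redis.get(key) as T;
    memory.set(key, value); // backfill the faster layer
    return value;
  }

  // Layer 3: the database, the slow path we're trying to avoid.
  const value = await fetchFromDb();
  redis.set(key, value);
  memory.set(key, value);
  return value;
}
```

Each layer backfills the one above it, so repeated reads get progressively cheaper.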

Read the code, not the promises

Our rate limiter isn't "enterprise-grade" because we said so. It has 7 security layers: Cloudflare DDoS protection → Arcjet bot detection → API key validation → memory guards → Redis rate limits → quota enforcement → abuse logging.

We didn't build this to impress VCs. We built it because we got mass-account-creation attacks during testing and needed something that wouldn't fall over.
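In spirit, that pipeline is just an ordered list of checks where the first failure short-circuits. A minimal sketch under that assumption; the two stubbed checks below are placeholders, since the real layers call out to Cloudflare, Arcjet, Redis, and so on:

```typescript
// Each layer inspects the request and either passes it on or rejects it.
// The check bodies here are stubs standing in for external services.
interface UploadRequest {
  apiKey?: string;
  requestsThisMinute: number;
}

type Layer = { name: string; check: (req: UploadRequest) => boolean };

const layers: Layer[] = [
  { name: "api-key-validation", check: (r) => Boolean(r.apiKey) },
  { name: "redis-rate-limit", check: (r) => r.requestsThisMinute <= 60 },
  // remaining layers: DDoS protection, bot detection, memory guards,
  // quota enforcement, abuse logging
];

// The first failing layer wins; its name goes into the rejection
// (and, in the real system, into the abuse log).
function runLayers(req: UploadRequest): { allowed: boolean; failedAt?: string } {
  for (const layer of layers) {
    if (!layer.check(req)) return { allowed: false, failedAt: layer.name };
  }
  return { allowed: true };
}
```

Ordering matters: the cheap, coarse checks run first so an obvious attack never reaches the expensive layers.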

Pricing should make sense

Free tier: 1,000 requests/month. Why? Because that's enough to test properly (roughly 300 uploads), but not enough to run production traffic for free.

Pro tier: $24/month for 50,000 requests.

We charge for operations (signed URLs, analytics, security), not storage or traffic. Cloudinary charges $99/month minimum because they host your files. We charge $24 because we don't.

Lock-in is lazy engineering

If switching providers requires rewriting your app, the abstraction failed.

Our SDK: Same code works with R2, Vercel Blob, Supabase Storage, Uploadcare. Provider credentials are in the request body, not hardcoded. Change 3 lines to switch providers. No migration scripts. No downtime.
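To make that concrete, here is roughly what a provider switch looks like from the caller's side. The field names are hypothetical, not the actual SDK schema; the point is that only the provider block changes:

```typescript
// Hypothetical request shapes: only provider and credentials differ,
// the rest of the upload call is identical across backends.
const uploadToR2 = {
  provider: "r2",
  credentials: { accountId: "ACCOUNT_ID", accessKeyId: "KEY_ID", secretAccessKey: "SECRET" },
  file: "avatar.png",
  contentType: "image/png",
};

// Switching to Supabase Storage: same call, different provider block.
const uploadToSupabase = {
  ...uploadToR2,
  provider: "supabase",
  credentials: { projectUrl: "https://xyz.supabase.co", serviceKey: "SERVICE_KEY" },
};
```

Because credentials travel in the request rather than living in hardcoded client setup, the switch never touches the rest of the app.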

This costs us revenue (you could leave easily), but it's honest. We'd rather compete on quality than handcuffs.

Open about tradeoffs

Our batch operations count as 1 request (100 files = 1 API call). This is extremely efficient, but it means you could theoretically game the system by batching everything. So we cap it: Free tier gets 5 files per batch, Pro gets 100, and Enterprise limits are custom.
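Enforcing those caps is nearly a one-liner. A sketch using the tier names above (Enterprise is omitted from the table since its limit is negotiated per contract, and any hypothetical config would be made up):

```typescript
// Per-tier batch caps. Enterprise limits are negotiated, so they
// would come from per-account config rather than this table.
const BATCH_LIMITS: Record<string, number> = { free: 5, pro: 100 };

function batchAllowed(tier: string, fileCount: number): boolean {
  const limit = BATCH_LIMITS[tier];
  // Unknown tiers are rejected rather than treated as unlimited.
  return limit !== undefined && fileCount > 0 && fileCount <= limit;
}
```

Failing closed on unknown tiers is the same instinct as the rate limiter: a missing config entry should never mean "no limit".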

Response times: 200-300ms cached, 700-900ms on cold starts. We don't say "instant" — that's lying. We say "faster than proxying data through servers" and show the math.