
A Deep-Dive Guide to the New Era of Edge Computing — Cloudflare Workers, Durable Objects, D1, Vercel Edge, Deno Deploy, Fastly, Fly.io, Turso, and Region-aware Architecture (2025)


TL;DR: Edge computing, which started with Cloudflare Workers in 2017, has become the default infrastructure of most modern SaaS in 2024-2025. The UX bar of "50ms even for users on the other side of the planet" is the product of V8 Isolates plus WASM runtimes distributed across 300+ PoPs (Points of Presence). Edge is not an evolution of CDNs — it is a completely different architectural paradigm. It has matured from stateless request handling to Durable Objects (stateful), D1/Turso (edge SQLite), region-aware writes, and Smart Placement. This article maps the full terrain: the philosophies and trade-offs of Cloudflare Workers, Vercel Edge, Deno Deploy, Fastly, and Fly.io; edge databases (D1, Turso libSQL, PlanetScale, Neon); the cold-start race (V8 Isolate vs Container vs Firecracker vs WASM); and the 2025 edge AI stack.

From CDN to Edge Compute — 30 Years of Evolution

1st-Generation CDN (1998-) — Static Cache

Akamai was spun out of MIT in 1998. Static files like images, CSS, and JS were replicated to regional caches. Bandwidth savings plus reduced latency.

2nd-Generation CDN (2010-) — Dynamic Content

CloudFront (AWS, 2008), Cloudflare (2010). HTTP cache-control, purge APIs, signed URLs. Even dynamic pages could be partially cached.

3rd Generation — Edge Compute (2017-)

Cloudflare Workers (Sept 2017). "Run code on CDN nodes." Requests no longer have to reach origin; they are handled at the edge.

// The original Workers example
addEventListener('fetch', event => {
  event.respondWith(new Response('Hello from the edge!'))
})

Impact: Latency drops from 100ms to 5ms — 1/20th. Origin load shrinks dramatically.

4th Generation — Stateful Edge (2021-)

Cloudflare Durable Objects (2021) and Fly.io (2020) popularized "state at the edge." Until then the edge had been strictly stateless; now state with consistency guarantees could live at the edge.

5th Generation — Edge AI (2024-)

Cloudflare Workers AI, Vercel AI SDK, WebLLM. LLM inference at the edge. "AI near the user's device" is now reality.

Why Edge?

1. The Physics of Latency

Light travels at 300,000 km/s in vacuum and ~200,000 km/s in fiber. The Seoul-to-New York great-circle distance is roughly 11,000 km, giving a theoretical minimum round-trip of 110ms. In practice, routing inefficiencies push it to 150-200ms.

TCP handshake (1.5 RTT) plus TLS 1.3 handshake (1 RTT) plus the HTTP request totals at least 3.5 RTT — 500ms+ for Seoul-New York. If the edge responds from a Seoul PoP, it's 5ms.
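A quick sanity check of the arithmetic (a minimal sketch; the constants come from the paragraphs above):

const FIBER_KM_PER_MS = 200          // light in fiber: ~200,000 km/s
const seoulToNyKm = 11000            // great-circle distance
const oneWayMs = seoulToNyKm / FIBER_KM_PER_MS   // 55ms
console.log(oneWayMs * 2)            // 110ms theoretical minimum RTT
const handshakeRtts = 1.5 + 1 + 1    // TCP + TLS 1.3 + HTTP request
console.log(handshakeRtts * 150)     // 525ms at a realistic 150ms RTT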

2. Origin Protection

DDoS, traffic spikes, crawlers — the edge absorbs them. Origin stays focused on origin work.

3. Regulatory Compliance

GDPR, data residency — European users' data is processed only at European edges.

4. Cost Structure

Egress bandwidth costs 0.09 USD/GB on AWS Lambda versus 0 USD on Cloudflare Workers. At heavy traffic volumes, the cost gap reaches tens to hundreds of times.

Cloudflare Workers — The V8 Isolate Originator

Architecture

Workers run on V8 Isolates. Not containers, not VMs — isolated JS contexts inside a single V8 process.

Cloudflare Node (300+ PoPs globally)
├─ V8 Runtime (one process)
│   ├─ Isolate A (tenant 1, Worker X)
│   ├─ Isolate B (tenant 1, Worker Y)
│   ├─ Isolate C (tenant 2, Worker Z)
│   └─ ... thousands more

Benefits:

  • 5ms cold starts (vs 100ms+ for containers)
  • 3MB per isolate (vs 100MB+ for containers)
  • Tens of thousands of tenants per node

Constraints:

  • CPU time limits (Free 10ms, Paid 50ms, up to 30s)
  • No eval, no native add-ons
  • WASM is supported

Core APIs

import { Ai } from "@cloudflare/ai"

export default {
  async fetch(request, env, ctx) {
    // KV Store
    const value = await env.MY_KV.get("key")
    
    // D1 Database (SQLite)
    const { results } = await env.DB.prepare("SELECT * FROM users").all()
    
    // R2 (S3-compatible)
    const object = await env.MY_BUCKET.get("file.jpg")
    
    // Queues
    await env.MY_QUEUE.send({ event: "user_signup" })
    
    // AI
    const ai = new Ai(env.AI)
    const response = await ai.run("@cf/meta/llama-3-8b-instruct", {
      messages: [{ role: "user", content: "Hello" }]
    })
    
    return Response.json({ result: response })
  }
}

Durable Objects — Edge Actors

The core of stateful edge compute. Each Durable Object:

  • Exists as a single global instance
  • Is auto-placed in a specific region (Smart Placement)
  • Persists internal state (transactional SQLite since 2024)
  • Can maintain WebSocket connections

export class ChatRoom {
  constructor(state, env) {
    this.state = state
    this.sessions = []
  }

  async fetch(request) {
    // pair[0] is returned to the client; pair[1] stays on the server
    const pair = new WebSocketPair()
    const server = pair[1]
    server.accept()
    this.sessions.push(server)

    server.addEventListener('message', e => {
      for (const session of this.sessions) {
        session.send(e.data)  // broadcast to every connected session
      }
    })
    server.addEventListener('close', () => {
      this.sessions = this.sessions.filter(s => s !== server)
    })

    return new Response(null, { status: 101, webSocket: pair[0] })
  }
}
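A Durable Object is reached through a stub from a regular Worker. A minimal routing sketch (the binding name CHATROOM is an assumption, not from the original):

// Worker that maps each room name to its single global ChatRoom instance
export default {
  async fetch(request, env) {
    const room = new URL(request.url).searchParams.get('room') ?? 'lobby'
    // idFromName() deterministically maps the same name to the same object
    const id = env.CHATROOM.idFromName(room)
    return env.CHATROOM.get(id).fetch(request)  // forward the WebSocket upgrade
  }
}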

Typical uses: real-time chat, game matchmaking, collaborative editing (Google Docs-style), order processing.

D1 — Edge SQLite

Launched in 2022, GA in 2024. Replicates SQLite databases to each Cloudflare PoP. Primary lives in one region; read replicas are worldwide.

const { results } = await env.DB
  .prepare("SELECT * FROM users WHERE id = ?")
  .bind(userId)
  .all()

await env.DB
  .prepare("INSERT INTO orders (user_id, total) VALUES (?, ?)")
  .bind(userId, 99.99)
  .run()

Limit: 10GB per DB (as of 2024), writes are routed to the primary region.

R2 — S3-compatible Object Storage

Zero egress fees. S3-compatible API. BYO domain support added in 2024.
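A minimal read/write sketch against an R2 binding (reusing the MY_BUCKET binding name from the earlier example):

export default {
  async fetch(request, env) {
    if (request.method === 'PUT') {
      // Stream the request body straight into the bucket
      await env.MY_BUCKET.put('file.jpg', request.body)
      return new Response('stored', { status: 201 })
    }
    const object = await env.MY_BUCKET.get('file.jpg')
    if (object === null) return new Response('not found', { status: 404 })
    return new Response(object.body, { headers: { 'Content-Type': 'image/jpeg' } })
  }
}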

Workers AI

Launched in 2023. Open-source model inference on Cloudflare-operated GPU pools.

import { Ai } from "@cloudflare/ai"

const ai = new Ai(env.AI)

// Llama 3
const response = await ai.run("@cf/meta/llama-3-8b-instruct", {
  messages: [{ role: "user", content: "What is edge computing?" }]
})

// Stable Diffusion
const image = await ai.run("@cf/stabilityai/stable-diffusion-xl-base-1.0", {
  prompt: "A cat coding on a laptop"
})

// Embedding
const vector = await ai.run("@cf/baai/bge-base-en-v1.5", {
  text: "Hello world"
})

Vectorize

GA in 2024. Cloudflare's vector database. Workers AI plus Vectorize yields edge RAG.
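A sketch of the edge RAG loop this combination enables, assuming a Vectorize binding named VECTOR_INDEX and the models shown earlier (here calling the env.AI binding directly):

export default {
  async fetch(request, env) {
    const { question } = await request.json()

    // 1. Embed the question at the edge
    const { data } = await env.AI.run('@cf/baai/bge-base-en-v1.5', { text: [question] })

    // 2. Find the nearest documents in Vectorize
    const { matches } = await env.VECTOR_INDEX.query(data[0], { topK: 3 })

    // 3. Answer with the retrieved context
    const context = matches.map(m => m.id).join('\n')
    const answer = await env.AI.run('@cf/meta/llama-3-8b-instruct', {
      messages: [
        { role: 'system', content: `Answer using:\n${context}` },
        { role: 'user', content: question }
      ]
    })
    return Response.json(answer)
  }
}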

Vercel Edge Functions + Middleware

Launched in 2022. Aimed at framework developers, Next.js-centric.

Edge Runtime

Built on V8 isolates like Cloudflare Workers. Supports some Node.js APIs (AsyncLocalStorage, Buffer).

// Next.js Edge API Route
export const config = { runtime: 'edge' }

export default async function handler(req) {
  const country = req.geo?.country
  return new Response(`Hello from ${country}`)
}

Middleware

// middleware.ts
import { NextResponse } from 'next/server'

export function middleware(request) {
  const country = request.geo?.country
  if (country === 'KR') {
    return NextResponse.rewrite(new URL('/kr', request.url))
  }
}

export const config = { matcher: '/((?!api|_next).*)' }

Primarily used for A/B testing, geolocation routing, and bot protection.

Vercel Edge Config

Read-only KV with ~20ms global propagation. Ideal for feature flags and A/B bucket definitions.
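A minimal feature-flag read with the @vercel/edge-config client (the greetingBanner key is hypothetical; the store is wired up via the EDGE_CONFIG env var):

import { get } from '@vercel/edge-config'

export const config = { runtime: 'edge' }

export default async function handler() {
  // Millisecond-scale read from the regional Edge Config replica
  const showBanner = await get('greetingBanner')
  return Response.json({ showBanner })
}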

Vercel AI SDK

Launched in 2023. Standardizes streamText, generateObject, and tool calling.

import { streamText } from 'ai'
import { openai } from '@ai-sdk/openai'

export async function POST(req) {
  const { messages } = await req.json()
  const result = await streamText({
    model: openai('gpt-4-turbo'),
    messages,
  })
  return result.toDataStreamResponse()
}

Deno Deploy — V8 plus TypeScript Native

Deno was built by Ryan Dahl, creator of Node.js. Deno Deploy shipped in 2021.

Characteristics

  • Built on V8 isolates
  • TypeScript native — transpilation is automatic
  • Web-standard APIs — standard fetch/Request/Response instead of Node.js compat
  • npm compatibility (added in 2023)

Deno.serve((req) => {
  return new Response("Hello from Deno Deploy")
})

Netlify Edge Functions

Uses Deno Deploy's runtime internally. Netlify integration.

Fastly Compute@Edge — 100% WASM

GA in 2020. Where Cloudflare is V8-based, Fastly is Wasmtime-based.

Characteristics

  • Language-agnostic — Rust, Go, JavaScript, AssemblyScript
  • Deterministic cold starts — no GC, claims 35μs
  • Relatively expensive, enterprise-targeted

// Rust on Fastly
use fastly::{Error, Request, Response};

#[fastly::main]
fn main(req: Request) -> Result<Response, Error> {
    Ok(Response::from_body("Hello from WASM edge"))
}

KV Store, Config Store, Secret Store

Edge storage primitives similar to Cloudflare's.

Fly.io — Regional VM Orchestration

Launched in 2020. "Run actual containers at the edge."

Architecture

  • Firecracker MicroVMs (the same virtualization technology that powers AWS Lambda)
  • 35+ regions
  • Pick regions via fly.toml

# fly.toml
app = "my-app"
primary_region = "nrt"  # Tokyo

[build]
  image = "my-app:latest"

[[services]]
  internal_port = 8080
  protocol = "tcp"
  [[services.ports]]
    port = 443
    handlers = ["tls", "http"]

[http_service]
  auto_stop_machines = true
  auto_start_machines = true
  min_machines_running = 0

vs Cloudflare Workers

Axis          Cloudflare Workers      Fly.io
Runtime       V8 Isolate              Firecracker VM
Language      JS + WASM               anything
Cold start    5ms                     hundreds of ms to seconds
State         stateless by default    always-running possible
DB            D1, KV                  Postgres, SQLite, full stack
WebSocket     Durable Objects         plain TCP
Pricing       per request             per VM-time

Selection criteria:

  • JS/TS web APIs, stateless → Cloudflare
  • Python/Rust/Go long-running processes, Postgres required → Fly.io
  • Phoenix LiveView, Discord-style → Fly.io (several public case studies)

Phoenix LiveView + Fly.io

Fly.io employs Phoenix's creator and is an official partner of the Elixir/Phoenix ecosystem. LiveView depends on persistent WebSockets, and Fly.io's always-on VMs are a perfect fit.

AWS's Answer — Lambda@Edge, CloudFront Functions

CloudFront Functions (2021)

  • JavaScript only
  • 1ms cold starts
  • 1ms max execution time
  • Header rewrites, simple redirects only

Lambda@Edge (2017)

  • Node.js, Python
  • 5-second timeout
  • More powerful but has cold starts
  • Only 13 regions (vs Cloudflare's 300+)

Limitation: Targeted at customers who must stay inside the AWS ecosystem. As a general-purpose edge platform, it trails Cloudflare/Vercel/Fly.

Edge Database Competition

The real bottleneck at the edge is data. Stateless compute is easy, but keeping state consistent is a fight against physics.

Turso — libSQL (Chiselstrike, 2022)

  • SQLite fork "libSQL" — built-in remote replication
  • Read replicas per region, writes go to primary
  • Claims 1ms read latency

turso db create my-db --location fra
turso db replicate my-db hnd   # add a Tokyo replica

import { createClient } from '@libsql/client'

const db = createClient({
  url: process.env.TURSO_URL,
  authToken: process.env.TURSO_TOKEN
})

const result = await db.execute({
  sql: "SELECT * FROM users WHERE id = ?",
  args: [userId]
})

Innovations: Git-like branching; Embedded Replicas (embed the DB inside the app for local queries).
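A sketch of an Embedded Replica, assuming the same environment variables as above: reads are served from a local SQLite file while sync() pulls changes from the remote primary.

import { createClient } from '@libsql/client'

const db = createClient({
  url: 'file:local.db',               // embedded replica on local disk
  syncUrl: process.env.TURSO_URL,     // remote primary to sync from
  authToken: process.env.TURSO_TOKEN
})

await db.sync()                                        // pull latest changes
const rows = await db.execute('SELECT * FROM users')   // answered locally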

Cloudflare D1

Covered above. Has a 10GB limit.

PlanetScale

MySQL-based. Vitess sharding. The "merge a branch into main" dev workflow.

pscale branch create my-db feature-x
pscale deploy-request create my-db feature-x

PostgreSQL shipped in 2024 (an expansion from its MySQL-only roots).

Neon

Serverless PostgreSQL. Storage-compute separation.

  • Branching — like git
  • Scale to zero — compute 0 when idle
  • Fast cold start — 100ms

DB of the Year in 2024.
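A minimal query through Neon's serverless driver, which speaks HTTP and therefore works from edge runtimes without raw TCP (the userId value is illustrative):

import { neon } from '@neondatabase/serverless'

const sql = neon(process.env.DATABASE_URL)

const userId = 1
// Tagged-template queries are parameterized automatically
const users = await sql`SELECT * FROM users WHERE id = ${userId}`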

Xata

PostgreSQL + typesafe client + full-text search.

Supabase Edge Functions

Supabase's edge runtime. Deno-based. Best-in-class integration with Supabase's Postgres.

EdgeDB

Graph queries, strong typing. A new query language sitting on top of PostgreSQL.

Designing Region-Aware Architectures

"Distribute compute across the edge" is easy; "keep data consistent" is hard.

Read Local, Write Global

  • Reads go to the nearest replica
  • Writes route to the primary region (accept the extra latency)

User (Seoul) ── read (5ms) ──► Seoul replica ◄── replication ── Primary (Frankfurt)
User (Seoul) ── write (250ms) ──────────────────────────────────► Primary (Frankfurt)

Smart Placement (Cloudflare)

A contrarian idea: place the Worker near the origin DB. Users hit the CDN cache for low latency while Workers co-locate with the DB.

# wrangler.toml
[placement]
mode = "smart"

Single-Leader with Leader Election

  • CockroachDB, Spanner style
  • Raft/Paxos consensus
  • Leader migrates by region

Geo-partitioning

Pin user data to regions. Korean users' data goes to the Seoul DB. Essential for GDPR compliance.

-- CockroachDB
ALTER TABLE users CONFIGURE ZONE USING
  constraints = '{"+region=seoul": 1}';

Eventual Consistency

DynamoDB Global Tables, Cloudflare KV. Reads are fast from any region; writes propagate eventually.

Put (us-east-1):        key=x, value=v1
        │ (propagation: tens of seconds)
        ▼
Get (ap-northeast-1):   key=x, value=v1 (or the previous value)

Suitable for data without per-second accuracy requirements — news feeds, social timelines.

CRDT — Conflict-free Replication

Conflict-free Replicated Data Types. Implemented by Riak, Redis CRDT, Automerge, and Y.js.

  • Figma, Linear, Notion use them
  • Offline editing plus automatic merging
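A minimal Y.js demonstration of conflict-free merging: two replicas edit independently, exchange updates, and converge without a coordinator.

import * as Y from 'yjs'

const docA = new Y.Doc()
const docB = new Y.Doc()

// Concurrent, independent edits on each replica
docA.getText('note').insert(0, 'Hello ')
docB.getText('note').insert(0, 'world')

// Exchange state updates in both directions
Y.applyUpdate(docB, Y.encodeStateAsUpdate(docA))
Y.applyUpdate(docA, Y.encodeStateAsUpdate(docB))

// Both replicas now hold the identical merged text
console.log(docA.getText('note').toString() === docB.getText('note').toString())  // true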

The Cold-Start Race — 2025 Numbers

Technology                 Cold start            Memory     Notes
Cloudflare Workers         5ms                   3MB        V8 isolate
Fastly Compute             35μs-1ms              MB         Wasmtime
Deno Deploy                10ms                  MB         V8 + TS
Vercel Edge Functions      10ms                  MB         V8 isolate
AWS Lambda (SnapStart)     100-300ms             MB         container snapshot
AWS Lambda@Edge            100ms-1s              MB         Node.js/Python
Fly.io (Firecracker)       200ms-1s              MB-GB      microVM
Fly Machines 2.0           100ms (hibernated)    MB         VM checkpoint
Google Cloud Run           500ms-2s              MB         container
AWS ECS Fargate            10-30s                GB         container

Firecracker (open-sourced by AWS in 2018): boots a KVM microVM in 125ms. The foundation of Lambda and Fly.io.

Isolates vs microVMs: Isolates are faster but have weaker security boundaries (no hardware isolation). MicroVMs deliver KVM isolation and are still fast (hundreds of ms).

Edge AI — The 2024-2025 Explosion

Cloudflare Workers AI

  • 40+ models including Llama 3, Mistral, Stable Diffusion
  • Usage-based pricing, no direct GPU management required

Vercel AI SDK + v0

  • UI-generation AI (v0.dev)
  • Streaming text, integrated with RSC

Supabase AI

  • pgvector plus Edge Functions

WebLLM (MLC AI)

  • Runs Llama/Mistral in the browser
  • Uses WebGPU

Transformers.js (HuggingFace)

  • BERT, Whisper, SAM in the browser

Ollama Cloud (2024)

  • Cloud extension of local-first LLMs

Shared pattern: prompts at the edge, heavy inference at a GPU region → hybrid.
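A hedged sketch of that hybrid split: small prompts are answered by an edge model, heavy ones are proxied to a GPU region (the origin URL is hypothetical):

export default {
  async fetch(request, env) {
    const { prompt } = await request.json()

    // Short prompts: answer with a small model right at the edge
    if (prompt.length < 500) {
      const out = await env.AI.run('@cf/meta/llama-3-8b-instruct', {
        messages: [{ role: 'user', content: prompt }]
      })
      return Response.json(out)
    }

    // Heavy prompts: forward to a GPU-region inference service
    return fetch('https://gpu.example.com/v1/generate', {
      method: 'POST',
      body: JSON.stringify({ prompt })
    })
  }
}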

Edge Security — Zero Trust

DDoS Defense

Cloudflare reported mitigating a 259M-QPS DDoS attack in 2024. This is one reason such traffic has to be absorbed at the edge rather than at origin.

Rate Limiting

// Cloudflare Workers
export default {
  async fetch(req, env) {
    const { success } = await env.RATE_LIMITER.limit({ key: req.headers.get("CF-Connecting-IP") })
    if (!success) return new Response("Too many", { status: 429 })
    // ...
  }
}
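The RATE_LIMITER binding above must be declared in wrangler.toml; a sketch following the documented shape of the (beta) rate limiting binding, where the namespace_id and the 100-requests-per-60s policy are assumptions:

# wrangler.toml
[[unsafe.bindings]]
name = "RATE_LIMITER"
type = "ratelimit"
namespace_id = "1001"
# allow 100 requests per 60 seconds per key
simple = { limit = 100, period = 60 }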

mTLS, Zero Trust

Cloudflare Access, Tailscale, Zscaler — VPN replacements. Per-app authentication performed at the edge.

WAF (Web Application Firewall)

Cloudflare WAF auto-blocks the OWASP Top 10. Runs at the edge to protect origin.

Bot Management

AI-powered bot detection. Filters the scrapers that feed models such as Claude and GPT-4.

Real-World Adoption Guide — Which Edge, When

Scenario 1 — Mostly Static plus Some Dynamic

  • Recommendation: Vercel, Netlify
  • Next.js/Nuxt/SvelteKit plus ISR plus Edge Middleware

Scenario 2 — API plus WebSocket plus Real-time

  • Recommendation: Cloudflare Workers plus Durable Objects
  • Chat, game lobbies, collaborative editors

Scenario 3 — Global SaaS plus Postgres

  • Recommendation: Fly.io plus Neon/Supabase, or Vercel plus Neon
  • Traditional web apps, region-aware

Scenario 4 — Enterprise plus Compliance

  • Recommendation: Fastly with dedicated regions / AWS Lambda@Edge
  • Finance, healthcare

Scenario 5 — Edge AI

  • Recommendation: Cloudflare Workers AI plus Vectorize
  • Or Vercel AI SDK plus OpenAI

Scenario 6 — IoT/Low Latency

  • Recommendation: AWS IoT plus Wavelength (5G edge), or Cloudflare Workers

Seven Trends Shaping 2025

1. Edge-plus-AI Integration

Every major platform has added an AI runtime. Workers AI, Vercel AI, Fastly AI (announced 2024).

2. Stateful Edge Matures

Durable Objects, Turso Embedded Replicas, Fly Machines auto-hibernate.

3. Region-aware Frameworks

Next.js App Router's runtime: 'edge', Remix, and Astro make edge deployment easy.

4. Edge Database Wars

D1 vs Turso vs Neon vs PlanetScale. SQLite/libSQL is surging.

5. MicroVM Improvements

Firecracker 2.0 (2024) cuts boot from seconds to 100ms.

6. Edge Security Standardization

WAF plus Zero Trust plus Bot Management bundles.

7. Edge FinOps

Request-based pricing flips — at high volume, containers become cheaper. A cost-optimization trade-off.

Adoption Checklist (2025)

  1. Clear use case — static/dynamic/stateful/AI
  2. Runtime choice — Workers (V8), Fastly (WASM), Fly (VM)
  3. Latency benchmark — measure from real user regions
  4. Data architecture — region-aware, read replicas, write routing
  5. Observability — OTLP export, Cloudflare Logs, Datadog
  6. Security — WAF, rate limiting, Zero Trust
  7. Cost model — per-request vs VM-hour
  8. Minimize vendor lock-in — build on Web Standard APIs (see the sketch after this list)
  9. Failure modes — origin fallback, multi-edge provider
  10. Cold-start budget — set p99 targets
  11. DB strategy — choose among Turso/D1/Neon
  12. CI/CD — Wrangler, Vercel CLI, flyctl
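Item 8 in practice: a handler written purely against Web Standard fetch/Request/Response runs on Workers, Deno Deploy, or Vercel Edge with only the wrapper changing. A minimal sketch:

// Portable core: nothing here is provider-specific
async function handle(request) {
  const url = new URL(request.url)
  return Response.json({ path: url.pathname, at: Date.now() })
}

// Cloudflare Workers wrapper
export default { fetch: (req) => handle(req) }

// Deno Deploy wrapper (same core function):
// Deno.serve(handle)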

10 Common Anti-Patterns

  1. Forcing heavy compute to the edge — CPU time exceeded
  2. Hiding state under a stateless premise — consistency bugs
  3. Routing every request through the edge — some are better at origin
  4. No latency measurement — an edge deploy users don't feel
  5. Ignoring regulation — GDPR residency violations
  6. Storing 10GB+ in an edge DB — exceeding D1's limit
  7. Assuming WebSocket scalability — requires Durable Object design
  8. No origin fallback — an edge outage takes the whole site down
  9. No cold-start budget — p99 degrades
  10. Single-vendor lock-in — avoid by sticking to Web Standard APIs

Next Article Preview — "The Evolution of Modern CI/CD" — GitHub Actions, GitLab CI, Dagger, Nx, Turborepo, Remote Cache, Hermetic Builds

As edge deployment got faster, CI/CD pipelines underwent their own revolution. In 2024-2025, CI/CD left the "30-minute build" era behind: Remote Cache, Hermetic Builds, Dagger, and distributed testing normalized "5-minute builds, 10-minute tests."

The next article covers:

  • CI/CD history — Jenkins → CircleCI → GitHub Actions → Dagger
  • GitHub Actions in depth — matrix, reusable workflows, composite actions
  • GitLab CI vs Jenkins vs Buildkite vs CircleCI
  • Monorepo builds — Nx, Turborepo, Bazel, Rush, pnpm workspace
  • Remote Cache plus Distributed Build — Bazel, Turborepo Remote Cache, Nx Cloud
  • Hermetic Build — the philosophy of reproducibility
  • Dagger — "CI/CD as code" programmable pipelines
  • Container registries — GitHub Packages, ECR, Harbor
  • Supply chain security — SLSA, sigstore, cosign, SBOM
  • Test parallelization — Jest/Vitest sharding, Playwright shards
  • Deployment strategies — Blue/Green, Canary, Progressive Delivery (Flagger)
  • Platform engineering — Backstage plus CI/CD integration

We'll track how "fast CI/CD" is not just a tech concern but a lever that directly governs team productivity and deploy frequency, and why "monorepo-scale organizations" treat CI/CD as a product.
