Modern Backend Runtimes 2025 — Node 22, Bun, Deno, WinterJS, Cloudflare Workers, V8 Isolates, Tokio (S7 E1)
Prologue — the runtime wars are back

"Runtime" used to mean "is it Node or is it Python?" In 2026 it's a legitimate architectural choice: traditional Node/JVM/Python, Bun, Deno, Workers-style V8 isolates, and native runtimes (Go, Rust/Tokio). Each solves a different problem.


1. Node 22 — the adult Node

Node 22 (LTS in late 2024) shipped features that made it feel like a new platform.

  • Permissions model (--experimental-permission --allow-fs-read=./data; the flag drops its experimental prefix in Node 23) — finally a way to sandbox Node processes.
  • Native TypeScript via --experimental-strip-types (type stripping is on by default from Node 23.6).
  • Built-in test runner maturity.
  • Performance — V8 13.x, improved streams.
  • Fetch, WebSocket, Blob, FormData — all standard globals.

Use when: existing Node ecosystem, stable LTS, long-running services.
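The standard-globals point is worth seeing concretely: fetch, Blob, FormData, and File now behave like the browser versions, with no polyfill packages. A minimal sketch (names are illustrative), runnable as-is on Node 22:

```typescript
// Node 22 ships the web globals natively — no `form-data` or `node-fetch`
// packages, no imports at all for this snippet.
const form = new FormData();
form.append("upload", new Blob(["hello"], { type: "text/plain" }), "hello.txt");

// FormData stores blob entries as File objects (a Blob subclass),
// preserving the filename and MIME type attached above.
const entry = form.get("upload");
console.log(entry instanceof Blob, (entry as any).name); // → true hello.txt
```

Because these are the same globals Workers, Deno, and Bun expose, code like this ports between runtimes unchanged.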


2. Bun 1.1+ — the toolchain runtime

Bun positions itself as "a runtime AND the toolchain": package manager, bundler, test runner, SQLite, JSX, TS — all built in.

Highlights

  • JavaScriptCore engine (from WebKit) instead of V8.
  • Bun.serve() — among the fastest HTTP servers in the JS world.
  • Native SQLite and Postgres drivers.
  • bun install — order of magnitude faster than npm/yarn.
  • bun test — Jest-compatible.
  • 2025: S3 native, macros, improved Node compatibility.

Use when: greenfield projects, small services where throughput matters, monorepos that want fewer tools. Watch: Node compatibility edge cases still bite.
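Bun.serve() takes a plain fetch handler, which has a nice side effect: routing logic can be unit-tested by calling the handler directly with a Request, no server needed. A hedged sketch (the /health route and port are illustrative):

```typescript
// A Bun.serve()-style app. The routing lives in a plain object with a
// fetch method — exactly the shape Bun.serve() accepts — so it can be
// exercised without binding a port.
const app = {
  fetch(req: Request): Response {
    const url = new URL(req.url);
    if (url.pathname === "/health") {
      return Response.json({ ok: true });
    }
    return new Response("not found", { status: 404 });
  },
};

// Under Bun, starting the server is one call:
//   Bun.serve({ port: 3000, fetch: app.fetch });
```

The same handler shape also runs under Node 22 (via an adapter like Hono's) or Deno, which is the WinterCG convergence discussed later in action.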


3. Deno — the reset

Deno originally pitched "Node, but secure, with TS out of the box." The pure-TS / URL-imports bet lost to npm's gravity. Deno 2 (late 2024) embraced npm: node_modules support, a package.json-compatible deno.json, and full npm registry access.

  • Permissions still first-class (--allow-net).
  • Fresh (framework), Deno KV, Deno Deploy.
  • jsr.io registry — Deno's answer to npm, with better TS types.

Use when: you want TS-first with fewer config files, or targeting Deno Deploy.
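The "fewer config files" claim is concrete in deno.json, where tasks, permission flags, and both npm and jsr imports live side by side. A hypothetical sketch (package versions illustrative):

```json
{
  "tasks": {
    "dev": "deno run --watch --allow-net main.ts"
  },
  "imports": {
    "hono": "npm:hono@^4",
    "@std/assert": "jsr:@std/assert@^1"
  }
}
```

Note the `npm:` and `jsr:` specifiers — Deno 2 resolves both registries natively, and the `--allow-net` flag in the task keeps the permission model visible in everyday workflows.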


4. Cloudflare Workers — V8 isolates, not V8 processes

The architectural insight: run many tenants inside one V8 process as isolates, sharing memory/compile caches. Cold-start in microseconds, not seconds.

  • Limits (2025): ~30s CPU time, 128MB memory, no Node APIs (but growing nodejs_compat).
  • Storage: Workers KV (eventual), Durable Objects (consistent state), R2, D1 (SQLite), Queues.
  • Node compat: the nodejs_compat flag unlocked most of the npm ecosystem by 2025.
  • Workers for Platforms: per-tenant workers at scale.

Use when: global low-latency, API aggregation, auth middleware, edge logic.
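The "auth middleware at the edge" use case fits in a few lines of a module worker. A hedged sketch — the header check and token are illustrative, and in a real Worker this object would be the module's default export:

```typescript
// A Workers-style module handler doing edge logic: reject unauthenticated
// requests before they ever reach the origin. Token check is illustrative.
const worker = {
  async fetch(req: Request): Promise<Response> {
    if (req.headers.get("authorization") !== "Bearer demo-token") {
      return new Response("unauthorized", { status: 401 });
    }
    // Real edge logic would fetch() the origin here; we answer directly.
    return Response.json({ ok: true });
  },
};
```

Because isolates cold-start in microseconds, running this check globally adds effectively no latency — the reason auth middleware is a canonical Workers workload.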


5. Vercel Functions & Edge — the pragmatic pair

Vercel provides two lanes: Node.js Functions (Lambda under the hood) and Edge Functions (V8 isolates via Workers-compatible runtime).

  • Edge: cold-start ~0, limits similar to Workers.
  • Node Functions: full Node ecosystem, Lambda cold-start tradeoff.
  • Fluid Compute (2025): hybrid — Node Functions that pool concurrent requests like isolates. Big performance win.

Use when: shipping with Next.js, hybrid latency/ecosystem tradeoffs.
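In the Next.js App Router the lane choice is per-route: a one-line segment config opts a single handler into the Edge runtime. A sketch (the route path and handler body are illustrative):

```typescript
// app/api/now/route.ts — opt this one route into the Edge runtime.
// Omit the export (or set "nodejs") to stay on Node Functions instead.
export const runtime = "edge";

export async function GET(req: Request): Promise<Response> {
  // Edge handlers use the same WinterCG Request/Response shapes as Workers.
  return Response.json({ now: Date.now() });
}
```

This per-route granularity is what makes the hybrid tradeoff practical: latency-sensitive endpoints go Edge, ecosystem-heavy ones stay on Node Functions.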


6. Deno Deploy / WinterJS / Fastly Compute — peer options

  • Deno Deploy: V8 isolates + Deno API, good for Deno-first teams.
  • WinterJS: Rust-based JS runtime by Wasmer, focuses on WinterCG standards.
  • Fastly Compute: Wasm-based, polyglot (Rust, Go, JS, AssemblyScript).

The WinterCG standardization effort (rechartered as WinterTC under Ecma in 2025) is the reason these runtimes feel interchangeable — the APIs are converging.


7. The non-JS side — Go, Rust/Tokio, Python

Go

  • Fast runtime, goroutines, net/http 1.22+ has native routing.
  • Small binaries, easy deploy.
  • Sweet spot: backends where ops simplicity matters.

Rust + Tokio

  • Fastest + most memory efficient.
  • Ecosystem (Axum, Tower) is mature.
  • Use when: performance-critical, WASM targets, edge compute with WASM.
  • Trade-off: compile times and learning curve.

Python (3.13)

  • Free-threaded (no-GIL) build is experimental — don't bet on it yet in production.
  • FastAPI, Litestar remain excellent.
  • Best when: AI/ML adjacency, rapid prototypes, Django legacy.

8. Benchmark reality (~2025 conditions)

Rough "hello world" HTTP throughput on a single core, local machine, not a real prod test:

  Runtime                  req/s (ballpark)
  Rust Axum                600k+
  Go net/http              250–400k
  Bun.serve                200–300k
  Node 22 (uWebSockets)    150–250k
  Node 22 (stock http)     70–120k
  FastAPI (uvicorn)        30–60k

Caveats

  • Real apps aren't hello world. DB, JSON, auth dominate.
  • Cold-start dominates short-lived functions, where Workers/Isolates beat everyone.
  • Memory per connection matters more than raw req/s in real prod.

9. Streaming and SSE — the AI era reality

AI APIs (OpenAI, Anthropic) stream responses. Your backend needs to:

  • Open a streaming upstream (SSE or chunked HTTP).
  • Pass tokens to the frontend without buffering.
  • Abort upstream when client disconnects.
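The three requirements above can be sketched with nothing but WinterCG primitives, and the same code runs on Node 22, Bun, Deno, and Workers. A minimal sketch — the upstream URL and header set are illustrative:

```typescript
// Forward an upstream SSE/chunked body to the client without buffering:
// handing the upstream ReadableStream straight to Response means chunks
// flow through as they arrive instead of accumulating in memory.
function streamThrough(upstream: Response): Response {
  return new Response(upstream.body, {
    status: upstream.status,
    headers: {
      "content-type": "text/event-stream",
      "cache-control": "no-cache",
    },
  });
}

// Abort wiring (illustrative): pass the incoming request's signal to the
// upstream fetch so a client disconnect cancels the upstream stream too.
//   const upstream = await fetch(AI_API_URL, { signal: req.signal });
//   return streamThrough(upstream);
```

The abort wiring is the part teams forget: without it, disconnected clients leave upstream AI streams running and billing.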

What works well

  • Hono/Bun — native ReadableStream pipelines.
  • FastAPI with StreamingResponse.
  • Cloudflare Workers — excellent SSE, built for this.
  • Node 22 with fetch + res.write.

Watch out

  • Serverless Functions (non-Edge) often limit duration — not great for long streams.
  • API gateways may buffer (API Gateway v1, CloudFront without specific config).

10. Observability at the runtime layer

  • Node 22 has --inspect and improved perf hooks.
  • Bun shipped OTel integration in 2025.
  • Workers expose Tail Workers + Logpush for observability.
  • Rust/Tokio: tracing crate is the default.
  • FastAPI: OpenTelemetry Python SDK, integrates with Datadog/Honeycomb.

Rule: instrument at the framework level (traces) + system level (metrics) + process level (logs). OpenTelemetry is the connective tissue.
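The framework-level layer of that rule is just a wrapper around the handler. A sketch without any SDK — in a real setup the sink would hand records to OpenTelemetry; the callback here is a stand-in for illustration:

```typescript
// Framework-level instrumentation as a handler wrapper: every request
// emits one structured record (method, path, status, duration). The sink
// is a plain callback standing in for an OTel exporter.
type Handler = (req: Request) => Response | Promise<Response>;
type Sink = (entry: Record<string, unknown>) => void;

function instrument(handler: Handler, sink: Sink): Handler {
  return async (req) => {
    const start = performance.now();
    const res = await handler(req);
    sink({
      method: req.method,
      path: new URL(req.url).pathname,
      status: res.status,
      durationMs: performance.now() - start,
    });
    return res;
  };
}
```

Because the wrapper only touches WinterCG shapes, the same instrumentation travels with you across Node, Bun, and edge runtimes.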


11. Memory and connection models

  Runtime              Concurrency      Memory per 10k req/s
  Rust Tokio           Async tasks      ~50MB
  Go                   Goroutines       ~80MB
  Bun                  JS event loop    ~120MB
  Node 22              JS event loop    ~160MB
  Node + Workers pool  Multiple procs   ~500MB
  FastAPI (uvicorn)    Async            ~200MB

If you run serverless, memory matters at the margin — pricing and cold-start both correlate.


12. Decision tree

Q1: Ultra-low global latency?
├─ Yes: Cloudflare Workers, Vercel Edge, Deno Deploy
└─ No → Q2

Q2: Raw performance / memory critical?
├─ Yes: Rust (Axum), Go
└─ No → Q3

Q3: TS team, modern toolchain desired?
├─ Yes: Bun + Elysia/Hono, or Node 22 + Hono/Fastify
└─ No → Q4

Q4: Python data/AI adjacent?
├─ Yes: FastAPI (async Python)
└─ No → Node 22 LTS + NestJS/Fastify

13. 2026 outlook

  1. WinterCG convergence — runtimes behaving more alike at the API surface. You'll swap Node for Bun without rewriting handlers.
  2. Node vs Bun vs Deno will matter less than server vs edge isolate vs native.
  3. WASM Components will let you plug Rust modules into JS runtimes seamlessly.
  4. Persistent isolates (Durable Objects etc.) will blur the line between "request-scoped compute" and "stateful server."
  5. Bun Deploy + Elysia may become a credible third-party serverless platform.

12-item adoption checklist

  1. Team strength mapped to runtime choice?
  2. Cold-start budget defined?
  3. Observability from day 1?
  4. Framework ecosystem maturity checked?
  5. Auth + validation libraries available on chosen runtime?
  6. Database driver quality verified?
  7. Package manager speed acceptable?
  8. Local dev parity with production?
  9. Memory/CPU costs modeled at scale?
  10. Vendor lock evaluated (Workers, Deno Deploy)?
  11. Escape hatch defined (can you move off)?
  12. AI/streaming use cases handled natively?

10 common mistakes

  1. Picking Bun too early in an enterprise setting — Node compat gaps cost.
  2. Picking Workers for long-running tasks — CPU limits bite.
  3. Running FastAPI with blocking sync calls inside async handlers — the event loop stalls.
  4. Using Lambda for chat-streaming — duration limits break UX.
  5. "Rust for everything" — developer velocity drops, bugs don't.
  6. Assuming Node and Edge are interchangeable — DB drivers differ.
  7. Ignoring process-per-request cost in Lambda — use pooled/fluid compute.
  8. No lightweight healthcheck — hanging runtimes go undetected.
  9. Logging to stdout at high cardinality — observability bill explosion.
  10. Hardcoding runtime APIs — makes migration painful later.
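Mistake #10 has a cheap antidote: put a thin seam between handlers and runtime-specific APIs. A hedged sketch — the interface and class names are illustrative, not a real library:

```typescript
// Keep runtime-specific storage behind a small interface so handlers stay
// portable. Swapping Workers KV for Deno KV (or an in-memory map in tests)
// then becomes a one-file change instead of a migration.
interface KvStore {
  get(key: string): Promise<string | null>;
  put(key: string, value: string): Promise<void>;
}

// In-memory adapter for local dev and tests. A Workers adapter would wrap
// env.MY_KV behind the same two methods; a Deno adapter would wrap Deno.Kv.
class MemoryKv implements KvStore {
  private data = new Map<string, string>();
  async get(key: string): Promise<string | null> {
    return this.data.get(key) ?? null;
  }
  async put(key: string, value: string): Promise<void> {
    this.data.set(key, value);
  }
}
```

The interface doubles as the "escape hatch defined" item from the checklist: the cost of moving off a platform is the cost of writing one more adapter.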

Next episode

S7 E2: Modern Backend Frameworks 2025 — NestJS, Fastify, Hono, Elysia, Spring Boot, FastAPI, Go, Axum, and API styles (tRPC, GraphQL, gRPC). Which framework, which API style, for which team.

— End of Modern Backend Runtimes.
