Modern Backend Runtimes 2025 — Node 22, Bun, Deno, WinterJS, Cloudflare Workers, V8 Isolates, Tokio (S7 E1)
By Youngju Kim (@fjvbn20031)
Prologue — the runtime wars are back
"Runtime" used to mean "is it Node or is it Python?" In 2026 it's a legitimate architectural choice: traditional Node/JVM/Python, Bun, Deno, Workers-style V8 isolates, and native runtimes (Go, Rust/Tokio). Each solves a different problem.
1. Node 22 — the adult Node
Node 22 (LTS in late 2024) shipped features that made it feel like a new platform.
- Permissions model (`--permission --allow-fs-read=./data`) — finally a way to sandbox Node processes.
- Native TypeScript via `--experimental-strip-types` (enabled by default in Node 23+).
- Built-in test runner maturity.
- Performance — V8 13.x, improved streams.
- Fetch, WebSocket, Blob, FormData — all standard globals.
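Those standard globals mean many small tasks no longer need third-party packages. A minimal sketch (runs on Node 18+, so Node 22 included; the upload URL in the comment is a placeholder):

```typescript
// Node 22 ships Web-standard globals (fetch, Blob, FormData, WebSocket),
// so packages like node-fetch and form-data are no longer needed.
// Building a multipart payload with built-ins only:
const blob = new Blob(["hello runtime"], { type: "text/plain" });

const form = new FormData();
form.append("file", blob, "hello.txt");
form.append("runtime", "node22");

// `fetch` accepts FormData directly and sets the multipart boundary itself:
// await fetch("https://example.com/upload", { method: "POST", body: form });

console.log(blob.size, form.get("runtime")); // 13 node22
```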
Use when: existing Node ecosystem, stable LTS, long-running services.
2. Bun 1.1+ — the toolchain runtime
Bun positions itself as "a runtime AND the toolchain": package manager, bundler, test runner, SQLite, JSX, TS — all built in.
Highlights
- JavaScriptCore engine (from WebKit) instead of V8.
- `Bun.serve()` — among the fastest HTTP servers in the JS world.
- Native SQLite and Postgres drivers.
- `bun install` — an order of magnitude faster than npm/yarn.
- `bun test` — Jest-compatible.
- 2025: native S3 support, macros, improved Node compatibility.
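A minimal `Bun.serve()` sketch. The routing logic is a plain Request → Response function, so it can be exercised on any runtime; the port number is illustrative, and only the last lines are Bun-specific:

```typescript
// Routing as a plain function over the Web-standard Request/Response pair —
// runnable on Node 18+, Bun, or Deno for testing purposes.
export function handler(req: Request): Response {
  const url = new URL(req.url);
  if (url.pathname === "/health") return new Response("ok");
  return Response.json({ message: "hello from Bun" });
}

// Only this part is Bun-specific; it is skipped on other runtimes.
const bun = (globalThis as any).Bun;
if (bun) {
  bun.serve({ port: 3000, fetch: handler }); // Bun's built-in HTTP server
}
```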
Use when: greenfield projects, small services where throughput matters, monorepos that want fewer tools. Watch: Node compatibility edge cases still bite.
3. Deno — the reset
Deno originally pitched "Node, but secure, with TS out of the box." The pure-TS / URL-imports bet lost to npm's gravity. Deno 2 (late 2024) embraced npm: node_modules support, deno.json as package.json-compatible, full npm registry.
- Permissions still first-class (`--allow-net`).
- Fresh (framework), Deno KV, Deno Deploy.
- jsr.io registry — Deno's answer to npm, with better TS types.
Use when: you want TS-first with fewer config files, or targeting Deno Deploy.
4. Cloudflare Workers — V8 isolates, not V8 processes
The architectural insight: run many tenants inside one V8 process as isolates, sharing memory/compile caches. Cold-start in microseconds, not seconds.
- Limits (2025): ~30s CPU time, 128MB memory, no Node APIs (but growing nodejs_compat).
- Storage: Workers KV (eventual), Durable Objects (consistent state), R2, D1 (SQLite), Queues.
- Node compat: the `nodejs_compat` flag + `bun-compat` mode unlocked most of the ecosystem by 2025.
- Workers for Platforms: per-tenant workers at scale.
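A hedged sketch of a module-format Worker: the runtime calls the exported `fetch` per request, and `env` carries the bindings listed above. The `MY_KV` binding name and its interface here are made up for illustration:

```typescript
// Module-format Cloudflare Worker sketch. `env` carries bindings
// (KV, R2, D1, ...); `MY_KV` is a hypothetical KV binding.
interface Env {
  MY_KV?: { get(key: string): Promise<string | null> };
}

const worker = {
  async fetch(req: Request, env: Env = {}): Promise<Response> {
    const url = new URL(req.url);
    if (url.pathname === "/greeting" && env.MY_KV) {
      const cached = await env.MY_KV.get("greeting"); // eventually consistent KV read
      if (cached) return new Response(cached);
    }
    return Response.json({ edge: true, path: url.pathname });
  },
};

export default worker;
```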
Use when: global low-latency, API aggregation, auth middleware, edge logic.
5. Vercel Functions & Edge — the pragmatic pair
Vercel provides two lanes: Node.js Functions (Lambda under the hood) and Edge Functions (V8 isolates via Workers-compatible runtime).
- Edge: cold-start ~0, limits similar to Workers.
- Node Functions: full Node ecosystem, Lambda cold-start tradeoff.
- Fluid Compute (2025): hybrid — Node Functions that pool concurrent requests like isolates. Big performance win.
Use when: shipping with Next.js, hybrid latency/ecosystem tradeoffs.
6. Deno Deploy / WinterJS / Fastly Compute — peer options
- Deno Deploy: V8 isolates + Deno API, good for Deno-first teams.
- WinterJS: Rust-based JS runtime by Wasmer, focuses on WinterCG standards.
- Fastly Compute: Wasm-based, polyglot (Rust, Go, JS, AssemblyScript).
The WinterCG standardization effort is the reason these runtimes feel interchangeable — the APIs are converging.
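That convergence shows up directly in code: a handler written only against the standard Request/Response pair runs on any of these runtimes, with a thin adapter per platform. A hedged sketch (the adapter notes are illustrative):

```typescript
// Business logic on Web-standard types only — no runtime-specific imports.
export async function app(req: Request): Promise<Response> {
  const url = new URL(req.url);
  if (url.pathname === "/ping") return Response.json({ ok: true });
  return new Response("not found", { status: 404 });
}

// Per-runtime adapters (illustrative):
//   Workers / Deno Deploy:  export default { fetch: app }
//   Bun:                    Bun.serve({ fetch: app })
//   Node:                   a small adapter (e.g. @hono/node-server) bridges
//                           http.IncomingMessage to Request
```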
7. The non-JS side — Go, Rust/Tokio, Python
Go
- Fast runtime, goroutines; `net/http` gained pattern-based routing in Go 1.22.
- Small binaries, easy deploy.
- Sweet spot: backends where ops simplicity matters.
Rust + Tokio
- Fastest + most memory efficient.
- Ecosystem (Axum, Tower) is mature.
- Use when: performance-critical, WASM targets, edge compute with WASM.
- Trade-off: compile times and learning curve.
Python (3.13)
- Free-threaded (no-GIL) build is experimental — don't bet on it yet in production.
- FastAPI, Litestar remain excellent.
- Best when: AI/ML adjacency, rapid prototypes, Django legacy.
8. Benchmark reality (~2025 conditions)
Rough "hello world" HTTP throughput on a single core, local machine, not a real prod test:
| Runtime | req/s (ballpark) |
|---|---|
| Rust Axum | 600k+ |
| Go net/http | 250–400k |
| Bun.serve | 200–300k |
| Node 22 (uWebSockets) | 150–250k |
| Node 22 (stock http) | 70–120k |
| FastAPI (uvicorn) | 30–60k |
Caveats
- Real apps aren't hello world. DB, JSON, auth dominate.
- Cold-start dominates short-lived functions, where Workers/Isolates beat everyone.
- Memory per connection matters more than raw req/s in real prod.
9. Streaming and SSE — the AI era reality
AI APIs (OpenAI, Anthropic) stream responses. Your backend needs to:
- Open a streaming upstream (SSE or chunked HTTP).
- Pass tokens to the frontend without buffering.
- Abort upstream when client disconnects.
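The three steps above can be sketched with Web Streams, which every runtime in this article supports. The passthrough forwards each upstream chunk immediately (no buffering) and cancels the upstream when the client aborts; the upstream URL in the comment is a placeholder:

```typescript
// Forward an upstream byte stream chunk-by-chunk; cancel upstream on abort.
export function passthrough(
  upstream: ReadableStream<Uint8Array>,
  signal: AbortSignal,
): ReadableStream<Uint8Array> {
  const reader = upstream.getReader();
  // Step 3: client disconnect propagates to the upstream AI API.
  signal.addEventListener("abort", () => reader.cancel());
  return new ReadableStream({
    async pull(controller) {
      const { done, value } = await reader.read();
      if (done) controller.close();
      else controller.enqueue(value); // Step 2: pass tokens through unbuffered
    },
  });
}

// Step 1 (sketch): open the streaming upstream and hand it to the client.
// const upstream = await fetch("https://api.example.com/v1/stream", { signal });
// return new Response(passthrough(upstream.body!, signal), {
//   headers: { "content-type": "text/event-stream" },
// });
```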
What works well
- Hono/Bun — native `ReadableStream` pipelines.
- FastAPI with `StreamingResponse`.
- Cloudflare Workers — excellent SSE support; built for this.
- Node 22 with `fetch` + `res.write`.
Watch out
- Serverless Functions (non-Edge) often limit duration — not great for long streams.
- API gateways may buffer (API Gateway v1, CloudFront without specific config).
10. Observability at the runtime layer
- Node 22 has `--inspect` and improved perf hooks.
- Bun shipped OTel integration in 2025.
- Workers expose Tail Workers + Logpush for observability.
- Rust/Tokio: the `tracing` crate is the default.
- FastAPI: OpenTelemetry Python SDK, integrates with Datadog/Honeycomb.
Rule: instrument at the framework level (traces) + system level (metrics) + process level (logs). OpenTelemetry is the connective tissue.
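A dependency-free sketch of the framework-level layer of that rule: wrap a handler and emit one structured log line per request. In production this job belongs to the OpenTelemetry SDK; the field names here are illustrative:

```typescript
type Handler = (req: Request) => Response | Promise<Response>;

// Wrap any Request handler with per-request timing + structured logging.
export function withTiming(handler: Handler): Handler {
  return async (req) => {
    const start = performance.now();
    const res = await handler(req);
    console.log(JSON.stringify({
      path: new URL(req.url).pathname,
      status: res.status,
      duration_ms: Math.round(performance.now() - start),
    }));
    return res;
  };
}
```

Usage: `const app = withTiming(myHandler)` — the wrapper composes with the standard Request/Response signature used throughout this article.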
11. Memory and connection models
| Runtime | Concurrency | Memory per 10k req/s |
|---|---|---|
| Rust Tokio | Async tasks | ~50MB |
| Go | Goroutines | ~80MB |
| Bun | JS event loop | ~120MB |
| Node 22 | JS event loop | ~160MB |
| Node + Workers pool | Multiple procs | ~500MB |
| FastAPI (uvicorn) | Async | ~200MB |
If you run serverless, memory matters at the margin — pricing and cold-start both correlate.
12. Decision tree
Q1: Ultra-low global latency?
├─ Yes: Cloudflare Workers, Vercel Edge, Deno Deploy
└─ No → Q2
Q2: Raw performance / memory critical?
├─ Yes: Rust (Axum), Go
└─ No → Q3
Q3: TS team, modern toolchain desired?
├─ Yes: Bun + Elysia/Hono, or Node 22 + Hono/Fastify
└─ No → Q4
Q4: Python data/AI adjacent?
├─ Yes: FastAPI or Litestar (async Python)
└─ No → Node 22 LTS + NestJS/Fastify
13. 2026 outlook
- WinterCG convergence — runtimes behaving more alike at the API surface. You'll swap Node for Bun without rewriting handlers.
- Node vs Bun vs Deno will matter less than server vs edge isolate vs native.
- WASM Components will let you plug Rust modules into JS runtimes seamlessly.
- Persistent isolates (Durable Objects etc.) will blur the line between "request-scoped compute" and "stateful server."
- Bun Deploy + Elysia may become a credible third-party serverless platform.
12-item adoption checklist
- Team strength mapped to runtime choice?
- Cold-start budget defined?
- Observability from day 1?
- Framework ecosystem maturity checked?
- Auth + validation libraries available on chosen runtime?
- Database driver quality verified?
- Package manager speed acceptable?
- Local dev parity with production?
- Memory/CPU costs modeled at scale?
- Vendor lock evaluated (Workers, Deno Deploy)?
- Escape hatch defined (can you move off)?
- AI/streaming use cases handled natively?
10 common mistakes
- Picking Bun too early in an enterprise setting — Node compat gaps cost.
- Picking Workers for long-running tasks — CPU limits bite.
- Running FastAPI with blocking sync calls inside async handlers — they stall the event loop.
- Using Lambda for chat-streaming — duration limits break UX.
- "Rust for everything" — developer velocity drops, bugs don't.
- Assuming Node and Edge are interchangeable — DB drivers differ.
- Ignoring process-per-request cost in Lambda — use pooled/fluid compute.
- No lightweight healthcheck — hanging runtimes go undetected.
- Logging to stdout at high cardinality — observability bill explosion.
- Hardcoding runtime APIs — makes migration painful later.
Next episode
S7 E2: Modern Backend Frameworks 2025 — NestJS, Fastify, Hono, Elysia, Spring Boot, FastAPI, Go, Axum, and API styles (tRPC, GraphQL, gRPC). Which framework, which API style, for which team.
— End of Modern Backend Runtimes.