3D Development for the Web in 2026 — Three.js, R3F, WebGPU, Gaussian Splatting (a deep-dive on the modern web 3D stack)

Prologue — WebGL grew up, and WebGPU is already yesterday

In the late 2010s, "doing 3D on the web" meant "doing WebGL." Three.js stacked a humane abstraction on top, and we lived with two copies of every shader — GLSL for WebGL today, WGSL for WebGPU tomorrow.

January 2026 closed that chapter. With Safari 26 shipping WebGPU on macOS Tahoe and iOS, WebGPU became Baseline. Chrome, Edge, Firefox, Safari — all on by default, with global coverage around 95%. The remaining 5% silently falls back to WebGL 2 through Three.js.

Baseline status is not the whole change, though.

  • Three.js r182 (December 2025) made WebGPURenderer the recommended renderer.
  • TSL (Three Shading Language) — write the shader once; Three.js compiles to both WGSL and GLSL. No more dual maintenance.
  • React Three Fiber v9 accepts an async gl prop, so WebGPU init wires up cleanly.
  • Gaussian Splatting is now a real, distinct way of representing scenes — photorealistic, real-time, polygon-free.
  • Meshy / Tripo / Rodin spit out PBR-textured meshes from a single sentence.

This post walks the 2026 web-3D stack end-to-end. From the first scene on screen to R3F, WebGPU, gsplat, AI 3D — and a matrix in the middle telling you which tool to pick for which use case.


1. The rendering pipeline — how 3D actually gets drawn

A diagram first. You can't compare tools until you know what tools do.

[Scene Graph]
   |  (tree of mesh / light / camera)
   v
[CPU: JS] -- matrices / culling / sorting --+
                                             |
                                             v
                                       [Draw Call]
                                             |
   GPU pipeline ────────────────────────────┴──────────────
   |                                                       |
   v                                                       v
 Vertex Shader     ->     Rasterizer    ->    Fragment Shader
 (transform verts)        (slice to px)        (color the px)
   |                                                       |
   v                                                       v
                  Z-buffer / Blend / Output
                              |
                              v
                       [Framebuffer]
                              |
                              v
                          <canvas>

Where this pipeline lives matters because it tells you what each library actually owns.

  • CPU side — scene graph, transform matrices, culling, sorting. Three.js owns this end-to-end. R3F lays a declarative React layer on top.
  • Draw calls — units of work sent to the GPU. Fewer is faster. Instancing, merging, atlasing all exist to shrink this number (a counter snippet closes this section).
  • Shaders (vertex / fragment) — tiny GPU programs. WebGL = GLSL, WebGPU = WGSL. TSL unifies the two.
  • Output compositing — Z-buffer, blending, post-processing.

One line to remember: "Half of performance is draw calls; the other half is shaders."
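
You can watch the first half directly. Once a renderer exists (section 2 builds one), Three.js keeps per-frame counters on it. A minimal sketch, assuming the classic WebGLRenderer info API (WebGPURenderer exposes a similar info object):

// Log draw calls and triangles once per second
setInterval(() => {
  console.log(
    'draw calls:', renderer.info.render.calls,
    'triangles:', renderer.info.render.triangles
  )
}, 1000)

If the calls number creeps into the thousands, the patterns in section 8 (instancing, merging) are the fix.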


2. Three.js — the de-facto standard for web 3D

By the numbers: Three.js has roughly 2.7 million weekly npm downloads, ~270x Babylon.js and ~337x PlayCanvas. It is not the "de facto standard" — it is the standard.

The smallest scene needs three things — scene, camera, renderer — plus a mesh inside.

import * as THREE from 'three'
import { WebGPURenderer } from 'three/webgpu'

// 1. Scene — the container for everything
const scene = new THREE.Scene()

// 2. Camera — where you look from
const camera = new THREE.PerspectiveCamera(
  75,                                  // fov (degrees)
  window.innerWidth / window.innerHeight,
  0.1,                                 // near clip
  1000                                 // far clip
)
camera.position.z = 5

// 3. Renderer — in 2026, WebGPURenderer is the default
const renderer = new WebGPURenderer({ antialias: true })
await renderer.init()                 // async init — required
renderer.setSize(window.innerWidth, window.innerHeight)
document.body.appendChild(renderer.domElement)

// 4. Mesh = Geometry + Material
const geometry = new THREE.BoxGeometry(1, 1, 1)
const material = new THREE.MeshStandardMaterial({ color: 0x44aa88 })
const cube = new THREE.Mesh(geometry, material)
scene.add(cube)

// 5. Lights — Standard materials need light or they render black
scene.add(new THREE.AmbientLight(0xffffff, 0.4))
const dir = new THREE.DirectionalLight(0xffffff, 1.0)
dir.position.set(5, 10, 7.5)
scene.add(dir)

// 6. Loop — use setAnimationLoop (not requestAnimationFrame)
renderer.setAnimationLoop(() => {
  cube.rotation.x += 0.01
  cube.rotation.y += 0.01
  renderer.render(scene, camera)
})

Three things to call out.

  1. await renderer.init() — WebGPU is async. Skip this and your first frame is black.
  2. MeshStandardMaterial needs light — black screen? Suspect lighting first.
  3. setAnimationLoop — not requestAnimationFrame. It hooks WebXR for free.

That snippet is the skeleton of every Three.js program. Everything else is laid on top.
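
One practical addendum: the prologue's "silent WebGL 2 fallback" is easy to verify yourself. The standard WebGPU entry point is navigator.gpu, so a capability check is one line (a sketch; Three.js runs its own detection internally):

if (navigator.gpu) {
  console.log('WebGPU available: WebGPURenderer will use it')
} else {
  console.log('No WebGPU: Three.js falls back to its WebGL 2 backend')
}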


3. React Three Fiber + drei — the React way to do 3D

Imperative Three.js is fine while a scene stays small, but once it has 50 nodes it turns into spaghetti. React Three Fiber (R3F) maps Three.js onto a React component tree.

Same cube in R3F:

import { Canvas } from '@react-three/fiber'
import { OrbitControls, Environment } from '@react-three/drei'

function Cube() {
  return (
    <mesh rotation={[0, 0.4, 0]}>
      <boxGeometry args={[1, 1, 1]} />
      <meshStandardMaterial color="hotpink" />
    </mesh>
  )
}

export default function Scene() {
  return (
    <Canvas camera={{ position: [0, 0, 5], fov: 75 }}>
      <ambientLight intensity={0.4} />
      <directionalLight position={[5, 10, 7.5]} intensity={1} />
      <Cube />
      <OrbitControls />
      <Environment preset="city" />
    </Canvas>
  )
}

Same outcome. But:

  • The scene graph is a React tree. Conditional rendering, state, hooks all just work.
  • <Canvas> handles resize, the render loop, and pixel ratio for you.
  • drei — the Poimandres helper library. OrbitControls, Environment, useGLTF, Html, Text — all the daily-driver utilities live there.

R3F v9 + WebGPU — async gl prop

R3F v9 lets the gl prop be an async factory. WebGPU's async init slots in naturally.

import { Canvas } from '@react-three/fiber'
import { WebGPURenderer } from 'three/webgpu'

<Canvas
  gl={async (props) => {
    const renderer = new WebGPURenderer(props)
    await renderer.init()
    return renderer
  }}
>
  {/* ...scene... */}
</Canvas>

As of May 2026, R3F's WebGPU story is still smoothing out — Poimandres is actively polishing — but the pattern above is production-viable. Want WebGL 2 fallback? Pass WebGLRenderer instead.

useFrame — the per-frame hook

The most React-flavored part of R3F. A component registers a callback fired every frame.

import { useRef } from 'react'
import { useFrame } from '@react-three/fiber'

function Spinner() {
  const ref = useRef(null)
  useFrame((state, delta) => {
    if (ref.current) ref.current.rotation.y += delta
  })
  return (
    <mesh ref={ref}>
      <torusKnotGeometry args={[1, 0.3, 128, 32]} />
      <meshStandardMaterial color="orange" />
    </mesh>
  )
}

delta is the seconds elapsed since the previous frame. That is the starting point of every framerate-independent animation.
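
For example, gliding toward a target at the same perceived speed on a 30fps phone and a 144fps desktop. A sketch; targetX is a made-up value:

useFrame((_, delta) => {
  if (!ref.current) return
  // Exponential damping: delta scales the step, so framerate drops out
  const k = 1 - Math.exp(-4 * delta)
  ref.current.position.x += (targetX - ref.current.position.x) * k
})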


4. WebGPU and TSL — two shader copies become one

One-line difference: WebGL is the web port of OpenGL ES (2.0 for WebGL 1, 3.0 for WebGL 2); WebGPU is a modern GPU API in the Vulkan / Metal / DX12 family.

What this means in practice:

  • Lower per-draw-call cost — in draw-call-heavy scenes (particles, many instances), expect 2–10x.
  • Compute shaders as first-class citizens — GPGPU (particles, physics, sims, post) sits naturally in the main pipeline (a sketch closes this section).
  • WGSL — a new shading language, replacing GLSL for WebGPU.

The last one used to be a real headache. GLSL written for WebGL had to be re-authored in WGSL for WebGPU. TSL ends that.

TSL = Three Shading Language. A node-based shader abstraction. Write once; Three.js compiles it to both WGSL (WebGPU) and GLSL (WebGL) internally.

A simple noise shader (wobble the material color by a sine wave):

import { MeshStandardNodeMaterial } from 'three/webgpu'
import { uniform, vec3, mix, sin, time, positionLocal } from 'three/tsl'

const speed = uniform(1.0)
const wave  = sin(positionLocal.y.mul(8.0).add(time.mul(speed)))
const color = mix(vec3(0.1, 0.4, 0.9), vec3(1.0, 0.4, 0.2), wave.mul(0.5).add(0.5))

const material = new MeshStandardNodeMaterial()
material.colorNode = color

positionLocal, time, mix, sin are all nodes. You assemble shaders in JavaScript. You write neither GLSL nor WGSL by hand.

A small WGSL fragment for reference (TSL emits something like this under the hood):

@fragment
fn fs_main(@location(0) uv: vec2<f32>) -> @location(0) vec4<f32> {
  let c = mix(vec3(0.1, 0.4, 0.9), vec3(1.0, 0.4, 0.2), sin(uv.y * 8.0) * 0.5 + 0.5);
  return vec4(c, 1.0);
}

The point is — in 2026 you almost never write either by hand. TSL handles it. Reading WGSL once for literacy is enough.
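
And the compute sketch promised above: filling a GPU storage buffer through TSL, no WGSL in sight. A minimal sketch, assuming an initialized WebGPURenderer named renderer; Fn, storage, and instanceIndex are node builders from three/tsl:

import { Fn, storage, instanceIndex, float, vec3 } from 'three/tsl'
import { StorageInstancedBufferAttribute } from 'three/webgpu'

const count = 1024
const positions = storage(new StorageInstancedBufferAttribute(count, 3), 'vec3', count)

// Each invocation writes its own element: a scatter along x, entirely on the GPU
const scatter = Fn(() => {
  positions.element(instanceIndex).assign(vec3(float(instanceIndex).mul(0.1), 0, 0))
})().compute(count)

await renderer.computeAsync(scatter)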


5. glTF — the JPEG of 3D

The standard for shipping polygon models over the web is glTF 2.0. People call it "the JPEG of 3D." PBR materials, animations, skinning, Draco compression — all in one file.

Three.js loader:

import { GLTFLoader } from 'three/addons/loaders/GLTFLoader.js'
import { DRACOLoader } from 'three/addons/loaders/DRACOLoader.js'

const draco = new DRACOLoader()
draco.setDecoderPath('https://www.gstatic.com/draco/v1/decoders/')

const loader = new GLTFLoader()
loader.setDRACOLoader(draco)

loader.load('/models/robot.glb', (gltf) => {
  scene.add(gltf.scene)
  // gltf.animations is an array of AnimationClips
})

R3F + drei collapse that to a single hook:

import { useGLTF } from '@react-three/drei'

function Robot() {
  const { scene } = useGLTF('/models/robot.glb')
  return <primitive object={scene} />
}
useGLTF.preload('/models/robot.glb')

useGLTF is Suspense-aware; the same model used in multiple places loads once. preload lets you hit the network early.

glTF optimization — three things

  1. Draco compression — compresses vertex data. Files shrink 5–10x.
  2. KTX2 / Basis textures — instead of JPG/PNG, GPU-native compressed textures. Lower memory, faster load (loader wiring sketched below).
  3. gltf-transform CLI — applies both with one command. Bake it into CI and forget.
npx @gltf-transform/cli optimize input.glb output.glb \
  --texture-compress webp --simplify 0.5
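
On the loading side, KTX2 needs a transcoder wired into GLTFLoader (item 2 above). A minimal sketch, assuming the loader and renderer from earlier in this section; the transcoder URL is illustrative, point it at wherever you host the Basis files:

import { KTX2Loader } from 'three/addons/loaders/KTX2Loader.js'

const ktx2 = new KTX2Loader()
  .setTranscoderPath('https://unpkg.com/three@0.182.0/examples/jsm/libs/basis/') // illustrative path
  .detectSupport(renderer) // picks the right GPU format for this device

loader.setKTX2Loader(ktx2)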

6. Animation — from clips to springs

Three streams of 3D animation:

  1. Bone / skin animations baked into glTF — straight playback of what Blender or Maya produced.
  2. Math-driven — rotate, translate, scale inside useFrame.
  3. Interaction-driven — hover, drag, scroll. Usually spring physics.

Playing glTF clips

import { useAnimations, useGLTF } from '@react-three/drei'
import { useEffect } from 'react'

function Robot() {
  const { scene, animations } = useGLTF('/models/robot.glb')
  const { actions, names } = useAnimations(animations, scene)
  useEffect(() => {
    actions[names[0]]?.reset().fadeIn(0.3).play()
  }, [actions, names])
  return <primitive object={scene} />
}

Spring animation — react-spring/three

import { useSpring, animated } from '@react-spring/three'

function Box({ hovered }) {
  const { scale } = useSpring({ scale: hovered ? 1.3 : 1.0 })
  return (
    <animated.mesh scale={scale}>
      <boxGeometry />
      <meshStandardMaterial />
    </animated.mesh>
  )
}

Values don't jerk — they interpolate the way real objects do. It's the difference between "AI-generated feel" and "polished portfolio."
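
Where does hovered come from? R3F meshes take pointer events directly, so the spring and the interaction combine into one component:

import { useState } from 'react'
import { useSpring, animated } from '@react-spring/three'

function HoverBox() {
  const [hovered, setHovered] = useState(false)
  const { scale } = useSpring({ scale: hovered ? 1.3 : 1.0 })
  return (
    <animated.mesh
      scale={scale}
      onPointerOver={() => setHovered(true)}
      onPointerOut={() => setHovered(false)}
    >
      <boxGeometry />
      <meshStandardMaterial />
    </animated.mesh>
  )
}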


7. Post-processing — the half-step that defines the finish

The same scene, run through Bloom, SSAO, film grain, lives in a different visual universe. The standard library is postprocessing (Vanruesc); R3F's wrapper is @react-three/postprocessing.

import { EffectComposer, Bloom, DepthOfField, Vignette } from '@react-three/postprocessing'

<Canvas>
  {/* ...scene... */}
  <EffectComposer>
    <Bloom intensity={1.2} luminanceThreshold={0.6} mipmapBlur />
    <DepthOfField focusDistance={0} focalLength={0.02} bokehScale={2} />
    <Vignette eskil={false} offset={0.1} darkness={1.0} />
  </EffectComposer>
</Canvas>

Caveat: post-processing is full-screen — cost scales with pixel count. On mobile, cap pixelRatio (1.5–2.0) and pick two or three effects max.
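
One way to act on that caveat in R3F is to gate the composer behind a device check. A sketch; the matchMedia query is just one heuristic for "mobile":

const isMobile =
  typeof window !== 'undefined' && window.matchMedia('(pointer: coarse)').matches

<Canvas dpr={isMobile ? [1, 1.5] : [1, 2]}>
  {/* ...scene... */}
  {!isMobile && (
    <EffectComposer>
      <Bloom intensity={1.2} mipmapBlur />
    </EffectComposer>
  )}
</Canvas>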


8. Performance — draw calls are half of everything

3D web performance almost always comes down to draw-call count and shader cost. The 2026 playbook in five patterns.

1. Instancing — 10,000 of the same mesh in one draw call

For many copies of the same geometry / material (trees, grass, box piles), instancing collapses draw calls to 1.

import { Instances, Instance } from '@react-three/drei'

<Instances limit={10000}>
  <boxGeometry args={[1, 1, 1]} />
  <meshStandardMaterial color="white" />
  {positions.map((p, i) => (
    <Instance key={i} position={p} />
  ))}
</Instances>

Instancing is cheaper on WebGPU. If 5,000 instances was the WebGL ceiling, 50,000 is often fine on WebGPU.

2. Frustum culling, LOD

Three.js culls objects outside the camera by default. Mesh.frustumCulled is true out of the box — don't turn it off. LOD swaps mesh resolution by camera distance.

import { Detailed } from '@react-three/drei'

<Detailed distances={[0, 10, 50]}>
  <HighPolyMesh />
  <MidPolyMesh />
  <LowPolyMesh />
</Detailed>

3. Share materials and geometries

Same material / geometry should exist once in memory. In R3F, create them outside the component and reuse.
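
A sketch of the pattern, with the shared objects hoisted to module scope so every instance points at the same GPU resources:

import * as THREE from 'three'

// Created once per module, not once per component render
const boxGeometry = new THREE.BoxGeometry(1, 1, 1)
const boxMaterial = new THREE.MeshStandardMaterial({ color: 'white' })

function Box(props) {
  return <mesh {...props} geometry={boxGeometry} material={boxMaterial} />
}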

4. Textures — KTX2 and mipmaps

JPG/PNG: decoded by the CPU, then uploaded. KTX2 (Basis Universal): the GPU consumes the compressed bytes directly. Faster load, lower VRAM.

5. Cap pixelRatio

A retina device with devicePixelRatio = 3 shades 9x the pixels. Always cap.

<Canvas dpr={[1, 2]}>  {/* min 1, max 2 */}

9. WebXR — VR / AR on the web

Thanks to setAnimationLoop and WebXRManager, WebXR in Three.js is nearly free. R3F has @react-three/xr.

import { Canvas } from '@react-three/fiber'
import { XR, createXRStore, XROrigin } from '@react-three/xr'

const store = createXRStore()

export default function App() {
  return (
    <>
      <button onClick={() => store.enterVR()}>Enter VR</button>
      <Canvas>
        <XR store={store}>
          <XROrigin />
          {/* ...scene... */}
        </XR>
      </Canvas>
    </>
  )
}

WebGPU + WebXR is surprisingly light in 2026 — Apple Vision Pro, Quest 3, Quest 3S all run WebGPU-backed WebXR cleanly. Adoption is picking up in marketing, education, and healthcare.


10. Three.js vs Babylon.js vs PlayCanvas

Engine comparison, short version.

Item            Three.js                          Babylon.js                           PlayCanvas
License         MIT                               Apache 2.0                           MIT (engine)
Strengths       Massive ecosystem, examples,      Game features (physics, audio,       Visual editor, cloud IDE
                community                         material editor)
Weaknesses      Game features you assemble        Smaller ecosystem                    Code-first is weak
                yourself
WebGPU          Recommended in r182               Stable since Babylon 7               Engine-level support
Weekly npm DLs  ~2.7M                             ~10K                                 ~8K
Sweet spot      Portfolios, products, art, viz    Browser games, simulation            Ads, games, configurators

One-line picks:

  • Creative work, art, portfolios, product viz → Three.js (+ R3F).
  • Game-style interaction, heavy physics → Babylon.js.
  • Editor-driven visual collaboration → PlayCanvas.

Ecosystem size dominates. When in doubt, Three.js.


11. Gaussian Splatting — photorealistic 3D without polygons

This is the new chapter.

Gaussian Splatting (gsplat) is not a polygon mesh. It represents a scene as millions of tiny 3D Gaussians — ellipsoids with position, color (in spherical harmonics), opacity, and orientation. The camera "splats" them onto the screen to form the image.

Polygon mesh:                    Gaussian Splatting:
  ┌── vertex / face data           ┌── millions of Gaussians
  ├── UV, textures                 │    (position, SH color, scale, rotation, alpha)
  ├── normals, materials           ├── no textures
  └── shaded by lights             └── lighting baked at capture time

Why is this exciting?

  1. Photorealistic — trained from 30 to several hundred photos / video. Output is near-photographic.
  2. Real-time — GPU-friendly. 60fps on the web is achievable.
  3. Zero mesh modeling — no Blender, no UVs, no textures, no normals. You need a camera, nothing else.
  4. The successor to NeRF — if NeRF was the academic milestone, gsplat is the practical tool (fast training, real-time render).

And clear limits

  • Lighting is baked in — dynamically relighting the scene is hard.
  • Collisions / physics are tough — no mesh to collide with.
  • Editing is fiddly — purpose-built editors like SuperSplat exist for a reason.
  • Files are big — tens to hundreds of MB.

The use case: capturing an existing space and shipping it. Real estate, heritage, museums, concerts, events, exhibitions.

The 2026 tooling landscape

  • Polycam — the mobile capture market leader. iOS LiDAR + photogrammetry + gsplat. 4.7 average rating, 540k+ iOS reviews. Easiest on-ramp.
  • Luma AI — cloud pipeline widely considered to produce the best visual quality among free consumer gsplat tools. Embeddable.
  • SuperSplat — a free, open-source, browser-based gsplat editor built on PlayCanvas. Live annotations, hotspots, post-effects (bloom, vignette), camera animations, WebXR support. Export as an HTML viewer and host on GitHub Pages, Netlify, Vercel.
  • NeRF Studio — research-oriented. Strong for local training and experimentation.

Putting it on the web — @mkkellogg/gaussian-splats-3d

A lightweight Three.js-compatible gsplat viewer. Inside R3F:

import * as GaussianSplats3D from '@mkkellogg/gaussian-splats-3d'
import { useThree, useFrame } from '@react-three/fiber'
import { useEffect, useRef } from 'react'

function Splat({ url }) {
  const { scene, camera, gl } = useThree()
  const viewerRef = useRef(null)

  useEffect(() => {
    const viewer = new GaussianSplats3D.Viewer({
      threeScene: scene,
      camera,
      renderer: gl,
      selfDrivenMode: false,  // we drive updates from R3F's own loop
    })
    viewer.addSplatScene(url).then(() => { viewerRef.current = viewer })
    return () => {
      viewerRef.current = null
      viewer.dispose()
    }
  }, [url, scene, camera, gl])

  // With selfDrivenMode off, the viewer needs update() each frame
  // (R3F already renders the scene the splats were added to)
  useFrame(() => viewerRef.current?.update())

  return null
}

Millions of Gaussians in your browser, 60fps. Five years ago this was science fiction.


12. AI-generated 3D — Meshy, Tripo, Rodin

The last branch. AI that produces 3D meshes from text or images. In 2026, no longer experimental — it is part of the workflow.

Three contenders.

  • Meshy 6 — the most balanced product. Text-to-3D, image-to-3D, PBR textures, topology control, wide export support. 40–60s generation. The default recommendation.
  • Tripo AI — fastest (20–30s). Sensible defaults, the lowest-friction entry. Text and image.
  • Rodin AI (Gen-2) — 10-billion-parameter model. Highest quality. Strong on characters and structured assets. 60–180s.

A typical workflow:

  1. Concept — quick variations in Tripo (~30s).
  2. Favorite candidate — re-run through Meshy for clean PBR.
  3. Final character — Rodin Gen-2 for the high-quality pass.
  4. Export glTF — drop into Three.js / R3F.
Idea ─▶ Tripo (explore)
         └─▶ Meshy (refine, PBR)
                  └─▶ Rodin (finish, characters)
                          └─▶ glTF
                               └─▶ useGLTF in R3F

Reality check: AI-generated topology is messy. Portfolios, viz, game backgrounds are fine, but characters that need rigging and animation typically require a retopo pass in Blender.


13. "What for what" — the use-case matrix

Use case                        Stack                                    Notes
Developer portfolio             R3F + drei + Bloom                       A pinch of drei's Float / Text usually suffices
Product configurator            R3F + glTF + KTX2                        Color / texture options via material swap
Real-estate virtual tour        gsplat (Luma / Polycam) + SuperSplat     Photorealistic, real-space
Museum / exhibition             gsplat + WebXR                           Hotspots, annotations
Browser game                    Babylon.js or Three.js + Rapier          Physics, collision
Data visualization              R3F + camera choreography                Lean on instancing
AR marketing                    R3F + @react-three/xr + WebXR            Pair with iOS Quick Look
Interactive art                 Three.js + TSL (hand-rolled shaders)     Node shaders for freedom
Character-centric interaction   R3F + AI gen (Rodin) + Mixamo retarget   Plus a topology cleanup pass
LiDAR-captured assets           Polycam → glTF or gsplat                 Phone-only end-to-end

Rules of thumb:

  • "Show a space as it is" → gsplat.
  • "Let users manipulate it (color, options, physics)" → polygon (glTF) + R3F.
  • "Build something small, fast" → R3F + AI gen.

14. Building a portfolio site — 30-minute recipe

The most common first project. Here is the skeleton, in one pass.

import { Canvas } from '@react-three/fiber'
import { OrbitControls, Environment, Float, useGLTF, ContactShadows } from '@react-three/drei'
import { EffectComposer, Bloom } from '@react-three/postprocessing'
import { Suspense } from 'react'

function Hero() {
  const { scene } = useGLTF('/hero.glb')
  return <primitive object={scene} scale={1.4} />
}

export default function Portfolio() {
  return (
    <Canvas camera={{ position: [0, 0, 6], fov: 50 }} dpr={[1, 2]}>
      <color attach="background" args={['#0a0a0a']} />
      <Suspense fallback={null}>
        <Environment preset="studio" />
        <Float speed={1.5} rotationIntensity={0.4} floatIntensity={0.8}>
          <Hero />
        </Float>
        <ContactShadows position={[0, -1.6, 0]} opacity={0.6} blur={2.4} />
      </Suspense>
      <OrbitControls enableZoom={false} />
      <EffectComposer>
        <Bloom intensity={0.8} mipmapBlur />
      </EffectComposer>
    </Canvas>
  )
}

Checklist:

  • Model from Meshy / Tripo, or a CC0 Sketchfab grab, then gltf-transform optimize once.
  • Solid background plus a single Environment HDR (studio / city).
  • Float, ContactShadows, Bloom to mask the "AI-gen look."
  • dpr={[1, 2]} to keep retina sane.
  • Kill Bloom on mobile (media query + conditional render).

Epilogue — the web really is 3D now

By 2026, web 3D is not a "look-once demo" slot any more. Product pages, real estate, museums, advertising, learning tools all carry 3D as a matter of course. The daily tooling:

  • Three.js + R3F — polygon-based standard stack.
  • WebGPU + TSL — two shader copies become one.
  • Gaussian Splatting — a camera is enough for photorealistic space.
  • AI 3D generation — a sentence is enough for a mesh.

Two parting artifacts.

A 14-item checklist

  1. Is WebGPURenderer your default, with WebGL 2 fallback verified?
  2. Did you remember the async renderer.init()?
  3. Are your glTF assets Draco / KTX2 compressed?
  4. Are dense identical meshes drawn through instancing, not one by one?
  5. Is pixelRatio capped on mobile?
  6. Is frustum culling still on (you didn't disable it)?
  7. Are duplicate materials / geometries de-duplicated?
  8. Did you trim the number of post-processing passes on mobile?
  9. Is loading wrapped in Suspense for UX?
  10. Is WebXR entry inside a user gesture (click)?
  11. Are gsplat assets exported in a compressed format (SPZ / KSPLAT)?
  12. Did you retopologize AI-gen meshes (when needed)?
  13. Does setAnimationLoop run in exactly one place (no duplicate loops)?
  14. If the first frame is black — is it missing lights or missing init?

Ten anti-patterns

  1. Skipping await renderer.init() on WebGPU and shipping a black first frame.
  2. Using MeshStandardMaterial without lights and rendering black.
  3. Reloading the same model every frame.
  4. Disabling frustumCulled "to be safe."
  5. Uploading raw JPG/PNG instead of compressed textures.
  6. Identical dpr settings on desktop and mobile.
  7. Five post-processing passes on phones.
  8. Rigging AI-gen meshes without a retopo pass.
  9. Trying to edit / physics-collide gsplat like polygons.
  10. Calling GLTFLoader.load in each component instead of useGLTF.

Coming up next

Candidate posts:

  • WebGPU compute shaders in practice — GPGPU for a million particles
  • The Gaussian Splatting workflow — from capture to web embed
  • R3F + Rapier — an interactive 3D game in an hour

"The web really is 3D now. Polygons become meshes, photos become Gaussians, sentences become assets. The thread connecting them is still Three.js."

— 3D Development for the Web 2026, end.

