The Dual Renderer Architecture: WebGLRenderer vs. the New Backend System
Prerequisites
- Articles 1-2
- Basic understanding of graphics pipeline concepts (vertex/fragment shaders, render targets, depth buffers)
- Familiarity with WebGL or WebGPU API concepts
Three.js is in the middle of a historic architectural transition. For over a decade, WebGLRenderer was a single monolithic class that directly managed WebGL state through 30+ internal helper modules. Now, a new three-layer system separates concerns: an API-agnostic Renderer orchestrator, an abstract Backend interface, and concrete backend implementations for WebGPU and WebGL 2. Understanding both architectures is essential — the legacy renderer is still widely used, while the new system is where all active development happens.
The Architectural Split: Monolithic vs. Modular
WebGLRenderer directly imports over 30 internal WebGL-specific modules: WebGLAttributes, WebGLBindingStates, WebGLBufferRenderer, WebGLCapabilities, WebGLClipping, WebGLExtensions, WebGLGeometries, WebGLInfo, WebGLMorphtargets, WebGLObjects, WebGLPrograms, WebGLProperties, WebGLRenderLists, WebGLRenderStates, WebGLShadowMap, WebGLState, WebGLTextures, WebGLUniforms, and more. Every one of these directly calls gl.* methods. This tight coupling made adding WebGPU support impossible without a complete rewrite.
The new architecture inverts this coupling:
```mermaid
flowchart TB
    subgraph "Legacy Architecture"
        WGL["WebGLRenderer<br/>(3,600+ lines)"]
        WGL --> WGLA["WebGLAttributes"]
        WGL --> WGLB["WebGLBindingStates"]
        WGL --> WGLP["WebGLPrograms"]
        WGL --> WGLS["WebGLState"]
        WGL --> WGLT["WebGLTextures"]
        WGL --> WGLMore["...25+ more modules"]
    end
    subgraph "New Architecture"
        R["Renderer<br/>(3,680+ lines)<br/>API-agnostic"]
        R --> RL["RenderLists"]
        R --> RO["RenderObjects"]
        R --> PP["Pipelines"]
        R --> TX["Textures"]
        R --> NM["NodeManager"]
        R --> More["...10+ more"]
        R -->|"delegates GPU calls"| BE["Backend (abstract)"]
        BE --> WGPU["WebGPUBackend<br/>(~2,600 lines)"]
        BE --> WGLBE["WebGLBackend<br/>(~2,800 lines)"]
    end
```
The key insight is that Renderer handles all the logic of rendering (scene traversal, sorting, culling, pipeline management) while Backend handles all the GPU API calls (creating buffers, compiling shaders, issuing draw commands). This separation means adding a new backend (say, for WebNN or a future API) requires implementing only the Backend interface, not rewriting the entire render loop.
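To make the contract concrete, here is a minimal sketch of what an abstract backend plus a trivial concrete backend could look like. The method names mirror those discussed in this article; the class bodies and the `LoggingBackend` example are illustrative assumptions, not three.js source.

```javascript
// Minimal sketch of the Backend contract: the orchestrator calls these
// methods, and each concrete backend translates them into GPU API calls.
class Backend {

	constructor( parameters = {} ) {

		this.parameters = parameters;
		this.data = new WeakMap(); // per-object GPU handles
		this.renderer = null;

	}

	async init( renderer ) { this.renderer = renderer; }

	// Resource lifecycle: concrete backends turn these into gl.* or GPUDevice calls.
	createAttribute( /* attribute */ ) { throw new Error( 'Abstract method.' ); }
	createTexture( /* texture */ ) { throw new Error( 'Abstract method.' ); }

	beginRender( /* renderContext */ ) { throw new Error( 'Abstract method.' ); }
	draw( /* renderObject */ ) { throw new Error( 'Abstract method.' ); }
	endRender( /* renderContext */ ) { throw new Error( 'Abstract method.' ); }

}

// A hypothetical backend that just records calls. Note that it implements
// only GPU-facing methods; the render loop itself lives in Renderer.
class LoggingBackend extends Backend {

	constructor( parameters ) { super( parameters ); this.log = []; }

	beginRender() { this.log.push( 'begin' ); }
	draw() { this.log.push( 'draw' ); }
	endRender() { this.log.push( 'end' ); }

}

const backend = new LoggingBackend();
backend.beginRender();
backend.draw();
backend.endRender();
console.log( backend.log.join( ' ' ) ); // begin draw end
```

Swapping `LoggingBackend` for a WebGPU- or WebGL-flavored implementation changes nothing in the orchestrator, which is exactly the point of the inversion.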
The New Renderer Base Class
Renderer is the 3,680+ line orchestrator. Its constructor at lines 83-107 accepts a Backend instance and a parameters object, then wires up an array of management components:
| Component | Responsibility |
|---|---|
| `RenderLists` | Manages sorted render lists for opaque/transparent objects |
| `RenderObjects` | Caches and provides RenderObject instances |
| `Pipelines` | Manages GPU pipeline state (shader programs + render state) |
| `Bindings` | Manages uniform buffers and bind groups |
| `Attributes` | Manages GPU attribute buffers |
| `Geometries` | Manages geometry state and wireframe generation |
| `Textures` | Manages GPU texture creation and updates |
| `NodeManager` | Manages node graph compilation and caching |
| `Background` | Handles scene background rendering |
| `Lighting` | Manages light node setup |
| `RenderContexts` | Provides render context objects for render passes |
| `RenderBundles` | Manages command bundle recording |
| `Animation` | Manages the animation/render loop |
| `XRManager` | Manages WebXR session integration |
The imports at the top of Renderer.js reveal the scope — over 20 imports from the common/ directory plus TSL node imports for internal shader construction.
Tip: When debugging rendering issues in the new system, start with the management component responsible for your problem area. Texture issues? Check `Textures`. Shader compilation failures? Check `NodeManager` and `Pipelines`. Draw order bugs? Check `RenderLists`.
Backend: The Abstract GPU Interface
Backend defines the contract for GPU operations. The constructor is minimal — it stores parameters, creates a WeakMap for per-object GPU data, and holds references to the renderer and DOM element that get set during initialization.
The abstract methods that backends must implement cover the full GPU lifecycle:
```mermaid
flowchart LR
    subgraph "Backend Interface"
        Init["init()"]
        Create["createTexture()<br/>createAttribute()<br/>createBindings()"]
        Update["updateTexture()<br/>updateAttribute()<br/>updateBindings()"]
        Render["beginRender()<br/>draw()<br/>endRender()"]
        Compute["beginCompute()<br/>compute()<br/>endCompute()"]
        Destroy["destroyTexture()<br/>destroyAttribute()"]
    end
    Init --> Create --> Update --> Render --> Destroy
    Create --> Compute
```
WebGPUBackend (~2,600 lines) implements these methods using the native WebGPU API (GPUDevice, GPUCommandEncoder, GPURenderPassEncoder). WebGLBackend (~2,800 lines) implements the same interface using WebGL 2 calls (gl.bindTexture, gl.drawElements, etc.), providing a drop-in fallback for browsers that don't yet support WebGPU.
The data WeakMap is a crucial design element. Instead of attaching backend-specific properties to Three.js objects (which would leak abstraction), each backend stores its GPU handles (WebGL textures, WebGPU buffers, pipeline objects) in this WeakMap keyed by the source object. When a Three.js object is garbage collected, its GPU data becomes reclaimable automatically.
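The pattern is easy to demonstrate in isolation. In this sketch the `BackendData` class and its `get` helper are our own names, chosen for illustration; only the WeakMap idea itself comes from the article.

```javascript
// Illustrative sketch of the per-object GPU-data pattern described above.
class BackendData {

	constructor() {

		this.data = new WeakMap();

	}

	get( object ) {

		let entry = this.data.get( object );

		if ( entry === undefined ) {

			// Lazily create a slot for GPU handles (buffers, textures, pipelines).
			entry = {};
			this.data.set( object, entry );

		}

		return entry;

	}

}

const backendData = new BackendData();

const texture = { name: 'diffuse' }; // stand-in for a three.js Texture
const gpuData = backendData.get( texture );
gpuData.handle = 'gl-texture-1'; // backend-specific handle, invisible to the scene object

// Same object yields the same slot, and the scene object is never mutated.
console.log( backendData.get( texture ).handle ); // 'gl-texture-1'
console.log( 'handle' in texture ); // false
```

Because the WeakMap holds its keys weakly, dropping the last reference to `texture` makes the associated GPU-data entry collectable too, with no explicit bookkeeping.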
WebGPURenderer: The 107-Line Orchestrator
WebGPURenderer is remarkably thin — just 107 lines total, most of which is JSDoc. The constructor at lines 53-103 does three things:
- Selects the backend: If `parameters.forceWebGL` is true, it uses `WebGLBackend` directly. Otherwise, it creates a `WebGPUBackend` and sets up a fallback function:
```javascript
parameters.getFallback = () => {
	warn( 'WebGPURenderer: WebGPU is not available, running under WebGL2 backend.' );
	return new WebGLBackend( parameters );
};
```
- Passes the backend to `Renderer`: `super( backend, parameters )`
- Installs the `StandardNodeLibrary`: `this.library = new StandardNodeLibrary()` — this maps classic Three.js materials and lights to their node-based equivalents, enabling backward compatibility (more on this in Part 4).
The fallback mechanism is elegant: Renderer calls getFallback() during init() if the primary backend fails initialization. This means you can write new WebGPURenderer() and it automatically works on both WebGPU and WebGL 2 browsers.
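The control flow can be sketched as follows. The `initWithFallback` helper and the toy backend objects are assumptions for illustration; the real logic lives inside `Renderer.init()`.

```javascript
// Sketch of the fallback flow described above, not three.js source:
// if the primary backend's init() rejects, getFallback() supplies a replacement.
async function initWithFallback( backend, parameters ) {

	try {

		await backend.init();
		return backend;

	} catch ( e ) {

		if ( parameters.getFallback === undefined ) throw e;

		const fallback = parameters.getFallback();
		await fallback.init();
		return fallback;

	}

}

// Usage: a WebGPU-style backend that fails where the API is unavailable.
const primary = { name: 'webgpu', async init() { throw new Error( 'WebGPU not available' ); } };
const parameters = { getFallback: () => ( { name: 'webgl2', async init() {} } ) };

initWithFallback( primary, parameters ).then( ( b ) => console.log( b.name ) ); // 'webgl2'
```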
The Render Pipeline: _renderScene() Walkthrough
The heart of the render loop is _renderScene() at line 1424. Let's trace a single frame:
```mermaid
sequenceDiagram
    participant App
    participant Renderer
    participant RenderList
    participant Background
    participant Backend
    App->>Renderer: render(scene, camera)
    Renderer->>Renderer: _renderScene(scene, camera)
    Renderer->>Renderer: Update projection matrix
    Renderer->>Renderer: _projectObject(scene) → build RenderList
    Renderer->>RenderList: sort opaque (front-to-back)
    Renderer->>RenderList: sort transparent (back-to-front)
    Renderer->>Backend: beginRender(renderContext)
    Renderer->>Background: render background
    Renderer->>Renderer: _renderObjects(opaqueList)
    Renderer->>Renderer: _renderTransparents(transparentList)
    Renderer->>Backend: endRender(renderContext)
```
The method starts by preserving the current render state (render ID, render context, render object function) to support nested render calls — shadow maps and transmission effects both trigger recursive renders. It then sets up the render context and configures the camera's coordinate system and reversed depth buffer.
The projection step walks the scene graph, testing each object against the view frustum, and categorizes visible objects into opaque, transparent, and bundle groups in the RenderList. After sorting (opaques front-to-back for early-z rejection, transparents back-to-front for correct blending), it renders the background, then opaque objects, then transparents.
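A heavily simplified sketch of that categorization pass is shown below. The function and helper names are ours, and the real `_projectObject()` additionally handles layers, render bundles, groups, and LOD; this only captures the visible/frustum/opacity branching.

```javascript
// Stand-ins for the frustum test and view-space depth (assumptions, not three.js code).
function inFrustum( /* object, camera */ ) { return true; }
function depthFromCamera( object /*, camera */ ) { return object.position?.z ?? 0; }

// Simplified sketch of the projection/categorization pass described above.
function projectObject( object, camera, opaque = [], transparent = [] ) {

	if ( object.visible !== false ) {

		if ( object.material !== undefined && inFrustum( object, camera ) ) {

			// Transparent materials go to the transparent list, everything else to opaque.
			const list = object.material.transparent === true ? transparent : opaque;
			list.push( { object, z: depthFromCamera( object, camera ) } );

		}

		for ( const child of object.children ?? [] ) {

			projectObject( child, camera, opaque, transparent );

		}

	}

	return { opaque, transparent };

}

const scene = { children: [
	{ material: { transparent: false }, position: { z: 5 } },
	{ material: { transparent: true }, position: { z: 2 } },
] };

const { opaque, transparent } = projectObject( scene, null );
console.log( opaque.length, transparent.length ); // 1 1
```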
RenderObject and RenderList
RenderObject is the bridge between the scene graph and GPU draw commands. It's constructed with references to the 3D object, material, scene, camera, lights node, render context, and clipping context. It caches everything the backend needs to issue a draw call: the compiled node graph, pipeline object, bind groups, and draw parameters.
The caching strategy is key to performance. Rather than recompiling shader programs every frame, RenderObject checks if its cached state is still valid by hashing the current material properties, geometry attributes, and lighting configuration. Only when something changes does it trigger recompilation.
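A toy version of that validity check might look like this. The `cacheKey` function and its chosen inputs are illustrative; the real RenderObject hashes more state (node graph version, lighting, clipping context).

```javascript
// Illustrative cache-key check, not the actual RenderObject hashing code.
function cacheKey( material, geometry, lightsVersion ) {

	// Any change to these inputs produces a different key, invalidating the cached pipeline.
	return [
		material.type,
		material.version, // three.js materials bump `version` when needsUpdate is set
		Object.keys( geometry.attributes ).sort().join( ',' ),
		lightsVersion
	].join( '|' );

}

const material = { type: 'MeshStandardMaterial', version: 0 };
const geometry = { attributes: { position: {}, normal: {}, uv: {} } };

const key1 = cacheKey( material, geometry, 1 );
material.version ++; // e.g. after setting material.needsUpdate = true
const key2 = cacheKey( material, geometry, 1 );

console.log( key1 === key2 ); // false, so the pipeline must be recompiled
```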
RenderList implements two sorting strategies. For opaques, painterSortStable sorts by groupOrder, then renderOrder, then z (front-to-back — closer objects first). For transparents, reversePainterSortStable reverses the z comparison (back-to-front — farther objects first):
```javascript
// Opaque: front-to-back (smaller z first → early-z rejection)
return a.z - b.z;

// Transparent: back-to-front (larger z first → correct blending)
return b.z - a.z;
```
Both functions use a.id - b.id as the final tiebreaker to ensure deterministic order.
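Putting the pieces together, the two comparators can be reconstructed as below, assuming list items shaped like `{ groupOrder, renderOrder, z, id }` (a simplification of the actual render-list items):

```javascript
// Reconstruction of the two sort comparators described above.
function painterSortStable( a, b ) {

	if ( a.groupOrder !== b.groupOrder ) return a.groupOrder - b.groupOrder;
	if ( a.renderOrder !== b.renderOrder ) return a.renderOrder - b.renderOrder;
	if ( a.z !== b.z ) return a.z - b.z; // front-to-back: smaller z first
	return a.id - b.id; // deterministic tiebreaker

}

function reversePainterSortStable( a, b ) {

	if ( a.groupOrder !== b.groupOrder ) return a.groupOrder - b.groupOrder;
	if ( a.renderOrder !== b.renderOrder ) return a.renderOrder - b.renderOrder;
	if ( a.z !== b.z ) return b.z - a.z; // back-to-front: larger z first
	return a.id - b.id;

}

const items = [
	{ groupOrder: 0, renderOrder: 0, z: 5, id: 0 },
	{ groupOrder: 0, renderOrder: 0, z: 1, id: 1 },
	{ groupOrder: 0, renderOrder: 0, z: 3, id: 2 },
];

console.log( [ ...items ].sort( painterSortStable ).map( ( i ) => i.z ) );        // [ 1, 3, 5 ]
console.log( [ ...items ].sort( reversePainterSortStable ).map( ( i ) => i.z ) ); // [ 5, 3, 1 ]
```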
```mermaid
flowchart TD
    PO["_projectObject()"] -->|"Categorize"| Opaque["Opaque List"]
    PO -->|"Categorize"| Trans["Transparent List"]
    Opaque -->|"painterSortStable<br/>front-to-back"| SortO["Sorted Opaques"]
    Trans -->|"reversePainterSortStable<br/>back-to-front"| SortT["Sorted Transparents"]
    SortO --> RenderO["_renderObjects()"]
    SortT --> RenderT["_renderTransparents()"]
    RenderO -->|"For each item"| RO["Get/Create RenderObject"]
    RenderT -->|"For each item"| RO
    RO -->|"Backend.draw()"| GPU["GPU Draw Call"]
```
What's Next
The new renderer architecture is inseparable from the node system — the entire shader generation pipeline has moved from static GLSL template chunks to dynamic node graphs that get compiled into WGSL or GLSL. In the next article, we'll explore how the Node base class works, how TSL provides a fluent JavaScript API for writing shaders, and how NodeBuilder traverses the DAG to produce actual shader code.