Math Primitives, Lighting, and the Camera System
Prerequisites
- Articles 1-2
- Basic linear algebra (vectors, matrices, dot/cross products)
- Understanding of coordinate spaces (local, world, view, clip)
Three.js ships a complete linear algebra library, a sophisticated color management system, a camera hierarchy with projection matrix computation, and a light system that integrates with both the legacy and node-based renderers. These subsystems are foundational — the math primitives are used in virtually every source file, and understanding how cameras and lights work is essential for debugging rendering issues. This article examines each piece and shows how they connect.
The Math Library: In-Place Mutation API
The math classes — Vector2, Vector3, Vector4, Matrix3, Matrix4, Quaternion, Euler, Color — all follow a consistent API philosophy: every operation mutates and returns this. This enables method chaining while avoiding garbage collection pressure from temporary allocations.
Vector3, at ~1,263 lines, illustrates the pattern:
```javascript
// In-place chaining — no allocations
direction.copy( target ).sub( origin ).normalize();

// Equivalent to:
direction.copy( target );
direction.sub( origin );
direction.normalize();
```
Compare this to an immutable API where each operation returns a new vector: that would allocate three Vector3 instances per frame per call site. In a renderer running at 60fps with thousands of objects, the GC pressure becomes measurable.
For cases where you need immutable-style semantics, clone() creates a copy and copy(source) overwrites the current instance:
```javascript
const original = new Vector3( 1, 2, 3 );
const copy = original.clone(); // New Vector3(1, 2, 3)

const other = new Vector3();
other.copy( original ); // other is now (1, 2, 3)
```
```mermaid
classDiagram
    class Vector3 {
        +x: number
        +y: number
        +z: number
        +set(x, y, z): this
        +add(v): this
        +sub(v): this
        +multiplyScalar(s): this
        +normalize(): this
        +dot(v): number
        +cross(v): this
        +clone(): Vector3
        +copy(v): this
    }
    class Matrix4 {
        +elements: number[16]
        +compose(pos, quat, scale): this
        +decompose(pos, quat, scale): this
        +multiply(m): this
        +invert(): this
        +makePerspective(): this
    }
    class Quaternion {
        +x: number
        +y: number
        +z: number
        +w: number
        +setFromEuler(e): this
        +setFromAxisAngle(axis, angle): this
        +slerp(q, t): this
        +_onChange(callback)
    }
```
Matrix4 at ~1,314 lines is the workhorse for transforms. Its compose(position, quaternion, scale) method builds a TRS (translate-rotate-scale) matrix, and decompose(position, quaternion, scale) extracts the components back — these are used in Object3D's updateMatrix() and applyMatrix4() as we saw in Part 2.
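To make the TRS construction concrete, here is a plain-JavaScript sketch of the math `compose()` performs, producing a column-major 16-element array. `composeTRS` is a hypothetical helper, not three.js's actual code; it assumes `position` and `scale` are `{x, y, z}` objects and `quaternion` is a unit quaternion `{x, y, z, w}`.

```javascript
// Sketch of TRS composition (column-major output, like Matrix4.elements).
// Each column is a rotated, scaled basis vector; the last column is translation.
function composeTRS( position, quaternion, scale ) {
	const { x, y, z, w } = quaternion;
	const x2 = x + x, y2 = y + y, z2 = z + z;
	const xx = x * x2, xy = x * y2, xz = x * z2;
	const yy = y * y2, yz = y * z2, zz = z * z2;
	const wx = w * x2, wy = w * y2, wz = w * z2;
	const sx = scale.x, sy = scale.y, sz = scale.z;

	return [
		( 1 - ( yy + zz ) ) * sx, ( xy + wz ) * sx, ( xz - wy ) * sx, 0, // column 0: X basis
		( xy - wz ) * sy, ( 1 - ( xx + zz ) ) * sy, ( yz + wx ) * sy, 0, // column 1: Y basis
		( xz + wy ) * sz, ( yz - wx ) * sz, ( 1 - ( xx + yy ) ) * sz, 0, // column 2: Z basis
		position.x, position.y, position.z, 1                            // column 3: translation
	];
}
```

Decomposition reverses this: translation is read directly from the last column, scale from the lengths of the first three columns, and the rotation from the remaining normalized 3×3 block.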
The MathUtils namespace at src/math/MathUtils.js provides utility functions: clamp, lerp, smoothstep, mapLinear, degToRad, isPowerOfTwo, and the generateUUID function that uses a pre-computed hex lookup table at line 3 for fast UUID generation.
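The formulas behind a few of these helpers are worth internalizing. This sketch reimplements them in plain JavaScript as simplified stand-ins (matching the MathUtils signatures, but not the three.js source):

```javascript
// Simplified stand-ins for a few MathUtils helpers.
const clamp = ( value, min, max ) => Math.max( min, Math.min( max, value ) );
const lerp = ( x, y, t ) => ( 1 - t ) * x + t * y; // linear interpolation
const mapLinear = ( x, a1, a2, b1, b2 ) => b1 + ( x - a1 ) * ( b2 - b1 ) / ( a2 - a1 );
const degToRad = ( degrees ) => degrees * Math.PI / 180;

// Hermite smoothstep: 0 below min, 1 above max, smooth S-curve between.
function smoothstep( x, min, max ) {
	if ( x <= min ) return 0;
	if ( x >= max ) return 1;
	const t = ( x - min ) / ( max - min );
	return t * t * ( 3 - 2 * t );
}
```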
Tip: Three.js uses column-major matrix storage (matching WebGL/WebGPU conventions). `elements[12]`, `elements[13]`, and `elements[14]` are the translation components. If you're debugging transforms by reading the `elements` array, remember this layout.
Color and ColorManagement
ColorManagement is Three.js's system for ensuring color math happens in the correct color space. The central concept is the working color space — by default LinearSRGBColorSpace.
All color operations (lighting, blending, texture sampling) should happen in linear space for physically correct results. ColorManagement handles the conversion pipeline:
```mermaid
flowchart LR
    Input["sRGB Input<br/>(textures, CSS colors)"] -->|"EOTF (gamma decode)"| Linear["Linear Working Space<br/>(all math here)"]
    Linear -->|"Lighting, blending,<br/>tone mapping"| Linear
    Linear -->|"OETF (gamma encode)"| Output["sRGB Output<br/>(canvas display)"]
```
The convert() method uses CIE XYZ as an intermediate space for color space transformations, with pre-computed 3×3 matrices at the top of the file (lines 5-15) for the Rec.709 (sRGB) primaries:
```javascript
const LINEAR_REC709_TO_XYZ = new Matrix3().set(
	0.4123908, 0.3575843, 0.1804808,
	0.2126390, 0.7151687, 0.0721923,
	0.0193308, 0.1191948, 0.9505322
);
```
When ColorManagement.enabled is true (the default), colors provided as hex values (0xff0000) or CSS strings ('red') are automatically converted from sRGB to linear space before use. This means new Color(1, 0, 0) is linear red (full intensity), not sRGB red.
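The sRGB transfer functions at the heart of this pipeline are compact. This is a plain-JavaScript sketch of the standard EOTF/OETF pair from the sRGB specification (the same math the conversion applies, though not three.js's literal code):

```javascript
// sRGB EOTF: decode an sRGB-encoded channel (0..1) to linear light.
function sRGBToLinear( c ) {
	return ( c <= 0.04045 ) ? c / 12.92 : Math.pow( ( c + 0.055 ) / 1.055, 2.4 );
}

// sRGB OETF: encode a linear channel back to sRGB for display.
function linearToSRGB( c ) {
	return ( c <= 0.0031308 ) ? c * 12.92 : 1.055 * Math.pow( c, 1 / 2.4 ) - 0.055;
}

// Mid-gray sRGB 0.5 is only ~0.214 in linear light — which is why doing
// lighting math directly on sRGB values darkens and distorts results.
```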
Camera Hierarchy and Projection
The camera system follows a clean inheritance chain. Camera extends Object3D and adds three matrices:
- `matrixWorldInverse`: The view matrix, the inverse of the camera's world transform. `updateMatrixWorld()` is overridden (line 112) to compute this automatically.
- `projectionMatrix`: Set by subclasses for perspective or orthographic projection.
- `projectionMatrixInverse`: The inverse projection, useful for screen-to-world unprojection.
Camera also overrides getWorldDirection() to negate the result (line 108), because cameras look down their negative local Z-axis by convention.
```mermaid
classDiagram
    class Object3D {
        +matrixWorld: Matrix4
    }
    class Camera {
        +matrixWorldInverse: Matrix4
        +projectionMatrix: Matrix4
        +projectionMatrixInverse: Matrix4
        +coordinateSystem: number
    }
    class PerspectiveCamera {
        +fov: number
        +aspect: number
        +near: number
        +far: number
        +updateProjectionMatrix()
    }
    class OrthographicCamera {
        +left: number
        +right: number
        +top: number
        +bottom: number
        +updateProjectionMatrix()
    }
    Object3D <|-- Camera
    Camera <|-- PerspectiveCamera
    Camera <|-- OrthographicCamera
```
PerspectiveCamera takes fov (vertical field of view in degrees), aspect, near, and far. Its updateProjectionMatrix() method computes the perspective projection matrix using these parameters plus zoom, filmGauge, filmOffset, and an optional sub-frustum view for tiled rendering.
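Stripped of zoom, film parameters, and view offsets, the core of that computation is the classic symmetric perspective frustum. A plain-JavaScript sketch, assuming the WebGL clip-space convention (z in [-1, 1]; the `coordinateSystem` property lets three.js target other conventions):

```javascript
// Symmetric perspective projection (column-major), WebGL clip space.
// Ignores zoom, filmGauge/filmOffset, and .setViewOffset() for clarity.
function makePerspective( fovDeg, aspect, near, far ) {
	const f = 1 / Math.tan( fovDeg * Math.PI / 360 ); // cot(fov / 2)
	const m = new Array( 16 ).fill( 0 );
	m[ 0 ] = f / aspect;                           // x scale
	m[ 5 ] = f;                                    // y scale (fov is vertical)
	m[ 10 ] = - ( far + near ) / ( far - near );   // depth remap
	m[ 11 ] = - 1;                                 // perspective divide by -z
	m[ 14 ] = - 2 * far * near / ( far - near );
	return m;
}
```

Note how `near` and `far` appear only in the depth terms: changing the clip planes never alters the on-screen size of objects, only how depth is distributed.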
A subtle detail in Camera's updateMatrixWorld() at line 118: it excludes scale from the view matrix for glTF conformance. If the camera has non-uniform scale (unusual but possible), the scale is stripped when computing matrixWorldInverse.
Light Types and Hierarchy
Light extends Object3D and adds just two properties: color (Color) and intensity (number). It dispatches a 'dispose' event for resource cleanup. Concrete light types add their specialized properties:
| Light Type | Properties | Scene Graph Behavior |
|---|---|---|
| `AmbientLight` | `color`, `intensity` | No position needed |
| `DirectionalLight` | `color`, `intensity`, `target` | Position + target define direction |
| `PointLight` | `color`, `intensity`, `distance`, `decay` | Position defines origin |
| `SpotLight` | `color`, `intensity`, `distance`, `angle`, `penumbra`, `decay`, `target` | Position + target + cone |
| `HemisphereLight` | `color`, `groundColor`, `intensity` | Direction from orientation |
| `RectAreaLight` | `color`, `intensity`, `width`, `height` | Position + orientation define area |
| `LightProbe` | `sh` (`SphericalHarmonics3`) | Irradiance probe |
Lights that need a direction (DirectionalLight, SpotLight) use a target property — another Object3D that the light "points at." This is an elegant design: instead of storing a direction vector, the direction is derived from the light's world position and the target's world position, both of which participate in the normal scene graph transform system.
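Deriving the direction is then just vector math on two world positions. A minimal sketch, with plain `{x, y, z}` objects standing in for `Vector3`:

```javascript
// Direction a target-based light shines in: from the light's world
// position toward its target's world position, normalized.
function lightDirection( lightWorldPos, targetWorldPos ) {
	const dx = targetWorldPos.x - lightWorldPos.x;
	const dy = targetWorldPos.y - lightWorldPos.y;
	const dz = targetWorldPos.z - lightWorldPos.z;
	const len = Math.hypot( dx, dy, dz );
	return { x: dx / len, y: dy / len, z: dz / len };
}
```

Because both endpoints go through the ordinary scene graph update, parenting the target to a moving object makes the light track it for free.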
```mermaid
classDiagram
    class Object3D
    class Light {
        +color: Color
        +intensity: number
        +isLight: boolean
    }
    class AmbientLight {
        +isAmbientLight: boolean
    }
    class DirectionalLight {
        +target: Object3D
        +shadow: DirectionalLightShadow
    }
    class PointLight {
        +distance: number
        +decay: number
    }
    class SpotLight {
        +distance: number
        +angle: number
        +penumbra: number
        +decay: number
        +target: Object3D
    }
    Object3D <|-- Light
    Light <|-- AmbientLight
    Light <|-- DirectionalLight
    Light <|-- PointLight
    Light <|-- SpotLight
```
Lights as Nodes: AnalyticLightNode and LightsNode
In the new renderer, lights no longer contribute to shading through uniform arrays and `#ifdef` blocks as in the legacy GLSL pipeline. Instead, they become nodes in the shader graph. As we learned in Part 4, the StandardNodeLibrary maps each light type to a corresponding node class (e.g., PointLight → PointLightNode, SpotLight → SpotLightNode).
LightsNode is the aggregator. It extends Node with a 'vec3' output type and maintains totalDiffuseNode and totalSpecularNode properties that accumulate the contribution of all lights. During setup, it iterates through the scene's lights, creates or retrieves the corresponding AnalyticLightNode subclass for each one, and has each light node contribute its diffuse and specular terms.
```mermaid
graph TD
    LN["LightsNode"] --> DL["DirectionalLightNode"]
    LN --> PL["PointLightNode"]
    LN --> SL["SpotLightNode"]
    LN --> AL["AmbientLightNode"]
    DL --> Diff["totalDiffuseNode (vec3)"]
    PL --> Diff
    SL --> Diff
    AL --> Diff
    DL --> Spec["totalSpecularNode (vec3)"]
    PL --> Spec
    SL --> Spec
    Diff --> LC["LightingContextNode"]
    Spec --> LC
    LC --> Material["NodeMaterial output"]
```
Each AnalyticLightNode subclass (e.g., DirectionalLightNode, PointLightNode) implements the light's contribution calculation: direction computation, distance attenuation, cone angle falloff (for spots), and shadow evaluation. These are all TSL expressions, so they compile to WGSL or GLSL automatically.
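As one concrete ingredient, distance attenuation for a point light typically combines an inverse-power falloff with a smooth window that forces it to zero at the light's cutoff distance. The sketch below shows a common physically based formulation in plain JavaScript — in the spirit of what PointLightNode computes, but not its literal TSL code:

```javascript
// Windowed inverse falloff for a punctual light: 1/d^decay, smoothly
// driven to zero at cutoffDistance so the light has a finite range.
// A common physically based formulation; a sketch, not three.js's exact code.
function distanceAttenuation( d, cutoffDistance, decay ) {
	let attenuation = 1 / Math.max( Math.pow( d, decay ), 0.01 ); // clamp near d = 0

	if ( cutoffDistance > 0 ) {
		const window = Math.max( 1 - Math.pow( d / cutoffDistance, 4 ), 0 );
		attenuation *= window * window; // smooth fade to zero at the cutoff
	}

	return attenuation;
}
```

With `decay = 2` this is the physically correct inverse-square law; the window term only matters near the cutoff, where a raw `1/d²` would otherwise never quite reach zero.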
Tip: To add custom light attenuation or a non-standard falloff curve, you can subclass `AnalyticLightNode` and override the relevant methods. The node system composes the result into the lighting pipeline without touching any other light's code.
Shadow Mapping and Frustum Culling
Shadow mapping in the new renderer is handled through ShadowNode and ShadowBaseNode. The shadow map is generated by rendering the scene from the light's point of view, then sampled during the main lighting pass to determine which fragments are in shadow. Different filtering strategies (basic, PCF, PCF soft, VSM) are implemented as separate TSL functions.
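The core operation is a single depth comparison; PCF just averages it over neighboring texels to soften the shadow edge. A conceptual plain-JavaScript sketch, where the hypothetical `sampleDepth(x, y)` callback stands in for a shadow-map texture fetch:

```javascript
// Conceptual 3x3 PCF: compare the fragment's light-space depth against
// neighboring shadow-map texels and average the binary lit/shadowed results.
// sampleDepth(x, y) stands in for a shadow-map texture fetch.
function pcfShadow( sampleDepth, x, y, fragmentDepth, texelSize, bias = 0.005 ) {
	let lit = 0;

	for ( let dx = - 1; dx <= 1; dx ++ ) {
		for ( let dy = - 1; dy <= 1; dy ++ ) {
			const occluderDepth = sampleDepth( x + dx * texelSize, y + dy * texelSize );
			// Lit if nothing is closer to the light than this fragment
			// (bias avoids self-shadowing "shadow acne").
			if ( fragmentDepth - bias <= occluderDepth ) lit ++;
		}
	}

	return lit / 9; // 0 = fully shadowed, 1 = fully lit, fractional at edges
}
```

Basic filtering is this with a single sample; VSM instead stores depth moments and reconstructs a probabilistic visibility estimate, which allows the map itself to be pre-blurred.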
Frustum culling happens in the render loop before any draw commands are issued. The Frustum class in src/math/Frustum.js represents the camera's view volume as six Plane objects. During scene projection, each object's bounding sphere is tested against these planes:
```mermaid
flowchart TD
    Cam["Camera"] -->|"projection × view"| ProjMat["projScreenMatrix"]
    ProjMat -->|"setFromProjectionMatrix()"| Frust["Frustum (6 planes)"]
    Obj["Object3D"] --> BS["boundingSphere"]
    BS --> Test{"frustum.intersectsObject()"}
    Frust --> Test
    Test -->|"Inside"| Add["Add to RenderList"]
    Test -->|"Outside"| Skip["Skip object"]
```
Objects with frustumCulled = false (set on the Object3D, as we saw in Part 2) bypass this test entirely. This is useful for objects like skyboxes that should always render regardless of camera position.
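The sphere-versus-plane test at the heart of this culling step is compact. A plain-JavaScript sketch, storing each plane as `{ normal, constant }` so a point p lies on the plane when dot(normal, p) + constant = 0 (the same convention as three.js's `Plane`):

```javascript
// Bounding-sphere frustum test: the sphere is culled only if it lies
// entirely behind at least one of the six planes (signed distance < -radius).
function intersectsSphere( planes, center, radius ) {
	for ( const p of planes ) {
		const distance = p.normal.x * center.x + p.normal.y * center.y
			+ p.normal.z * center.z + p.constant;

		if ( distance < - radius ) return false; // fully outside this plane
	}

	return true; // inside or intersecting the frustum
}
```

Note the test is conservative: a sphere outside the frustum but not fully behind any single plane (e.g., near a corner) is kept, which costs a wasted draw but never wrongly culls a visible object.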
The interplay between these systems — math primitives computing transforms, cameras producing projection matrices, frustums culling objects, lights generating shader nodes, and shadow maps testing occlusion — forms the backbone of the rendering pipeline that produces every frame.
What's Next
In the final article, we'll explore the asset pipeline that feeds data into this rendering system: the loader architecture and caching system, the critical GLTFLoader addon, post-processing with the new RenderPipeline class, the controls system for user interaction, and how to navigate the testing infrastructure and contribute to the project.