Read OSS

The Pool System: How Vitest Distributes and Executes Tests Across Workers

Advanced

Prerequisites

  • Article 1: Architecture and Project Layout
  • Article 2: CLI Boot and Config Resolution
  • Node.js worker_threads and child_process APIs
  • Understanding of RPC (Remote Procedure Call) patterns

After Vitest resolves its configuration and initializes projects, the framework faces its core challenge: executing potentially thousands of test files in parallel, across isolated environments, while streaming results back to reporters in real time. The pool system is the engine that makes this happen.

Vitest's pool architecture is a three-layer design: Pool manages a task queue and schedules work across multiple PoolRunner instances, each wrapping a PoolWorker that owns the actual thread or process. This abstraction allows the same scheduling logic to drive worker_threads, child_process.fork(), VM-isolated variants, browser sessions, and TypeScript type-checking — all through a single interface.

Pool Types and Creation

Vitest supports six built-in pool types, defined in packages/vitest/src/node/pool.ts#L38-L45:

export const builtinPools: BuiltinPool[] = [
  'forks',
  'threads',
  'browser',
  'vmThreads',
  'vmForks',
  'typescript',
]

| Pool | Mechanism | Use Case |
| --- | --- | --- |
| threads | worker_threads | Default. Fast startup, shared memory. |
| forks | child_process.fork() | Better isolation. Required for some native modules. |
| vmThreads | worker_threads + node:vm | Per-file VM context isolation within threads. |
| vmForks | child_process.fork() + node:vm | VM isolation in child processes. |
| browser | Browser orchestration | Runs tests in actual browsers via Playwright/WebdriverIO. |
| typescript | Type checker | Runs tsc/vue-tsc for type-level tests. |

The createPool() function at line 54 creates a single Pool instance and configures it with resolved options. The executeTests() function groups TestSpecification objects by pool type, then dispatches each group to the appropriate pool implementation.

flowchart TD
    ET["executeTests(specs)"] --> Seq["sequencer.sort(specs)"]
    Seq --> Group["Group by pool type + project + environment"]
    Group --> G1["threads group"]
    Group --> G2["forks group"]
    Group --> G3["browser group"]
    G1 --> Pool["Pool.run(task)"]
    G2 --> Pool
    G3 --> BrowserPool["Browser Pool (separate)"]
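
The grouping step can be sketched in a few lines. The `Spec` shape and `groupSpecs` below are simplified stand-ins for illustration, not Vitest's actual `TestSpecification` type or grouping code:

```ts
// Simplified stand-in for a TestSpecification (illustrative only).
interface Spec {
  pool: string
  project: string
  file: string
}

// Specs that share a pool and project can be scheduled onto the same
// kind of worker, so they are bucketed together.
function groupSpecs(specs: Spec[]): Map<string, Spec[]> {
  const groups = new Map<string, Spec[]>()
  for (const spec of specs) {
    const key = `${spec.pool}:${spec.project}`
    const group = groups.get(key) ?? []
    group.push(spec)
    groups.set(key, group)
  }
  return groups
}

const groups = groupSpecs([
  { pool: 'threads', project: 'unit', file: 'a.test.ts' },
  { pool: 'forks', project: 'unit', file: 'b.test.ts' },
  { pool: 'threads', project: 'unit', file: 'c.test.ts' },
])
console.log([...groups.keys()]) // ['threads:unit', 'forks:unit']
```

The real grouping also keys on the resolved environment, as the flowchart shows; the idea is the same with one more field in the key.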

The Three-Layer Pool Architecture

The pool system is composed of three distinct layers, each with clear responsibilities:

classDiagram
    class Pool {
        -queue: QueuedTask[]
        -activeTasks: ActiveTask[]
        -maxWorkers: number
        +run(task, method): Promise
        +cancel(): Promise
        -schedule(): Promise
        -getPoolRunner(task): PoolRunner
    }

    class PoolRunner {
        +poolId: number
        +project: TestProject
        +environment: ContextTestEnvironment
        -_state: RunnerState
        -_rpc: BirpcReturn
        +start(): Promise
        +stop(options?): Promise
        +waitForTerminated(): Promise
    }

    class PoolWorker {
        <<interface>>
        +name: string
        +on(event, callback): void
        +off(event, callback): void
        +send(message: WorkerRequest): void
        +start(): Promise
        +stop(): Promise
        +canReuse?(task): boolean
        +deserialize(data): unknown
    }

    Pool --> "0..*" PoolRunner
    PoolRunner --> "1" PoolWorker

Pool (packages/vitest/src/node/pools/pool.ts#L31-L50) maintains a task queue and a set of active tasks. When run() is called, the task is enqueued and schedule() checks if a worker is available. If active tasks are below maxWorkers, it dequeues a task, finds or creates a PoolRunner, and dispatches the work. Memory limits are monitored — when a worker exceeds its limit, the runner is stopped and recreated.
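
That queue-and-dispatch loop can be illustrated with a toy pool. `MiniPool` is a simplified sketch: a plain async `execute` callback stands in for a real PoolRunner, and memory monitoring and worker reuse are omitted:

```ts
// Toy version of the Pool scheduling loop (illustrative, not Vitest's code).
class MiniPool<T> {
  private queue: Array<{ task: T, resolve: () => void }> = []
  private active = 0

  constructor(
    private maxWorkers: number,
    private execute: (task: T) => Promise<void>,
  ) {}

  // Enqueue a task; it runs whenever a worker slot frees up.
  run(task: T): Promise<void> {
    return new Promise<void>((resolve) => {
      this.queue.push({ task, resolve })
      this.schedule()
    })
  }

  private schedule(): void {
    // Dispatch queued tasks while there is capacity below maxWorkers.
    while (this.active < this.maxWorkers && this.queue.length > 0) {
      const { task, resolve } = this.queue.shift()!
      this.active++
      this.execute(task).finally(() => {
        this.active--
        resolve()
        this.schedule() // a slot freed up, pull the next queued task
      })
    }
  }
}
```

Vitest's real Pool additionally prefers an existing idle runner when the worker's canReuse() allows it, and tears runners down when memory limits are exceeded.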

PoolRunner (packages/vitest/src/node/pools/poolRunner.ts#L42-L77) manages the lifecycle of a single worker. It tracks state transitions (IDLE → STARTING → STARTED → STOPPING → STOPPED), owns the birpc channel, and handles start/stop timeouts. Runners can be reused for multiple test files if isolation is disabled and the worker's canReuse() returns true.

PoolWorker (packages/vitest/src/node/pools/types.ts#L22-L41) is the interface that actual thread/process implementations must satisfy. Its contract is deliberately simple: on/off for events, send for messages, start/stop for lifecycle, and optional canReuse for worker reuse decisions.

Worker Implementations

The ThreadsPoolWorker in packages/vitest/src/node/pools/workers/threadsWorker.ts is a clean example of the PoolWorker interface:

export class ThreadsPoolWorker implements PoolWorker {
  public readonly name: string = 'threads'
  
  constructor(options: PoolOptions) {
    this.entrypoint = resolve(options.distPath, 'workers/threads.js')
  }

  async start(): Promise<void> {
    this._thread ||= new Worker(this.entrypoint, {
      env: this.env,
      execArgv: this.execArgv,
      stdout: true,
      stderr: true,
    })
    this._thread.stdout.pipe(this.stdout)
    this._thread.stderr.pipe(this.stderr)
  }

  send(message: WorkerRequest): void {
    this.thread.postMessage(message)
  }

  async stop(): Promise<void> {
    await this.thread.terminate()
  }
}

The entrypoint resolves to the compiled workers/threads.js from the rollup build — the bridge between Node-side orchestration and worker-side test execution.

Tip: When debugging worker issues, check the execArgv array — this is where --inspect, --max-old-space-size, and other Node flags are injected. The env object controls what environment variables workers see.
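
As an illustration, recent Vitest versions let you set these per pool in the config via poolOptions (check the docs for your version; the flags below are examples only):

```ts
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    pool: 'threads',
    poolOptions: {
      threads: {
        // Example Node flags forwarded into each worker's execArgv
        execArgv: ['--inspect-brk', '--max-old-space-size=4096'],
        // Debugging is easier with a single worker
        singleThread: true,
      },
    },
  },
})
```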

The birpc Communication Bridge

Communication between Node and workers flows through birpc, which creates bidirectional RPC channels. The Node side defines its RPC methods in packages/vitest/src/node/pools/rpc.ts#L17-L29:

sequenceDiagram
    participant Worker as Worker (Runtime)
    participant RPC as birpc Channel
    participant Node as Node (Orchestration)

    Worker->>RPC: rpc.fetch(url, importer, env)
    RPC->>Node: createMethodsRPC().fetch()
    Node->>Node: project._fetcher(url, ...)
    Node-->>RPC: TransformResult
    RPC-->>Worker: Module source code

    Worker->>RPC: rpc.resolve(id, importer, env)
    RPC->>Node: pluginContainer.resolveId()
    Node-->>Worker: Resolved path

    Worker->>RPC: rpc.onTaskUpdate(packs, events)
    RPC->>Node: StateManager + Reporters
    
    Worker->>RPC: rpc.snapshotSaved(snapshot)
    RPC->>Node: SnapshotManager

The createMethodsRPC() function exposes key operations:

  • fetch() — Fetches and transforms modules through Vite's pipeline. This is how test files and their dependencies get transpiled.
  • resolve() — Resolves module specifiers through Vite's plugin container.
  • onTaskUpdate() — Reports test progress and results back to StateManager.
  • Snapshot operations — snapshotSaved, resolveSnapshotPath, etc.

The worker side accesses these through rpc() calls in packages/vitest/src/runtime/rpc.ts. Every module import inside a worker ultimately calls back to the Node process via fetch(), where Vite's full plugin pipeline transforms the source.
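
The request/response pattern can be sketched with a hand-rolled in-memory channel. Everything below (createChannelPair, attachRpc, the message shapes) is a simplified illustration of the birpc idea, not birpc's actual API or Vitest's code:

```ts
import { EventEmitter } from 'node:events'

type Endpoint = {
  post: (msg: any) => void
  on: (fn: (msg: any) => void) => void
}

// Two linked in-memory endpoints standing in for a real worker channel.
function createChannelPair(): [Endpoint, Endpoint] {
  const a = new EventEmitter()
  const b = new EventEmitter()
  const endpoint = (self: EventEmitter, other: EventEmitter): Endpoint => ({
    post: msg => queueMicrotask(() => other.emit('message', msg)),
    on: fn => void self.on('message', fn),
  })
  return [endpoint(a, b), endpoint(b, a)]
}

function attachRpc(
  endpoint: Endpoint,
  handlers: Record<string, (...args: any[]) => unknown>,
) {
  let nextId = 0
  const pending = new Map<number, (result: unknown) => void>()
  endpoint.on(async (msg) => {
    if (msg.kind === 'request') {
      // Incoming request: dispatch to a local handler, send the result back.
      const result = await handlers[msg.method](...msg.args)
      endpoint.post({ kind: 'response', id: msg.id, result })
    }
    else if (msg.kind === 'response') {
      pending.get(msg.id)?.(msg.result)
      pending.delete(msg.id)
    }
  })
  return {
    call: (method: string, ...args: unknown[]) =>
      new Promise<unknown>((resolve) => {
        const id = nextId++
        pending.set(id, resolve)
        endpoint.post({ kind: 'request', id, method, args })
      }),
  }
}

const [nodePort, workerPort] = createChannelPair()
// The Node side answers fetch(); the worker side reports task updates.
const nodeRpc = attachRpc(nodePort, {
  fetch: (url: string) => `/* transformed */ export default ${JSON.stringify(url)}`,
})
const workerRpc = attachRpc(workerPort, {
  onTaskUpdate: (packs: unknown[]) => packs.length,
})

workerRpc.call('fetch', './sum.test.ts').then(code => console.log(code))
```

In the real system the channel is the worker's message port, and birpc layers timeouts, serialization hooks, and typed method maps on top of this pattern.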

Worker Boot Sequence

The worker entry for threads is astonishingly minimal — just 4 lines in packages/vitest/src/runtime/workers/threads.ts:

import { runBaseTests, setupBaseEnvironment } from './base'
import workerInit from './init-threads'

workerInit({ runTests: runBaseTests, setup: setupBaseEnvironment })

The workerInit function from init-threads sets up the message handling loop: it listens for WorkerRequest messages (start, run, collect, stop, cancel) and dispatches them appropriately. On a start message, it calls setupBaseEnvironment.
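
A minimal sketch of such a dispatch loop, with a simplified WorkerRequest shape standing in for Vitest's real worker protocol:

```ts
// Simplified stand-in for Vitest's worker message types (illustrative only).
type WorkerRequest =
  | { type: 'start', config: object }
  | { type: 'run', files: string[] }
  | { type: 'collect', files: string[] }
  | { type: 'cancel' }
  | { type: 'stop' }

// Route each incoming message to the handler registered for its type.
function createDispatcher(handlers: {
  [K in WorkerRequest['type']]: (msg: Extract<WorkerRequest, { type: K }>) => void
}) {
  return (msg: WorkerRequest) => handlers[msg.type](msg as never)
}

const log: string[] = []
const dispatch = createDispatcher({
  start: () => log.push('environment ready'),
  run: msg => log.push(`running ${msg.files.length} file(s)`),
  collect: msg => log.push(`collecting ${msg.files.length} file(s)`),
  cancel: () => log.push('cancelled'),
  stop: () => log.push('stopped'),
})

dispatch({ type: 'start', config: {} })
dispatch({ type: 'run', files: ['a.test.ts'] })
```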

The setupBaseEnvironment in packages/vitest/src/runtime/workers/base.ts#L72 handles the heavy lifting:

sequenceDiagram
    participant PM as Pool Manager
    participant W as Worker Thread
    participant Base as base.ts
    participant MR as Module Runner
    participant Env as Environment

    PM->>W: { type: 'start', config, environment }
    W->>Base: setupBaseEnvironment(context)
    Base->>MR: startModuleRunner(options)
    MR->>MR: Override process.exit
    MR->>MR: Set up error listeners
    MR->>MR: Create VitestModuleRunner or NativeModuleRunner
    Base->>Env: loadEnvironment(name)
    Env->>Env: Setup (jsdom/happy-dom/node/edge-runtime)
    Base-->>W: Return teardown function
    W-->>PM: { type: 'started' }

The module runner is created once per worker and reused across files. If the experimental viteModuleRunner flag is false, a NativeModuleRunner using Node's native module loading is used instead of Vite's ModuleRunner.

Test Execution in Workers: runBaseTests

When the pool sends a { type: 'run' } message, the worker invokes run() from packages/vitest/src/runtime/runBaseTests.ts#L21-L92:

flowchart TD
    Run["run(method, files, config, moduleRunner, environment)"] --> Setup["Parallel setup:<br>1. Resolve test runner<br>2. Setup global env<br>3. Start coverage<br>4. Resolve snapshot env"]
    Setup --> Loop["For each file"]
    Loop --> Isolated{config.isolate?}
    Isolated -- "yes" --> Reset["Reset mocker + modules"]
    Isolated -- "no" --> Skip["Skip reset"]
    Reset & Skip --> SetPath["workerState.filepath = file"]
    SetPath --> Method{method?}
    Method -- "run" --> StartTests["startTests([file], testRunner)"]
    Method -- "collect" --> CollectTests["collectTests([file], testRunner)"]
    StartTests & CollectTests --> PostFile["vi.resetConfig()<br>vi.restoreAllMocks()"]
    PostFile --> Loop
    Loop -- "all files done" --> Coverage["stopCoverageInsideWorker()"]

The four setup operations run in parallel via Promise.all — this overlaps test runner resolution, global environment setup, coverage initialization, and snapshot environment resolution. Then the per-file loop runs sequentially: if isolation is enabled, modules are reset between files; the test runner is invoked; mocks are restored.
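
The same shape in miniature, with hypothetical stand-ins for the four setup operations and the runner:

```ts
// Illustrative only: setup() and the event strings stand in for the real
// runner resolution, environment setup, coverage, and snapshot steps.
async function runFiles(files: string[], isolate: boolean): Promise<string[]> {
  const events: string[] = []
  const setup = async (name: string) => {
    await new Promise<void>(r => setTimeout(r, 5))
    events.push(`setup:${name}`)
  }
  // The four setup operations overlap instead of running back to back.
  await Promise.all([
    setup('runner'),
    setup('globalEnv'),
    setup('coverage'),
    setup('snapshotEnv'),
  ])
  // The per-file loop is strictly sequential.
  for (const file of files) {
    if (isolate)
      events.push(`reset:${file}`) // reset mocker + module registry
    events.push(`run:${file}`)
  }
  return events
}

runFiles(['a.test.ts', 'b.test.ts'], true).then(events => console.log(events))
```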

The startTests() and collectTests() functions come from @vitest/runner — the framework-agnostic test runner we'll explore in detail in the next article.

Tip: If tests are slow to start, the parallel setup phase is the first place to look. Coverage initialization and environment setup can be expensive — check the traces (experimental.openTelemetry) to identify the bottleneck.

Result Collection: StateManager and TestRun

Test results flow from workers through the RPC bridge to StateManager and TestRun on the Node side.

StateManager (packages/vitest/src/node/state.ts#L19-L26) maintains the complete test state:

export class StateManager {
  filesMap: Map<string, File[]> = new Map()
  pathsSet: Set<string> = new Set()
  idMap: Map<string, Task> = new Map()
  taskFileMap: WeakMap<Task, File> = new WeakMap()
  errorsSet: Set<unknown> = new Set()
  leakSet: Set<AsyncLeak> = new Set()
  reportedTasksMap: WeakMap<Task, TestModule | TestCase | TestSuite> = new WeakMap()
}

TestRun (packages/vitest/src/node/test-run.ts#L27-L56) orchestrates the reporter event lifecycle:

sequenceDiagram
    participant TR as TestRun
    participant SM as StateManager
    participant R as Reporters

    TR->>SM: collectPaths(filepaths)
    TR->>R: onTestRunStart(specifications)

    loop For each file
        TR->>SM: collectFiles(project, [file])
        TR->>R: onTestModuleQueued(testModule)
        
        Note over TR: Tests collected
        TR->>SM: collectFiles(project, files)
        TR->>R: onTestModuleCollected(testModule)
        
        Note over TR: Tests executing...
        TR->>R: onTestCaseReady(testCase)
        TR->>R: onTestCaseResult(testCase)
    end

    TR->>R: onTestRunEnd(modules, errors, reason)

The TestRun.start() method collects file paths into StateManager and fires onTestRunStart. As workers report back via RPC, enqueued() and collected() update the state and dispatch to reporters. Individual test results flow through task update events. When all workers finish, TestRun fires onTestRunEnd with the final reason: 'passed', 'failed', or 'interrupted'.

What's Next

We've now traced the path from the pool scheduler through worker boot to result collection. In the next article, we'll descend into @vitest/runner — the framework-agnostic engine that powers the actual test DSL. We'll see how describe/test/it build a chainable API, how files are collected into a task tree, how hooks execute in the right order, and how Playwright-style fixtures provide dependency injection.