The DevEnv Controller Pattern: How `wrangler dev` Orchestrates Local Development

Advanced

Prerequisites

  • Articles 1 and 2 in this series
  • Node.js EventEmitter pattern
  • Observer / publish-subscribe pattern
  • Basic familiarity with esbuild and Miniflare

Running wrangler dev looks deceptively simple from the outside: start a local server, reload on file changes, show errors in the browser. Under the hood, it's the most architecturally sophisticated subsystem in the entire SDK — an event-driven controller bus that coordinates five independent controllers, each managing a different phase of the development lifecycle.

This design wasn't inevitable. It emerged from the need to support local and remote development modes with the same codebase, handle multi-worker configurations, support hot module replacement, manage esbuild's incremental builds, and keep the proxy layer independent from the runtime layer. The result is a miniature actor system built on Node.js EventEmitter.

DevEnv: EventEmitter as a Controller Bus

The DevEnv class at packages/wrangler/src/api/startDevWorker/DevEnv.ts#L22-L70 extends EventEmitter and implements ControllerBus. It holds four controller slots:

export class DevEnv extends EventEmitter implements ControllerBus {
    config: ConfigController;
    bundler: BundlerController;
    runtimes: RuntimeController[];
    proxy: ProxyController;
    // ... constructor and methods elided
}

Notice that runtimes is an array. This is deliberate — it supports multi-worker local development where each worker gets its own runtime controller. The default factory creates two entries: a LocalRuntimeController and a RemoteRuntimeController. Events are broadcast to all runtime controllers, but typically only one is active at a time depending on whether you're running in local or remote mode.

classDiagram
    class DevEnv {
        +config: ConfigController
        +bundler: BundlerController
        +runtimes: RuntimeController[]
        +proxy: ProxyController
        +dispatch(event): void
        +startWorker(options): Worker
        +teardown(): Promise
    }
    class ControllerBus {
        <<interface>>
        +dispatch(event): void
    }
    DevEnv --|> EventEmitter
    DevEnv ..|> ControllerBus
    DevEnv --> ConfigController
    DevEnv --> BundlerController
    DevEnv --> RuntimeController
    DevEnv --> ProxyController
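The broadcast-to-all-runtimes behavior can be sketched with simplified, hypothetical types (the real event and controller shapes are much richer):

```typescript
// Hypothetical stand-ins for wrangler's event and controller types.
type BundleCompleteEvent = { type: "bundleComplete"; bundle: string };

interface RuntimeLike {
  onBundleComplete(ev: BundleCompleteEvent): void;
}

class Bus {
  constructor(private runtimes: RuntimeLike[]) {}

  dispatch(ev: BundleCompleteEvent): void {
    // Every runtime controller sees the event; whichever one is
    // inactive for the current mode simply ignores it.
    for (const runtime of this.runtimes) {
      runtime.onBundleComplete(ev);
    }
  }
}
```

The point is that the bus never decides which runtime is "active" — each controller makes that call itself.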

Factory Injection: Testability and Controller Swapping

The DevEnv constructor doesn't instantiate controllers directly. Instead, it accepts factory functions at packages/wrangler/src/api/startDevWorker/DevEnv.ts#L45-L58:

constructor({
    configFactory = (devEnv) => new ConfigController(devEnv),
    bundlerFactory = (devEnv) => new BundlerController(devEnv),
    runtimeFactories = [
        (devEnv) => new LocalRuntimeController(devEnv),
        (devEnv) => new RemoteRuntimeController(devEnv),
    ],
    proxyFactory = (devEnv) => new ProxyController(devEnv),
}: { /* ... typed factory signatures ... */ } = {}) {

This is dependency injection at the constructor level. Unit tests can inject stub controllers that don't spawn real processes, the MultiworkerRuntimeController (used for multi-worker mode) replaces the default runtime factories, and the NoOpProxyController can be injected when the proxy layer isn't needed.

Each factory receives the DevEnv instance itself, giving controllers access to the bus for dispatching events. The circular reference is intentional — controllers need to talk back to the bus.

Tip: If you're writing tests for wrangler commands that involve DevEnv, inject mock controller factories rather than mocking the entire DevEnv. The factory pattern was designed specifically for this.
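A minimal sketch of that testing approach, using hypothetical stand-in types rather than wrangler's real controller classes:

```typescript
// Hypothetical miniature of the DevEnv factory pattern: the constructor
// takes a factory, so a test can pass a stub that records state instead
// of spawning a real esbuild process.
type Factory<T> = (env: MiniDevEnv) => T;

interface StubController {
  started: boolean;
}

class MiniDevEnv {
  bundler: StubController;
  constructor({ bundlerFactory }: { bundlerFactory: Factory<StubController> }) {
    // The factory receives the env itself, mirroring the circular
    // reference that gives real controllers access to the bus.
    this.bundler = bundlerFactory(this);
  }
}

const env = new MiniDevEnv({
  bundlerFactory: () => ({ started: true }), // stub: no real esbuild
});
```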

The dispatch() Event Routing Table

The heart of DevEnv is its dispatch() method at packages/wrangler/src/api/startDevWorker/DevEnv.ts#L86-L135. This is a straightforward switch statement that routes events to the appropriate controllers:

sequenceDiagram
    participant CC as ConfigController
    participant BC as BundlerController
    participant RC as RuntimeController(s)
    participant PC as ProxyController
    participant DE as DevEnv (dispatch)

    CC->>DE: configUpdate
    DE->>BC: onConfigUpdate
    DE->>PC: onConfigUpdate

    BC->>DE: bundleStart
    DE->>PC: onBundleStart
    DE->>RC: onBundleStart

    BC->>DE: bundleComplete
    DE->>RC: onBundleComplete

    RC->>DE: reloadStart
    DE->>PC: onReloadStart

    RC->>DE: reloadComplete
    DE->>PC: onReloadComplete

    RC->>DE: devRegistryUpdate
    DE->>CC: onDevRegistryUpdate

    PC->>DE: previewTokenExpired
    DE->>RC: onPreviewTokenExpired

The routing is explicit — there's no dynamic event subscription or observer pattern complexity. You can read dispatch() and immediately know which controllers are affected by which events. The default case uses TypeScript's exhaustiveness checking (const _exhaustive: never = event) to ensure the compiler catches any unhandled event types.

The event types themselves are defined as a discriminated union at packages/wrangler/src/api/startDevWorker/events.ts. Each event carries the current config and, where applicable, the current bundle — ensuring controllers always have up-to-date state without maintaining their own caches of stale data.
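The routing style described above can be sketched with a reduced event union (the real events carry config and bundle payloads, and there are more variants):

```typescript
// Simplified, hypothetical version of the dispatch() routing table.
type Event =
  | { type: "configUpdate"; config: object }
  | { type: "bundleComplete"; bundle: object };

function route(event: Event): string[] {
  switch (event.type) {
    case "configUpdate":
      return ["bundler", "proxy"];
    case "bundleComplete":
      return ["runtimes"];
    default: {
      // Exhaustiveness check: if a new Event variant is added without a
      // case above, `event` is no longer narrowed to `never` here and
      // this assignment fails to compile.
      const _exhaustive: never = event;
      return _exhaustive;
    }
  }
}
```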

BaseController and RuntimeController Abstract Classes

All controllers inherit from the Controller base class at packages/wrangler/src/api/startDevWorker/BaseController.ts#L27-L49:

export abstract class Controller {
    protected bus: ControllerBus;
    #tearingDown = false;

    protected emitErrorEvent(event: ErrorEvent) {
        if (this.#tearingDown) {
            logger.debug("Suppressing error event during teardown");
            return;
        }
        this.bus.dispatch(event);
    }
}

The #tearingDown flag suppresses error events during shutdown. Without this, race conditions during dispose() would bubble errors to the user for processes that are intentionally being killed.

The RuntimeController abstract subclass at packages/wrangler/src/api/startDevWorker/BaseController.ts#L51-L75 defines the contract for runtime implementations:

export abstract class RuntimeController extends Controller {
    abstract onBundleStart(_: BundleStartEvent): void;
    abstract onBundleComplete(_: BundleCompleteEvent): void;
    abstract onPreviewTokenExpired(_: PreviewTokenExpiredEvent): void;

    protected emitReloadStartEvent(data: ReloadStartEvent): void { /*...*/ }
    protected emitReloadCompleteEvent(data: ReloadCompleteEvent): void { /*...*/ }
    protected emitDevRegistryUpdateEvent(data: DevRegistryUpdateEvent): void { /*...*/ }
}

This separation between abstract event handlers (what the runtime must react to) and protected event emitters (what the runtime can trigger) creates a clean bidirectional contract.
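That contract can be illustrated with a stripped-down, hypothetical version of the class (real event types and the bus interface are richer):

```typescript
// Hypothetical minimal runtime implementing the bidirectional contract:
// an abstract handler it must react to, a protected emitter it may trigger.
type BundleCompleteEvent = { type: "bundleComplete"; bundle: string };
type ReloadCompleteEvent = { type: "reloadComplete"; url: string };

interface Bus {
  dispatch(ev: ReloadCompleteEvent): void;
}

abstract class RuntimeBase {
  constructor(protected bus: Bus) {}
  abstract onBundleComplete(ev: BundleCompleteEvent): void;
  protected emitReloadComplete(ev: ReloadCompleteEvent): void {
    this.bus.dispatch(ev);
  }
}

class RecordingRuntime extends RuntimeBase {
  onBundleComplete(_ev: BundleCompleteEvent): void {
    // React to a finished bundle, then announce the reload to the bus.
    this.emitReloadComplete({ type: "reloadComplete", url: "http://127.0.0.1:0" });
  }
}
```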

The Five Controllers in Detail

Each controller manages one stage of the development pipeline:

ConfigController (line 495 of ConfigController.ts) reads the Wrangler configuration, resolves entry points, watches files via chokidar for config changes, resolves available ports, and emits configUpdate events. It also handles incoming devRegistryUpdate events from the runtime, closing a feedback loop where the runtime can inform config about other workers discovered during multi-worker development.

BundlerController (line 29 of BundlerController.ts) manages esbuild. On configUpdate, it sets up or reconfigures esbuild's incremental build. It handles abort signals for in-flight builds — if a config change arrives while esbuild is still bundling the previous change, the old build is cancelled. It emits bundleStart (so the proxy can show a loading state) and bundleComplete (so runtimes can reload).

LocalRuntimeController (line 152 of LocalRuntimeController.ts) manages a Miniflare instance. On bundleComplete, it converts the config and bundle into Miniflare options and calls setOptions() to reload the worker. It uses a Mutex to serialize updates — critical because buildMiniflareOptions() is async, and without the mutex, rapid updates could apply out of order.

RemoteRuntimeController manages preview sessions on Cloudflare's edge. It uploads your Worker code to a remote preview endpoint and emits reloadComplete with the remote preview URL as proxy data.

ProxyController (line 46 of ProxyController.ts) runs its own independent Miniflare instance as the user-facing HTTP proxy. This is the most surprising architectural decision and deserves its own section.

flowchart LR
    CONFIG["ConfigController<br/>watches config files"] -->|configUpdate| BUS["DevEnv Bus"]
    BUS -->|configUpdate| BUNDLER["BundlerController<br/>manages esbuild"]
    BUNDLER -->|bundleComplete| BUS
    BUS -->|bundleComplete| RUNTIME["RuntimeController<br/>manages Miniflare"]
    RUNTIME -->|reloadComplete| BUS
    BUS -->|reloadComplete| PROXY["ProxyController<br/>HTTP proxy"]
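The mutex requirement in LocalRuntimeController can be demonstrated with a minimal promise-chain mutex (a hypothetical stand-in for the real implementation; `applyUpdate` simulates the async options build followed by `setOptions()`):

```typescript
// Minimal promise-chain mutex: each call waits for the previous one.
class Mutex {
  private last: Promise<unknown> = Promise.resolve();
  runWith<T>(fn: () => Promise<T>): Promise<T> {
    const result = this.last.then(fn, fn);
    this.last = result.catch(() => {});
    return result;
  }
}

const mutex = new Mutex();
const applied: number[] = [];

async function applyUpdate(id: number, delayMs: number): Promise<void> {
  await mutex.runWith(async () => {
    // Simulate the async buildMiniflareOptions() step.
    await new Promise((r) => setTimeout(r, delayMs));
    applied.push(id); // stand-in for setOptions()
  });
}
```

Without the mutex, a slow older build (update 1) could finish after a fast newer one (update 2) and clobber it; with it, updates always apply in dispatch order.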

Two-Tier Proxy Architecture

Here's the key insight that's not obvious from a casual read: the ProxyController runs its own Miniflare instance at packages/wrangler/src/api/startDevWorker/ProxyController.ts#L60-L85, completely separate from the one in LocalRuntimeController.

Why? Because the proxy itself is a Cloudflare Worker: a Worker that proxies requests to your Worker. This gives the proxy access to the full Workers API for:

  • Live reload injection — the proxy can intercept HTML responses and inject a live-reload script
  • Request inspection — inspecting and modifying requests before they reach your Worker
  • Inspector proxying — the proxy hosts a WebSocket endpoint that bridges Chrome DevTools to your Worker's V8 inspector
  • Error rendering — pretty error pages are rendered by the proxy Worker, not your Worker

flowchart LR
    BROWSER["Browser"] -->|"HTTP request"| PROXY_MF["ProxyController's Miniflare<br/>(Proxy Worker)"]
    PROXY_MF -->|"forwarded request"| RUNTIME_MF["LocalRuntimeController's Miniflare<br/>(Your Worker)"]
    RUNTIME_MF -->|"response"| PROXY_MF
    PROXY_MF -->|"modified response<br/>(+ live reload)"| BROWSER
    DEVTOOLS["Chrome DevTools"] -->|"WebSocket"| PROXY_MF
    PROXY_MF -->|"WebSocket"| RUNTIME_MF

This two-tier architecture means the user-facing port (default 8787) is served by the proxy Miniflare, and the user's Worker runs on an internal port that's only accessible through the proxy. When your Worker reloads, the ProxyController receives a reloadComplete event with the new internal URL and seamlessly updates the proxy's routing.
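Live reload injection, one of the proxy Worker's jobs, boils down to rewriting HTML bodies on the way back to the browser. A pure-string sketch of the idea (the snippet contents and `/__reload` path are made up; the real proxy operates on streaming responses inside the Workers runtime):

```typescript
// Hypothetical live-reload snippet; the path and port are illustrative.
const LIVE_RELOAD_SNIPPET =
  `<script>new WebSocket("ws://127.0.0.1:8787/__reload").onmessage = () => location.reload();</script>`;

function injectLiveReload(html: string): string {
  // Only bodies with a closing </body> tag get the script injected;
  // anything else (JSON, plain text) passes through untouched.
  const idx = html.lastIndexOf("</body>");
  if (idx === -1) return html;
  return html.slice(0, idx) + LIVE_RELOAD_SNIPPET + html.slice(idx);
}
```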

startDev() and Multi-Worker Support

The entry point for wrangler dev is startDev() in packages/wrangler/src/dev/start-dev.ts#L31. When a single-worker configuration is provided, it creates one DevEnv instance. When the config specifies multiple workers (via an array config), multiple DevEnv instances are created and coordinated through MultiworkerRuntimeController from packages/wrangler/src/api/startDevWorker/MultiworkerRuntimeController.ts.

In multi-worker mode, the primary worker gets a full DevEnv with all controllers, while secondary workers may use NoOpProxyController since the primary proxy handles all incoming traffic. The MultiworkerRuntimeController wires the dev registry updates so workers can discover each other's service bindings during local development.

Tip: The devEnv variable in startDev() is typed as DevEnv | DevEnv[] | undefined — a union that reveals the three states: single worker, multi-worker, or not yet initialized. If you're debugging multi-worker issues, check whether you're looking at the primary or secondary DevEnv.

What's Next

The LocalRuntimeController in this article delegates all the heavy lifting to Miniflare — but Miniflare itself is a substantial system with 28 plugins, a workerd child process manager, and its own configuration serialization pipeline. Article 4 takes us inside Miniflare's architecture.