Handling a Request: IoContext, WorkerEntrypoint, and Cross-Thread Ownership

Advanced

Prerequisites

  • Article 1: Architecture and Directory Map
  • Article 2: CLI Boot and Server Lifecycle
  • Understanding of V8 isolates and garbage collection basics
  • Familiarity with KJ promises and event loops
  • Knowledge of move semantics and RAII in C++


In Part 2, we left off with the HttpListener accept loop ready to receive connections. Now we follow the signal all the way down: from a TCP connection arriving to JavaScript's fetch() handler executing and a response streaming back. This journey passes through some of workerd's most carefully designed abstractions — the IoContext that bridges V8's garbage-collected world with KJ's thread-bound I/O, the WorkerEntrypoint that implements fault tolerance features like passThroughOnException(), and the IoOwn/IoPtr smart pointer system that ensures safe cross-thread object destruction.

Worker, Script, and Isolate: The Execution Hierarchy

Before any request can be handled, there must be compiled JavaScript to execute. workerd organizes this into three layers:

src/workerd/io/worker.h#L86-L200

classDiagram
    class Worker_Isolate {
        +v8::Isolate* isolate
        +Api& api
        +CompatibilityFlags flags
        +lock() AsyncLock
    }
    class Worker_Script {
        +Isolate& isolate
        +compiled modules
        +global bindings
    }
    class Worker {
        +Script& script
        +bindings
        +Lock lock()
        +AsyncLock takeAsyncLock()
    }
    class Worker_Actor {
        +Worker& worker
        +IoContext& ioContext
        +InputGate inputGate
        +OutputGate outputGate
        +ActorCacheInterface storage
    }

    Worker_Isolate "1" --> "*" Worker_Script : contains
    Worker_Script "1" --> "*" Worker : instantiates
    Worker "1" --> "0..*" Worker_Actor : hosts

Worker::Isolate wraps a V8 isolate. Isolates can be shared across workers that have identical configuration (same compatibility flags, same API surface). This is an important optimization — creating a V8 isolate is expensive, and sharing one amortizes that cost.

Worker::Script holds the compiled JavaScript modules within an Isolate. It's the result of parsing and compiling the Worker's source code.

Worker is the live instance. It holds the script reference plus the runtime bindings (env object). The key constraint: V8 isolates are single-threaded. Access is serialized through Worker::Lock (synchronous) and Worker::AsyncLock (async queue-based). AsyncLock ensures that when multiple threads want to run code in the same isolate, they queue up fairly rather than racing.

Worker::Actor is for Durable Objects — a stateful, single-threaded entity with its own IoContext, input/output gates, and persistent storage. We'll cover actors in detail in Part 5.

WorkerEntrypoint: The Request Handler

When a request arrives at a WorkerService, the service creates a WorkerEntrypoint — the class that orchestrates the entire request-handling flow:

src/workerd/io/worker-entrypoint.c++#L35-L64

WorkerEntrypoint implements WorkerInterface, the abstract protocol for request handling. WorkerInterface is central to workerd — it defines methods for every event type a Worker can receive:

src/workerd/io/worker-interface.h#L24-L55

The construct() static method is the real entry point. It creates the WorkerEntrypoint, initializes the IoContext, wraps the result in a metrics observer, and returns a WorkerInterface:

src/workerd/io/worker-entrypoint.c++#L171-L197

The init() method handles a subtle actor optimization: if the request is for an actor that already has an IoContext, it reuses the existing one instead of creating a new one. For stateless requests, a fresh IoContext is always created.

sequenceDiagram
    participant HS as HttpListener
    participant WS as WorkerService
    participant WE as WorkerEntrypoint
    participant IoC as IoContext
    participant JS as JavaScript Handler

    HS->>WS: startRequest(metadata)
    WS->>WE: construct(worker, entrypoint, actor, ...)
    WE->>IoC: Create or reuse IoContext
    WE->>IoC: Create IncomingRequest
    WS-->>HS: WorkerInterface

    HS->>WE: request(method, url, headers, body, response)
    WE->>IoC: delivered()
    WE->>JS: Execute fetch handler
    JS-->>WE: Response (or exception)

    alt Exception thrown
        WE->>WE: Convert to HTTP 500
        WE->>WE: Check passThroughOnException
    end

    WE->>WE: drain() waitUntil tasks

IoContext: Bridging V8 GC and KJ I/O

The IoContext is perhaps the most important class in workerd. Its job is to solve a fundamental impedance mismatch:

  • V8 isolates can move between threads, bringing all JavaScript heap objects with them.
  • KJ I/O objects are pinned to a single thread — sockets, timers, promises are all bound to the event loop that created them.

src/workerd/io/io-context.h#L169-L250

The IoContext owns all I/O resources associated with a request (or an actor). When the IoContext is destroyed, all outstanding I/O is canceled immediately — even if JavaScript objects on the heap still hold references. Any attempt to access an I/O object from the wrong context throws an exception.

This has a deliberate user-visible consequence: if a Worker saves a request object from one invocation into global state and tries to use it during a different invocation, it will throw. The designers consider this a feature, not a bug — it prevents resource leaks and cross-request interference.

flowchart TB
    subgraph "V8 Heap (can move between threads)"
        JSObj["JavaScript Objects"]
        Refs["IoOwn/IoPtr references"]
    end

    subgraph "IoContext (pinned to one thread)"
        OwnedList["OwnedObjectList"]
        Timers["TimerChannel"]
        Channels["IoChannelFactory"]
        DeleteQ["DeleteQueue"]
    end

    Refs --> |"safe access"| OwnedList
    JSObj --> |"via IoOwn"| OwnedList
    DeleteQ --> |"cross-thread deletion"| OwnedList

The context is accessed via IoContext::current() — a thread-local that is set whenever JavaScript is executing. This is how API implementations like fetch() find their way to the I/O layer without passing the context through every function call.

IncomingRequest: Per-Request State Tracking

For non-actor workers, there's exactly one IncomingRequest per IoContext. For actors, multiple IncomingRequests share a single IoContext. The IncomingRequest class tracks per-request metrics, tracing, and lifecycle:

src/workerd/io/io-context.h#L54-L114

The lifecycle has three phases:

  1. Delivered: The delivered() method signals that the request has started executing. Before this, the request might be canceled without any JavaScript running.
  2. Active: JavaScript is executing. Subrequests and resource usage are attributed to the "current" incoming request — defined as the newest request that hasn't completed.
  3. Drain: After the response is sent, drain() waits for waitUntil() tasks. For actor requests, draining continues until all tasks finish, a new request arrives (which takes over), or the actor shuts down.

stateDiagram-v2
    [*] --> Created
    Created --> Delivered: delivered()
    Created --> Canceled: destroyed before delivery
    Delivered --> Draining: drain()
    Delivered --> FinishScheduled: finishScheduled()
    Draining --> Done: waitUntil tasks complete
    Draining --> Superseded: new request arrives (actors)
    FinishScheduled --> Done: tasks complete
    FinishScheduled --> Timeout: time limit exceeded
    Done --> [*]
    Canceled --> [*]

The finishScheduled() method is particularly interesting — it's used for scheduled events (cron triggers) where the client is waiting for the result. Unlike drain(), which runs after the response is sent, finishScheduled() blocks the response until all waitUntil tasks complete or a timeout is hit.

Tip: When debugging "disappearing" waitUntil tasks, check whether drain() is being called correctly. In actor mode, a new incoming request can supersede the drain of a previous request — meaning your waitUntil tasks might be reassigned to a different request's drain cycle.

IoOwn, IoPtr, and the DeleteQueue

How do garbage-collected JavaScript objects safely reference thread-bound I/O objects? Through IoOwn<T> and IoPtr<T>:

src/workerd/io/io-own.h#L1-L100

IoOwn<T> is an owning pointer — it guarantees the pointed-to object will be destroyed on the correct I/O thread. IoPtr<T> is a non-owning reference. Both assert that access happens from the correct IoContext.

The critical problem is destruction. When V8's garbage collector collects a JavaScript wrapper that holds an IoOwn<T>, the destructor might run on a different thread than the one that owns the I/O object. Deleting a KJ object on the wrong thread would be undefined behavior.

The DeleteQueue solves this elegantly:

flowchart LR
    GC["V8 GC (any thread)"] --> |"destructor runs"| DQ["DeleteQueue::scheduleDeletion()"]
    DQ --> |"mutex-protected enqueue"| Queue["Cross-thread queue"]
    Queue --> |"processed on I/O thread"| Delete["Actual deletion on correct thread"]

When an IoOwn is destroyed from the wrong thread, instead of directly deleting the object, it adds a pointer to the DeleteQueue. The DeleteQueue is protected by a mutex and polled by the owning IoContext. If the IoContext has already been destroyed (meaning all I/O objects were already cleaned up), cross-thread deletions are simply ignored.

The DeleteQueue also supports scheduling arbitrary actions across contexts — used for cross-context promise resolution, where one IoContext needs to fulfill a promise owned by another.

End-to-End Request Trace

Let's put it all together by tracing a complete HTTP request:

sequenceDiagram
    participant Client
    participant HL as HttpListener
    participant WS as WorkerService
    participant WE as WorkerEntrypoint
    participant IoC as IoContext
    participant W as Worker::Lock
    participant JS as fetch() handler

    Client->>HL: TCP connection
    HL->>HL: Extract peer identity, build cfBlob
    HL->>WS: startRequest(metadata)
    WS->>WE: construct(worker, limitEnforcer, ...)
    WE->>IoC: new IoContext(worker, actor, limits)
    WE->>IoC: new IncomingRequest(context, channels, metrics)

    HL->>WE: request(GET, "/path", headers, body, response)
    WE->>IoC: incomingRequest.delivered()
    WE->>W: takeAsyncLock() → Worker::Lock
    W->>JS: Execute export default { fetch(request, env, ctx) }
    JS-->>W: Response object
    W->>HL: response.send(200, headers, body)

    Note over WE: Response sent to client

    WE->>IoC: drain()
    IoC->>IoC: Wait for waitUntil() tasks
    IoC-->>WE: Done
    WE->>WE: Destruction, IoContext cleanup

The WorkerService (src/workerd/server/server.c++#L1882-L1932) is notable for implementing three interfaces simultaneously: Service (for the nanoservice protocol), IoChannelFactory (providing I/O channels to the IoContext), and LimitEnforcer (enforcing CPU and memory limits). This triple-duty design is what makes the service graph self-contained.

Error handling follows two paths. If the JavaScript handler throws before the response headers are sent, WorkerEntrypoint converts the exception to an HTTP 500 response. But if the Worker called passThroughOnException() and a fallback service is configured, the request is instead proxied to that fallback — a production feature that allows partial degradation instead of hard failures.

The waitUntil() drain is fire-and-forget from the client's perspective — the response has already been sent. But the IoContext keeps running until drain completes, and any exceptions in waitUntil tasks are logged but don't affect the response.

Understanding this request lifecycle — the IoContext bridge, the async lock queue, the drain protocol — is essential before we can understand how JavaScript APIs are actually implemented. In Part 4, we'll descend into the JSG binding layer to see how C++ classes become JavaScript objects and how V8 function calls route back to C++ methods.