Read OSS

From `next dev` to First Response: The Server Boot and Request Pipeline

Advanced

Prerequisites

  • Article 1: Architecture and Codebase Navigation
  • Node.js HTTP server fundamentals (createServer, request/response lifecycle)
  • Understanding of process forking and worker patterns in Node.js
  • Basic familiarity with class inheritance in TypeScript

When you type next dev and hit enter, a surprisingly complex chain of events unfolds before your browser receives its first response. The server architecture involves process forking, layered abstraction, lazy initialization, and a ~3,050-line abstract class that forms the backbone of request handling. Understanding this boot sequence and request pipeline is essential for debugging Next.js internals — and it reveals why certain architectural decisions were made.

CLI to Server: The Boot Sequence

The journey begins in packages/next/src/cli/next-dev.ts. The nextDev function first calls parseBundlerArgs() (which we covered in Article 1) to determine the active bundler, then resolves the project directory.

The key architectural decision here is the child process fork. The main CLI process doesn't run the server directly — it spawns a child process using fork():

```mermaid
sequenceDiagram
    participant CLI as next dev (main process)
    participant Child as Server (child process)
    participant HTTP as HTTP Server

    CLI->>CLI: parseBundlerArgs()
    CLI->>CLI: preflight checks (sass, react versions)
    CLI->>Child: fork('start-server.ts')
    Child->>Child: startServer()
    Child->>HTTP: http.createServer()
    HTTP-->>Child: 'listening' event
    Child->>Child: initialize() (router-server)
    Child-->>CLI: IPC: { nextServerReady, port, distDir }
    CLI->>CLI: Store port/distDir for telemetry
```

The fork happens at next-dev.ts#L323, passing environment variables like TURBOPACK, NEXT_PRIVATE_WORKER, and Node options. The child communicates back via IPC messages — nextWorkerReady triggers sending server options, and nextServerReady signals that the port is bound and the server is accepting connections.

Why fork? Two reasons: isolation and restartability. If the server process runs out of memory or encounters an unrecoverable error, the main process can restart it. Note the memory pressure check in start-server.ts#L249-L265 — when heap usage exceeds 80%, the child process exits with RESTART_EXIT_CODE and the parent respawns it.
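
The restart decision itself reduces to a heap-usage ratio check. Here is a runnable sketch under stated assumptions — `shouldRestart` and `checkMemoryPressure` are hypothetical names, and the exit-code value is a placeholder; the real check lives in start-server.ts#L249-L265 and reads `v8.getHeapStatistics()`:

```ts
import v8 from 'node:v8'

// Placeholder value; the real constant is defined in Next.js internals.
const RESTART_EXIT_CODE = 77

// Pure decision: restart when heap usage exceeds the threshold (~80%).
function shouldRestart(
  usedHeapBytes: number,
  heapLimitBytes: number,
  threshold = 0.8
): boolean {
  return usedHeapBytes / heapLimitBytes > threshold
}

// In the child process, a periodic check can exit with the restart code,
// and the parent respawns the worker on seeing that exit code.
function checkMemoryPressure(): void {
  const { used_heap_size, heap_size_limit } = v8.getHeapStatistics()
  if (shouldRestart(used_heap_size, heap_size_limit)) {
    process.exit(RESTART_EXIT_CODE)
  }
}
```

Keeping the decision pure makes the threshold easy to test in isolation, independent of the process-management machinery around it.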

HTTP Server Creation and Port Binding

Inside the child process, startServer() creates the HTTP server. The implementation uses a clever deferred handler pattern:

```ts
let handlersPromise: Promise<void> | undefined = new Promise<void>(...)
let requestHandler: WorkerRequestHandler = async (req, res) => {
  if (handlersPromise) {
    await handlersPromise
    return requestHandler(req, res)
  }
  throw new Error('Invariant request handler was not setup')
}
```

The HTTP server starts listening immediately, but requests are queued until the router-server initialization completes. This means the port is bound as fast as possible — the browser won't get "connection refused" even if initialization takes a few seconds.

The actual server creation at lines 270-278 is a straightforward http.createServer() (or https.createServer() for self-signed certificates in dev). Port retry logic handles EADDRINUSE by incrementing the port up to 10 times.
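
The retry loop can be sketched as follows — `nextPortToTry` and `listenWithRetry` are hypothetical names for illustration, not the actual helpers in start-server.ts:

```ts
import http from 'node:http'

// Pure decision: given a bind-error code and attempt count, return the
// next port to try, or null to give up after maxRetries attempts.
function nextPortToTry(
  code: string | undefined,
  port: number,
  attempt: number,
  maxRetries = 10
): number | null {
  if (code === 'EADDRINUSE' && attempt < maxRetries) return port + 1
  return null
}

// Driver: keep calling listen() until the port binds or we run out of
// retries. Node re-emits 'error' with EADDRINUSE when a port is taken.
function listenWithRetry(server: http.Server, port: number): Promise<number> {
  return new Promise((resolve, reject) => {
    let attempt = 0
    server.on('error', (err: NodeJS.ErrnoException) => {
      const next = nextPortToTry(err.code, port, attempt++)
      if (next === null) return reject(err)
      port = next
      server.listen(port)
    })
    server.once('listening', () => resolve(port))
    server.listen(port)
  })
}
```

Separating the pure retry decision from the event-driven driver makes the cap ("up to 10 times") trivially testable.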

Router-Server and Render-Server Split

Once the HTTP server is listening, the child process calls the initialize() function in router-server.ts. This is where the real architecture emerges — a two-layer split between routing and rendering:

```mermaid
flowchart TD
    HTTP["HTTP Server\n(start-server.ts)"] --> RouterServer["Router Server\n(router-server.ts)"]

    RouterServer --> Config["Load Config"]
    RouterServer --> FsCheck["Setup Filesystem Checker"]
    RouterServer --> Compression["Setup Compression"]
    RouterServer --> ResolveRoutes["Route Resolver"]

    RouterServer -->|"Delegates rendering"| RenderServer["Render Server\n(render-server.ts)"]

    RenderServer -->|"Lazy creates"| NextServer["NextServer\n(next.ts)"]
    NextServer -->|"Lazy loads"| NodeServer["NextNodeServer\n(next-server.ts)"]
    NextServer -->|"Or in dev"| DevServer["DevServer\n(next-dev-server.ts)"]

    subgraph "Router Layer (always running)"
        RouterServer
        ResolveRoutes
    end

    subgraph "Render Layer (lazy, replaceable)"
        RenderServer
        NextServer
        NodeServer
        DevServer
    end
```

The router-server loads configuration, sets up the filesystem checker (which reads manifests to know what routes exist), and creates the route resolver. The render-server is a lazy wrapper — it instantiates the actual NextServer only when a rendering request arrives.

This separation exists primarily for development. When code changes, the render worker can be torn down and recreated without disrupting the routing layer. Middleware continues to run even while the render server is restarting. In router-server.ts#L129-L137, you can see the render server is stored as LazyRenderServerInstance — an object with an optional instance property that can be replaced:

```ts
const renderServer: LazyRenderServerInstance = {}
```

In development mode, the router-server also sets up the dev bundler via setupDevBundler(), which initializes Turbopack or Webpack's HMR infrastructure and watches for file changes.

The Server Class Hierarchy

When the render-server creates its NextServer instance, we enter the server class hierarchy — the most important abstraction in the codebase:

```mermaid
classDiagram
    class Server {
        <<abstract>>
        +handleRequest(req, res, parsedUrl)
        -handleRequestImpl(req, res, parsedUrl)
        #run(req, res, parsedUrl)
        -pipe(fn, context)
        #renderToResponse(ctx)*
        #loadComponents(page)*
        #findPageComponents(params)*
        #getRoutesManifest()*
        hostname: string
        nextConfig: NextConfigRuntime
        distDir: string
        buildId: string
    }

    class NextNodeServer {
        +loadComponents(page)
        +findPageComponents(params)
        +getRoutesManifest()
        -loadManifestWithRetries(name)
        +serveStatic(req, res, path)
        -sendRenderResult(req, res, result)
    }

    class DevServer {
        +ensurePage(opts)
        -getCompilationError(page)
        -logErrorWithOriginalStack(err)
        +getStaticPaths(params)
    }

    Server <|-- NextNodeServer : extends
    NextNodeServer <|-- DevServer : extends
```

The abstract Server class in base-server.ts (~3,050 lines) is runtime-agnostic. It defines the request handling pipeline without assuming Node.js — this is what allows the same logic to potentially work on Edge. It handles route matching, caching, middleware application, and response generation.

NextNodeServer adds Node.js-specific capabilities: filesystem access for loading manifests, gzip compression, static file serving, and the IncomingMessage/ServerResponse adaptation layer.

DevServer adds development features: on-demand page compilation via ensurePage(), error overlay integration, source map support, and HMR coordination.
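
The hierarchy follows the classic template-method shape: the base class owns the orchestration and defers rendering to subclasses. A runnable sketch with deliberately simplified, hypothetical signatures (the real methods take request/response objects, not strings):

```ts
abstract class BaseServer {
  // Runtime-agnostic orchestration: normalize the URL, then delegate
  // rendering to whatever subclass is in play.
  handleRequest(pathname: string): string {
    const normalized = pathname.replace(/\/+$/, '') || '/'
    return this.renderToResponse(normalized)
  }
  protected abstract renderToResponse(pathname: string): string
}

class NodeServer extends BaseServer {
  // Node-specific rendering; stands in for loading components from disk.
  protected renderToResponse(pathname: string): string {
    return `<html>rendered ${pathname}</html>`
  }
}

class DevNodeServer extends NodeServer {
  private compiled = new Set<string>()
  // Dev layers on-demand compilation on top of the Node implementation,
  // analogous to ensurePage() before rendering.
  protected renderToResponse(pathname: string): string {
    if (!this.compiled.has(pathname)) this.compiled.add(pathname)
    return super.renderToResponse(pathname)
  }
}
```

The key property: handleRequest() never changes across environments; only the protected rendering hooks do, which is what lets the same pipeline serve dev and production.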

Tip: When reading base-server.ts, focus on handleRequest() (line 872) and run() (line 1737). These two methods orchestrate everything — handleRequest is the entry point and run is where rendering actually happens.

The NextServer Wrapper and Lazy Loading

Before requests reach NextNodeServer, they pass through NextServer — the public API wrapper class. This class uses lazy initialization extensively. The actual server implementation (ServerImpl) is loaded on first access via getServerImpl():

```ts
const getServerImpl = async () => {
  if (ServerImpl === undefined) {
    ServerImpl = (
      await Promise.resolve(
        require('./next-server') as typeof import('./next-server')
      )
    ).default
  }
  return ServerImpl
}
```

This lazy require() is significant. The next-server.ts module pulls in a massive dependency tree — manifest loaders, rendering engines, route matchers. By deferring this import, the router-server can start handling simple requests (static files, redirects) before the full rendering infrastructure is loaded.

Route Resolution: From URL to Handler

When a request arrives at the router-server, the first step is route resolution. The getResolveRoutes() function in resolve-routes.ts (~928 lines) creates a resolver that evaluates the request against the filesystem checker and custom routes:

```mermaid
flowchart TD
    Request["Incoming Request"] --> BasePath["Strip Base Path"]
    BasePath --> I18N["Locale Detection"]
    I18N --> Headers["Apply Custom Headers"]
    Headers --> Redirects["Check Redirects"]
    Redirects -->|Match| RedirectResponse["301/302 Response"]
    Redirects -->|No Match| Rewrites["Before Rewrites"]
    Rewrites --> Middleware["Run Middleware"]
    Middleware -->|Rewrite/Redirect| MiddlewareResult["Apply Middleware Result"]
    Middleware -->|Pass-through| FsCheck["Filesystem Check"]
    FsCheck -->|Static File| StaticServe["Serve Static"]
    FsCheck -->|Route Match| AfterRewrites["After Rewrites"]
    AfterRewrites --> RenderServer["Dispatch to Render Server"]
```

The filesystem checker (filesystem.ts) reads manifests (pages-manifest.json, app-paths-manifest.json, routes-manifest.json) to build a lookup table of known routes. In development, this table is updated dynamically as pages are compiled.

The route resolver processes requests in a specific order: base path stripping → locale detection → headers → redirects → "before" rewrites → middleware → filesystem check → "after" rewrites → fallback rewrites. This ordering is specified in the routes manifest produced by the build.
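
The essence of this ordering is a short-circuiting pipeline. A sketch with hypothetical `Step`/`Resolution` types (the real resolver built by getResolveRoutes() is far richer, and its order comes from the routes manifest):

```ts
type Resolution =
  | { kind: 'redirect'; location: string }
  | { kind: 'render'; page: string }

type Step = (pathname: string) => Resolution | null

// Steps run in manifest order; the first one that produces a resolution
// short-circuits the rest (e.g. a redirect never reaches the fs check).
function makeResolver(steps: Step[]) {
  return (pathname: string): Resolution | null => {
    for (const step of steps) {
      const result = step(pathname)
      if (result) return result
    }
    return null
  }
}

// Toy order mirroring the article: redirects, then a filesystem check.
const resolveRoute = makeResolver([
  (p) => (p === '/old' ? { kind: 'redirect', location: '/new' } : null),
  (p) => (p === '/new' ? { kind: 'render', page: '/new' } : null),
])
```

Encoding the order as data (an array of steps) rather than nested conditionals is what lets the build emit the ordering in a manifest and the server replay it faithfully.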

Request Handling Pipeline

Once a request passes route resolution and reaches base-server.ts, it enters the handleRequest() method. This wraps the real implementation in OpenTelemetry tracing:

```mermaid
sequenceDiagram
    participant Client as Browser
    participant BS as BaseServer.handleRequest()
    participant RM as RouteMatcherManager
    participant RL as Route Module Loader
    participant Render as renderToResponse()

    Client->>BS: HTTP Request
    BS->>BS: handleRequestImpl() - parse URL, normalize
    BS->>BS: Check for data requests (_next/data/*)
    BS->>RM: match(pathname)
    RM-->>BS: RouteMatch { definition, params }
    BS->>RL: loadComponents(match.page)
    RL-->>BS: { Component, mod, DocumentComponent }
    BS->>BS: run(req, res, parsedUrl)
    BS->>Render: pipe(renderToResponse, context)
    Render-->>BS: RenderResult
    BS->>Client: HTTP Response (HTML or RSC payload)
```

Inside handleRequestImpl(), the server first normalizes the URL, strips RSC-specific headers, and determines if this is an RSC (Flight) request or a full HTML request. It then delegates to run(), which calls pipe() with renderToResponse().

The route matching uses a provider pattern. Each route type has a matcher provider — AppPageRouteMatcherProvider, PagesRouteMatcherProvider, etc. These providers read from manifests and create matchers that are registered with the DefaultRouteMatcherManager. When a request comes in, the manager iterates through matchers to find the first match.

The matched route is then loaded via loadComponents(), which returns the React component, the route module, and associated metadata. For App Router routes, this includes the loader tree (the nested layout structure); for Pages Router routes, it includes getServerSideProps or getStaticProps functions.
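
The manager-iterates-matchers shape can be sketched as follows — `MatcherManager` and the simplified `RouteMatch` type are hypothetical stand-ins for DefaultRouteMatcherManager and its richer match type:

```ts
interface RouteMatch {
  page: string
  params: Record<string, string>
}

interface Matcher {
  match(pathname: string): RouteMatch | null
}

class MatcherManager {
  private matchers: Matcher[] = []

  // Providers read manifests and register the matchers they produce.
  push(matcher: Matcher): void {
    this.matchers.push(matcher)
  }

  // First matcher to claim the pathname wins.
  match(pathname: string): RouteMatch | null {
    for (const matcher of this.matchers) {
      const hit = matcher.match(pathname)
      if (hit) return hit
    }
    return null
  }
}

// A dynamic segment like /blog/[slug] becomes a pattern that extracts params.
const blogMatcher: Matcher = {
  match(pathname) {
    const m = /^\/blog\/([^/]+)$/.exec(pathname)
    return m ? { page: '/blog/[slug]', params: { slug: m[1] } } : null
  },
}
```

Registration order matters in this shape: more specific matchers must be pushed before catch-alls, which is why provider ordering is part of the real manager's contract.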

Tip: The pipe() method at line 1755 is a critical junction — it's where the abstract renderToResponse() is called. Each server subclass implements this differently: NextNodeServer delegates to route modules, while the route modules themselves call into the rendering engines (app-render.tsx for App Router, render.tsx for Pages Router).

Development vs. Production Differences

In production, the boot sequence is simpler. The next start CLI creates an HTTP server and calls initialize() directly — no process fork, no dev bundler, no HMR. The NextServer wrapper creates a NextNodeServer instead of a DevServer, and manifests are read once from disk rather than dynamically updated.

In development, several additional subsystems activate:

  • The dev bundler (Turbopack or Webpack HMR) watches for file changes
  • ensurePage() triggers on-demand compilation for routes not yet built
  • Error overlays intercept rendering errors and display them in the browser
  • The router-server watches next.config.js for changes and triggers a full restart

The DevServer class overrides key methods to inject development behavior. For instance, when loadComponents() fails because a page hasn't been compiled yet, DevServer calls ensurePage() to trigger compilation, waits for it to complete, then retries the load.
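
That ensure-then-retry flow can be sketched like so — `devLoadComponents`, `PageNotBuiltError`, and the synchronous compile stand-in are hypothetical; the real ensurePage() and loadComponents() are asynchronous and considerably more involved:

```ts
class PageNotBuiltError extends Error {}

const compiledPages = new Set<string>()

// Fails if the page hasn't been compiled yet, like a missing build output.
function loadComponents(page: string): { page: string } {
  if (!compiledPages.has(page)) throw new PageNotBuiltError(page)
  return { page }
}

// Stands in for on-demand compilation of a route.
function ensurePage(page: string): void {
  compiledPages.add(page)
}

// Dev behavior: a failed load triggers compilation, then a single retry.
function devLoadComponents(page: string): { page: string } {
  try {
    return loadComponents(page)
  } catch (err) {
    if (err instanceof PageNotBuiltError) {
      ensurePage(page)
      return loadComponents(page)
    }
    throw err
  }
}
```

Note the single retry: if the load still fails after compilation, the error propagates to the overlay rather than looping forever.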

What's Next

We've traced the path from CLI to first response, understanding the layered architecture that separates routing from rendering and the abstract server hierarchy that powers both development and production. In the next article, we'll dive deep into what happens inside renderToResponse() for App Router pages — the 7,350-line app-render.tsx file that orchestrates React Server Components, streaming, and Partial Prerendering.