Read OSS

The Scan Phase: Parallel Parsing and Module Graph Construction

Advanced

Prerequisites

  • Article 1: Architecture and Project Layout
  • Understanding of Go concurrency (goroutines, channels)
  • Familiarity with JavaScript module systems (ESM, CommonJS)

As we saw in Part 1, every esbuild build has two phases. The scan phase — ScanBundle() — is where the heavy lifting happens. It starts from entry points, resolves import paths, parses files in parallel, and constructs a complete module graph. This is the phase that benefits most from esbuild's concurrency model, and where the streaming lexer and two-pass parser design pay off most dramatically.

In this article, we'll trace the journey from raw source text to a fully resolved module graph.

The Streaming Lexer: On-Demand Tokenization

Most compilers have a clear pipeline: lex the entire file into tokens, then parse the token stream. esbuild deliberately breaks this convention. The lexer comment at internal/js_lexer/js_lexer.go#L1-L14 explains why:

// The lexer converts a source file to a stream of tokens. Unlike many
// compilers, esbuild does not run the lexer to completion before the parser is
// started. Instead, the lexer is called repeatedly by the parser as the parser
// parses the file. This is because many tokens are context-sensitive and need
// high-level information from the parser. Examples are regular expression
// literals and JSX elements.

Consider this JavaScript:

let x = a / b / c     // Two division operators
let y = /regex/g      // A regex literal

After a in the first line, / is a division operator; after = in the second, it begins a regex literal. The distinction depends on parsing context, specifically on whether the parser is expecting an expression or an operator, and a standalone lexer can't know this. By making the lexer demand-driven, esbuild lets the parser provide that context naturally.

sequenceDiagram
    participant P as Parser
    participant L as Lexer

    P->>L: Next token (expecting expression)
    L-->>P: TIdentifier "a"
    P->>L: Next token (expecting operator)
    L-->>P: TSlash (division)
    P->>L: Next token (expecting expression)
    L-->>P: TIdentifier "b"
    P->>L: Next token (expecting operator)
    L-->>P: TSlash (division)
    P->>L: Next token (expecting expression)
    L-->>P: TIdentifier "c"

The lexer also handles an efficient dual encoding strategy for text: identifiers are stored as UTF-8 slices of the input (zero-allocation), while string literals use UTF-16 encoding to correctly represent Unicode surrogates. This is called out in the package comment — a subtle but impactful choice for a tool that processes millions of identifiers per build.

The token type is defined as a uint8 enum starting at internal/js_lexer/js_lexer.go#L29, with distinct token kinds for template literal parts (TTemplateHead, TTemplateMiddle, TTemplateTail), which is critical for parsing template literals correctly.

The Two-Pass Parser

The parser is the largest single file in the codebase — js_parser.go alone is tens of thousands of lines. Its design is documented at the top in internal/js_parser/js_parser.go#L22-L35:

// This parser does two passes:
//
// 1. Parse the source into an AST, create the scope tree, and declare symbols.
//
// 2. Visit each node in the AST, bind identifiers to declared symbols, do
//    constant folding, substitute compile-time variable definitions, and
//    lower certain syntactic constructs as appropriate given the language target.
//
// So many things have been put in so few passes because we want to minimize
// the number of full-tree passes to improve performance.

Why Two Passes?

The split into two passes is forced by JavaScript's variable hoisting semantics. Consider:

console.log(x); // Must resolve 'x' to the var below
var x = 5;      // Hoisted to function scope

Pass 1 must see the entire scope to find all var declarations before Pass 2 can resolve identifier references. The parser tracks this via scopesInOrder — a list of scope entries from Pass 1 that Pass 2 consumes in order. From internal/js_parser/js_parser.go#L130-L144:

// The parser does two passes and we need to pass the scope tree information
// from the first pass to the second pass. That's done by tracking the calls
// to pushScopeForParsePass() and popScope() during the first pass in
// scopesInOrder.

TypeScript Integration

A distinctive design choice: TypeScript is parsed inline in the same parser, not as a separate compilation step. The TypeScript-specific parsing logic lives in internal/js_parser/ts_parser.go but is called directly from the main parser. This means type annotations are stripped during parsing, without the overhead of a separate transform step.

flowchart TD
    SRC["Source Text"] --> LEXER["Streaming Lexer"]
    LEXER --> PASS1["Pass 1: Parse + Declare"]
    PASS1 --> AST["AST + Scope Tree"]
    PASS1 --> SCOPES["scopesInOrder"]
    AST --> PASS2["Pass 2: Visit + Bind"]
    SCOPES --> PASS2
    PASS2 --> FOLD["Constant Folding"]
    PASS2 --> LOWER["Syntax Lowering"]
    PASS2 --> BIND["Identifier Binding"]
    PASS2 --> TS["TypeScript Stripping"]
    FOLD --> RESULT["Final AST"]
    LOWER --> RESULT
    BIND --> RESULT
    TS --> RESULT

The Parser Struct

The parser struct at internal/js_parser/js_parser.go#L36-L170 is the central state machine. It contains everything from import tracking to TypeScript namespace resolution data. A few fields worth noting:

  • symbols []ast.Symbol — the flat symbol table for this file
  • importRecords []ast.ImportRecord — tracks every dependency
  • runtimeImports map[string]ast.LocRef — references to runtime helpers like __commonJS
  • refToTSNamespaceMemberData — enables TypeScript enum and namespace resolution

Tip: When debugging parser behavior, start with the parser struct fields. They tell you what state the parser tracks, which is often more informative than reading the parsing methods themselves.

The Scanner: Fan-Out/Fan-In Concurrency

The scanner orchestrates the entire scan phase. It starts from entry points, parses files in parallel, and resolves dependencies on a single coordinator goroutine.

The Core Loop

The key function is scanAllDependencies():

func (s *scanner) scanAllDependencies() {
    for s.remaining > 0 {
        result := <-s.resultChannel
        s.remaining--
        if !result.ok {
            continue
        }

        if recordsPtr := result.file.inputFile.Repr.ImportRecords(); s.options.Mode == config.ModeBundle && recordsPtr != nil {
            records := *recordsPtr
            for importRecordIndex := range records {
                record := &records[importRecordIndex]
                resolveResult := result.resolveResults[importRecordIndex]
                if resolveResult == nil {
                    continue
                }

                path := resolveResult.PathPair.Primary
                if !resolveResult.PathPair.IsExternal {
                    sourceIndex := s.maybeParseFile(...)
                    record.SourceIndex = ast.MakeIndex32(sourceIndex)
                }
            }
        }
        s.results[result.file.inputFile.Source.Index] = result
    }
}

The pattern is elegant:

  1. Read a completed parse result from the channel
  2. Iterate over its import records
  3. For each import, call maybeParseFile() which either returns an existing source index (deduplication) or spawns a new parsing goroutine
  4. Each new goroutine sends its result back to the same channel

The maybeParseFile Deduplication

maybeParseFile() is the gatekeeper. It checks a visited map to ensure each file is parsed only once:

func (s *scanner) maybeParseFile(...) uint32 {
    visitedKey := resolveResult.PathPair.Primary
    if visited, ok := s.visited[visitedKey]; ok {
        // Already parsed — return existing source index
        return visited.sourceIndex
    }
    // Not yet parsed — allocate index and spawn goroutine
    // ...
}

This is safe because maybeParseFile runs exclusively on the coordinator goroutine — there's no concurrent access to the visited map.

Runtime as Source Index Zero

A subtle but important detail: the runtime file is always parsed first and placed at source index 0. From ScanBundle():

// Always start by parsing the runtime file
s.results = append(s.results, parseResult{})
s.remaining++
go func() {
    source, ast, ok := globalRuntimeCache.parseRuntime(&options)
    s.resultChannel <- parseResult{...}
}()

This means runtime.SourceIndex (the constant 0) can be used anywhere in the codebase to reference the runtime helpers, without any dynamic lookup.

sequenceDiagram
    participant S as Scanner (coordinator)
    participant CH as Result Channel
    participant G as Goroutine Pool

    S->>G: Parse runtime.go (index 0)
    S->>G: Parse entry.js (index 1)
    G-->>CH: runtime result
    S->>S: Process runtime (no imports)
    G-->>CH: entry.js result
    S->>S: Resolve imports: [./utils, lodash]
    S->>G: Parse utils.js (index 2)
    Note over S: lodash is external, skip
    G-->>CH: utils.js result
    S->>S: Resolve imports: [./helper]
    S->>G: Parse helper.js (index 3)
    G-->>CH: helper.js result
    S->>S: No more imports
    Note over S: s.remaining == 0, loop exits

Module Resolution and PathPair

esbuild's resolver handles the complexity of Node.js module resolution — node_modules traversal, package.json exports/imports maps, tsconfig.json paths, and the browser/module/main field priority.

The PathPair Design

One particularly clever design is the PathPair type at internal/resolver/resolver.go#L67-L74:

type PathPair struct {
    // Either secondary will be empty, or primary will be "module" and secondary
    // will be "main"
    Primary    logger.Path
    Secondary  logger.Path
    IsExternal bool
}

The problem PathPair solves: when a package.json has both "module" and "main" fields, bundlers prefer "module" for ESM imports (enabling tree shaking), but require() calls must use "main" to get a CommonJS module. Rather than resolving twice or guessing incorrectly, the resolver returns both paths. The linker later decides which one to use based on how the module is actually imported.

Platform-Specific Main Fields

The default main field order is hardcoded per platform at internal/resolver/resolver.go#L23-L55:

var defaultMainFields = map[config.Platform][]string{
    config.PlatformBrowser: {"browser", "module", "main"},
    config.PlatformNode:    {"main", "module"},
    config.PlatformNeutral: {},
}

The comments in this code are illuminating — they explain that browser wins over module because the presence of a browser field signals that the module field may contain non-browser code, and that main wins over module for Node because some packages incorrectly treat module as "code for the browser."

flowchart TD
    IMP["import 'pkg'"] --> RES["Resolver"]
    RES --> NM["Find in node_modules"]
    NM --> PJ["Read package.json"]
    PJ --> EXPORTS{"Has 'exports' map?"}
    EXPORTS -->|Yes| EXP["Resolve via exports map"]
    EXPORTS -->|No| FIELDS["Check main fields"]
    FIELDS --> BROWSER{"Platform = browser?"}
    BROWSER -->|Yes| BF["browser → module → main"]
    BROWSER -->|No| NF["main → module"]
    BF --> PP["PathPair{Primary, Secondary}"]
    NF --> PP
    EXP --> PP

InputFile, InputFileRepr, and ImportRecords

The scan phase produces a flat array of InputFile values — the core data structure that flows from scanning to linking.

The Polymorphic File Representation

InputFile uses an interface InputFileRepr to handle different file types:

type InputFile struct {
    Repr           InputFileRepr
    InputSourceMap *sourcemap.SourceMap
    SideEffects    SideEffects
    Source         logger.Source
    Loader         config.Loader
    // ...
}

type InputFileRepr interface {
    ImportRecords() *[]ast.ImportRecord
}

Three concrete types implement InputFileRepr:

  • JSRepr — JavaScript/TypeScript files (holds the full AST)
  • CSSRepr — CSS files (holds the CSS AST)
  • CopyRepr — files using the copy or file loader

This design lets the bundler and linker operate generically on InputFile while each phase can type-switch when it needs format-specific behavior.

ImportKind and ImportRecord

Dependencies between files are captured as ImportRecord values, with ImportKind distinguishing how each dependency was introduced:

type ImportKind uint8

const (
    ImportEntryPoint     ImportKind = iota
    ImportStmt           // import ... from 'path'
    ImportRequire        // require('path')
    ImportDynamic        // import('path')
    ImportRequireResolve // require.resolve('path')
    ImportAt             // CSS @import
    ImportComposesFrom   // CSS composes
    ImportURL            // CSS url()
)

The ImportKind is used throughout the pipeline — the linker uses it to determine whether to wrap a module in __commonJS, the resolver uses it to decide which PathPair path to prefer, and the printer uses it to generate the correct output syntax.

classDiagram
    class InputFile {
        +Repr InputFileRepr
        +Source logger.Source
        +SideEffects SideEffects
        +Loader config.Loader
    }
    class InputFileRepr {
        <<interface>>
        +ImportRecords() *[]ImportRecord
    }
    class JSRepr {
        +AST js_ast.AST
        +Meta JSReprMeta
    }
    class CSSRepr {
        +AST css_ast.AST
    }
    class CopyRepr {
    }
    class ImportRecord {
        +Path logger.Path
        +SourceIndex Index32
        +Kind ImportKind
        +Flags ImportRecordFlags
    }

    InputFile --> InputFileRepr
    InputFileRepr <|.. JSRepr
    InputFileRepr <|.. CSSRepr
    InputFileRepr <|.. CopyRepr
    JSRepr --> ImportRecord : contains
    CSSRepr --> ImportRecord : contains

Tip: The SideEffects field on InputFile is critical for tree shaking. It captures whether a file was marked as side-effect-free via package.json's sideEffects field, whether the AST was empty, or whether it was a data-only file (like JSON). The linker consults this to decide if an unused import can be safely removed.

From Scan to Compile

At the end of ScanBundle(), all files have been parsed, all dependencies resolved, and the results assembled into a Bundle:

return Bundle{
    fs:              fs,
    res:             s.res,
    files:           files,
    entryPoints:     entryPointMeta,
    uniqueKeyPrefix: uniqueKeyPrefix,
    options:         s.options,
}

This Bundle is the immutable handoff point to the compile phase. In Part 3, we'll see how the linker clones this data, resolves cross-file import/export bindings, performs tree shaking via reachability analysis, and splits files into output chunks using a BitSet-based entry-point membership system.