Read OSS

Watch Mode, onSuccess Hooks, and the Development Feedback Loop

Intermediate

Prerequisites

  • Completion of Articles 1–3
  • Basic familiarity with chokidar and Node.js fs events
  • Understanding of child process management (spawn, kill signals)

A bundler that only runs once isn't much help during development. tsup's watch mode turns the build into a continuous feedback loop — files change, the bundle rebuilds, your dev server restarts. But doing this well requires care: you need to rebuild only when relevant files change, avoid overlapping builds during rapid saves, and cleanly manage child processes across platforms. This article examines each piece.

Entering Watch Mode: chokidar Setup and Initial Build

Watch mode is activated inside mainTasks(). After the initial buildAll() completes, startWatcher() is called on line 451. The function lazily imports chokidar only when watch mode is enabled:

const startWatcher = async () => {
  if (!options.watch) return
  const { watch } = await import('chokidar')
  // ...
}

The watcher is configured with sensible ignores on lines 378-382:

const ignored = [
  '**/{.git,node_modules}/**',
  options.outDir,
  ...customIgnores,
]

The output directory is always ignored — without this, a rebuild would trigger another file change event, creating an infinite loop.

flowchart TD
    A["mainTasks()"] --> B["buildAll() — initial build"]
    B --> C["startWatcher()"]
    C --> D["import chokidar"]
    D --> E["chokidar.watch(paths, { ignored })"]
    E --> F["watcher.on('all', handler)"]
    F --> G{"File changed"}
    G -->|relevant file| H["debouncedBuildAll()"]
    G -->|irrelevant file| I["Skip"]
    H --> B

The watch paths default to '.' (the current directory) when --watch is passed as a boolean. But they can be strings or arrays:

tsup --watch src --watch lib    # Watch specific directories
tsup --watch                     # Watch everything in "."
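The normalization from boolean, string, or array into a path list for chokidar can be sketched as follows (resolveWatchPaths is a hypothetical helper, not tsup's actual code):

```typescript
// Hypothetical sketch: normalize a tsup-style `watch` option
// (true | string | (string | boolean)[]) into the paths handed to chokidar.
type WatchOption = boolean | string | (string | boolean)[]

function resolveWatchPaths(watch: WatchOption): string[] {
  if (watch === true) return ['.'] // bare --watch: watch the current directory
  if (watch === false) return []   // watch mode disabled
  if (typeof watch === 'string') return [watch]
  // Array form (--watch src --watch lib); drop any stray booleans
  return watch.filter((p): p is string => typeof p === 'string')
}
```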

Smart Rebuilds with buildDependencies and esbuild Metafile

Not every file change should trigger a rebuild. If you edit a README while tsup is watching, there's no reason to rebuild. tsup tracks exactly which files were used during the build using esbuild's metafile.

At the end of runEsbuild(), every input file is added to a shared buildDependencies set:

if (result.metafile) {
  for (const file of Object.keys(result.metafile.inputs)) {
    buildDependencies.add(file)
  }
}
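For a concrete picture of what that loop iterates over, here is an illustrative metafile (paths and byte counts made up) run through the same collection logic:

```typescript
// Illustrative shape of esbuild's metafile `inputs` object; the real value
// comes from building with `metafile: true`.
const metafile = {
  inputs: {
    'src/index.ts': {
      bytes: 512,
      imports: [{ path: 'src/utils.ts', kind: 'import-statement' }],
    },
    'src/utils.ts': { bytes: 128, imports: [] },
  },
}

// Same loop as above: every input path becomes a watched dependency.
const buildDependencies = new Set<string>()
for (const file of Object.keys(metafile.inputs)) {
  buildDependencies.add(file)
}
```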

The metafile's inputs object contains every file that esbuild read during the build — source files, imported modules, resolved dependencies. This set is then checked in the watcher's change handler on lines 428-435:

if (options.watch === true) {
  if (file === 'package.json' && !buildDependencies.has(file)) {
    const currentHash = await getAllDepsHash(process.cwd())
    shouldSkipChange = currentHash === depsHash
    depsHash = currentHash
  } else if (!buildDependencies.has(file)) {
    shouldSkipChange = true
  }
}

sequenceDiagram
    participant FS as File System
    participant W as Watcher
    participant BD as buildDependencies Set
    participant BUILD as buildAll()

    Note over BD: After initial build:<br/>{src/index.ts, src/utils.ts, ...}
    FS->>W: Change: README.md
    W->>BD: Has "README.md"?
    BD-->>W: No
    W->>W: shouldSkipChange = true
    Note over W: Skip rebuild

    FS->>W: Change: src/utils.ts
    W->>BD: Has "src/utils.ts"?
    BD-->>W: Yes
    W->>BUILD: debouncedBuildAll()

When custom watch paths are specified (e.g., --watch src), all changes in those paths trigger rebuilds — the buildDependencies check only applies when options.watch === true (the default boolean mode).
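The whole skip decision can be condensed into a small predicate (shouldSkipRebuild is a hypothetical name, and the package.json hash comparison is passed in as a precomputed boolean):

```typescript
// Hypothetical condensation of the watcher's skip logic. `watch === true`
// means default boolean mode; custom watch paths always trigger a rebuild.
function shouldSkipRebuild(
  watch: boolean | string | string[],
  file: string,
  buildDependencies: Set<string>,
  depsHashChanged: boolean, // result of the package.json hash comparison
): boolean {
  if (watch !== true) return false // custom watch paths: always rebuild
  if (file === 'package.json' && !buildDependencies.has(file)) {
    return !depsHashChanged        // rebuild only if the deps actually changed
  }
  return !buildDependencies.has(file) // skip files the build never read
}
```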

Note the defensive handling on lines 293-296: if a build fails, the previous buildDependencies are restored so the watcher still knows which files to watch for the next attempt.

package.json Dependency Hash: Selective Rebuild on Dep Changes

The package.json file gets special treatment. Editing a description or version field shouldn't trigger a rebuild, but adding a new dependency should (because the externals list may change).

getAllDepsHash() computes a hash of all dependency sections:

export async function getAllDepsHash(cwd: string) {
  const data = await loadPkg(cwd, true)  // clearCache: true
  return JSON.stringify({
    ...data.dependencies,
    ...data.peerDependencies,
    ...data.devDependencies,
  })
}

The hash is computed before the initial build and stored. When package.json changes, the new hash is compared to the stored one. Only if they differ does a rebuild proceed. The clearCache: true flag ensures joycon re-reads the file instead of using a stale cached version.
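A worked example of why this catches dependency edits but ignores metadata edits (depsHashOf is a hypothetical in-memory stand-in for getAllDepsHash that takes the parsed package.json directly):

```typescript
// Hypothetical variant of getAllDepsHash: only the dependency sections are
// serialized, so edits elsewhere in package.json leave the hash unchanged.
interface PkgJson {
  version?: string
  dependencies?: Record<string, string>
  peerDependencies?: Record<string, string>
  devDependencies?: Record<string, string>
}

const depsHashOf = (pkg: PkgJson): string =>
  JSON.stringify({
    ...pkg.dependencies,
    ...pkg.peerDependencies,
    ...pkg.devDependencies,
  })

const before: PkgJson = {
  version: '1.0.0',
  dependencies: { chokidar: '^3.0.0' },
}
// Metadata-only edit: hash is unchanged, rebuild skipped.
const bumpedVersion: PkgJson = { ...before, version: '1.0.1' }
// New dependency: hash differs, rebuild proceeds.
const addedDep: PkgJson = {
  ...before,
  dependencies: { ...before.dependencies, 'tree-kill': '^1.2.2' },
}
```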

flowchart TD
    A["package.json changed"] --> B["Compute new depsHash"]
    B --> C{"depsHash changed?"}
    C -->|yes| D["Rebuild — externals may have changed"]
    C -->|no| E["Skip — metadata-only change"]

debouncePromise: Coalescing Rapid File Changes

When you save a file in your editor, the filesystem may emit multiple events in rapid succession (write, rename, change). Without debouncing, each event would start a new build, creating a pile-up.

The debouncePromise utility in src/utils.ts is not a standard debounce — it's async-aware:

export function debouncePromise<T extends unknown[]>(
  fn: (...args: T) => Promise<void>,
  delay: number,
  onError: (err: unknown) => void,
) {
  let timeout: ReturnType<typeof setTimeout> | undefined
  let promiseInFly: Promise<void> | undefined
  let callbackPending: (() => void) | undefined

  return function debounced(...args: Parameters<typeof fn>) {
    if (promiseInFly) {
      callbackPending = () => {
        debounced(...args)
        callbackPending = undefined
      }
    } else {
      if (timeout != null) clearTimeout(timeout)
      timeout = setTimeout(() => {
        timeout = undefined
        promiseInFly = fn(...args)
          .catch(onError)
          .finally(() => {
            promiseInFly = undefined
            if (callbackPending) callbackPending()
          })
      }, delay)
    }
  }
}

sequenceDiagram
    participant E as File Events
    participant D as debounced()
    participant B as buildAll()

    E->>D: Change event 1
    Note over D: Start 100ms timer
    E->>D: Change event 2 (50ms later)
    Note over D: Reset timer to 100ms
    E->>D: Change event 3 (80ms later)
    Note over D: Reset timer to 100ms
    Note over D: 100ms elapsed...
    D->>B: Start build
    E->>D: Change event 4 (during build)
    Note over D: Build in flight —<br/>store as pending callback
    B-->>D: Build complete
    D->>D: Execute pending callback
    Note over D: Start new 100ms timer

The behavior has three states:

  1. Idle: New events reset the delay timer (standard debounce behavior)
  2. Building: New events are stored as a pending callback — no new build starts until the current one finishes
  3. Build complete with pending: The pending callback fires, starting a new debounce cycle

This prevents overlapping builds. The 100ms delay on line 287 is the default and handles the typical rapid-save scenario well.
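To see the coalescing in action, here is a self-contained demo that restates debouncePromise verbatim and drives it with simulated file events (timings are shortened for illustration; buildAll is a stand-in for the real build):

```typescript
// debouncePromise restated from above so the demo runs on its own.
function debouncePromise<T extends unknown[]>(
  fn: (...args: T) => Promise<void>,
  delay: number,
  onError: (err: unknown) => void,
) {
  let timeout: ReturnType<typeof setTimeout> | undefined
  let promiseInFly: Promise<void> | undefined
  let callbackPending: (() => void) | undefined

  return function debounced(...args: Parameters<typeof fn>) {
    if (promiseInFly) {
      callbackPending = () => {
        debounced(...args)
        callbackPending = undefined
      }
    } else {
      if (timeout != null) clearTimeout(timeout)
      timeout = setTimeout(() => {
        timeout = undefined
        promiseInFly = fn(...args)
          .catch(onError)
          .finally(() => {
            promiseInFly = undefined
            if (callbackPending) callbackPending()
          })
      }, delay)
    }
  }
}

const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms))

let builds = 0
const buildAll = async () => {
  builds++
  await sleep(50) // pretend the build takes 50ms
}

const debounced = debouncePromise(buildAll, 10, console.error)

async function demo() {
  debounced() // three rapid "file saves" within the 10ms window...
  debounced()
  debounced() // ...coalesce into a single build
  await sleep(30)
  debounced() // change while the build is in flight: queued as pending
  await sleep(200)
  console.log(builds) // 2: the coalesced build plus one queued follow-up
}
demo()
```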

Tip: If you find watch mode rebuilding too aggressively, use --ignore-watch to exclude directories. For example, --ignore-watch test prevents test file changes from triggering rebuilds. The ignored patterns are passed directly to chokidar.

The onSuccess Hook: Commands and Callbacks

After a successful build in watch mode, you typically want to do something — restart a dev server, run tests, or copy files. The onSuccess option supports two forms.

Function callbacks are invoked directly, while string commands are spawned as shell processes using tinyexec:

if (typeof options.onSuccess === 'function') {
  onSuccessCleanup = await options.onSuccess()
} else {
  onSuccessProcess = exec(options.onSuccess, [], {
    nodeOptions: { shell: true, stdio: 'inherit' },
  })
}

Function callbacks can return an optional cleanup function, which is called before the next rebuild.
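The callback-plus-cleanup contract can be sketched independently of tsup (createOnSuccessRunner is a hypothetical name for illustration):

```typescript
// Hypothetical sketch of the onSuccess callback contract: the callback may
// return a cleanup function, which must run before the next invocation.
type Cleanup = () => void | Promise<void>
type OnSuccess = () => Promise<Cleanup | void>

function createOnSuccessRunner(onSuccess: OnSuccess) {
  let pendingCleanup: Cleanup | void

  return async function runAfterBuild() {
    if (pendingCleanup) await pendingCleanup() // tear down the previous run
    pendingCleanup = await onSuccess()         // may hand back a new cleanup
  }
}

// Example: "start" a fake dev server, stopping it before the next rebuild.
const log: string[] = []
const runAfterBuild = createOnSuccessRunner(async () => {
  log.push('server started')
  return () => {
    log.push('server stopped')
  }
})
```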

The lifecycle management is handled by doOnSuccessCleanup() on lines 269-281. Before each rebuild, the previous onSuccess process or cleanup function must complete:

stateDiagram-v2
    [*] --> Idle
    Idle --> Building: File change
    Building --> RunningOnSuccess: Build succeeds
    Building --> Idle: Build fails
    RunningOnSuccess --> KillingPrevious: File change
    KillingPrevious --> Building: Previous process killed
    RunningOnSuccess --> Idle: Process exits

For shell commands, killing the previous process uses killProcess(), which wraps the tree-kill package:

const killProcess = ({ pid, signal }: { pid: number; signal: KILL_SIGNAL }) =>
  new Promise<void>((resolve, reject) => {
    kill(pid, signal, (err) => {
      if (err && !isTaskkillCmdProcessNotFoundError(err)) return reject(err)
      resolve()
    })
  })

The isTaskkillCmdProcessNotFoundError function handles a Windows-specific edge case: tree-kill uses the taskkill command on Windows, which returns exit code 128 when the target process has already exited. Without this check, tsup would throw an error on every rebuild where the previous process exited naturally.

The KILL_SIGNAL option (configurable via --killSignal) defaults to SIGTERM, which allows graceful shutdown. Set it to SIGKILL for immediate termination if your onSuccess process doesn't handle SIGTERM.
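Putting the pieces together, a typical watch-mode config might look like this (a sketch assuming tsup's documented option names; the server path is illustrative):

```typescript
// Hypothetical tsup.config.ts: restart a dev server after each successful
// rebuild, and force-kill it if it ignores SIGTERM.
import { defineConfig } from 'tsup'

export default defineConfig({
  entry: ['src/index.ts'],
  watch: true,
  onSuccess: 'node dist/server.js', // re-spawned after every successful build
  killSignal: 'SIGKILL',            // skip graceful shutdown on restart
})
```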

Test Infrastructure: End-to-End Validation

tsup's integration tests provide a practical example of how all these pieces come together. The test harness in test/utils.ts creates temporary fixture projects and runs the pre-built tsup binary against them:

const bin = path.resolve(__dirname, '../dist/cli-default.js')

export async function run(
  title: string,
  files: { [name: string]: string },
  options: { entry?: string[]; flags?: string[]; env?: Record<string, string> } = {},
) {
  const testDir = path.resolve(cacheDir, filenamify(title))
  // Write fixture files
  await Promise.all(
    Object.keys(files).map(async (name) => {
      const filePath = path.resolve(testDir, name)
      await fsp.mkdir(path.dirname(filePath), { recursive: true })
      return fsp.writeFile(filePath, files[name], 'utf8')
    }),
  )
  // Run tsup
  const processPromise = exec(bin, [...entry, ...(options.flags || [])], {
    nodeOptions: { cwd: testDir, env: { ...process.env, ...options.env } },
  })
  // ...
}

Each test writes files to .cache/<test-name>/, executes dist/cli-default.js as a subprocess, and asserts on the output. This is a true end-to-end test — it exercises the CLI argument parsing, config loading, esbuild pipeline, plugin transforms, and file output.

flowchart LR
    A["Test: write fixture files<br/>to .cache/test-name/"] --> B["exec(dist/cli-default.js,<br/>flags, { cwd: testDir })"]
    B --> C["tsup runs in subprocess"]
    C --> D["Read dist/ output files"]
    D --> E["Assert on file contents,<br/>output file list, logs"]

The design requires that tsup is pre-built before tests run — the package.json scripts enforce this: "test": "pnpm run build && pnpm run test-only". This is intentional: the tests validate the published artifact, not the raw source. If a bug exists in the build process itself, the tests will catch it.

Tip: When contributing to tsup, run pnpm build before pnpm test-only to ensure you're testing against your latest changes. The test suite won't pick up source modifications until the project is rebuilt.

Series Recap

Across these six articles, we've traced the complete lifecycle of a tsup build:

  1. Architecture: Three entry points converge on build(), which dispatches JS and DTS tasks in parallel
  2. Configuration: joycon discovers config files, bundle-require executes TypeScript configs, normalizeOptions() resolves everything into a strict type
  3. esbuild Pipeline: Auto-externalization, six esbuild plugins for file-level transforms, write: false for in-memory output
  4. tsup Plugins: Seven built-in post-build plugins, carefully ordered — shebang → tree-shaking → CJS splitting → CJS interop → SWC target → size reporter → terser
  5. DTS Generation: Two strategies — Rollup-based --dts in a Worker thread versus tsc + API Extractor with --experimental-dts
  6. Watch Mode: Smart rebuilds via metafile tracking, debounced builds, cross-platform process management

tsup's design philosophy is pragmatic composition: use the best tool for each job (esbuild for speed, Rollup for tree shaking, SWC for decorator metadata, TypeScript for types), and bridge them together with a clean in-memory pipeline. The result is a bundler that's fast by default and extensible when you need it.