The RPC Bridge: IPC Communication and Terminal Session Lifecycle

Intermediate

Prerequisites

  • Article 1: Architecture and Project Navigation
  • Electron IPC basics (ipcMain, ipcRenderer)
  • Understanding of pseudo-terminals (PTY)

As we saw in Part 1, Hyper's main and renderer processes are separate worlds connected by Electron's IPC mechanism. But Hyper doesn't use IPC directly — it wraps it in a typed, UUID-scoped RPC system that provides per-window message isolation, EventEmitter-style APIs, and two distinct communication patterns. Understanding this bridge is essential because every meaningful interaction in Hyper — from keystrokes to terminal output — crosses it.

In this article, we'll dissect the RPC system, then trace the complete lifecycle of a terminal session: how a PTY is spawned, how its output is batched for performance, and how it arrives at xterm.js for rendering.

The UUID-Scoped RPC Channel

Each BrowserWindow in Hyper gets its own dedicated IPC channel identified by a UUID. This is a critical design decision: without it, terminal data from one window could leak to another.

The Server class (main process side) generates the UUID and establishes the channel:

app/rpc.ts#L16-L36

When a BrowserWindow finishes loading, the server sends its UUID via webContents.send('init', uid). The Client class (renderer side) receives this UUID and subscribes to it:

lib/utils/rpc.ts#L20-L42
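To make the isolation concrete, here is a minimal sketch in which a shared EventEmitter stands in for Electron's IPC bus. The Server and Client classes and the channel-naming scheme are simplified stand-ins for Hyper's actual classes, not its real code:

```typescript
import { EventEmitter } from "node:events";
import { randomUUID } from "node:crypto";

// Stand-in for Electron's IPC layer (assumption: one shared emitter
// replaces ipcMain/ipcRenderer for illustration)
const ipcBus = new EventEmitter();

class Server {
  readonly uid: string;
  constructor(onMessage: (msg: unknown) => void) {
    this.uid = randomUUID(); // per-window channel id
    ipcBus.on(this.uid, onMessage); // listen only on this window's channel
  }
  send(msg: unknown) {
    ipcBus.emit(`renderer:${this.uid}`, msg);
  }
}

class Client {
  constructor(readonly uid: string, onMessage: (msg: unknown) => void) {
    ipcBus.on(`renderer:${uid}`, onMessage); // subscribe using the received UUID
  }
  send(msg: unknown) {
    ipcBus.emit(this.uid, msg);
  }
}

// Two windows: messages stay within their own channel
const receivedA: unknown[] = [];
const receivedB: unknown[] = [];
const serverA = new Server((m) => receivedA.push(m));
const serverB = new Server((m) => receivedB.push(m));
const clientA = new Client(serverA.uid, () => {});

clientA.send({ ev: "new" });
console.log(receivedA.length, receivedB.length); // 1 0 — no cross-window leakage
```

Because every listener is keyed by the window's UUID, window B's server never observes window A's traffic, even though both share the same underlying bus.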

sequenceDiagram
    participant M as Main (Server)
    participant E as Electron IPC
    participant R as Renderer (Client)

    M->>M: Generate UUID (e.g., "a1b2c3...")
    M->>E: ipcMain.on(uuid, listener)
    Note over M: Window finishes loading
    M->>R: webContents.send('init', uuid, profileName)
    R->>R: Cache uuid in window.__rpcId
    R->>E: ipcRenderer.on(uuid, listener)
    R->>R: emitter.emit('ready')
    Note over M,R: Channel established — all future messages use this UUID

    R->>E: ipcRenderer.send(uuid, {ev: 'new', data: {...}})
    E->>M: ipcMain receives on uuid channel
    M->>R: webContents.send(uuid, {ch: 'session add', data: {...}})

Notice the clever caching in the Client constructor: window.__rpcId is set so that if the Client object is re-instantiated (e.g., during hot reload), it can skip waiting for the init event and immediately reconnect using the cached UUID.

Both Server and Client wrap EventEmitter instances with typed generic parameters. The Server emits RendererEvents (main-to-renderer) and listens for MainEvents (renderer-to-main). The Client does the opposite. This ensures that all messages are type-checked at compile time — you can't accidentally emit a main-process event from the renderer.
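A stripped-down version of this typed wrapper might look like the following. The event maps and the TypedRpc class are illustrative, and a shared EventEmitter again plays the role of the IPC channel:

```typescript
import { EventEmitter } from "node:events";

// Hypothetical slimmed-down event maps (the real ones live in
// typings/common.d.ts and are much larger)
type MainEvents = { data: { uid: string; data: string } };
type RendererEvents = { "session add": { uid: string } };

// E = events this side may emit, L = events it may listen for
class TypedRpc<E, L> {
  constructor(private bus: EventEmitter) {}
  emit<K extends keyof E & string>(ev: K, payload: E[K]) {
    this.bus.emit(ev, payload);
  }
  on<K extends keyof L & string>(ev: K, cb: (payload: L[K]) => void) {
    this.bus.on(ev, cb);
  }
}

const bus = new EventEmitter();
// Main side emits RendererEvents and listens for MainEvents...
const server = new TypedRpc<RendererEvents, MainEvents>(bus);
// ...and the renderer side is the mirror image
const client = new TypedRpc<MainEvents, RendererEvents>(bus);

let seen: string | undefined;
client.on("session add", (p) => (seen = p.uid));
server.emit("session add", { uid: "abc" }); // OK: a renderer-bound event
// server.emit("data", ...) would be a compile error: wrong direction
console.log(seen); // "abc"
```

Swapping the two generic parameters between Server and Client is all it takes to make direction mistakes a compile error rather than a runtime surprise.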

Two IPC Patterns: Events vs Request-Response

Hyper uses two fundamentally different IPC patterns, each suited to different use cases:

Pattern 1: Fire-and-Forget Events — Used for streaming data and commands where the sender doesn't need a response. The RPC Server.emit() and Client.emit() methods implement this pattern. Terminal output, session lifecycle events, and UI commands all use events.

Pattern 2: Request-Response (invoke/handle) — Used when the renderer needs to query the main process for data. This uses Electron's ipcMain.handle() / ipcRenderer.invoke() pair, which returns a Promise.

app/plugins.ts#L467-L480

The request-response pattern is used exclusively for plugin-related queries: getting the decorated config, decorated keymaps, loaded plugin versions, and filesystem paths. These are values the renderer needs early during setup but that are owned by the main process.
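The pattern can be simulated without Electron. Here a plain Map of handlers stands in for the ipcMain.handle / ipcRenderer.invoke wiring; the command name mirrors a real one, but the plumbing is purely illustrative:

```typescript
// In-memory stand-in for Electron's invoke/handle plumbing
// (assumption: a Map of handlers replaces the real IPC transport)
const handlers = new Map<string, (...args: unknown[]) => unknown>();

const ipcMain = {
  handle(cmd: string, fn: (...args: unknown[]) => unknown) {
    handlers.set(cmd, fn);
  },
};

const ipcRenderer = {
  invoke(cmd: string, ...args: unknown[]): Promise<unknown> {
    const fn = handlers.get(cmd);
    // Electron wraps the handler's return value in a Promise; so do we
    return Promise.resolve(fn ? fn(...args) : undefined);
  },
};

// Main process registers a query handler...
ipcMain.handle("getDecoratedConfig", () => ({ fontSize: 12 }));

// ...and the renderer awaits the reply as a Promise
ipcRenderer.invoke("getDecoratedConfig").then((config) => {
  console.log(config); // { fontSize: 12 }
});
```

The key contrast with the event pattern: the caller gets its answer back on the same call, as a Promise, instead of registering a separate listener for a reply event.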

flowchart LR
    subgraph "Fire-and-Forget (RPC)"
        A[Renderer] -->|"emit('new', options)"| B[Main]
        B -->|"emit('session data', data)"| A
    end

    subgraph "Request-Response (invoke/handle)"
        C[Renderer] -->|"invoke('getDecoratedConfig')"| D[Main]
        D -->|"Promise<configOptions>"| C
    end

Tip: If you're writing a Hyper plugin and need data from the main process, prefer the invoke pattern (via ipcRenderer.invoke) over RPC events. The request-response pattern gives you a clean Promise-based API and avoids the need to manage event listeners for the reply.

Type-Safe Event Definitions

The type definitions in typings/common.d.ts are the source of truth for all IPC communication. Three types define the contract:

MainEvents (renderer → main) defines 15 events:

typings/common.d.ts#L32-L48

RendererEvents (main → renderer) defines 42 events:

typings/common.d.ts#L50-L93

IpcCommands (request-response) defines 8 commands:

typings/common.d.ts#L112-L128

classDiagram
    class MainEvents {
        +close: never
        +command: string
        +data: uid+data+escaped
        +exit: uid
        +init: null
        +new: sessionExtraOptions
        +resize: uid+cols+rows
        ...15 events total
    }
    class RendererEvents {
        +session add: Session
        +session data: string
        +session exit: uid
        +termgroup add req: options
        +split request: options
        +move jump req: number
        ...42 events total
    }
    class IpcCommands {
        +getDecoratedConfig() configOptions
        +getDecoratedKeymaps() keymaps
        +getLoadedPluginVersions() versions
        +getPaths() paths
        +child_process.exec() stdout+stderr
        ...8 commands total
    }

The FilterNever<T> utility type at line 98 is a clever pattern: events with payload type never (like close or maximize) don't require a data argument when emitting. The emit method is overloaded to enforce this — calling rpc.emit('close') is valid, but rpc.emit('session data') without the data string would be a type error.
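A self-contained sketch of the idea, with a two-event map standing in for the real one (this FilterNever definition and the overload shapes approximate Hyper's, rather than reproduce them):

```typescript
import { EventEmitter } from "node:events";

// Hypothetical subset of Hyper's event map
type Events = {
  close: never; // no payload
  "session data": string; // payload required
};

// Keep only the keys whose payload type is not `never`
type FilterNever<T> = {
  [K in keyof T as [T[K]] extends [never] ? never : K]: T[K];
};

class Rpc<E> extends EventEmitter {
  // Overload 1: events with a payload require the data argument
  emit<K extends keyof FilterNever<E> & string>(ev: K, data: E[K]): boolean;
  // Overload 2: `never`-payload events take no data argument
  emit<K extends Exclude<keyof E, keyof FilterNever<E>> & string>(ev: K): boolean;
  emit(ev: string, data?: unknown): boolean {
    return super.emit(ev, data);
  }
}

const rpc = new Rpc<Events>();
let payload: string | undefined;
rpc.on("session data", (d: string) => (payload = d));

rpc.emit("close"); // OK: no payload needed
rpc.emit("session data", "hello"); // OK: payload required
// rpc.emit("session data");       // compile error: missing data argument
console.log(payload); // "hello"
```

The `[T[K]] extends [never]` tuple trick prevents the conditional from distributing over `never`, which would otherwise erase the check entirely.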

PTY Session Creation and Environment Setup

When a new terminal tab is requested, the main process creates a Session object that wraps a node-pty pseudo-terminal:

app/session.ts#L113-L168

The environment setup is meticulous:

  1. Base environment is cloned from process.env, with AppImage paths cleaned on Linux.
  2. Terminal variables are set: TERM=xterm-256color, COLORTERM=truecolor, TERM_PROGRAM=Hyper.
  3. Locale is detected via os-locale and set as LANG=xx_XX.UTF-8.
  4. Electron leaks are cleaned: GOOGLE_API_KEY is removed to prevent it from appearing in the shell environment.
  5. Plugin decoration: the decorateEnv extension point lets plugins add or modify environment variables.
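A condensed sketch of steps 2 through 4 might look like this. The helper name buildSessionEnv is hypothetical; the AppImage cleanup and plugin decoration steps are omitted, and the locale is hard-coded where Hyper detects it via os-locale:

```typescript
// Illustrative helper assembling a session environment from a base env
// (not Hyper's actual function; steps 1 and 5 from the list are omitted)
function buildSessionEnv(base: NodeJS.ProcessEnv): NodeJS.ProcessEnv {
  const env: NodeJS.ProcessEnv = { ...base }; // clone, never mutate process.env

  // Step 2: terminal identity variables
  env.TERM = "xterm-256color";
  env.COLORTERM = "truecolor";
  env.TERM_PROGRAM = "Hyper";

  // Step 3: locale (hard-coded here; Hyper detects it at runtime)
  env.LANG = "en_US.UTF-8";

  // Step 4: scrub Electron leakage before it reaches the shell
  delete env.GOOGLE_API_KEY;

  return env;
}

const env = buildSessionEnv({ GOOGLE_API_KEY: "leaked", PATH: "/usr/bin" });
console.log(env.TERM, "GOOGLE_API_KEY" in env); // xterm-256color false
```

Cloning first matters: the session environment is per-PTY, so mutations must never bleed back into the Electron process's own environment.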

The shell fallback mechanism (lines 182–218) deserves attention: if a shell exits within 1 second with a non-zero exit code, Hyper assumes the configuration is broken and falls back to the default shell. This prevents users from getting locked out by a misconfigured shell path.
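The heuristic boils down to two checks, sketched here with an illustrative helper (shouldFallback is not Hyper's actual function name; the 1-second threshold matches the description above):

```typescript
// Fallback heuristic: a fast, failing exit suggests a broken shell config
function shouldFallback(exitCode: number, uptimeMs: number): boolean {
  return exitCode !== 0 && uptimeMs < 1000;
}

console.log(shouldFallback(127, 300)); // true  — "command not found" right away
console.log(shouldFallback(0, 300)); // false — clean exit, user typed `exit`
console.log(shouldFallback(1, 5000)); // false — ran fine, failed later
```

Both conditions are necessary: a clean fast exit is a user closing the tab, and a late failure is a shell problem Hyper should surface rather than paper over.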

The DataBatcher Performance Optimization

Terminal emulators can receive thousands of small data chunks per second from the PTY. Sending each one individually through IPC would be devastating for performance. Hyper's DataBatcher solves this with a two-threshold batching strategy:

app/session.ts#L43-L85

flowchart TD
    A[PTY emits data chunk] --> B{Batch size >= 200KB?}
    B -->|Yes| C[Flush immediately]
    B -->|No| D[Append to batch buffer]
    D --> E{Timer running?}
    E -->|No| F[Start 16ms timer]
    E -->|Yes| G[Wait for timer]
    F --> H[Timer fires → Flush]
    G --> H
    C --> I[Reset buffer to UID prefix]
    H --> I
    I --> J[Emit 'flush' → RPC sends to renderer]

The constants — 16ms timeout and 200KB max size — are carefully chosen. 16ms aligns with a 60fps frame budget, ensuring the renderer processes at most one batch per frame. The 200KB cap prevents memory pressure from a single giant batch.

The most subtle optimization is the UID prepending strategy. Each batch is initialized with this.data = this.uid — the 36-character UUID is the first thing in the buffer. When the renderer receives this string, it extracts the UID with a simple d.slice(0, 36) and the data with d.slice(36). This avoids creating a wrapper object for every batch, keeping the IPC payload as a single string.
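A simplified reimplementation shows both thresholds and the UID-prefix trick in one place. This follows the description above rather than mirroring app/session.ts line for line:

```typescript
import { EventEmitter } from "node:events";

// Illustrative constants matching the thresholds described above
const BATCH_DURATION_MS = 16;
const BATCH_MAX_SIZE = 200 * 1024;

// Simplified DataBatcher: buffers chunks, flushing when the 16ms timer
// fires or when the buffer crosses the 200KB cap
class DataBatcher extends EventEmitter {
  private data: string;
  private timeout: NodeJS.Timeout | null = null;

  constructor(private uid: string) {
    super();
    this.data = uid; // every batch starts with the 36-char UID prefix
  }

  write(chunk: string) {
    if (this.data.length + chunk.length >= BATCH_MAX_SIZE) {
      this.flush(); // size threshold: don't wait for the timer
    }
    this.data += chunk;
    if (!this.timeout) {
      // time threshold: at most one flush per ~60fps frame
      this.timeout = setTimeout(() => this.flush(), BATCH_DURATION_MS);
    }
  }

  private flush() {
    if (this.timeout) {
      clearTimeout(this.timeout);
      this.timeout = null;
    }
    this.emit("flush", this.data);
    this.data = this.uid; // reset the buffer to the UID prefix
  }
}

// Two chunks written in the same frame arrive as one batch, and the
// receiver splits UID from payload with slice()
const uid = "0".repeat(36);
const batcher = new DataBatcher(uid);
batcher.on("flush", (d: string) => {
  console.log(d.slice(0, 36) === uid); // true: the UID prefix
  console.log(d.slice(36)); // the coalesced payload "ls\r\nREADME.md\r\n"
});
batcher.write("ls\r\n");
batcher.write("README.md\r\n");
```

Note how the entire batch travels as one string: no object allocation, no JSON envelope, just a fixed-width prefix the renderer can slice off.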

Window as Orchestration Point

All of Hyper's subsystems converge in app/ui/window.ts. The newWindow function creates a BrowserWindow and wires together the RPC server, session management, config subscriptions, and plugin hooks.

app/ui/window.ts#L69-L70

Two critical resources are created per window: an RPC Server and a Map<string, Session> for tracking active terminal sessions.

The session creation flow at app/ui/window.ts#L122-L180 shows CWD preservation in action — when preserveCWD is enabled, the working directory of the active session's PTY process is resolved via native-process-working-directory and used as the starting directory for the new session.

sequenceDiagram
    participant R as Renderer
    participant RPC as RPC Channel
    participant W as Window Manager
    participant S as Session/PTY

    R->>RPC: emit('new', {activeUid, profile})
    RPC->>W: rpc.on('new') handler
    W->>W: Resolve CWD from active session PID
    W->>W: Get decorated session options from plugins
    W->>S: new Session({uid, shell, cwd, ...})
    S->>S: Spawn node-pty with environment
    W->>RPC: emit('session add', {uid, shell, pid, ...})
    RPC->>R: Renderer creates tab/pane

    loop Terminal Output
        S->>S: PTY data → DataBatcher.write()
        S->>W: batcher 'flush' event
        W->>RPC: emit('session data', uid+data)
        RPC->>R: Dispatch SESSION_PTY_DATA
    end

    S->>W: PTY exit event
    W->>RPC: emit('session exit', {uid})
    W->>W: Clean up session from map

The cleanup function at app/ui/window.ts#L359-L365 ensures no resources leak: the RPC server is destroyed, all sessions are killed, and both config and plugin subscriptions are unregistered.

Tip: The window.rpc and window.sessions properties exposed at lines 329–330 are the backdoor that makes Hyper's plugin system possible. Plugin onWindow hooks receive the BrowserWindow object with these properties attached, giving them direct access to the IPC layer and session management.
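The hook shape can be illustrated with a stubbed window object. HyperWindow and the wiring here are a minimal approximation for demonstration, not Hyper's real types:

```typescript
import { EventEmitter } from "node:events";

// Hypothetical minimal shape of the decorated window (in Hyper, onWindow
// receives the real BrowserWindow with rpc and sessions attached)
type HyperWindow = { rpc: EventEmitter; sessions: Map<string, unknown> };

const log: string[] = [];

// A plugin-style onWindow hook reacting to session requests via the
// attached RPC server
const onWindow = (win: HyperWindow) => {
  win.rpc.on("new", () => {
    log.push(`sessions in window: ${win.sessions.size}`);
  });
};

// Simulate the hook firing against a stub window
const win: HyperWindow = { rpc: new EventEmitter(), sessions: new Map() };
onWindow(win);
win.rpc.emit("new");
console.log(log[0]); // sessions in window: 0
```

Because the hook sees the same RPC server the window itself uses, a plugin can observe or inject any event in the MainEvents/RendererEvents contract.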

What's Next

We've now traced how data flows between the main and renderer processes. But what happens when terminal data arrives in the renderer? It enters Redux — and Hyper's Redux setup is anything but standard. In the next article, we'll explore a middleware chain where thunk appears twice, a write middleware that bypasses React entirely, and an immutable tree structure that models split panes.