The DI Engine: How VS Code Wires 190+ Services Together

Advanced

Prerequisites

  • Article 1: Architecture and Layering
  • Article 2: Startup and Process Architecture
  • TypeScript decorators and generics
  • Basic dependency injection concepts

In Article 2 we saw CodeMain create about 15 services, then CodeApplication expand that to dozens more, and finally the renderer-side Workbench pull in hundreds of registered singletons. None of this uses a third-party DI framework. Visual Studio Code has its own dependency injection system — compact, powerful, and deeply integrated with TypeScript's type system. Understanding it is essential for reading any part of the codebase.

createDecorator and ServiceIdentifier

Every service in Visual Studio Code is identified by a ServiceIdentifier<T> — an object that doubles as a TypeScript parameter decorator. The factory function createDecorator creates these:

export function createDecorator<T>(serviceId: string): ServiceIdentifier<T> {
    if (_util.serviceIds.has(serviceId)) {
        return _util.serviceIds.get(serviceId)!;
    }
    const id = function (target: Function, key: string, index: number) {
        if (arguments.length !== 3) {
            throw new Error('@IServiceName-decorator can only be used to decorate a parameter');
        }
        storeServiceDependency(id, target, index);
    } as ServiceIdentifier<T>;
    id.toString = () => serviceId;
    _util.serviceIds.set(serviceId, id);
    return id;
}

The trick: id is a function that TypeScript calls as a parameter decorator, but it also carries a type brand via the ServiceIdentifier<T> interface. This dual nature means a single declaration like export const IFileService = createDecorator<IFileService>('fileService') gives you both:

  1. A type-safe lookup key for the DI container.
  2. A parameter decorator for constructor injection.

classDiagram
    class ServiceIdentifier~T~ {
        +type: T
        +(target, key, index): void
        +toString(): string
    }
    
    class createDecorator {
        +createDecorator~T~(serviceId: string): ServiceIdentifier~T~
    }
    
    class _util {
        +serviceIds: Map~string, ServiceIdentifier~
        +DI_TARGET: string
        +DI_DEPENDENCIES: string
        +getServiceDependencies(ctor): Dependency[]
    }
    
    createDecorator ..> ServiceIdentifier : creates
    ServiceIdentifier ..> _util : stores dependency metadata

When you write a constructor like constructor(@ILogService private logService: ILogService), the @ILogService decorator fires at class definition time, recording that this constructor's parameter at the given index depends on the ILogService identifier. This metadata is stored on the constructor function itself via storeServiceDependency(), using the $di$dependencies property.
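The mechanics can be sketched in a few self-contained lines. This is a simplified model, not VS Code's actual code; `Editor` and the manual decorator call are illustrative stand-ins for what the compiler emits:

```typescript
// Simplified model of createDecorator + storeServiceDependency.
// The identifier is a function (usable as a parameter decorator)
// that records { id, index } on the constructor it decorates.

interface ServiceIdentifier<T> {
    (target: Function, key: string | undefined, index: number): void;
    toString(): string;
}

const DI_DEPENDENCIES = '$di$dependencies';

function storeServiceDependency(id: ServiceIdentifier<any>, target: Function, index: number): void {
    const deps: { id: ServiceIdentifier<any>; index: number }[] =
        (target as any)[DI_DEPENDENCIES] ??= [];
    deps.push({ id, index });
}

function createDecorator<T>(serviceId: string): ServiceIdentifier<T> {
    const id = function (target: Function, _key: string | undefined, index: number) {
        storeServiceDependency(id, target, index);
    } as ServiceIdentifier<T>;
    id.toString = () => serviceId;
    return id;
}

// Interface and identifier share a name, as in the real codebase:
interface ILogService { log(msg: string): void; }
const ILogService = createDecorator<ILogService>('logService');

class Editor {
    constructor(readonly logService: ILogService) { }
}
// What `constructor(@ILogService ...)` expands to at class definition time:
ILogService(Editor, undefined, 0);

// The container can now read the dependency metadata off the constructor:
const editorDeps = (Editor as any)[DI_DEPENDENCIES];
```

Note the interface/const name collision is deliberate: TypeScript keeps types and values in separate declaration spaces, so `ILogService` works as both.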

InstantiationService: Graph-Based Resolution

The InstantiationService is the DI container. It holds a ServiceCollection (a simple Map<ServiceIdentifier, instance | SyncDescriptor>) and resolves dependencies by building a graph:

flowchart TD
    A["createInstance(MyClass)"] --> B["Read @decorator metadata<br/>from MyClass constructor"]
    B --> C["For each dependency,<br/>check ServiceCollection"]
    C --> D{Instance exists?}
    D -->|Yes| E["Use existing instance"]
    D -->|No, has SyncDescriptor| F["Add to dependency graph"]
    F --> G["Recursively resolve<br/>descriptor's dependencies"]
    G --> H["Topological sort the graph"]
    H --> I{Cycle detected?}
    I -->|Yes| J["Throw CyclicDependencyError"]
    I -->|No| K["Instantiate in dependency order"]
    K --> L["Return MyClass instance"]

The graph construction and cycle detection happen in the internal _createAndCacheServiceInstance method. When a service depends on another service that's registered as a SyncDescriptor (meaning "not yet instantiated"), the resolver adds an edge to the graph. After all dependencies are collected, it performs a topological sort and instantiates services leaf-first.

The CyclicDependencyError at instantiationService.ts#L21-L26 provides helpful diagnostics — it runs findCycleSlow() to identify the exact cycle path, which is invaluable during development.
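The leaf-first ordering and cycle check can be sketched with a depth-first walk. This is a hypothetical simplification (`resolveOrder` is not a real VS Code function, and the real resolver interleaves graph building with instantiation), but it captures the core idea:

```typescript
// Topological sort with cycle detection over a dependency edge map.
// Each key depends on the services in its value array.
function resolveOrder(edges: Map<string, string[]>): string[] {
    const order: string[] = [];
    const state = new Map<string, 'visiting' | 'done'>();

    const visit = (node: string, path: string[]): void => {
        if (state.get(node) === 'done') return;
        if (state.get(node) === 'visiting') {
            // Analogous to findCycleSlow(): report the exact cycle path
            throw new Error(`cyclic dependency: ${[...path, node].join(' -> ')}`);
        }
        state.set(node, 'visiting');
        for (const dep of edges.get(node) ?? []) visit(dep, [...path, node]);
        state.set(node, 'done');
        order.push(node); // dependencies are pushed first => leaf-first order
    };

    for (const node of edges.keys()) visit(node, []);
    return order;
}

// fileService depends on logService; editorService depends on both
const edges = new Map<string, string[]>([
    ['logService', []],
    ['fileService', ['logService']],
    ['editorService', ['fileService', 'logService']],
]);
const order = resolveOrder(edges);
// leaves come first, so every service's dependencies exist before it is built
```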

The container also supports child scopes via createChild(). A child inherits all parent services but can override specific ones. This is used in the workbench to create scoped containers for specific features.

Lazy Instantiation via SyncDescriptor and Proxy

SyncDescriptor is a simple wrapper: it holds a constructor reference, static arguments, and a crucial supportsDelayedInstantiation flag.

export class SyncDescriptor<T> {
    readonly ctor: any;
    readonly staticArguments: unknown[];
    readonly supportsDelayedInstantiation: boolean;

    constructor(ctor: new (...args: any[]) => T, 
                staticArguments: unknown[] = [], 
                supportsDelayedInstantiation: boolean = false) { ... }
}

When supportsDelayedInstantiation is true, the InstantiationService doesn't create the actual service instance immediately. Instead, it creates a JavaScript Proxy that stands in for the service. The real instance is only constructed the first time a member is accessed on the proxy.

sequenceDiagram
    participant Consumer
    participant Proxy as JS Proxy
    participant DI as InstantiationService
    participant Service as Real Service

    Consumer->>Proxy: accessor.get(IFooService)
    Note over Proxy: Proxy returned, not real instance
    Consumer->>Proxy: fooService.doSomething()
    Proxy->>DI: First access! Instantiate now.
    DI->>Service: new FooService(deps...)
    Proxy->>Service: Forward doSomething()
    Service-->>Consumer: Result
    Note over Proxy: Subsequent calls go<br/>directly to Service

This is a massive startup performance optimization. Many services are registered during boot but aren't needed until the user performs a specific action. Without lazy proxies, the workbench would need to instantiate hundreds of services synchronously during startup, blocking the first paint.
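The proxy trick can be sketched with a hypothetical `createLazy` helper. This is not VS Code's actual implementation (the real one lives inside the InstantiationService and also handles events and idle-time work), just the essence of the pattern:

```typescript
// A Proxy that constructs the real object on first property access,
// then forwards everything to it.
function createLazy<T extends object>(factory: () => T): T {
    let instance: T | undefined;
    return new Proxy({} as T, {
        get(_target, prop) {
            instance ??= factory(); // real construction deferred to first use
            const value = (instance as any)[prop];
            return typeof value === 'function' ? value.bind(instance) : value;
        },
    });
}

let constructed = 0;
class FooService {
    constructor() { constructed++; }
    doSomething(): string { return 'done'; }
}

const foo = createLazy(() => new FooService());
// constructed is still 0 here: nothing has been built yet

const neverUsed = createLazy(() => new FooService());
// never accessed, so never constructed

const result = foo.doSomething();
// the first access triggered construction and forwarded the call
```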

Tip: When registering a new service, use InstantiationType.Delayed (which maps to supportsDelayedInstantiation: true) unless your service has side effects that must run at startup. This is the recommended default.

registerSingleton: The Global Service Registry

The registerSingleton function is the bridge between the barrel files (from Article 1) and the DI container:

const _registry: [ServiceIdentifier<any>, SyncDescriptor<any>][] = [];

export function registerSingleton<T>(id: ServiceIdentifier<T>, 
    ctorOrDescriptor: ..., 
    supportsDelayedInstantiation?: InstantiationType): void {
    if (!(ctorOrDescriptor instanceof SyncDescriptor)) {
        ctorOrDescriptor = new SyncDescriptor(ctorOrDescriptor, [], Boolean(supportsDelayedInstantiation));
    }
    _registry.push([id, ctorOrDescriptor]);
}

It's beautifully simple: push a [ServiceIdentifier, SyncDescriptor] tuple into a module-level array. During workbench startup, Workbench.initServices() calls getSingletonServiceDescriptors() to read this array into the ServiceCollection:

const contributedServices = getSingletonServiceDescriptors();
for (const [id, descriptor] of contributedServices) {
    serviceCollection.set(id, descriptor);
}

When you import a barrel file like workbench.desktop.main.ts, every module it imports has a chance to call registerSingleton() at module evaluation time. By the time initServices() runs, the full service graph is declared. The InstantiationType enum controls whether a service is Eager (instantiated during this loop) or Delayed (instantiated lazily via proxy).
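The whole register-then-drain flow fits in a short sketch. Everything below is illustrative (local stand-ins for SyncDescriptor and registerSingleton, with string ids instead of ServiceIdentifiers), showing only the shape of the pattern:

```typescript
// Descriptors wrap a constructor plus the delayed-instantiation flag.
class SyncDescriptor<T> {
    constructor(
        readonly ctor: new (...args: any[]) => T,
        readonly staticArguments: unknown[] = [],
        readonly supportsDelayedInstantiation = false,
    ) { }
}

// Module-level registry, filled as a side effect of imports.
const _registry: [string, SyncDescriptor<any>][] = [];

function registerSingleton<T>(id: string, ctor: new (...args: any[]) => T, delayed = false): void {
    _registry.push([id, new SyncDescriptor(ctor, [], delayed)]);
}

// "Module evaluation time": barrel imports trigger calls like these.
class LogService { }
class FileService { }
registerSingleton('logService', LogService, true);
registerSingleton('fileService', FileService, true);

// "Startup": drain the registry into the live service collection.
const serviceCollection = new Map<string, SyncDescriptor<any>>();
for (const [id, descriptor] of _registry) {
    serviceCollection.set(id, descriptor);
}
// Every descriptor is now declared, but nothing has been instantiated yet.
```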

The Disposable Pattern and Lifecycle Management

The IDisposable interface — with its single dispose() method — is the foundation of resource management across the entire codebase.

src/vs/base/common/lifecycle.ts defines the key building blocks:

classDiagram
    class IDisposable {
        <<interface>>
        +dispose(): void
    }
    
    class Disposable {
        #_register(disposable): T
        +dispose(): void
    }
    
    class DisposableStore {
        +add(disposable): T
        +clear(): void
        +dispose(): void
    }
    
    class DisposableMap {
        +set(key, disposable): void
        +deleteAndDispose(key): void
        +dispose(): void
    }
    
    class GCBasedDisposableTracker {
        -_registry: FinalizationRegistry
        +trackDisposable(d): void
        +markAsDisposed(d): void
    }
    
    IDisposable <|.. Disposable
    IDisposable <|.. DisposableStore
    Disposable *-- DisposableStore : uses internally
    IDisposable <|.. DisposableMap

The Disposable base class provides a _register() method that adds child disposables to an internal DisposableStore. When the parent is disposed, all registered children are disposed too. This creates a natural ownership tree.
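The ownership tree is easy to model. Here is a minimal sketch of Disposable and DisposableStore, omitting the real implementation's leak tracking and error handling:

```typescript
interface IDisposable { dispose(): void; }

// Collects disposables and disposes them all at once.
class DisposableStore implements IDisposable {
    private readonly _toDispose = new Set<IDisposable>();
    add<T extends IDisposable>(d: T): T { this._toDispose.add(d); return d; }
    dispose(): void {
        for (const d of this._toDispose) d.dispose();
        this._toDispose.clear();
    }
}

// Base class: children registered via _register() die with the parent.
abstract class Disposable implements IDisposable {
    private readonly _store = new DisposableStore();
    protected _register<T extends IDisposable>(d: T): T { return this._store.add(d); }
    dispose(): void { this._store.dispose(); }
}

let disposedCount = 0;
class Widget extends Disposable {
    constructor() {
        super();
        // children are owned by the widget and cleaned up with it
        this._register({ dispose: () => disposedCount++ });
        this._register({ dispose: () => disposedCount++ });
    }
}

const w = new Widget();
w.dispose(); // both registered children are disposed
```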

The GCBasedDisposableTracker is particularly clever: it uses the FinalizationRegistry API to detect disposables that were garbage collected without being disposed — a strong signal of a resource leak. During development, this surfaces warnings like [LEAKED DISPOSABLE] CREATED via: ... with the creation stack trace.

The Event/Emitter System

VS Code's custom Event<T> type is a function that accepts a listener and returns an IDisposable — the subscription itself. The Emitter<T> class backs it, providing fire() to emit values.

What makes this system special is the Event namespace, which provides composable operators:

  • Event.map(event, fn) — transforms event payloads
  • Event.filter(event, predicate) — only fires for matching events
  • Event.debounce(event, merge, delay) — merges rapid fires
  • Event.buffer(event) — buffers events until a listener attaches
  • Event.once(event) — auto-disposes after the first fire

All operators return new Event instances, and all integrate with the disposable pattern. This is the reactive backbone of VS Code — services expose onDidChange* events, and consumers compose them into derived signals without managing raw listener lifecycles.
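The core of the system fits in a few lines. A simplified sketch follows; `mapEvent` mimics the style of the real Event.map operator but is not VS Code's code:

```typescript
interface IDisposable { dispose(): void; }

// An Event<T> is just a function: give it a listener, get a subscription.
type Event<T> = (listener: (e: T) => void) => IDisposable;

class Emitter<T> {
    private readonly _listeners = new Set<(e: T) => void>();
    readonly event: Event<T> = (listener) => {
        this._listeners.add(listener);
        return { dispose: () => this._listeners.delete(listener) };
    };
    fire(e: T): void { for (const l of this._listeners) l(e); }
}

// A composable operator: wraps the source event, transforms each payload.
function mapEvent<T, U>(event: Event<T>, fn: (t: T) => U): Event<U> {
    return (listener) => event((e) => listener(fn(e)));
}

const emitter = new Emitter<number>();
const onDidDouble = mapEvent(emitter.event, (n) => n * 2);

const seen: number[] = [];
const sub = onDidDouble((n) => seen.push(n));
emitter.fire(1);
emitter.fire(2);
sub.dispose();   // unsubscribes through the whole operator chain
emitter.fire(3); // not observed
```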

Registry and Contribution Patterns

The Registry is a simple string-keyed Map used for extension points within the codebase. You register a contribution registry with Registry.add(id, data) and retrieve it with Registry.as<T>(id).
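A minimal sketch of that pattern, with an illustrative color-registry contribution (the key string and registry shape below are examples, not guaranteed to match the real definitions):

```typescript
// String-keyed registry: feature areas add their own contribution
// registries here, and any module can retrieve them by id.
class RegistryImpl {
    private readonly _data = new Map<string, unknown>();
    add(id: string, data: unknown): void {
        if (this._data.has(id)) throw new Error(`registry key '${id}' already used`);
        this._data.set(id, data);
    }
    as<T>(id: string): T { return this._data.get(id) as T; }
}
const Registry = new RegistryImpl();

// A feature area defines its own contribution registry object...
interface IColorRegistry {
    registerColor(id: string): void;
    getColors(): string[];
}
const _colors: string[] = [];
const colorContribution: IColorRegistry = {
    registerColor: (id) => { _colors.push(id); },
    getColors: () => _colors,
};
Registry.add('base.contributions.colors', colorContribution);

// ...and consumers retrieve it, typed, from anywhere in the codebase:
const colorRegistry = Registry.as<IColorRegistry>('base.contributions.colors');
colorRegistry.registerColor('editor.background');
```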

The highest-level use of this pattern is registerWorkbenchContribution2, which registers workbench contributions with a specific lifecycle phase:

export const enum WorkbenchPhase {
    BlockStartup = LifecyclePhase.Starting,    // Blocks editor from showing
    BlockRestore = LifecyclePhase.Ready,       // Blocks UI state restore
    AfterRestored = LifecyclePhase.Restored,   // Views and editors restored
    Eventually = LifecyclePhase.Eventually     // 2-5 seconds after restore
}

src/vs/workbench/common/contributions.ts#L31-L62

flowchart LR
    A["BlockStartup"] --> B["BlockRestore"]
    B --> C["AfterRestored"]
    C --> D["Eventually"]
    
    A -.->|"Essential init<br/>(blocks first paint)"| A1["Keybindings,<br/>Theming"]
    C -.->|"Non-critical<br/>(after UI visible)"| C1["Terminal restore,<br/>Extension recommendations"]
    D -.->|"Background<br/>(2-5s delay)"| D1["Telemetry,<br/>Update checker"]

Most contributions should use WorkbenchPhase.AfterRestored or Eventually. Using BlockStartup delays the first paint and should be reserved for contributions that must run before the user sees anything. The contribution system tracks creation times and logs warnings for contributions that take more than 2ms.

Tip: The { lazy: true } instantiation option defers a contribution until explicitly requested via getWorkbenchContribution(), making it ideal for features that may never be activated in a session.

What's Next

The DI system, disposables, events, and contributions form the foundational patterns that everything in Visual Studio Code is built on. The next article takes us into the extension host — a separate process where third-party code runs, connected to the workbench through an RPC protocol with over 140 proxy interfaces.