The Plugin System and launchd Integration

Advanced

Prerequisites

  • Article 1: Architecture and Navigation Guide
  • Article 2: The XPC Communication Layer

One of the more surprising things about apple/container is that its own built-in components — the Linux runtime, the vmnet network manager, the images helper — are all plugins. They follow the exact same discovery and registration mechanism that third-party extensions would use. This isn't a bolted-on extension system; it's the primary way the project composes its own functionality.

This article traces the plugin system from the config.json schema through directory scanning, launchd plist generation, Mach service naming conventions, and the CLI's transparent execvp dispatch for unrecognized subcommands.

Plugin Types and the config.json Schema

Every plugin is a directory containing a config.json file and optionally a bin/ subdirectory with an executable. The PluginConfig struct defines the schema.

The most important distinction is between CLI plugins (which have no servicesConfig) and daemon plugins (which do). Daemon plugins declare one or more services, each with a DaemonPluginType:

classDiagram
    class PluginConfig {
        +abstract: String
        +author: String?
        +servicesConfig: ServicesConfig?
        +isCLI: Bool
    }
    class ServicesConfig {
        +loadAtBoot: Bool
        +runAtLoad: Bool
        +services: [Service]
        +defaultArguments: [String]
    }
    class Service {
        +type: DaemonPluginType
        +description: String?
    }
    class DaemonPluginType {
        <<enumeration>>
        runtime
        network
        core
        auxiliary
    }
    PluginConfig --> ServicesConfig
    ServicesConfig --> Service
    Service --> DaemonPluginType

Type        Lifetime                         Example
runtime     One instance per container       container-runtime-linux
network     One instance per network         container-network-vmnet
core        Singleton, tied to API server    container-core-images
auxiliary   Reserved for future use

The built-in runtime plugin's config.json is instructive:

{
    "abstract": "Linux container runtime plugin",
    "servicesConfig": {
        "loadAtBoot": false,
        "runAtLoad": false,
        "services": [{ "type": "runtime" }],
        "defaultArguments": []
    }
}

Note that loadAtBoot is false — runtime plugins aren't registered with launchd when the API server starts. Instead, they're registered on-demand when a container is created. The network plugin's config.json has the same loadAtBoot: false but sets runAtLoad: true, meaning it starts executing as soon as it's loaded into launchd.

Tip: The isCLI computed property at line 99 is a simple nil check: servicesConfig == nil. A plugin with no services configuration is a CLI plugin. The two roles are mutually exclusive: the moment a plugin declares a servicesConfig, it is treated as a daemon plugin, and the CLI dispatch path will not exec it.
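As a rough illustration, the schema can be modeled with Codable. This is a minimal sketch inferred from the class diagram and the sample config.json above, not the project's actual declaration:

```swift
import Foundation

// Minimal sketch of the config.json schema; field types are inferred
// from the class diagram above and may differ from the real PluginConfig.
struct PluginConfig: Codable {
    struct Service: Codable {
        let type: String        // "runtime", "network", "core", or "auxiliary"
        let description: String?
    }
    struct ServicesConfig: Codable {
        let loadAtBoot: Bool
        let runAtLoad: Bool
        let services: [Service]
        let defaultArguments: [String]
    }
    let abstract: String
    let author: String?
    let servicesConfig: ServicesConfig?

    // A plugin with no services configuration is a CLI plugin.
    var isCLI: Bool { servicesConfig == nil }
}
```

Decoding the runtime plugin's config.json shown above yields isCLI == false, since a servicesConfig is present.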

Plugin Discovery: Scanning Directories for config.json

PluginLoader.findPlugins() scans multiple directories for plugin installations. The directories are searched in priority order:

  1. User plugins: <installRoot>/libexec/container-plugins/
  2. App bundle plugins: Bundle.main.resourceURL/plugins/ (for .app installations)
  3. Install root plugins: <installRoot>/libexec/container/plugins/ (for Unix-like installations)

flowchart TD
    A[findPlugins] --> B[For each plugin directory]
    B --> C[List subdirectories]
    C --> D[For each subdirectory]
    D --> E[Try each PluginFactory]
    E --> F{Factory creates Plugin?}
    F -->|Yes| G{Name already seen?}
    F -->|No| H[Try next factory]
    G -->|Yes| I[Skip - shadowed]
    G -->|No| J[Add to results]
    J --> K[Record name in set]

The shadowing mechanism is important: if a plugin with the same name exists in both the user directory and the install root directory, the user's version wins. This allows users to override built-in plugins without modifying the installation. The order matters — user directories are scanned first, and pluginNames is a Set<String> that prevents duplicates.

The PluginFactory protocol enables different directory layouts. DefaultPluginFactory expects a config.json and a bin/ directory. AppBundlePluginFactory handles macOS app bundle layouts. The factory pattern means the discovery logic doesn't need to know about layout details.
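A sketch of the discovery loop under these rules. The PluginFactory signature here is a simplification (the real protocol produces a full Plugin value, not just a name), and the function shape is an assumption:

```swift
import Foundation

// Simplified factory protocol: returns a plugin name if the directory
// matches this factory's expected layout, nil otherwise.
protocol PluginFactory {
    func create(at directory: URL) -> String?
}

// Sketch of the discovery loop: directories are scanned in priority order,
// and a name seen earlier shadows later occurrences.
func findPlugins(in directories: [URL], factories: [PluginFactory]) -> [String] {
    var seen = Set<String>()
    var results: [String] = []
    for dir in directories {
        let subdirs = (try? FileManager.default.contentsOfDirectory(
            at: dir, includingPropertiesForKeys: nil)) ?? []
        for subdir in subdirs {
            // The first factory that recognizes the layout wins.
            guard let name = factories.lazy.compactMap({ $0.create(at: subdir) }).first
            else { continue }
            // insert(_:).inserted is false if the name was already recorded,
            // so a plugin from a higher-priority directory shadows this one.
            guard seen.insert(name).inserted else { continue }
            results.append(name)
        }
    }
    return results
}
```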

launchd Registration: Plist Generation and bootstrap/bootout

When a daemon plugin needs to run, PluginLoader.registerWithLaunchd() generates a launchd plist and registers it:

  1. Construct the launchd label from the plugin name and optional instance ID
  2. Build the command-line arguments (defaulting to ["start"] plus any resource paths and debug flags)
  3. Filter the environment to only pass through CONTAINER_* vars and proxy settings
  4. Generate a LaunchPlist struct with the label, arguments, environment, Mach service names, and session types
  5. Serialize the plist to disk
  6. Call ServiceManager.register(plistPath:) to invoke launchctl bootstrap
sequenceDiagram
    participant API as container-apiserver
    participant PL as PluginLoader
    participant SM as ServiceManager
    participant LD as launchd

    API->>PL: registerWithLaunchd(plugin, instanceId)
    PL->>PL: Generate LaunchPlist
    PL->>PL: Write plist to disk
    PL->>SM: register(plistPath)
    SM->>LD: launchctl bootstrap <domain> <plist>
    LD->>LD: Register Mach services
    LD-->>SM: OK

ServiceManager is a thin wrapper around /bin/launchctl. It shells out to launchctl bootstrap for registration, launchctl bootout for deregistration, launchctl kickstart for restarts, and launchctl kill for signal delivery. The domain is determined dynamically by querying launchctl managername — which returns Aqua for GUI sessions, Background for background sessions, or System for system sessions, mapping to gui/<uid>, user/<uid>, or system respectively.
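A sketch of that wrapper, assuming a fixed domain string and showing argument construction plus a minimal Process runner (the method names are hypothetical, not the real ServiceManager API):

```swift
import Foundation

// Hypothetical thin wrapper over /bin/launchctl; the subcommand layout
// follows the description above, with the domain injected by the caller.
struct ServiceManager {
    let domain: String  // e.g. "gui/501", "user/501", or "system"

    // Argument construction, kept separate so it is easy to inspect.
    func bootstrapArgs(plistPath: String) -> [String] {
        ["bootstrap", domain, plistPath]
    }
    func bootoutArgs(label: String) -> [String] {
        ["bootout", "\(domain)/\(label)"]
    }
    func kickstartArgs(label: String) -> [String] {
        ["kickstart", "-k", "\(domain)/\(label)"]
    }

    // Shells out to launchctl with the given arguments.
    func run(_ args: [String]) throws {
        let p = Process()
        p.executableURL = URL(fileURLWithPath: "/bin/launchctl")
        p.arguments = args
        try p.run()
        p.waitUntilExit()
    }
}
```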

The environment filtering at PluginLoader.swift#L268-L275 is a security measure: only environment variables starting with CONTAINER_ and common proxy variables (http_proxy, HTTP_PROXY, etc.) are passed to plugin processes. This prevents accidentally leaking sensitive environment variables.
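The filter itself is small. A sketch under the stated rules, where the exact proxy-variable allowlist is an assumption:

```swift
// Proxy variables passed through verbatim; the precise set is an assumption.
let allowedExact: Set<String> = [
    "http_proxy", "https_proxy", "no_proxy",
    "HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY",
]

// Keeps only CONTAINER_*-prefixed variables and the proxy allowlist.
func filteredEnvironment(_ env: [String: String]) -> [String: String] {
    env.filter { key, _ in
        key.hasPrefix("CONTAINER_") || allowedExact.contains(key)
    }
}
```

Everything else, including PATH, HOME, and any credentials in the parent environment, is dropped before the plugin process starts.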

Mach Service Naming Convention

The Mach service naming follows a predictable pattern defined in Plugin.swift#L59-L75:

com.apple.container.{type}.{pluginName}[.{instanceId}]

For example:

  • com.apple.container.runtime.container-runtime-linux.abc123 — a runtime instance for container abc123
  • com.apple.container.network.container-network-vmnet — the singleton network plugin
  • com.apple.container.core.container-core-images — the singleton images plugin
flowchart LR
    subgraph "Singleton plugins"
        N["com.apple.container.network.container-network-vmnet"]
        I["com.apple.container.core.container-core-images"]
    end
    subgraph "Per-instance plugins"
        R1["com.apple.container.runtime.container-runtime-linux.{uuid1}"]
        R2["com.apple.container.runtime.container-runtime-linux.{uuid2}"]
    end

The instance ID suffix is critical for runtime plugins — each container needs its own runtime process with its own Mach service name. The SandboxClient.machServiceLabel method constructs this label when connecting, and the ContainersService uses it when registering the plugin with launchd.

The launchd label follows a slightly different pattern: com.apple.container.{pluginName}[.{instanceId}] — note the absence of the type component. This is because launchd labels must be unique across all services, and the plugin name already includes enough context.
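The two conventions can be captured in a pair of helpers. This is a sketch with assumed parameter names, not the project's actual methods:

```swift
// Mach service name: com.apple.container.{type}.{pluginName}[.{instanceId}]
func machServiceLabel(type: String, plugin: String, instanceId: String? = nil) -> String {
    let base = "com.apple.container.\(type).\(plugin)"
    return instanceId.map { "\(base).\($0)" } ?? base
}

// launchd label: com.apple.container.{pluginName}[.{instanceId}]
// (no type component, per the convention described above)
func launchdLabel(plugin: String, instanceId: String? = nil) -> String {
    let base = "com.apple.container.\(plugin)"
    return instanceId.map { "\(base).\($0)" } ?? base
}
```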

CLI Plugin Dispatch via execvp

The final piece of the plugin system handles CLI extensions. When you type an unrecognized subcommand like container foo, the DefaultCommand catches it.

DefaultCommand is registered as the defaultSubcommand in Application's command configuration, so it receives any arguments that don't match a known command. Its run() method:

  1. Creates a PluginLoader via the API server
  2. Extracts the first argument as the potential plugin name
  3. Searches for a plugin with that name
  4. Validates it's a CLI plugin (plugin.config.isCLI)
  5. Resets signal handlers to defaults (so the plugin can manage its own signals)
  6. Calls plugin.exec(args:) — which invokes execvp
flowchart TD
    A["container foo --bar baz"] --> B[DefaultCommand.run]
    B --> C{Plugin 'foo' exists?}
    C -->|No| D[Print error with hint paths]
    C -->|Yes| E{Is CLI plugin?}
    E -->|No| D
    E -->|Yes| F[Reset SIGINT/SIGTERM]
    F --> G["plugin.exec(args)"]
    G --> H["execvp('/path/to/foo', args)"]
    H --> I[Plugin takes over process]

The execvp call in Plugin.swift#L102-L111 replaces the current process entirely — there's no fork. The plugin binary takes over, and from the user's perspective it looks like a native subcommand. The signal reset at DefaultCommand.swift#L104-L107 ensures the plugin starts with clean signal handling rather than inheriting the CLI's custom handlers.
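A minimal sketch of that dispatch sequence (reset signals, build a NULL-terminated argv, replace the process), using POSIX APIs directly rather than the project's exact code:

```swift
#if canImport(Darwin)
import Darwin
#else
import Glibc
#endif

// Sketch of CLI plugin dispatch. On success, execvp never returns:
// the plugin binary replaces the current process image.
func exec(binaryPath: String, args: [String]) -> Never {
    // Reset handlers so the plugin starts with default signal behavior.
    signal(SIGINT, SIG_DFL)
    signal(SIGTERM, SIG_DFL)

    // argv[0] is the binary path; the array must be NULL-terminated.
    var argv: [UnsafeMutablePointer<CChar>?] = ([binaryPath] + args).map { strdup($0) }
    argv.append(nil)

    execvp(binaryPath, argv)

    // Reached only if execvp failed (e.g. binary missing or not executable).
    perror("execvp")
    exit(1)
}
```

Because there is no fork, the plugin inherits the CLI's file descriptors and exit status reporting for free, which is what makes it indistinguishable from a native subcommand.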

Tip: If you're developing a CLI plugin, the error message when a plugin isn't found includes the exact directories where plugins should be installed. This is computed dynamically from the install root — not hardcoded — so it's always accurate.

What's Next

We've now covered how apple/container finds, loads, and manages its components through the plugin system. The final article in this series examines the build subsystem — a fascinating departure from the XPC-based architecture. container build communicates with a BuildKit process running inside a Linux VM via gRPC over vsock, uses HPACK metadata headers to pass build configuration, and manages bidirectional streaming for progress and terminal resize events. It's a completely different communication model, and the reasons for that difference illuminate the broader architectural philosophy.