# Architecture and Navigation Guide: How apple/container is Organized

## Prerequisites

- Basic familiarity with container concepts (images, namespaces)
- Basic Swift reading ability
- Awareness of macOS processes and inter-process communication at a conceptual level
Most container runtimes on macOS run all your containers inside a single, shared Linux VM. apple/container takes a fundamentally different approach: every container gets its own lightweight virtual machine. This isn't just an implementation detail — it shapes every architectural decision in the codebase, from the process model to the networking stack to the way stdio file descriptors travel between processes. If you want to understand how this project works, you first need to understand why it works this way.
This article is a map. By the end, you'll know exactly where to look for any piece of functionality in the repository, and you'll understand the process boundaries that connect the CLI on your terminal to a Linux kernel booting inside a VM.
## What apple/container Actually Does
The container tool lets you build, run, and manage OCI-compatible Linux containers on Apple Silicon Macs. It consumes standard OCI images — the same images you'd use with Docker or Podman — so there's no lock-in. The key differentiator is the isolation model.
Traditional container runtimes on macOS spin up one Linux VM (often managed by something like Lima or Docker Desktop's backend) and then launch containers as Linux processes inside that shared VM. apple/container instead creates a dedicated VM for each container using Apple's Virtualization.framework. The project's technical overview explains the rationale clearly: each container gets full VM-level isolation, you only mount the data each container needs (rather than everything the shared VM might ever need), and boot times remain comparable to shared-VM containers.
The heavy lifting of VM creation and management is handled by the companion apple/containerization library. The container tool is essentially the user-facing application layer on top of that library — handling CLI interactions, process orchestration, image management, networking, and persistence.
Tip: The `containerization` library appears throughout `Package.swift` as a dependency on nearly every target. When you see types like `VZVirtualMachineManager`, `ContentStore`, or `Platform`, they come from that library, not from this repository.
## The Four-Layer Architecture
apple/container is not one process. It's a coordinated set of five separate executables, organized into four logical layers:
```mermaid
flowchart TD
    CLI["container CLI<br/>(user-facing)"]
    API["container-apiserver<br/>(central coordinator)"]
    RT["container-runtime-linux<br/>(one per container)"]
    NET["container-network-vmnet<br/>(one per network)"]
    IMG["container-core-images<br/>(singleton)"]
    VF["Virtualization.framework"]
    VM["vmnet.framework"]
    XPC_FW["XPC / launchd"]
    CLI -->|XPC| API
    API -->|XPC| RT
    API -->|XPC| NET
    API -->|XPC| IMG
    RT --> VF
    NET --> VM
    API --> XPC_FW
    style CLI fill:#4A90D9,color:#fff
    style API fill:#D94A4A,color:#fff
    style RT fill:#7B68EE,color:#fff
    style NET fill:#2ECC71,color:#fff
    style IMG fill:#F39C12,color:#fff
```
**Layer 1: CLI.** The `container` binary is what users type. It parses arguments, constructs XPC messages, and sends them to the API server. It contains almost no business logic.

**Layer 2: API Server.** The `container-apiserver` is a long-running launch agent. It's the central coordinator — it manages container state, orchestrates network allocation, registers runtime plugins with launchd, and runs two DNS servers. All XPC routes for container, network, volume, and plugin operations are registered here.

**Layer 3: Helper Daemons.** Three helper processes handle specific resource types:

- `container-runtime-linux` — one instance per running container, managing the VM lifecycle
- `container-network-vmnet` — one instance per virtual network, managing IP allocation via vmnet.framework
- `container-core-images` — a singleton managing OCI image storage and registry interactions

**Layer 4: macOS Frameworks.** Virtualization.framework, vmnet.framework, XPC, and launchd provide the OS primitives.
The API server's startup sequence in `APIServer+Start.swift` shows how all these pieces come together. The `run()` method initializes plugin loaders, container services, network services, health check handlers, and two DNS servers — then launches everything concurrently using a `TaskGroup`.
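The launch style is easy to see in miniature. Below is a hedged sketch of that `TaskGroup` pattern; `Service` and `runAll` are invented names for illustration, not types from the repository:

```swift
// A stand-in for one long-running subsystem (DNS server, container service, ...).
// The Service type and runAll function are illustrative, not apple/container's API.
struct Service: Sendable {
    let name: String
    let work: @Sendable () async throws -> Void
}

// Launch every service concurrently in one task group. If any child throws,
// the group cancels the remaining children and rethrows the error.
func runAll(_ services: [Service]) async throws -> [String] {
    var completed: [String] = []
    try await withThrowingTaskGroup(of: String.self) { group in
        for service in services {
            group.addTask {
                try await service.work()
                return service.name
            }
        }
        // Collect completions; rethrows if any child task failed.
        for try await name in group {
            completed.append(name)
        }
    }
    return completed.sorted()
}
```

In the real startup path the children are long-lived servers rather than short tasks, so waiting on the group effectively means "run until something fails or the process shuts down."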
## Swift Package Structure and Target Graph
The Package.swift manifest defines 5 executable targets and ~20 library targets. The executable targets map directly to the processes described above:
| Executable Target | Binary Name | Path |
|---|---|---|
| `container` | `container` | `Sources/CLI` |
| `container-apiserver` | `container-apiserver` | `Sources/Helpers/APIServer` |
| `container-runtime-linux` | `container-runtime-linux` | `Sources/Helpers/RuntimeLinux` |
| `container-network-vmnet` | `container-network-vmnet` | `Sources/Helpers/NetworkVmnet` |
| `container-core-images` | `container-core-images` | `Sources/Helpers/Images` |
The library targets follow a client/server split pattern. Each service has separate targets for its server-side logic and its client-side XPC wrapper:
```mermaid
graph LR
    subgraph "ContainerAPIService"
        APIS["Server<br/>ContainerAPIService"]
        APIC["Client<br/>ContainerAPIClient"]
    end
    subgraph "ContainerSandboxService"
        SS["Server<br/>ContainerSandboxService"]
        SC["Client<br/>ContainerSandboxServiceClient"]
    end
    subgraph "ContainerNetworkService"
        NS["Server<br/>ContainerNetworkService"]
        NC["Client<br/>ContainerNetworkServiceClient"]
    end
    subgraph "ContainerImagesService"
        IS["Server<br/>ContainerImagesService"]
        IC["Client<br/>ContainerImagesServiceClient"]
    end
    APIC --> SC
    APIC --> IC
    APIS --> NC
    APIS --> SC
```
Notice that the API server's server target depends on the network and sandbox client targets — because the API server acts as a client when talking to the helper daemons. This is the key insight: every process boundary is modeled as a client/server pair.
The shared foundation libraries (`ContainerXPC`, `ContainerPlugin`, `ContainerPersistence`, `ContainerResource`) are used by nearly every target. `ContainerXPC` is especially critical — it provides the message format and transport used across all process boundaries.
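To make the split concrete, here is a minimal sketch of a client/server pair sharing one interface. The names and the closure-based transport are invented for illustration; in the repository the transport is XPC:

```swift
// One logical interface, defined once.
protocol IPAllocator {
    func allocateIP(for container: String) throws -> String
}

// Server target: owns the real logic (a toy sequential allocator here).
final class IPAllocatorServer: IPAllocator {
    private var nextHost = 2
    func allocateIP(for container: String) throws -> String {
        defer { nextHost += 1 }
        return "192.168.64.\(nextHost)"
    }
}

// Client target: same interface, but every call becomes a message on a
// transport. A closure stands in for the XPC connection.
final class IPAllocatorClient: IPAllocator {
    private let send: (String) throws -> String
    init(send: @escaping (String) throws -> String) { self.send = send }
    func allocateIP(for container: String) throws -> String {
        try send("allocateIP:\(container)")
    }
}
```

Because both sides conform to the same protocol, call sites don't care which one they hold; swapping the in-process closure for a real connection changes nothing above the transport.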
## Directory Structure Walkthrough
Here's a navigational cheat-sheet for the source tree:
| Directory | Responsibility |
|---|---|
| `Sources/CLI/` | Thin `@main` entry point; delegates immediately to `Application` |
| `Sources/ContainerCommands/` | All CLI command definitions (`run`, `build`, `exec`, etc.) |
| `Sources/ContainerBuild/` | gRPC-based build subsystem for `container build` |
| `Sources/ContainerResource/` | Shared data types: `ContainerConfiguration`, `NetworkConfiguration`, etc. |
| `Sources/ContainerPersistence/` | JSON-on-disk entity store with in-memory index |
| `Sources/ContainerPlugin/` | Plugin discovery, config schema, launchd registration |
| `Sources/ContainerXPC/` | XPC message types, server, and client |
| `Sources/Services/` | Client/server pairs for API, Sandbox, Network, Images services |
| `Sources/Helpers/` | Entry points for the four helper executables |
| `Sources/DNSServer/` | Custom UDP DNS server built on SwiftNIO |
| `Sources/TerminalProgress/` | Terminal progress bar rendering |
| `Sources/ContainerOS/` | OS-level utilities |
| `config/` | Plugin `config.json` files for built-in runtime and network helpers |
The CLI entry point at `ContainerCLI.swift` is remarkably thin — just 19 lines. It delegates entirely to `Application`, which defines the full command tree in `Application.swift`. The command tree uses Swift Argument Parser's `groupedSubcommands` to organize commands into Container, Image, Volume, and Other groups, with `DefaultCommand` as the hidden default subcommand that handles plugin dispatch.
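The hand-off shape itself is simple. A hedged sketch, with a plain dictionary standing in for the Swift Argument Parser command tree that the real `Application.swift` builds:

```swift
// A thin entry point that only delegates, mirroring the shape described above.
// The dispatch table and strings are illustrative, not the real tool's behavior.
enum Application {
    static let commands: [String: ([String]) -> String] = [
        "run":   { args in "would run container \(args.first ?? "?")" },
        "build": { args in "would build image \(args.first ?? "?")" },
    ]

    static func main(_ arguments: [String]) -> String {
        guard let name = arguments.first, let command = commands[name] else {
            // Unknown subcommands fall through to a hidden default command
            // that attempts plugin dispatch, mirroring DefaultCommand's role.
            return "unknown subcommand: trying plugin dispatch"
        }
        return command(Array(arguments.dropFirst()))
    }
}
```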
## XPC as the Communication Backbone
Every process boundary in apple/container uses XPC — Apple's native inter-process communication framework based on Mach messages. The project doesn't use sockets, shared memory, or HTTP between its macOS-side processes. (gRPC is used, but only for communication with Linux processes inside VMs — more on that in Article 6.)
Why XPC? Three reasons:
- **Privilege separation.** XPC connections carry audit tokens that identify the calling process. The server validates that the client has the same effective user ID, preventing cross-user access.
- **launchd integration.** XPC services are registered as Mach services with launchd, which handles on-demand process launching and lifecycle management.
- **File descriptor passing.** XPC can send file descriptors across process boundaries — essential for passing stdio pipes from the CLI through the API server to the container runtime.
The project builds a custom abstraction layer over the raw `xpc_object_t` C API. `XPCServer` provides route-based message dispatch, `XPCClient` wraps the callback-based send API into Swift async/await, and `XPCMessage` provides type-safe accessors over the underlying dictionary. We'll explore these in depth in the next article.
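The callback-to-async bridge in particular follows a standard Swift pattern. A minimal sketch, with a stub closure standing in for the real XPC callback API (`callbackSend` and `send` are invented names):

```swift
// A callback-shaped send function — the shape XPC's C API exposes.
// This stub replies immediately; the real transport crosses a process boundary.
func callbackSend(_ route: String, reply: @escaping (Result<String, Error>) -> Void) {
    reply(.success("reply for \(route)"))
}

// Wrap the callback into async/await with a checked continuation: the caller
// suspends until the reply closure fires exactly once.
func send(route: String) async throws -> String {
    try await withCheckedThrowingContinuation { continuation in
        callbackSend(route) { result in
            continuation.resume(with: result)
        }
    }
}
```

This sketch omits the configurable timeouts the project's client supports; one common way to add them is to race the request against a sleeping task and resume with an error if the sleep wins.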
```mermaid
sequenceDiagram
    participant CLI as container CLI
    participant API as container-apiserver
    participant RT as container-runtime-linux
    CLI->>API: XPC: containerCreate
    API->>API: Persist ContainerSnapshot
    API->>API: Register runtime with launchd
    CLI->>API: XPC: containerBootstrap
    API->>RT: XPC: createEndpoint
    RT-->>API: XPC endpoint
    API->>RT: XPC: bootstrap (via endpoint)
    RT->>RT: Boot Linux VM
    RT-->>API: OK
    API-->>CLI: OK
```
Tip: When reading the codebase, always check which process a file runs in. Code in `Sources/Services/ContainerAPIService/Server/` runs inside `container-apiserver`. Code in `Sources/Services/ContainerAPIService/Client/` runs inside whichever process needs to talk to the API server — usually the CLI. Mixing these up is the fastest way to get confused.
## What's Next
Now that you have the map, the next article dives deep into the XPC communication layer — the custom abstractions that make all this inter-process coordination possible. We'll examine how `XPCServer` dispatches messages by route, how `XPCClient` bridges XPC's callback model into async/await with configurable timeouts, and the clever two-server security pattern that `container-runtime-linux` uses to limit its public attack surface.