cloudflared Architecture: A Map of the Codebase
Prerequisites
- Basic Go knowledge (packages, interfaces, goroutines)
- Conceptual understanding of what Cloudflare Tunnel does (secure outbound connections to Cloudflare edge)
Cloudflare Tunnel is one of those products that sounds simple — "connect your private service to Cloudflare's edge without opening inbound ports" — but the daemon that powers it, cloudflared, is a carefully layered system spanning ~350 Go files across dozens of packages. Before you can meaningfully contribute to or debug this codebase, you need a mental model of how the pieces fit together.
This article provides that model. We'll trace the path from main() through CLI dispatch, explore the three operational modes, map the major package dependencies, and follow config resolution from YAML file to running tunnel. By the end, you'll know exactly where to look for any behavior you want to understand.
The main() Entry Point and CLI Dispatch
Everything starts in cmd/cloudflared/main.go#L51-L97. The main() function does five things in quick succession:
- Disables QUIC ECN (QUIC_GO_DISABLE_ECN=1) — a workaround for bugs in ECN detection
- Sets GOMAXPROCS via automaxprocs to respect container CPU limits
- Creates the CLI app using urfave/cli/v2
- Registers subcommands by calling Init() on each subsystem (tunnel, access, updater, tracing, token, tail, management)
- Launches the app through runApp(), which integrates with OS service managers on Windows/macOS/Linux
The Init() calls are crucial — they're the mechanism by which each subsystem registers its CLI commands and flags before the CLI framework parses arguments. For example, tunnel.Init(bInfo, graceShutdownC) stores the build info and the graceful shutdown channel in package-level variables so they're available when tunnel commands execute later.
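The stash-then-register pattern can be sketched with a simplified stand-in for the CLI types. The command struct below is hypothetical (the real code uses cli.Command from urfave/cli/v2), but the package-level-variable mechanism is the one described above:

```go
package main

import "fmt"

// command mirrors the shape of a CLI subcommand: a name plus an action
// that runs only after argument parsing. (Simplified stand-in; the real
// type is cli.Command from urfave/cli/v2.)
type command struct {
	name   string
	action func() error
}

// Package-level state stashed by Init so actions can reach it when they
// execute much later, during dispatch.
var (
	buildInfo      string
	graceShutdownC chan struct{}
)

// Init stores shared state and returns the subcommands this subsystem
// contributes to the root app -- the same pattern tunnel.Init follows.
func Init(info string, shutdownC chan struct{}) []command {
	buildInfo = info
	graceShutdownC = shutdownC
	return []command{{
		name: "tunnel",
		action: func() error {
			// The stashed state is still reachable here through the
			// package-level variables.
			fmt.Println("build:", buildInfo)
			return nil
		},
	}}
}

func main() {
	shutdown := make(chan struct{})
	cmds := Init("2024.1.0", shutdown)
	_ = cmds[0].action() // prints "build: 2024.1.0"
}
```

The key property is ordering: Init() runs before the CLI framework parses arguments, so by the time an action fires, the shared state is guaranteed to be populated.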
```mermaid
flowchart TD
    A["main()"] --> B["Disable QUIC ECN"]
    B --> C["Set GOMAXPROCS"]
    C --> D["Create cli.App"]
    D --> E["Register subcommands<br/>tunnel.Init(), access.Init(), etc."]
    E --> F["runApp()"]
    F --> G{OS Service Mode?}
    G -->|Windows| H["Windows Service Manager"]
    G -->|macOS| I["Launchd Integration"]
    G -->|Linux| J["systemd Integration"]
    G -->|Generic| K["app.Run(os.Args)"]
```
The runApp() function (platform-specific, see generic_service.go, linux_service.go, etc.) is the bridge between cloudflared and the host OS service manager. On Linux with systemd, it sends the READY=1 notification after startup; on Windows, it integrates with the SCM (Service Control Manager).
Three Operational Modes
When cloudflared receives its parsed CLI arguments, it branches into one of three fundamentally different operating modes. The decision logic lives in cmd/cloudflared/main.go#L166-L184:
```mermaid
flowchart TD
    Start["CLI Parsed"] --> Check{"isEmptyInvocation?<br/>NArg==0 && NumFlags==0"}
    Check -->|Yes| Service["Service Mode<br/>handleServiceMode()"]
    Check -->|No| TunnelCmd["tunnel.TunnelCommand(c)"]
    TunnelCmd --> NameCheck{"--name flag set?"}
    NameCheck -->|Yes| Adhoc["Ad-hoc Named Tunnel<br/>runAdhocNamedTunnel()"]
    NameCheck -->|No| QuickCheck{"--url or --hello-world?"}
    QuickCheck -->|Yes| Quick["Quick Tunnel<br/>RunQuickTunnel()"]
    QuickCheck -->|No| ConfigCheck{"Config has tunnel ID?"}
    ConfigCheck -->|Yes| Error["Error: use 'tunnel run'"]
    ConfigCheck -->|No| NoAction["Error: no valid command"]
    Service --> Overwatch["overwatch.AppManager<br/>Config file watching"]
```
Service mode activates when cloudflared is invoked with zero arguments and zero flags — the typical systemd/launchd scenario. It creates a file watcher on the config file via handleServiceMode() and uses the overwatch.AppManager to manage the tunnel lifecycle. When the config file changes, the overwatch manager restarts the tunnel service.
Tunnel command mode is the explicit cloudflared tunnel path, dispatched through TunnelCommand(). This function implements a three-way dispatch:
- Ad-hoc named tunnel: When --name is provided, runAdhocNamedTunnel() creates the tunnel (if it doesn't exist), optionally routes DNS, and starts it — all in one command.
- Quick tunnel: When --url or --hello-world is set, RunQuickTunnel() provisions an ephemeral tunnel on <random>.trycloudflare.com.
- Explicit run: Users are directed to cloudflared tunnel run <ID> for production use.
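The three-way branch can be condensed into a sketch. The invocation struct and dispatch function are hypothetical names (the real code reads these flags from a *cli.Context inside TunnelCommand), but the decision order matches the description above:

```go
package main

import (
	"errors"
	"fmt"
)

// invocation captures the inputs TunnelCommand inspects. Field names
// are illustrative; the real code reads them from *cli.Context and the
// loaded config file.
type invocation struct {
	name         string // --name
	url          string // --url
	helloWorld   bool   // --hello-world
	configTunnel string // tunnel ID found in the config file
}

// dispatch mirrors the three-way branch described above.
func dispatch(inv invocation) (string, error) {
	switch {
	case inv.name != "":
		return "adhoc", nil // runAdhocNamedTunnel()
	case inv.url != "" || inv.helloWorld:
		return "quick", nil // RunQuickTunnel()
	case inv.configTunnel != "":
		return "", errors.New("tunnel ID found in config; use 'cloudflared tunnel run'")
	default:
		return "", errors.New("no valid command provided")
	}
}

func main() {
	mode, _ := dispatch(invocation{helloWorld: true})
	fmt.Println(mode) // quick
}
```

Note the precedence: --name wins over --url, so a command supplying both gets an ad-hoc named tunnel, not a quick tunnel.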
Access commands (cloudflared access ssh, cloudflared access tcp, etc.) handle the client side of Cloudflare Access, providing SSH proxying, TCP/UDP forwarding, and authentication flows. These are registered via access.Init() and operate independently from the tunnel subsystem.
Tip: If you're debugging startup issues, check whether cloudflared entered service mode vs. tunnel command mode. The absence of any flags or arguments puts it in service mode, which has a completely different initialization path.
Package Dependency Map
The codebase follows a layered architecture where higher-level packages orchestrate lower-level ones. Understanding this layering is the key to navigating the code efficiently.
```mermaid
graph TD
    CMD["cmd/cloudflared/tunnel"] --> SUP["supervisor"]
    CMD --> ORCH["orchestration"]
    SUP --> CONN["connection"]
    SUP --> EDGE["edgediscovery"]
    SUP --> FEAT["features"]
    CONN --> QUIC["quic/v3"]
    CONN --> RPC["tunnelrpc"]
    ORCH --> PROXY["proxy"]
    ORCH --> ING["ingress"]
    ORCH --> FLOW["flow"]
    PROXY --> ING
    ING --> MGMT["management"]
    SUP --> ORCH
    style CMD fill:#e1f5fe
    style SUP fill:#fff3e0
    style CONN fill:#fff3e0
    style ORCH fill:#fff3e0
    style PROXY fill:#e8f5e9
    style ING fill:#e8f5e9
```
| Package | Responsibility | Key Types |
|---|---|---|
| cmd/cloudflared/tunnel | CLI dispatch, config assembly | TunnelCommand(), prepareTunnelConfig() |
| supervisor | Connection lifecycle, HA, retry | Supervisor, EdgeTunnelServer, TunnelConfig |
| connection | Protocol implementations (QUIC, HTTP/2) | quicConnection, HTTP2Connection, Observer |
| orchestration | Hot-reloadable proxy configuration | Orchestrator, Config |
| proxy | Request dispatch to origin services | Proxy |
| ingress | Rule matching, origin service types | Ingress, Rule, OriginService |
| edgediscovery | DNS-based edge address resolution | Edge |
| features | DNS TXT-based feature flags | FeatureSelector |
| tunnelrpc | Cap'n Proto RPC for registration | RegistrationClient |
| quic/v3 | Datagram muxing for UDP/ICMP | DatagramConn, SessionManager |
| flow | Concurrent session limiting | Limiter |
| management | WebSocket management service | ManagementService |
The dependency flow is strictly top-down: cmd/ → supervisor/ → connection/ + orchestration/ → proxy/ + ingress/. This makes the code relatively easy to reason about — lower-level packages never import higher-level ones.
Configuration Resolution and Assembly
Configuration in cloudflared comes from three sources that are merged together: the config file, CLI flags, and remote feature flags fetched via DNS. The resolution follows a well-defined search order.
Config File Discovery
The config/configuration.go#L79-L119 file defines where cloudflared looks for its YAML config:
```mermaid
flowchart LR
    A["~/.cloudflared/"] --> B["~/.cloudflare-warp/"]
    B --> C["~/cloudflare-warp/"]
    C --> D["/etc/cloudflared/"]
    D --> E["/usr/local/etc/cloudflared/"]
    subgraph "Each directory"
    F["config.yml"]
    G["config.yaml"]
    end
```
The search iterates through DefaultConfigSearchDirectories() — user directories first (~/.cloudflared, ~/.cloudflare-warp, ~/cloudflare-warp), then system directories on Unix (/etc/cloudflared, /usr/local/etc/cloudflared). Within each directory, it checks for config.yml then config.yaml. First match wins.
On Windows, the search order is different: it checks %CFDPATH% or falls back to %ProgramFiles(x86)%\cloudflared.
The prepareTunnelConfig() Assembly
Once the config file is loaded and CLI flags are parsed, the massive prepareTunnelConfig() function assembles the final supervisor.TunnelConfig. This is the single most important function to understand for anyone debugging tunnel startup — it's where all configuration sources converge.
Key decisions made in this function:
- Feature selection: Creates a FeatureSelector that resolves DNS TXT records for feature flags
- Post-quantum enforcement: If PQ mode is strict, forces the QUIC protocol
- Ingress rule parsing: Merges config file and CLI ingress rules
- Protocol selection: Creates a ProtocolSelector based on the --protocol flag and remote percentages
- TLS configuration: Builds per-protocol TLS configs with the correct server names (h2.cftunnel.com for HTTP/2, quic.cftunnel.com for QUIC)
- Edge IP settings: Resolves IPv4/IPv6 preferences and bind addresses
- ICMP router: Initializes the ICMP proxy if source addresses can be determined
Tip: When troubleshooting connection failures, prepareTunnelConfig() is the place to look first. Most misconfiguration errors surface here before any network connection is attempted.
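To make one of those decisions concrete, here is the per-protocol TLS server-name mapping as a sketch. The edgeServerName helper is hypothetical (the real code builds full tls.Config values per protocol), but the server names are the ones listed above:

```go
package main

import "fmt"

// edgeServerName returns the TLS server name used when dialing the
// Cloudflare edge for a given tunnel protocol. Illustrative helper;
// the real code assembles complete per-protocol TLS configs.
func edgeServerName(protocol string) string {
	switch protocol {
	case "quic":
		return "quic.cftunnel.com"
	case "http2":
		return "h2.cftunnel.com"
	default:
		return "" // unknown protocol: caller must reject it
	}
}

func main() {
	fmt.Println(edgeServerName("quic")) // quic.cftunnel.com
}
```

Using a distinct SNI per protocol lets the edge route the TLS handshake to the right protocol terminator before any tunnel bytes flow.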
Data Flow: From Config to Running Tunnel
Let's trace the complete path from startup to a running tunnel, connecting all the pieces we've discussed:
```mermaid
sequenceDiagram
    participant CLI as CLI Parser
    participant TC as TunnelCommand
    participant PC as prepareTunnelConfig
    participant Orch as Orchestrator
    participant Sup as Supervisor
    participant ETS as EdgeTunnelServer
    CLI->>TC: Dispatch (adhoc/quick/run)
    TC->>PC: Assemble config from<br/>flags + YAML + features
    PC-->>TC: TunnelConfig + OrchConfig
    TC->>Orch: NewOrchestrator(config)
    Note over Orch: Creates initial Proxy<br/>with ingress rules
    TC->>Sup: StartTunnelDaemon()
    Sup->>Sup: NewSupervisor()<br/>Resolve edge IPs
    Sup->>ETS: initialize() → startFirstTunnel()
    ETS->>ETS: Serve() → serveTunnel() → serveConnection()
    Note over ETS: Dial QUIC or HTTP/2<br/>Register with edge
    ETS-->>Sup: connectedSignal
    Sup->>ETS: Start HA connections 1..N
```
The subcommandContext.run() method (called by runAdhocNamedTunnel or buildRunCommand) is the glue that ties everything together. It calls prepareTunnelConfig() to build the config, creates an Orchestrator with the initial ingress rules, and then calls supervisor.StartTunnelDaemon() to begin the connection lifecycle.
The Orchestrator deserves special attention here: it's created before the supervisor and holds the proxy configuration as an atomic.Value. This means the proxy can be hot-swapped later when the edge pushes new configuration — without restarting any connections. We'll explore this in detail in Article 4.
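The atomic.Value pattern is worth seeing in miniature. This is a simplified sketch (the type names proxyConfig, orchestrator, current, and update are illustrative, not the real identifiers), showing why readers on hot request paths never need a lock:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// proxyConfig stands in for the ingress/proxy state the Orchestrator
// guards. Swapped wholesale, never mutated in place.
type proxyConfig struct {
	version int
	rules   []string
}

// orchestrator holds the live config in an atomic.Value: readers on
// hot connection paths call Load without locking, and a writer can
// Store a complete replacement at any time.
type orchestrator struct {
	config atomic.Value
}

func newOrchestrator(initial *proxyConfig) *orchestrator {
	o := &orchestrator{}
	o.config.Store(initial)
	return o
}

// current is what the proxy consults per request.
func (o *orchestrator) current() *proxyConfig {
	return o.config.Load().(*proxyConfig)
}

// update hot-swaps the config without disturbing open connections;
// in-flight requests keep the snapshot they already loaded.
func (o *orchestrator) update(c *proxyConfig) {
	o.config.Store(c)
}

func main() {
	o := newOrchestrator(&proxyConfig{version: 1, rules: []string{"http://localhost:8080"}})
	o.update(&proxyConfig{version: 2, rules: []string{"http://localhost:9090"}})
	fmt.Println(o.current().version) // 2
}
```

Because each stored value is an immutable snapshot, a request that loaded version 1 can finish against version 1 even after version 2 is installed, which is exactly what makes restart-free reconfiguration safe.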
What's Next
With this map in hand, you know where the code lives and how the pieces relate. In the next article, we'll dive deep into the Supervisor — the event loop that manages multiple high-availability connections to the Cloudflare edge, handles protocol fallback from QUIC to HTTP/2, and implements the DNS-based feature flag system that enables gradual rollouts across millions of tunnels.