Cloudflared Architecture: A Map of the Codebase
Prerequisites
- Basic Go knowledge (goroutines, interfaces, error handling)
- Conceptual understanding of reverse tunnels and the Cloudflare network
If you've ever run cloudflared tunnel run and watched your private web server appear on the public internet — without opening a single firewall port — you've witnessed a small miracle of network engineering. But what actually happens inside that binary? How does a single Go program manage QUIC connections, route traffic, handle live configuration updates, and proxy HTTP, TCP, UDP, and ICMP simultaneously?
This article is your map. We'll walk through the entire cloudflared codebase, understand how its ~50 packages relate to one another, and trace the path a request takes from the Cloudflare edge all the way to your origin service.
What Is cloudflared and Why Does It Exist?
Cloudflared is the daemon-side component of Cloudflare Tunnel. The problem it solves is deceptively simple: expose a private service to the internet without opening ingress ports on your network. Instead of the traditional model where traffic flows inbound to your server, cloudflared flips the connection — it dials out to Cloudflare's edge network and holds persistent connections open. When a request arrives at Cloudflare for your hostname, the edge pushes it down one of those existing connections to cloudflared, which forwards it to your origin service.
```mermaid
flowchart LR
    User([Internet User]) -->|HTTPS| Edge[Cloudflare Edge PoP]
    Edge -->|QUIC/HTTP2| CFD[cloudflared daemon]
    CFD -->|HTTP/TCP/UDP| Origin[Origin Service]
    style Edge fill:#f68b1f,color:#fff
    style CFD fill:#1e3a5f,color:#fff
```
This architecture eliminates the need for public IPs, firewall rules, or VPN infrastructure. Cloudflared maintains up to four simultaneous connections to different edge Points of Presence (PoPs) for high availability, automatically reconnects on failure, and can be reconfigured at runtime without restarts.
The entry point is cmd/cloudflared/main.go#L51-L97, where we see the very first things cloudflared does: disable QUIC ECN (a workaround for detection bugs), register Prometheus build metrics, set GOMAXPROCS automatically, and then configure the urfave/cli application.
Directory Structure and Package Responsibilities
Cloudflared's package layout is well-organized, with clear separation of concerns. Here's a map of the major directories and what they own:
| Package | Responsibility |
|---|---|
| `cmd/cloudflared/` | CLI entry point, subcommand definitions, configuration assembly |
| `supervisor/` | Connection lifecycle management, HA connections, protocol fallback |
| `connection/` | QUIC and HTTP/2 transport implementations, protocol selection |
| `orchestration/` | Runtime configuration management with copy-on-write proxy swap |
| `proxy/` | Request dispatching to origin services (HTTP, TCP, WebSocket) |
| `ingress/` | Ingress rule matching, origin service type hierarchy |
| `quic/v3/` | Datagram v3 UDP session multiplexing over QUIC |
| `edgediscovery/` | DNS-based Cloudflare edge address resolution |
| `features/` | Feature flag system using DNS TXT records |
| `management/` | In-tunnel HTTP management API (chi router) |
| `tunnelrpc/` | Cap'n Proto RPC definitions for edge communication |
| `flow/` | Concurrent session flow limiter |
| `config/` | Config file discovery, YAML parsing, layered configuration |
| `tunnelstate/` | Connection state tracking for dashboard/CLI display |
| `metrics/` | Prometheus metric registration and HTTP server |
| `diagnostic/` | Runtime diagnostics collection |
```mermaid
flowchart TD
    CMD[cmd/cloudflared] --> SUP[supervisor]
    CMD --> ORCH[orchestration]
    SUP --> CONN[connection]
    SUP --> EDGE[edgediscovery]
    CONN --> PROXY[proxy]
    CONN --> QV3[quic/v3]
    PROXY --> ING[ingress]
    ORCH --> PROXY
    ORCH --> ING
    CONN --> RPC[tunnelrpc]
    style CMD fill:#2563eb,color:#fff
    style SUP fill:#7c3aed,color:#fff
    style CONN fill:#dc2626,color:#fff
    style PROXY fill:#059669,color:#fff
```
Tip: When navigating the codebase, start from `cmd/cloudflared/tunnel/cmd.go` — it's the gravitational center where all components are assembled. The `StartServer` function there is the best single function to read for understanding how the pieces fit together.
Three Operational Modes
Cloudflared isn't just a tunnel daemon. It operates in three distinct modes, selected based on the arguments provided:
```mermaid
flowchart TD
    Start[cloudflared invoked] --> Check{Arguments?}
    Check -->|No args, no flags| Service[Service Mode]
    Check -->|tunnel run / flags| Tunnel[Tunnel Daemon Mode]
    Check -->|Subcommand| SubCmd[Subcommand Mode]
    Service --> Watcher[File Watcher + Config Manager + Overwatch]
    Tunnel --> StartServer[StartServer → Supervisor]
    SubCmd --> Various[tunnel create / access / tail / etc.]
```
Tunnel Daemon Mode is the primary mode. When you run cloudflared tunnel run my-tunnel or provide flags like --url, the action function in cmd/cloudflared/main.go#L170-L184 delegates to tunnel.TunnelCommand, which eventually calls StartServer to boot the full daemon.
Service Mode activates when cloudflared is invoked with zero arguments and zero flags — typically when running as a system service. The handleServiceMode function creates a file watcher, config manager, and the overwatch.AppManager to monitor config file changes and restart tunnels automatically.
Subcommand Mode handles administrative commands like tunnel create, tunnel list, access, tail, and management. These are registered in commands() and each has its own action handler.
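The three-way decision above can be reduced to a small dispatch function. This is an illustrative stdlib-only sketch; `selectMode` is invented, and the real logic lives in the urfave/cli wiring in `cmd/cloudflared/main.go`, which is considerably more involved.

```go
package main

import (
	"fmt"
	"strings"
)

// selectMode is a toy reduction of cloudflared's mode dispatch.
// The function name and return values are hypothetical.
func selectMode(args []string) string {
	rest := args[1:] // drop the binary name
	if len(rest) == 0 {
		return "service" // zero args, zero flags: run as a system service
	}
	if strings.HasPrefix(rest[0], "-") {
		return "tunnel-daemon" // bare flags such as --url boot the daemon
	}
	if rest[0] == "tunnel" && len(rest) > 1 && rest[1] == "run" {
		return "tunnel-daemon" // `tunnel run` also boots the full daemon
	}
	return "subcommand" // tunnel create, access, tail, management, ...
}

func main() {
	fmt.Println(selectMode([]string{"cloudflared", "tunnel", "run", "my-tunnel"}))
}
```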
The Tunnel Run Path: CLI to Running Daemon
The most important code path in cloudflared is the initialization sequence that transforms CLI arguments into a running daemon. Let's trace it step by step:
```mermaid
sequenceDiagram
    participant CLI as CLI Parser
    participant TC as TunnelCommand
    participant SS as StartServer
    participant PTC as prepareTunnelConfig
    participant SUP as Supervisor
    CLI->>TC: TunnelCommand(c)
    TC->>TC: Detect named tunnel vs quick tunnel
    TC->>SS: StartServer(c, info, namedTunnel, log)
    SS->>PTC: prepareTunnelConfig(ctx, c, ...)
    PTC-->>SS: TunnelConfig + orchestration.Config
    SS->>SS: Create Orchestrator, metrics, management
    SS->>SUP: StartTunnelDaemon(ctx, config, orchestrator, ...)
    SUP->>SUP: NewSupervisor → Run event loop
```
Step 1: TunnelCommand dispatch. The TunnelCommand function determines whether we're running a named tunnel (with --name) or a quick tunnel (with --url or --hello-world), and reports an error otherwise. Named tunnels go through runAdhocNamedTunnel, which can create and route the tunnel in one step.
Step 2: StartServer assembly. StartServer is the assembly function. It initializes Sentry, sets up tracing, creates the connection Observer, calls prepareTunnelConfig, builds the management service, creates the Orchestrator, starts the metrics server, and finally launches StartTunnelDaemon in a goroutine.
Step 3: prepareTunnelConfig. This function in cmd/cloudflared/tunnel/configuration.go#L114-L280 merges all configuration layers — CLI flags, environment variables, config files — into two structs: supervisor.TunnelConfig (transport-level settings) and orchestration.Config (ingress rules and routing). It also creates the feature selector, protocol selector, TLS configs, and origin dialer service.
Step 4: StartTunnelDaemon. The StartTunnelDaemon function creates a Supervisor and calls its Run method, which enters the main event loop.
End-to-End Request Flow
Now that we know how cloudflared starts, let's follow a single HTTP request from the Cloudflare edge all the way to your origin service:
```mermaid
sequenceDiagram
    participant Edge as Cloudflare Edge
    participant QUIC as quicConnection
    participant RPC as Cap'n Proto RPC
    participant Orch as Orchestrator
    participant Proxy as Proxy
    participant Ingress as Ingress Rules
    participant Origin as Origin Service
    Edge->>QUIC: QUIC stream (request data)
    QUIC->>QUIC: AcceptStream → runStream
    QUIC->>RPC: NewCloudflaredServer.Serve
    RPC->>QUIC: handleDataStream
    QUIC->>QUIC: dispatchRequest
    QUIC->>Orch: GetOriginProxy()
    Orch-->>QUIC: *proxy.Proxy (atomic.Load)
    QUIC->>Proxy: ProxyHTTP(w, tracedReq, isWebsocket)
    Proxy->>Ingress: FindMatchingRule(host, path)
    Ingress-->>Proxy: rule, ruleNum
    Proxy->>Origin: RoundTrip / EstablishConnection
    Origin-->>Proxy: Response
    Proxy-->>Edge: Write response back
```
1. Stream acceptance. The quicConnection.acceptStream loop accepts incoming QUIC streams and dispatches each to runStream in a new goroutine.
2. RPC dispatch. In runStream, the stream is wrapped in a SafeStreamCloser and handed to the Cap'n Proto RPC server, which detects the protocol signature and routes to handleDataStream.
3. Request dispatch. dispatchRequest calls orchestrator.GetOriginProxy() — a lock-free atomic.Value load — to get the current proxy instance, then switches on the request type: HTTP/WebSocket calls ProxyHTTP, TCP calls ProxyTCP.
4. Ingress matching. Inside Proxy.ProxyHTTP, the proxy calls FindMatchingRule with the request's hostname and path. The matched rule's Service field determines which origin to forward to.
5. Origin forwarding. The proxy performs a type-switch on the matched service: HTTPOriginProxy does a standard HTTP round-trip, StreamBasedOriginProxy establishes a bidirectional stream for TCP/WebSocket, and HTTPLocalProxy serves the management API.
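Steps 4 and 5 together can be sketched as a first-match-wins rule lookup followed by a type-switch on the matched service. The types below are cut-down stand-ins: the real `ingress` package compiles hostnames and paths into regexps and carries HTTP transports and dialers, while this sketch uses plain string matching.

```go
package main

import (
	"fmt"
	"strings"
)

// Illustrative stand-ins for the origin service hierarchy.
type originService interface{ String() string }

type httpOrigin struct{ url string }    // analogous to HTTPOriginProxy
type streamOrigin struct{ addr string } // analogous to StreamBasedOriginProxy

func (o httpOrigin) String() string   { return o.url }
func (o streamOrigin) String() string { return o.addr }

type rule struct {
	hostname string // a "*." prefix acts as a wildcard
	path     string // prefix match; empty matches everything
	service  originService
}

// findMatchingRule approximates ingress.FindMatchingRule: first match
// wins, and the final rule is conventionally a catch-all.
func findMatchingRule(rules []rule, host, path string) (rule, int) {
	for i, r := range rules {
		hostOK := r.hostname == host ||
			(strings.HasPrefix(r.hostname, "*.") && strings.HasSuffix(host, r.hostname[1:]))
		if hostOK && strings.HasPrefix(path, r.path) {
			return r, i
		}
	}
	return rules[len(rules)-1], len(rules) - 1
}

// dispatch mirrors the proxy's type-switch on the matched service.
func dispatch(svc originService) string {
	switch svc.(type) {
	case httpOrigin:
		return "HTTP round-trip"
	case streamOrigin:
		return "bidirectional stream"
	default:
		return "unsupported origin"
	}
}

func main() {
	rules := []rule{
		{hostname: "ssh.example.com", service: streamOrigin{addr: "localhost:22"}},
		{hostname: "*.example.com", path: "/", service: httpOrigin{url: "http://localhost:8080"}},
		{service: httpOrigin{url: "http_status:404"}}, // catch-all
	}
	r, n := findMatchingRule(rules, "app.example.com", "/login")
	fmt.Printf("rule %d: %s via %s\n", n, r.service, dispatch(r.service))
}
```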
Tip: The `Orchestrator.GetOriginProxy()` call on every request is the reason cloudflared uses `atomic.Value` instead of a mutex — this is the hottest path in the entire application, and lock-free reads are essential for performance under load.
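The copy-on-write pattern behind that lock-free read fits in a few lines. This is a minimal sketch, not the real orchestration API: the `proxyHolder` type and its methods are invented, but the shape (atomic load on the hot path, whole-object swap on config update) matches the technique described above.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

type proxy struct{ version int }

// proxyHolder mimics the Orchestrator's copy-on-write pattern: readers
// perform a lock-free atomic load on every request, while a config
// update builds a whole new proxy and swaps the pointer in one store.
type proxyHolder struct{ v atomic.Value }

func (h *proxyHolder) getOriginProxy() *proxy { return h.v.Load().(*proxy) }

func (h *proxyHolder) updateConfig(p *proxy) { h.v.Store(p) }

func main() {
	h := &proxyHolder{}
	h.updateConfig(&proxy{version: 1})
	fmt.Println(h.getOriginProxy().version) // hot path: no mutex taken
	h.updateConfig(&proxy{version: 2})      // live reconfiguration
	fmt.Println(h.getOriginProxy().version) // readers see the new proxy
}
```

Because the entire proxy is replaced atomically, in-flight requests keep using the old instance they loaded, while new requests pick up the new one; no reader ever observes a half-updated configuration.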
What's Next
We've built a mental model of how cloudflared is structured: CLI parsing feeds into configuration assembly, which feeds into the Supervisor, which manages connections that accept streams and dispatch them through the proxy layer. In the next article, we'll zoom into the Supervisor — the state machine at the heart of cloudflared's reliability model — and understand how it manages four simultaneous connections, handles protocol fallback from QUIC to HTTP/2, and discovers Cloudflare edge addresses via DNS.