Read OSS

Inside the Wire: QUIC and HTTP/2 Edge Connections

Advanced

Prerequisites

  • Articles 1-2 (codebase layout and supervisor connection management)
  • Understanding of QUIC protocol basics (streams, datagrams, connection multiplexing)
  • Understanding of HTTP/2 framing and Go's net/http server

In Article 2, we saw how the Supervisor establishes and manages connections to the Cloudflare edge. Now we go inside those connections. cloudflared supports two transport protocols — QUIC (preferred) and HTTP/2 (fallback) — and they have fundamentally different architectures despite implementing the same interface.

Understanding these internals is essential for debugging connection issues, because the error handling, stream dispatch, and registration flows differ significantly between protocols.

The TunnelConnection Interface

Both protocols are unified behind the TunnelConnection interface:

type TunnelConnection interface {
    Serve(ctx context.Context) error
}

It's intentionally minimal. The Serve method blocks until the connection terminates — returning nil for graceful shutdown or an error for unexpected failures. The Supervisor treats both protocol implementations uniformly through this interface.

classDiagram
    class TunnelConnection {
        <<interface>>
        +Serve(ctx context.Context) error
    }
    class quicConnection {
        -conn quic.Connection
        -orchestrator Orchestrator
        -datagramHandler DatagramSessionHandler
        -controlStreamHandler ControlStreamHandler
        +Serve(ctx) error
    }
    class HTTP2Connection {
        -conn net.Conn
        -server *http2.Server
        -orchestrator Orchestrator
        -controlStreamHandler ControlStreamHandler
        +Serve(ctx) error
        +ServeHTTP(w, r)
    }
    TunnelConnection <|.. quicConnection
    TunnelConnection <|.. HTTP2Connection

The key supporting types are:

  • Orchestrator (the connection.Orchestrator interface): Provides GetOriginProxy() for routing requests and UpdateConfig() for remote configuration changes
  • ControlStreamHandler: Manages the registration handshake and graceful shutdown
  • DatagramSessionHandler: Handles UDP/ICMP datagrams (QUIC only)
  • Credentials (connection.go#L64-L69): Carries account tag, tunnel secret, tunnel ID, and optional endpoint

QUIC Connection: Three Goroutines in an Errgroup

The quicConnection.Serve() method is the heart of the QUIC transport. It launches three concurrent goroutines in an errgroup — if any one fails, all are cancelled:

flowchart TB
    subgraph "errgroup (ctx cancels all on first error)"
        CS["Control Stream<br/>serveControlStream()"]
        AS["Accept Stream Loop<br/>acceptStream()"]
        DH["Datagram Handler<br/>datagramHandler.Serve()"]
    end
    
    QC["quic.Connection"] --> CS
    QC --> AS
    QC --> DH
    
    CS -->|Opens first stream| Edge["Edge Registration RPC"]
    AS -->|Accepts incoming streams| Dispatch["runStream() → CloudflaredServer.Serve()"]
    DH -->|Reads QUIC datagrams| Muxer["UDP Session / ICMP Demux"]
    
    style CS fill:#fff3e0
    style AS fill:#e1f5fe
    style DH fill:#e8f5e9

Goroutine 1: Control Stream — Opens the first QUIC stream to the edge (the edge expects this) and serves the registration RPC. If registration succeeds, it blocks waiting for unregistration or graceful shutdown. When it returns nil (clean unregistration), it waits for the grace period before allowing the errgroup to cancel — this lets in-flight requests complete.

Goroutine 2: Accept Stream Loop — Calls conn.AcceptStream(ctx) in a loop. Each accepted stream is dispatched in a new goroutine via runStream(). This is how proxied requests arrive from the edge.

Goroutine 3: Datagram Handler — Serves the datagram mux/demux layer for UDP sessions and ICMP packets. We'll cover this in depth in Article 5.

The errgroup pattern here means that if the control stream terminates (e.g., edge unregisters the connection), the accept loop and datagram handler are also cancelled. The defer q.Close() ensures the underlying QUIC connection is always closed.

QUIC Stream Dispatch and RPC

When a new QUIC stream arrives via acceptStream(), it enters runStream():

sequenceDiagram
    participant Edge as Cloudflare Edge
    participant QC as quicConnection
    participant SS as CloudflaredServer
    participant OP as OriginProxy

    Edge->>QC: New QUIC stream
    QC->>QC: runStream(quicStream)
    QC->>SS: Serve(ctx, noCloseStream)
    SS->>SS: determineProtocol(stream)
    
    alt Data Stream (protocol signature)
        SS->>SS: handleRequest → ReadConnectRequestData()
        SS->>QC: dispatchRequest()
        alt HTTP/WebSocket
            QC->>OP: ProxyHTTP(w, tracedReq, isWebsocket)
        else TCP
            QC->>OP: ProxyTCP(ctx, rwa, tcpRequest)
        end
    else RPC Stream (protocol signature)
        SS->>SS: handleRPC()
        Note over SS: Cap'n Proto RPC<br/>RegisterUdpSession,<br/>UpdateConfiguration
    end

The CloudflaredServer reads a protocol signature byte from the stream to determine whether it's a data stream (proxied request) or an RPC stream (management operations like UDP session registration or configuration updates). This two-protocol multiplexing over QUIC streams is elegant — it avoids the need for a separate control channel for mid-connection RPCs.

A subtle but important detail: the stream is wrapped in a nopCloserReadWriter before being passed to the server. This prevents the request handler from closing the write side of the stream prematurely — only runStream() itself calls quicStream.CancelWrite(0) on error. Centralizing stream termination in runStream() means a handler can never half-close a stream mid-response, and on failure the edge receives a deliberate RESET_STREAM frame rather than a truncated stream.

The dispatchRequest() method type-switches on the connection type: ConnectionTypeHTTP and ConnectionTypeWebsocket build an http.Request and delegate to ProxyHTTP, while ConnectionTypeTCP creates a ReadWriteAcker and delegates to ProxyTCP.
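The shape of that type-switch can be sketched as follows. The constant names and return strings here are illustrative; only the HTTP/WebSocket-vs-TCP split mirrors the article:

```go
package main

import "fmt"

// ConnectionType mirrors the idea of cloudflared's connection
// types; these names are illustrative, not the exact source.
type ConnectionType int

const (
	TypeHTTP ConnectionType = iota
	TypeWebsocket
	TypeTCP
)

// dispatch sketches dispatchRequest(): HTTP and WebSocket share
// one proxy path, while TCP takes another.
func dispatch(t ConnectionType) string {
	switch t {
	case TypeHTTP, TypeWebsocket:
		return fmt.Sprintf("ProxyHTTP(isWebsocket=%v)", t == TypeWebsocket)
	case TypeTCP:
		return "ProxyTCP"
	default:
		return "error: unknown connection type"
	}
}

func main() {
	fmt.Println(dispatch(TypeWebsocket))
	fmt.Println(dispatch(TypeTCP))
}
```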

HTTP/2 Connection as an HTTP Handler

The HTTP/2 transport takes a fundamentally different approach. HTTP2Connection implements http.Handler, and the Cloudflare edge acts as an HTTP/2 client while cloudflared acts as the server:

func (c *HTTP2Connection) Serve(ctx context.Context) error {
    go func() {
        <-ctx.Done()
        c.close()
    }()
    c.server.ServeConn(c.conn, &http2.ServeConnOpts{
        Context: ctx,
        Handler: c,
    })
    // ...
}

This inverts the typical HTTP model. The edge opens HTTP/2 streams to cloudflared, and each stream triggers ServeHTTP(). The connection type is determined by examining special internal headers:

sequenceDiagram
    participant Edge as Cloudflare Edge
    participant H2 as HTTP2Connection
    participant CS as ControlStreamHandler
    participant OP as OriginProxy

    Edge->>H2: HTTP/2 stream with<br/>Cf-Cloudflared-Proxy-Connection-Upgrade: control-stream
    H2->>H2: determineHTTP2Type(r) → TypeControlStream
    H2->>CS: ServeControlStream()
    
    Edge->>H2: HTTP/2 stream with<br/>Cf-Cloudflared-Proxy-Connection-Upgrade: websocket
    H2->>H2: determineHTTP2Type(r) → TypeWebsocket
    H2->>OP: ProxyHTTP(w, req, isWebsocket=true)
    
    Edge->>H2: HTTP/2 stream with<br/>Cf-Cloudflared-Proxy-Src header
    H2->>H2: determineHTTP2Type(r) → TypeTCP
    H2->>OP: ProxyTCP(ctx, rwa, tcpRequest)
    
    Edge->>H2: Normal HTTP/2 stream
    H2->>H2: determineHTTP2Type(r) → TypeHTTP
    H2->>OP: ProxyHTTP(w, req, isWebsocket=false)

The determineHTTP2Type() function checks headers in priority order:

  1. Cf-Cloudflared-Proxy-Connection-Upgrade: update-configuration → Configuration update
  2. Cf-Cloudflared-Proxy-Connection-Upgrade: websocket → WebSocket
  3. Cf-Cloudflared-Proxy-Src present → TCP stream (WARP routing)
  4. Cf-Cloudflared-Proxy-Connection-Upgrade: control-stream → Control stream
  5. Default → Plain HTTP

Tip: These internal headers (Cf-Cloudflared-Proxy-Connection-Upgrade, Cf-Cloudflared-Proxy-Src) are stripped before forwarding to origin services. They're a private protocol between the edge and cloudflared, not intended for end users.

A key architectural difference from QUIC: HTTP/2 uses activeRequestsWG (a sync.WaitGroup) to track in-flight requests. The close() method waits for all handlers to complete before closing the underlying connection — providing graceful draining.
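The draining pattern is standard Go, and worth seeing in isolation. This sketch uses illustrative names (the field name activeRequestsWG matches the article; the rest is invented for the example):

```go
package main

import (
	"fmt"
	"sync"
)

// conn sketches the activeRequestsWG pattern: each handler
// increments the WaitGroup on entry and decrements on exit, and
// close() blocks until the count reaches zero before tearing
// down the connection.
type conn struct {
	activeRequestsWG sync.WaitGroup
}

func (c *conn) handle(id int, done chan<- int) {
	c.activeRequestsWG.Add(1) // registered before the goroutine starts
	go func() {
		defer c.activeRequestsWG.Done()
		done <- id // simulate finishing an in-flight request
	}()
}

func (c *conn) close() {
	c.activeRequestsWG.Wait() // drain: wait for all handlers to return
	fmt.Println("all in-flight requests drained; closing connection")
}

func main() {
	c := &conn{}
	done := make(chan int, 3)
	for i := 1; i <= 3; i++ {
		c.handle(i, done)
	}
	c.close()
	fmt.Println(len(done), "requests completed before close")
}
```

Note that Add(1) runs synchronously in handle(), before the goroutine starts — calling Add from inside the goroutine would race with Wait.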

Control Stream Registration Handshake

Whether using QUIC or HTTP/2, the control stream follows the same registration protocol. The controlStream.ServeControlStream() method:

sequenceDiagram
    participant CS as controlStream
    participant RC as RegistrationClient
    participant Edge as Cloudflare Edge

    CS->>CS: observer.logConnecting()
    CS->>RC: RegisterConnection(auth, tunnelID, connOptions, connIndex, edgeAddress)
    Edge-->>RC: ConnectionDetails{UUID, Location, TunnelIsRemotelyManaged}
    
    CS->>CS: observer.logConnected(UUID, location)
    CS->>CS: observer.sendConnectedEvent()
    CS->>CS: connectedFuse.Connected()
    
    alt First connection && !remotely managed
        CS->>RC: SendLocalConfiguration(tunnelConfig)
        Note over CS: Push ingress rules to edge<br/>so dashboard can display them
    end
    
    CS->>CS: waitForUnregister()
    
    alt ctx.Done()
        CS->>RC: GracefulShutdown(gracePeriod)
    else gracefulShutdownC
        CS->>RC: GracefulShutdown(gracePeriod)
    end
    
    RC->>Edge: UnregisterConnection
    CS-->>CS: Return

The registration response includes the edge location (e.g., "DFW" for Dallas), a UUID for the connection, and crucially, whether the tunnel is remotely managed. If it's the first connection (connIndex == 0) and the tunnel is not remotely managed, cloudflared pushes its local configuration to the edge via SendLocalConfiguration(). This allows the Cloudflare dashboard to display the current ingress rules even for locally configured tunnels.

After registration, waitForUnregister() blocks on either context cancellation or graceful shutdown. In both cases, it sends a GracefulShutdown RPC with the configured grace period, giving the edge time to drain requests from this connection.

Observer Event System

The Observer provides a channel-based pub/sub system for connection lifecycle events:

flowchart LR
    subgraph Producers
        CS["Control Stream"]
        ETS["EdgeTunnelServer"]
        QT["Quick Tunnel"]
    end
    
    subgraph Observer
        EC["tunnelEventChan<br/>(buffered: 16)"]
        DE["dispatchEvents()<br/>(goroutine)"]
        SC["addSinkChan"]
    end
    
    subgraph Sinks
        UI["Terminal UI"]
        TT["tunnelstate.ConnTracker"]
        Met["Metrics"]
    end
    
    CS -->|Connected, Unregistering| EC
    ETS -->|Reconnecting, Disconnected| EC
    QT -->|SetURL| EC
    EC --> DE
    SC --> DE
    DE --> UI
    DE --> TT
    DE --> Met

Events include: RegisteringTunnel, Connected, Reconnecting, Disconnected, Unregistering, and SetURL. The observer uses a buffered channel (capacity 16) for events and a separate channel for sink registration. The dispatchEvents() goroutine runs forever, multiplexing between new sink registrations and event dispatch.

The non-blocking sendEvent() method drops events when the channel is full rather than blocking the caller — a deliberate choice to prevent connection handling from being slowed by slow event consumers.
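The drop-when-full behavior is a select with a default branch. A minimal sketch (the Event type and channel capacity here are illustrative):

```go
package main

import "fmt"

// Event stands in for the observer's tunnel event type.
type Event struct{ Name string }

// sendEvent sketches the non-blocking send: the default branch
// drops the event when the buffered channel is full, so a slow
// consumer never blocks connection handling.
func sendEvent(ch chan Event, e Event) bool {
	select {
	case ch <- e:
		return true
	default:
		return false // channel full: drop rather than block
	}
}

func main() {
	ch := make(chan Event, 2) // no consumer is draining this channel
	delivered := 0
	for _, name := range []string{"Connected", "Reconnecting", "Disconnected"} {
		if sendEvent(ch, Event{name}) {
			delivered++
		}
	}
	fmt.Println(delivered, "delivered,", 3-delivered, "dropped")
}
```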

Tip: The ConnTracker (registered as an event sink) is used by the protocol fallback system. When HasConnectedWith(protocol) returns true, it means at least one connection has successfully used that protocol, which suppresses unnecessary fallback attempts on other connections.

What's Next

We've now seen how data arrives at cloudflared from the edge — via QUIC streams or HTTP/2 requests. In the next article, we'll follow that data deeper into the system: how ingress rules match requests to origin services, how the Proxy layer dispatches between HTTP, TCP, and local services, and how the Orchestrator enables lock-free hot-reload of the entire proxy configuration.