Three Clients, One Interface: Reth vs. Geth vs. Nethermind in the Base Stack
Prerequisites
- Articles 1-2: Architecture and Startup Sequence
- Basic familiarity with blockchain execution client concepts
- Understanding of multi-stage Docker builds
Client diversity is a survival strategy for blockchain networks. If a critical bug hits one implementation, nodes running different clients keep the network alive. Base supports three execution clients — Reth (Rust), Geth (Go), and Nethermind (.NET) — and the base/node repository gives each one a Dockerfile and an entrypoint script that conform to the same interface. But behind that uniform interface, the three clients differ dramatically in build complexity, feature surface, and — most critically — consensus client compatibility.
This article compares all three paths through their Dockerfiles and entrypoint scripts, revealing design decisions that affect performance, security, and operational flexibility.
Build Pipeline Comparison
Each client's Dockerfile follows a multi-stage build pattern, but the stages and toolchains vary significantly. Here's a side-by-side comparison:
| Aspect | Reth | Geth | Nethermind |
|---|---|---|---|
| Language | Rust | Go | .NET (C#) |
| Dockerfile | `reth/Dockerfile` | `geth/Dockerfile` | `nethermind/Dockerfile` |
| Build stages | 4 (Go + Rust base + Reth build + runtime) | 3 (Go op-node + Go geth + runtime) | 3 (Go op-node + .NET build + runtime) |
| Linker | mold (with SHA256 verification) | System default | System default |
| Build profile | `--profile maxperf` | Default (static build) | `release` config |
| Output binaries | `base-reth-node` + `base-consensus` + `op-node` | `geth` + `op-node` | `nethermind` (directory) + `op-node` |
| Runtime base | `ubuntu:24.04` | `ubuntu:24.04` | `mcr.microsoft.com/dotnet/aspnet:10.0-noble` |
| Build time | ~30-60 min (Rust compilation) | ~5-10 min | ~5-10 min |
The Reth build is significantly heavier than the other two. It installs the mold linker with per-architecture SHA256 verification, then builds with Cargo's maxperf profile — an optimization level that enables aggressive LTO (Link Time Optimization) and other performance-critical compiler flags. This is why Reth Docker builds take 30-60 minutes compared to 5-10 for Geth or Nethermind.
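The checksum-gated install follows a pattern you can exercise locally. This is a minimal sketch of the idea, not the actual Dockerfile code: the artifact here is a local stand-in file, and the digest is the (well-known) SHA256 of that stand-in, where a real build would download the mold release tarball and hard-code its published per-architecture digest.

```shell
#!/usr/bin/env bash
# Sketch of the pinned-checksum pattern used for the mold install.
# /tmp/artifact.bin is a stand-in for the downloaded release tarball.
set -euo pipefail

printf 'hello\n' > /tmp/artifact.bin   # stand-in artifact
EXPECTED_SHA="5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03"

ACTUAL_SHA=$(sha256sum /tmp/artifact.bin | awk '{print $1}')
if [ "$ACTUAL_SHA" != "$EXPECTED_SHA" ]; then
  echo "checksum mismatch: refusing to install" >&2
  exit 1
fi
echo "checksum OK"
```

The point of hard-coding the digest in the Dockerfile is that a tampered download fails the build deterministically, instead of silently linking your node binary with a compromised toolchain.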
```mermaid
flowchart TD
    subgraph "Reth Build (4 stages)"
        R1["Stage 1: golang:1.24<br/>Build op-node"] --> R4
        R2["Stage 2: rust-builder-base<br/>Install mold linker + deps"] --> R3
        R3["Stage 3: reth-base<br/>cargo build --profile maxperf<br/>→ base-reth-node + base-consensus"] --> R4
        R4["Stage 4: ubuntu:24.04<br/>Runtime image"]
    end
    subgraph "Geth Build (3 stages)"
        G1["Stage 1: golang:1.24<br/>Build op-node"] --> G3
        G2["Stage 2: golang:1.24<br/>Build geth (static)"] --> G3
        G3["Stage 3: ubuntu:24.04<br/>Runtime image"]
    end
    subgraph "Nethermind Build (3 stages)"
        N1["Stage 1: golang:1.24<br/>Build op-node (via just)"] --> N3
        N2["Stage 2: dotnet/sdk:10.0<br/>dotnet publish"] --> N3
        N3["Stage 3: dotnet/aspnet:10.0<br/>Runtime image"]
    end
```
Tip: Reth builds are expensive. If you're iterating on configuration changes, use `docker compose build --no-cache` only when you've actually changed versions. For entrypoint-only changes, rebuilds are fast because Docker layer caching preserves the compiled binaries.
Supply Chain Verification in Dockerfiles
All three Dockerfiles implement the same supply chain security pattern we introduced in Part 1: clone at a pinned tag, then verify the commit hash. Let's compare the exact implementation across clients.
Reth (lines 51-53):
```dockerfile
RUN . /tmp/versions.env && git clone $BASE_RETH_NODE_REPO . && \
    git checkout tags/$BASE_RETH_NODE_TAG && \
    bash -c '[ "$(git rev-parse HEAD)" = "$BASE_RETH_NODE_COMMIT" ]' || (echo "Commit hash verification failed" && exit 1)
```
Geth (lines 22-24):
```dockerfile
RUN . /tmp/versions.env && git clone $OP_GETH_REPO --branch $OP_GETH_TAG --single-branch . && \
    git switch -c branch-$OP_GETH_TAG && \
    bash -c '[ "$(git rev-parse HEAD)" = "$OP_GETH_COMMIT" ]'
```
Nethermind (lines 25-27):
```dockerfile
RUN . /tmp/versions.env && git clone $NETHERMIND_REPO --branch $NETHERMIND_TAG --single-branch . && \
    git switch -c $NETHERMIND_TAG && \
    bash -c '[ "$(git rev-parse HEAD)" = "$NETHERMIND_COMMIT" ]'
```
```mermaid
flowchart LR
    A[versions.env] --> B["git clone --branch TAG"]
    B --> C["git rev-parse HEAD"]
    C --> D{"== expected COMMIT?"}
    D -->|Yes| E["Proceed to build"]
    D -->|No| F["FAIL: tag-moving attack?"]
```
The pattern is the same across clients, with subtle differences. Reth clones the full repo history (no --single-branch) and checks out the tag with git checkout tags/ rather than cloning with --branch; Geth and Nethermind use --single-branch for faster clones. Reth also prints an explicit error message ("Commit hash verification failed"), while the others rely solely on the non-zero exit code from the failed test to abort the build.
One more interesting detail: all three also build op-node from the Optimism monorepo. But notice that the Geth Dockerfile builds op-node with make (line 14) while Nethermind uses just (line 14). Both invoke the same build target — this is likely an inconsistency that will converge over time.
Reth: The Feature-Rich Path
Reth's entrypoint at reth/reth-entrypoint is the most capable of the three, offering features the other clients simply don't have.
Flashblocks support (lines 24-29) enables sub-block latency by connecting to a Flashblocks WebSocket endpoint. When RETH_FB_WEBSOCKET_URL is set, the node receives pre-confirmation data before blocks are finalized:
```bash
if [[ -n "${RETH_FB_WEBSOCKET_URL:-}" ]]; then
  ADDITIONAL_ARGS="$ADDITIONAL_ARGS --websocket-url=$RETH_FB_WEBSOCKET_URL"
  echo "Enabling Flashblocks support with endpoint: $RETH_FB_WEBSOCKET_URL"
fi
```
Pruning configuration (lines 70-73) allows fine-grained control over which historical data to keep. The RETH_PRUNING_ARGS variable accepts arguments like --prune.receipts.distance=50000, letting operators trade disk space for historical query capability.
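The same append-if-set pattern shown for Flashblocks plausibly applies to pruning. This is a simplified sketch, not the actual entrypoint code; the variable names mirror the article, and the example flag value is the one the article quotes:

```shell
#!/usr/bin/env bash
# Simplified sketch: fold operator-supplied pruning flags into the
# argument string passed to the node binary.
set -euo pipefail

RETH_PRUNING_ARGS="--prune.receipts.distance=50000"  # example operator setting
ADDITIONAL_ARGS=""

if [ -n "${RETH_PRUNING_ARGS:-}" ]; then
  ADDITIONAL_ARGS="$ADDITIONAL_ARGS $RETH_PRUNING_ARGS"
fi

echo "extra args:$ADDITIONAL_ARGS"
```

The practical consequence: keeping receipts for only the most recent 50,000 blocks saves substantial disk space at the cost of refusing older receipt queries.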
Historical proofs — the multi-stage initialization we covered in Part 2 — is a Reth-only feature that enables serving historical state proofs for L1 verification.
The final exec command on lines 133-158 exposes a comprehensive set of APIs including web3, debug, eth, net, txpool, and miner over both HTTP and WebSocket.
Geth: The Established Default
Although Geth is the default client (CLIENT=geth in .env), its entrypoint at geth/geth-entrypoint offers a narrower feature set, with tuning knobs focused on memory management.
The five cache parameters deserve a closer look:
| Variable | Default | Meaning |
|---|---|---|
| `GETH_CACHE` | 20480 (MB) | Total cache pool size |
| `GETH_CACHE_DATABASE` | 20 (%) | LevelDB read/write cache |
| `GETH_CACHE_GC` | 12 (%) | Garbage collection cache |
| `GETH_CACHE_SNAPSHOT` | 24 (%) | Snapshot generation cache |
| `GETH_CACHE_TRIE` | 44 (%) | Trie node in-memory cache |
These percentage-based allocations sum to 100% of the cache pool. The heavy trie allocation (44% = ~9GB with default settings) reflects L2's read-heavy state access patterns, where trie traversals dominate execution costs.
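A quick back-of-the-envelope check of that split against the 20480 MB default pool confirms the ~9 GB trie figure:

```shell
#!/usr/bin/env bash
# Apply the documented percentage split to the default cache pool.
set -euo pipefail

GETH_CACHE=20480
for entry in database:20 gc:12 snapshot:24 trie:44; do
  name=${entry%%:*}
  pct=${entry##*:}
  echo "$name: $(( GETH_CACHE * pct / 100 )) MB"
done
# trie works out to 9011 MB, roughly 9 GB of the 20 GB pool.
```

Raising `GETH_CACHE` scales all four slices proportionally, which is why the knob set is a total plus percentages rather than four absolute sizes.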
Geth also supports conditional features that the other clients lack:
- Unprotected transactions via `OP_GETH_ALLOW_UNPROTECTED_TXS` — useful for legacy transaction formats
- State scheme selection via `OP_GETH_STATE_SCHEME` — choosing between hash-based and path-based state storage
- Rollup halt via `--rollup.halt=major` (line 78) — automatically stopping the node on major version incompatibilities
Nethermind: The Minimal Path
Nethermind's entrypoint at nethermind/nethermind-entrypoint is the simplest of the three. Its power comes from Nethermind's built-in configuration system:
```bash
exec ./nethermind \
  --config="$OP_NODE_NETWORK" \
```
The --config flag on line 47 points to a built-in network profile (e.g., base-mainnet or base-sepolia). This single flag replaces the dozens of chain-specific parameters that Geth and Reth must each receive individually.
The only optional additions are bootnodes and ethstats monitoring (lines 33-43). Nethermind also uniquely enables health checks (--HealthChecks.Enabled=true on line 58) by default.
Its Dockerfile uses a .NET-specific runtime base image (mcr.microsoft.com/dotnet/aspnet:10.0-noble) rather than plain Ubuntu, and handles cross-compilation with architecture mapping (lines 29-32):
```dockerfile
RUN TARGETARCH=${TARGETARCH#linux/} && \
    arch=$([ "$TARGETARCH" = "amd64" ] && echo "x64" || echo "$TARGETARCH") && \
    dotnet publish src/Nethermind/Nethermind.Runner -c $BUILD_CONFIG -a $arch -o /publish --sc false
```
This remaps Docker's TARGETARCH (amd64/arm64) to .NET's architecture names (x64/arm64) since .NET uses x64 rather than amd64.
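The remap is plain shell, so you can exercise it outside Docker for both platforms the image targets:

```shell
#!/usr/bin/env bash
# Reproduce the Docker-to-.NET architecture remap for both platforms.
set -euo pipefail

for TARGETARCH in linux/amd64 linux/arm64; do
  t=${TARGETARCH#linux/}                                # strip the os/ prefix
  arch=$([ "$t" = "amd64" ] && echo "x64" || echo "$t")
  echo "$TARGETARCH -> $arch"
done
# linux/amd64 -> x64
# linux/arm64 -> arm64
```

Note that only amd64 needs translation; arm64 is spelled the same way in both naming schemes, so it passes through unchanged.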
The base-consensus Asymmetry
This is perhaps the most important finding in the entire codebase, and it's easy to miss. Compare what each Dockerfile copies into the final image:
| File | Reth | Geth | Nethermind |
|---|---|---|---|
| `op-node` binary | ✅ | ✅ | ✅ |
| `base-consensus` binary | ✅ | ❌ | ❌ |
| `base-consensus-entrypoint` | ✅ | ❌ | ❌ |
| `op-node-entrypoint` | ✅ | ✅ | ✅ |
| `consensus-entrypoint` | ✅ | ✅ | ✅ |
Look at the Reth Dockerfile's COPY commands (lines 66-74):
```dockerfile
COPY --from=op /app/op-node/bin/op-node ./
COPY --from=reth-base /app/target/maxperf/base-consensus ./
COPY --from=reth-base /app/target/maxperf/base-reth-node ./
...
COPY base-consensus-entrypoint .
```
Now compare with Geth (lines 37-42):
```dockerfile
COPY --from=op /app/op-node/bin/op-node ./
COPY --from=geth /app/build/bin/geth ./
...
# No base-consensus binary or entrypoint
```
Only Reth's Dockerfile builds and bundles the base-consensus binary. This creates a subtle conflict with the defaults:
```mermaid
flowchart TD
    ENV[".env defaults"] --> C["CLIENT=geth"]
    ENV --> U["USE_BASE_CONSENSUS=true"]
    C --> GETH["Geth Dockerfile<br/>No base-consensus binary"]
    U --> CE["consensus-entrypoint<br/>routes to base-consensus"]
    CE --> FAIL["❌ Exit 1:<br/>Base client is not supported<br/>for this node type"]
    style FAIL fill:#ff6666
```
The .env file defaults to CLIENT=geth and USE_BASE_CONSENSUS=true. But USE_BASE_CONSENSUS=true requires the Reth image. If a user follows the defaults literally, the consensus entrypoint dispatcher will fail because base-consensus-entrypoint doesn't exist in the Geth image.
In practice, this is handled by docker-compose.yml defaulting USE_BASE_CONSENSUS to false on line 17:
```yaml
environment:
  - USE_BASE_CONSENSUS=${USE_BASE_CONSENSUS:-false}
```
So the effective default is false unless explicitly set — but the .env file does set it to true. The resolution depends on which takes precedence in your Docker Compose version. This is the kind of configuration edge case that causes hours of debugging.
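The compose-side fallback is the standard shell `${VAR:-default}` expansion, which fires only when the variable is unset or empty — a quick way to see why a `true` in `.env` wins over the `:-false` fallback:

```shell
#!/usr/bin/env bash
# Demonstrate ${VAR:-default}: the fallback applies only when the
# variable is unset or empty, mirroring compose's substitution rules.
set -euo pipefail

unset USE_BASE_CONSENSUS
echo "unset -> ${USE_BASE_CONSENSUS:-false}"   # unset -> false

USE_BASE_CONSENSUS=""
echo "empty -> ${USE_BASE_CONSENSUS:-false}"   # empty -> false

USE_BASE_CONSENSUS=true
echo "set   -> ${USE_BASE_CONSENSUS:-false}"   # set   -> true
```

In other words, the `:-false` in docker-compose.yml is a safety net for missing values, not an override: once `.env` supplies `true`, substitution passes `true` through.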
Tip: If you want to run base-consensus, set `CLIENT=reth` explicitly. If you want to run Geth or Nethermind, ensure `USE_BASE_CONSENSUS=false` (or leave it unset in `.env`).
Feature Matrix Summary
| Feature | Reth | Geth | Nethermind |
|---|---|---|---|
| base-consensus support | ✅ | ❌ | ❌ |
| Flashblocks | ✅ | ❌ | ❌ |
| Historical proofs | ✅ | ❌ | ❌ |
| Pruning config | ✅ (fine-grained) | ✅ (gcmode) | Built-in |
| Cache tuning | Default | ✅ (5 params) | Default |
| Snap sync | ❌ | ✅ (experimental) | ✅ (via bootnodes) |
| Ethstats | ❌ | ✅ | ✅ |
| Built-in network configs | ❌ | ❌ | ✅ |
| Health checks | ❌ | ❌ | ✅ |
| Log level translation | ✅ (name → -v flags) | Direct (verbosity int) | Direct (name) |
| Build time | ~30-60 min | ~5-10 min | ~5-10 min |
The picture that emerges is clear: Reth is the feature-rich path with exclusive access to Base's newest capabilities (base-consensus, Flashblocks, historical proofs). Geth is the established workhorse with fine-grained cache tuning. Nethermind is the low-maintenance option that leans on built-in configuration.
What's Next
We've seen how versions from versions.json flow into Dockerfiles as pinned dependencies. But who updates versions.json — and how do they prevent downgrades, handle four different version formats, and auto-generate pull requests? In Part 4, we'll dissect the dependency_updater, a Go command-line tool that solves the surprisingly complex problem of keeping four upstream dependencies current while enforcing supply chain security.