CI/CD Pipeline: Multi-Architecture Docker Builds and the Release Process
Prerequisites
- Articles 1 and 3: Architecture and client differences
- GitHub Actions workflow syntax
- Docker multi-architecture image concepts (manifests, digests)
In Part 4 we saw how the dependency updater creates PRs with version bumps. Those PRs trigger a validation build, and when merged, a production build that pushes multi-architecture images to GitHub Container Registry. The pipeline produces six Docker image builds (three clients × two architectures) that fan out in parallel, then fan back in through three manifest merge jobs.
This final article dissects the GitHub Actions workflows: the fan-out/fan-in build pattern, platform-specific feature flags, multi-arch manifest merging, and the layered security hardening that complements the supply chain protections we've seen throughout the series.
Pipeline Overview: Three Clients × Two Architectures
The CI strategy spans two workflow files serving different purposes:
| Workflow | Trigger | Purpose |
|---|---|---|
| docker.yml | Push to main, version tags (v*) | Build, push, and merge multi-arch images |
| pr.yml | Pull requests | Build-only validation (no push) |
The production pipeline in docker.yml creates nine jobs total:
flowchart TD
subgraph "Fan Out: 6 parallel build jobs"
G1["geth<br/>linux/amd64"]
G2["geth<br/>linux/arm64"]
R1["reth<br/>linux/amd64<br/>+asm-keccak"]
R2["reth<br/>linux/arm64"]
N1["nethermind<br/>linux/amd64"]
N2["nethermind<br/>linux/arm64"]
end
subgraph "Fan In: 3 merge jobs"
MG["merge-geth"]
MR["merge-reth"]
MN["merge-nethermind"]
end
G1 -->|digest artifact| MG
G2 -->|digest artifact| MG
R1 -->|digest artifact| MR
R2 -->|digest artifact| MR
N1 -->|digest artifact| MN
N2 -->|digest artifact| MN
MG --> IMG1["ghcr.io/base/node-geth<br/>+ ghcr.io/base/node"]
MR --> IMG2["ghcr.io/base/node-reth"]
MN --> IMG3["ghcr.io/base/node-nethermind"]
The matrix strategy enables true parallel builds across architectures. Each architecture runs on its own runner type — ubuntu-24.04 for amd64 and ubuntu-24.04-arm for arm64. This is native compilation, not cross-compilation via QEMU, which is critical for performance-sensitive builds like Reth's maxperf profile.
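As a small illustration (not code from the repo), the native-build guarantee amounts to a direct mapping from the runner's own CPU architecture to the Docker platform it builds, with no emulation layer in between:

```shell
# Illustration only: each runner builds its own native architecture,
# so the target platform follows directly from `uname -m`.
platform_for() {
  case "$1" in
    x86_64)  echo "linux/amd64" ;;   # ubuntu-24.04 runners
    aarch64) echo "linux/arm64" ;;   # ubuntu-24.04-arm runners
    *)       echo "unsupported: $1" >&2; return 1 ;;
  esac
}

platform_for "$(uname -m)"
```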
The Build Job Pattern
Let's trace the anatomy of a single build job using the Geth build as our reference. The job definition starts at line 24 of docker.yml:
sequenceDiagram
participant GH as GitHub Runner
participant GHCR as Container Registry
participant ART as Artifact Storage
GH->>GH: 1. Harden runner
GH->>GH: 2. Checkout code
GH->>GHCR: 3. Docker login
GH->>GH: 4. Extract metadata (tags, labels)
GH->>GH: 5. Setup buildx
GH->>GHCR: 6. Build & push by digest
GH->>ART: 7. Upload digest artifact
The key insight is step 6: the build doesn't push a tagged image. Instead, it pushes an untagged image identified only by its content digest (SHA256 hash), using push-by-digest=true:
outputs: type=image,push-by-digest=true,name-canonical=true,push=true
The digest is then exported and uploaded as a GitHub Actions artifact (lines 71-88):
- name: Export digest
run: |
mkdir -p ${{ runner.temp }}/digests
digest="${{ steps.build.outputs.digest }}"
touch "${{ runner.temp }}/digests/${digest#sha256:}"
- name: Upload digest
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02
with:
name: digests-geth-${{ env.PLATFORM_PAIR }}
path: ${{ runner.temp }}/digests/*
The digest is stored as an empty file whose name is the digest hash. This is a clever trick — no file content is needed, just the filename, and it survives the artifact upload/download process cleanly.
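The round trip is easy to reproduce in a few lines of shell (a sketch with a made-up digest, not the workflow's actual code):

```shell
#!/bin/sh
# Round trip of the digest-as-filename trick, using a made-up digest.
set -eu
dir=$(mktemp -d)

# Build-job side: strip the "sha256:" prefix and create an empty file.
digest="sha256:9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
touch "$dir/${digest#sha256:}"

# Merge-job side: the filename alone is enough to rebuild a pinned reference.
cd "$dir"
for f in *; do
  echo "ghcr.io/base/node-geth@sha256:$f"
done
```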
Platform-Specific Build Features
The most interesting matrix variation is in the Reth build (lines 89-99):
reth:
strategy:
matrix:
settings:
- arch: linux/amd64
runs-on: ubuntu-24.04
features: jemalloc,asm-keccak,optimism
- arch: linux/arm64
runs-on: ubuntu-24.04-arm
features: jemalloc,optimism
The asm-keccak feature is only enabled on amd64. It swaps in hand-tuned x86-64 assembly for the Keccak-256 hash function — the cryptographic primitive that Ethereum uses heavily. Because that assembly targets the x86-64 instruction set, it cannot run on ARM CPUs, so the feature is excluded from arm64 builds.
The features are passed as a build argument (lines 134-135):
build-args: |
FEATURES=${{ matrix.settings.features }}
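Inside the Dockerfile, a build argument like this typically ends up on the cargo command line. A hypothetical sketch (the repo's actual Dockerfile flags may differ; `maxperf` is the Reth profile mentioned earlier):

```shell
# Hypothetical sketch of how a FEATURES build-arg reaches cargo; the real
# Dockerfile's exact flags may differ.
FEATURES="jemalloc,asm-keccak,optimism"   # amd64 value from the matrix above
cmd="cargo build --profile maxperf --features $FEATURES"
echo "$cmd"
```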
Both Geth and Nethermind have identical matrix configurations across architectures — no platform-specific features. Their differences are at the Dockerfile level (build toolchain), not the CI level.
Tip: The `jemalloc` allocator (included on both architectures) replaces the system malloc and significantly improves Reth's memory-allocation performance under the heavy fragmentation patterns typical of blockchain state management.
The PR validation workflow at pr.yml mirrors the production matrix exactly but sets push: false:
- name: Build the Docker image
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83
with:
push: false
platforms: ${{ matrix.settings.arch }}
This ensures every PR is validated against the same build matrix without polluting the registry with development images.
Manifest Merging: Combining Architecture Digests
The merge jobs are where the multi-arch magic happens. Let's examine merge-geth as the representative example.
flowchart TD
A["Download amd64 digest artifact"] --> C
B["Download arm64 digest artifact"] --> C
C["Both digests in /tmp/digests/"] --> D["docker buildx imagetools create"]
D --> E["Multi-arch manifest"]
E --> F["ghcr.io/base/node-geth:main"]
E --> G["ghcr.io/base/node-geth:sha-9480ec..."]
E --> H["ghcr.io/base/node:main (deprecated)"]
The merge job downloads the digest artifacts from both architecture builds using pattern matching:
- name: Download digests
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093
with:
path: ${{ runner.temp }}/digests
pattern: digests-geth-*
merge-multiple: true
Then the metadata action generates the appropriate tags (branch name, Git tag if present, long SHA):
tags: |
type=ref,event=branch # "main" for pushes to main
type=ref,event=tag # "v1.2.3" for version tags
type=sha,format=long # "sha-9480ec16531c2f222ea18eba6efed235e7210381"
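A rough simulation of those three rules (an illustration, not the action's real implementation): branch refs map to the branch name, tag refs to the tag name, and every build additionally gets a long-SHA tag.

```shell
# Crude stand-in for docker/metadata-action's tag rules (illustration only).
tags_for() {
  ref="$1"; sha="$2"
  case "$ref" in
    refs/heads/*) echo "${ref#refs/heads/}" ;;   # branch pushes
    refs/tags/*)  echo "${ref#refs/tags/}" ;;    # version tags
  esac
  echo "sha-$sha"                                # always emitted
}

tags_for "refs/heads/main" "9480ec16531c2f222ea18eba6efed235e7210381"
# main
# sha-9480ec16531c2f222ea18eba6efed235e7210381
```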
Finally, docker buildx imagetools create stitches the per-architecture images into a single multi-arch manifest (lines 264-268):
- name: Create manifest list and push
working-directory: ${{ runner.temp }}/digests
run: |
docker buildx imagetools create \
$(jq -cr '.tags | map("-t " + .) | join(" ")' <<< "$DOCKER_METADATA_OUTPUT_JSON") \
$(printf '${{ env.NAMESPACE }}/${{ env.GETH_DEPRECATED_IMAGE_NAME }}@sha256:%s ' *)
The printf ... * glob expands to all digest filenames in the directory — those empty files we created earlier. Each filename is a digest hash, and the @sha256: prefix makes it a valid image reference.
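Both command substitutions can be reproduced with fabricated inputs (the digests and JSON below are made up; `jq` must be installed):

```shell
#!/bin/sh
# Reproduce the two command substitutions with fabricated inputs.
set -eu
tmp=$(mktemp -d)

# Two empty files whose names are (fake) digest hashes, as uploaded by builds.
touch "$tmp/aaaa1111" "$tmp/bbbb2222"

# The jq expression: JSON tag list -> "-t <tag>" flags.
DOCKER_METADATA_OUTPUT_JSON='{"tags":["ghcr.io/base/node-geth:main","ghcr.io/base/node-geth:sha-9480ec1"]}'
printf '%s' "$DOCKER_METADATA_OUTPUT_JSON" |
  jq -cr '.tags | map("-t " + .) | join(" ")'
# -t ghcr.io/base/node-geth:main -t ghcr.io/base/node-geth:sha-9480ec1

# The printf glob: each filename becomes a digest-pinned image reference.
cd "$tmp"
printf 'ghcr.io/base/node-geth@sha256:%s ' *
echo
```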
Geth's merge job is unique in that it pushes to two image names: node-geth (the canonical name) and node (a deprecated alias for backward compatibility). Reth and Nethermind only push to their respective names.
Security Hardening in CI
The CI pipeline implements multiple layers of supply chain security that complement the commit hash verification in Dockerfiles.
Pinned Action SHAs
Every GitHub Action reference uses a full commit SHA rather than a version tag:
# Instead of: uses: actions/checkout@v3
uses: actions/checkout@f43a0e5ff2bd294095638e18286ca9a3d1956744 # v3.6.0
Version tags on GitHub Actions can be moved (just like Git tags on upstream repos). Using commit SHAs ensures the workflow runs exactly the code that was reviewed, regardless of future tag changes. Comments after the SHA indicate the version for human readability.
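The attack this guards against is easy to demonstrate locally: a Git tag is mutable, while a commit SHA is not. A self-contained sketch:

```shell
#!/bin/sh
# Show that a tag can silently move to different code while SHAs stay put.
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "first"
git tag v1.0.0
before=$(git rev-parse v1.0.0)

git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "second"
git tag -f v1.0.0 >/dev/null      # the tag now points at different code
after=$(git rev-parse v1.0.0)

if [ "$before" != "$after" ]; then
  echo "tag v1.0.0 moved: pin the SHA, not the tag"
fi
```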
Harden Runner
Every job starts with step-security/harden-runner:
- name: Harden the runner (Audit all outbound calls)
uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863
with:
egress-policy: audit
This monitors all outbound network connections from the runner, providing an audit trail of every external service the build contacts. While set to audit (not block), it creates visibility into potential supply chain attacks through unexpected outbound connections.
Minimal Permissions
The workflow requests only the permissions it needs (lines 19-21):
permissions:
contents: read
packages: write
The PR workflow is even more restrictive — contents: read only, since it doesn't push images.
| Security Layer | Where | What it protects against |
|---|---|---|
| Pinned action SHAs | All workflows | Action tag-moving attacks |
| harden-runner | All jobs | Unexpected outbound connections |
| Minimal permissions | Workflow-level | Over-privileged token access |
| Commit hash verification | Dockerfiles | Upstream tag-moving attacks |
| versions.json pinning | Repository | Uncontrolled version drift |
| Anti-downgrade validation | Dependency updater | Malicious version rollbacks |
PR Validation and Issue Housekeeping
Beyond the main build pipeline, two supporting workflows keep the repository healthy.
The pr.yml workflow runs the full build matrix on every pull request. Six parallel jobs (three clients × two architectures) must all pass before merge. This catches Dockerfile issues, version mismatches, and upstream build breaks early.
flowchart LR
PR["Pull Request"] --> V["6 parallel builds<br/>(3 clients × 2 archs)"]
V -->|All pass| M["Merge allowed"]
V -->|Any fail| B["Merge blocked"]
The stale.yml workflow runs daily at 12:30 AM UTC and automatically manages inactive issues and PRs:
days-before-stale: 14
days-before-close: 5
Issues and PRs without activity for 14 days get a "stale" label and warning message. After 5 more days of inactivity, they're automatically closed. This prevents the issue tracker from accumulating zombie tickets.
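With GNU date (an assumption about local tooling, not part of the workflow), the timeline for a hypothetical issue last touched on 2024-01-01 works out to:

```shell
# Stale-bot timeline for an example issue (GNU `date -d` syntax assumed).
last_activity="2024-01-01"
date -d "$last_activity + 14 days" +%F   # 2024-01-15: "stale" label applied
date -d "$last_activity + 19 days" +%F   # 2024-01-20: auto-closed (14 + 5)
```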
The Complete CI/CD Pipeline
Let's visualize how all four workflows interact in the lifecycle of a dependency update — the most common trigger for the build pipeline:
flowchart TD
A["Daily cron (1 PM UTC)"] -->|"update-dependencies.yml"| B["Dependency updater runs"]
B -->|"New versions found"| C["Auto-create PR"]
C -->|"pr.yml triggers"| D["6 validation builds<br/>(no push)"]
D -->|"All pass"| E["Reviewer approves & merges"]
E -->|"docker.yml triggers"| F["6 production builds<br/>(push digests)"]
F --> G["3 manifest merge jobs"]
G --> H["ghcr.io/base/node-geth<br/>ghcr.io/base/node-reth<br/>ghcr.io/base/node-nethermind"]
B -->|"No updates"| I["Workflow exits"]
style A fill:#e6f3ff
style H fill:#e6ffe6
From the daily cron trigger to published multi-arch images, the pipeline is fully automated with human review as the only manual gate. The dependency updater finds new versions, creates a PR with diff links, CI validates the build, a reviewer approves, and the merge triggers production image builds.
Series Wrap-Up
Across these five articles, we've traced the complete lifecycle of the base/node repository:
- Architecture: A ~30-file deployment orchestrator with a two-service Docker Compose topology, three-layer configuration, and version-pinned builds
- Boot Sequence: Service ordering via polling loops, consensus dispatcher routing, and Reth's remarkable multi-phase initialization
- Client Comparison: Three clients with fundamentally different build pipelines and feature sets, unified by a common interface — with the critical base-consensus asymmetry
- Dependency Management: A Go CLI that handles four version formats, enforces anti-downgrade protection, and auto-generates PRs with diff links
- CI/CD: Multi-architecture builds with fan-out/fan-in manifest merging and layered supply chain security
The design philosophy that emerges is one of composition over implementation. The repository doesn't implement blockchain logic — it orchestrates upstream implementations with pinned versions, verified builds, and automated updates. Every shell script, Dockerfile, and CI workflow serves this orchestration goal. The result is that a single docker compose up can produce a production-ready Base L2 node backed by any of three execution clients, built from verified source code, running on any architecture.
Tip: The best way to solidify your understanding is to run it. Clone the repo, set `CLIENT=reth`, configure your L1 endpoints in `.env.mainnet`, and run `docker compose up`. Watch the execution client start, the consensus entrypoint poll for readiness, and the Engine API connection establish. Then read the logs alongside the entrypoint scripts — the code will make complete sense.