Transactions In Motion: Pool Architecture and P2P Propagation
Prerequisites
- Articles 1-3
- Ethereum transaction types (Legacy, EIP-1559, EIP-4844 blobs)
- Basic P2P networking concepts
We've traced blocks through execution and state down to disk. Now let's follow data in the other direction — how transactions arrive at a node, get validated and pooled, propagate across the network, and eventually get included in blocks. This lifecycle involves two major subsystems: the transaction pool (with its SubPool aggregator pattern) and the devp2p networking stack. The handler struct ties them together as the central coordinator.
Transaction Types and Why Multiple Pools
Ethereum has evolved through five transaction types, each defined in core/types/transaction.go:
const (
LegacyTxType = 0x00
AccessListTxType = 0x01
DynamicFeeTxType = 0x02
BlobTxType = 0x03
SetCodeTxType = 0x04
)
Types 0x00 through 0x02 and 0x04 are "regular" transactions — they carry calldata, have similar size characteristics, and follow the same lifecycle. Type 0x03 (blob transactions, introduced by EIP-4844) is fundamentally different: each blob transaction carries up to several hundred kilobytes of blob data. This size difference has profound implications for pool management, network propagation, and eviction strategies.
A single monolithic pool can't efficiently handle both 200-byte regular transactions and 200-kilobyte blob transactions — the eviction policies, memory budgets, and persistence strategies need to be completely different. This is why Geth introduced the SubPool aggregator pattern.
The SubPool Aggregator Pattern
The TxPool struct is not itself a pool — it's an aggregator that manages multiple specialized pool implementations in lockstep:
type TxPool struct {
subpools []SubPool
chain BlockChain
stateLock sync.RWMutex
state *state.StateDB
subs event.SubscriptionScope
quit chan chan error
term chan struct{}
sync chan chan error
}
The aggregator provides a unified API while each SubPool handles its transaction type with optimized strategies:
flowchart TD
INCOMING["Incoming Transaction"] --> FILTER["TxPool.Add()"]
FILTER --> DISPATCH["Dispatch to matching SubPool<br/>(based on Filter())"]
DISPATCH --> LP["legacypool<br/>Types: 0x00, 0x01, 0x02, 0x04<br/>In-memory with journal<br/>Price-based eviction"]
DISPATCH --> BP["blobpool<br/>Type: 0x03<br/>Disk-backed (billy)<br/>Size-aware eviction"]
LP --> PENDING["Pending() — merged view"]
BP --> PENDING
PENDING --> MINER["Miner / Block Builder"]
The SubPool interface is comprehensive — it requires implementations to handle filtering, initialization, reset (on chain head changes), gas tip updates, transaction addition, pending retrieval, and status queries. The Filter(tx) method lets each subpool claim the transaction types it handles.
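To make the dispatch concrete, here is a self-contained sketch of the aggregator pattern. The stand-in types are deliberately simplified — the real interface in core/txpool has more methods (Init, Reset, SetGasTip, Pending, and so on) and richer signatures — but the routing logic is the same: try each subpool's Filter() and hand the transaction to the first one that claims it.

```go
package main

import "fmt"

// Tx is a simplified stand-in for types.Transaction.
type Tx struct {
	Type byte
}

// SubPool is a trimmed version of the interface described above.
type SubPool interface {
	Filter(tx *Tx) bool    // does this pool handle tx's type?
	Add(txs []*Tx) []error // insert a batch of transactions
}

// legacyPool claims types 0x00-0x02 and 0x04.
type legacyPool struct{ added int }

func (p *legacyPool) Filter(tx *Tx) bool { return tx.Type != 0x03 }
func (p *legacyPool) Add(txs []*Tx) []error {
	p.added += len(txs)
	return make([]error, len(txs))
}

// blobPool claims only type 0x03.
type blobPool struct{ added int }

func (p *blobPool) Filter(tx *Tx) bool { return tx.Type == 0x03 }
func (p *blobPool) Add(txs []*Tx) []error {
	p.added += len(txs)
	return make([]error, len(txs))
}

// TxPool aggregates subpools and routes each tx to the first one that claims it.
type TxPool struct{ subpools []SubPool }

func (tp *TxPool) Add(txs []*Tx) []error {
	errs := make([]error, len(txs))
	for i, tx := range txs {
		errs[i] = fmt.Errorf("no subpool for type %#x", tx.Type)
		for _, sp := range tp.subpools {
			if sp.Filter(tx) {
				errs[i] = sp.Add([]*Tx{tx})[0]
				break
			}
		}
	}
	return errs
}

func main() {
	lp, bp := &legacyPool{}, &blobPool{}
	pool := &TxPool{subpools: []SubPool{lp, bp}}
	pool.Add([]*Tx{{Type: 0x02}, {Type: 0x03}, {Type: 0x00}})
	fmt.Println(lp.added, bp.added) // regular txs land in the legacy pool, the blob tx in the blob pool
}
```

The key design point: callers only ever talk to the aggregator's Add(), so new transaction types can be supported by registering another subpool rather than changing call sites.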
In eth.New(), both pools are created and composed:
legacyPool := legacypool.New(config.TxPool, eth.blockchain)
eth.blobTxPool = blobpool.New(config.BlobPool, eth.blockchain, legacyPool.HasPendingAuth)
eth.txPool, err = txpool.New(config.TxPool.PriceLimit, eth.blockchain, []txpool.SubPool{legacyPool, eth.blobTxPool})
The legacyPool.HasPendingAuth callback passed to blobpool.New is a cross-pool coordination point — it lets the blob pool check whether a pending SetCode authorization exists in the legacy pool for a given account.
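The shape of that coordination is worth sketching: the blob pool holds an injected function value rather than a reference to the legacy pool, keeping the two implementations decoupled. The validation logic below is illustrative (the real check lives in the blob pool's validation path), but it shows why a callback is enough:

```go
package main

import "fmt"

// Address is a simplified stand-in for common.Address.
type Address [20]byte

// blobPool holds the injected cross-pool callback instead of a direct
// reference to the legacy pool.
type blobPool struct {
	hasPendingAuth func(addr Address) bool
}

// add sketches the validation step: reject a blob transaction from an
// account that has a pending SetCode authorization in the other pool.
func (p *blobPool) add(from Address) error {
	if p.hasPendingAuth != nil && p.hasPendingAuth(from) {
		return fmt.Errorf("account %x… has a pending SetCode authorization", from[:4])
	}
	return nil
}

func main() {
	delegated := Address{0xaa}
	bp := &blobPool{hasPendingAuth: func(a Address) bool { return a == delegated }}
	fmt.Println(bp.add(delegated) != nil) // rejected
	fmt.Println(bp.add(Address{0xbb}))    // accepted
}
```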
LazyTransaction: Deferred Loading Optimization
When the miner requests pending transactions to build a block, loading the full transaction data for every candidate would be wasteful — especially for multi-megabyte blob transactions that might not even make it into the block. The LazyTransaction pattern solves this:
type LazyTransaction struct {
Pool LazyResolver
Hash common.Hash
Tx *types.Transaction // nil until resolved
Time time.Time
GasFeeCap *uint256.Int
GasTipCap *uint256.Int
Gas uint64
BlobGas uint64
}
A LazyTransaction carries just enough metadata (gas caps, gas limits, hash) for the miner to make ordering and filtering decisions. The full transaction is only loaded via Resolve() when actually needed:
func (ltx *LazyTransaction) Resolve() *types.Transaction {
if ltx.Tx != nil {
return ltx.Tx
}
return ltx.Pool.Get(ltx.Hash)
}
The LazyResolver interface is minimal — just a Get(hash) method. Each SubPool implements it, and the resolver is injected into the lazy transaction so it can pull the full data from whichever pool manages it.
Tip: The comment on Resolve() explains an important design choice — the method intentionally does not cache the resolved transaction if the original pool didn't cache it. For blob transactions, blindly caching could cause memory bloat.
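The payoff of the pattern shows up in the consumer. The sketch below (a simplified model, not Geth's actual miner loop) filters candidates on cheap metadata and only calls Resolve() on the winners, so full-transaction loads happen exactly as often as needed:

```go
package main

import "fmt"

// Simplified stand-ins for the real types in core/txpool.
type Tx struct{ Hash string }

type LazyResolver interface{ Get(hash string) *Tx }

type LazyTransaction struct {
	Pool      LazyResolver
	Hash      string
	Tx        *Tx // nil until resolved
	GasTipCap uint64
}

func (ltx *LazyTransaction) Resolve() *Tx {
	if ltx.Tx != nil {
		return ltx.Tx
	}
	return ltx.Pool.Get(ltx.Hash)
}

// countingPool records how many full-transaction loads actually happen.
type countingPool struct{ loads int }

func (p *countingPool) Get(hash string) *Tx {
	p.loads++
	return &Tx{Hash: hash}
}

// pickTxs models the miner's pattern: filter on metadata first, then
// Resolve() only the transactions that survive the filter.
func pickTxs(lazies []*LazyTransaction, minTip uint64) []*Tx {
	var picked []*Tx
	for _, ltx := range lazies {
		if ltx.GasTipCap < minTip {
			continue // filtered out without ever loading the full tx
		}
		if tx := ltx.Resolve(); tx != nil {
			picked = append(picked, tx)
		}
	}
	return picked
}

func main() {
	pool := &countingPool{}
	lazies := []*LazyTransaction{
		{Pool: pool, Hash: "a", GasTipCap: 5},
		{Pool: pool, Hash: "b", GasTipCap: 1}, // below the tip floor
		{Pool: pool, Hash: "c", GasTipCap: 7},
	}
	picked := pickTxs(lazies, 2)
	fmt.Println(len(picked), pool.loads) // 2 picked, only 2 loads
}
```

For a multi-megabyte blob transaction that loses the fee auction, the saving is the entire disk read.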
The devp2p Networking Stack
Transactions arrive at (and depart from) the node through the devp2p networking stack. The p2p.Server manages peer connections, protocol multiplexing, and discovery:
flowchart TD
subgraph "Discovery"
DNS["DNS Discovery"]
V4["discv4 (UDP)"]
V5["discv5 (UDP)"]
FAIR["FairMix<br/>Balanced source selection"]
end
subgraph "Transport"
RLPX["RLPx Encrypted TCP"]
end
subgraph "Protocols"
ETH_P["eth/68 Protocol"]
SNAP_P["snap/1 Protocol"]
end
DNS --> FAIR
V4 --> FAIR
V5 --> FAIR
FAIR --> RLPX
RLPX --> ETH_P
RLPX --> SNAP_P
The Protocol struct defines a devp2p sub-protocol:
type Protocol struct {
Name string
Version uint
Length uint64
Run func(peer *Peer, rw MsgReadWriter) error
DialCandidates enode.Iterator
Attributes []enr.Entry
}
The Run function is called in a new goroutine for each connected peer — it reads and writes protocol messages via MsgReadWriter. The DialCandidates field provides an iterator of potential peers to connect to, fed by the FairMix discovery mixer.
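A Run implementation is essentially a read loop that dispatches on message codes until the peer disconnects. The sketch below models that shape with simplified stand-ins for p2p.Msg and p2p.MsgReadWriter; the real eth/68 handler decodes RLP payloads and handles many more codes (0x08 is used here as the announcement code, matching eth/68's NewPooledTransactionHashes, but treat that as an assumption of the sketch):

```go
package main

import (
	"errors"
	"fmt"
	"io"
)

// Simplified stand-ins for p2p.Msg and p2p.MsgReadWriter.
type Msg struct {
	Code    uint64
	Payload []byte
}

type MsgReadWriter interface {
	ReadMsg() (Msg, error)
	WriteMsg(Msg) error
}

// run models the shape of a Protocol.Run function: read, dispatch on the
// code, repeat until the peer goes away.
func run(rw MsgReadWriter, handled *int) error {
	for {
		msg, err := rw.ReadMsg()
		if err != nil {
			if errors.Is(err, io.EOF) {
				return nil // peer disconnected cleanly
			}
			return err
		}
		switch msg.Code {
		case 0x08: // a transaction-hash announcement
			*handled++
		default:
			return fmt.Errorf("unexpected message code %#x", msg.Code)
		}
	}
}

// scriptedRW replays a fixed message sequence, then signals EOF.
type scriptedRW struct{ msgs []Msg }

func (s *scriptedRW) ReadMsg() (Msg, error) {
	if len(s.msgs) == 0 {
		return Msg{}, io.EOF
	}
	m := s.msgs[0]
	s.msgs = s.msgs[1:]
	return m, nil
}

func (s *scriptedRW) WriteMsg(Msg) error { return nil }

func main() {
	var handled int
	rw := &scriptedRW{msgs: []Msg{{Code: 0x08}, {Code: 0x08}}}
	err := run(rw, &handled)
	fmt.Println(handled, err) // both announcements handled, clean exit
}
```

Because Run blocks for the lifetime of the connection, returning an error is how a protocol drops a misbehaving peer.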
The FairMix is notable — it combines multiple peer discovery sources (DNS, discv4, discv5) with a fairness guarantee, ensuring no single source monopolizes the connection attempts. The timeout is set to 100ms, giving each source a fair chance to produce candidates each round.
The Handler: P2P-to-Blockchain Glue
The handler struct is the central coordinator between networking and blockchain subsystems:
type handler struct {
nodeID enode.ID
networkID uint64
synced atomic.Bool
database ethdb.Database
txpool txPool
chain *core.BlockChain
maxPeers int
downloader *downloader.Downloader
txFetcher *fetcher.TxFetcher
peers *peerSet
txBroadcastKey [16]byte
// ...
}
sequenceDiagram
participant Peer as Remote Peer
participant Handler as handler
participant TxFetcher as TxFetcher
participant TxPool as TxPool
participant Miner as Miner
Peer->>Handler: NewPooledTransactionHashes (announcement)
Handler->>TxFetcher: Notify(peer, hashes)
TxFetcher->>Peer: GetPooledTransactions (fetch)
Peer-->>TxFetcher: PooledTransactions (response)
TxFetcher->>TxPool: Add(txs)
TxPool-->>Handler: NewTxsEvent
Handler->>Peer: Broadcast or Announce to other peers
Miner->>TxPool: Pending() — pull txs for block
The handler coordinates three key flows:
- Transaction propagation: When a new transaction event fires, the handler decides whether to broadcast the full transaction or just announce its hash. The threshold is txMaxBroadcastSize = 4096 bytes — transactions larger than this are only announced, and peers must explicitly request them. This is crucial for blob transactions.
- Chain synchronization: The downloader handles initial sync and catching up after being offline. It coordinates full sync and snap sync modes.
- Peer management: The peerSet tracks connected peers, and the txBroadcastKey provides deterministic transaction broadcast routing — each node selects a subset of peers to broadcast to based on a SipHash of the transaction hash, ensuring good network coverage without flooding.
The newHandler() constructor sets up the downloader and transaction fetcher, wiring callbacks that connect them to the pool:
fetchTx := func(peer string, hashes []common.Hash) error {
	p := h.peers.peer(peer)
	if p == nil {
		return errors.New("unknown peer")
	}
	return p.RequestTxs(hashes)
}
addTxs := func(txs []*types.Transaction) []error {
return h.txpool.Add(txs, false)
}
h.txFetcher = fetcher.NewTxFetcher(h.chain, validateMeta, addTxs, fetchTx, h.removePeer)
Tip: The synced atomic bool is a critical coordination flag. Transaction processing is disabled until the node considers itself synchronized with the network. The enableSyncedFeatures() method flips this flag, which unlocks transaction broadcasting and pool acceptance. If your node isn't accepting transactions, check whether sync has completed.
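The gating pattern itself is simple and worth seeing in miniature. The sketch below mirrors the article's names (synced, enableSyncedFeatures) but is a simplified model, not the actual Geth code:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// handler models the synced gate: incoming transactions are rejected until
// the node has caught up with the network.
type handler struct {
	synced   atomic.Bool
	accepted int
}

func (h *handler) acceptTxs(n int) error {
	if !h.synced.Load() {
		return fmt.Errorf("still syncing, rejecting %d txs", n)
	}
	h.accepted += n
	return nil
}

// enableSyncedFeatures flips the flag once sync completes.
func (h *handler) enableSyncedFeatures() { h.synced.Store(true) }

func main() {
	h := &handler{}
	fmt.Println(h.acceptTxs(3)) // rejected while syncing
	h.enableSyncedFeatures()
	fmt.Println(h.acceptTxs(3), h.accepted) // accepted after sync
}
```

An atomic.Bool is enough here because the flag only ever transitions once, from false to true, and readers need no ordering guarantees beyond seeing that transition.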
With transactions flowing through the network and into pools, and blocks executing against state, the remaining question is: how does the outside world interact with all of this? That's the RPC layer — the subject of Part 6.