The V8 Bridge: How Deno's Extension System Connects Rust to JavaScript
Prerequisites
- Article 1: Architecture and Crate Map
- Basic V8 concepts (isolates, contexts, handles, snapshots)
- Rust proc macro awareness
In Article 1 we saw how cli/ dispatches commands and how CliFactory lazily wires services. But we glossed over the most fundamental question: how does Rust code become callable from JavaScript? The answer is deno_core — a 5,000+ line foundational crate that wraps V8 and provides the extension system connecting Rust functions to the JavaScript runtime. This article traces that bridge from the Extension struct through the #[op2] proc macro, a concrete filesystem example, the V8 snapshot optimization, and the UnconfiguredRuntime pattern that squeezes out every millisecond of startup time.
deno_core: The Engine Room
The libs/core/lib.rs file is essentially a catalog of reexports that reveals the crate's surface area. The key abstractions are:
- JsRuntime — wraps a V8 isolate with an event loop, module loader, and op dispatch
- Extension — bundles ops and JavaScript source files into a registerable unit
- OpState — thread-local state bag that ops read from (permissions, file systems, etc.)
- ModuleLoader trait — defines how ES modules are resolved and loaded
classDiagram
class JsRuntime {
+execute_script()
+load_main_es_module()
+run_event_loop()
+op_state() OpState
+lazy_init_extensions()
}
class Extension {
+name: &str
+deps: &[&str]
+ops: Cow~[OpDecl]~
+esm_files: Cow~[ExtensionFileSource]~
+lazy_loaded_esm_files
+enabled: bool
}
class OpState {
+put~T~(value)
+borrow~T~() &T
+borrow_mut~T~() &mut T
}
class OpDecl {
+name: &str
+is_async: bool
+slow_fn: OpFnRef
+fast_fn: Option~CFunction~
}
JsRuntime --> Extension : registers
JsRuntime --> OpState : owns
Extension --> OpDecl : contains
The runtime/mod.rs module splits the runtime into focused submodules: jsruntime (the main JsRuntime struct), jsrealm (V8 context management), snapshot (snapshot creation/loading), bindings (V8 function callback wiring), and op_driver (async op scheduling).
Tip: deno_core is designed for embedding — you can use it independently of Deno to build your own JavaScript runtimes. That's why it lives in libs/ rather than being tightly coupled to cli/ or runtime/.
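OpState behaves like a type-indexed map: put() stores at most one value per Rust type, and borrow() retrieves it by type alone. A minimal self-contained sketch of that idea, assuming nothing beyond the standard library (the real OpState in deno_core additionally uses RefCell-style interior mutability and resource tables):

```rust
use std::any::{Any, TypeId};
use std::collections::HashMap;

/// Minimal type-indexed map, sketching the idea behind deno_core's OpState:
/// one slot per Rust type, looked up by TypeId.
#[derive(Default)]
struct MiniOpState {
    slots: HashMap<TypeId, Box<dyn Any>>,
}

impl MiniOpState {
    fn put<T: 'static>(&mut self, value: T) {
        self.slots.insert(TypeId::of::<T>(), Box::new(value));
    }
    fn borrow<T: 'static>(&self) -> &T {
        self.slots[&TypeId::of::<T>()].downcast_ref::<T>().unwrap()
    }
    fn borrow_mut<T: 'static>(&mut self) -> &mut T {
        self.slots
            .get_mut(&TypeId::of::<T>())
            .unwrap()
            .downcast_mut::<T>()
            .unwrap()
    }
}

/// Illustrative stand-in for a permissions value an op might read.
struct AllowRead(bool);

fn main() {
    let mut state = MiniOpState::default();
    state.put(AllowRead(true));
    assert!(state.borrow::<AllowRead>().0);
    state.borrow_mut::<AllowRead>().0 = false;
    assert!(!state.borrow::<AllowRead>().0);
    println!("type-map state works");
}
```

This is why an op can ask for "the FileSystem" or "the PermissionsContainer" without any registry of string keys: the Rust type itself is the key.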
The Extension Abstraction
An Extension bundles everything needed to add a capability to the runtime:
pub struct Extension {
pub name: &'static str,
pub deps: &'static [&'static str],
pub js_files: Cow<'static, [ExtensionFileSource]>,
pub esm_files: Cow<'static, [ExtensionFileSource]>,
pub lazy_loaded_esm_files: Cow<'static, [ExtensionFileSource]>,
pub ops: Cow<'static, [OpDecl]>,
pub objects: Cow<'static, [OpMethodDecl]>,
pub external_references: Cow<'static, [v8::ExternalReference]>,
pub global_template_middleware: Option<GlobalTemplateMiddlewareFn>,
pub global_object_middleware: Option<GlobalObjectMiddlewareFn>,
pub op_state_fn: Option<Box<OpStateFn>>,
pub needs_lazy_init: bool,
pub enabled: bool,
}
The deps field declares ordering requirements — deno_fs depends on deno_web, for instance. Extensions can provide JavaScript source in three flavors: classic JS files, ES modules, and lazy-loaded ES modules that are only evaluated when first imported. The op_state_fn callback lets an extension inject state into OpState during initialization.
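The deps field implies an ordering invariant: every extension must appear after all of its declared dependencies. A hedged sketch of checking that invariant, with illustrative types (this is not deno_core's real validation code):

```rust
use std::collections::HashSet;

/// Illustrative extension record: just a name and its declared dependencies.
struct Ext {
    name: &'static str,
    deps: &'static [&'static str],
}

/// Check that every extension appears after all of its deps —
/// the ordering invariant that extension registration relies on.
fn deps_satisfied(exts: &[Ext]) -> bool {
    let mut seen: HashSet<&str> = HashSet::new();
    for ext in exts {
        if !ext.deps.iter().all(|d| seen.contains(d)) {
            return false; // a dependency has not been registered yet
        }
        seen.insert(ext.name);
    }
    true
}

fn main() {
    // deno_fs declares deps = [deno_web], so deno_web must come first.
    let ordered = [
        Ext { name: "deno_web", deps: &[] },
        Ext { name: "deno_fs", deps: &["deno_web"] },
    ];
    assert!(deps_satisfied(&ordered));

    let reversed = [
        Ext { name: "deno_fs", deps: &["deno_web"] },
        Ext { name: "deno_web", deps: &[] },
    ];
    assert!(!deps_satisfied(&reversed));
}
```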
The extension!() macro (generated by deno_ops) creates the boilerplate for declaring extensions. Here's how deno_fs uses it at ext/fs/lib.rs:
deno_core::extension!(deno_fs,
deps = [ deno_web ],
ops = [
op_fs_open_sync, op_fs_open_async,
op_fs_mkdir_sync, op_fs_mkdir_async,
op_fs_chmod_sync, op_fs_chmod_async,
// ... ~60 more ops
],
esm = [ "30_fs.js" ],
options = { ... },
state = |state, options| { ... },
);
This macro generates deno_fs::init() and deno_fs::lazy_init() functions that return fully configured Extension instances. The lazy_init() variant sets needs_lazy_init: true, deferring the op_state_fn callback until later — crucial for the UnconfiguredRuntime pattern we'll cover shortly.
The #[op2] Proc Macro
Every op starts as a regular Rust function annotated with #[op2]. The libs/ops/lib.rs proc macro entry point is deceptively simple:
#[proc_macro_attribute]
pub fn op2(attr: TokenStream, item: TokenStream) -> TokenStream {
op2_macro(attr, item)
}
But behind this, the op2 module generates substantial code. For a sync op, it creates:
- A slow path: a V8 function callback that extracts arguments from v8::FunctionCallbackInfo, converts them through serde_v8 or custom conversions, calls the Rust function, and converts the return value back to V8
- A fast path: using V8's Fast API (CFunction) to bypass the FunctionCallbackInfo overhead for simple argument types
- A metrics wrapper that records timing and success/failure when op tracing is enabled
flowchart TD
JS["JavaScript call:<br/>Deno.readFileSync(path)"]
V8["V8 engine"]
FAST{Fast API<br/>eligible?}
FASTPATH["Fast CFunction call<br/>Direct type mapping"]
SLOWPATH["Slow FunctionCallback<br/>Extract from FunctionCallbackInfo"]
CONVERT["Type conversion<br/>(serde_v8 / custom)"]
RUST["Rust op function<br/>op_fs_read_file_sync()"]
RESULT["Convert result to V8"]
JS --> V8
V8 --> FAST
FAST -->|Yes| FASTPATH
FAST -->|No| SLOWPATH
FASTPATH --> RUST
SLOWPATH --> CONVERT
CONVERT --> RUST
RUST --> RESULT
RESULT --> V8
The #[op2] macro supports several calling conventions specified via attributes: #[op2(async)] for async ops that return futures, #[op2(fast)] to force fast-path generation, and #[op2(reentrant)] for ops that may call back into JavaScript. Async ops return impl Future and are driven by the op_driver module's event loop integration.
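The fast/slow split can be pictured as two entry points to the same Rust op: a generic path that inspects and converts dynamic values, and a typed path that skips conversion entirely. A self-contained sketch under assumed types (JsValue, slow_call, and fast_call are illustrative; V8's real Fast API operates at the C ABI level on raw V8 values):

```rust
/// Dynamic value, standing in for a v8::Value the slow path must inspect.
enum JsValue {
    Number(f64),
    Text(String),
}

/// The underlying Rust op: plain typed arguments, no V8 in sight.
fn op_add(a: i32, b: i32) -> i32 {
    a + b
}

/// Slow path: extract and convert each dynamic argument, then call the op.
fn slow_call(args: &[JsValue]) -> Result<i32, String> {
    let get = |v: &JsValue| match v {
        JsValue::Number(n) => Ok(*n as i32),
        JsValue::Text(_) => Err("expected number".to_string()),
    };
    Ok(op_add(get(&args[0])?, get(&args[1])?))
}

/// Fast path: V8 has already proven the argument types, so no conversion.
fn fast_call(a: i32, b: i32) -> i32 {
    op_add(a, b)
}

fn main() {
    let args = [JsValue::Number(2.0), JsValue::Number(3.0)];
    assert_eq!(slow_call(&args).unwrap(), 5);
    assert_eq!(fast_call(2, 3), 5);
    // The slow path can report conversion failures; the fast path never needs to.
    assert!(slow_call(&[JsValue::Text("x".into()), JsValue::Number(1.0)]).is_err());
}
```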
The generated OpDecl struct carries both slow and fast function pointers:
pub struct OpDecl {
pub name: &'static str,
pub is_async: bool,
pub arg_count: u8,
pub(crate) slow_fn: OpFnRef,
pub(crate) slow_fn_with_metrics: OpFnRef,
pub(crate) fast_fn: Option<CFunction>,
pub(crate) fast_fn_with_metrics: Option<CFunction>,
// ...
}
Tip: Op names follow a convention: op_fs_read_file_sync is the sync variant, op_fs_read_file_async is async. The JavaScript wrapper in 30_fs.js calls the appropriate one, often providing a nicer API (e.g., accepting strings and converting to paths).
Walking Through a Real Op: deno_fs
Let's trace a Deno.readFileSync("/tmp/hello.txt") call end-to-end to see how these pieces connect.
The ext/fs/lib.rs extension declaration registers op_fs_read_file_sync among its ~60 ops. The JavaScript layer in 30_fs.js imports this op from ext:core/ops and wraps it with permission checking and argument validation.
sequenceDiagram
participant JS as JavaScript (30_fs.js)
participant Core as ext:core/ops
participant V8 as V8 Engine
participant Op as op_fs_read_file_sync
participant FS as FileSystem trait
participant Disk as std::fs
JS->>Core: Deno.readFileSync(path)
Core->>V8: Function call dispatch
V8->>Op: slow_fn / fast_fn callback
Op->>Op: Extract OpState
Op->>Op: Check permissions
Op->>FS: fs.read_file_sync(path)
FS->>Disk: std::fs::read(path)
Disk-->>FS: Vec<u8>
FS-->>Op: Result<Vec<u8>>
Op-->>V8: v8::Uint8Array
V8-->>JS: Uint8Array
The FileSystem trait in ext/fs/interface.rs is an abstraction layer — the CLI uses RealFs (backed by std::fs), but tests and the standalone binary can provide alternatives. This pattern of trait-based system access recurs throughout Deno's extension system.
The op function itself receives &mut OpState as its first argument (injected by the macro), from which it borrows the FileSystem implementation and the PermissionsContainer. The permission check happens in Rust, not JavaScript — ops are the enforcement boundary for Deno's security model.
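Put together, a sync filesystem op has roughly this shape: borrow the permissions and the filesystem implementation from state, check, then delegate. A self-contained sketch under assumed types (AllowRead, MemFs, and this FileSystem trait are illustrative stand-ins for PermissionsContainer, RealFs, and the trait in ext/fs/interface.rs):

```rust
use std::collections::HashMap;

/// Illustrative permission flag (real code uses PermissionsContainer).
struct AllowRead(bool);

/// Trait-based filesystem access, mirroring ext/fs/interface.rs in spirit.
trait FileSystem {
    fn read_file_sync(&self, path: &str) -> Result<Vec<u8>, String>;
}

/// In-memory stand-in for RealFs — the kind of alternative the trait enables.
struct MemFs(HashMap<String, Vec<u8>>);

impl FileSystem for MemFs {
    fn read_file_sync(&self, path: &str) -> Result<Vec<u8>, String> {
        self.0
            .get(path)
            .cloned()
            .ok_or_else(|| format!("not found: {path}"))
    }
}

/// The op body: the permission check happens here, in Rust,
/// before the filesystem is ever touched.
fn op_read_file_sync(
    perms: &AllowRead,
    fs: &dyn FileSystem,
    path: &str,
) -> Result<Vec<u8>, String> {
    if !perms.0 {
        return Err("read permission denied".to_string());
    }
    fs.read_file_sync(path)
}

fn main() {
    let mut files = HashMap::new();
    files.insert("/tmp/hello.txt".to_string(), b"hi".to_vec());
    let fs = MemFs(files);
    assert_eq!(
        op_read_file_sync(&AllowRead(true), &fs, "/tmp/hello.txt").unwrap(),
        b"hi"
    );
    assert!(op_read_file_sync(&AllowRead(false), &fs, "/tmp/hello.txt").is_err());
}
```

Because the check lives in the op and not in 30_fs.js, user code that reaches around the JavaScript wrapper still hits the same enforcement boundary.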
V8 Snapshots: Build-Time Serialization
Deno's startup time optimization relies heavily on V8 snapshots. At build time, runtime/snapshot.rs creates a serialized V8 heap containing all extension JavaScript code pre-parsed and compiled:
pub fn create_runtime_snapshot(
snapshot_path: PathBuf,
snapshot_options: SnapshotOptions,
custom_extensions: Vec<Extension>,
) {
let mut extensions: Vec<Extension> = vec![
deno_telemetry::deno_telemetry::lazy_init(),
deno_webidl::deno_webidl::lazy_init(),
deno_web::deno_web::lazy_init(),
// ... ~30 extensions in specific order
];
extensions.extend(custom_extensions);
let output = create_snapshot(CreateSnapshotOptions {
extensions,
startup_snapshot: None,
// ...
}, None).unwrap();
let mut snapshot = std::fs::File::create(snapshot_path).unwrap();
snapshot.write_all(&output.output).unwrap();
}
At runtime, this snapshot is loaded as the startup_snapshot parameter of JsRuntime::new(), instantly restoring the V8 heap state. All extension JavaScript modules are already parsed, compiled, and partially evaluated — only the lazy-loaded extensions defer their JavaScript evaluation.
sequenceDiagram
participant Build as Build Time
participant Snap as create_runtime_snapshot()
participant V8B as V8 (build)
participant File as snapshot.bin
participant Runtime as Runtime (startup)
participant V8R as V8 (runtime)
Build->>Snap: Register ~30 extensions
Snap->>V8B: Execute all JS sources
V8B->>V8B: Parse, compile, evaluate
V8B->>File: Serialize V8 heap
Note over File: ~10MB snapshot blob
Runtime->>File: Load snapshot bytes
File->>V8R: Deserialize V8 heap
Note over V8R: All JS modules ready
V8R->>Runtime: JsRuntime ready in ~5ms
The snapshot contains the JavaScript side of every extension but not the Rust op bindings — those are re-registered at runtime because function pointers can't survive serialization. This is why skip_op_registration exists as an option: when loading from a snapshot, op functions are re-bound to the already-existing V8 function objects.
Extension Registration and Ordering
The common_extensions() function registers ~30 extensions in a critical order:
fn common_extensions<...>(has_snapshot: bool, unconfigured_runtime: bool) -> Vec<Extension> {
// NOTE(bartlomieju): ordering is important here, keep it in sync with
// `runtime/worker.rs`, `runtime/web_worker.rs`, `runtime/snapshot_info.rs`
// and `runtime/snapshot.rs`!
vec![
deno_telemetry::deno_telemetry::init(),
deno_webidl::deno_webidl::init(),
deno_web::deno_web::lazy_init(),
deno_webgpu::deno_webgpu::init(),
deno_image::deno_image::init(),
deno_fetch::deno_fetch::lazy_init(),
// ... 20+ more in specific order
deno_node::deno_node::lazy_init::<...>(),
ops::bootstrap::deno_bootstrap::init(...),
runtime::init(),
ops::web_worker::deno_web_worker::init().disable(),
]
}
The comment at the top of this function is a warning: this ordering must be kept in sync across four files — worker.rs, web_worker.rs, snapshot_info.rs, and snapshot.rs. If the snapshot was built with extensions in order A but the runtime registers them in order B, op IDs won't match and everything breaks silently.
Notice the last entry: deno_web_worker::init().disable(). The .disable() call replaces all op functions with noops. This registers the ops so their JavaScript import statements don't fail, but calling them from a non-worker context would panic. It's an elegant workaround for the constraint that all JavaScript imports must resolve even if the code path is unreachable.
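The .disable() trick can be sketched as swapping every op's function pointer for a stub that panics, while leaving the op names registered so imports still resolve. The OpDecl shape here is a simplification for illustration:

```rust
/// Simplified op declaration: a name plus a callable.
struct MiniOpDecl {
    name: &'static str,
    f: fn() -> i32,
}

fn op_real() -> i32 {
    42
}

/// The stand-in every disabled op is pointed at.
fn op_noop() -> i32 {
    panic!("op called on a disabled extension");
}

/// Sketch of .disable(): keep the names so JavaScript imports resolve,
/// but replace every implementation with the panicking stub.
fn disable(ops: &mut [MiniOpDecl]) {
    for op in ops.iter_mut() {
        op.f = op_noop;
    }
}

fn main() {
    let mut ops = [MiniOpDecl { name: "op_worker_post_message", f: op_real }];
    assert_eq!((ops[0].f)(), 42);

    disable(&mut ops);
    // The name survives, so `import { op_worker_post_message }` still works...
    assert_eq!(ops[0].name, "op_worker_post_message");
    // ...but actually calling it now panics.
    let f = ops[0].f;
    assert!(std::panic::catch_unwind(|| f()).is_err());
}
```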
graph LR
subgraph "Extension Registration Order"
T[telemetry] --> WI[webidl] --> W[web] --> WG[webgpu]
WG --> IMG[image] --> F[fetch] --> CA[cache]
CA --> WS[websocket] --> WST[webstorage] --> CR[crypto]
CR --> FFI[ffi] --> N[net] --> TLS[tls]
TLS --> KV[kv] --> CRON[cron] --> NAPI[napi]
NAPI --> HTTP[http] --> IO[io] --> FS[fs]
FS --> OS[os] --> PROC[process] --> NC[node_crypto]
NC --> NS[node_sqlite] --> NODE[node]
NODE --> RT[runtime ops] --> BS[bootstrap]
end
The UnconfiguredRuntime Optimization
The UnconfiguredRuntime pattern splits JsRuntime creation into two phases: V8 initialization (which can happen early, even before flags are fully parsed) and configuration (which requires the module loader, permissions, and other services).
In MainWorker::from_options(), when an UnconfiguredRuntime is available, it's hydrated with the module loader:
let mut js_runtime = if let Some(u) = options.unconfigured_runtime {
let js_runtime = u.hydrate(services.module_loader);
// ... reload cron handler from op state
js_runtime
} else {
// Full initialization path
let mut extensions = common_extensions::<...>(...);
common_runtime(CommonRuntimeOptions { ... })
};
The common_runtime() function creates a JsRuntime from scratch when no unconfigured runtime is available, passing all extensions, the snapshot, and configuration into JsRuntime::new().
After hydration, extensions that were registered as lazy_init() get their state injected via js_runtime.lazy_init_extensions() — this is where the op_state_fn callbacks fire, injecting the blob store, fetch options, cache backends, and other runtime-specific state into OpState.
Tip: The UnconfiguredRuntime is particularly valuable for deno serve, where V8 can be pre-initialized while the server socket is being configured. On Unix, there's even a control socket mechanism (wait_for_start) that pre-creates the runtime before the actual command arguments arrive.
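The two-phase split can be modeled as a builder that does the expensive work up front and accepts the late-arriving configuration in hydrate(). The names mirror the article, but the shapes are deliberately simplified — the real UnconfiguredRuntime carries a live V8 isolate with the snapshot already loaded:

```rust
/// Phase 1: everything that can be prepared before flags and services exist.
struct UnconfiguredRuntime {
    // Stand-in for the pre-initialized V8 isolate + deserialized snapshot heap.
    preinitialized: bool,
}

/// Stand-in for the module loader, which only exists after CLI wiring.
struct ModuleLoader {
    name: &'static str,
}

/// Phase 2 output: a fully usable runtime.
struct Runtime {
    preinitialized: bool,
    loader: ModuleLoader,
}

impl UnconfiguredRuntime {
    fn new() -> Self {
        // The expensive work (V8 init, snapshot load) happens here, early,
        // possibly concurrently with flag parsing.
        UnconfiguredRuntime { preinitialized: true }
    }

    /// Late configuration: attach the services that arrived after startup.
    fn hydrate(self, loader: ModuleLoader) -> Runtime {
        Runtime {
            preinitialized: self.preinitialized,
            loader,
        }
    }
}

fn main() {
    let early = UnconfiguredRuntime::new(); // before flags are fully parsed
    let rt = early.hydrate(ModuleLoader { name: "cli_loader" });
    assert!(rt.preinitialized);
    assert_eq!(rt.loader.name, "cli_loader");
}
```

Taking self by value in hydrate() means the type system enforces the phase order: an UnconfiguredRuntime can be hydrated exactly once, and only a hydrated Runtime can run code.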
What's Next
We've traced the path from Rust to JavaScript: extensions bundle ops and JS sources, the #[op2] macro generates V8 bindings with fast and slow paths, snapshots serialize the entire JS heap at build time, and the UnconfiguredRuntime splits initialization for maximum parallelism. In the next article, we'll follow the other direction — how JavaScript modules are resolved, fetched, transpiled, and executed — covering the module loading pipeline, the resolver stack, and Deno's dual TypeScript compilation system.