Inside Oxlint: Linter Architecture and Rule System
Prerequisites
- Articles 1–4: Architecture, AST, Parser/Semantic, and Visitor/Traverse
- Basic understanding of linting tools (ESLint concepts)
Oxlint is the flagship application built on Oxc's foundation crates. It's a drop-in ESLint replacement that runs 50–100× faster, supporting 730+ linter rules across 15 plugin categories. Understanding its architecture reveals how all the pieces from previous articles — the arena allocator, the AST, the parser, semantic analysis, and the visitor system — come together into a production tool. This article traces a lint run from the CLI entry point through parallel file processing to diagnostic output.
CLI to LintRunner: The Orchestration Layer
The journey begins in apps/oxlint/src/main.rs, which is remarkably concise:
```rust
#[tokio::main]
async fn main() -> CliRunResult {
    let command = lint_command().run();
    init_tracing();

    if command.lsp {
        run_lsp(None).await;
        return CliRunResult::LintSucceeded;
    }

    init_miette();
    command.handle_threads();

    let mut stdout = BufWriter::new(std::io::stdout());
    CliRunner::new(command, None).run(&mut stdout)
}
```
The binary does four things: parse CLI arguments (using bpaf), decide between LSP mode and lint mode, configure threading, and delegate to CliRunner. The BufWriter wrapping stdout is a performance detail — it reduces syscalls by batching writes.
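The buffering effect is easy to see in isolation. This is a minimal sketch using the standard library only, writing to an in-memory `Vec<u8>` instead of stdout so the result can be inspected; `write_diagnostics` is an illustrative name, not an oxlint function:

```rust
use std::io::{BufWriter, Write};

// Each `write!` on an unbuffered sink can translate into a syscall.
// `BufWriter` batches writes into an internal buffer (8 KiB by default)
// and flushes in large chunks, which matters when printing thousands of
// diagnostics. Demonstrated against a Vec<u8> rather than real stdout.
pub fn write_diagnostics(lines: &[&str]) -> Vec<u8> {
    let mut out = BufWriter::new(Vec::new());
    for line in lines {
        writeln!(out, "{line}").expect("write to Vec cannot fail");
    }
    // `into_inner` flushes the buffer and returns the underlying Vec.
    out.into_inner().expect("flush to Vec cannot fail")
}
```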
CliRunner at apps/oxlint/src/lint.rs#L30-L38 orchestrates the higher-level concerns: config loading, file discovery, filter construction, and output formatting. It then delegates the actual linting to LintRunner.
```mermaid
sequenceDiagram
    participant CLI as main.rs
    participant CR as CliRunner
    participant LR as LintRunner
    participant LS as LintService
    participant DS as DiagnosticService

    CLI->>CR: new(command)
    CR->>CR: load config, discover files
    CR->>LR: new(lint_service, directives_store)
    LR->>LS: process files in parallel (rayon)
    LS-->>DS: send diagnostics
    DS-->>CLI: format and output
```
The LintRunner at crates/oxc_linter/src/lint_runner.rs#L19-L28 coordinates both regular oxc linting and optional type-aware linting:
```rust
pub struct LintRunner {
    lint_service: LintService,
    type_aware_linter: Option<TsGoLintState>,
    directives_store: DirectivesStore,
    cwd: PathBuf,
}
```
The Rule Trait and Plugin System
At the heart of oxlint's extensibility is the Rule trait at crates/oxc_linter/src/rule.rs#L16-L73. It defines three hooks:
```rust
pub trait Rule: Sized + Default + fmt::Debug {
    /// Visit each AST Node
    fn run<'a>(&self, node: &AstNode<'a>, ctx: &LintContext<'a>) {}

    /// Run only once. Useful for inspecting scopes and trivias etc.
    fn run_once(&self, ctx: &LintContext) {}

    /// Run on each Jest node
    fn run_on_jest_node<'a, 'c>(
        &self,
        jest_node: &PossibleJestNode<'a, 'c>,
        ctx: &'c LintContext<'a>,
    ) {}
}
```
Most rules implement run(), which is called for every AST node. Rules that need a whole-file view (like no-duplicate-imports) implement run_once() instead. The run_on_jest_node hook is specific to Jest/Vitest testing rules.
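The default-method pattern behind these hooks can be sketched in isolation. `MiniNode`, `MiniContext`, and `MiniRule` below are illustrative stand-ins, not oxlint's real types; the point is that a rule only overrides the hooks it needs:

```rust
// A stripped-down sketch of the hook pattern: default method bodies mean
// each rule implements only the hooks relevant to it.
#[derive(Debug, PartialEq)]
pub enum MiniNode { DebuggerStatement, Identifier }

#[derive(Default)]
pub struct MiniContext { pub diagnostics: Vec<String> }

pub trait MiniRule {
    /// Called for every AST node (like `Rule::run`).
    fn run(&self, _node: &MiniNode, _ctx: &mut MiniContext) {}
    /// Called once per file (like `Rule::run_once`).
    fn run_once(&self, _ctx: &mut MiniContext) {}
}

/// A per-node rule: only overrides `run`.
pub struct NoDebugger;

impl MiniRule for NoDebugger {
    fn run(&self, node: &MiniNode, ctx: &mut MiniContext) {
        if *node == MiniNode::DebuggerStatement {
            ctx.diagnostics.push("unexpected debugger".into());
        }
    }
}

pub fn lint(nodes: &[MiniNode]) -> Vec<String> {
    let mut ctx = MiniContext::default();
    let rule = NoDebugger;
    rule.run_once(&mut ctx); // no-op: NoDebugger keeps the default body
    for node in nodes {
        rule.run(node, &mut ctx);
    }
    ctx.diagnostics
}
```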
Rules also provide configuration methods:
- `from_configuration()` — parse ESLint-style JSON config
- `should_run()` — file-level gating (e.g., TypeScript-only rules)
```mermaid
classDiagram
    class Rule {
        <<trait>>
        +run(node, ctx)
        +run_once(ctx)
        +run_on_jest_node(jest_node, ctx)
        +should_run(host) bool
        +from_configuration(value) Result~Self~
    }
    class NoUnusedVars {
        -options: NoUnusedVarsOptions
        +run(node, ctx)
    }
    class NoDebugger {
        +run(node, ctx)
    }
    Rule <|.. NoUnusedVars
    Rule <|.. NoDebugger
```
Rule Organization
Rules are organized into plugin categories:
| Plugin | Example Rules | Count |
|---|---|---|
| `eslint` | `no-unused-vars`, `no-debugger` | ~100+ |
| `typescript` | `no-explicit-any`, `consistent-type-imports` | ~60+ |
| `react` | `jsx-no-target-blank`, `no-direct-mutation-state` | ~30+ |
| `unicorn` | `prefer-array-flat-map`, `no-null` | ~80+ |
| `import` | `no-default-export`, `no-cycle` | ~20+ |
| `jsx-a11y` | `alt-text`, `anchor-is-valid` | ~30+ |
New rules are scaffolded with `just new-rule <name> <plugin>`, which generates the boilerplate file, registers the rule, and formats the code.
LintContext and Semantic Data Access
The LintContext at crates/oxc_linter/src/context/mod.rs#L33-L62 is the primary interface rules use to interact with the linting infrastructure. It wraps a shared ContextHost and adds per-rule metadata:
```rust
pub struct LintContext<'a> {
    parent: Rc<ContextHost<'a>>,
    current_plugin_name: &'static str,
    current_rule_name: &'static str,
    severity: Severity,
}
```
Through Deref, LintContext provides direct access to the Semantic struct — which means rules can access:
- The AST via `ctx.nodes()`
- Scoping data — scopes, symbols, references via the `Scoping` struct
- Module records — import/export information
- Comments — for directive processing
- Source text — for contextual error messages
```rust
impl<'a> Deref for LintContext<'a> {
    type Target = Semantic<'a>;

    fn deref(&self) -> &Self::Target {
        self.parent.semantic()
    }
}
```
This design means a lint rule like no-unused-vars can query the symbol table directly:
```rust
// From no_unused_vars — checking if a symbol is used
fn run<'a>(&self, node: &AstNode<'a>, ctx: &LintContext<'a>) {
    // Access symbols and references through ctx (Deref to Semantic)
    // ...
}
```
Tip: When writing a custom lint rule, prefer `ctx.scoping().symbol_flags(symbol_id)` over re-analyzing the AST. The semantic analysis has already resolved everything — use its results rather than duplicating work.
Diagnostics, Auto-Fix, and Disable Directives
When a rule detects a violation, it reports a diagnostic through LintContext. The diagnostic system is built on miette (via the forked oxc-miette crate), providing rich error messages with source code context:
```mermaid
flowchart TD
    Rule -->|"ctx.diagnostic()"| LC[LintContext]
    LC -->|"Message"| DS[DiagnosticService]
    DS --> Graphical[Graphical Output]
    DS --> JSON[JSON Output]
    DS --> GitHub[GitHub Actions Format]
    DS --> Unix[Unix Format]
```
Auto-Fix
Rules can provide auto-fixes via the RuleFixer. Fixes are categorized by FixKind:
- Fix: Safe automatic fix
- Suggestion: Requires user confirmation
- Dangerous: May change program behavior
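The gating logic can be sketched with a plain enum (the real `FixKind` is richer — oxlint models it as bitflags — and the CLI flag names in the comments are assumptions, so treat this as an illustration of the ordering, not the actual API):

```rust
// A sketch of fix categorization: only fixes at or below the risk level
// the user opted into are applied automatically. Derived PartialOrd uses
// declaration order, so Fix < Suggestion < Dangerous.
#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
pub enum FixKind {
    Fix,        // safe automatic fix
    Suggestion, // requires user confirmation
    Dangerous,  // may change program behavior
}

pub fn should_apply(fix: FixKind, requested: FixKind) -> bool {
    fix <= requested
}
```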
Disable Directives
The DirectivesStore at lint_runner.rs#L36-L39 manages eslint-disable comments:
```rust
pub struct DirectivesStore {
    map: Arc<Mutex<FxHashMap<PathBuf, DisableDirectives>>>,
}
```
It uses Arc<Mutex<...>> because multiple threads process files in parallel but need to share directive state. The store checks directives with various rule name formats (e.g., typescript-eslint/no-explicit-any and @typescript-eslint/no-explicit-any).
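Both aspects — shared mutable state and alias-tolerant rule-name matching — can be sketched with the standard library (`HashMap` stands in for `FxHashMap`, and the store shape here is a simplification, not oxlint's actual structure):

```rust
use std::collections::HashMap;
use std::path::PathBuf;
use std::sync::{Arc, Mutex};

// Arc so worker threads can share the store; Mutex because entries are
// inserted concurrently as files are processed.
#[derive(Default, Clone)]
pub struct Store {
    map: Arc<Mutex<HashMap<PathBuf, Vec<String>>>>,
}

impl Store {
    pub fn insert(&self, path: PathBuf, disabled_rules: Vec<String>) {
        self.map.lock().unwrap().insert(path, disabled_rules);
    }

    pub fn is_disabled(&self, path: &PathBuf, rule: &str) -> bool {
        // Match both `plugin/rule` and `@plugin/rule` spellings by
        // stripping a leading `@` before comparing.
        let normalized = rule.strip_prefix('@').unwrap_or(rule);
        self.map.lock().unwrap().get(path).map_or(false, |rules| {
            rules
                .iter()
                .any(|r| r.strip_prefix('@').unwrap_or(r) == normalized)
        })
    }
}
```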
Performance Optimizations
Oxlint achieves its 50–100× speedup over ESLint through several techniques, all building on the foundation from Articles 1–4.
Arena Pooling
Rather than creating and destroying an Allocator for each file, the linter maintains a pool of allocators. When processing a file, a worker thread borrows an allocator from the pool, resets it, uses it for parsing + semantic analysis + linting, then returns it. This eliminates allocation overhead and keeps memory warm in CPU cache — exactly the reuse pattern described in the allocator documentation (Article 2).
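The pooling pattern can be sketched with a toy `Arena` (a `Vec<u8>` standing in for oxc's bump `Allocator` — the real reset semantics differ, but the reuse mechanics are the same):

```rust
use std::sync::Mutex;

// Stand-in arena: resetting clears contents but keeps the allocation,
// so the backing memory is paid for once and reused across files.
#[derive(Default)]
pub struct Arena { buf: Vec<u8> }

impl Arena {
    pub fn reset(&mut self) {
        self.buf.clear(); // drops contents, retains capacity
    }
    pub fn alloc(&mut self, bytes: &[u8]) {
        self.buf.extend_from_slice(bytes);
    }
    pub fn capacity(&self) -> usize { self.buf.capacity() }
}

#[derive(Default)]
pub struct ArenaPool { free: Mutex<Vec<Arena>> }

impl ArenaPool {
    /// Reuse a pooled arena if one is free; otherwise create a new one.
    pub fn take(&self) -> Arena {
        self.free.lock().unwrap().pop().unwrap_or_default()
    }
    /// Reset the arena and return it to the pool for the next file.
    pub fn give_back(&self, mut arena: Arena) {
        arena.reset();
        self.free.lock().unwrap().push(arena);
    }
}
```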
File-Level Parallelism with Rayon
Files are processed in parallel using rayon's work-stealing thread pool. Each file gets its own independent pipeline: parse → semantic → lint. The only shared state is the ConfigStore (rule configuration, read-only after initialization) and the DirectivesStore (behind Arc<Mutex<>>).
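The shape of this pipeline can be sketched with std's scoped threads standing in for rayon's work-stealing pool (rayon is the real implementation; the channel here mirrors the hand-off to `DiagnosticService`, and the per-file work is a placeholder):

```rust
use std::sync::mpsc;
use std::thread;

// Each "file" gets an independent pipeline on its own thread; the only
// shared state is the diagnostics channel.
pub fn lint_files(files: &[&str]) -> Vec<String> {
    let (tx, rx) = mpsc::channel();
    thread::scope(|s| {
        for file in files {
            let tx = tx.clone();
            s.spawn(move || {
                // Stand-in for parse → semantic → lint on one file.
                let diagnostic = format!("{file}: ok");
                tx.send(diagnostic).unwrap();
            });
        }
        // All spawned threads are joined when the scope ends.
    });
    drop(tx); // close the channel so the collector stops
    let mut diagnostics: Vec<String> = rx.into_iter().collect();
    diagnostics.sort(); // thread completion order is nondeterministic
    diagnostics
}
```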
```mermaid
flowchart TB
    subgraph "Rayon Thread Pool"
        T1[Thread 1] --> F1[file1.ts: Parse → Semantic → Lint]
        T2[Thread 2] --> F2[file2.ts: Parse → Semantic → Lint]
        T3[Thread 3] --> F3[file3.ts: Parse → Semantic → Lint]
        T4[Thread N] --> F4[fileN.ts: Parse → Semantic → Lint]
    end
    CS[ConfigStore - read only] -.->|Arc| T1
    CS -.->|Arc| T2
    CS -.->|Arc| T3
    CS -.->|Arc| T4
    F1 -->|diagnostics| DS[DiagnosticService]
    F2 --> DS
    F3 --> DS
    F4 --> DS
```
AstTypesBitset
Not every rule cares about every AST node type. A rule checking for debugger statements only needs to see DebuggerStatement nodes. To avoid dispatching to rules that will immediately return, Oxc uses AstTypesBitset — a compact bitset indicating which AST node types a rule is interested in.
This bitset is computed once when rules are initialized. During traversal, the linter checks the bitset before calling each rule's run() method, skipping rules that don't match the current node type.
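The underlying idea — one bit per AST node type, checked before dispatch — fits in a few lines. This is a minimal sketch, not oxc's actual `AstTypesBitset`, and the node-type ids are illustrative:

```rust
const WORDS: usize = 4; // room for 256 node types

// One bit per node type, packed into u64 words.
#[derive(Default, Clone, Copy)]
pub struct TypeBitset {
    words: [u64; WORDS],
}

impl TypeBitset {
    pub fn set(&mut self, node_type: usize) {
        self.words[node_type / 64] |= 1u64 << (node_type % 64);
    }
    pub fn contains(&self, node_type: usize) -> bool {
        self.words[node_type / 64] & (1u64 << (node_type % 64)) != 0
    }
}

/// During traversal: call a rule's `run` only when its bitset matches
/// the current node's type, skipping the virtual call entirely otherwise.
pub fn should_dispatch(rule_types: &TypeBitset, node_type: usize) -> bool {
    rule_types.contains(node_type)
}
```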
Linear Memory Scan
Because the AST is arena-allocated (Article 2), traversing it involves scanning through roughly contiguous memory. This is dramatically more cache-friendly than traversing a heap of individually-allocated nodes connected by pointers. Each cache line loaded from the arena likely contains multiple small AST nodes, amortizing the cost of memory access.
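The layout difference can be illustrated with an index-linked node store: nodes sit contiguously in one `Vec` and reference each other by index, so a traversal is a linear scan rather than a pointer chase. The node shape below is illustrative, not oxc's AST:

```rust
// Arena-style layout: all nodes in one contiguous Vec, linked by index.
// A whole-tree pass degenerates into a cache-friendly linear scan.
pub struct Node {
    pub kind: u8,
    pub first_child: Option<u32>, // index into the same Vec, not a pointer
}

/// Count nodes of a given kind with a straight scan over arena memory.
pub fn count_kind(arena: &[Node], kind: u8) -> usize {
    arena.iter().filter(|n| n.kind == kind).count()
}
```

With `Box`-linked nodes, the same pass would follow heap pointers to scattered addresses, missing the cache far more often.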
Tip: When profiling oxlint, you'll find that parse + semantic time dominates lint rule execution for most files. The per-rule overhead is tiny because each `run()` call operates on pre-computed semantic data rather than re-analyzing the AST.
What's Next
In the final article, we'll cover the output side of the pipeline: the transformer with its Babel-compatible presets, the minifier's fixed-point optimization loop, identifier mangling, code generation with source maps, the Prettier-compatible formatter, and the NAPI bindings that expose everything to Node.js.