Read OSS

The Plugin System: Resolution, Execution, and the Art of the Iterator

Advanced

Prerequisites

  • Articles 1-3 (architecture, startup, request pipeline)
  • Understanding of Lua metatables and closures
  • Familiarity with API gateway plugin concepts (authentication, rate limiting)

Kong's plugin system is the reason the project exists. An API gateway without plugins is just a reverse proxy. The plugin system transforms Kong into a programmable platform where authentication, rate limiting, logging, request transformation, and AI proxying are all implemented as composable, independently configurable modules.

This article examines how plugins are discovered, how the iterator determines which plugins apply to a given request, and how the 8-level priority resolution system selects the right configuration. We'll use the key-auth and rate-limiting plugins as concrete examples.

Plugin Discovery and Loading

Plugin discovery starts with the BUNDLED_PLUGINS table in kong/constants.lua, which maps each of the 45 bundled plugin names to true. During Kong.init(), this set drives the loading process:

assert(db.plugins:load_plugin_schemas(config.loaded_plugins))

The schema loading happens in kong/db/schema/plugin_loader.lua. For each plugin, load_subschema requires kong.plugins.<name>.schema, validates it against the MetaSubSchema, and registers it as a subschema of the plugins entity:

function plugin_loader.load_subschema(parent_schema, plugin, errors)
  local plugin_schema = "kong.plugins." .. plugin .. ".schema"
  local ok, schema = load_module_if_exists(plugin_schema)
  -- validate against MetaSubSchema
  ok, err_t = MetaSchema.MetaSubSchema:validate(schema)
  -- register as subschema
  ok, err = Entity.new_subschema(parent_schema, plugin, schema)
  return schema
end

This means every plugin's configuration schema becomes a subschema of the plugins entity in the database. When you POST /plugins with { "name": "key-auth", "config": { ... } }, the config field is validated against key-auth's specific schema — but it's stored in the same plugins table.
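The dispatch this enables can be sketched in Python. This is a hypothetical model with invented names — Kong's real validation goes through its Lua schema DSL — but it mirrors the shape: one shared plugins entity, with per-plugin config rules registered under the plugin's name.

```python
# Hypothetical sketch: every plugin's config schema is registered as a
# subschema of the shared "plugins" entity, keyed by plugin name.
subschemas = {}  # plugin name -> config validator

def register_subschema(name, validator):
    subschemas[name] = validator

def validate_plugin_row(row):
    """Validate a /plugins row: dispatch config validation by plugin name."""
    validator = subschemas.get(row["name"])
    if validator is None:
        return False, "plugin schema not loaded: " + row["name"]
    return validator(row.get("config", {}))

# A simplified, assumed stand-in for key-auth's config rules.
def key_auth_validator(config):
    key_names = config.get("key_names", ["apikey"])
    if not all(isinstance(k, str) for k in key_names):
        return False, "key_names must be strings"
    return True, None

register_subschema("key-auth", key_auth_validator)

ok, err = validate_plugin_row(
    {"name": "key-auth", "config": {"key_names": ["x-api-key"]}})
```

Rows for different plugins land in the same table, but each one's config field is checked by its own registered validator.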

flowchart TD
    A[constants.lua: BUNDLED_PLUGINS] --> B[db.plugins:load_plugin_schemas]
    B --> C{For each plugin}
    C --> D["require kong.plugins.<name>.schema"]
    D --> E[MetaSubSchema:validate]
    E --> F[Entity.new_subschema]
    F --> G[Plugin config validated<br>against its own schema]
    C -->|Next plugin| C

The Collecting/Collected Iterator Pattern

The plugins iterator in kong/runloop/plugins_iterator.lua implements a two-mode execution model that's central to Kong's performance.

Collecting phases (access for HTTP, preread for stream) resolve which plugins apply to the current request and build an execution list. Collected phases (header_filter, body_filter, log, response) replay that list without re-resolving.

The phase categories are defined at lines 29–65:

NON_COLLECTING_PHASES = {
  "certificate", "rewrite", "response",
  "header_filter", "body_filter", "log",
}
COLLECTING_PHASE = "access"

This design exists for a crucial reason: during the access phase, Kong knows the matched Route, Service, and authenticated Consumer. This is the information needed to resolve plugin configurations. In later phases (header_filter, body_filter), this resolution is unnecessary and would be wasteful — the same plugins that ran in access should run in downstream phases.

The collecting iterator at lines 372–411 does the work. For each plugin, it calls load_configuration_through_combos to find the applicable config, then records the plugin+config pair for each downstream phase the plugin implements:

for j = 1, DOWNSTREAM_PHASES_COUNT do
  local phase = DOWNSTREAM_PHASES[j]
  if handler[phase] then
    local n = collected[phase][0] + 2
    collected[phase][0] = n
    collected[phase][n] = cfg
    collected[phase][n - 1] = plugin
  end
end

The collected data is stored on ctx.plugins — a table allocated from a pool, with sub-tables for each downstream phase. Each sub-table is a flat array of [plugin, config, plugin, config, ...] pairs with the count stored at index [0].
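The record-and-replay mechanics can be sketched in Python (field names assumed; Kong stores this on ctx.plugins in Lua, with the pair count at array index 0 exactly as below):

```python
# Sketch of the ctx.plugins layout: one flat array per downstream phase,
# storing [plugin, cfg, plugin, cfg, ...] with the running length at index 0,
# mirroring the Lua code's collected[phase][0].
DOWNSTREAM_PHASES = ("header_filter", "body_filter", "log")

def new_collected():
    return {phase: {0: 0} for phase in DOWNSTREAM_PHASES}

def record(collected, phase, plugin, cfg):
    # Same arithmetic as the Lua excerpt: bump the count by 2, then store
    # the config at the new slot and the plugin just before it.
    n = collected[phase][0] + 2
    collected[phase][0] = n
    collected[phase][n] = cfg
    collected[phase][n - 1] = plugin

def replay(collected, phase):
    """The collected iterator: walk the flat array two slots at a time."""
    entry = collected[phase]
    for i in range(1, entry[0], 2):
        yield entry[i], entry[i + 1]  # (plugin, config)

collected = new_collected()
record(collected, "log", "key-auth", {"hide_credentials": True})
record(collected, "log", "rate-limiting", {"minute": 60})
pairs = list(replay(collected, "log"))
# pairs == [("key-auth", {...}), ("rate-limiting", {...})]
```

The flat pair layout avoids allocating a small table per plugin on the hot path; replay is a simple strided walk with no lookups.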

sequenceDiagram
    participant Access as access phase (collecting)
    participant Iterator as plugins_iterator
    participant HeaderFilter as header_filter (collected)
    participant BodyFilter as body_filter (collected)
    participant Log as log (collected)

    Access->>Iterator: get_collecting_iterator(ctx)
    Iterator->>Iterator: For each loaded plugin...
    Iterator->>Iterator: load_configuration_through_combos()
    Iterator->>Iterator: If config found, record in ctx.plugins
    Note over Iterator: ctx.plugins.header_filter = [plugin1, cfg1, plugin2, cfg2]
    Note over Iterator: ctx.plugins.log = [plugin1, cfg1, plugin2, cfg2]
    Iterator-->>Access: yield (plugin, config) for access handler
    
    HeaderFilter->>Iterator: get_collected_iterator("header_filter", ctx)
    Iterator-->>HeaderFilter: replay ctx.plugins.header_filter
    
    BodyFilter->>Iterator: get_collected_iterator("body_filter", ctx)
    Iterator-->>BodyFilter: replay ctx.plugins.body_filter
    
    Log->>Iterator: get_collected_iterator("log", ctx)
    Iterator-->>Log: replay ctx.plugins.log

Tip: If you're wondering why rewrite uses the global iterator instead of the collecting one — it's because the rewrite phase runs before routing. Without a matched Route, there's no way to resolve route/service-scoped plugin configurations. Only global plugins (those not bound to any Route, Service, or Consumer) execute in rewrite.

8-Level Configuration Resolution

When a plugin applies to a request, its configuration must be resolved from potentially multiple plugin instances. A rate-limiting plugin might be configured globally, on a specific service, and on a specific route+consumer combination. The most specific configuration wins.

The lookup_cfg function at lines 215–267 implements this 8-level priority lookup:

Priority   Combination                   Specificity
1          Route + Service + Consumer    Most specific
2          Route + Consumer
3          Service + Consumer
4          Route + Service
5          Consumer only
6          Route only
7          Service only
8          Global (no associations)      Least specific

Each combination produces a compound key via build_compound_key at line 85:

local function build_compound_key(route_id, service_id, consumer_id)
  return format("%s:%s:%s", route_id or "", service_id or "", consumer_id or "")
end

The lookup short-circuits on the first match. If a Route+Service+Consumer configuration exists, Route+Consumer is never checked. This means the global configuration acts as a fallback — it applies only when no more specific configuration is found.
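The whole lookup can be sketched in Python (helper names follow the article; the guard logic is an assumption about how missing dimensions are skipped):

```python
# Python sketch of the 8-level short-circuit lookup. Plugin instances are
# indexed by a "route:service:consumer" compound key, with an empty slot for
# each dimension the instance is not bound to.
def build_compound_key(route_id, service_id, consumer_id):
    return "%s:%s:%s" % (route_id or "", service_id or "", consumer_id or "")

# The 8 combinations, most specific first: (use_route, use_service, use_consumer)
COMBOS = [
    (True,  True,  True),   # 1. Route + Service + Consumer
    (True,  False, True),   # 2. Route + Consumer
    (False, True,  True),   # 3. Service + Consumer
    (True,  True,  False),  # 4. Route + Service
    (False, False, True),   # 5. Consumer only
    (True,  False, False),  # 6. Route only
    (False, True,  False),  # 7. Service only
    (False, False, False),  # 8. Global
]

def lookup_cfg(configs, route_id, service_id, consumer_id):
    for use_r, use_s, use_c in COMBOS:
        # skip combinations that need a dimension this request does not have
        if (use_r and not route_id) or (use_s and not service_id) \
                or (use_c and not consumer_id):
            continue
        key = build_compound_key(route_id if use_r else None,
                                 service_id if use_s else None,
                                 consumer_id if use_c else None)
        cfg = configs.get(key)
        if cfg is not None:
            return cfg          # first hit wins: most specific config
    return None                 # plugin does not apply

configs = {
    build_compound_key(None, "svc1", None): {"minute": 100},  # service-scoped
    build_compound_key(None, None, None):   {"minute": 10},   # global fallback
}

assert lookup_cfg(configs, "rt1", "svc1", None) == {"minute": 100}
assert lookup_cfg(configs, "rt1", "svc2", None) == {"minute": 10}
```

Note how the second request falls through all seven scoped combinations to the global entry — that is the fallback behavior described above.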

The load_configuration_through_combos function at lines 282–291 adds another layer: plugin handlers can declare no_route, no_service, or no_consumer flags that filter out those dimensions before lookup. This is rare but allows specialized plugins to opt out of certain scoping levels.

flowchart TD
    A[Request Context] --> B{Route + Service + Consumer?}
    B -->|Found| Z[Use this config]
    B -->|Not found| C{Route + Consumer?}
    C -->|Found| Z
    C -->|Not found| D{Service + Consumer?}
    D -->|Found| Z
    D -->|Not found| E{Route + Service?}
    E -->|Found| Z
    E -->|Not found| F{Consumer only?}
    F -->|Found| Z
    F -->|Not found| G{Route only?}
    G -->|Found| Z
    G -->|Not found| H{Service only?}
    H -->|Found| Z
    H -->|Not found| I{Global?}
    I -->|Found| Z
    I -->|Not found| J[Plugin does not apply]

Plugin Handler Anatomy and Priority Ordering

Every Kong plugin follows the same structure. A plugin named key-auth lives in kong/plugins/key-auth/ and contains:

  • handler.lua — Phase methods (access, header_filter, log, etc.)
  • schema.lua — Configuration validation schema
  • daos.lua (optional) — Custom database entities
  • api.lua (optional) — Admin API extensions

The handler must export a table with PRIORITY and VERSION fields, plus methods named after Nginx phases. Here's kong/plugins/key-auth/handler.lua:

local KeyAuthHandler = {
  VERSION = kong_meta.version,
  PRIORITY = 1250,
}

PRIORITY controls execution order — higher-priority plugins execute first. This is critical: authentication plugins (key-auth at 1250, jwt at 1250, basic-auth at 1100) must run before authorization plugins (acl at 950), which must run before rate-limiters (rate-limiting at 910).
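A minimal sketch of that ordering rule (the sort-descending behavior is the point; the handler tables here are stand-ins):

```python
# Sketch: handlers sorted by PRIORITY descending gives the execution order,
# so authentication precedes authorization precedes rate limiting.
handlers = [
    {"name": "rate-limiting", "PRIORITY": 910},
    {"name": "acl",           "PRIORITY": 950},
    {"name": "key-auth",      "PRIORITY": 1250},
]

execution_order = [
    h["name"]
    for h in sorted(handlers, key=lambda h: h["PRIORITY"], reverse=True)
]
# execution_order == ["key-auth", "acl", "rate-limiting"]
```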

The key-auth plugin implements only the access phase at lines 262–273:

function KeyAuthHandler:access(conf)
  if not conf.run_on_preflight and kong.request.get_method() == "OPTIONS" then
    return
  end
  if conf.anonymous then
    return logical_OR_authentication(conf)
  else
    return logical_AND_authentication(conf)
  end
end

Contrast this with kong/plugins/rate-limiting/handler.lua, which implements both access and log phases:

RateLimitingHandler.VERSION = kong_meta.version
RateLimitingHandler.PRIORITY = 910

The rate-limiting plugin checks and enforces limits during access, then asynchronously syncs counter increments during log (when using the cluster policy with a sync rate). This multi-phase pattern is common — plugins do enforcement early and bookkeeping late.
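The enforce-early, book-keep-late split can be sketched as follows. This is a simplified model, not the plugin's actual counter logic: it assumes a local counter checked in access and an increment queued in log for later cluster sync.

```python
# Sketch of the two-phase pattern: access enforces against a local view of
# the counter; log queues the increment so the hot path never blocks on
# shared storage.
class RateLimiter:
    def __init__(self, limit):
        self.limit = limit
        self.counter = 0       # local view of usage
        self.pending_sync = 0  # increments not yet pushed to the cluster

    def access(self):
        """Enforcement: reject before proxying when over the limit."""
        if self.counter >= self.limit:
            return 429         # too many requests
        self.counter += 1
        return None            # allow

    def log(self):
        """Bookkeeping: record the increment for asynchronous sync."""
        self.pending_sync += 1

rl = RateLimiter(limit=2)
statuses = []
for _ in range(3):
    status = rl.access()
    if status is None:
        rl.log()               # log phase only runs for proxied requests
    statuses.append(status)
# statuses == [None, None, 429]; rl.pending_sync == 2
```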

The PDK: Plugin Development Kit API Surface

Plugins interact with Kong exclusively through the PDK — the kong.* namespace. This API surface is defined in kong/pdk/init.lua:

local MAJOR_MODULES = {
  "table", "node", "log", "ctx", "ip", "client",
  "service", "request", "service.request", "service.response",
  "response", "router", "nginx", "cluster", "vault",
  "tracing", "plugin", "telemetry",
}

Each module is loaded from kong/pdk/<name>.lua and attached to the kong global. Key namespaces include:

Namespace              Purpose                      Example
kong.request           Read incoming request        kong.request.get_header("Authorization")
kong.response          Send response to client      kong.response.exit(403, { message = "Forbidden" })
kong.service.request   Modify request to upstream   kong.service.request.set_header("X-Custom", "value")
kong.service.response  Read upstream response       kong.service.response.get_header("Content-Type")
kong.client            Consumer/credential info     kong.client.authenticate(consumer, credential)
kong.log               Structured logging           kong.log.err("something failed")
kong.ctx               Plugin-scoped context        kong.ctx.plugin.my_data = "value"
kong.cache             Database cache               kong.cache:get(key, opts, loader_fn, ...)

The PDK provides phase-checking — calling kong.service.request.set_header() in the log phase produces an error because the request has already been sent. This enforcement prevents subtle bugs in plugin development.
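The enforcement mechanism can be sketched as a wrapper that knows which phases a function is legal in. The names here are hypothetical — Kong's PDK implements this with per-function phase checks in Lua — but the fail-fast behavior is the same idea:

```python
# Sketch of PDK phase checking: each PDK-style function declares the phases
# in which it may run, and any call outside them raises immediately.
CURRENT_PHASE = {"name": "access"}  # stand-in for per-request phase state

def phase_checked(allowed_phases):
    """Wrap a function so it errors when called outside its allowed phases."""
    def wrap(fn):
        def checked(*args, **kwargs):
            phase = CURRENT_PHASE["name"]
            if phase not in allowed_phases:
                raise RuntimeError(
                    "%s cannot be called in the %s phase" % (fn.__name__, phase))
            return fn(*args, **kwargs)
        return checked
    return wrap

upstream_headers = {}

@phase_checked({"rewrite", "access"})
def set_header(name, value):
    upstream_headers[name] = value  # mutate the request to the upstream

set_header("X-Custom", "value")     # fine during access

CURRENT_PHASE["name"] = "log"
try:
    set_header("X-Late", "oops")    # request already sent: rejected
    failed = False
except RuntimeError:
    failed = True
```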

Tip: When writing a plugin, use kong.ctx.plugin for data that persists across phases within a single request but is scoped to your plugin instance. Use kong.ctx.shared for data shared between plugins. Never use module-level variables for per-request state — Nginx workers handle multiple requests concurrently.

The Full Picture

Here's how the entire plugin execution pipeline comes together for a single request:

flowchart TD
    subgraph "init time"
        A[Plugin schemas loaded] --> B[Plugin handlers loaded]
        B --> C[Plugins sorted by PRIORITY]
    end
    subgraph "per-request: access phase"
        D[Get collecting iterator] --> E[For each plugin in priority order]
        E --> F[lookup_cfg: 8-level resolution]
        F --> G{Config found?}
        G -->|Yes| H[Record for downstream phases]
        H --> I[Execute plugin.access]
        G -->|No| J[Skip plugin]
        I --> E
        J --> E
    end
    subgraph "per-request: downstream phases"
        K[Get collected iterator] --> L[Replay recorded plugin+config pairs]
        L --> M[Execute plugin.header_filter]
        M --> N[Execute plugin.body_filter]
        N --> O[Execute plugin.log]
    end

In Part 5, we'll examine the database layer that stores plugin configurations, route definitions, and service metadata — the schema system that drives validation, serialization, and Admin API generation from a single source of truth.