# From `kong start` to Serving Traffic: The Boot Sequence
## Prerequisites
- Article 1: Architecture and Nginx Integration (phase model understanding)
- Familiarity with the Lua module system and metatables
- Basic understanding of process management (master/worker model)
Knowing that Kong runs inside Nginx (as we established in Part 1) raises an immediate question: how does it get into Nginx in the first place? The answer involves a surprisingly long pipeline: shell script → resty CLI → Lua command dispatch → configuration loading → template rendering → Nginx spawn → Lua-phase initialization. Each stage builds on the previous, and a failure at any point aborts the entire boot.
This article traces that pipeline end-to-end, from the kong start command you type in a terminal to the moment the first request can be served.
## CLI Dispatch: From Shell to Lua
The entry point is bin/kong, a Lua script executed via the resty CLI (OpenResty's command-line tool). The shebang line #!/usr/bin/env resty means this file runs inside a temporary Nginx process managed by resty.
The script parses the subcommand (start, stop, reload, migrations, etc.) from arg[1], validates it against a hardcoded command table at lines 19–35, and then does something clever: it constructs an inline Lua string and executes it via resty:
```lua
local inline_code = string.format([[
setmetatable(_G, nil)
package.path = (os.getenv("KONG_LUA_PATH_OVERRIDE") or "") .. "./?.lua;./?/init.lua;" .. package.path
require("kong.cmd.init")("%s", %s)
]], cmd_name, args_str)
```
This inline code (lines 135–141) is passed to a new resty process with injected Nginx configuration directives. The indirection exists because some commands need Nginx directives (like lmdb_* or lua_ssl_*) to be present even during CLI execution.
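To make the indirection concrete, here is a hedged sketch of how such an invocation could be assembled. The function name `build_resty_argv` and the `main_conf`/`http_conf` parameters are illustrative, not Kong's actual locals; the real script injects the directives computed by `inject_confs`.

```lua
-- Hypothetical sketch: assemble the argv for the child resty process.
local function build_resty_argv(cmd_name, args_str, main_conf, http_conf)
  local inline_code = string.format([[
setmetatable(_G, nil)
package.path = (os.getenv("KONG_LUA_PATH_OVERRIDE") or "") .. "./?.lua;./?/init.lua;" .. package.path
require("kong.cmd.init")("%s", %s)
]], cmd_name, args_str)

  -- resty accepts extra Nginx directives via --main-conf/--http-conf
  -- and the Lua chunk to run via -e
  return { "resty",
           "--main-conf", main_conf,
           "--http-conf", http_conf,
           "-e", inline_code }
end

local argv = build_resty_argv("start", "{ conf = 'kong.conf' }",
                              "env KONG_PREFIX;", "lua_ssl_verify_depth 1;")
assert(argv[7]:find("kong.cmd.init", 1, true))
```

Spawning a fresh resty process (rather than running the command in-place) is what lets each CLI command get exactly the Nginx directives it needs.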
```mermaid
sequenceDiagram
    participant Shell
    participant bin/kong as bin/kong (resty)
    participant cmd/init as kong.cmd.init
    participant cmd/start as kong.cmd.start
    Shell->>bin/kong: kong start -c kong.conf
    bin/kong->>bin/kong: parse args, validate "start"
    bin/kong->>bin/kong: inject_confs.compile_confs()
    bin/kong->>Shell: resty -e 'require("kong.cmd.init")("start", {...})'
    Shell->>cmd/init: dispatch("start", args)
    cmd/init->>cmd/start: require("kong.cmd.start")
    cmd/start->>cmd/start: execute(args)
```
The dispatcher at kong/cmd/init.lua is minimal — it requires the command module and calls its execute function:
```lua
return function(cmd_name, args)
  local cmd = require("kong.cmd." .. cmd_name)
  local cmd_exec = cmd.execute
  -- run the command under xpcall so any failure is reported with a
  -- traceback before the process exits non-zero (handler elided)
  xpcall(function() cmd_exec(args) end, function(err) --[[ ... ]] end)
end
```
## Configuration Loading Pipeline
The start command's execute function in kong/cmd/start.lua begins by loading configuration:
```lua
local conf = assert(conf_loader(args.conf, {
  prefix = args.prefix
}, { starting = true }))
```
The conf_loader function in kong/conf_loader/init.lua implements a multi-stage merge pipeline:
```mermaid
flowchart TD
    A["kong_defaults.lua<br>(hardcoded defaults)"] --> E[Merged Config]
    B["kong.conf file<br>(user overrides)"] --> E
    C["KONG_* env vars<br>(highest precedence)"] --> E
    D["custom_conf table<br>(programmatic overrides)"] --> E
    E --> F["check_and_parse()<br>(validation & type coercion)"]
    F --> G["aliased_properties()<br>(backward compat)"]
    G --> H["deprecated_properties()<br>(warnings)"]
    H --> I["dynamic_properties()<br>(Nginx directive injection)"]
    I --> J["process_secrets.resolve()<br>(vault/secret resolution)"]
    J --> K["Frozen immutable<br>configuration table"]
```
The default values come from kong/templates/kong_defaults.lua, a Lua string that mimics an INI-format configuration file. These defaults are parsed first and form the base layer.
User configuration from kong.conf is overlaid next, followed by environment variables, with programmatic overrides on top. The `KONG_` prefix convention means `KONG_DATABASE=off` overrides the `database` property. The layered merge at lines 265–280 also handles dynamic Nginx directives — properties like `nginx_http_lua_shared_dict` are parsed into structured directive tables.
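The precedence rules can be sketched in a few lines of plain Lua. This is a minimal illustration of the layering, not Kong's actual merge code; `merge_config` and its arguments are hypothetical names.

```lua
-- Sketch of the layered merge: defaults < kong.conf < KONG_* env vars.
local function merge_config(defaults, file_conf, env)
  local conf = {}
  for k, v in pairs(defaults) do conf[k] = v end   -- base layer
  for k, v in pairs(file_conf) do conf[k] = v end  -- user overrides
  for k in pairs(conf) do
    -- environment wins: KONG_DATABASE maps to the `database` property
    local env_val = env["KONG_" .. k:upper()]
    if env_val ~= nil then conf[k] = env_val end
  end
  return conf
end

local conf = merge_config(
  { database = "postgres", log_level = "notice" },  -- kong_defaults.lua
  { log_level = "info" },                           -- kong.conf
  { KONG_DATABASE = "off" })                        -- environment
print(conf.database, conf.log_level)  --> off   info
```

Note how `KONG_DATABASE=off` beats both the default and anything in kong.conf, matching the precedence shown in the diagram above.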
The check_and_parse function from kong.conf_loader.parse validates every property against type definitions in kong/conf_loader/constants.lua. Type coercion converts strings to booleans, numbers, and arrays. Invalid values produce clear error messages.
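A toy version of that coercion step, under stated assumptions: the real type definitions live in kong/conf_loader/constants.lua, and this `coercers` table is illustrative rather than Kong's actual code.

```lua
-- Toy coercers in the spirit of check_and_parse.
local coercers = {
  boolean = function(v)
    if v == "on"  or v == "true"  then return true  end
    if v == "off" or v == "false" then return false end
    return nil, "expected a boolean ('on'/'off'), got: " .. tostring(v)
  end,
  number = function(v)
    local n = tonumber(v)
    if n == nil then return nil, "expected a number, got: " .. tostring(v) end
    return n
  end,
  array = function(v)
    -- comma-separated string to list
    local t = {}
    for item in v:gmatch("[^,%s]+") do t[#t + 1] = item end
    return t
  end,
}

assert(coercers.boolean("off") == false)
assert(coercers.number("8000") == 8000)
assert(table.concat(coercers.array("http, https"), "|") == "http|https")
```

Returning `nil, err` on bad input is the idiomatic Lua way to surface the "clear error messages" the validator produces.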
Tip: When debugging configuration issues, set `KONG_LOG_LEVEL=debug` and check for the `reading config file at` message. Kong logs every step of the configuration pipeline, including which default paths it searched.
## Template Rendering and Nginx Spawn
With a validated configuration in hand, start.lua prepares the prefix directory and spawns Nginx. The prefix directory (default: /usr/local/kong) holds the rendered nginx.conf, PID files, log files, and Unix sockets.
The call at line 59:
```lua
assert(prefix_handler.prepare_prefix(conf, args.nginx_conf, nil, nil,
                                     args.nginx_conf_flags))
```
This renders the kong/templates/nginx_kong.lua template — a Lua string that uses `${{VARIABLE}}` interpolation and `> if condition then` control-flow lines to generate the actual nginx.conf. The template contains conditional sections based on the role (traditional/CP/DP), enabled listeners, SSL settings, and more.
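To get a feel for the `${{VARIABLE}}` style, here is a toy renderer that only does variable substitution. It is a sketch, not Kong's renderer; the real one also evaluates the `> if ... then` control lines.

```lua
-- Toy ${{VAR}} substitution using Lua string patterns.
local function render(template, vars)
  -- %$ escapes the magic $; the capture grabs the variable name
  return (template:gsub("%${{([%w_]+)}}", function(name)
    return tostring(vars[name:lower()])
  end))
end

local tpl = "listen ${{PROXY_LISTEN}};\nerror_log logs/error.log ${{LOG_LEVEL}};"
print(render(tpl, { proxy_listen = "0.0.0.0:8000", log_level = "notice" }))
--> listen 0.0.0.0:8000;
--> error_log logs/error.log notice;
```

Rendering at start time (rather than shipping a static nginx.conf) is what lets one template serve traditional, CP, and DP deployments.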
After template rendering, Nginx is spawned with nginx_signals.start(conf) at line 99. This starts the Nginx master process, which forks worker processes, and each worker begins executing the Lua phase hooks we explored in Part 1.
```mermaid
sequenceDiagram
    participant start.lua
    participant prefix_handler
    participant nginx_signals
    participant NginxMaster as Nginx Master
    participant Worker as Nginx Workers
    start.lua->>start.lua: conf_loader(kong.conf)
    start.lua->>prefix_handler: prepare_prefix(conf)
    prefix_handler->>prefix_handler: render nginx_kong.lua template
    prefix_handler->>prefix_handler: write nginx.conf to prefix/
    start.lua->>nginx_signals: start(conf)
    nginx_signals->>NginxMaster: exec("nginx -p prefix/")
    NginxMaster->>NginxMaster: init_by_lua_block → Kong.init()
    NginxMaster->>Worker: fork workers
    Worker->>Worker: init_worker_by_lua_block → Kong.init_worker()
```
## The init Phase: `Kong.init()`
The init_by_lua_block directive runs Kong.init() in the Nginx master process before workers are forked. This means the work done here is shared across all workers via copy-on-write memory.
Kong.init() is a 175-line function that performs the following sequence:
- Load configuration from the `.kong_env` file written during prefix preparation (line 648)
- Initialize the PDK via `kong_global.init_pdk(kong, config)` (line 665)
- Create the database connector via `DB.new(config)` and connect (lines 669–693)
- Check migration state — if using Postgres, verify the schema is up-to-date (lines 674–691)
- Initialize clustering — if running as CP or DP, instantiate the clustering module and optionally the RPC sync system (lines 701–713)
- Load plugin schemas via `db.plugins:load_plugin_schemas(config.loaded_plugins)` (line 718)
- Build the router and plugins iterator (lines 751–763) — or parse declarative config for DB-less mode (lines 724–745)
Role detection is straightforward, defined at lines 201–218:
```lua
is_data_plane = function(config) return config.role == "data_plane" end
is_control_plane = function(config) return config.role == "control_plane" end
is_dbless = function(config) return config.database == "off" end
```
These flags shape the initialization path. A Control Plane skips router building (it doesn't proxy traffic). A Data Plane in DB-less mode parses declarative config from YAML instead of connecting to Postgres.
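The branching can be made concrete with a self-contained sketch that combines the predicates above with the paths they select. The `init_path` skeleton and its return strings are illustrative, not Kong's actual control flow.

```lua
-- Role predicates (from kong/init.lua) plus an illustrative dispatcher.
local function is_data_plane(config) return config.role == "data_plane" end
local function is_control_plane(config) return config.role == "control_plane" end
local function is_dbless(config) return config.database == "off" end

local function init_path(config)
  if is_control_plane(config) then
    return "skip router build; serve config to data planes"
  elseif is_data_plane(config) then
    return "receive config from CP; build router from it"
  elseif is_dbless(config) then
    return "parse declarative YAML; build router"
  end
  return "connect to Postgres; build router"
end

print(init_path({ role = "traditional", database = "postgres" }))
--> connect to Postgres; build router
```

Note that the role is checked before the database setting: a Data Plane in hybrid mode also runs with `database = off`, so testing `is_dbless` first would misclassify it.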
```mermaid
flowchart TD
    A[Kong.init] --> B[Load config from .kong_env]
    B --> C[Init PDK]
    C --> D[Create DB connector]
    D --> E{DB-less?}
    E -->|Yes| F[Parse declarative config]
    E -->|No| G[Check migrations]
    G --> H[Connect to DB]
    F --> I{CP or DP?}
    H --> I
    I -->|CP/DP| J[Init clustering module]
    I -->|Traditional| K[Skip clustering]
    J --> L[Load plugin schemas]
    K --> L
    L --> M[Build router + plugins iterator]
    M --> N[Close DB connection]
```
## The init_worker Phase: `Kong.init_worker()`
After the master forks workers, each worker runs Kong.init_worker(). Unlike init(), this code runs independently in each worker process and can use Nginx's timer and event APIs.
The function at lines 813–1024 handles:
- Timer system startup — the `lua-resty-timer-ng` library is attached to `kong.timer` (lines 828–832)
- DB worker init — `kong.db:init_worker()` (line 836)
- Worker events — inter-worker communication via Unix socket (line 856)
- Cluster events — inter-node communication for cache invalidation in Postgres mode (line 864)
- Cache initialization — both `kong.cache` (plugin data) and `kong.core_cache` (routing data) are created with shared memory dictionaries (lines 872–886)
- Declarative config loading — in DB-less mode, the parsed config from `init()` is loaded into LMDB (lines 912–959)
- Cache warmup — pre-populates the cache with frequently accessed entities (line 964)
- Router and plugins iterator rebuild — ensures each worker has an up-to-date router (lines 970–982)
- Plugin init_worker handlers — each plugin's `init_worker` method is called (lines 987–995)
- RPC and sync initialization — for hybrid mode with incremental sync (lines 1001–1013)
The eventual-consistency model for router and plugin rebuilds is key. In traditional (Postgres) mode, a background timer periodically checks whether configuration has changed and rebuilds the router if needed. This happens in the runloop handler's init_worker.before() at lines 925–1030.
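The check-version-then-rebuild pattern can be reduced to a few lines. In Kong the check runs on an `ngx.timer` inside the worker; in this sketch a plain function call stands in for each timer tick, and all names are illustrative.

```lua
-- Sketch of eventual-consistency rebuilds: rebuild only when the
-- stored configuration version has moved past the built one.
local current_version = 0   -- bumped whenever routes/services change
local built_version = -1
local router

local function rebuild_if_needed()
  if built_version ~= current_version then
    router = { version = current_version }  -- stand-in for a real rebuild
    built_version = current_version
    return true   -- rebuilt
  end
  return false    -- up to date, nothing to do
end

assert(rebuild_if_needed() == true)    -- first tick: build
assert(rebuild_if_needed() == false)   -- no change: skip
current_version = current_version + 1  -- a route was added
assert(rebuild_if_needed() == true)    -- change detected: rebuild
```

The trade-off is latency for cheapness: a worker may serve briefly with a stale router between ticks, but steady-state ticks cost only a version comparison.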
Tip: The `stash_init_worker_error` function (lines 168–183) is Kong's safety net. If any init_worker step fails, the error is stashed and logged on every subsequent request as an ALERT. The node continues running but warns operators that it needs a restart.
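The stash-and-warn pattern itself is simple enough to sketch. Function and variable names below mirror the idea rather than Kong's exact code, which lives in kong/init.lua.

```lua
-- Sketch of the stash-and-warn safety net for init_worker failures.
local init_worker_error

local function stash_init_worker_error(err)
  init_worker_error = "failed to initialize worker: " .. tostring(err)
  -- In Kong this is also written to the error log at ALERT level.
end

local function check_init_worker_error()
  -- Called on every request; keeps nagging until the node is restarted.
  if init_worker_error then
    return nil, init_worker_error
  end
  return true
end

-- Simulate a failing init_worker step:
local ok, err = pcall(function() error("cache init failed") end)
if not ok then stash_init_worker_error(err) end

local _, msg = check_init_worker_error()
assert(msg:find("cache init failed"))
```

Stashing instead of crashing keeps a partially initialized node serving what it can, while making the degraded state impossible to miss in the logs.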
## Bringing It All Together
The boot sequence spans two processes and multiple Lua module boundaries, but the design is methodical: each stage produces something the next stage needs.
| Stage | Process | Key Output |
|---|---|---|
| CLI dispatch | resty (temp Nginx) | Parsed args, inline Lua code |
| Config loading | resty (temp Nginx) | Validated, frozen config table |
| Template rendering | resty (temp Nginx) | nginx.conf in prefix directory |
| Nginx spawn | Nginx master | Running master process |
| `Kong.init()` | Nginx master (pre-fork) | DB connected, schemas loaded, router built |
| `Kong.init_worker()` | Each Nginx worker | Timers, caches, events, plugins initialized |
In Part 3, we'll follow a request through every phase of the runloop — from rewrite through log — and see how the infrastructure built during initialization actually serves traffic.