Read OSS

CLI Layer: Commands, Views, and the Diagnostic System

Intermediate

Prerequisites

  • Article 1: Architecture and Codebase Navigation
  • Article 3: Plan and Apply Lifecycle (for understanding hooks)


Terraform's CLI layer is the user-facing surface of the codebase — where flags are parsed, output is rendered, and errors are presented. It's also where two important architectural patterns play out: the views pattern that cleanly separates rendering logic from business logic, and the diagnostics system that replaces conventional Go error handling with rich, source-attributed messages.

Understanding this layer is essential not just for contributing to Terraform's UI, but for building tools that wrap or automate Terraform, since the JSON output format is a direct reflection of the views architecture.

command.Meta: The Shared Context

As we saw in Article 1, every command embeds command.Meta. Let's look more closely at what Meta provides, starting with its construction in commands.go#L88-L114:

meta := command.Meta{
    WorkingDir:            wd,
    Streams:               streams,
    View:                  views.NewView(streams).SetRunningInAutomation(inAutomation),
    Color:                 true,
    GlobalPluginDirs:      cliconfig.GlobalPluginDirs(),
    Ui:                    Ui,
    Services:              services,
    BrowserLauncher:       webbrowser.NewNativeLauncher(),
    RunningInAutomation:   inAutomation,
    ShutdownCh:            makeShutdownCh(),
    ProviderSource:        providerSrc,
    ProviderDevOverrides:  providerDevOverrides,
    UnmanagedProviders:    unmanagedProviders,
    AllowExperimentalFeatures: ExperimentsAllowed(),
}

classDiagram
    class Meta {
        +WorkingDir WorkingDir
        +Streams *terminal.Streams
        +View *views.View
        +Color bool
        +Ui cli.Ui
        +Services *disco.Disco
        +ShutdownCh chan struct
        +ProviderSource getproviders.Source
        +ProviderDevOverrides map
        +UnmanagedProviders map
        +RunningInAutomation bool
        +PrepareBackend() OperationsBackend
        +OperationRequest() *Operation
        +RunOperation() *RunningOperation
    }
    class PlanCommand {
        +Meta
        +Run(rawArgs) int
    }
    class ApplyCommand {
        +Meta
        +Destroy bool
        +Run(rawArgs) int
    }
    Meta <|-- PlanCommand
    Meta <|-- ApplyCommand

Meta serves as a dependency injection container. Rather than having each command independently construct services, provider sources, and terminal streams, they all share a single pre-configured Meta. This has two benefits: consistency (every command sees the same provider source) and testability (tests can construct a Meta with mock services).

The ShutdownCh field is worth noting — it's a channel that receives a value for every interrupt signal. Commands can select on this channel to implement graceful shutdown. As we saw in Article 3, this eventually propagates to the stopHook inside the graph walk.
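As a sketch of this pattern (with illustrative names, not Terraform's actual code), a command can funnel interrupt signals into a channel and select on it:

```go
package main

import (
	"fmt"
	"os"
	"os/signal"
)

// makeShutdownCh mirrors the pattern behind Meta.ShutdownCh: it converts
// OS interrupt signals into values on a plain channel that commands can
// select on. (A sketch; Terraform's version handles more signals.)
func makeShutdownCh() <-chan struct{} {
	resultCh := make(chan struct{})
	signalCh := make(chan os.Signal, 4)
	signal.Notify(signalCh, os.Interrupt)
	go func() {
		for range signalCh {
			resultCh <- struct{}{}
		}
	}()
	return resultCh
}

// runUntilDone shows how an operation command reacts: finish normally,
// or begin a graceful stop when the shutdown channel fires.
func runUntilDone(done <-chan string, shutdown <-chan struct{}) string {
	select {
	case msg := <-done:
		return msg
	case <-shutdown:
		return "interrupted: beginning graceful stop"
	}
}

func main() {
	shutdown := makeShutdownCh()
	done := make(chan string, 1)
	done <- "operation complete"
	fmt.Println(runUntilDone(done, shutdown))
}
```

In the real CLI this select lives deeper in the operation machinery; the point is that Meta only has to hand every command the same channel.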

Anatomy of a Command: PlanCommand Case Study

The PlanCommand at internal/command/plan.go#L18-L118 is the canonical example of how operation commands work:

func (c *PlanCommand) Run(rawArgs []string) int {
    // 1. Parse view arguments (e.g., -no-color)
    common, rawArgs := arguments.ParseView(rawArgs)
    c.View.Configure(common)

    // 2. Parse command-specific flags
    args, diags := arguments.ParsePlan(rawArgs)

    // 3. Create the command-specific view
    view := views.NewPlan(args.ViewType, c.View)

    // 4. Prepare the backend
    be, beDiags := c.PrepareBackend(args.State, args.ViewType)

    // 5. Build the operation request
    opReq, opDiags := c.OperationRequest(be, view, args.ViewType, args.Operation, args.OutPath, args.GenerateConfigPath)

    // 6. Collect variables
    opReq.Variables, varDiags = args.Vars.CollectValues(...)

    // 7. Execute
    op, err := c.RunOperation(be, opReq)

    // 8. Return exit code
    return op.Result.ExitStatus()
}

sequenceDiagram
    participant User
    participant Run as PlanCommand.Run()
    participant Args as arguments.ParsePlan
    participant View as views.NewPlan
    participant BE as PrepareBackend
    participant Op as RunOperation

    User->>Run: terraform plan -out=plan.tfplan
    Run->>Args: ParsePlan(rawArgs)
    Args-->>Run: typed args struct
    Run->>View: NewPlan(viewType, baseView)
    View-->>Run: PlanView
    Run->>BE: PrepareBackend(stateArgs)
    BE-->>Run: OperationsBackend
    Run->>Op: RunOperation(backend, opReq)
    Op-->>Run: RunningOperation
    Run-->>User: exit code

This eight-step pattern is shared across plan, apply, refresh, and import. The arguments package provides structured, typed flag parsing per command rather than raw string manipulation. The view is created early so that even flag-parsing errors can be rendered in the correct format (human or JSON).
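To make the idea of typed, per-command flag parsing concrete, here is a minimal sketch built on the standard library's flag package. The names (Plan, parsePlan) and fields are illustrative, not Terraform's actual arguments API:

```go
package main

import (
	"flag"
	"fmt"
)

// Plan mimics the shape of a typed arguments struct: every flag lands in
// a field instead of being re-parsed from raw strings later.
type Plan struct {
	OutPath          string
	DetailedExitCode bool
	ViewType         string
}

// parsePlan turns raw CLI arguments into a Plan, the way
// arguments.ParsePlan produces a typed args struct.
func parsePlan(rawArgs []string) (*Plan, error) {
	args := &Plan{ViewType: "human"}
	fs := flag.NewFlagSet("plan", flag.ContinueOnError)
	fs.StringVar(&args.OutPath, "out", "", "write plan to this file")
	fs.BoolVar(&args.DetailedExitCode, "detailed-exitcode", false, "exit 2 on changes")
	jsonOut := fs.Bool("json", false, "machine-readable output")
	if err := fs.Parse(rawArgs); err != nil {
		return nil, err
	}
	if *jsonOut {
		args.ViewType = "json"
	}
	return args, nil
}

func main() {
	args, _ := parsePlan([]string{"-out=plan.tfplan", "-detailed-exitcode"})
	fmt.Printf("out=%s detailed=%v view=%s\n", args.OutPath, args.DetailedExitCode, args.ViewType)
}
```

Once parsing succeeds, the rest of Run() works with typed fields and never touches raw strings again.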

Tip: The -detailed-exitcode flag for plan returns exit code 2 when there are changes to apply (lines 113-114). This is invaluable for CI/CD pipelines that need to detect plan results without parsing output.
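A wrapper tool can act on that exit code directly. The sketch below uses `sh -c "exit 2"` as a stand-in for `terraform plan -detailed-exitcode` (assuming a POSIX shell is available; the helper name is illustrative):

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// exitStatus runs a command and returns its exit code, the way a CI
// wrapper would inspect `terraform plan -detailed-exitcode`.
func exitStatus(name string, args ...string) (int, error) {
	err := exec.Command(name, args...).Run()
	if err == nil {
		return 0, nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return exitErr.ExitCode(), nil
	}
	return -1, err // the command failed to start at all
}

func main() {
	// Stand-in for: terraform plan -detailed-exitcode
	code, err := exitStatus("sh", "-c", "exit 2")
	if err != nil {
		panic(err)
	}
	switch code {
	case 0:
		fmt.Println("no changes")
	case 2:
		fmt.Println("changes pending; proceed to apply")
	default:
		fmt.Println("plan errored")
	}
}
```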

The Views Layer: Human vs JSON Output

The views system at internal/command/views/view.go#L17-L35 is one of Terraform's cleaner architectural decisions:

type View struct {
    streams             *terminal.Streams
    colorize            *colorstring.Colorize
    compactWarnings     bool
    runningInAutomation bool
    configSources       func() map[string][]byte
}

Each command defines its own view interface. For example, the plan view supports methods like Diagnostics(), Operation() (for displaying the plan summary), and HelpPrompt(). Behind this interface, there are two implementations:

  1. Human view — renders colorized, wrapped text output with ASCII art status indicators
  2. JSON view — emits structured JSON events, one per line, for machine consumption

classDiagram
    class View {
        +streams *terminal.Streams
        +colorize *colorstring.Colorize
        +Diagnostics(diags)
    }
    class PlanHuman {
        +view *View
        +Diagnostics(diags)
        +Operation(plan)
        +HelpPrompt()
    }
    class PlanJSON {
        +view *JSONView
        +Diagnostics(diags)
        +Operation(plan)
    }
    View <-- PlanHuman : embeds
    View <-- PlanJSON : uses JSONView

The -json flag switches between implementations. This design means the business logic in command Run() methods never contains output formatting — it calls view methods like view.Diagnostics(diags) and the view decides how to render. This clean separation is why Terraform's JSON output is reliable and complete — it goes through the same code paths as human output.

Hook-Based Progress Reporting

As we discussed in Article 3, hooks are the observer mechanism during graph walks. The CLI layer provides two hook implementations that turn these callbacks into user-visible output.

The UiHook (in internal/command/views/hook_ui.go) renders the familiar human-readable progress display:

aws_instance.web: Creating...
aws_instance.web: Still creating... [10s elapsed]
aws_instance.web: Creation complete after 45s [id=i-1234567890]

The JSONHook (in internal/command/views/hook_json.go) emits structured events:

{"@level":"info","@message":"aws_instance.web: Creating...","type":"apply_start","hook":{"resource":{"addr":"aws_instance.web"},"action":"create"}}

sequenceDiagram
    participant Node as ResourceNode
    participant ECtx as EvalContext
    participant UiHook as UiHook / JSONHook
    participant Output as Terminal / JSON Stream

    Node->>ECtx: Hook(PreApply)
    ECtx->>UiHook: PreApply(id, action, prior, planned)
    UiHook->>Output: "aws_instance.web: Creating..."
    Note over UiHook: Start timer goroutine
    UiHook-->>ECtx: HookActionContinue
    Node->>Node: provider.ApplyResourceChange()
    Node->>ECtx: Hook(PostApply)
    ECtx->>UiHook: PostApply(id, newState, err)
    UiHook->>Output: "Creation complete after 45s"

The UiHook starts a timer goroutine in PreApply that prints "Still creating..." messages at intervals. The PostApply callback stops the timer and prints the final status. This design means the hook has no knowledge of what the node is doing — it only observes transitions.

The Diagnostic System: tfdiags

Perhaps the most pervasive architectural pattern in Terraform's codebase is the diagnostic system. Instead of Go's conventional error returns, almost every function in Terraform returns tfdiags.Diagnostics — a slice of Diagnostic values that can carry both errors and warnings.

The Diagnostic interface at internal/tfdiags/diagnostic.go#L12-L28:

type Diagnostic interface {
    Severity() Severity
    Description() Description
    Source() Source
    FromExpr() *FromExpr
    ExtraInfo() interface{}
}

The Diagnostics slice type at internal/tfdiags/diagnostics.go#L24:

type Diagnostics []Diagnostic

The Append() method (lines 49-60) is notably polymorphic — it accepts Diagnostic, Diagnostics, error, hcl.Diagnostics, and multierror.Error, normalizing them all into the same representation. This means code at any layer can append any error-like value:

var diags tfdiags.Diagnostics
result, err := doSomething()
diags = diags.Append(err)  // works with any error type

flowchart TD
    HCL["HCL Parser"] -->|"hcl.Diagnostics"| Diags["tfdiags.Diagnostics"]
    Config["Config Loader"] -->|"tfdiags"| Diags
    Core["terraform.Context"] -->|"tfdiags"| Diags
    Provider["Provider gRPC"] -->|"errors → tfdiags"| Diags
    Diags --> View["Views Layer"]
    View --> Human["Human Output<br/>(with source snippets)"]
    View --> JSON["JSON Output<br/>(structured)"]

Why is this better than error? Three reasons:

  1. Warnings alongside errors — A function can return warnings without stopping execution. This is essential for Terraform's deprecation workflow, where behaviors change gradually.

  2. Source attribution — Diagnostics carry Source with file, line, and column information. When rendered, this produces the familiar output with source code snippets pointing to the exact problematic configuration line.

  3. Expression context — FromExpr() captures the HCL expression and evaluation context that produced the diagnostic. This enables the rich error messages that show which variable values led to the problem.

The convention throughout the codebase is:

var diags tfdiags.Diagnostics
// ... do work, appending to diags ...
if diags.HasErrors() {
    return nil, diags
}
return result, diags  // may still contain warnings

This pattern appears hundreds of times. It replaces the traditional if err != nil { return err } with something that preserves all diagnostic information through every layer of the call stack.

Terminal Detection and Streams

The terminal package provides TTY detection and width measurement. The Streams struct wraps stdin, stdout, and stderr with metadata about whether each is a terminal and its column width. This information flows through Meta to the views, which use it to decide:

  • Whether to enable color output
  • How wide to wrap text
  • Whether to show interactive prompts
  • Whether to display progress spinners (which require terminal control codes)

The TF_IN_AUTOMATION environment variable sets RunningInAutomation on the View, which suppresses certain messages that assume a human is running commands interactively.

What's Ahead

We've now explored every layer from CLI to core engine to provider plugins. The final article in this series examines the configuration loading system — how .tf files are parsed through HCL, assembled into module trees, and lazily evaluated during graph walks to produce the cty.Value results that drive everything else.