Autonomy Tier Model

The Autonomy Tier Model defines five progressive levels of AI capability in physical systems.

It illustrates how execution authority evolves from human-led assistance to fully governed and adaptive autonomy, and why stronger engineered governance becomes essential at higher tiers.


From Allowed Execution to Governed Execution Across Tiers

Each tier represents a change in how execution is handled, not just how intelligence is applied, and builds on the semantic world model and continuous commissioning established by the Trust Boundary Stack.

Tier 1 – Assisted Execution

AI provides analytics, insights, and basic recommendations to support human decision-making. All actions remain fully under human control.


Tier 2 – Augmented Execution

AI actively supports control loops and automation sequences. It can suggest parameter adjustments or trigger predefined routines, but final execution stays within traditional rule-based or human-approved systems.


Tier 3 – Supervised Execution

AI provides real-time recommendations, predictions, and optimization proposals that can influence control decisions. However, all actions with potential physical impact still require human approval, predefined rules, or traditional automation logic before execution.

  • AI augments human operators with rich semantic context and scenario analysis
  • Proposals are evaluated but not autonomously executed
  • Oversight remains primarily human or rule-based, with AI serving as a strong decision support layer

This tier improves performance and situational awareness while keeping final execution authority with humans or deterministic systems.
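The Tier 3 pattern above can be sketched in code: the AI may only enqueue proposals, and nothing executes without explicit human sign-off. This is a minimal illustration, not a reference implementation; the class and field names are assumptions.

```python
from dataclasses import dataclass


@dataclass
class Proposal:
    """A Tier 3 optimization proposal: advisory only, never self-executing."""
    action: str
    rationale: str
    approved: bool = False


class SupervisedExecutor:
    """Tier 3 sketch: the AI may propose, but only a human operator
    (or a deterministic rule) may release an action for execution."""

    def __init__(self) -> None:
        self.queue: list[Proposal] = []

    def propose(self, action: str, rationale: str) -> Proposal:
        p = Proposal(action, rationale)
        self.queue.append(p)  # surfaced to the operator as decision support
        return p

    def approve(self, proposal: Proposal) -> None:
        proposal.approved = True  # explicit human sign-off

    def execute(self, proposal: Proposal) -> str:
        if not proposal.approved:
            raise PermissionError("Tier 3: no execution without approval")
        return f"executing: {proposal.action}"
```

The point of the sketch is the `PermissionError`: at Tier 3 the gate is procedural (a human must act), not yet architectural.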


Critical Transition

The shift from supervised execution to governed execution defines the boundary between assisted systems and autonomous systems.

Below this point (Tiers 1–3), execution is allowed but not enforced by architecture.
Beyond this point (Tiers 4–5), execution is governed by machine-enforceable constraints.

It is not a change in intelligence.
It is a change in how execution is controlled.
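The boundary can be made explicit in a simple tier type. This is an illustrative sketch only; the enum and property names are assumptions, not part of the model's specification.

```python
from enum import IntEnum


class Tier(IntEnum):
    """The five tiers of the Autonomy Tier Model."""
    ASSISTED = 1    # analytics and recommendations only
    AUGMENTED = 2   # AI supports control loops; execution stays rule-based
    SUPERVISED = 3  # AI proposes; humans or rules approve
    GOVERNED = 4    # machine-enforceable constraints at the point of action
    ADAPTIVE = 5    # self-refinement within an immutable envelope

    @property
    def architecturally_governed(self) -> bool:
        # Tiers 1-3: execution is allowed but not enforced by architecture.
        # Tiers 4-5: execution is governed by machine-enforceable constraints.
        return self >= Tier.GOVERNED
```

A system's tier can then be checked before deciding whether a Trust Boundary enforcement layer is mandatory.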


Tier 4 – Governed Execution

AI operates autonomously within explicitly defined policy constraints and validated performance envelopes. Every proposed action is checked against a continuously maintained semantic world model before execution.

  • Operational scope is explicitly bounded and monitored
  • Actions undergo real-time validation against safety, operational, and ethical constraints
  • Continuous commissioning ensures ongoing alignment between AI behavior and system intent

This is the first tier in which the Trust Boundary becomes non-negotiable.
Governance shifts from primarily human oversight to machine-enforceable control at the point of physical action. Risks that were previously isolated now have the potential to become systemic if the boundary is not rigorously enforced.
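A Tier 4 enforcement point can be sketched as a gate that validates every proposed action against named constraints before it reaches an actuator. The constraint names, thresholds, and zone identifiers below are illustrative assumptions, not values from the model.

```python
from typing import Callable

# A constraint inspects a proposed action (here a plain dict) and
# returns True when the action is acceptable.
Constraint = Callable[[dict], bool]


class TrustBoundary:
    """Tier 4 sketch: every proposed action is validated against
    machine-enforceable constraints before it reaches an actuator."""

    def __init__(self, constraints: dict[str, Constraint]) -> None:
        self.constraints = constraints  # e.g. safety, operational, ethical

    def validate(self, action: dict) -> list[str]:
        """Return the names of all violated constraints (empty list = pass)."""
        return [name for name, ok in self.constraints.items() if not ok(action)]

    def execute(self, action: dict, actuator: Callable[[dict], None]) -> bool:
        if self.validate(action):
            # Blocked at the point of physical action: no human is needed
            # to stop it, because the architecture enforces the limit.
            return False
        actuator(action)
        return True


# Illustrative constraints (names and thresholds are assumptions):
boundary = TrustBoundary({
    "safety:max_supply_temp": lambda a: a.get("supply_temp_c", 0) <= 70,
    "operational:within_scope": lambda a: a.get("zone") in {"AHU-1", "AHU-2"},
})
```

Note the inversion relative to Tier 3: the AI acts autonomously by default, and it is the boundary, not an operator, that refuses out-of-envelope actions.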


Tier 5 – Adaptive Autonomy

AI not only executes within governed constraints but also continuously refines its own optimization strategies and local constraints in response to changing conditions, while preserving full traceability and escalation paths to human operators.

  • The system adapts in real time to dynamic environments and new information
  • Self-refinement of strategies occurs within the immutable envelope defined by the Trust Boundary
  • All adaptations maintain verifiable provenance and respect for core safety and ethical limits

At this level, the potential for emergent system-wide consequences is highest. The Trust Boundary must be especially robust to enable safe adaptation without compromising accountability or alignment with human intent.
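Tier 5 self-refinement within an immutable envelope can be sketched as follows: the controller may adjust its own setpoint, but every adaptation is clamped to hard bounds and recorded with provenance. The envelope values, field names, and setpoint semantics are assumptions for illustration.

```python
class AdaptiveController:
    """Tier 5 sketch: the controller refines its own local setpoint,
    but every adaptation is clamped to an immutable envelope and
    logged with verifiable provenance."""

    HARD_ENVELOPE = (18.0, 26.0)  # immutable safety/comfort bounds, deg C

    def __init__(self, setpoint: float) -> None:
        self.setpoint = setpoint
        self.provenance: list[dict] = []  # audit trail of every adaptation

    def refine(self, proposed: float, reason: str) -> float:
        lo, hi = self.HARD_ENVELOPE
        # Self-refinement cannot escape the envelope defined by the
        # Trust Boundary; out-of-bounds proposals are clamped, not applied.
        applied = min(max(proposed, lo), hi)
        self.provenance.append({
            "from": self.setpoint,
            "proposed": proposed,
            "applied": applied,
            "reason": reason,
        })
        self.setpoint = applied
        return applied
```

The provenance list is what preserves traceability: an auditor can see both what the system wanted to do and what the envelope actually permitted.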


Architectural Dependencies in Physical Systems

Physical systems in the built environment are defined by layered architectural dependencies: safety-critical interdependencies between subsystems, constraints imposed by physics and real-world inertia, and outcomes that are often irreversible.

These dependencies are manageable under human supervision or rule-based control in Tiers 1–3. However, as AI advances to Tier 4 and Tier 5, autonomous action combined with cross-domain reasoning can turn these dependencies into sources of systemic vulnerability, producing cascading effects and loss of traceability that traditional oversight cannot reliably contain.

Progression through the tiers is not merely a technology upgrade.
It is a governance upgrade.

Each higher tier demands stronger semantic context, enforceable constraints at the point of execution, and continuous validation to maintain alignment between AI objectives and physical reality over time.


Beyond Execution: Implications at Scale

As AI execution advances from supervised assistance to governed and adaptive autonomy, the implications extend far beyond system performance. At lower tiers, failures remain localized and are typically correctable through human oversight. At Tiers 4 and 5, failures can propagate rapidly across interconnected subsystems and environments.

This amplifies risks to safety, reliability, accountability, and broader stakeholder outcomes.


The deeper consequences, and the responsibilities they impose, are examined in Ethics at Scale.

-> Continue to Ethics at Scale