The Dictionary of AI Architecture
Key terms for thinking clearly about decision systems in AI adoption. These definitions represent my usage—use them, cite them, build on them.
Decision Architecture
The structure that determines how decisions are owned, validated, escalated, and audited inside an organization. Decision architecture exists whether or not it’s explicit—the question is whether you’ve designed it intentionally or inherited it accidentally.
In AI adoption, decision architecture becomes critical because AI can automate tasks without clarifying who owns the decisions those tasks embed.
The Judgment Gap
The space between what AI recommends and what a human actually decides. In healthy systems, this gap is explicit and managed. In unhealthy systems, the gap closes invisibly as recommendations become decisions without conscious human evaluation.
The judgment gap is where professional value lives. Organizations that eliminate it are eliminating the professional judgment they’re supposed to provide.
Compliance Debt
The accumulated risk from deploying AI capabilities faster than governance controls. Like technical debt, compliance debt compounds silently until an audit, incident, or regulatory action forces visibility.
Governance-by-Design
The principle that governance constraints should be embedded in system architecture rather than documented in policies. Governance-by-design means the system enforces compliance, not just describes it.
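What "the system enforces compliance" can mean in practice: a minimal sketch in which an action physically cannot run without an actor and a rationale being logged. Everything here (`AuditLog`, `approve_refund`) is illustrative, not taken from any real system.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AuditLog:
    entries: list = field(default_factory=list)

    def record(self, actor: str, action: str, rationale: str) -> None:
        self.entries.append({"actor": actor, "action": action, "rationale": rationale})

def governed(log: AuditLog, action: str):
    """Refuse to run the wrapped step unless the caller supplies an actor and a rationale."""
    def wrap(fn: Callable):
        def inner(*args, actor: str, rationale: str, **kwargs):
            if not actor or not rationale:
                raise PermissionError(f"{action}: actor and rationale are required")
            log.record(actor, action, rationale)  # the log entry happens before the action, every time
            return fn(*args, **kwargs)
        return inner
    return wrap

log = AuditLog()

@governed(log, "approve_refund")
def approve_refund(amount: float) -> str:
    return f"refund of {amount} approved"

approve_refund(120.0, actor="j.doe", rationale="duplicate charge")  # runs, and is logged
# approve_refund(120.0, actor="", rationale="")  # raises PermissionError: unlogged actions can't run
```

The policy version of this is a paragraph saying "all refund approvals must be documented." The design version makes undocumented approval impossible.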
Policy Theater
The appearance of governance without operational enforcement. Policies exist, training was conducted, boxes were checked—but the system has no logging, no escalation triggers, no audit trail. Compliance exists on paper while risk exists in production.
Operational Coherence
The degree to which an organization’s tools, workflows, and decision rights operate as a unified system rather than a collection of fragments. High coherence means clear ownership, consistent data, smooth handoffs. Low coherence means tool proliferation, manual reconciliation, decision fatigue.
Tool Proliferation Entropy
The organizational chaos that results from adopting AI tools faster than decision architecture can absorb them. Each new tool adds a dashboard, a handoff, and a validation gap. Speed increases while clarity decreases.
Automation Creep
The gradual expansion of AI from execution tasks into judgment territory without explicit approval. It typically progresses: AI drafts → AI recommends → AI decides (with rubber-stamp approval) → AI decides (without meaningful oversight).
Human-in-the-Loop Fallacy
The assumption that having a human “supervise” AI outputs provides accountability. In practice, undefined supervision creates responsibility without structure—humans are nominally accountable for decisions they cannot meaningfully review.
The fallacy is treating “human in the loop” as a solution when it’s actually a question. What is the human supposed to evaluate? How long should they spend? Without answers, it’s a label, not a safeguard.
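One way to turn the label into a specification is to define, in the system itself, what the human must check and how long meaningful review takes. This is a hypothetical sketch; the check names and the three-minute floor are placeholders, not recommendations.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ReviewProtocol:
    checks: tuple[str, ...]   # what the human is supposed to evaluate
    min_review_seconds: int   # how long meaningful review takes

# Illustrative protocol for a document-review workflow
PROTOCOL = ReviewProtocol(
    checks=("source documents match extraction", "recommendation cites applicable rule"),
    min_review_seconds=180,
)

def accept_review(seconds_spent: int, checks_confirmed: set[str]) -> bool:
    """A sign-off only counts if every defined check was confirmed and minimum time was spent."""
    return (seconds_spent >= PROTOCOL.min_review_seconds
            and set(PROTOCOL.checks) <= checks_confirmed)

accept_review(200, set(PROTOCOL.checks))  # structured review: accepted
accept_review(5, set(PROTOCOL.checks))    # five-second sign-off: rejected as a rubber stamp
```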
Layer Bleed
When execution-layer activities silently expand into judgment territory without anyone adjusting oversight or compliance controls. The AI that “extracts data” starts “assessing complexity.” The boundaries blur; accountability doesn’t follow.
Recommendation Anchoring
The cognitive bias where humans anchor on AI recommendations even when asked to exercise independent judgment. The recommendation becomes the starting point, narrowing rather than supporting judgment.
Supervision Atrophy
The degradation of human oversight capability that occurs when professionals rely heavily on AI over time. As AI handles more tasks, humans lose the skills and context needed to evaluate AI outputs critically.
The Tuesday Morning Test
A practical test for AI governance: Can you explain, on a random Tuesday morning, who made a specific decision, what information they had, and why they decided as they did? If you can’t answer this for any decision in your system, you don’t have governance—you have hope.
Reconstruction Capability
The ability to fully recreate the decision process for any significant case within a defined timeframe (typically 24 hours). Without reconstruction capability, you cannot demonstrate what happened when asked.
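The record that makes reconstruction possible is simple; what matters is that it captures who decided, on what information, and why, at the moment of the decision. A minimal sketch, with a hypothetical case and field names:

```python
import json
from datetime import datetime, timezone

def record_decision(store: list, case_id: str, decider: str,
                    inputs: dict, outcome: str, rationale: str) -> None:
    """Append a timestamped record of who decided what, on which inputs, and why."""
    store.append(json.dumps({
        "case_id": case_id,
        "decided_at": datetime.now(timezone.utc).isoformat(),
        "decider": decider,
        "inputs": inputs,  # the information the decider actually had, including the AI's output
        "outcome": outcome,
        "rationale": rationale,
    }))

def reconstruct(store: list, case_id: str) -> list[dict]:
    """Answer the Tuesday Morning Test for one case: return its full decision trail."""
    return [r for r in map(json.loads, store) if r["case_id"] == case_id]

store: list[str] = []
record_decision(store, "C-1041", "a.reviewer",
                {"ai_recommendation": "deny", "score": 0.82},
                "approve", "AI missed the hardship exception")
trail = reconstruct(store, "C-1041")
```

A real implementation would write to durable, append-only storage; the point is that the who/what/why is captured at decision time, not reassembled from memory later.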
Escalation Trigger
A defined condition that causes the system to route a case to elevated review automatically. The key is that escalation is structural—it happens because the system routes it, not because someone notices a problem.
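"Structural" escalation can be as plain as a table of named conditions that the system checks on every case. The triggers and thresholds below are hypothetical examples, not a recommended set:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Case:
    amount: float
    ai_confidence: float
    customer_complaints: int

# Each trigger is a named, defined condition; firing any of them forces elevated review.
TRIGGERS: dict[str, Callable[[Case], bool]] = {
    "high_value":       lambda c: c.amount > 10_000,
    "low_confidence":   lambda c: c.ai_confidence < 0.70,
    "repeat_complaint": lambda c: c.customer_complaints >= 2,
}

def route(case: Case) -> tuple[str, list[str]]:
    """Routing is structural: every trigger is checked on every case, no noticing required."""
    fired = [name for name, cond in TRIGGERS.items() if cond(case)]
    return ("elevated_review" if fired else "standard_queue", fired)

queue, reasons = route(Case(amount=15_000, ai_confidence=0.65, customer_complaints=0))
# queue == "elevated_review"; reasons == ["high_value", "low_confidence"]
```

Because the triggers are data, they can also be reviewed, versioned, and audited, which is exactly what an informal "flag anything that looks off" norm cannot.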
Decision Rights
The authorization structure that defines who can approve, override, escalate, or finalize different types of decisions. Decision rights should be explicit, mapped to roles, and enforced by the system.
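"Explicit, mapped to roles, and enforced by the system" can be sketched as a rights matrix the system consults before any action lands. The roles, actions, and decision type here are invented for illustration:

```python
# Hypothetical rights matrix: roles mapped to the actions they may take per decision type.
DECISION_RIGHTS: dict[str, dict[str, set[str]]] = {
    "credit_limit_increase": {
        "approve":  {"senior_underwriter", "credit_manager"},
        "override": {"credit_manager"},
        "escalate": {"analyst", "senior_underwriter", "credit_manager"},
    },
}

def authorize(role: str, action: str, decision_type: str) -> bool:
    """The system, not convention, decides whether a role may take an action."""
    return role in DECISION_RIGHTS.get(decision_type, {}).get(action, set())

def finalize(role: str, action: str, decision_type: str) -> str:
    if not authorize(role, action, decision_type):
        raise PermissionError(f"{role} may not {action} a {decision_type}")
    return f"{action} recorded by {role}"

finalize("credit_manager", "override", "credit_limit_increase")  # allowed
# finalize("analyst", "approve", "credit_limit_increase")  # raises PermissionError
```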
Validation Gap
The space where AI outputs could be checked but aren’t. Errors propagate silently because nobody defined what “correct” looks like or built systems to catch deviations.
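Closing a validation gap starts with writing down what "correct" looks like. A minimal sketch for a hypothetical invoice-extraction output; the field names and currency list are placeholders:

```python
# Define "correct" before outputs flow downstream.
REQUIRED_FIELDS = {"invoice_id", "amount", "currency"}
VALID_CURRENCIES = {"USD", "EUR", "GBP"}

def validate_extraction(output: dict) -> list[str]:
    """Return every deviation found; an empty list means the output passed."""
    problems = []
    missing = REQUIRED_FIELDS - output.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    if "amount" in output and (not isinstance(output["amount"], (int, float))
                               or output["amount"] <= 0):
        problems.append("amount must be a positive number")
    if output.get("currency") not in VALID_CURRENCIES:
        problems.append(f"unknown currency: {output.get('currency')!r}")
    return problems

issues = validate_extraction({"invoice_id": "INV-9", "amount": -50, "currency": "USD"})
# issues == ["amount must be a positive number"]
```

Once a check like this sits between the AI and the downstream workflow, errors stop propagating silently: they either get caught here or they pass a test someone explicitly wrote.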
Professional Abdication
When regulated professionals delegate their judgment to AI systems without maintaining meaningful oversight. The professional remains nominally responsible, but actual judgment has shifted to the machine.
Architectural Ambiguity
The state where an organization has AI capabilities deployed without clear decision architecture—unclear ownership, undefined supervision, implicit rather than explicit governance. It is the absence of intentional design.
Structural Fragility
The hidden brittleness in systems that have accumulated automation without governance. Structural fragility doesn’t manifest in normal operations; it surfaces only under stress. An audit, a lawsuit, or a major complaint reveals that the organization can’t explain what its systems do.
Your Next Step
Let's Build Your Advantage
If you are ready to move beyond discussion and start implementing intelligent solutions that deliver measurable impact, let's talk. I am selective about the projects I take on, focusing on partnerships where I can create significant, lasting value.