The Tuesday Morning Test
A simple diagnostic for AI governance: On any random Tuesday morning, can you explain who made a specific decision, what information they had, and why they decided as they did? If not, you have a structural problem.
The Test
Pick a random case from your system—something from last week that involved AI assistance. Answer these questions:
- Who made the key decision(s)? A specific person with a name.
- What information did they have? What AI outputs did they see? What additional context?
- What did they actually evaluate? What factors did they consider?
- How long did they spend? Was it appropriate for the complexity?
- Where is all this documented? In the system, retrievable, auditable.
If you can answer all five with specificity and documentation, your governance is functioning. If you can’t, you have a gap.
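One way to see what "answerable with documentation" demands is to sketch the record your system would need to keep per decision. This is a minimal illustration, not a prescribed schema; every field name here is an assumption.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical decision record: one per AI-assisted decision.
# Field names are illustrative assumptions, not a standard.
@dataclass
class DecisionRecord:
    case_id: str
    decided_by: str             # a specific person with a name
    ai_outputs_seen: list[str]  # the AI outputs shown to the reviewer
    extra_context: list[str]    # additional information consulted
    factors_evaluated: list[str]
    started_at: datetime
    finished_at: datetime
    rationale: str              # why they decided as they did

    def review_duration(self) -> timedelta:
        return self.finished_at - self.started_at

    def passes_tuesday_test(self) -> bool:
        """True only if all five questions are answerable from the record."""
        return all([
            self.decided_by,
            self.ai_outputs_seen or self.extra_context,
            self.factors_evaluated,
            self.review_duration() > timedelta(0),
            self.rationale,
        ])
```

The point of the sketch: each of the five questions maps to a field that must be populated at decision time, not reconstructed afterward.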
Why Tuesday Morning
I call it the Tuesday Morning Test because that’s when governance gets tested in real life—not during planned audits when everyone’s prepared, but on random days when a client calls upset, or a regulator asks questions.
Tuesday morning is arbitrary. It’s meant to be arbitrary. Real accountability requires being able to explain yourself on any day, about any case, without warning.
Common Failures
The “Someone” Problem: Decisions were made by “someone on the team” but there’s no record of who specifically. Accountability is collective, which means it’s nobody’s.
The “Approved” Problem: The log shows “Approved by John” but John can’t remember this specific case. He approved it—he approves dozens daily—but he didn’t document his judgment.
The “AI Did It” Problem: The AI generated the output and a human clicked approve, but there’s no record of what the human actually reviewed.
The “We Could Probably Find It” Problem: The information exists somewhere—in emails, in notes, in someone’s head—but reconstructing it would take days.
The Test Reveals System Design
When an organization fails the test, the failure is rarely about the specific case. It’s about the system that processed the case.
The system didn’t require specific accountability. The system didn’t log the right information. The system didn’t enforce meaningful review.
Blaming the people misses the point. The system created the behavior.
Running the Test
1. Select 3-5 random cases from the past month that involved AI
2. For each, try to answer the five questions within 30 minutes using only system documentation
3. Document where you succeeded and where you hit gaps
4. Identify system changes that would close the gaps
5. Implement changes and test again next quarter
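The sampling-and-gap-checking steps above can be sketched as a small audit script. Everything here is hypothetical: `records` stands in for whatever store your system keeps, and the field names are assumptions you would replace with your own.

```python
import random

# The five questions, expressed as hypothetical record fields.
QUESTIONS = [
    "decided_by",            # who made the decision
    "information_seen",      # AI outputs and context they had
    "factors_evaluated",     # what they actually weighed
    "time_spent_minutes",    # how long the review took
    "documented_in_system",  # retrievable, auditable record
]

def run_tuesday_test(records, sample_size=5, seed=None):
    """Sample random AI-assisted cases and report which of the five
    questions each case's documentation can answer."""
    rng = random.Random(seed)
    sample = rng.sample(records, min(sample_size, len(records)))
    report = []
    for case in sample:
        gaps = [q for q in QUESTIONS if not case.get(q)]
        report.append({"case_id": case.get("case_id"), "gaps": gaps})
    return report

# Example: one fully documented case, one with gaps everywhere.
cases = [
    {"case_id": "A-101", "decided_by": "J. Smith",
     "information_seen": "model output v3 plus client file",
     "factors_evaluated": "risk score, prior history",
     "time_spent_minutes": 12, "documented_in_system": True},
    {"case_id": "A-102", "decided_by": "", "information_seen": "",
     "factors_evaluated": "", "time_spent_minutes": 0,
     "documented_in_system": False},
]
for row in run_tuesday_test(cases, sample_size=2, seed=1):
    print(row["case_id"], "gaps:", row["gaps"] or "none")
```

Anything the script flags as a gap is a system change to make before next quarter's run, not a person to blame.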
Your Next Step
Let's Build Your Advantage
If you are ready to move beyond discussion and start implementing intelligent solutions that deliver measurable impact, let's talk. I am selective about the projects I take on, focusing on partnerships where I can create significant, lasting value.