In the current AI hype cycle, LinkedIn is flooded with “clean” diagrams showing agents, memory, and tools. They make building an agentic enterprise look as simple as wiring an API.
It is a lie.
The reality of Agentic AI isn’t found in the “reasoning” of the model; it is found in the authority boundaries of the organization. If you are building an agentic environment without a clear hierarchy of responsibility, you aren’t building a system—you’re building a liability.
To build safely, we must stop talking about “features” and start talking about the Governance Stack.
1. The Hierarchy of Authority: Who Actually Rules?
In a responsible system, authority only flows downward. Overrides never flow upward. This is the non-negotiable order of operations:
Layer 1: Real-World Law & Company Policy
Before a single line of code is written, the system is bound by GDPR, labor laws, works council agreements, and internal risk appetite. This is the ultimate “No.”
Layer 2: Human-Before-the-Loop (HB4L) — The “Product Owner”
Think of HB4L not as a technical step, but as a role. This is the Product Owner for Delegated Intelligence.
- The Task: Connecting real-world customer demands, management ideas, and complaints to the agent’s behavior.
- The Responsibility: Deciding what we are willing to delegate and where a human must sign off.
- The Reality: This layer is slow. It requires uncomfortable conversations about accountability. If HB4L is missing, the system is illegitimate.
Layer 3: Model Context Protocol (MCP) — Cognitive Constraints
MCP is the supervisor of the agent’s mind. It defines what the agent is allowed to think about and which tools it even knows exist.
- Mental Model: MCP defines the “HTML/DOM”—the structure of the world.
- The Weight: Defining an MCP is a heavy act of responsibility. If you define the MCP, you own the outcomes of the agent’s reasoning.
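The whitelisting idea behind this layer can be made concrete. A minimal sketch, assuming a hypothetical tool registry (the tool names and `visible_tools` helper are illustrative, not a real MCP SDK): the MCP decides which tools exist in the agent’s world at all, so an unlisted tool cannot even be discovered.

```python
# Hypothetical sketch: the MCP layer as an explicit tool whitelist.
# Tool names and the registry structure are illustrative only.

ALL_TOOLS = {
    "read_contract": lambda doc_id: f"contents of {doc_id}",
    "send_email":    lambda to, body: f"sent to {to}",
    "delete_record": lambda rec_id: f"deleted {rec_id}",
}

# The MCP defines the structure of the agent's world:
# only these tools exist as far as the agent is concerned.
MCP_WHITELIST = {"read_contract"}

def visible_tools() -> dict:
    """Return only the tools the MCP allows the agent to know about."""
    return {name: fn for name, fn in ALL_TOOLS.items() if name in MCP_WHITELIST}

tools = visible_tools()
assert "send_email" not in tools  # the agent cannot even discover this tool
```

The point of the sketch: discovery, not just execution, is gated. Whoever edits `MCP_WHITELIST` owns the outcomes of the agent’s reasoning.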
Layer 4: Agentic Client Protocol (ACP) — The Security Gate
ACP is the protector of the organization. It is a strict, boring, non-reasoning enforcement layer.
- Mental Model: ACP is the “CSS”—the global rules—but with a firewall.
- The Task: Checking whether this caller has permission to trigger this action at this moment.
- The Rule: ACP must be powerful enough to enforce rules, but “dumb” enough to never invent them. It says “Yes” or “No,” never “Instead, do this.”
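The downward-only flow of the four layers can be sketched as a chain of gates. A minimal illustration (all layer functions and action names are hypothetical): each layer may only narrow what the layer above already allows, and no lower layer can re-allow what a higher layer denied.

```python
# Hypothetical sketch of the four-layer governance stack.
# Each layer is a gate; authority flows downward only.

from typing import Callable

Gate = Callable[[str], bool]  # action -> allowed?

def law_policy(action: str) -> bool:      # Layer 1: law & company policy
    return action != "export_personal_data"

def hb4l(action: str) -> bool:            # Layer 2: what we chose to delegate
    return action in {"summarize", "draft_email"}

def mcp(action: str) -> bool:             # Layer 3: tools the agent knows exist
    return action in {"summarize", "draft_email", "send_email"}

def acp(action: str) -> bool:             # Layer 4: permission at call time
    return action != "send_email"

STACK: list[Gate] = [law_policy, hb4l, mcp, acp]

def allowed(action: str) -> bool:
    # Every layer must say yes; a single "no" anywhere is final.
    return all(gate(action) for gate in STACK)

assert allowed("summarize")
assert not allowed("send_email")             # denied below, never re-allowed
assert not allowed("export_personal_data")   # denied at Layer 1, no override
```

The design choice to notice: `allowed` has no branch that lets a lower gate overturn a higher one, which is exactly the non-negotiable order of operations above.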
2. Why You Should Start with ACP (The Practical Shortcut)
Waiting for a perfect Human-Before-the-Loop strategy can take months. To move fast without breaking the company, start with ACPs.
By defining the ACP first, you create a hard safety envelope based on existing company policy (e.g., “Agents can never email customers without approval”).
- ACP provides immediate protection.
- MCP can then be designed with more freedom because the “deadly” actions are already blocked.
- HB4L can evolve as the organization learns from the safe sandbox you’ve built.
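The safety envelope can be as plain as a lookup table. A minimal sketch, assuming a hypothetical `POLICY` map and `acp_check` function: the gate returns only allow or deny, denies anything unknown by default, and never rewrites or reinterprets the request.

```python
# Hypothetical sketch of an ACP gate derived from existing company policy.
# All role and action names are illustrative.

POLICY = {
    # (caller_role, action) -> allowed?
    ("agent", "draft_email"): True,
    ("agent", "send_email"):  False,  # "Agents never email customers without approval"
    ("human", "send_email"):  True,
}

def acp_check(caller_role: str, action: str) -> bool:
    """Return True or False only. The gate never substitutes a "better"
    action ("dumb" by design); unknown requests are denied by default."""
    return POLICY.get((caller_role, action), False)

assert acp_check("agent", "send_email") is False   # blocked outright
assert acp_check("agent", "draft_email") is True   # drafting is fine
assert acp_check("agent", "wire_money") is False   # default deny
```

Because the envelope is data, not reasoning, it can be reviewed by compliance before any MCP or HB4L work exists.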
3. The “Connector” Trap: Why Integration is Power
We often mistake model quality for agentic capability. It’s why ChatGPT feels “smooth” compared to open-source alternatives. It’s not just the model; it’s the connectors (Google Drive, Gmail, Slack).
Connectors are the gateway to real adoption.
- Theory: “We have a sovereign model.”
- Reality: If the agent can’t reach the systems where work happens (with proper auth and permissions), it is a toy, not a tool.
- Strategy: Agentic AI won’t be won by whoever has the “best” model, but by whoever owns the everyday surfaces where work happens.
4. Final Verdict: Craziness or Maturity?
If you find yourself worrying about who owns the MCP, or whether the ACP is becoming too “smart” and bypassing human intent—good.
That isn’t “crazy thinking.” That is the sound of an architect realizing that Agentic AI is an organizational transformation, not a software update.
The goal is not to build an autonomous agent. The goal is to build a governed system that survives reality.
| The Red Flag | Why it’s Dangerous | The Fix |
| --- | --- | --- |
| “The Agent decides which tool to use based on the prompt.” | This bypasses MCP. If the agent can “discover” tools on its own, it can bypass your risk strategy. | Tools must be explicitly whitelisted in the MCP layer before the agent even starts reasoning. |
| “The ACP will intelligently fix user errors.” | If the ACP (the gatekeeper) starts being “helpful” or interpreting intent, it is no longer a guardrail—it’s an unmanaged co-pilot. | The ACP must be “dumb.” It should only allow or deny based on hard rules. No interpretation. |
| “We’ll define the safety rules inside the System Prompt.” | Prompts are “vibes,” not protocols. They are easily bypassed by jailbreaks or hallucinations. | Move non-negotiable rules into the MCP and ACP protocols where they are enforced by code, not “suggestions.” |
| “The engineering team is responsible for the AI’s behavior.” | Engineers own the plumbing, not the purpose. This creates “Janitor Humans” who clean up messes they didn’t authorize. | Assign an HB4L Lead (Agentic Product Owner) who explicitly signs off on what the agent is allowed to decide. |
| “We have 50 different ACPs for 50 different tasks.” | This is governance spaghetti. You cannot audit 50 different permission sets. | ACPs should mirror your company’s organizational authority (e.g., HR-Access, Finance-Read, Public-Comm). Keep them few and stable. |
| “The agent can override a human rule if it’s more efficient.” | This is the death of responsibility. | Authority only flows down. No agent or protocol may ever override an HB4L or Company Policy directive. |

And to wrap up, I’d like to give you a goodie 🙂
Checklist: The MCP Decision Review
Focus: Delegation of responsibility instead of technical code acceptance.
1. Identity & Mandate (The ‘Who’ and ‘What’)
- [ ] Unique name: Does the agent have a functional name that clarifies expectations? (e.g. Contract Risk Scanner instead of AI Assistant 1)
- [ ] Role definition: Is the specific task being delegated set out in writing?
- [ ] Escalation triggers: Are the ‘hard stop’ conditions defined? (When must the agent be handed over to a human being?)
2. The conceptual framework (the MCP fabric)
- [ ] Explicit assumptions: What business rules have been stored in the MCP? (e.g. ‘When in doubt, always err on the side of safety, not profit’).
- [ ] Knowledge boundaries: Which data sources can the agent rely on, and which must it ignore?
- [ ] Dealing with hallucinations: How does the agent behave when information is missing? (Does it have permission to say ‘I don’t know’?)
3. Guard rails & power (ACP check)
- [ ] Write permissions: Is the agent allowed to execute actions independently (ACP) or only suggest drafts?
- [ ] Budget/limit: Are there financial or quantitative limits to the agent’s actions?
- [ ] Audit trail: Is it ensured that every decision can be traced back to a specific version of the MCP?
4. Leadership & error culture (the ‘human factor’)
- [ ] Ownership: Which person (manager/lead) takes responsibility if the agent makes a ‘wrong’ decision within its MCP?
- [ ] Feedback loop: How does the agent (or team) find out that a result did not meet expectations? (HB4L process).
- [ ] Psychological safety: Is the team prepared to analyse an error made by the agent as a ‘design error in the MCP’ instead of looking for a scapegoat?
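The audit-trail item in section 3 can be sketched in a few lines. A minimal illustration (the `record_decision` helper, field names, and version string are all hypothetical): every decision is stamped with the exact MCP version that governed it, so a ‘wrong’ decision can later be analysed as a design question about that version.

```python
# Hypothetical sketch: an audit trail that ties every agent decision
# to a specific MCP version, as the checklist requires.

import datetime
import json

AUDIT_LOG: list[dict] = []

def record_decision(agent: str, mcp_version: str, action: str, allowed: bool) -> None:
    """Append one traceable decision record to the audit log."""
    AUDIT_LOG.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent,
        "mcp_version": mcp_version,  # the traceability anchor for reviews
        "action": action,
        "allowed": allowed,
    })

record_decision("Contract Risk Scanner", "mcp-v1.3.0", "flag_clause", True)
print(json.dumps(AUDIT_LOG[-1], indent=2))
```

With this in place, the retrospective question shifts from “who messed up?” to “which MCP version allowed this, and who signed it off?”.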
The ‘Sanskrit sentence’ certificate (at the end of the meeting)
The review is only passed if all three questions are answered with ‘yes’:
- Understanding: Do we know exactly why the agent makes the decisions it does?
- Responsibility: Are we prepared to stand by this decision in front of the customer/board as if we had made it ourselves?
- Willingness to learn: Do we already have the date for the next review (retrospective) in our calendar?
