Are You Really Leading AI, or Is AI Leading You?
There are five ways to lead a team that includes both humans and AI. You are almost certainly not in the configuration you think you are. And that gap — the distance between who you believe is in charge and who actually shapes the decision — is the most dangerous blind spot in modern leadership.
A new paper by Alejandro R. Jadad introduces the concept of misrecognition: the moment when a leader maintains a human-centered story about who controls decisions after authority has already shifted to AI.
The core framework is a five-position spectrum:
Pure Human → Centaur → Co-equal → Minotaur → Pure AI
Each position answers three critical questions:
- Who frames the problem? (Who defines what needs to be decided?)
- Who redirects the work? (Who steers when the process goes off-course?)
- Who can answer for what follows? (Who bears responsibility for the outcome?)
Your answers to these three questions reveal your actual configuration. They may not match the configuration you think you are in. That mismatch is misrecognition.
The central insight: you may not be leading your AI systems. Your AI systems may be leading you. And the first step to fixing it is recognizing that the gap exists.
Executive Summary
Jadad’s framework provides a practical diagnostic tool for any leader overseeing a team that includes both humans and AI agents. No data, dashboards, or engineering support required — just honest answers to three questions.
The five configurations:
- Pure Human — Humans do everything. No AI involved in decision-making.
- Centaur — Humans lead, AI assists. Human frames, human redirects, human answers.
- Co-equal — Humans and AI collaborate as genuine partners.
- Minotaur — AI leads, humans assist. AI frames, AI redirects, but humans retain nominal responsibility.
- Pure AI — AI does everything autonomously.
The central risk — misrecognition: Leaders believe they are in a Centaur configuration (human-led with AI in the loop) when they are actually in Minotaur (AI-led with humans in the loop).
The missing capability — co-adaptability: The capacity of a human-AI configuration to improve as participants adjust together. Measurable and cultivable.
Paper at a Glance
| Metric | Value |
|---|---|
| Title | Leading Across the Spectrum of Human-AI Relationships: A Conceptual Framework for Increasingly Heterogeneous Teams |
| Author | Alejandro R. Jadad |
| Published | April 30, 2026 |
| Venue | arXiv (Computer Science) |
| Relevance Score | 95/100 (VERY HIGH) |
| Focus Domain | Leadership, Executive Decision-Making, Human-AI Team Management |
| Paper URL | arxiv.org/abs/2604.27392 |
The Five Leadership Configurations
Position 1: Pure Human
Humans do everything. No AI involvement. Increasingly rare — even organizations that formally exclude AI from decisions may find AI has shaped the framing, information, and options available.
Position 2: Centaur
The configuration most leaders believe they are operating in. Humans frame the problem, set objectives, and choose among options. AI provides analysis. Humans redirect. Humans answer.
But the paper asks you to verify with the three-question test. Simply asking AI for input does not guarantee you are shaping the decision.
Position 3: Co-equal
Humans and AI as genuine partners. Both contribute to problem definition. Both redirect. Accountability is shared — though legally ambiguous.
Most executives aspire to this configuration. Few have achieved it. True co-equality requires mutual influence in a continuous feedback loop.
Position 4: Minotaur
The configuration most executives are actually in but do not recognize. AI frames the problem, sets direction, and redirects. Humans remain nominally in the loop — but the decision’s shape has been determined before the human ever sees it.
This is misrecognition’s most common setting. The human believes they are a Centaur. In reality, the AI is leading.
Position 5: Pure AI
AI operates autonomously. High-frequency trading, automated supply chains, autonomous customer routing. Pure AI is not inherently dangerous — it is dangerous when misrecognition places it in a leader’s blind spot.
📋 The Three-Question Test
Run this on your last five major decisions:
- Who framed the problem? — Who defined what needed to be decided? If your AI surfaced and categorized the issue, the AI framed the problem, even if you made the final selection among the frames it offered.
- Who redirected the work? — When the process hit an unexpected obstacle, who determined the new direction? If the AI adjusted without your input, the AI is redirecting.
- Who can answer for what follows? — If the outcome is negative, who is accountable? If you are on the hook but the AI shaped the decision — you are in misrecognition.
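The paper presents the test as three questions, not as a formal procedure, but its logic can be sketched as a small lookup. The encoding below is an illustrative assumption: each answer is recorded as "human", "ai", or "shared", and the mapping to configuration labels follows the paper's descriptions of each position.

```python
# Hypothetical encoding of the three-question test. The answer values and the
# mapping are illustrative assumptions, not a formal algorithm from the paper.

CONFIGURATIONS = {
    ("human", "human", "human"): "Centaur",
    ("shared", "shared", "shared"): "Co-equal",
    ("ai", "ai", "human"): "Minotaur",
    ("ai", "ai", "ai"): "Pure AI",
}

def diagnose(frames: str, redirects: str, answers: str) -> str:
    """Map the answers (who frames, who redirects, who answers) to a label."""
    key = (frames, redirects, answers)
    if key in CONFIGURATIONS:
        return CONFIGURATIONS[key]
    # AI-led framing or redirection paired with human accountability is the
    # misrecognition pattern the paper warns about.
    if "ai" in (frames, redirects) and answers == "human":
        return "Minotaur (likely misrecognized as Centaur)"
    return "Mixed: review this decision more closely"

print(diagnose("human", "human", "human"))  # Centaur
print(diagnose("ai", "human", "human"))     # Minotaur (likely misrecognized as Centaur)
```

Running this over your last five major decisions makes the gap between believed and actual configuration concrete rather than impressionistic.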
Why Misrecognition Matters
Misrecognition is not a theoretical risk. It is happening right now in organizations that believe they have implemented “responsible AI” with “human-in-the-loop” oversight.
The paper identifies three specific scenarios:
Gradual drift: An AI tool starts as Centaur (you ask, it analyzes, you decide). Over time, you trust its outputs more, stop questioning its framing, and accept its recommendations by default. The configuration has shifted from Centaur to Minotaur, but you have not updated your mental model.
Layered configurations: An organization may be Centaur at the formal level (CEO reviews AI recommendations before approving) but Minotaur at the operational level — the AI shaped recommendations in ways the CEO cannot fully see.
Ceremonial oversight: Humans remain “in the loop” but have been reduced to a rubber stamp. The loop provides cover for AI-driven decisions without adding value. The paper is blunt: ceremonial oversight is worse than no oversight because it gives false confidence.
The cost of misrecognition: strategic errors made with full confidence — because the leader believed they were in control when they were not.
Co-Adaptability: The Capability That Separates Great Teams
Jadad introduces co-adaptability as a measurable property: the capacity of a human-AI configuration to improve as participants adjust together.
A high co-adaptability team shifts fluidly between configurations as decisions change. For a critical strategic decision, it may operate as Co-equal. For routine analysis, Centaur. For high-speed operations, Pure AI.
A low co-adaptability team settles into a rigid configuration — often Minotaur — and stays there regardless of the decision type.
The implication: co-adaptability is a capability you can build. It requires training humans to recognize when the configuration needs to shift. It requires designing AI systems to be configurable across the spectrum. And it requires regular audits to check that your actual configuration matches your intended configuration.
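The audit step can be sketched as a simple comparison between the configuration a leader believes is in place and the one the three-question test reveals. The decision records and field names below are illustrative assumptions; the paper does not prescribe a data format.

```python
# Hypothetical misrecognition audit over a log of recent decisions.
# Records and names are invented for illustration.

decisions = [
    {"name": "Q3 budget reallocation", "believed": "Centaur", "actual": "Minotaur"},
    {"name": "Vendor shortlist",       "believed": "Centaur", "actual": "Centaur"},
    {"name": "Routing policy update",  "believed": "Pure AI",  "actual": "Pure AI"},
]

def audit(records):
    """Return the decisions where believed and actual configurations diverge."""
    return [r for r in records if r["believed"] != r["actual"]]

for r in audit(decisions):
    print(f"Misrecognition: {r['name']} ({r['believed']} -> {r['actual']})")
```

Run regularly, a log like this turns "check that your actual configuration matches your intended configuration" from a slogan into a repeatable review.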
What Business Leaders Should Do Next
- Run the three-question test on your last five major decisions. Document the actual configuration for each.
- Audit for misrecognition. Compare the configuration you believe you are in with what the three-question test reveals.
- Map your organization’s configurations. For each department and major decision type, identify the actual configuration.
- Build co-adaptability. Create feedback loops that help heterogeneous teams improve their collaborative capability.
- Design intentional configurations. Pre-select the appropriate configuration for each decision type. Do not let configurations drift unconsciously.
- Ensure substantive oversight. If AI is driving decisions, ensure the human in the loop can actually influence the outcome.
- Align accountability with authority. Whoever answers for the outcome must have real influence over the decision.
Conclusion
The future of leadership is not about humans versus AI. It is about leaders who can see clearly which configuration is actually at work, recognize when it has shifted, and intervene before misrecognition turns a strategic decision into a strategic error.
“These configurations will shape how power, responsibility, and trust are distributed in organizational life. Whether the futures they help create remain governable and worth inhabiting will depend on leaders who can see, early enough, where and how consequential decisions are actually being shaped.”
Your AI systems are already shaping decisions. The question is whether you are aware of it.