Early in my career, I trained and operated dogs on tasks ranging from life-saving rescue to counter-terrorism. Embedding dogs into operational units that had no prior experience working with them gave me a practical understanding of what it actually takes to integrate nonhuman intelligence into human organizations.
Now that AI agents are taking their first steps into our workforce, I can’t help but see the parallels. Many of the challenges we face today, from training to day-to-day collaboration, are ones we have already solved in a different domain.

The prevailing mindset in tech is that AI agents will integrate and work side by side with us almost instantly, simply because they are “advanced”. History suggests otherwise.
Humans have more than 15,000 years of experience working with nonhuman intelligence. What that experience shows is that in cross-intelligence cooperation, more intelligence can be counterproductive.
My three main lessons:
- There is no such thing as super-intelligence, only different forms of intelligence suited for different roles.
- Intuitive communicators outperform autonomous problem-solvers when working with humans.
- Sustainable, reliable integration requires ramp-up and long-term relationships.
Insight 1: Super-intelligence is a myth; there is only the right intelligence for the right problem
There is a fundamental disconnect between how intelligence is discussed in AI, largely rooted in mathematics and computer science, and how intelligence is understood in biology and cognitive science.
The AI discourse is often driven by a semi-religious belief that intelligence is inherently “super-natural”, somewhere between alien entities that will dominate us and god-like systems that will usher in the end of days. This belief quietly encourages the assumption that the hard parts of integration will somehow take care of themselves.
If you ask people who actually study intelligence, it’s a different picture.
In the cognitive and life sciences, it is widely accepted that intelligence cannot be placed on a single ladder and ranked from low to high. It is multi-dimensional, context-dependent and modular. Different species occupy different peaks in a complex cognitive landscape.
In practical applications, this means choosing the right organism for the task based on physical, sensory, cognitive, behavioral and social traits.
This is true not only across species but within them: even among dogs, different breeds are chosen for different roles and are not interchangeable.
The idea that off-the-shelf AI agents will autonomously perform all tasks equally well is no more realistic than expecting a single dog breed to excel at every mission.
Insight 2: Communication beats intelligence
The clearest example of this is the wolf–dog comparison. Wolves outperform dogs at autonomous problem solving. They persist longer, try more strategies and solve puzzles faster when left on their own. Dogs give up earlier, not because they are less capable, but because they are doing something else. When a task becomes difficult, dogs stop and look back at the human. Wolves do not. Dogs assume collaboration; wolves assume autonomy.
That difference matters in real operational settings. In a K9 context, much of the human–dog communication is non-verbal and happens under stress. The dog is constantly reading where the handler is looking, how they are positioned and whether they are tense or relaxed. A pause, a shift in stance or eye contact can signal “slow down”, “redirect” or “hold position” without a word being said. That kind of coordination only works because the dog is tracking intent in context, not just commands.
This is also why dogs succeed at simple human-guided tasks where wolves and chimps fail. Dogs follow pointing and gaze naturally. They treat human movement as communication, not noise. That ability, more than raw problem-solving power, is what makes them reliable partners.
Today’s AI models are still built to be overly accommodating: they grind away at tasks far longer than they should, while lacking real understanding and memory of human intent as it unfolds in real time. In the wolf–dog frame, they persist like wolves when they should check in like dogs.
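To make that concrete, here is a minimal sketch of what “looking back at the handler” could mean for an agent. Everything in it is an assumption: the names, the thresholds and the callables are illustrative, not any real framework’s API.

```python
from dataclasses import dataclass

# Illustrative only: every name and threshold below is hypothetical. The point
# is the shape of the behavior: bounded solo effort, then a deliberate check-in.

CONFIDENCE_FLOOR = 0.6   # below this, defer to the human instead of persisting
MAX_SOLO_ATTEMPTS = 3    # wolves keep trying indefinitely; dogs stop and look back

@dataclass
class Attempt:
    output: str
    confidence: float    # the agent's own estimate, however it is derived

def collaborate(task: str, try_once, ask_handler):
    """Attempt a task, but yield to the human once progress stalls.

    try_once:    callable(task) -> Attempt, the agent's autonomous problem solving
    ask_handler: callable(question) -> str, the human's redirect/hold/continue
    """
    for _ in range(MAX_SOLO_ATTEMPTS):
        attempt = try_once(task)
        if attempt.confidence >= CONFIDENCE_FLOOR:
            return attempt.output   # confident enough to proceed on its own
    # The "look back": surface the situation and hand control to the handler.
    return ask_handler(f"Stuck on '{task}' after {MAX_SOLO_ATTEMPTS} attempts. Redirect?")
```

The interesting design choice is the bounded loop: the agent’s value here comes from knowing when to stop, not from how long it can persist.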
Insight 3: Co-training matters, and long-term pairing creates accountability
When working with dogs professionally, the more critical the mission, the more time is spent in mutual training, where handler and dog learn each other’s strengths and limitations. Even the best-trained dogs have blind spots. Dogs don’t intuitively understand why passing on opposite sides of a pole causes the leash to lock. Every dog has irrational triggers (4x4 wheels, specific animals) that can instantly override years of training and derail tasks it has successfully completed thousands of times.
AI agents have their own versions of these non-intuitive failure modes. You do not discover them in demos. You discover them through repeated, realistic collaboration.
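One practical habit is to make that discovery cumulative. Below is a sketch, under invented names and an invented file format, of a blind-spot log that turns each surprise into a replayable regression case instead of tribal knowledge.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical sketch: a "blind-spot log" built up during day-to-day work.
# The file layout and every field name here are invented for illustration.

LOG = Path("agent_blind_spots.jsonl")

def record_blind_spot(agent_id: str, task: str, trigger: str,
                      expected: str, actual: str) -> None:
    """Append one observed failure so it becomes a permanent regression case."""
    entry = {
        "when": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "task": task,
        "trigger": trigger,     # this agent's equivalent of "4x4 wheels"
        "expected": expected,
        "actual": actual,
    }
    with LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def known_triggers(agent_id: str) -> list[str]:
    """Everything this agent has tripped on; replay these before each rollout."""
    if not LOG.exists():
        return []
    with LOG.open() as f:
        entries = [json.loads(line) for line in f]
    return [e["trigger"] for e in entries if e["agent"] == agent_id]
```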
Long-term relationships matter just as much. K9 units work because the same dog and handler stay together, building shared expectations over time. The entire team understands the capabilities and limitations of the handler–dog pair. AI agents need similar continuity: persistent memory, stable workflows and long-term human counterparts.
But continuity on its own is not enough. We also need clear role definitions and accountability frameworks. Without clarity on what an agent owns, what it does not, and how its output is evaluated, organizations end up treating agents like temporary contractors rather than durable teammates.
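What might that look like in practice? Here is a minimal sketch of a “pairing charter” that makes continuity and ownership explicit. It is not a real framework; every field, identifier and path in it is an assumption for illustration.

```python
from dataclasses import dataclass

# Hypothetical sketch of a "pairing charter": one way to write down continuity,
# ownership and accountability. All names and values below are invented.

@dataclass
class PairingCharter:
    agent_id: str            # the same agent instance over time, not a fresh one per task
    handler: str             # a named, long-term human counterpart
    owns: list[str]          # tasks the agent is accountable for end to end
    does_not_own: list[str]  # explicitly out of scope, so boundaries cannot drift
    escalates_to: str        # who reviews output and answers check-ins
    memory_store: str        # where shared history persists across sessions

# Example pairing; every value is made up.
triage_pair = PairingCharter(
    agent_id="support-agent-7",
    handler="dana",
    owns=["ticket triage", "first-response drafts"],
    does_not_own=["refunds", "commitments to customers"],
    escalates_to="dana",
    memory_store="s3://team-memory/support-agent-7",  # illustrative location
)
```

The point is not the specific fields but that they are written down: continuity and ownership become reviewable artifacts rather than assumptions.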
The bottom line
The lesson from integrating dogs into human-centric organizations is not about making systems smarter. It’s about making them better partners and creating a process in which the relationship can form over time. The hardest part of that integration is us: humans and organizations need the right habits, patience and accountability to collaborate with another kind of intelligence day after day.
–
Further reading
The New Breed, Kate Darling
Explores how humans have historically formed working relationships with machines and why trust, norms, and roles matter more than raw capability.
The Genius of Dogs, Brian Hare
Shows why dogs are uniquely good collaborators with humans, not because they are the smartest animals, but because they evolved to read human intent and communicate across species.
God, Human, Animal, Machine, Meghan O’Gieblyn
Examines how modern AI narratives echo older religious ideas about intelligence, agency, and control, helping explain why “super-intelligence” myths persist.
