Leading Like a Gardener: Why Adaptability Beats Efficiency in the AI Age
What ‘Team of Teams’ Teaches Us About Navigating AI’s Exponential Era
In 1805, Admiral Horatio Nelson stood on the deck of his flagship at the Battle of Trafalgar, facing a powerful enemy fleet. Conventional naval wisdom at the time dictated tight, centralized control: officers waited anxiously for detailed orders from their admiral, communicated painstakingly by flags and vulnerable to misinterpretation, smoke, or enemy fire. But Nelson chose a different path. He shared clear intent, established trust, and then simply allowed his captains to adapt and respond to real-time conditions.
He famously said,
“No captain can do very wrong if he places his ship alongside that of the enemy.”
Nelson understood something deeply human: in moments of genuine complexity, rigid efficiency breaks. True strength lies not in perfect execution, but in adaptability, autonomy, and trust.
Two centuries later, General Stanley McChrystal rediscovered Nelson’s principle during his command in Iraq. Faced with a decentralized, adaptive enemy, McChrystal realized his traditional military structure, built on efficiency, hierarchy, and precise control, was rapidly becoming obsolete. The battles they faced couldn’t be won through micromanagement. Like Nelson, McChrystal shifted to a model he called a “Team of Teams,” where clarity of purpose, shared understanding, and empowered execution replaced rigid control.
But this isn’t just a military lesson. The same fundamental truth quietly plays out in nature every day; just look at an ant colony. Many assume the queen ant’s role is passive, even insignificant. She doesn’t gather food, build tunnels, or directly defend the colony. She simply ensures the continuous creation of new ants. On the surface, her role seems inefficient or even redundant. Yet her quiet, consistent presence is precisely what allows the colony’s decentralized system to thrive. Individual ants are trusted, adaptive decision-makers, responding to real-time complexity without centralized command. The queen’s apparent inaction is the colony’s greatest strength, enabling continuous adaptability and growth.
We find ourselves today facing a challenge no less complex and uncertain. AI-driven change is not just accelerating; it’s growing exponentially. Just like Nelson’s captains, we can’t thrive by controlling every detail or perfecting execution. And just like the ant colony, our true advantage won’t come from rigid efficiency or centralized control. It will emerge from cultivating clear intent, trusting autonomy, and empowering teams to respond adaptively to the complexity around us.
I’ve felt this shift vividly in my own journey with AI. For years, I chased productivity and optimization, trying to keep pace, trying to execute flawlessly. But recently I paused, stepping back and asking deeper questions. My thinking began to shift, aligning less with automation-first efficiency and more with augmentation-first adaptability. I saw that the true advantage in AI isn’t performing faster, but thinking clearer, delegating thoughtfully, reflecting constantly, and adapting continuously.
In previous posts like the Iron Man Mentality, AI for Humanity, and my recent reflections on Augmentation: The New Strategic Frontier, I’ve explored how a similar shift from control and efficiency to adaptability and empowerment has unlocked deeper clarity, strategic alignment, and human potential in my work and thinking.
This moment isn’t just mine; it’s ours. We’re all standing together at this crossroads, navigating complexity none of us fully understands yet. Like Nelson’s captains, like the ants quietly securing their colony’s future, we’re learning that our strongest moments aren’t defined by flawless execution, but by empowered adaptability.
As we step forward together into this complex, AI-accelerated era, the question we face isn’t whether we can become more efficient. The real question is whether we can clearly cultivate the trust, adaptability, and empowerment needed not just to survive but to truly thrive.
Complexity vs. Efficiency: The New Leadership Imperative
In his book Team of Teams, General Stanley McChrystal vividly contrasts two ways of thinking and leading, approaches that initially seem similar, yet are fundamentally different: managing “complicated” systems versus navigating “complex” environments.
“Complicated” problems, he explains, are linear and predictable, like assembling a car or optimizing a factory. These tasks respond well to centralized efficiency, precision, and detailed instructions. For over a century, this Taylorist mindset dominated organizations, producing enormous productivity and economic growth.
Yet today, most challenges we face aren’t merely complicated; they’re complex. Complex environments, like ecosystems, markets, or AI-driven networks, are interconnected, unpredictable, and emergent. Their complexity arises mainly from the number of connections between elements, not the number of nodes. They don’t respond to centralized efficiency or micromanaged control. Complexity demands continuous adaptation, real-time responsiveness, and decentralized judgment.
McChrystal learned this lesson firsthand when leading special operations teams in Iraq. His traditional command-and-control structures faltered dramatically against a nimble, decentralized opponent. Efficiency was no longer a strength; it had become a critical vulnerability. McChrystal shifted his organization radically, embracing adaptability, empowerment, and trust. This enabled his forces to respond fluidly, decisively, and effectively to real-time complexity.
This insight isn’t unique to military strategy. Look at what happened to Nokia, Kodak, and countless others who relied heavily on efficiency and predictability. Each missed critical shifts driven by exponential complexity, clinging tightly to linear methods until it was too late. Efficiency, once the strategic gold standard, became their downfall.
AI pushes this principle even further. AI grows and evolves exponentially, not linearly. Like grains of rice doubling on a chessboard, the impact quickly surpasses our linear thinking. The tools we relied on to manage predictable growth—standardization, hierarchical control, micromanagement—are inadequate against AI’s complexity.
Yet the real obstacle isn’t technological; it’s psychological. As I’ve explored previously in “Leading Through What You’re Afraid Of”, our quiet, unspoken fears around AI’s impact are powerful blockers. Leaders and teams often hesitate, not because the tools aren’t ready, but because we aren’t addressing the human fears beneath our actions. Adaptability requires psychological safety, an environment of trust where teams openly acknowledge fears and adapt confidently.
So what does adaptability look like in practice? It means cultivating clarity of intent, delegating thoughtfully, and empowering teams to respond fluidly, just like Nelson’s captains, ant colonies, or McChrystal’s teams. In practical terms, it means embracing an augmentation-first mindset: placing humans at the strategic center and amplifying our adaptability, judgment, and reflection with AI, rather than replacing them with automation.
As you read this, ask yourself honestly:
Are we explicitly prioritizing adaptability and decentralized trust within our teams, or are we still optimizing primarily for linear efficiency?
Have we openly acknowledged and addressed the fears and uncertainties our employees feel about AI’s implications for the job market and their roles, as well as our own fears?
These aren’t rhetorical questions. They’re strategic imperatives.
We stand together at this crossroads. Efficiency and automation were powerful strategies in a predictable past, but adaptability and augmentation are the new competencies we need to thrive in our complex, AI-driven present.
Now is the time to recalibrate our approach, not merely updating our tools, but reorienting our mindset and leadership around adaptability and trust. The lessons are here, vividly illustrated by Nelson, McChrystal, and the quiet resilience of ant colonies. It’s up to us to recognize and apply them.
Embracing Augmentation-First: The Practical Shift from Efficiency to Adaptability
If we accept that adaptability, trust, and empowered decision-making are essential in navigating our AI-driven complexity, the practical question becomes: How exactly do we implement this?
I’ve written deeply about the “Augmentation-First” mindset in my recent blog “Augmentation: The New Strategic Frontier.” In short, Augmentation-First places human judgment, adaptability, reflection, and creativity at the core of organizational strategy. It leverages AI intentionally as a tool that amplifies human strengths, rather than replacing them through automation.
We don’t need to rehash this concept fully here; rather, our goal is explicitly practical: to clarify how Augmentation-First aligns with the adaptability and trust principles we’ve discussed, and how we can begin implementing it within our organizations.
Organizations adopting an Augmentation-First approach align with three core principles highlighted in Team of Teams:
Adaptability: Humans empowered to respond dynamically and strategically to complexity.
Trust: Building psychological safety as the foundation for agile collaboration and innovation.
Reflection: Embedding structured thinking and judgment into the organizational rhythm.
How Augmentation-First Looks in Practice: A Quick Example
Consider Netflix. When Netflix transitioned from mailing DVDs to streaming online content, it didn’t just automate its existing processes; it leveraged adaptability, human judgment, and strategic reflection to navigate the profound shift. Unlike Blockbuster, which stuck rigidly to its traditional efficiency-driven model, Netflix embodied an Augmentation-First mindset, empowering teams to experiment, reflect, pivot, and reinvent continuously. This adaptability and human judgment allowed Netflix not only to survive but to thrive and define the future of entertainment.
Practical Steps to Start Implementing Augmentation-First
Here are three clear, actionable steps our teams can start implementing right away, shifting toward Augmentation-First and adaptability-focused leadership:
1. Implement Weekly Reflection Sessions:
Schedule dedicated weekly sessions (even just 30 minutes) for your team to pause, reflect, and discuss strategic alignment, adaptability, and lessons learned. Encourage open conversations about what’s working, what’s not, and which assumptions need to be challenged.
2. Empowered Delegation Experiment:
Intentionally delegate one meaningful task or decision to your team this week. Provide clear intent and purpose, but avoid detailed instructions or micromanagement. Observe how the team adapts, collaborates, and strategically navigates the task.
3. Openly Address Team Fears Around AI:
Set up a safe, honest, and structured conversation within your team to acknowledge and discuss fears or uncertainties about AI and its exponential impact. Frame this as an essential strategic conversation for unlocking adaptability, trust, and innovation, aligned with insights from my previous blog “Leading Through What You’re Afraid Of”.
These practical actions aren’t merely symbolic. They unlock deeper AI fluency, scaling not just faster execution but wisdom, judgment, and strategic adaptability. By embracing Augmentation-First, we avoid the risk of scaling bad decisions and instead build strategic clarity and adaptive resilience.
Reflection Questions for Our Strategic Clarity
As we move forward, here are reflection questions designed to sharpen our strategic clarity and organizational adaptability:
“Are we using AI to automate tasks without first assessing their strategic alignment and adaptability?”
“Have we prioritized psychological safety enough in our teams to enable genuine adaptability and innovative responses to AI’s impact on them?”
“Are our leaders explicitly modeling curiosity, humility, and adaptability, or implicitly still valuing efficiency, certainty, and control?”
These questions aren’t rhetorical. They help illuminate your strategic blind spots, enabling you to recalibrate your leadership and organizational culture for adaptability and sustainable differentiation.
Connecting Back to Our Broader Journey
This shift toward Augmentation-First isn’t isolated; it’s part of our broader journey together. We’ve discussed why efficiency-driven, automation-first strategies are no longer sufficient, vividly illustrated through Nelson’s captains, ant colonies, and McChrystal’s team of teams. Augmentation-First embodies these historical and biological insights, translating them into practical, actionable strategies designed to navigate AI’s exponential complexity.
In adopting Augmentation-First practically, we’re not just adapting to AI; we’re intentionally harnessing its full strategic potential, not merely to move faster, but to think clearer, delegate smarter, and adapt continuously.
Together, let’s shift not just our tools but our mindset, from efficiency to adaptability. It’s not merely a strategic preference; it’s the new strategic imperative for thriving in the AI-driven era.
From Chess Masters to Gardeners: Leadership in the AI Era
General Stanley McChrystal offers a powerful metaphor for leadership in complex environments. He contrasts two leadership styles:
Chess Masters carefully position and control each piece, planning every move and relying on predictable outcomes. In stable, linear situations, this control works brilliantly.
Gardeners, by contrast, don’t seek direct control over every element. Instead, they cultivate environments that enable organic growth, adaptability, and resilience. They set clear conditions, nourishing the soil, ensuring sunlight, providing water, and then trust the seeds they’ve planted to grow naturally.
In complexity, the gardener mindset thrives. It acknowledges what complexity demands: decentralized decision-making, trust, adaptability, and reflection. Just like Admiral Nelson trusting his captains, or an ant queen enabling colony-wide resilience simply by laying eggs, the gardener creates the right environment and then steps back, letting the natural dynamics of adaptability unfold.
What struck me most when reading Team of Teams was how powerfully it validated something I’d already felt and articulated in my ‘AI for Humanity’ post: the necessity of what I called an ‘inversion of control’. Before encountering McChrystal’s concept of leaders as gardeners, I proposed the Greenhouse Model as a practical way to flip traditional, rigid control and instead cultivate environments of trust, reflection, and adaptive autonomy. When I discovered McChrystal describing leadership in precisely these terms, gardeners nurturing adaptable teams rather than chess masters controlling each move, it resonated deeply. It affirmed that our leadership shift, grounded not in hierarchy but in genuine empowerment and adaptability, isn’t merely idealistic; it’s strategically imperative in this AI-driven complexity.
How can we begin adopting a gardener leadership approach in our own organizations? Here are three practical, concise steps we can start implementing this week:
Clarify Intent Instead of Controlling Details:
Clearly share our strategic vision and priorities, then trust teams to make their own decisions aligned with that intent.
Cultivate Psychological Safety:
Create regular, open forums for our teams to honestly discuss their fears, challenges, and ideas, laying the groundwork for genuine adaptability.
Make Space for Reflection and Experimentation (Your Own Greenhouse Day):
Schedule dedicated, structured time for reflection and experimentation, allowing teams to test ideas freely without immediate pressure or judgment.
This is how we truly unlock the power of AI: not through faster execution of tasks alone, but through deeper wisdom, reflection, and adaptive thinking. A gardener mindset positions us not just to respond to AI-driven complexity, but to truly thrive within it.
We’re all learning to lead again in this new era. We don’t have to be experts. We just need to be humble enough to listen, curious enough to explore, and brave enough to create environments that allow others to flourish. The seeds we plant today in our teams and organizations will define our adaptability and growth tomorrow.
Now, it’s our turn to embrace this gardener mindset to step back from controlling each move and instead nourish the soil, trusting that if we set the right conditions, extraordinary growth will follow.