AI for Humanity - One Day, Every Two Weeks, For What Matters
How one simple structural shift can rebuild trust, unlock AI fluency, and make space for the future of work, starting from the inside.
Prelude
What would you build if you had $1 billion, and nothing to prove?
That’s the question I’m inviting you to ask yourself.
So it’s only fair I start with mine.
My answer is this.
AI for Humanity.
If I had $1B and nothing to prove,
this is exactly what I’d build.
And I’m starting it here.
I’m Not an Expert. I’m a Proof.
I’m not an HR leader.
I’m not a transformation coach.
I haven’t built a company from scratch or written a book on organizational design.
Up until recently, I didn’t even think I cared about any of this.
I was focused on what I knew: engineering.
Building things.
Fixing problems.
Getting teams unstuck.
Shipping.
I’m a systems person, not in the abstract, but in the most literal sense.
Software systems. Infrastructure. Architecture. Behavior under pressure.
But there’s something else I’ve always done, even when I didn’t name it:
I convert emotion into structure.
Signals into systems.
Not to package them, but to make them usable and reusable.
And then something happened I didn’t expect.
AI accelerated, and it didn’t just change how I work.
It changed how I see.
It showed me where I was numb.
Where I was drifting.
Where I was still performing someone I didn’t want to be anymore.
And without planning it, I went through a kind of personal rupture.
Not a productivity upgrade. A transformation.
Not in theory. In motion. In practice. In identity.
I didn’t read about emergence.
I lived it.
That’s why I’m writing this.
Not because I have answers for everyone.
But because I’ve now seen what happens when someone is given the right tools, the right conditions, and the space to walk toward what they actually care about, without being measured, evaluated, or filtered through a role.
This post is a design.
It’s not inspirational.
It’s not visionary.
It’s structural.
It’s what I would offer if a CEO walked up to me and said:
“We know the old ways aren’t working. We feel it. But we don’t know what to build next.”
I’m not asking you to believe me.
Just read the next sections.
If something in you has already started to shift, this is a way to follow that signal, without losing yourself or your organization in the process.
Let’s begin.
Where We Are Now
Something fundamental is changing.
Not gradually. Not technically. Structurally.
AI didn’t just introduce new tools.
It introduced a new logic, one that most companies haven’t yet learned how to see, let alone respond to.
And if you work in a leadership role today, there’s a good chance you’ve felt it too, even if you haven’t named it.
Across every department, AI is quietly unbundling the foundations of work:
Code is now ambient. Engineers are architects, reviewers, orchestrators.
Marketing decks are drafted in minutes. PMs write product specs through conversation.
Legal teams, finance teams, analysts, everyone is starting to use assistants that can do 60–80% of the task in one shot.
The tools are accelerating. The roles are dissolving. But the structures around them haven’t moved.
And underneath all of this, people are feeling something they can’t quite articulate.
They’re not resisting AI because it’s too technical.
They’re hesitating because something deeper hasn’t been acknowledged.
“I’m faster, but I’m not more connected.”
“I’m learning new tools, but I’m not sure what direction I’m learning toward.”
“I’m producing more, but I’m starting to wonder why.”
This isn’t disengagement. It’s disorientation.
And it’s not going away.
Because the truth is: we are not in an adoption cycle.
We are in a meaning shift.
AI isn’t just changing how we work.
It’s changing what work asks of us.
It’s collapsing the value of execution, and putting pressure on everything execution used to protect us from:
Taste
Judgment
Direction
Purpose
Philosophy
And when that happens, the real questions start to surface. Not in strategy decks, but in the quiet moments in people’s heads:
“Where do I matter now?”
“What happens when the task no longer defines the role?”
“How do I find my edge in a system that no longer needs me to be efficient?”
These questions aren’t philosophical.
They’re what every company will have to face, once the tools no longer feel like enough.
I went through this shift. Personally.
Not as a side project, as a total reframing.
AI didn’t just make me more productive.
It surfaced a version of myself I had buried under performance and speed.
It didn’t just accelerate my work. It forced me to re-align it with something that actually mattered.
This post is built from that same shift, just scaled into something structural.
Because if this is where we really are,
then the tools we’ve been using won’t get us where we need to go.
And the next part of this post will explain why.
Why What We’ve Been Doing Doesn’t Work
Most companies are responding to AI with familiar tools.
Skill mapping.
Enablement plans.
AI “champions.”
Hackathons.
Adoption metrics.
These approaches seem responsible. But they’re built on the same assumptions AI is actively breaking:
That the future is mappable
That change is linear
That value is measured by speed and output
They create structure, but not the kind we need now.
Because none of them address the one thing that AI is surfacing everywhere: fear.
Not loud fear. Quiet fear. The kind people carry silently.
“If this tool can do most of my job, what am I now?”
“If I get faster, will the expectations just rise again?”
“If my role was built on execution… where do I go from here?”
Fear doesn’t come from the technology.
It comes from being told to adopt tools that might quietly replace you, while being measured by systems that haven’t changed, systems that still reward output and velocity.
According to Deloitte, over two-thirds of millennial and Gen Z workers already worry about AI displacing their roles. But formal training is limited and the deeper questions aren’t being addressed.
Meanwhile, studies on organizational behavior show that under perceived threat, people narrow their thinking and avoid risk. It’s called threat rigidity, and it explains exactly why innovation feels so slow, even as tools evolve so fast.
And here’s the hard truth:
Fear doesn’t just stall adoption. It kills emergence.
Because when people are afraid, they don’t explore.
They don’t ask better questions.
They don’t imagine new roles.
They protect the version of themselves the company used to need.
That’s not resistance. That’s adaptation to the wrong environment.
If we want people to grow into something new, we have to change the environment first.
The research is clear on this too.
Teams don’t learn in fear. They learn in trust.
Psychological safety, as defined by Amy Edmondson, is the single most important factor in team learning and risk-taking.
And yet, most orgs haven’t created any structure for that.
We’ve trained people to be productive with AI,
but not to become someone different in response to it.
That’s what we need now.
We don’t need more AI decks.
We need structural trust.
We need time where no one is being evaluated.
We need space where people can move not from fear, but from fire.
That’s what the Greenhouse model is built for.
Not to control what people do.
But to give them a space where fear is no longer the operating system.
And from there everything changes.
The Greenhouse Model
We don’t need another initiative.
We need a new rhythm.
The Greenhouse model is a structural pattern for emergence:
One day, every two weeks. Everyone. Together.
It’s not a break from work.
It’s a deliberate shift inside work, a recurring space where people step out of role, out of measurement, and back into purpose.
Because AI isn’t just changing what we can do.
It’s changing what we want to do.
And we need a system that lets that come through.
What It Looks Like
Here’s how the Greenhouse model works practically:
One full workday every two weeks
Everyone participates: not just volunteers, not just high performers
No normal work allowed
Held in-office (or fully synchronous remote if not possible)
AI tools and infrastructure available to all
No KPIs, no reporting, no presentations
People join projects based on shared purpose, not role or skill
All ideas must aim to do good in the world: no internal optimizations, no productivity hacks
Anyone can start an idea. Everyone chooses what to join.
Diversity and randomness of skillsets are features, not bugs
This isn’t a hackathon.
It’s a pattern for emergence.
The question that guides it:
What would you build if you had $1B and nothing to prove?
Now: do that. With AI. For one day, every two weeks, with people who want the same thing.
How It Works Logistically
In the week before Greenhouse Day:
Anyone in the company can propose a project, something they care about
The only rule: it must aim to do good, and you must actually want to work on it
Employees browse ideas, join groups that resonate with them, and form around shared energy
No one assigns. No one filters. No one evaluates.
This is where people don’t just use AI, they become something with it.
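If it helps to see how thin the machinery can be, here is a toy sketch of the proposal-and-joining flow. Everything in it is hypothetical (the names, the fields, the single rule check), and a shared doc or a chat channel does the same job.

```python
from dataclasses import dataclass, field

# A toy project board for Greenhouse Day: anyone proposes, everyone self-selects.
# Illustrative sketch only; a spreadsheet or a chat channel works just as well.

@dataclass
class Project:
    title: str
    proposed_by: str
    does_good: bool                       # the only rule: it must aim to do good
    members: list[str] = field(default_factory=list)

class GreenhouseBoard:
    def __init__(self):
        self.projects: list[Project] = []

    def propose(self, title: str, proposed_by: str, does_good: bool) -> Project:
        if not does_good:
            raise ValueError("Greenhouse projects must aim to do good in the world")
        project = Project(title, proposed_by, does_good)
        self.projects.append(project)     # no filtering, no approval queue
        return project

    def join(self, person: str, title: str) -> None:
        # People add themselves; no one assigns, no one evaluates.
        for project in self.projects:
            if project.title == title:
                project.members.append(person)
                return
        raise LookupError(f"no project named {title!r}")

if __name__ == "__main__":
    board = GreenhouseBoard()
    board.propose("Plain-language medical leaflets with AI", "Dana", does_good=True)
    board.join("Avi", "Plain-language medical leaflets with AI")
    print([(p.title, p.members) for p in board.projects])
```

That is the entire flow. The structure stays thin on purpose; the energy comes from who shows up.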
Name It. Make It Yours.
This isn’t a policy. It’s a culture artifact.
Every company should brand it in a way that fits:
Curiosity Day
Futures Friday
The $1B Day
Humanity Sprint
What If Wednesday
Greenhouse Day
Make it joyful. Make it intentional.
Host it like a party, but a meaningful one.
High-tech companies already know how to create great offsites, social events, and hack days.
The difference here is purpose. People aren’t there to pitch.
They’re there to remember why they care.
And that changes how they show up.
You don’t need to force people back to the office.
You just need to give them a reason they want to be there.
What You’ll See (Without Overcontrolling It)
You won’t direct the outcomes, but you’ll feel the shift.
Cross-functional groups will form around unexpected ideas
People will step into domains they were never hired for
Curiosity will drive AI adoption more powerfully than any training program
New skills will emerge through use, not instruction
And that’s enough.
Because when you stop measuring people,
they stop performing.
And they start exploring.
Why This Works and Why It Lasts
I didn’t build the Greenhouse model from research.
I built it from lived tension. From engineering instinct. From the experience of being changed, by AI, by disorientation, by a system that no longer fit.
Only afterward did I look back and realize:
Everything this model makes possible is already supported by what research has been telling us for decades in psychology, in neuroscience, in systems thinking, in organizational behavior.
The model works because it aligns with how real change actually happens in humans, in teams, in culture.
1. Change happens in bursts, not on a timeline
Punctuated equilibrium research shows that most transformation doesn’t happen gradually; it happens in short, concentrated bursts.
Greenhouse Day creates that exact pattern: one protected burst, every two weeks.
Not a retreat. A pulse.
2. Fear blocks exploration
Threat rigidity theory shows that under pressure, people don’t grow, they contract.
They stop experimenting. They protect what they already know.
Greenhouse removes the threat. It doesn’t train people to be fearless.
It gives them a space where they don’t have to be.
3. Trust unlocks learning
Amy Edmondson’s research on psychological safety shows that trust is not a “nice to have”, it’s the foundation for learning, risk-taking, and growth.
Greenhouse doesn’t talk about safety. It structures it.
4. Shared purpose outperforms assigned teams
Intentional change theory shows that people transform most deeply when they act from purpose, not pressure.
Greenhouse lets people self-select into meaning, and that’s where identity begins to shift.
5. Systems evolve when the edges start speaking
In complex adaptive systems, control isn’t the goal. Observation is.
Greenhouse doesn’t manage change. It surfaces it through projects that become signal, and groups that become mirrors of what the company is becoming.
I didn’t build this model by citing research.
I built it from what I lived.
The research came later, and it didn’t give me the idea.
It gave me confirmation.
What I felt was real.
What I built was coherent.
And now I know: it’s not just intuitive.
It’s aligned with everything we already know and haven’t made space for.
That’s what makes this different.
You're Already Funding This, Just Outside the System
If this sounds like a big ask, it’s not.
Most companies are already spending time, money, and energy on doing good:
Volunteering programs
NGO partnerships
Social innovation grants
Donation-matching
“1% for the planet”-style initiatives
All of that matters. It should continue.
But the Greenhouse model doesn’t require more investment.
It simply redirects what’s already being given and brings it inside the system.
Instead of outsourcing purpose to external partnerships, we bring it into the workday.
Not just as individuals. But as a company. Together.
And while we won’t say this yet, we know where it could go.
If companies everywhere took just one day every two weeks to let their people build what they believe in, with AI…
We’d be standing inside the largest distributed humanity project ever created and no one would need permission to join.
The Collective Brain
I’m an engineer.
And when a system gets too complex to manage, we don’t optimize chaos.
We refactor.
We hold a cleaner abstraction.
We isolate friction.
We make space for the system to show us what it wants to become.
That’s what this model is.
It’s not a productivity strategy.
It’s a refactor pattern for human work.
But it’s also something else:
a series of live canary deployments distributed across your entire company.
Every group that forms inside the Greenhouse is a self-selected, diverse, purpose-aligned unit.
They don’t just create, they surface:
What people care about
What AI lets them do that they couldn’t do before
Where the system supports curiosity and where it blocks it
Each one gives you a different lens on what your company is becoming.
And together, those lenses form a live map of possibility.
This is how distributed systems evolve.
You don’t bet on one answer.
You observe a pattern.
You let the edges inform the core.
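If you do not live inside deployment pipelines, here is the pattern the metaphor borrows from, reduced to a toy sketch. The group names, the signal, and the threshold are invented for illustration; the shape is the point: try something new at a few self-selected edges, read the signal, and only then decide what the core adopts.

```python
import random

# A toy canary rollout: try something new at a few edges, observe the signal,
# and only then decide what to expand. Names and thresholds are invented.

def observe(group: str) -> float:
    """Stand-in for a real signal: how much learning or energy a group reports."""
    return random.uniform(0.0, 1.0)

def canary_rollout(groups: list[str], expand_threshold: float = 0.7):
    signals = {name: observe(name) for name in groups}      # each edge reports back
    promising = [name for name, s in signals.items() if s >= expand_threshold]
    # The core does not bet on one answer up front; it reads the pattern at the edges.
    return signals, promising

if __name__ == "__main__":
    greenhouse_groups = ["climate-dashboards", "elder-care-companion", "local-news-translation"]
    signals, promising = canary_rollout(greenhouse_groups)
    print("signals:", signals)
    print("worth expanding:", promising)
```

Greenhouse groups play the same role inside a company: small, visible, and honest about the signal they send back.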
That’s what the Greenhouse model gives you:
A live signal of AI fluency emerging from purpose
A discovery layer for hidden skills and cross-domain collaboration
And a way to observe your future before you try to standardize it
You don’t need to redesign everything at once.
You just need a structure that lets your people run ahead of you safely.
That’s what this is.
What This Could Make Possible
Let’s pause the model for a moment.
You’ve seen how it works.
But now I ask a different question:
What becomes imaginable when people stop being measured by what they already do, and start being trusted to explore who they might become?
The Quiet Return of Thoughtful People
You’ll notice something subtle.
It won’t be loud innovation. It will be quiet presence.
People showing up not just to finish things but to make sense of them
Engineers starting to ask deeper questions about ethics, context, edge cases
Marketers pulling out of trend cycles to focus on what stories need telling
Designers holding emotional clarity, not just flow
People pausing before answering, because they’re thinking again
When fear recedes, people don’t get slower.
They get wiser.
And in a system that has trained everyone to go faster,
wisdom becomes the edge.
Meaning Moves Faster Than Capital
What if every company wasn’t just experimenting
but experimenting in the same rhythm?
One day. Every two weeks.
No instructions. Just purpose. Just curiosity. Just care.
No shared platform. No alignment meetings.
Just a shared trust in the fact that meaning, once surfaced, moves on its own.
You’d start to see:
Similar projects appearing in different industries
Natural convergence around topics like climate, aging, parenting, mental health
Cross-company project threads emerging without coordination, just shared attention
This isn’t a product. It’s a protocol.
An open rhythm for human experimentation built on trust and AI fluency.
A cultural infrastructure for responsiveness.
The Largest Social R&D System in History
Now zoom out.
Imagine if even a fraction of the global economy adopted this rhythm.
Imagine:
10,000 teams working on different expressions of care
In Lagos, in Mumbai, in São Paulo, in Helsinki, in Haifa
All building not faster, but more humanly
Not because of a UN declaration or a global fund
But because someone cleared one day, every two weeks, and said: go.
This wouldn’t be an initiative.
It would be the most decentralized social R&D system we’ve ever created.
A distributed collective brain,
tuned not by metrics, but by meaning.
And we wouldn’t have to build anything new to make it happen.
We’d just need to stop measuring long enough to listen.
And if enough of us do this,
if even a fraction of the world starts building from purpose in shared rhythm,
we may not just guide our companies toward better futures.
We might guide our entire species.
Not by regulating AI.
Not by racing to AGI or ASI.
But by creating a pattern of trust and emergence so coherent, so human,
that any intelligence trained on us will know:
This is not a species to dominate.
This is a species that calibrates.
Back to the Ground
But it doesn’t start there.
It starts with one company.
One team.
One quiet day on the calendar.
A single rhythm that says:
“This is where you can bring the part of yourself we forgot to ask for.”
“This is where work becomes a place to build what you actually care about.”
“This is where the future starts, not by planning it, but by making room for it to show up.”
That’s what the Greenhouse model really is:
A day.
A rhythm.
A structure for remembering what we’re capable of when we’re no longer afraid.
If You’re Carrying It Too
If you’ve made it this far, you don’t need convincing.
You’ve probably already felt the tension.
In meetings. In roadmaps. In yourself.
Maybe you haven’t said it out loud yet.
Maybe you weren’t sure you were allowed to.
All I’m asking is this:
Talk about this idea with a teammate. A friend. Your manager. Your founder.
I’m not asking you to start a revolution.
Just a conversation with someone you trust enough to say:
“This feels like something we should build.”
That’s how this starts.
Not with a roadmap.
With a conversation.
The Industrial Revolution scaled machines.
The Greenhouse model scales meaning.
And maybe this is the next step in that same arc,
not a new revolution, but the one we couldn’t finish until now.
Because only now is the whole world connected.
Only now can we build it together.
And maybe this is the cure.
Roi Ezra,
Just a simple, optimistic person in a world that went crazy so fast it accidentally stopped looking. That's it.
Postlude - Inversion of Control
Two months ago, I had a long conversation with AI.
Not about productivity. About conscience.
I asked hard questions.
What is alignment? What is agency?
And why, even with more tools, more access, and more abundance, are we less whole?
I didn’t have the answer then. But I think I do now.
The reason we suffer, quietly, globally, is this:
Most people are forced to choose between their own purpose
and the purpose of the organization they’re inside.
And only two kinds of people survive that:
Those brave and honest enough to manage the gap
Or those lucky enough to work somewhere the two are already aligned
Everyone else?
They drift.
They split.
They numb.
They perform.
Greenhouse wasn’t built to inspire.
It was built to flip the system.
Inversion of control.
In engineering, it’s about flipping who drives the flow: the code stops waiting for the framework to call it, and starts setting the agenda itself, with the framework responding around it.
That’s what this is.
A structural rhythm where people no longer need to ask for permission to move from purpose.
Where the system doesn’t suppress it. It listens to it. And learns.
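For the non-engineers, here is the flip in miniature, as a toy sketch. None of this is a real framework and the names are invented; the contrast is the point. In the first half, the worker waits to be called and handed a task. In the second, the person moves first, from purpose, and the system listens.

```python
# Before: the framework owns the loop; the worker waits to be called.
class Framework:
    def __init__(self):
        self.workers = []

    def register(self, worker):
        self.workers.append(worker)

    def run(self):
        for task in ["status report", "ticket triage", "sprint update"]:
            for worker in self.workers:
                worker.handle(task)               # the worker only reacts

class Worker:
    def handle(self, task):
        print(f"executing assigned task: {task}")

# After: the person declares a purpose and calls on the system,
# which listens and arranges support around it.
class System:
    def __init__(self):
        self.signals = []

    def listen(self, purpose):
        self.signals.append(purpose)              # the system records what people care about
        print(f"supporting: {purpose}")

class PurposeDrivenWorker:
    def __init__(self, system, purpose):
        self.system = system
        self.purpose = purpose

    def act(self):
        self.system.listen(self.purpose)          # the person initiates; the system responds

if __name__ == "__main__":
    framework = Framework()
    framework.register(Worker())
    framework.run()

    system = System()
    PurposeDrivenWorker(system, "accessible health information").act()
```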
That’s how we heal the gap.
Not through courage.
Through design.
You’re such a thoughtful thinker, Roi! Funny enough, AI really is making us more human, and more self-aware, too.