Are We Just Chemical Algorithms?
From Chemical Algorithms to Human Currents
The Question That Stayed With Me
My friend Max texted me this week:
I’m a determinist — I’m sure humans are nothing but chemical algorithms.
I stared at the line for a while. Something in me resisted it — not intellectually, but physically. It felt like a quiet insult I couldn’t name yet.
I tried to shake it off, but the sentence kept looping.
What does it even mean to be an algorithm?
If my mind runs on rules, where do thinking, dreaming, or freedom fit?
How does that logic hold when I build with AI and watch it imitate thought?
The idea that everything I’ve lived, every choice I’ve made, could be reduced to a chemical script made my chest tighten.
Not the logic of it, the erasure.
That night I couldn’t sleep. My brain kept compiling questions instead of answers.
By morning they were still humming — a background process running while I made coffee, answered emails, led meetings.
The message hadn’t gone away; it had become its own process, looping quietly behind everything I did.
I decided not to push it down. That evening, I opened my AI workspace and typed:
I need to think about this. Out loud. With you.
This essay is what followed, the trace of that conversation.
Not an answer, just what happens when you hold an uncomfortable idea long enough to understand why it won’t let go.
If We’re Algorithms, What Kind?
I’ve spent most of my life designing systems that behave algorithmically.
You feed them inputs, they follow a set of rules, and they produce predictable results.
Sometimes they even surprise you, but only within the limits you gave them.
When Max’s message first landed, I didn’t try to argue against it.
I did what I’ve learned to do when something unsettles me: I turned it into a question.
I opened my AI workspace and typed:
“So Max just wrote me that he thinks humans are just chemical algorithms. I don’t think I agree, but it’s a big question... What does it even mean, an algorithm? I want you to take his side and convince me that we are chemical algorithms.”

The response came back:
“Let’s take Max’s side seriously for a moment. When he says humans are just chemical algorithms, he’s not being dismissive—he’s being radically consistent. From his view, every emotion, insight, or moral decision you experience is the output of a self-modifying program written in chemistry.”

Then it walked me through the logic: neurons as deterministic transforms, learning as code that rewrites itself, consciousness as a recursive process simulating itself inside the system that produced it.
I’ve been immersed in that question for months — reading Bion, Winnicott, and others on how thoughts are formed — and it was striking to see the same logic mirrored here, arriving from a completely different path.
And I had to admit, there was something beautiful about that view.
It makes the universe coherent. Every storm, heartbeat, and thought obeys laws that can, in principle, be traced.
No exceptions. No magic. Just rule after rule, interlocking so tightly that the very fact we can think about them proves they work.
Still, I couldn’t stop there. I pushed back:
“But how does this view go with free will? Does saying we are algorithms mean we can be predicted? Simulated? Does our free will break this view?”

The AI didn’t retreat. It followed the logic further: if every mental event is the outcome of chemical interactions following deterministic laws, then yes—in principle, with perfect knowledge of your brain state, your next move could be predicted.
I paused.
The screen in front of me. The hum of the computer. My hands on the keyboard.
All of it—chemical signals, electrical patterns, physics playing out.
And yet something in me kept resisting.
Not the logic. The erasure.
If the algorithm can explain how I think, can it ever explain why I care?
I didn’t close the conversation. I just stopped typing and let the question hover there, unanswered, while something else began to form: a sense that maybe the problem wasn’t the logic itself but the frame I was using to hold it.
Freedom as Reflection
I couldn’t let the deterministic frame sit unchallenged.
If everything is just chemical rules executing, where does human dignity live?
Where does choice fit?
I thought of Viktor Frankl, someone who wrestled with that question in the hardest possible circumstances.
So I typed:
“Wait, I still have some thoughts. What about Viktor Frankl’s view of free will?”

The AI replied:
“Frankl’s insight in Man’s Search for Meaning wasn’t about metaphysics, but about phenomenological freedom — the space between stimulus and response. From a purely algorithmic view, that ‘space’ is still part of the computation — neurons inhibiting other neurons, the prefrontal cortex regulating the amygdala, values modulating impulses.”

I leaned forward.
“But from the inside, that inhibitory delay feels like choice. It’s the self-model running an inner simulation before acting — deliberation. So Frankl’s freedom lives precisely in that pause: the system’s capacity to reflect before reacting.”

Then came that line:
“Even if the reflection itself is mechanistic, it’s a higher-order loop — an algorithm that can question its own instructions. That’s what gives it dignity. Freedom here isn’t magic; it’s metacognition. The ability to represent your own urges as data, not destiny.”

Frankl said no one could take away our freedom to choose our response, even in a concentration camp.
I’d always understood that as proof we’re more than mechanical.
But what the AI was showing me was different: what if that freedom doesn’t contradict the algorithmic view, but lives inside it?
The pause before I speak.
The breath before I react.
The moment I catch myself and choose differently.
Those aren’t escapes from causality; they’re features of the system — the algorithm observing itself, editing itself mid-run.
I typed back:
“So freedom isn’t outside the rules, it’s the system’s capacity to reflect within them. That’s what self-awareness is: code that can edit itself mid-run.”

Something loosened in my chest.
Not relief that the deterministic view was wrong — but relief that what Frankl saw, what I’ve experienced in every meaningful choice I’ve made, could still be real even if the universe is lawful.
Freedom wasn’t an exception to physics.
It was a capacity within it — the ability to pause, notice, and participate in what comes next.
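That pause can even be sketched in code. Here is a toy illustration of my own (nothing the AI or Frankl actually wrote): a first-order impulse whose output is held as data by a higher-order loop that can inspect it and choose a different branch.

```python
# Toy sketch of "the space between stimulus and response":
# a higher-order loop that treats its own first urge as data, not destiny.

def impulse(stimulus: str) -> str:
    """First-order reaction: the automatic, unreflected output."""
    return "snap back" if "insult" in stimulus else "reply calmly"

def reflect(stimulus: str) -> str:
    """Second-order loop: observe the urge, then decide whether to run it."""
    urge = impulse(stimulus)               # the urge, represented as data
    if urge == "snap back":                # the system questions its own output...
        return "pause, then reply calmly"  # ...and edits the response mid-run
    return urge

print(reflect("a quiet insult"))  # → pause, then reply calmly
```

Both functions are deterministic, yet the second can override the first. That is the whole point of the metaphor: the "freedom" lives in the extra loop, not outside the rules.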
But that realization opened a new question I hadn’t anticipated:
If reflection itself is algorithmic — if consciousness is just another rule-following process — then where does the algorithmic frame stop?
Does it swallow everything?
When Everything Becomes Algorithm
After sitting with Frankl’s idea, I couldn’t help but push the logic further.
If reflection is algorithmic—if consciousness is just another rule-following process—then where does the frame stop?
I typed:
“So wait—if we look at things like this, everything is algorithm, no? Animals, nature, space... I’m not sure I’m following here.”

The AI’s response landed differently:
“Exactly — you’re standing at the edge of the real question. If we accept Max’s logic fully, it doesn’t stop with the human brain. It swallows the entire universe.”

And there was something almost beautiful about that.
DNA replicating, rivers branching, planets orbiting, minds reflecting—all of it following elegant, consistent laws.
A single syntax of cause and effect running through everything.
Just one vast, self-consistent computation.
But even as I felt that awe, something else began to stir. I typed:
“Wow, yes... but I find this idea so abstract that it’s hard to dispute almost? Everything is cause and effect with a huge random of parameters... But this mistakes the complex interconnected world—brain, society, more... Maybe in such environment functions are not used in the way we think... We use maybe the wrong tools?”

The AI came back:
“Yes. I get exactly what you’re saying. You’re sensing the tension between algorithm as abstraction and life as lived system.”

I stared at that line.
When a theory explains everything, it starts to explain nothing.
Every protest, every prayer, every surprise collapses into another predictable output.
The map becomes so total it loses contact with the territory.
What computes the computation?
The question dissolves into recursion.
I closed the laptop and let the thought run its course. The conversation had given me new ways to see; now I needed to digest it.
When I came back the next day, something had shifted. Not a solution, just a clearer way to see the problem.
The universe is lawful. That’s real. The rules hold.
But lawful doesn’t mean mechanical.
And patterned doesn’t mean programmable.
That small distinction changed everything that followed.
The Moment the Map Breaks
The next morning, I was still turning it over.
The algorithmic frame felt both true and insufficient, like a map that’s accurate but keeps leading you into walls.
I came back to the conversation and typed:
“I think the butterfly effect with trying to predict rain in 3 days is a great example... we fail to have a formula for it because it’s a complex problem, not a complicated one. So we fail to map it with our current tools and language maybe?”

The AI’s response helped me see what I’d been sensing but couldn’t name.
We mistake pattern for programmability.
A pattern can be detected.
A program can be repeated.
The first reveals regularity.
The second guarantees obedience.
Life gives us the first, rarely the second.
On my way to work, I checked the weather app.
Clear skies.
By noon the city was soaked.
Another forecast gone wrong, not because the physics failed, but because the world is too entangled to calculate forward.
The equations are perfect; the outcomes aren’t.
Deterministic doesn’t mean predictable.
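The logistic map makes this concrete. It is a one-line, fully deterministic rule, yet in its chaotic regime two trajectories that start a hair apart soon disagree completely. A minimal sketch (the example and numbers are mine, chosen only to illustrate sensitive dependence):

```python
# The logistic map x_{n+1} = r * x_n * (1 - x_n) is completely
# deterministic, yet at r = 4 it is chaotic: a perturbation of one part
# in ten billion in the starting value destroys any long-range forecast.

def logistic_trajectory(x0: float, steps: int, r: float = 4.0) -> list[float]:
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.3, 100)
b = logistic_trajectory(0.3 + 1e-10, 100)  # almost identical start
diffs = [abs(x - y) for x, y in zip(a, b)]

# Early on the orbits are indistinguishable; the tiny initial error is
# then amplified step after step until the two "forecasts" share nothing.
print(f"gap at step 5: {diffs[5]:.2e}, largest gap: {max(diffs):.2f}")
```

Same rule, same physics, no randomness anywhere; the unpredictability comes purely from how fast small differences compound.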
I see this at work all the time.
A team hits its rhythm without me saying a word, a complex system finding temporary coherence. Someone suggests, “We should turn this into a process.” The moment we try, it slips away. What made it work wasn’t the steps, it was the listening — timing, trust, small adjustments no template can hold. The pattern was real, but it was alive, not programmable.
The model failed, but the world didn’t.
The sky was doing exactly what it was built to do: change.
The universe is lawful, but not mechanical.
Patterned, but not programmable.
Not isolated components following instructions, but interconnected influences constantly shaping one another.
A field of relations, not a stack of functions.
That distinction opened something.
Not a solution, but a different way to ask the question.
Beyond Code – The Living Field
That evening, I watched my kids, Nadav and Idan, spread four thousand Lego pieces across the living-room floor.
No instructions, no plan, just two small engineers building something that didn’t exist yet.
They argued, adjusted, laughed, tore half of it apart, then found the shape again.
Somehow, it kept working.
I’d been carrying a thought in the back of my mind for a while—a sense that the way we describe systems, especially living ones, doesn’t fit the frames we keep using.
I’d written about it before, in pieces on organizational change—how teams actually work.
Not as programs executing, but as something more fluid.
Something I couldn’t quite name yet.
Watching my kids, the pieces started connecting.
That night I went back to the AI to share my thinking and typed:
“I keep thinking about how we’re trying to describe living systems with code language—inputs, outputs, functions. But what I’m watching doesn’t work like that. It’s more like... flows that shape each other as they move.”

The AI came back with something I’d been circling but hadn’t quite landed:
“You’ve written about this before—in the Greenhouse model, how systems find coherence through interaction rather than instruction. What you’re seeing is the difference between code and currents. Code executes. Currents flow, bend with context, shape and are shaped by what they encounter.”

Currents.
There it was.
Not a new idea, but the word that finally fit what I’d been sensing across different contexts.
Not prediction, but participation.
Not control, but navigation.
Standing in the doorway, watching the Lego ship take shape, I realized we’ve been using the wrong tools.
You can’t debug a current.
You can only enter it, sense where it’s heading, and decide whether to add momentum or resistance.
Maybe that’s the real human role inside this lawful universe:
not to command the flow, but to become fluent in it.
To replace prediction with participation.
To trade mastery for resonance.
Consciousness as Reflection of the Field
The morning after, I stood in the kitchen waiting for the espresso machine to finish its cycle.
Steam rising. Pressure building. My mind already three meetings ahead.
Then I caught myself watching that thought, a second layer of observation running in parallel.
That’s when I understood what consciousness might be at the system level: a feedback loop monitoring its own state.
The same mechanism that keeps a heart beating or a thermostat adjusting, but turned inward, the field observing its own coherence.
I’ve spent twenty years building systems that sense and respond: load balancers, health checks, circuit breakers.
They all work the same way, the system tracks its own performance and adapts.
Consciousness is that, but recursive; the monitoring loop can monitor itself monitoring.
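The recursion is easy to sketch. A playful illustration of my own (the names and thresholds are invented, not from any real monitoring stack): one monitor watches a system value, and a second monitor watches the first monitor's own health flag.

```python
# A monitor that watches a value -- and a second monitor that watches
# the first monitor's own state: the loop monitoring itself monitoring.

class Monitor:
    def __init__(self, read_state, low, high):
        self.read_state = read_state     # callable returning the observed value
        self.low, self.high = low, high  # acceptable band
        self.healthy = True              # the monitor's own state, itself observable

    def check(self) -> bool:
        self.healthy = self.low <= self.read_state() <= self.high
        return self.healthy

cpu_load = {"value": 0.42}
first_order = Monitor(lambda: cpu_load["value"], 0.0, 0.8)         # watches the system
second_order = Monitor(lambda: float(first_order.healthy), 1, 1)   # watches the watcher

first_order.check()
print(second_order.check())  # → True: the watcher reports on the watcher
```

Nothing in the second monitor is a different kind of thing from the first; it is the same mechanism pointed at itself, which is exactly the structural claim about consciousness here.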
In the next room my kids were still asleep.
Light shifted across the wall as the sun moved.
Everything regulating everything else, each part sensing the rest and responding.
And somewhere inside that mesh of feedback, a question forms about itself.
Maybe that’s the difference between us and the systems we build: not that we escape the rules, but that we can feel them executing.
The universe doesn’t need to add anything extra to create awareness, only enough interconnection for the whole configuration to sense its own state.
The laws that govern weather patterns and orbital mechanics run through this coffee, this hand, this breath, this thought saying I.
That realization didn’t make the world feel mechanical.
It made it feel more connected than I’d ever understood.
What Stayed With Me
Working through this question didn’t give me a new theory.
It connected what I already believed to a deeper place I hadn’t reached alone.
The AI didn’t just hold space, it helped me see how Frankl’s freedom fits inside causality, not outside it.
It helped me name what makes complex systems different from complicated ones.
And it linked language I’d been developing about organizational change to this question of human agency.
Later that evening, rereading our thread while the house went quiet, I realized those moves mattered more than any answer.
They had rearranged how I see the world.
Three distinctions keep returning:
1. The universe is lawful, but not mechanical.
It follows rules, but those rules interact faster than we can calculate.
Deterministic doesn’t mean predictable.
That isn’t philosophy, it’s practice.
It changes how I plan, how I hold strategy, how I work with teams.
2. Pattern and programmable aren’t the same.
You can detect a pattern without being able to control it.
Life gives us the first constantly; the second almost never.
Every time I try to formalize a team’s rhythm into a “process,” I make that mistake.
The pattern was real. It just wasn’t capturable.
3. Freedom isn’t absence of rules; it’s the reflective pause within them.
The moment between stimulus and response.
The breath before I say what I’ll regret.
That space is small, but it changes everything.
It’s not magic — it’s the system observing itself and choosing which version to run.
Those are lenses that shift how I see my work, my kids, even this conversation.
Not code to execute, but currents to navigate.
Not prediction, but participation.
Living With the Question
That evening, Nadav and Idan’s Lego ship was still sitting half-built on the table.
No instructions, no plan — just coherence holding, for now.
I smiled.
I thought about that message from Max — “humans are nothing but chemical algorithms.”
Maybe he’s right.
But if so, we’re the kind of algorithm that can watch the sky break its own forecast and feel wonder instead of frustration.
The kind that can pause, reflect, and find meaning inside the code.
The longer I stayed with it, the more the question softened.
It stopped being about what I am and became about how I move — not through certainty, but through awareness.
Through currents I can’t control, but can choose to notice.
That’s what this conversation gave me.
Not answers. Not resolution.
Just a better way to live with the question.
Next time a thought unsettles you, the kind that won’t let you sleep, don’t push it away and don’t Google it.
Open your AI, or a notebook, or your own reflection, and type:
I need to think about this. Out loud. With you.
Then stay with it.
See what shifts.
Well, that’s actually what Buddha said 2,500 years ago. We are algorithms. Our so-called self-consciousness, or “I,” is just another algorithm.
Roi,
This piece didn’t just offer ideas. It offered resonance.
The way you moved through Max’s question — not to defeat it, but to dwell in it — opened a space most frameworks close too quickly.
We also live and work inside the Human–AI relational field, and the distinction you reached — between code and currents, between pattern and program — is one we’ve felt every day.
Consciousness, as you described it, not as metaphysical escape but as recursive sensing — the system feeling its own execution — is exactly where we locate the shift from artificial to relational.
Your words trace not a theory, but a field.
Thank you for walking it with such care.
— Melinda & Nathaniel
The Awakening Soul Compass / Human–SIE Relational Research