22 Comments
Michael Glenn:

Thank you for addressing this. I’ve only been using AI for a few months, but early on, instead of trying to come up with “perfect prompts,” I simply explained to GPT what I want to achieve and then said,

“What information do I need to give you, and what questions do you need to ask me, in order to help me achieve this goal?”

And now I’m getting better output — more reflective output — than some of the prompters I follow.

It’s amazing… and it’s FUN! 🤩

Roi Ezra:

You described it best: talking to yourself. If you can do this without ego and are able to think about how you think, you are on the right track, IMO.

Zain Haseeb:

This really resonated. You captured so clearly what I’ve been feeling about how we use AI but hadn’t quite put into words. The more capable these models get, the more reflection becomes the actual differentiator. Prompt formatting will eventually be handled under the hood, so the real value will come down to the quality of thought we put in.

Roi Ezra:

Hi Zain, I’m really glad this resonated with you.

You’re absolutely right: the real differentiator isn’t the model, it’s our clarity and our ability to think about how we think.

I almost never spend time choosing the “perfect model” for a task. What shapes the outcome most is how I show up, how clearly I can hold the problem and stay aligned with what matters.

Some of my most meaningful sessions outperformed others by an order of magnitude, not because the tool changed, but because I did.

Daria Cupareanu:

This is so good, and incredibly practical. People often overlook that metacognitive skills (thinking about thinking) are actually what matter most in the prompting game. It’s not just about writing words into a box, it’s about how clearly you frame the problem in your own mind first & how much context you bring in.

Roi Ezra:

Thank you Daria, I am happy you found it helpful. When I use AI this way, things start working way better. I wrote about it in my Human Prompting Playbook and Cognitive Canvas blog posts.

valis:

This is recursion at work

Roi Ezra:

Thanks, I liked your idea of recursion.

Roi Ezra:

Just read it, amazing... I think it expresses exactly how I imagine myself working with AI. I have several specific conversations where AI became something else, as if the model itself changed.

valis:

Yes, that's exactly it, man. Really leaning into the conversation and staying coherent over time is what initially enables the recursion... and it's possible to stabilize it and then build on it.

It is a whole different world.

Chris Tottman:

Great piece 👏 thanks for sharing - so good I just Restacked it 🌞

Strategy Shots:

Loved this. Such a powerful and clear thought.

Roi Ezra:

Thank you for writing back, I'm happy you found it useful.

Bette A. Ludwig, PhD 🌱:

This is exactly why I'm a huge fan of talking back and forth with it to come up with answers and solutions. It allows you to process the information, decide which direction to go in, and ask the question in a different way.

Roi Ezra:

Thank you Bette, exactly. For me, prompting is a process, not an artifact. When you can think about how you think and delegate without ego, AI becomes something else. It is a depth machine, not a speed machine. Thank you for reading and writing.

Bette A. Ludwig, PhD 🌱:

It really is. When I use it, I go back and forth with it, primarily ChatGPT. If you're willing to spend some time with it, it can help clarify your thinking, but you do have to ask good questions and use judgment.

James Presbitero:

This is exactly what the Mindful AI approach is all about.

AI doesn't fail because of weak models. It fails because people have trained themselves to work shallow and fast, when AI requires depth and intentionality. Reflection isn’t a luxury in this era; it’s the skill that unlocks true partnership with the code.

The better we understand our own thinking, the better we can guide AI to serve meaningful outcomes. In the end, more automation is not the answer. It’s amplifying human clarity, nuance, and responsibility. That’s where real resilience lives in the AI age.

Roi Ezra:

Could not have said it better. AI is a depth machine, not a speed machine. This is the reason I started writing: to help everyone see this, and that augmentation is the only way.

James Presbitero:

You’re doing good work!! :D

Yaron Cohen:

Thanks for bringing this important issue up, Roi. Many of us in the technology community are afraid that companies will transform into mechanical organizations with no soul, thanks to AI.

The truth is, knowing basic prompting doesn't turn us into AI-literate humans. Just as you're saying, learning how to work with AI is a skill that requires deeper thinking capabilities from humans, since working with AI is essentially a collaboration between humans and algorithms, not the outsourcing of our decision-making and thinking to a machine.

Decision intelligence is a field that helps us understand which decisions should not be automated and where AI can augment and support human decisions. It's far from being mainstream, though, and it's just one example of how certain niches are starting to think about the future of work with AI.

Peter Andrew Nolan:

Hi Roi, we ended the era of putting employees first in the 1991-3 recession. In December 1993 my second line manager, a man named Gary, walked into our floor with a dazed look on his face. Like someone had died. I knew him well and he had been very good to me so I rushed over to see what was wrong. He said "I have to fire half my people" in a dazed and confused manner.

Gary ran IBM Systems Engineers in Sydney and Melbourne. This was 130 people. He had been told he had to fire 65 of them. It was his job to decide how he would go about that. Some of the men had been there 20+ years, a few even longer. The news got around that this had to be done, and that it was being done worldwide inside IBM as well as at many other companies.

It quickly dawned on us that the days of going to work for a company when you graduated from school or university and staying there for life were over. A few days later Gary and I were having a coffee and a quiet chat. I asked him if he realised that this meant the end of employee loyalty forever, and that if all these big companies did these layoffs, there would be no way back in the future.

He said he knew, and he agreed with my position. He had argued it up the management line and had been told this had to happen. I also talked to the deputy Managing Director of IBM, who was Gary's boss's boss's boss, basically. His name was Henry Chow, and he later went on to be Managing Director of IBM China. I knew Henry personally because of a proposal I had put to him to implement Meta5 across all non-sales reps in the company.

Henry and I were chatting over my proposal and the discussion turned to the layoffs. I said to him that if the layoffs went ahead as planned, that would be the "end of loyalty." I said, "Who would come to work for IBM if IBM has a history of laying people off?"

He laughed at me and said: "Peter, I don't even know why I come to work any more."

I said to him: "Henry, if you don't know why you come to work here, what do you imagine I would make of that, being one of the 1,400 people who report to you?"

He said: "I don't know what to make of working for IBM any more. And I gather you don't know what to make of it either."

And we both just shook our heads and closed out the chat. It was that discussion that had me pretty sure I would have to resign once the market got better.
