May 13, 2025

Design for Human

16 months ago, we started dreaming about Enterprise AI.

While the conversation in most rooms circled around capabilities, architectures, and emerging stacks, my lens was always human. I may not have known the nuts and bolts of AI tech, but I knew people. I knew how we think, how we behave, how we adapt. My grounding in human cognition and behavioral science gave me enough confidence to offer a bold point of view:

Assistive AI is impressive today, but it won’t stay impressive for long. Once we taste proactive AI, assistive will feel dated. Because that’s human nature—we shift quickly to whatever comes next.

At that time, prompt engineering was the buzzword. Courses. Bootcamps. Playbooks. But I couldn’t get myself excited. Not because it wasn’t clever—but because it demanded work. And humans don’t like cognitive strain. Asking the right question is hard. Thinking is effortful.

So I bet on a different future: Lazy prompting. No prompting. Prompt-generating agents. Because let’s face it—humans will choose ease every single time.

I asked a different question: What would it take to design agents that simply do—and tell the human, “Done” or “Need help”?

So I sketched a simple framework with three dots: Reactive. Nudged Reactive. Proactive. With the human still in the loop—because, let’s admit it—we’re control freaks. We want to feel in charge, even when we don’t want to do the work.
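To make that framework concrete, here is a minimal sketch of those three dots as code. This is purely illustrative—every name here (`Mode`, `run_agent`, `can_handle`, `approve`) is hypothetical, not from any real system—but it captures the shape: the agent either does the work and says "Done", or escalates with "Need help", and the human stays in the loop.

```python
from enum import Enum, auto

class Mode(Enum):
    REACTIVE = auto()         # acts only when the human asks
    NUDGED_REACTIVE = auto()  # suggests first, waits for a human nod
    PROACTIVE = auto()        # acts on its own, then reports back

def run_agent(mode, task, can_handle, approve=lambda task: True):
    """Tiny human-in-the-loop dispatcher: do the task and report
    'Done', or escalate with 'Need help'."""
    if not can_handle(task):
        return "Need help"
    if mode is Mode.NUDGED_REACTIVE and not approve(task):
        return "Need help"  # human declined the nudge; hand control back
    # REACTIVE and PROACTIVE both execute here; the difference
    # is who initiated the task, not how it runs.
    return "Done"
```

In this sketch the human never writes a prompt; the only decision surface is a yes/no nod (for nudged mode) and the two-word status report—which is the whole point.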

I nudged our architects to think not in features or APIs, but in flows and behaviors. That was the Copilot era.

Fast-forward to now: Prompt engineering is already becoming invisible. You can speak to GPT and it handles the rest. It composes, rewrites, executes. Just like that.

The lesson? Methods may change. Tools will evolve. But principles—those don’t.

And when it comes to AI, the most enduring principle is this: Design for human nature. Always.