Beyond the Chatbot: Designing for Agency
We spent 2023 marveling that computers could talk. We are spending 2025 realizing that "talking" is a terrible interface for getting work done.
The promise of AI was automation—the idea that we could hand off a task and get a result. But the reality of the "Chatbot Era" (ChatGPT, Gemini, Claude) is that we have simply traded doing the work for micromanaging the worker.
If you ask a chatbot to "research 10 competitors," you don't get a finished report. You get a conversation. You have to nudge it when it hallucinates, correct it when it drifts, and remind it of the format. You aren't a manager; you're a babysitter.
This isn't agency. It's micromanagement.

The "Blank Cursor" Problem
The fundamental flaw of the chat interface is that it is reactive. It waits for you. It blinks at you.
Real agency requires a system that is proactive—one that understands your intent deeply enough to pursue it without constant hand-holding.
In building Forge, my AI agent platform, I ran into this wall immediately. I wanted to build an agent that could "build a landing page." But every time I tried to do this with a pure LLM chain, it would suffer from Goal Drift. By step 4, the model had forgotten the design constraints I set in step 1.
The solution wasn't a better prompt. It was a better architecture.
The Case for "Constraint Loops"
To move beyond the chatbot, we need to stop treating AI as a "magic box" and start treating it as a component in a Constraint Loop.

A Constraint Loop is a system that forces the AI to validate its own path against the original intent before it takes the next step. It looks like this:
- Intent Definition: The user sets a high-level goal (e.g., "Find 5 startups in Boulder hiring PMs").
- Execution Step: The Agent takes one step (searches Google).
- The Critic (The Loop): A separate model—or a deterministic code block—pauses the Agent and asks: "Does this result match the original criteria?"
  - If YES: Proceed.
  - If NO: Self-correct without bothering the user.
This simple architectural shift changes the UX entirely. It moves the user from "Supervisor" (watching every step) to "Executive" (setting the goal and reviewing the outcome).
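Here is a minimal sketch of what that loop can look like in code. Everything below is illustrative: `Intent`, `run_constraint_loop`, and the `execute_step` / `critic` / `revise` callables are placeholder names, not part of any real framework. In practice the executor would call your agent or LLM, and the critic could be a second model or plain deterministic code.

```python
# Illustrative sketch of a Constraint Loop. All names here are placeholders:
# wire `execute_step` to your agent/LLM, and make `critic` either a second
# model call or a deterministic check in plain code.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Intent:
    goal: str                # e.g. "Find 5 startups in Boulder hiring PMs"
    constraints: list[str]   # e.g. ["located in Boulder", "role is PM"]

def run_constraint_loop(
    intent: Intent,
    execute_step: Callable[[Intent], str],   # one agent action -> candidate result
    critic: Callable[[Intent, str], bool],   # does the result match the ORIGINAL intent?
    revise: Callable[[Intent, str], str],    # ask the agent to self-correct
    max_steps: int = 10,
    max_retries: int = 3,
) -> list[str]:
    accepted: list[str] = []
    for _ in range(max_steps):
        result = execute_step(intent)
        # The Critic: validate against the original intent, not the
        # drifting conversation history.
        ok = critic(intent, result)
        retries = 0
        while not ok and retries < max_retries:
            result = revise(intent, result)  # self-correct without bothering the user
            ok = critic(intent, result)
            retries += 1
        if ok:
            accepted.append(result)          # proceed to the next step
        # Only the final outcome gets surfaced to the user.
    return accepted
```

When the constraints can be expressed in code (a location, a job title, a word count), a deterministic critic is usually the better choice: it is cheaper than a second model call and much harder to fool.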
Agency, Not Automation
There is a subtle but critical difference between "Automation" and "Agency."
- Automation removes the human from the loop entirely. It is for repetitive, low-stakes tasks.
- Agency empowers the human. It gives them a lever to move a mountain, but they still choose where to place the fulcrum.
At MSCI, when I deployed AI tools across our engineering, operations, and product teams, the goal wasn't to replace them. It was to give them agency. We built tools that allowed them to ask, "How does my work connect to the CEO's Q3 goals?" That isn't automation; that is clarity.
The Future is a Canvas
This is why I believe the "Chat" interface is a dead end for complex work. The future of AI isn't a text box; it's a Canvas.
Tools like NotebookLM and Cursor are already showing the way. They don't just talk back; they maintain state. They have a "memory" of your project that persists beyond the context window. They allow you to work alongside the AI, treating it as a collaborator rather than an oracle.
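To be concrete about what "maintain state" can mean, here is one generic pattern: persist project state outside the model and re-inject a summary of it on every turn. This is an illustration of the idea, not a description of how NotebookLM or Cursor are built; the file name and state fields are invented.

```python
# Generic "project memory" pattern: state lives on disk, not in the context
# window, and a compact summary of it is prepended to every request.
# File name and fields are hypothetical.
import json
from pathlib import Path

STATE_FILE = Path("project_state.json")

def load_state() -> dict:
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())
    return {"design_constraints": [], "decisions": [], "open_tasks": []}

def save_state(state: dict) -> None:
    STATE_FILE.write_text(json.dumps(state, indent=2))

def build_prompt(state: dict, user_message: str) -> str:
    # Constraints set in step 1 are still visible at step 40, because they
    # travel with the project, not with the chat transcript.
    return (
        "Project constraints: " + "; ".join(state["design_constraints"]) + "\n"
        + "Decisions so far: " + "; ".join(state["decisions"]) + "\n"
        + "User: " + user_message
    )
```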
We are entering the era of Agentic UX. The winners won't be the models that talk the best. They will be the systems that listen the best—and then have the agency to go do something about it.