
Beyond the Chatbot: Designing for Agency


AI · Product Strategy · User Experience · Future of Technology · Agentic AI · Constraint Loops

We spent 2023 marveling that computers could talk. We are spending 2025 realizing that "talking" is a terrible interface for getting work done.

The promise of AI was automation—the idea that we could hand off a task and get a result. But the reality of the "Chatbot Era" (ChatGPT, Gemini, Claude) is that we have simply traded doing the work for micro-managing the worker.

If you ask a chatbot to "research 10 competitors," you don't get a finished report. You get a conversation. You have to nudge it when it hallucinates, correct it when it drifts, and remind it of the format. You aren't a manager; you're a babysitter.

This isn't agency. It's micromanagement.

[Illustration: a person hunched over a blank, blinking cursor while disembodied AI chat bubbles chatter around them, each offering conflicting or incomplete directives; a metaphor for the "Blank Cursor" problem and the babysitter experience.]

The "Blank Cursor" Problem

The fundamental flaw of the chat interface is that it is reactive. It waits for you. It blinks at you.

Real agency requires a system that is proactive—one that understands your intent deeply enough to pursue it without constant hand-holding.

In building Forge, my AI agent platform, I ran into this wall immediately. I wanted to build an agent that could "build a landing page." But every time I tried to do this with a pure LLM chain, it would suffer from Goal Drift. By step 4, the model had forgotten the design constraints I set in step 1.

The solution wasn't a better prompt. It was a better architecture.

The Case for "Constraint Loops"

To move beyond the chatbot, we need to stop treating AI as a "magic box" and start treating it as a component in a Constraint Loop.

[Illustration: a geometric diagram of a Constraint Loop; the user's intent sits at the center while small agents complete micro-tasks on circular pathways, each returning to a central validation node before proceeding.]

A Constraint Loop is a system that forces the AI to validate its own path against the original intent before it takes the next step. It looks like this:

  1. Intent Definition: The user sets a high-level goal (e.g., "Find 5 startups in Boulder hiring PMs").
  2. Execution Step: The Agent takes one step (searches Google).
  3. The Critic (The Loop): A separate model—or a deterministic code block—pauses the Agent and asks: "Does this result match the original criteria?"
    • If YES: Proceed.
    • If NO: Self-correct without bothering the user.
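
The three steps above can be sketched in a few lines. This is a minimal illustration, not Forge's actual implementation; `execute` and `critic` are hypothetical stand-ins for an LLM call and a validator (a second model, or deterministic code):

```python
from dataclasses import dataclass, field

@dataclass
class ConstraintLoop:
    """Sketch of a Constraint Loop: every execution step is checked
    against the original intent before the agent proceeds."""
    intent: str                       # 1. Intent Definition
    constraints: list[str]            #    e.g. ["location: Boulder", "role: PM"]
    max_retries: int = 3
    log: list[str] = field(default_factory=list)

    def run_step(self, execute, critic):
        """2. Take one execution step, then 3. loop through the Critic."""
        for attempt in range(self.max_retries):
            result = execute(self.intent, self.constraints)
            if critic(result, self.constraints):   # Does it match the criteria?
                self.log.append(f"ok: {result}")
                return result                      # YES -> proceed
            self.log.append(f"retry {attempt + 1}: {result}")  # NO -> self-correct
        raise RuntimeError("Constraints unsatisfied; escalate to the user.")

# Toy demo with deterministic stand-ins for the agent and the critic.
attempts = iter(["startup in Denver", "startup in Boulder hiring PMs"])

def fake_execute(intent, constraints):
    return next(attempts)              # pretend each call is one agent step

def fake_critic(result, constraints):
    return "Boulder" in result         # deterministic validation code

loop = ConstraintLoop(intent="Find 5 startups in Boulder hiring PMs",
                      constraints=["location: Boulder", "role: PM"])
print(loop.run_step(fake_execute, fake_critic))
```

The first result drifts (Denver); the critic rejects it and the loop self-corrects without surfacing the failure to the user, which is the whole point of the pattern.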

This simple architectural shift changes the UX entirely. It moves the user from "Supervisor" (watching every step) to "Executive" (setting the goal and reviewing the outcome).

Agency, Not Automation

There is a subtle but critical difference between "Automation" and "Agency."

  • Automation removes the human from the loop entirely. It is for repetitive, low-stakes tasks.
  • Agency empowers the human. It gives them a lever to move a mountain, but they still choose where to place the fulcrum.

At MSCI, when I deployed AI tools to our engineers, operations, and product teams, the goal wasn't to replace them. It was to give them agency. We built tools that allowed them to ask, "How does my work connect to the CEO's Q3 goals?" That isn't automation; that is clarity.

The Future is a Canvas

This is why I believe the "Chat" interface is a dead end for complex work. The future of AI isn't a text box; it's a Canvas.

Tools like NotebookLM and Cursor are already showing the way. They don't just talk back; they maintain state. They have a "memory" of your project that persists beyond the context window. They allow you to work alongside the AI, treating it as a collaborator rather than an oracle.
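
That "memory beyond the context window" can start as something very simple: durable project state written to disk and re-injected into every model call. A minimal sketch, with a file name and schema of my own invention (not how NotebookLM or Cursor actually work):

```python
import json
from pathlib import Path

STATE_FILE = Path("project_state.json")   # hypothetical per-project store

def load_state() -> dict:
    """Restore the project's memory; it survives across sessions."""
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())
    return {"goal": None, "decisions": []}

def save_state(state: dict) -> None:
    STATE_FILE.write_text(json.dumps(state, indent=2))

def build_prompt(state: dict, user_message: str) -> str:
    """Re-inject durable state into every call, so constraints set in
    step 1 are still visible at step 40."""
    return (f"Project goal: {state['goal']}\n"
            f"Decisions so far: {'; '.join(state['decisions'])}\n"
            f"User: {user_message}")

state = load_state()
state["goal"] = "Build a landing page"
state["decisions"].append("dark theme, no stock photos")
save_state(state)
print(build_prompt(state, "Now add a pricing section."))
```

The design choice that matters is that the state file, not the chat transcript, is the source of truth: the conversation can scroll away, but the goal and the decisions do not.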

We are entering the era of Agentic UX. The winners won't be the models that talk the best. They will be the systems that listen the best—and then have the agency to go do something about it.