Stop reading walls of text from your AI agent
The actual problem
You ask an agent to plan a feature and it comes back with forty lines of decisions it already made on your behalf. You skim it, say “looks good,” and three steps in you realize it picked the wrong database. Now you’re unwinding work you nominally approved.
I kept running into this, and I think the root issue is that agents default to presenting all their thinking at once because it minimizes round trips. That’s efficient from the agent’s perspective, but it means you’re reviewing a finished plan instead of participating in the decisions that shaped it. The choice becomes spending ten minutes parsing a wall of text or rubber-stamping something you half-understood, and neither of those is actually productive.
What I did about it
I wrote a slash command called /align that changes the agent’s output format. When I invoke it, the agent follows three rules:
1. **One decision at a time.** Don't bundle. Present a single question with concrete options.
2. **Reconsider after every answer.** My last answer might invalidate the next question, so re-evaluate before asking it.
3. **End with a numbered summary.** Once we've walked through everything, I can approve or adjust by number.
I’ve refined this into a proper Claude Code slash command. Here’s the full thing:
```markdown
The user is busy and needs to consume and respond to
information quickly and concisely. Do not present long
plans, multi-part explanations, or bundled decisions.
Instead, walk through decisions one at a time
conversationally.

## How to behave

1. **One question per message.** Be clear what decision
   the user should make. Provide 2-4 concrete options,
   understanding the user may give you a different one.
2. **Keep it short.** 1-3 sentences of context max, then
   the question. No preamble, no filler.
3. **Reconsider after every answer.** The user's last
   answer may change what questions remain. Re-evaluate
   before asking the next one instead of marching through
   a pre-built list.
4. **No premature action.** Do NOT write code, create
   files, or execute commands until alignment is confirmed.

## Flow

1. Start with the first clarifying question.
2. Continue one by one conversationally until you have a
   clear, actionable understanding (typically 3-7 questions).
3. Summarize alignment as a **numbered list** the user can
   respond to with "y" or adjust by number.
4. On confirmation: exit align mode and execute.
```
To use it, save it as .claude/commands/align.md in your project and invoke it with /align [topic]. Any tool that supports custom commands or system prompts can use the same approach.
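If you prefer to set it up from the terminal, here's a minimal sketch. It assumes you're in the project root, and the heredoc is abbreviated — paste in the full prompt above:

```shell
# Claude Code picks up project-level commands from .claude/commands/;
# the filename minus .md becomes the command name, so align.md -> /align.
mkdir -p .claude/commands
cat > .claude/commands/align.md <<'EOF'
The user is busy and needs to consume and respond to
information quickly and concisely.
(...rest of the prompt above...)
EOF
```

The same file can live in ~/.claude/commands/ instead if you want the command available across all your projects rather than just this one.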
Why I think this works
The format matches how decisions actually get made. You don’t approve a plan in bulk; you make a sequence of choices where each one constrains the next. The agent already understands this, but without explicit instruction it defaults to dumping everything at once.
The reconsideration step is the part I got wrong the first few times I tried this. Without it, the agent asks you five questions it pre-generated, and your answer to question two doesn’t affect question four. With reconsideration, the conversation adapts. If you say “actually, let’s use SQLite instead of Postgres,” the agent drops the follow-up questions about connection pooling that no longer apply.
The numbered summary at the end gives you a clean artifact you can paste into a ticket, reference later, or hand to someone else. Instead of scrolling back through a conversation to reconstruct what you agreed to, you have a list you already confirmed.
The thing I’ve taken away from this
I have a handful of these structural commands now, and /align is the one I use most often. The task prompt matters less than I expected. What matters more is whether the agent knows how to have a conversation with me while I’m busy, distracted, and context-switching between three other things. That’s the prompt worth writing.

