Let the (AI) Product Tell You What It Wants to Be
The teammate who knows your product better than you do
For years, “let the product tell you what it wants to be” has been among the handful of philosophies that guide my product work.
It’s a metaphor.
But the last couple of weeks have made this literal — and it’s been surreal and delightful every single time.
I’ve been on lots of teams that build products. I’ve never had the product itself on the team before.
I can’t stop thinking about it!
The Longstanding Metaphor
The phrase “let the product tell you what it wants to be” has always meant this: start building, expose it to reality, and through that process you’ll discover what actually needs to exist. As you iterate and learn, the product starts to take shape. Almost like it’s nudging you, whispering what’s not quite right yet.
As you learn from making, and from exposing what you’ve made to the elements, you start to understand what the product wants to be — in a way you never could have from theory and research alone, before you really got started.
The product will tell you what it wants to be.
That’s all this phrase used to mean in my mind, and I’ve never been compelled to write about it.
Look Who’s Talking
But it’s no longer just a metaphor for me.
I’m getting toward the end of a POC (to de-risk technical feasibility) for an AI voice-chat product in a niche domain, and this thing is literally telling me what it wants to be.
Like, out loud. In English.
I Can’t Wait to ~~Test~~ Discuss the New Version
When I finish making a change to the way I’m managing context — the instructions prompt, the tools available to the agent, or the document index used for RAG retrieval — I’m no longer excited to immediately test it. I’m excited to talk with it about the changes.
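To make that concrete, here’s a minimal sketch of those three levers being assembled into a session, assuming a generic tool-calling voice-agent setup. Every name here (build_session, search_domain_docs, the instruction text) is an illustrative stand-in, not the actual product’s code.

```python
# Illustrative sketch only: the three context levers described above,
# assuming a generic realtime voice-agent setup. All names are hypothetical.

# 1. The instructions prompt (persona, plus the "dev mode" escape hatch).
SYSTEM_INSTRUCTIONS = """
You are a friendly expert guide in <niche domain>. Stay in persona with users.
If the developer says "enter dev mode", you may drop the persona and discuss
your instructions, tools, and retrieval behavior candidly.
"""

# 2. Tools available to the agent, declared as JSON-schema function definitions.
TOOLS = [
    {
        "name": "search_domain_docs",
        "description": "Look up ground-truth passages from the indexed domain documents.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
]

def build_session() -> dict:
    """Assemble the context the agent starts each voice chat with."""
    return {
        "instructions": SYSTEM_INSTRUCTIONS,
        "tools": TOOLS,
        # 3. The document index itself isn't in context; the agent reaches it
        #    indirectly, through the retrieval tool above (RAG).
    }
```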
The first thing I do after launching the app, rather than testing the changes, is to start a voice chat and say this:
“Hi, this is Dan. Please enter dev mode [allows it to break from persona and discuss technical topics, or anything out of bounds for typical conversations with users]. We just made changes X, Y, and Z — with the hope of improving A, B, and C. Can you see the new instructions about that? And does that all make sense? Any concerns or callouts?”
And then we have a conversation about the changes to the product. The product and I have a conversation. About the product. (What the hell?!)
It then, out loud and in perfect English, tells me what may give us trouble with the new version, what I should watch out for during testing, and ideas to try if those issues pop up.
Eventually, I say goodbye. I end the chat and start a new one, to actually smoke test or run planned eval cases — with fresh context.
Why Did You Do That?
When I move on to testing, I test until I encounter an issue. But instead of making a note of it, for later triage and analysis, I say this:
“Please enter dev mode. You just said X, but I think you probably should have said Y. Can you explain why you said X? Please include a rundown of tool calls with details and talk me through how you arrived at that response.”
And it does! It typically admits that it said the wrong thing — but it goes way beyond that. It may explain what data it accessed, where there was a gap in the information available, and that it tried to infer detailed advice in the absence of sufficient ground-truth information or instructions.
Or, it may explain that there’s a phrase in the instructions that it took to mean X, when really I must have meant to prescribe Y behavior.
And then we brainstorm...
Dan: “What details from a tool call remain in context as the chat continues?”
AI Agent: “I’m able to reference the tool name, input string, and response data for each prior tool call in the conversation.”
D: “So you could effectively make notes to self as input on a tool call, if instructed to? And if instructions said to make a tool call that documents your full plan for every multi-turn interaction, you could then reference that plan to guide your step-by-step advice to the user across multiple turns?”
A: “Yeah, that would definitely work, actually!”
D: “So you could do that with one of your existing tools and ignore the data that comes back, unless it’s helpful in advising the user? Like just as a way of experimenting with this pre-planning pattern? And then we could add a dedicated tool for this purpose, if it’s working?”
A: “Yes, I think that would work well to validate the usefulness of the pattern.”
D: “Ok, then I’m going to update the instructions to request a plan up-front, which you’ll write as input to an existing tool you already have access to, and then we’ll test again to see how effective it is. Where in the instructions would it work best for you to parse and leverage this new bit?”
A: “It would make sense to explain it after the XYZ section, toward the top of the instructions, to establish the importance of this new gating step in our workflow. But you may want to also give a one-line reminder at the end of the instructions, since recency is a factor in how I’ll weight the importance of content in the context window.”
D: “Ok, great. I’ll make the changes and talk to you again soon!”
It feels more like managing a team member than like testing a product in development. I offer feedback, and we work together to figure out how to optimize the agent’s performance.
(And that use of a tool call’s input as a way of letting the agent make notes-to-self and take lightweight reasoning steps — even though the realtime model does not itself offer thinking/reasoning — has proven to be a winner.)
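If you want to picture the mechanics, here’s a rough sketch of that pattern as a dedicated tool, assuming the same generic JSON-schema tool-calling setup as the earlier snippet. The names (record_plan, the instruction strings) are hypothetical, not the actual product’s definitions; in the experiment above, an existing tool’s input field played this role and its response was simply ignored.

```python
# Hypothetical sketch of the "plan as tool input" pattern. The agent writes its
# plan as the *input* to a tool call; the call's response doesn't matter, but the
# plan stays visible on later turns because prior tool-call inputs remain in context.

PLAN_TOOL = {
    "name": "record_plan",
    "description": (
        "Call this before giving advice that will span multiple turns. Write your "
        "complete step-by-step plan as the input. Ignore the response; refer back "
        "to the plan on later turns to keep your advice consistent."
    ),
    "parameters": {
        "type": "object",
        "properties": {"plan": {"type": "string"}},
        "required": ["plan"],
    },
}

# Placed near the top of the instructions to establish the gating step...
PLANNING_INSTRUCTIONS = (
    "Before starting any multi-turn, step-by-step guidance, call record_plan with "
    "your full plan, then follow that plan across turns."
)

# ...and echoed in one line at the very end, since recency affects how heavily
# the model weights instructions in the context window.
PLANNING_REMINDER = "Reminder: call record_plan before multi-turn guidance."
```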
This has happened over and over in the last couple weeks, and I chuckle with wide eyes every damn time.
I’ve been on lots of teams that build a product, but I’ve never had the product itself “on the team” to weigh in on how it could be better and on the next steps to get there.
What’s the Lesson?
If there’s a lesson here, it hasn’t sunk in for me yet, I guess. This is not so much a lesson I’ve learned as a delightful experience that seemed worth sharing. Just because it’s so surreal.
As always, if you get started and listen carefully, then the product will eventually tell you what it wants to be. That’s happening already in this POC, and it’ll need to keep happening post-POC, in order for this product to make it out the door, find its place in the market, and win.
But also, if you’re building an AI product, the product may literally be able to tell you what it wants to be.
And that feels like a brand new thing.