Chat isn't the ideal starting UX for insight
Chat feels natural until you realize it's making you do all the work.
In the early days of large language models, chat felt like the magical interface. The sheer wonder of saying "Explain quantum physics like I'm five" and watching an LLM construct a perfect response was why ChatGPT became a breakthrough success. Instead of wrestling with tools, filters, and Google's increasingly code-like search queries to get blue links, these chat-based interfaces suddenly made LLMs feel human and accessible.
But somewhere along the way, we stopped questioning whether chat made sense for every task and interaction. The fact that we quickly moved from asking systems natural questions to discussing "prompt engineering" (where you need specialized syntax to get the right output) shows we're straying from what AI was supposed to solve.
Leading with chat when it doesn't make sense
Chat has become the poster child for modern AI interfaces. Every app, tool, and startup has added natural language chat to their existing workflows as their "AI-led interface." We've stopped questioning whether this is even the right interface for the task at hand.
It feels ironic when design tools like Figma or Canva tout open-ended chat interfaces as the default for generating and presenting designs, as if the very reason these visual tools exist is no longer relevant. Demo videos show users typing long, structured prompts into Salesforce Einstein GPT's chat-based workflow assistance, creating a whole new silo of expert 'users' who know how to extract real value from it.
Chat is often rolled out to show off AI visibly rather than to solve user problems more effectively. Even Google isn't exempt. The multiple open-ended Gemini chat boxes in Gmail and Drive demonstrate how context-unaware the AI is, waiting for users to initiate conversations about things that should have been obvious from the start.
“We made painting feel like typing.”
We're forcing rich, multi-dimensional problems through a single, narrow channel. As Amelia Wattenberger brilliantly puts it: "We made painting feel like typing, when we should have made typing feel like painting." Her essay, Our interfaces have lost their senses, is worth visiting (ideally on desktop) to experience the beautiful artwork and design.
Humans think and spot patterns across modalities. We understand spatial relationships through visualization, grasp trends through charts, and absorb details and nuances through text. Each medium has its strengths: text excels at depth and precision, visualizations reveal patterns and relationships, and audio conveys tone and urgency. Yet most AI applications have converged on a single mode: the text box.
Having to exhaustively write out what you want a computer to do is still equivalent to writing code, not using a product. Traditional software solved this through useful symbols and interfaces that concentrate loads of functionality into context-aware actions. We don't want to abandon that progress and retreat to pure command-line interfaces, even if they're powered by natural language and voice.
Chatting with your dashboard isn’t the answer
Nowhere is this more apparent than in analytics or, at a meta level, "generating insights." When you engage in chat-based investigation of data for insights, you're essentially writing natural language SQL, perhaps at a more abstract level. Just as prompts have evolved to demand "prompt engineering" and careful query construction to maximize value, you end up needing domain experts who can construct the right questions.
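To make that equivalence concrete, here's a hedged sketch (the table, questions, and metric definitions are hypothetical): the natural-language question that reliably works already carries the full specification of the SQL it gets translated into.

```python
# A hypothetical orders table: the natural-language question that reliably
# "works" is already a query specification in disguise.

vague_question = "How are sales doing?"  # ambiguous: which metric, region, period?

# The question a domain expert must construct to get a trustworthy answer...
precise_question = (
    "Show weekly gross revenue for the EMEA region over the last 8 weeks, "
    "excluding refunded orders."
)

# ...encodes the same information as the SQL it will be translated into:
equivalent_sql = """
SELECT date_trunc('week', order_date) AS week,
       SUM(amount)                    AS gross_revenue
FROM   orders
WHERE  region = 'EMEA'
  AND  status <> 'refunded'
  AND  order_date >= current_date - INTERVAL '8 weeks'
GROUP  BY 1
ORDER  BY 1;
"""
```

Writing the precise question requires knowing the metric, the filters, and the grain of the data, which is exactly the expertise SQL demands.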
Power BI's Q&A feature, Tableau Pulse, and ThoughtSpot Sage are all good-faith attempts to improve on the dashboard/reporting framework of BI tools. But slapping a natural language Q&A interface on the same old framework doesn't address the core issue: users still need to hunt for the right information and think of the right questions.
Data visualization pioneer Edward Tufte said, "I think it is important for software to avoid imposing a cognitive style on workers and their work." With AI today, we truly have an opportunity to move beyond that limitation.
Better ways to deliver insights
A simple framework for thinking about insight interfaces is to see them evolving along a spectrum: from pull-based querying (dashboards and chat), to proactive summaries, to narrative insights, and ultimately to ambient delivery.
This is not to say that we will abandon the lower modes of insight access, but rather that we will consume insights through all of these means.
Apple Intelligence's notification summaries, despite their imperfect rollout, demonstrate the power of contextual, proactive AI. A noisy group chat condensed to "Plans tonight cancelled, pencilled in for next weekend" is genuinely useful, with no need to open a chat and ask for a summary or probe with specific queries. To be clear, querying for information through chat is still useful and should be available when needed, but 80% of users and use cases are addressed by an AI that approaches the available data proactively.
Similarly, we have reached a point where company insights can take a narrative form that makes meaning out of the noise of information.
Draft, then refine
Imagine you are going to discuss an important problem with the leader of your business unit. The default approach is to bring an initial presentation or document that serves as the launch pad for the meeting. The decision makers will often have more questions, and you will have more data in your back pocket to answer those specific questions, but the discussion always starts with the initial editorialized story you bring to the table.
Business insights should work similarly. Instead of starting with a blank chat box, the system generates initial insights automatically, presents them in appropriate formats (charts, tables, summaries), and then offers contextual ways to drill down or refine. This flips the interaction model. Chat can still play a role for refinement, clarification, or ad-hoc queries, but it's not the primary interface.
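As a minimal sketch of what this flipped model could look like (all names here, Insight, generate_briefing, refine, are hypothetical, not any product's real API): the system drafts a small briefing up front, and chat only enters afterwards, scoped to an insight already on the table.

```python
from dataclasses import dataclass, field

@dataclass
class Insight:
    headline: str                      # the editorialized takeaway
    fmt: str                           # "chart" | "table" | "summary"
    drill_downs: list = field(default_factory=list)  # suggested follow-ups

def generate_briefing(weekly_revenue: dict) -> list:
    """Proactively draft the opening story instead of waiting for a prompt."""
    weeks = sorted(weekly_revenue)
    latest, prior = weekly_revenue[weeks[-1]], weekly_revenue[weeks[-2]]
    change = (latest - prior) / prior * 100
    direction = "up" if change >= 0 else "down"
    return [Insight(
        headline=f"Revenue {direction} {abs(change):.1f}% week over week",
        fmt="chart",
        drill_downs=["Which region drove the change?", "Compare to last year"],
    )]

def refine(insight: Insight, question: str) -> str:
    """Chat appears only here, scoped to an insight already surfaced."""
    return f"Refining '{insight.headline}' with: {question}"

briefing = generate_briefing({"2024-W21": 118_000, "2024-W22": 131_000})
print(briefing[0].headline)                      # "Revenue up 11.0% week over week"
print(refine(briefing[0], briefing[0].drill_downs[0]))
```

The design choice that matters is that the briefing arrives with format hints and suggested drill-downs attached, so the user starts from an editorialized draft rather than a blank text box.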
The best part about this approach is that it not only solves the cold-start problem but also escapes the prison of a fixed structure or format. Just as a newspaper uses charts, images, text, and headlines to tell stories, this approach can find genuinely interesting ways to visualize and communicate the core insight, going beyond what's achievable when the burden of specification rests on the user.
The future will involve truly ambient insight delivery
Chat has become the de rigueur form for generative AI. It is visible and maps neatly onto the text-in, text-out shape of LLMs, but it is severely limiting in several ways. Chat excels at brainstorming, explanation, and refinement. But for insights, exploration, and pattern recognition, we need interfaces that match how humans actually think and work.
Narrative, proactive AI will be the natural step up in presenting insights. Eventually, our tools will evolve to dispense truly ambient insights.
Imagine an insight engine that is always running, irrespective of what tools you are using or meetings you are in. Adding a line to a presentation should immediately surface relevant insights around that core idea for you to consider. Talking about a topic in a meeting should instantly be complemented by supporting insights streamed in real time. Right-clicking on an interesting news article should immediately let you explore insights relevant to your company's product, business line, or competitive offering. The possibilities are endless.
When insights become truly ambient, it will be like having an ever-present, extremely smart intelligence watching alongside you and delivering the right insight at the right time. We are some ways away from this. But it's time we move beyond chat to better ways of surfacing insights.
~Babbage Insight