Should organizations mandate AI for Business Intelligence?
Only proactive companies turn AI mandates into real foresight.
The clock is ticking for the marketing executive cramming for tomorrow’s quarterly budget review. She has a hunch that her Facebook ad spend has started cannibalizing organic conversions in Southeast Asia. However, her dashboards don’t have that data, and her analyst team has yet to respond despite the 48-hour notice.
She does the inevitable. She uploads the CSV to ChatGPT and poses her questions to it. By the time she returns from the coffee machine, she has a chart, two slides’ worth of conclusions, and a batch of striking new insights.
Across the company, the same scene repeats with product managers, engineers, and customer service reps. They are all a little more productive, and a little more reckless, despite the reassurance on their company’s website:
“Your data privacy is protected across our Company.”
The BI AI paradox
For organizations looking at AI, there is a paradox:
Option 1: “We ban it until we figure it out”. This works until an employee drags a CSV into ChatGPT and says, “Show me the conversion impact,” because it’s about 50x faster than waiting for an internal ticket.
Option 2: “We blanket mandate it top down”. Everyone is now part of an AI-first company. But you've mandated chaos at scale if your data stack is a junk drawer of half-labeled metrics.
Put this way, the paradox of using AI for business intelligence feels like an impossible choice. Damned if you do, and damned if you don’t.
The Wild West Problem
Employees love having their problems solved with AI so much that 35% are reportedly willing to pay for it out of their own pockets. With staff at most organizations using chatbots to solve work problems, a shadow AI economy has emerged: $50K-per-year SaaS tools are losing to thousands of $20 consumer subscriptions.
This free-for-all gen AI wilderness of personal, homegrown workflows and a wide variety of tools has many issues:
Multiple truths: Different teams working off their own AI outputs can spark endless reconciliation meetings and discussions.
Speed over accuracy: As teams chase instant answers, fixing bad decisions later could cost far more.
Erosion of trust: Credibility evaporates once executives prefer “quick AI answers” over BI processes. No one will wait for a week-long deep dive by the analyst teams.
Compliance risks: Sensitive data being uploaded to public models makes a mockery of every data and governance promise on corporate websites.
Shadow AI is a symptom of unmet demand.
If the company provides no suitable alternative, people will start looking for their own ways to make their jobs easier and better.
So, blanket AI mandates?
“Using AI effectively is now a fundamental expectation of everyone at Shopify,” said the leaked memo from Shopify’s CEO, Tobias Lütke, earlier this year. It went on to state that employees would be evaluated on their AI use.
The mandates have rolled in thick and fast across Meta, Microsoft, Google, and thousands of small-to-medium companies. No one wants to be left behind, and no one wants fragmented shadow AI. Nearly half of Fortune 100 board members are now expected to show AI proficiency, as boards increasingly seek to oversee AI adoption in their companies.
Organizations are contending with two clashing energies. One group of employees can’t get enough of these AI tools, while a whole section would rather not change how they work.
Consequently, under pressure to “show wins” to boards and CXOs, business heads bolt on whatever AI copilot demo looks the shiniest. This haphazard implementation sometimes works in the short term. Most often, it dies in pilot purgatory: the novelty fades, or the patch creates a baroque new workflow that no one wants.
Viewed in terms of blanket top-down mandates, it’s not that surprising that the MIT Nanda report found that 95% of GenAI pilots failed to deliver ROI.
Tower of Babel
It’s especially tough to just bolt on copilots for business intelligence because of two fundamental issues:
Data silos: Over 40% of enterprise data comes from 50+ apps, and different teams often report different values for the same metric. For example: the product dashboard shows 45K MAU, the marketing report says 52K, and the growth team is celebrating hitting 48K.
Poor data lineage: Without proper governance or auditability, AI models could amplify noise instead of surfacing the truth.
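To make the “multiple truths” problem concrete, here is a minimal hypothetical sketch (the user IDs, events, and metric definitions are all invented) of how three teams can compute three different MAU figures from the very same event log:

```python
from datetime import date

# Hypothetical event log: (user_id, event_date, event_type)
events = [
    ("u1", date(2024, 5, 2), "login"),
    ("u1", date(2024, 5, 20), "purchase"),
    ("u2", date(2024, 5, 11), "login"),
    ("u3", date(2024, 4, 28), "login"),   # active, but outside May
    ("u3", date(2024, 5, 1), "signup"),   # signup only -- is this "active"?
    ("u4", date(2024, 5, 15), "purchase"),
]

MONTH = (2024, 5)

def in_month(d):
    return (d.year, d.month) == MONTH

# Definition 1 (product team): any event in the calendar month counts
mau_any_event = {u for u, d, t in events if in_month(d)}

# Definition 2 (marketing team): only logins count as "active"
mau_logins = {u for u, d, t in events if in_month(d) and t == "login"}

# Definition 3 (growth team): logins or purchases, signups excluded
mau_engaged = {u for u, d, t in events
               if in_month(d) and t in ("login", "purchase")}

# Three teams, one event log, three different "MAU" numbers
print(len(mau_any_event), len(mau_logins), len(mau_engaged))  # → 4 2 3
```

Until the organization governs a single definition, a bolted-on copilot will faithfully amplify whichever of these it happens to be pointed at.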
An arbitrary bolt-on of a shiny new BI copilot creates a Tower of Babel situation that breaks trust. Executives start doubting dashboards, teams second-guess models, and eventually, everyone drifts back to their own DIY spreadsheets and shadow AI tools.
It’s worse than the Wild West, because the credibility of internal AI-assisted business intelligence is left in tatters. Employees sneak off to get real answers elsewhere.
Creating a Proactive Insight Engine
Mandate or no mandate is a false dichotomy, anyway. The ideal way to think of AI in business intelligence is less as a copilot slapped onto the analytics team and more as a rethink of how insights are generated. Instead of asking “how can we speed up our analysis by x%?”, ask “how do we unlock insights proactively?”
It’s really a function of two distinct elements: a) how the organization views AI, and b) its approach to business intelligence with AI.
Here are some tenets that we believe will help move towards creating a proactive insight engine:
Shift from productivity to foresight: Instead of asking how to make today’s workflows faster, ask which of them can be eliminated by surfacing insights proactively. The real leverage comes when AI pushes answers before the exec even asks.
Design for decisions instead of reports and dashboards: Even when you pilot, flow insights to where choices are made, such as weekly reviews, boardrooms, and product sprints. Do not frame it as enhancing existing reports or BI processes.
Trust and auditability are table stakes. Proactive insights must be reconcilable, explainable, and traceable. Otherwise, speed simply amplifies noise.
Start narrow, scale deep. Pick a few high-value workflows (E.g., campaign analysis, churn, anomalies), let AI own them end-to-end, then expand.
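As a sketch of what “reconcilable, explainable, and traceable” could look like in practice, here is one hypothetical shape for a proactively surfaced insight. Every field name, query, and table below is illustrative, not a prescription:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical record for a proactively surfaced insight.
# All field names, queries, and tables are invented for illustration.
@dataclass
class Insight:
    summary: str                # the claim pushed to the decision-maker
    metric: str                 # which governed metric it is based on
    value: float
    source_query: str           # exact query, so the number is reproducible
    upstream_tables: list[str]  # lineage: where the data came from
    generated_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

alert = Insight(
    summary="Paid conversions in Southeast Asia fell 12% week over week",
    metric="paid_conversions_sea",
    value=-0.12,
    source_query="SELECT ... FROM core.conversions WHERE region = 'SEA'",
    upstream_tables=["raw.ad_events", "core.conversions"],
)

# Anyone who doubts the headline number can re-run the query and
# inspect the lineage instead of arguing about it in a meeting.
print(alert.summary)
print(alert.upstream_tables)
```

The point of carrying the query and lineage alongside the headline is that a skeptical executive can verify the number rather than retreat to a DIY spreadsheet.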
The best data-driven teams of the future will use AI not only to accelerate analysis but to surface insights no one thought to ask for, allowing them to make rapid decisions.
~Babbage Insight




