Every AI optimist has Jevons' paradox on speed dial.
Satya Nadella invoked it when the DeepSeek model dropped. DeepSeek was far more efficient than existing LLMs, triggering widespread fear that we wouldn’t need so much compute, or so much investment in AI, if models could be an order of magnitude more efficient.
For the uninitiated, the Jevons’ paradox argument goes like this: when steam engines became more efficient, coal consumption increased because efficiency made steam power so much more useful.
There are several examples of this:
Improving road networks to ensure smoother, faster traffic flow eventually increases traffic and creates new congestion.
Making a data center more efficient eventually results in more data centers (and more energy consumption), not less.
Naturally, amid fears that AI will destroy jobs or wipe out invested capital, Jevons’ paradox has been paraded as reassurance: eventually, demand for x will increase, not decrease.
Will Jevons’ paradox apply to AI-generated insights? Will we need more data analysis teams and managers to handle the surge in demand?
Earlier, we talked about the “vast middle” of the pyramid involved in getting to insights. If Jevons’ paradox applies here, we’d expect this middle to grow as net demand for insights increases.
However, this reasoning contains two critical assumptions:
Insights behave like coal or electricity, where more is always better.
Insight-generating AI will function as a copilot, making human data analysts and managers more efficient while preserving their role.
Both assumptions may be flawed:
Insights Aren't Infinitely Elastic
Insights have natural consumption limits. You need the right insight at the right moment to make a decision, and then you move on. Having 400% more insights doesn't create 400% more value. It creates noise.
There's a cognitive bottleneck: humans can only process so much information before decision paralysis sets in. There's also an action bottleneck: every meaningful insight spawns real-world actions that take time to implement. You can't act on insights faster than reality allows.
This doesn't mean demand won't increase at all. Cheaper insights will democratize access across organizations, and more people will consume insights. But this expansion hits limits quickly because of how insights actually work in practice.
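To put a number on the elasticity claim, here's a toy model of insight value under a cognitive/action bottleneck. Nothing in it is measured; the concave curve and the capacity constant are illustrative assumptions, a sketch of the shape of the argument rather than a finding.

```python
import math

def insight_value(n_insights: int, action_capacity: int = 20) -> float:
    """Toy model: value grows concavely (each extra insight is worth
    less than the last) and stops growing entirely once you exceed
    what the organization can actually act on. The log curve and the
    capacity of 20 are illustrative assumptions, not measurements."""
    actionable = min(n_insights, action_capacity)  # action bottleneck
    return math.log1p(actionable)                  # diminishing returns

print(f"10 insights -> value {insight_value(10):.2f}")  # 2.40
print(f"50 insights -> value {insight_value(50):.2f}")  # 3.04
```

In this sketch, 400% more insights buys roughly 27% more value, and any concave curve with a hard action cap tells the same story.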
More importantly, the Jevons paradox assumes AI will function as a "copilot". But that's not the actual goal.
The Pilot vs. Copilot Distinction
The real question isn't whether AI will make humans more productive. It's whether we'll need the humans in the loop every time we want to generate insights.
A copilot AI enhances human analysts, preserving the traditional pyramid structure where data flows up through multiple interpretation layers before reaching decision-makers. This would indeed trigger Jevons-style job growth.
But a pilot AI connects decision-makers directly to insights, bypassing the interpretive layers entirely. The goal is for executives who know nothing about data pipelines to understand and act on insights without needing a human translator.
If this is the actual direction of development, then it fundamentally changes what happens to organizational demand.
Where Complexity Migrates
AI-generated insights won’t eliminate data-organization complexity (especially in the short term). Instead, they relocate it.
In traditional data organizations, complexity lives in the "vast middle": teams of analysts, managers, and insight generators who clean, interpret, and contextualize data for decision-makers. When AI handles this interpretation layer, the complexity migrates to two places:
The data infrastructure layer below, and
The decision-making layer above.
AI systems need pristine data pipelines, maintained schemas, and robust instrumentation. Unlike human analysts who can work around messy data, AI pilot systems require industrial-grade data operations. Job postings for analytics engineering roles rose 114% last year. AI tooling is the biggest area of investment for data teams in 2024.
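What "industrial-grade" means in practice is that contract violations get caught mechanically, before the AI layer ever sees the data. Here's a minimal sketch of such a check; the `orders` feed, its fields, and the one-hour freshness threshold are all hypothetical, stand-ins for whatever a real pipeline would enforce.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical contract for an `orders` feed consumed by an AI insight
# layer. A human analyst might silently work around violations; a
# pilot system needs them rejected upstream.
EXPECTED_SCHEMA = {"order_id": str, "amount": float, "created_at": str}
MAX_STALENESS = timedelta(hours=1)

def validate_record(record: dict) -> list[str]:
    """Return the list of contract violations for one record."""
    errors = []
    for field, expected in EXPECTED_SCHEMA.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], expected):
            errors.append(f"{field}: expected {expected.__name__}, "
                          f"got {type(record[field]).__name__}")
    # Freshness check; timestamps are assumed timezone-aware ISO 8601.
    if isinstance(record.get("created_at"), str):
        try:
            ts = datetime.fromisoformat(record["created_at"])
        except ValueError:
            errors.append("created_at: not ISO 8601")
        else:
            age = datetime.now(timezone.utc) - ts
            if age > MAX_STALENESS:
                errors.append(f"stale record: {age} old")
    return errors

# A record a human would quietly fix, but a pipeline must reject:
bad = {"order_id": 42, "amount": "19.99",
       "created_at": "2024-01-01T00:00:00+00:00"}
print(validate_record(bad))
```

In real deployments this kind of check lives in pipeline tooling (dbt tests, Great Expectations, and the like) rather than a one-off script; the point is that enforcement moves from an analyst's judgment into the infrastructure itself.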
The top becomes more important, too. When insights flow directly to decision-makers, the quality of decision-making becomes the primary bottleneck. Leaders must develop new skills to consume and act on machine-generated insights, which raises the bar on decision speed.
The New Organizational Structure
This creates a "barbell" structure rather than a pyramid. At the bottom, engineers build and maintain the data infrastructure that feeds AI systems. In the middle, AI handles insight generation, which gets productized rather than being human-intensive. At the top, more decision-makers consume insights directly and act on them.
The "vast middle" gets automated and productized. The human roles that remain are either highly technical (data engineering) or highly strategic (decision-making). The interpretive layer in between becomes software.
This explains why we're seeing massive investment in data infrastructure even as AI capabilities soar. Companies aren't just buying AI models; they're rebuilding their entire data operations to support AI-driven insights.
Implications
So does Jevons' paradox apply to AI and insights? Yes and no.
It doesn't apply to the human layer that processes insights. Demand for human analysts and managers won't surge because insights aren't infinitely elastic and because AI aims to bypass rather than enhance human interpretation.
But it applies to the infrastructure layer. Cheaper insights drive massive demand for the data engineering, pipeline maintenance, and system architecture that make AI insights possible.
The consolidation we're seeing (Salesforce acquiring Informatica, ServiceNow acquiring data.world) reflects this reality. Companies are integrating AI capabilities with data infrastructure because there is value in controlling the entire pipeline, right down to the very bottom.
~Babbage Insight
Profound! The new org structure makes a lot of sense. Though in my view it will not only be decision-makers in the top layer; there will also be operational folks who do the activities, e.g., B2B salespeople. One thing to consider is that AI will provide recommendations based on metrics, but it won't necessarily be aware of the human processes that generate those metrics. If, say, the AI's recommendation is to reduce customer contact time below a certain threshold, the 'how' of that recommendation would still need to be figured out by the operations teams.
Of course, one could then also replace the operational teams with agents. In any case, we will see some democratization of insight consumption and activation, where we won't need data teams, but the operational teams will need a more experimental orientation (willing to A/B test insights from AI) rather than a pure process orientation.