95% of Corporate Generative AI Projects Fail: An MIT Study

by Francisco Santolo

The MIT Media Lab published the most comprehensive study to date on generative AI in organizations: only 5% of pilots generate measurable impact. The main cause is not technological but organizational.


The MIT Media Lab, through its NANDA initiative, published "The GenAI Divide: State of AI in Business 2025," possibly the most comprehensive study to date on the real use of generative artificial intelligence in organizations.

The work draws on 150 interviews with business leaders, 350 employee surveys, and the analysis of 300 public cases of generative AI implementation. Its goal: to understand why a few achieve significant results while the majority remains stuck in unfulfilled promises.

Only 5% of pilots generate a rapid impact on revenue or P&L. The remaining 95% produce no measurable benefits.

The main cause is not technological but organizational. MIT calls it the learning gap: the inability of companies to integrate AI models into their workflows, structures, and cultures (the human component of augmented intelligence).

ROI does not come from where the most is invested. More than half of budgets go to sales and marketing, but the greatest return appears in back-office automation: reduced outsourcing, less dependence on external agencies, and greater administrative efficiency.

The technology source matters. Tools acquired from specialized vendors have a success rate close to 67%, while internal developments reach barely a third of that figure.

Startups show a different dynamic. Companies founded by 19- and 20-year-olds reached more than $20 million in revenue in under a year by focusing on solving a single problem with precision, in partnership with external platforms.

The employment impact is silent but structural. There are no massive layoffs, but many companies choose not to fill administrative or support vacancies, which anticipates a redesign of the workforce composition.

The gap is cultural, human, and strategic. It is a problem of absorptive capacity, as Cohen and Levinthal called it: the ability of an organization to recognize, assimilate, and apply external knowledge. MIT describes it as a learning gap: the inability to institutionalize collective learning.

As I have explained for some time, present and future organizations must embrace strategic ambidexterity: the capacity to simultaneously exploit current business units with efficiency and resilient positioning, and to explore (incubate, acquire) new ventures for both offensive and defensive disruption.

And AI, powered by the right business frameworks, must be integrated into both zones. Governance and the level of team autonomy to experiment with AI must also be adapted to each zone, within a corporate strategy that places AI and stakeholders at the center.

It is the strategic balance of ambidexterity that creates truly antifragile companies.

What MIT shows is an empirical update of the innovator's dilemma described by Christensen. Large corporations, trapped in their own systems of incentives, metrics, and processes, fail to capitalize on potentially disruptive technologies and fall defeated by them.

The cause is a lack of organizational plasticity and, as Christensen warns, rational incentives (maximizing revenue and profitability) that tie them to a present that still works, even though it is already being displaced.

In contrast, small and agile startups, with lightweight structures and extreme focus, manage to scale in months what incumbents find unattainable.

This should challenge us: how do we design structures that, without sacrificing core efficiency, maintain the flexibility needed to absorb disruptions? How do we become ambidextrous?

How do we escape the trap of elite consulting firms that drive transformations without being able to transform themselves?

How do we return to exploring the innovation frameworks we need, when they have become associated with value destruction?

The learning gap is, at its core, a cultural gap.

Integrating AI at the heart of strategy demands much more than software licenses. It is not a matter of CAPEX or OPEX, but of redesigning how we learn, how we decide, and how we organize.

Above all, it requires that leaders themselves adopt AI as a language. Internalize the concepts and their implications. Understand the tools (which are very simple and based on natural language) and their potential linked to the business and operating model.

Training and autonomy. Training in tools is not enough; teams must be empowered to experiment and decide, with governance and focus that differ between exploitation and exploration.

Spaces for play and validation. Innovation requires environments where making mistakes is acceptable and learnings are capitalized. But this implies methodology and frameworks to limit those risks and errors.

Intrapreneurial mindset. Seeing each collaborator as a transformation agent capable of detecting opportunities and prototyping them with AI. Listening and experimentation are the skills of the future. Learning to validate before executing or scaling.

Learning in short cycles. Test, measure, adjust, validate. The scientific method applied to management.

Hybrid teams. Humans and algorithms interacting according to the value each brings: productivity, creativity, judgment. AI as a collaborator, not just a productivity tool.

Active listening to the customer. AI should be applied to resolve real frictions, anticipate needs, and improve experiences. It can promote and complement our empathy, emotional intelligence, and relational capacity. It can enhance teamwork, active listening, and collaboration.

I have long maintained that we are not talking about isolated artificial intelligence, but augmented intelligence: the fusion of human capabilities and algorithms. We are no longer just humans: those of us who understand, adapt, and adopt the new paradigm become augmented humans.

MIT confirms this indirectly. The projects that fail are those that must generate something new (e.g., sales), because they demand augmented intelligence, and it is not present.

Back-office and productivity automations, which repeat what already exists, require less augmented intelligence and can rely on standalone AI capability.

The startups that thrive, on the other hand, are those that enhance human capability: teams that adopt AI as an extension of their thinking, skills, and strategic judgment. Companies that place it at the heart of strategy. AI-native companies adapted to the new paradigms.

We need to train leaders and teams capable of learning with AI, deciding with AI, co-creating with AI. Understanding what each public AI deployment strategically enables for the business.

It is not a matter of generating or adopting the most sophisticated model; technology commoditizes rapidly. Sustainable advantage does not lie in access to the technology, but in the strategic and cultural capacity to integrate it.

Entry barriers and differentiators are built from strategy: from the business and operating model. It is not a technical revolution but a revolution in business frameworks.

Many executives and directors today face a critical comprehension gap regarding what is truly at stake at the strategic and competitive level.

The great contribution of the report is not pointing out that 95% fail, but showing that success does not depend on AI itself, but on the organizational capacity to integrate it with purpose, coherence, and continuous learning.

That 5% will be the companies of tomorrow. And they are rapidly becoming the companies of today.

AI is a catalyst. The engine is culture, strategy, organizational learning capacity, and co-creation. The organizations of the future are learning organizations.

Organizations where human beings can co-create with AI, generate individual and collaborative augmented intelligence, empower AI and let themselves be empowered. Reaching a new level of development. Augmented humans.
