AI isn’t underperforming. Leadership just hasn’t caught up yet

AI investment is soaring but returns remain elusive. The problem is not the technology – it is the absence of three critical leadership capabilities.
by Faisal Hoque, Pranay Sanklecha, Paul Scade
A recent study of 6,000 senior executives found that, while 69% reported that their companies actively used artificial intelligence (AI), 90% said that the technology has had no impact on productivity. These findings mirror the surprisingly limited productivity gains that emerged from the digital revolution, during which widespread adoption of computing technologies produced little measurable growth across the broader economy. But that era has a crucial lesson for us today. While average productivity growth was low, the top 5% of ‘frontier’ firms enjoyed gains that were more than four times higher than those of the 95% of ‘laggards’. What differentiated these firms was not the technology available to them. It was how they adapted their organizations to fully exploit the new technology. Today, the same pattern is being repeated with AI. But the organizational change demanded is more far-reaching this time, and the leadership challenge involved in delivering that change is more intense.
AI cuts across the organization in ways that previous technologies did not. Earlier innovations could often be implemented at the departmental level with limited cross-functional impact because previous generations of technology tended to automate existing processes or enhance the information flowing through them. The promise of AI is much more far-reaching. Even modest AI initiatives cut across strategy, operations, culture, governance, and ethics simultaneously, demanding a leadership approach that works at those intersections rather than just within traditional silos.
Three features sharpen this challenge. First, as AI agents take on defined roles within organizations, leaders will increasingly need to manage hybrid teams comprising both human and AI components. Decisions about the establishment and management of team structures and workflows will reshape not just operational processes but also organizational identity and culture. Second, AI implementation forces leaders to confront the fundamental assumptions embedded in the tools they adopt: questions about values, purpose, and knowledge that have important operational and strategic consequences. Third, the change AI brings will be continuous rather than episodic. Leaders cannot manage their way through a single transition and expect years of stability; they must instead learn to navigate an ongoing transformation while maintaining organizational coherence. This is a challenge that compounds in difficulty as multiple AI initiatives are assessed, tested, and deployed in parallel.
The stakes involved in tackling these issues are already significant, and they will only increase as AI models mature. Businesses that fail to adapt will find their competitive advantage being hollowed out by both AI-native competitors and the established players who move quickly to embrace these new capabilities and position themselves at the new frontier.
Three essential capabilities
The talent and institutional knowledge needed to deliver effective AI transformation initiatives already exist inside most large organizations. What is missing is the set of capabilities needed to channel that talent so that the organization can navigate ongoing transformative change with confidence. Leadership development for the AI era must treat strategy, technology, culture, and ethics as an integrated discipline. Three capabilities are essential for bringing these disparate areas together: 1) an AI-specific leadership mindset that goes beyond technical literacy; 2) the philosophical proficiency to navigate questions that no algorithm can answer; and 3) repeatable management systems that make continuous innovation a permanent organizational competence rather than an occasional intervention. These capabilities matter not just for CEOs and leaders of technology-related functions, but for leaders in every function across the organization.
Pillar I – A leadership mindset geared specifically to AI transformation
In most companies, AI is still being treated as a technology problem. That mindset is the first thing that needs to change. When AI is viewed first and foremost as a new technology to be implemented, the natural tendency is to hand responsibility to the organization’s technology leadership function while the rest of the leadership team monitors progress from a distance. The result is that the effects on culture, workforce composition, decision-making, and organizational identity – dimensions of the business that AI will reshape profoundly – end up with no clear owner. A transformation-ready mindset treats AI not as a tool to be deployed but as a force that reconfigures how the organization works, and therefore as a core responsibility of every senior leader.
The persistence of the old approach can be detected in the mismatch between transformational ideals and traditional structures in many organizations. Even as CIOs describe their roles in increasingly strategic terms, the organizational and cultural dimensions of AI transformation remain without a clear owner. The 2026 AI & Data Leadership Executive Benchmark Survey found that 93% of Fortune 1000 data leaders identify culture and change management as the primary barrier to AI adoption, while just 7% point to technology. Yet the leadership roles charged with AI implementation remain overwhelmingly focused on technology. Until senior leaders grasp that AI transformation is something they must lead, the gap between investment and value delivery will persist.
A 2025 study conducted with engineers at a large software company showed the dynamics that persist in many businesses – even those that recognize the challenges involved in encouraging AI uptake. When the business tried to roll out a new coding assistant to its more than 28,000 engineers, it encountered an unexpected barrier. Even though the company invested significant resources in encouraging uptake of the new tool, the study found that those who adopted it were at risk of being perceived negatively. When participants were asked to rate the competence of the engineer who had produced a piece of code, the engineer was assessed as 9% less competent when the assessors were told that the code had been produced with the help of the AI assistant than when they were told that the same code was produced without assistance. A separate survey unsurprisingly found that engineers were wary about adopting the coding assistant precisely because they expected to be judged as less competent if they did so.
This study points toward the kind of disconnects that require more than good intentions on the part of the senior leadership team. Even when the company actively encouraged uptake, the fact that the tech rollout was not integrated with a broader culture change initiative meant that the new tool was treated as a threat in two different ways: assessors mistakenly viewed it as a source of inferior code, while engineers saw using the assistant as damaging to their reputation. Full and effective adoption of the new tool would require a cultural solution to both problems. And cultural solutions of this kind cannot be delegated to an organization’s tech leaders, because the signals that shape assessment norms and professional credibility are set at the very top.
Closing this gap involves developing a specific set of leadership competencies. Leaders must learn to manage hybrid workforces in which humans and AI capabilities are fully integrated. They need to understand that AI agents and personas increasingly carry specific behavioral traits, decision-making authorities, and interaction patterns that must be intentionally designed and governed. Finally, they must treat managing the human response to AI transformation as a fundamental leadership responsibility. This means being equipped to lead reskilling at scale – not as a training program to be delegated to HR, but as an organization-wide cultural challenge. None of these competencies is developed by studying AI technology alone. They are developed by learning to lead organizations that are defined by the relationships between AI and human contributions.

Pillar II – Philosophical proficiency
The ability to surface, examine, and defend the foundational assumptions that shape decisions has always been a valuable part of effective leadership. However, in most companies, this philosophical proficiency is developed and deployed in an informal way, with leaders operating on the basis of implicit assumptions and intuitive reasoning. Widespread AI adoption is now demanding that leaders be able to engage with philosophical ideas at the explicit level. Every AI tool a company adopts arrives with philosophical commitments built into it – assumptions about what counts as appropriate communication, what evidence should be privileged, and how conflicts should be resolved. These commitments are currently being made – consciously or unconsciously – by engineers and developers who may never encounter the organizations that use their products. They nevertheless impact how AI models work at scale and affect thousands of decisions every day. Leaders who cannot recognize and interrogate these embedded assumptions will find their organizations shaped by commitments they never examined and never chose.
What leaders require is not a philosophy degree but a working proficiency in three core philosophically grounded topics.
- The first is ontological, concerned with questions about what things are: What is a business? What is an employee? What is a customer? These are not semantic games – they shape product design, business models, and corporate identity. Consider the different ways in which Apple and Meta answer the question “What is customer data?” For Meta, customer data is a resource that is available to be extracted and monetized. For Apple, it is the digital embodiment of a relationship with the customer. A major part of Meta’s business model is built on selling access to the data the company extracts from its customers. Apple, meanwhile, treats privacy as “a fundamental human right” and largely avoids the commodification of the customer data it holds.
- The second category is epistemological, concerned with questions about what can be known and what counts as reliable evidence. Leaders make judgments about epistemic issues dozens of times a day, but they rarely do so with any kind of formalized standards in mind. Some businesses do take such standards very seriously, however. Toyota’s principle of Genchi Genbutsu (“go and see for yourself”) encodes a view about the priority of direct engagement with relevant information over data collected, digested, and packaged by others. Questions about how to apply epistemic standards are becoming increasingly pressing as AI models take over analytical and decision-making roles. These tools arrive with their own standards, chosen by engineers who have never worked with your business and built into the model from the outset. Leaders who cannot interrogate the assumptions on which an AI model works are essentially handing over the right to determine their standards of truth to third parties with no oversight.
- The third category is ethical, concerned with questions about what is right and what a company is willing to defend at cost. When the Pentagon demanded that AI company Anthropic sign contracts allowing “any lawful use” of the company’s models, Anthropic refused. A clear, pre-existing commitment to ethical red lines meant the company was unwilling to compromise over the use of its technology in autonomous weaponry or domestic surveillance. Anthropic stuck by its ethical principles even though the decision came with significant costs, including losing a large contract and being labelled a supply chain risk by the U.S. government. Genuine ethical clarity is both a moral necessity and a strategic one: leaders who have not done the hard work of determining where their lines fall risk making incoherent and counterproductive decisions when a crisis arrives.
Pillar III – Repeatable management systems
Most organizations approach AI implementation as a series of disconnected projects – a pilot here, an experiment there – without any systematic way to move from the identification of opportunities to operational deployment. The result is a predictable cycle: initial enthusiasm generates a flurry of proofs of concept, most of which never scale. Initiatives that do reach production often underperform because they were never evaluated for fit with the organization’s strategy, culture, processes, or existing technology. Meanwhile, projects with genuine potential to deliver value languish in backlogs because no mechanism exists to surface and prioritize them. Over time, this pattern erodes organizational confidence in AI itself. Teams that have watched three or four promising pilots stall become skeptical of the next one, and senior leaders who have approved budgets without seeing returns become reluctant to approve more. The irony is that the problem was never the technology or even the individual projects. It was the absence of any infrastructure for managing innovation as a continuous discipline.
What organizations need – and what only senior leaders can establish and maintain – is a repeatable portfolio management system that treats AI innovation as a structured pipeline. In such a system, ideas enter through a centralized intake process and are scored against objective criteria – strategic alignment, feasibility, risk, resource requirements – so that decisions about what to pursue are based on evidence rather than advocacy or enthusiasm. Projects that pass initial screening move on to detailed assessment, where they are evaluated for fit with the organization’s broader architecture: its purpose and strategy, its people and culture, its processes and governance, and its existing technology. Only those that demonstrate alignment across these dimensions advance into experimentation, and only those that survive rigorous testing move into production.
Stage gates at each transition prevent resources from being consumed by initiatives that lack viability, while regularly scheduled portfolio reviews ensure that the overall mix of projects remains balanced across time horizons, risk levels, and strategic objectives. Earlier projects are deliberately sequenced to build the data infrastructure, governance frameworks, and organizational capabilities that later, more ambitious initiatives will require.
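To make the intake-and-gate logic concrete, the following is a minimal sketch of a weighted scoring gate in Python. The criteria, weights, threshold, and initiative names are all illustrative assumptions, not a description of any real company’s system; an organization would calibrate each of these to its own strategy and risk appetite.

```python
from dataclasses import dataclass

# Hypothetical criteria and weights -- illustrative only. A real review board
# would define and calibrate its own.
WEIGHTS = {
    "strategic_alignment": 0.40,
    "feasibility": 0.25,
    "risk": 0.20,          # scored so that a HIGHER rating means LOWER risk
    "resource_fit": 0.15,
}

GATE_THRESHOLD = 3.5  # minimum weighted score (on a 1-5 scale) to pass a gate


@dataclass
class Initiative:
    name: str
    scores: dict  # criterion -> rating from 1 (poor) to 5 (strong)

    def weighted_score(self) -> float:
        """Combine the board's ratings into a single comparable score."""
        return sum(WEIGHTS[c] * self.scores[c] for c in WEIGHTS)


def run_gate(portfolio):
    """Advance initiatives that clear the threshold; park the rest.

    Returning both lists keeps parked ideas visible in the backlog
    instead of letting them disappear.
    """
    advanced, parked = [], []
    for item in sorted(portfolio, key=Initiative.weighted_score, reverse=True):
        (advanced if item.weighted_score() >= GATE_THRESHOLD else parked).append(item)
    return advanced, parked


# Two made-up proposals entering the intake process.
pilot_a = Initiative("claims triage assistant",
                     {"strategic_alignment": 5, "feasibility": 4,
                      "risk": 4, "resource_fit": 3})
pilot_b = Initiative("novelty chatbot",
                     {"strategic_alignment": 2, "feasibility": 3,
                      "risk": 2, "resource_fit": 4})

advanced, parked = run_gate([pilot_a, pilot_b])
```

The point of the sketch is the discipline, not the arithmetic: every idea passes through the same explicit criteria, and the threshold forces a recorded go/no-go decision at each stage rather than letting advocacy carry weak projects forward.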
Lloyds Banking Group offers a concrete example of this approach in action. The bank operates a cross-functional body – its AI and Ways of Working Control Tower (formerly known as the GenAI Control Tower) – that is responsible for evaluating and ranking AI initiatives against the organization’s strategic objectives and then directing resources to projects according to their priority. This repeatable management system includes a structured assessment process, with senior leadership retaining decision rights over portfolio balance and project progression. With the control tower as its management system, Lloyds deployed more than fifty generative AI solutions in 2025, with more than 200 use cases in the portfolio backlog, according to Dr. Rohit Dhawan, the bank’s head of AI. Lloyds expects to create £100 million in value in 2026 from these projects after delivering £50 million in value in 2025.
Portfolio management systems of this kind do not run themselves. They require senior leaders to actively manage and take responsibility for the pipeline, enforcing the necessary stage-gate discipline and making the difficult calls about which initiatives to stop. Without the mindset described in Pillar 1 and the philosophical clarity described in Pillar 2, even the best-designed system will default to the path of least resistance.
A new era of leadership
The three pillars described here – an AI-specific leadership mindset, philosophical proficiency, and repeatable management systems – are practical capabilities, not abstractions. But taken together, they amount to a redefinition of what senior leadership actually involves. For most of the last half-century, the dominant image of the executive has been that of the decision-maker: the person who synthesizes information, allocates resources, and selects among options that others have prepared. AI is reshaping the role. When algorithms can synthesize information faster than humans and execute effective decisions autonomously, the leader’s role shifts. It becomes less about making decisions and more about determining the terms on which decisions get made.




