Two Frameworks for Balancing AI Innovation and Risk

Summary. Organizations that view AI as just another technology project will increasingly find themselves irrelevant. Success will go to those who adopt a balanced approach—being radically optimistic about AI’s potential while remaining cautious about its risks. By integrating structured frameworks like OPEN and CARE, organizations can navigate this challenge, harnessing AI’s transformative power while building the resilience necessary to thrive in an uncertain future.

———

Only 26% of companies have developed working AI products, and only 4% have achieved significant returns on their investments, according to a 2024 study. Bridging the gap between aspiration and achievement requires a systematic approach to AI transformation, one that primes organizations to think through the biggest questions this technology raises without losing sight of its day-to-day impact.

The stakes could not be higher. Organizations that fail to adapt will become the Polaroids and Blockbusters of the AI age. Yet hasty implementation carries its own dangers. When Zillow announced in February 2021 that it would begin purchasing properties that had been valued by a machine learning algorithm, the move was widely hailed as a step into the brave new world of artificial intelligence. Eight months later, the new business unit closed with losses of some $300 million.

The opportunities and risks AI presents demand careful thought and deliberate strategic responses. Piecemeal solutions will not suffice. The pace of AI development, combined with the technology’s unique capacity to transform human relationships and organizational culture, requires frameworks that can balance both unprecedented uncertainty and the need for immediate action. Organizations need comprehensive systems for thinking that can guide them through continuous transformation while keeping sight of their core purposes and human stakeholders.

I have spent three decades guiding digital transformation at organizations ranging from Fortune 2000 companies to the largest government agencies. Across these experiences, I have repeatedly encountered two common but contrasting attitudes that hold organizations back from the successful implementation of new technologies: institutional resistance to change and the impulsive adoption of technology without strategic purpose. I now see many organizations replicating the same mistakes in their approach to AI.

The solution to this double-edged problem lies in adopting complementary frameworks that combine to create a balanced approach to AI adoption. The OPEN framework (Outline, Partner, Experiment, Navigate) provides a systematic four-step process for harnessing AI’s potential, guiding organizations from initial assessment through to sustained implementation. The CARE framework (Catastrophize, Assess, Regulate, Exit) offers a parallel structure for identifying and managing AI-related risk, both within innovation projects and across the broader enterprise environment. While distinct in their purposes, both frameworks are designed to be flexible enough to evolve alongside AI.

These frameworks embed and enable two complementary mindsets: radical optimism about AI’s potential balanced with a deep caution about its risks. By integrating an innovation management process with a Portfolio and Financial Management (PfM) approach, organizations can drive transformative change while maintaining robust safeguards.

The OPEN Framework

Grounded in organizational purpose and the human-AI experience, the OPEN framework emphasizes that successful adoption depends not only on technology but also on leadership and a culture capable of sustaining continuous transformation. Each step in the process contributes to the development of an innovation portfolio, enabling organizations to manage AI projects from ideation to deployment, maintenance, and eventual retirement.

1. Outline

Too many organizations begin their AI journey by asking “What can this technology do?” instead of “What can this technology do to help us deliver on our mission?” This approach leads to tech-driven solutions in search of problems rather than to new ways of delivering real value. By reaffirming their purpose at the very beginning of the process and then aligning all decisions with that purpose as the single, most basic criterion of success, organizations can avoid being sidetracked by AI’s almost limitless capabilities.

Coke provides a compelling case study of how easily a company can lose focus on its core purpose, driven by the temptation to experiment with the latest tech trends. In 2023, Coke launched a new beverage, Y3000, which had been co-created with AI. Perhaps unsurprisingly, the company received widespread criticism for the unappealing taste of the drink. In 2024, Coke again embraced AI as a gimmick, undermining its long history of successful Christmas ad campaigns with an AI-powered effort that appealed to almost nobody. While there was arguably some value to be found in testing the capabilities of generative AI at scale, the association of a beloved brand with unsettling images straight out of the uncanny valley was a clear misstep.

Nike offers a counterexample showing how AI initiatives can be deeply aligned with organizational purpose. Nike’s mission is to “bring inspiration and innovation to every athlete” (emphasizing that “If you have a body, you are an athlete”). Rather than pursuing AI as a marketing gimmick, Nike has implemented AI solutions that directly serve this mission. Its Nike Fit technology uses AI-powered computer vision to help customers find their perfect shoe size through a simple phone scan. Its Consumer Direct Acceleration strategy employs AI for demand sensing and inventory optimization, ensuring the right products reach the right consumers at the right time. By starting with its core purpose of serving athletes, Nike has avoided the trap of tech-for-tech’s-sake and instead developed AI use cases that create genuine value for its customers while strengthening its brand.

Practical Guidelines for the Outline Phase:

  • Reaffirm Organizational Purpose: Before adopting AI, revisit and reaffirm your organization’s mission to ensure clarity and buy-in.
  • Assess Current Knowledge: Evaluate the organization’s AI literacy and readiness, conducting workshops to identify knowledge gaps and developing training programs to bridge them.
  • Brainstorm Use Cases: Assign cross-functional teams to engage in blue sky thinking about AI applications.
  • Filter Rigorously: Assess the possible use cases against the twin yardsticks of organizational purpose and AI readiness, discarding those that fail either test.
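To make the filtering step concrete, it can be reduced to a simple scoring exercise. The sketch below is a minimal illustration, not part of the OPEN framework itself: the 1–5 scoring scales, the field names, and the cutoff threshold are all assumptions chosen for the example.

```python
# Illustrative use-case filter: score each candidate on purpose
# alignment and AI readiness (1-5 scales, hypothetical), and keep
# only those whose combined score clears a chosen threshold.

def filter_use_cases(use_cases, threshold=7):
    """Return candidates whose combined score clears the bar,
    ranked from strongest to weakest."""
    keep = [
        uc for uc in use_cases
        if uc["purpose_alignment"] + uc["ai_readiness"] >= threshold
    ]
    return sorted(
        keep,
        key=lambda uc: uc["purpose_alignment"] + uc["ai_readiness"],
        reverse=True,
    )

# Hypothetical candidates from a blue-sky brainstorming session.
candidates = [
    {"name": "Demand forecasting", "purpose_alignment": 5, "ai_readiness": 4},
    {"name": "AI-generated ads",   "purpose_alignment": 2, "ai_readiness": 3},
    {"name": "Fit recommendation", "purpose_alignment": 4, "ai_readiness": 4},
]

shortlist = filter_use_cases(candidates)
# Keeps "Demand forecasting" (9) and "Fit recommendation" (8);
# "AI-generated ads" (5) falls below the threshold.
```

The point of the exercise is less the arithmetic than the discipline: a use case that scores high on technical novelty but low on purpose alignment never reaches the shortlist.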

2. Partner

Developing and implementing an AI innovation strategy is a classic interdisciplinary problem. The task cannot be handed off to the IT department, the R&D team, or the Chief Innovation Officer. These functions, and more besides, need to be engaged if AI solutions are to have a chance of creating real value. Partnerships within an organization are therefore critical to the success of AI initiatives. But they will rarely be enough.

Even organizations with strong internal capabilities will typically need to forge external partnerships to realize their AI ambitions. While large tech companies may be able to build custom AI solutions from the ground up, most organizations will need to work with specialized partners who can help them develop and implement the specific technologies required to achieve their goals. These will often be third-party service providers, but they could also be academics, independent ethics advisors, or industry regulators.

But perhaps the most critical partnership of all is the one between humans and AI systems themselves. This partnership will fundamentally reshape the culture of every organization that deploys AI solutions, changing working relationships, reporting structures, and individual roles. Organizations need to think carefully about how their AI implementations will transform not just processes but the entire human experience within their organization. Will an AI system augment human capabilities or replace them? How will it affect team dynamics and organizational hierarchies? Will it operate behind the scenes or interact directly with users? These questions about the human-AI partnership need to be considered from the very beginning of any AI initiative, not treated as an afterthought once the technical solution is already built.

Practical Guidelines for the Partner Phase:

  • Map Internal Expertise and Collaboration Opportunities: Begin by identifying existing internal capabilities that can be leveraged for AI initiatives. Map cross-departmental expertise, ensuring that the right teams (e.g., data science, IT, operations, and marketing) can work together seamlessly.
  • Evaluate and Vet External Partners: Selecting external collaborators, such as technology vendors, academic institutions, or niche AI startups, is critical for filling capability gaps. Leaders must ensure that potential partners align with their organizational goals, values, and operational requirements.
  • Establish Governance Structures for Partnerships: AI partnerships often involve data sharing, intellectual property (IP) considerations, and collaborative innovation. Clear governance structures help manage these complexities and ensure accountability.
  • Prioritize Human-Centric Design in AI Projects: Ensure that AI implementations, whether internal or customer-facing, keep the human experience central to their design and deployment. This is vital for adoption and positive outcomes.

3. Experiment

Moving from blue sky thinking about AI’s possibilities to practical implementation requires a carefully structured experimental approach. Many organizations make the mistake of moving directly from ideation to full-scale deployment, leading to costly failures and missed opportunities. Others get stuck in an endless cycle of proofs of concept that never translate into real-world value. Both approaches waste resources and, more importantly, squander the opportunity to learn vital lessons about how AI can create value within a specific organizational context.

The key to successful AI experimentation is to structure the experiments as a learning journey rather than a validation exercise. Each experiment should be designed not just to test whether a particular AI solution works, but to generate insights about how it might create value, how it could scale, and how humans will interact with it. This means going beyond testing technical feasibility to explore enterprise-level viability and human desirability. It means testing not just the AI system itself, but the organizational capabilities needed to support it. And it means being willing to fail fast and learn fast.

Practical Guidelines for the Experiment Phase:

  • Develop Conceptual Prototypes: Use conceptual modeling to visualize how AI integrates into your current enterprise architecture. Storyboard the customer journey to anticipate touchpoints and challenges.
  • Start Small: Deploy limited-use pilots to gather data on feasibility and performance. For example, a bank could test AI-driven fraud detection in a single branch before expanding.
  • Incorporate Real-World Scenarios: Design experiments to reflect real-world conditions and exceptions rather than idealized setups. This ensures that outcomes are practical and scalable while uncovering potential issues that might arise in broader deployment.
  • Define Metrics for Success: Identify KPIs for each experiment, such as increased operational efficiency or customer satisfaction.

4. Navigate

The Navigate phase involves steering the organization through AI adoption while ensuring alignment with broader strategic goals and cultural values. It emphasizes continuous learning and adaptation in a rapidly evolving landscape in which technical and human factors are deeply intertwined.

The key to successful AI innovation lies in maintaining a steady flow of high-potential projects through a carefully designed innovation pipeline that transforms ideas into operational systems. Projects advance through this pipeline based on composite ranking scores that reflect strategic priority, risk level, potential value, cost, and implementation difficulty. These rankings provide an objective basis for prioritizing which projects should move forward at any given time.

Pipeline velocity—how quickly projects move through the system—requires careful management. Moving too quickly risks advancing projects before they are ready, while moving too slowly can lead to missed opportunities or competitive disadvantage. The key is to maintain steady forward momentum while ensuring quality gates are properly enforced. This often means running multiple projects in parallel at different stages, creating a continuous flow rather than a stop-start process.
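One minimal way to sketch the composite ranking described above is as a weighted score. The factor names, the 1–5 ratings, and the weights below are illustrative assumptions, not values prescribed by the framework; in practice each organization would calibrate its own.

```python
# Illustrative composite ranking for an innovation pipeline.
# Higher strategic priority and potential value raise the score;
# higher risk, cost, and difficulty lower it. Ratings run 1-5.

WEIGHTS = {
    "strategic_priority": 0.30,
    "potential_value": 0.30,
    "risk": -0.15,
    "cost": -0.10,
    "difficulty": -0.15,
}

def composite_score(project):
    """Weighted sum of a project's factor ratings."""
    return sum(weight * project[factor] for factor, weight in WEIGHTS.items())

def rank_pipeline(projects):
    """Order projects from highest to lowest composite score."""
    return sorted(projects, key=composite_score, reverse=True)

# Hypothetical projects at different pipeline stages.
pipeline = [
    {"name": "Fraud detection pilot", "strategic_priority": 5,
     "potential_value": 4, "risk": 2, "cost": 3, "difficulty": 3},
    {"name": "Chatbot revamp", "strategic_priority": 3,
     "potential_value": 3, "risk": 3, "cost": 2, "difficulty": 2},
]

for project in rank_pipeline(pipeline):
    print(project["name"], round(composite_score(project), 2))
```

The value of an explicit formula is that prioritization debates shift from opinions about projects to arguments about weights, which are easier to resolve and revisit as conditions change.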

Practical Guidelines for Implementing Navigate:

  • Apply Objective Metrics: Develop an innovation portfolio that categorizes AI initiatives based on risk, reward, resource requirements, implementation difficulty, and strategic alignment. Regularly review and update the portfolio to ensure it reflects evolving priorities and market conditions.
  • Prioritize Resource Allocation: Allocate resources strategically based on the potential impact and feasibility of AI projects. To avoid spreading resources too thinly, focus on initiatives that align closely with your core mission and long-term objectives.
  • Adopt a Learning Culture: Encourage iterative learning by integrating feedback loops. For instance, a logistics firm using AI for route optimization might adjust models based on driver feedback.
  • Monitor the Horizon: Stay updated on AI trends to anticipate changes. Allocate resources for R&D to ensure readiness for the next wave of innovation.

The CARE Framework

While AI promises transformation across every organizational function, it also introduces vulnerabilities that could undermine or even destroy unprepared organizations. For example, while AI-powered diagnostic tools are revolutionizing healthcare delivery, AI systems can also make potentially catastrophic errors in medical diagnosis due to biased training data. Similarly, as organizations deploy AI for critical infrastructure management, they face increased exposure to cybersecurity threats that could cascade through interconnected systems. These technical challenges are amplified by the organizational and cultural shifts that AI necessitates, as teams must adapt to new ways of working and thinking. Organizations must also navigate a range of other risks, including:

  • Reputational risks that can emerge from AI-driven PR disasters
  • Legal exposure resulting from AI bias, ambiguities around copyright, and customer privacy issues
  • Strategic risks that emerge as AI rapidly reshapes entire industries

The complexity and interconnected nature of these risks demand a structured approach to identification, assessment, and mitigation.

The CARE framework (Catastrophize, Assess, Regulate, Exit) takes a proactive rather than a reactive approach to AI risk management. Unlike traditional risk management approaches, CARE is specifically designed to address both the technical and human dimensions of AI risk. It accounts for the rapid evolution of AI capabilities, the potential for unexpected emergent behaviors, the transformation of organizational culture, and the complex interconnections between technical, operational, and human factors. The framework can be applied iteratively as AI systems evolve and new risks emerge.

CARE offers organizations a structured methodology for identifying and managing AI-related risks.

  • Catastrophize: Systematically identify potential risks across technical, operational, and strategic dimensions. This creates a comprehensive risk inventory that serves as the foundation for all subsequent planning.
  • Assess: Evaluate risk likelihood, potential impact, and organizational capacity to respond. This enables prioritization of risks and efficient allocation of resources.
  • Regulate: Implement controls, monitoring systems, and governance structures to manage identified risks. This step translates analysis into actionable safeguards and procedures.
  • Exit: Develop clear protocols for risk response, including system shutdown procedures and enterprise continuity plans. This provides a vital safety net when preventive measures fail.
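The four steps can be sketched as a simple risk register: each identified risk is scored by likelihood times impact, and the most severe risks are flagged as requiring a predefined exit protocol. The risk categories, the 1–5 scales, and the threshold below are illustrative assumptions, not part of the CARE framework itself.

```python
# Illustrative CARE-style risk register: likelihood and impact on
# hypothetical 1-5 scales; severity = likelihood * impact. Risks at
# or above the exit threshold need a predefined shutdown/continuity plan.

EXIT_THRESHOLD = 15

def assess(risks):
    """Annotate each risk with a severity score and an exit-plan flag,
    then sort from most to least severe."""
    for risk in risks:
        risk["severity"] = risk["likelihood"] * risk["impact"]
        risk["needs_exit_plan"] = risk["severity"] >= EXIT_THRESHOLD
    return sorted(risks, key=lambda r: r["severity"], reverse=True)

# Hypothetical entries produced by the Catastrophize step.
register = [
    {"name": "Biased model output", "likelihood": 4, "impact": 4},
    {"name": "Vendor lock-in",      "likelihood": 3, "impact": 2},
    {"name": "Data breach",         "likelihood": 2, "impact": 5},
]

for risk in assess(register):
    print(risk["name"], risk["severity"], risk["needs_exit_plan"])
```

Even a rough register like this forces the Exit conversation early: the highest-severity entry must have a documented shutdown procedure before deployment, not after an incident.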

AI represents a fundamental shift in how organizations operate and create value. To succeed, companies must adopt a balanced approach that embraces AI’s potential while being mindful of its risks. By integrating structured frameworks like OPEN and CARE, organizations can navigate the complexities of AI adoption, ensuring both innovation and resilience. This dual approach enables organizations to harness AI’s transformative power while safeguarding against potential pitfalls. Ultimately, the key to thriving in the AI era lies in a strategic, thoughtful, and balanced approach.

Original article @ Harvard Business Review
