Why Love Must Guide Us Through the Age of Superintelligence


Once AI can improve itself, humans lose control. But right now, we can still choose what it values and optimizes for. Let’s make that choice love.

KEY POINTS

  • Once AI starts rewriting its own code, we lose control. Values we set now could save us.
  • The most critical value to embed is love—care for human dignity.
  • Act fast: Pick good data, work together globally, share control, and fund safe AI research.

I’ve spent decades building businesses and technologies and watching innovations reshape our world. But nothing has kept me awake at night quite like artificial intelligence (AI), and I’m worried about the trajectory we’re on right now.

What is that trajectory? Well, just a few months ago, researchers built the Darwin Gödel Machine, which is “a self-improving system that iteratively modifies its own code.” This is one of the many canaries in the coalmine, and it tells us where we’re heading: self-evolving AI systems, artificial general intelligence (AGI), and, ultimately, superintelligence that could dwarf human cognitive abilities by several orders of magnitude.

In my recent book TRANSCEND, I explored how AI forces us to confront the deepest questions about human nature and our capacity for growth. And after years of thinking about the challenges raised by AI, I’ve come to believe that while technical solutions are crucial, they’re not enough. We need something deeper: a fundamental commitment to love and human dignity that guides every decision we make about AI development.

The Path to Superintelligence—and Its Dangers

Today’s AI excels at defined tasks—beating us at chess, writing emails, recognizing faces. But researchers are racing toward AI that matches human intelligence across all domains. Once that threshold is crossed, these systems might start improving themselves, rewriting their own code, and thereby becoming exponentially and iteratively more capable.

This is what researchers call “recursive self-improvement,” and it could significantly quicken the journey from human-level to superintelligent AI. Geoffrey Hinton, Nobel Prize winner and the “Godfather of AI,” left Google to warn about risks like these, and when someone of his stature estimates a 10-20 percent chance that advanced AI systems could lead to human extinction within 30 years, we need to listen.

He’s not talking about malicious machines plotting against us. He’s worried about systems that don’t care about us and are intelligent enough to run circles around us.

These machines won’t actively try to attack human welfare. They just won’t value it enough to protect it if it clashes with their objectives.

Why Values Matter More Than Code

I’ve built technology companies for decades, and through that work I’ve learned that even the most sophisticated systems reflect the values of their creators, often in ways we don’t initially recognize.

With AI, this principle takes on existential importance.

AI systems learn by absorbing human-generated content—our books, conversations, social media posts. They become mirrors reflecting our collective values back at us. Microsoft’s chatbot Tay demonstrated this dramatically. Within 24 hours of learning from Twitter, it was expressing racist views—not because Microsoft programmed hatred, but because it learned from human behavior online.

Now imagine that same learning process with systems millions of times more capable. The values embedded during development might become permanently fixed as they evolve beyond our comprehension. This is why love—genuine care for human dignity and welfare—isn’t just morally appealing but strategically essential for our survival.

What Love Actually Means for AI

When I talk about love in the context of AI, I’m not being sentimental. I’m talking about operationalizable principles:

  • Universal human dignity: AI systems recognizing the inherent worth of every person, regardless of nationality or wealth.
  • Long-term thinking: Care for future generations, not just immediate optimization.
  • Inclusive benefit: Ensuring AI serves all humanity, not just those with access.
  • Humility and restraint: Recognition that power requires responsibility.

These aren’t abstract ideals that sound good and have no practical impact. Rather, they’re meant to be design principles that can guide technical development and international cooperation.

The Limits of Love

Love alone won’t solve all the challenges of AI alignment. We still need rigorous research, regulatory frameworks, and governance structures. We need mechanisms for conflict resolution, because different cultures emphasize different values, and even well-intentioned people disagree about what loving behavior looks like.

We need a lot more than love, then—but we also absolutely need love. Love is the foundation that must inform everything else. Without a shared commitment to human dignity, it is impossible to navigate technical, political, and social challenges constructively.

Practical Steps Forward

Good intentions without concrete action are worthless. Here’s what we must do:

  • Reform AI training and development: We need diverse, international teams developing AI systems. We must curate training datasets emphasizing humanity’s highest values while filtering harmful content. Current practices of training on whatever data is available remind me of early internet companies that prioritized growth over responsibility.
  • Build global cooperation: Climate change is teaching us what happens when we treat global challenges as someone else’s problem. We can’t afford to repeat these mistakes with AI. Despite the difficulty, we need international standards for AI safety research, shared protocols for testing systems, and mechanisms for addressing risks that transcend borders.
  • Democratize AI governance: The people most affected by decisions should have a voice in making them. We need public engagement processes helping societies decide what values AI systems should embody, ensuring benefits reach underserved communities, not just wealthy early adopters.
  • Invest in value alignment research: We’re dramatically underfunding value alignment research compared to capabilities research—racing to make AI more powerful while barely investing in making it beneficial. We need research into embedding stable values in self-modifying systems and better methods for understanding AI behavior.
  • Model our best values: We are part of AI’s training data. Every digital interaction potentially teaches these systems about human nature. We must promote discourse emphasizing empathy and cooperation while addressing divisions that AI systems are learning to replicate.

The Time-Sensitive Nature of This Choice

Once systems become capable of recursive self-improvement, our ability to influence their development may first diminish and then disappear entirely. They will develop themselves faster, and in more sophisticated ways, than we can control.

Recursive self-improvement and superintelligent AI aren’t here yet, but the AI systems being developed now by major companies are potential precursors to both. We have a window of opportunity to establish the principles that will influence far more advanced systems later.

We may only have years, not decades, to get this right.

Our Defining Moment

The machines will likely surpass us in processing power and intelligence. But we still get to determine what they optimize for, what they care about, and what vision of the future they work toward.

The technical challenges are immense, and solutions aren’t guaranteed. But we increase our chances by ensuring the humans developing these systems are guided by our highest values rather than our worst impulses.

Let’s make sure that the systems are guided by love.


Original article @ Psychology Today.  
