Algorithms and the Erosion of Humanity

Algorithms mirror flaws and fuel mistrust. Surveillance erodes privacy. Mindful choices, collective action—laws, literacy—can reclaim humanity.

KEY POINTS

  • Algorithms echo us. To fix them, we must evolve—starting with how we think and act.
  • Pause before reacting. Mindful choices turn digital fights into real dialogue.
  • Curate your feeds with care. Ditch outrage, seek voices that spark clarity.

Do you remember the early days of social media? The promise of connection, of democratic empowerment, of barriers crumbling and gates opening? In those heady days, the co-founder of Twitter said that “Twitter was a triumph of humanity, not of tech,” and rather than laughing, everyone clapped.

Today’s reality has turned out a bit differently. Algorithms are fueling mistrust, fracturing society through surveillance and division, and eroding the foundations of authentic human connection. We find ourselves increasingly isolated despite being more “connected” than ever.

This outcome is not an inevitable byproduct of progress. It is a consequence of human choices—and, crucially, a reflection of who we are. And it is possible to reverse course, but it will take conscious effort to reshape both these systems and ourselves.

The Trust Crisis

Media platforms care more about clicks than the truth. MIT research shows false information spreads significantly faster than truth online, and the Facebook Papers showed that anger generated more engagement than understanding or compassion.

The result is declining trust—as the 2025 Edelman Trust Barometer reports, global trust in media sources of all kinds, including social media, is declining. And the result of that is a fractured public square where agreement on a shared reality is elusive.

As I have argued both here and in my recent book Transcend: Unlocking Humanity in the Age of AI, algorithms don’t actually create these problems—they amplify them. Artificial intelligence (AI) systems are trained on human behavior and human culture, learning from what we say, do, and produce. In essence, algorithms hold up a mirror to humanity, reflecting back both our finest qualities and our darkest impulses.

When we see division, mistrust, and outrage dominating our feeds, we’re not just witnessing technological failure—we’re confronting our own nature. The algorithm didn’t invent our tendency to pay more attention to threats than to good news, or our inclination to seek information that confirms what we already believe. It simply learned these patterns from us and then amplified them at an unprecedented scale.

This mirror effect is both sobering and empowering. It’s sobering because it forces us to acknowledge that the problem isn’t just “out there” in the technology—it’s also within us. But it’s empowering because it means we have agency. If algorithms reflect what we are, then by changing ourselves, we can change them.

The Fracturing of Society

When we can’t straightforwardly believe what is reported, we no longer have a common foundation of facts on which we can agree. And without that common foundation, dialogue becomes dispute, and conversation turns into conflict. Political discourse on platforms like X often spirals into polarized shouting matches, as algorithms amplify divisive voices while marginalizing moderate ones.

The assault on our social fabric extends beyond information manipulation to the erosion of privacy. Research shows 81 percent of Americans believe the risks of data collection outweigh its benefits, yet we continue to feed these systems with every click, scroll, and pause. This isn’t just an individual concern—when nothing remains truly private, authentic social relationships become impossible.

The constant threat of surveillance creates a chilling effect on genuine expression. We self-censor in conversations, knowing our words might be captured and shared. We become performative rather than vulnerable, guarded rather than open. When people know they’re being watched—by algorithms, or by the prospect of viral exposure—they start policing their own behavior and others’, weakening the diversity of viewpoints essential for a healthy democracy.

The recent Coldplay concert incident exposes this cruel reality: flawed judgment and questionable behavior, transformed into viral spectacle. Algorithms don’t distinguish between newsworthy events and personal humiliation; they amplify whatever maximizes clicks, leaving individuals defenseless against viral shaming. Without spaces for true privacy, we lose the foundation that allows deep human connection to exist.

Reclaiming Our Humanity

We have been complicit in creating this world of misinformation, mistrust, division, and surveillance. But in this very fact lies the possibility of salvation—what we have helped create, we can also help change.

Like everything that is worth doing in life, it cannot be done alone. We will need a mixture of individual awareness and collective action if we are to push back against algorithmic dystopia.

Collectively, we need robust privacy laws, investment in ethical AI, and widespread digital literacy programs. A striking example of governmental action here is Finland’s long-running media literacy education program, which aims to foster critical thinking in the consumption of media, a skill desperately needed in a world awash with misinformation.

Or to take another example, Danish courts ruled in 2024 that individuals own the copyright to their own faces, establishing that using someone’s image without consent—even in public spaces—constitutes copyright infringement. This landmark decision recognizes biometric data as personal property, giving citizens legal recourse against unauthorized viral exposure.

We also need to change at the individual level, and this requires more than good intentions—it demands specific practices that rewire our relationship with digital stimuli.

  • Start with your emotional responses. Before sharing or reacting to content, pause and ask: “What am I feeling right now? Anger? Fear? Moral outrage?” These emotions aren’t wrong, but they’re often the very feelings algorithms exploit to drive engagement. By recognizing them, you begin to reclaim choice in your responses.
  • Practice what mindfulness traditions call “beginner’s mind” when encountering opposing viewpoints. Instead of immediately judging or dismissing, approach disagreement with curiosity: “What might I not understand about this perspective?” This single shift can transform algorithmic conflict into genuine dialogue.
  • Curate your digital environment with intention. The information we consume shapes how we think, feel, and lead. Be deliberate. Unfollow sources that thrive on outrage and division. Replace them with voices that challenge you constructively and expand your perspective. This isn’t about escaping discomfort—it’s about cultivating clarity in a world designed for distraction.

The Choice Is Ours

What we’re witnessing isn’t technological inevitability—it’s the consequence of countless human choices, and it’s still possible for us to change the outcome by changing our choices. The age of algorithms doesn’t have to erode our humanity.

We can and we must take back control. This requires dual action: reshaping algorithms at the collective level through political and legal intervention, while simultaneously rewiring our own responses at the individual level through mindful practices.

And when our children and grandchildren want to know what we did, the mirror of technology will show them what choices we made.

Let’s hope we make the right ones.

[Photo: HadK / Adobe Stock]

Original article @ Psychology Today.  
