How to build artificial intelligence that truly serves humanity: Design for emotion, not just efficiency.
KEY POINTS
- Empathy in AI design isn’t just ethical—it’s psychologically essential for human well-being.
- Human-centered AI design principles mirror core tenets of humanistic psychology.
- The future of mental health may depend on how empathetically we build our digital tools.
I’ve been thinking about artificial intelligence (AI) design principles a lot lately. Every week, another story breaks about AI gone wrong—facial recognition that misreads darker-skinned faces, hiring algorithms that screen out entire groups of candidates, chatbots that turn hostile or even try to blackmail users. And every time, the response is the same: “We need better data. More training. Smarter algorithms.”
But what if we’re missing the point entirely?
What if the real problem isn’t that our AI isn’t smart enough—it’s that it isn’t kind enough?
The Real Cost of the Emotional Gap
Here’s something that keeps me up at night: We’re building systems that make life-or-death decisions, and we’re designing them for efficiency and precision, not for the people on the receiving end.
Imagine a patient portal that delivers test results with all the warmth of a parking ticket: “Abnormal results detected. Schedule follow-up appointment.” That’s it. No context. No reassurance. Just anxiety-inducing bureaucracy at 2 a.m. when someone can’t sleep and decides to check their results.
Now imagine the same information delivered differently: “Your test results show some areas that need attention. I know this can feel worrying—that’s completely normal. Your provider will walk through exactly what this means and discuss the next steps with you.”
Same information. Completely different emotional impact.
The second version doesn’t require advanced AI or breakthrough algorithms. It requires that the people designing the system genuinely care about how another human being might feel.
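To make that concrete, here is a minimal sketch in Python of what the design difference can look like in practice. The portal, the message wording, and the function names are hypothetical, invented purely for illustration; the point is that the empathetic version is a writing and design decision, not an algorithmic breakthrough.

```python
# A hypothetical sketch: the same test result delivered two ways.
# Nothing here requires advanced AI; the difference is in the message design.

def blunt_message(result_status: str) -> str:
    """The 'parking ticket' version: terse and context-free."""
    return f"{result_status} results detected. Schedule follow-up appointment."

def empathetic_message(result_status: str, provider_name: str) -> str:
    """The human-centered version: same facts, plus context and reassurance."""
    if result_status.lower() == "abnormal":
        return (
            "Your test results show some areas that need attention. "
            "I know this can feel worrying; that's completely normal. "
            f"{provider_name} will walk through exactly what this means "
            "and discuss the next steps with you."
        )
    return "Your test results look as expected. No action is needed right now."

if __name__ == "__main__":
    print(blunt_message("Abnormal"))
    print(empathetic_message("Abnormal", "Dr. Rivera"))
```

Same data, same logic, two very different experiences for the person reading the screen at 2 a.m.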
What Empathy Actually Means (Beyond the Buzzwords)
Let’s be honest—“empathy” has become one of those words that gets thrown around in every corporate presentation, right alongside “synergy” and “disruption.” But strip away the consultant-speak, and empathy is actually pretty simple.
It’s asking yourself this: If I were the person on the receiving end of this system, how would I want to be treated?
When you’re looking for a job and get rejected, do you want a one-word email that says “No”? Or would you prefer something that actually acknowledges your humanity?
When a platform removes your post, do you want to feel like you’ve been processed by a robot, or like someone actually considered your perspective?
Empathy isn’t something that just happens. It takes deliberate effort to practice. You have to actively pause and ask yourself, “How would this feel if it were happening to me?” You have to seek out perspectives different from your own. You have to resist the temptation to assume your experience is universal.
The hardest part? Empathy often slows you down in the short term. It means taking time to understand context you might otherwise ignore. It means designing for edge cases that represent real people, even when those people aren’t your primary market. It means having uncomfortable conversations about whose voices aren’t being heard.
Why This Matters More Now Than Ever
The “move fast and break things” mentality made sense when we were building basic social media features. If you broke someone’s ability to poke their friends, the world didn’t end.
But we’re not building trivial features anymore. We’re building systems that decide who gets hired, who qualifies for loans, who gets flagged at airport security. When these systems “break things,” they break people’s lives.
For instance, MIT researchers found that commercial facial-analysis systems misclassified darker-skinned women as much as 35 percent of the time, but lighter-skinned men less than 1 percent of the time. Amazon discovered its hiring algorithm was systematically discriminating against women.
These aren’t just statistics. Behind every percentage point is a person who got stopped at security for no reason, or didn’t get called back for a job they were qualified for. The psychological damage adds up. Trust erodes. People start to see AI systems as working against them instead of for them.
And here’s the thing that really bothers me: Most of these problems are preventable. Not through better math, but through better listening.
Who Is Getting It Right (and Why It Matters)
Some organizations are figuring this out. Duolingo could have built a language app that just marked your answers wrong. Instead, they chose encouragement: “Almost! Try focusing on the pronunciation…” It’s a small thing, but it keeps people learning instead of quitting.
Spotify doesn’t just analyze what you listen to—they seem to understand why you’re listening. Their playlists feel personal, not algorithmic. Like someone who actually gets your taste in music made them for you.
These organizations aren’t doing anything technically revolutionary. They’re just remembering that behind every user account is an actual human being with feelings, frustrations, and hopes.
The Psychology of Feeling Heard
People form emotional relationships with technology whether we design for it or not. When your GPS says “recalculating” in that patient voice, you feel differently than when an error message barks at you in all caps.
When systems acknowledge uncertainty (“I think this might be what you’re looking for, but I’m not sure”), people trust them more than systems that pretend to be infallible. When platforms explain their decisions and provide ways to appeal them, users feel empowered instead of powerless.
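As one illustration of that idea, here is a small sketch, assuming an upstream model that returns an answer along with a confidence score between 0 and 1. The thresholds and the exact wording are assumptions made up for this example, not a prescribed standard; the point is that the system’s language can track how sure it actually is.

```python
# Hypothetical sketch: turning a raw confidence score into honest phrasing.
# Assumes some upstream model gives us an answer plus a 0-1 confidence value.

def phrase_answer(answer: str, confidence: float) -> str:
    """Hedge the response according to how sure the system actually is."""
    if confidence >= 0.9:
        return f"Here's what you're looking for: {answer}"
    if confidence >= 0.6:
        return f"I think this might be what you're looking for, but I'm not sure: {answer}"
    return (
        f"My best guess is: {answer}. "
        "If that doesn't look right, you can rephrase your question or ask a person."
    )

print(phrase_answer("Flight AB123 departs at 6:40 p.m.", 0.72))
```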
This isn’t just about being nice. People are more likely to use and recommend systems that make them feel understood—even when those systems make occasional mistakes.
3 Questions That Change Everything
I’ve started asking three questions whenever I design or evaluate an AI system:
- What does this person actually need right now? Not what our data says they should need, but what they’re probably feeling and hoping for.
- How will this affect their sense of dignity and control? Are we empowering them or making them feel like a case number in a queue?
- Who might we be harming accidentally? What communities or situations are we not considering because they weren’t in our training data?
Answering these questions doesn’t require a Ph.D.—just genuine curiosity about other people’s experiences.
The Path Forward
Look, I’m not suggesting we abandon efficiency or stop optimizing systems. I’m suggesting we optimize for the right things.
Instead of just measuring clicks and conversions, what if we measured whether people felt heard and respected? Instead of only testing for accuracy, what if we tested for psychological safety?
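Here is one sketch of what measuring the right things could look like: a hypothetical evaluation record that tracks post-interaction “felt heard” and “felt in control” survey ratings alongside the usual accuracy number. The field names, scales, and survey items are assumptions for illustration, not an established framework.

```python
# Hypothetical sketch: logging emotional-impact signals next to the usual metrics.
from dataclasses import dataclass

@dataclass
class InteractionMetrics:
    accuracy: float        # did the system get the task right? (0-1)
    felt_heard: int        # post-interaction survey: "I felt heard" (1-5)
    felt_in_control: int   # "I understood what happened and what to do next" (1-5)

def summarize(records: list[InteractionMetrics]) -> dict:
    """Report emotional-impact averages with the same weight as accuracy."""
    n = len(records)
    return {
        "accuracy": sum(r.accuracy for r in records) / n,
        "felt_heard": sum(r.felt_heard for r in records) / n,
        "felt_in_control": sum(r.felt_in_control for r in records) / n,
    }

print(summarize([InteractionMetrics(0.96, 4, 5), InteractionMetrics(0.91, 2, 3)]))
```

If the accuracy average looks great but the “felt heard” average is sinking, the system is succeeding at the task and failing the person.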
Organizations like Stanford’s Human-Centered AI Institute and the Partnership on AI are developing frameworks to make this practical. They’re showing that you don’t have to choose between powerful AI and human-centered AI.
Companies that lead with empathy won’t just build better products—they’ll gain a competitive edge by earning lasting trust in a skeptical age.
What We’re Really Building
Here’s what I’ve come to believe: We’re not just building artificial intelligence. We’re building the emotional infrastructure of the future.
Every algorithm we ship, every interface we design, every automated decision we deploy—it all shapes how people feel about living in a world where machines make choices that affect their lives.
We can build a future where AI systems treat people like problems to be solved, where interactions feel cold and transactional, and where people feel increasingly alienated from the technology that surrounds them.
Or we can build something different: AI that recognizes the full complexity of human experience. Systems that enhance our humanity instead of diminishing it. Technology that makes people feel more understood, not less.
The choice is ours.
References
Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research, 81, 77–91.
Dastin, J. (2018, October 10). Insight – Amazon scraps secret AI recruiting tool that showed bias against women. Reuters.
Original article @ Psychology Today.