AI clones mimic you, threatening identity. Laws aren’t keeping up; we need a veto and clear consent rules to protect your digital self.

KEY POINTS

  • AI clones mimic us without consent, disrupting identity and causing stress.
  • We need laws like Denmark’s to block unauthorized AI copies.
  • A digital likeness veto and consent rules can protect our digital self and story.

Imagine waking up to a video of yourself going viral online. It’s your face, your voice, your mannerisms. But the words? Nothing you’ve ever said.

This isn’t science fiction. It’s the AI Doppelganger Dilemma, a growing crisis that’s shaking the foundations of identity, trust, and autonomy in our digital age. As artificial intelligence advances, it can now replicate our voices, faces, behaviors, and even thought patterns with eerie precision. These digital “twins” can act in our likeness, speak on our behalf, and make decisions that mimic our style.

As long as these digital twins are acting for us and with our consent, they can be incredible productivity tools. But what happens when an artificial version of you exists that’s not under your control? And what does that mean for who you are?

The Psychological Toll of a Digital Double

We often think of identity as a fixed thing: the unique character made up of our values, quirks, and choices. But psychological research suggests that identity is really more like a story we tell ourselves. We build our sense of self from memories, actions, and the feeling that we are in control of our own narrative. Now imagine that you have an AI clone out there appearing in a commercial and endorsing a product you’d never touch. Or perhaps someone is using your digital twin to push political opinions that you find distasteful.

When a digital version of you acts independently, it can feel like a breach of your personality, as if it’s a violation of your very existence. I’ve spoken with people who have seen deepfakes of themselves online, and they describe the experience as a gut punch, as if they were watching a stranger wear their skin. This kind of misrepresentation and loss of control can threaten our sense of who we are and may even leave us feeling as if we lack agency in the world. After all, if someone else can take control of your image and narrative, then what is it that makes you you?

The Law’s Lag Behind Technology

While laws like the Take It Down Act provide protection from non-consensual sexual deepfakes, there are few legal tools to prevent other kinds of impersonation. In the U.S., some states have “right of publicity” laws designed to protect against unauthorized use of a person’s likeness, but these were created to protect celebrities, not everyday people. They are woefully outdated for an era in which AI can clone not just your face but your entire persona. The European Union’s General Data Protection Regulation (GDPR) offers stronger protections, but it doesn’t squarely address synthetic versions of a person built from their public posts or videos.

Take the case of a small-business owner I met last year. A deepfake video showed her “promoting” a shady investment scheme she’d never heard of. By the time she discovered the video existed, her clients were confused and her reputation had taken a hit. Her legal recourse? Minimal. All she could do was clean up the mess and hope it didn’t happen again.

Redefining Identity Theft

Traditional identity theft involves using stolen credit cards or personal details to create a “paper” identity. This is different—it’s the theft of your person. It involves losing control over what’s said or done in your name.

This kind of impersonation cuts deep. It’s not just about privacy; it’s about autonomy, the bedrock of mental health. We need to feel we own how the world sees us. When an AI doppelgänger disrupts that, it can leave us feeling powerless.

What We Can Do About It

This isn’t a problem we can ignore. Governments and businesses need to act now to ensure that individuals can maintain control of their digital representations. We need to:

  • Establish a Legal Right to Refuse: We need a “digital likeness veto,” a legal right to stop companies from building AI models of us, even if they are using public data. If someone wants to simulate you, they should need your sign-off first. Denmark has recently moved in this direction, with a proposed law that would give citizens copyright over their own likeness, providing a powerful protection against deepfakes. Other nations need to follow the same path.
  • Demand a Clear Consent Code of Conduct: Until the appropriate legislation is in place, responsible organizations must fill the gap. Some companies are already experimenting with “digital twin clauses” that spell out how replicas can be used. This is a step in the right direction, but more is needed. Businesses must commit to a code of conduct that requires explicit permission whenever AI is used to replicate someone’s identity, whether in advertising, research, virtual assistants, or entertainment.

Owning Your Identity in the AI Age

At its core, the AI Doppelganger Dilemma forces us to ask: If a machine can replicate your speech, likeness, and behavior patterns, what remains truly yours?

In a world in which your digital double could outlive you or act against you, protecting your identity isn’t just a technical issue. It’s about safeguarding your voice, your choices, your story, your self.

We need to fight for the right to define ourselves. If we lose the power to control how we’re seen, we risk losing the essence of who we are.


Original article @ Psychology Today.  
