Related
- Types of Intellectual Property
- IP Proponents Do Not Even Know The Difference Between Patent, Copyright, Trademark …
- right to publicity/invasion of privacy
- The Rise Of A New Intellectual Property Category, Ripe For Trolling: Publicity Rights
- The Problem with Intellectual Property, n.9
As a friend told me, “I saw this article, and thought you might either be amused or horrified as congress attempts to use copyright law to solve problems caused by … copyright law.”
Trey Popp, Who Will Own Your Digital Twin?, The Pennsylvania Gazette (24 Dec 2025)
Who Will Own Your Digital Twin?

Law professor Jennifer Rothman is an expert in “the ways intellectual property law is employed to turn people into a form of property.” As we enter an era of deepfake videos, voice clones, and digital replicas of human beings, she worries that the United States Congress is on the cusp of a horrible mistake.
By Trey Popp

There were so many things I wanted to ask Jennifer Rothman, but first she had some questions for me. The Nicholas F. Gallicchio Professor of Law is an international authority on the right of publicity, a legal framework designed to protect individuals against the commercial misappropriation of their name, likeness, or other recognizable aspects of their persona. The word publicity can give a misleading impression, because the right in question is a bulwark of personal privacy in the United States. Along with copyright law, it offers a primary means of defense against the exploitation or abuse of someone’s personal identity. And that was why Rothman wanted answers from me before our Google Meet video call could really get going.
As required by Pennsylvania law, I had started our conversation by requesting her permission to record it. I do this every time I want to tape a call, and 95 percent of the time the answer is a perfunctory yes. Not Rothman. “Is the recording for your personal use?” she asked. Well, I planned to use an AI voice transcriber tool to create a Q&A transcript, I said. “I’m not saying it would be secret,” she replied. “But you’re not going to post it anywhere?” No, I assured her—the actual recording would rest only with me.
Everyone should ask that question, but almost no one does. “Given my background in filmmaking and the film industry,” Rothman wrote to me later, “I have been aware for decades of how footage can be reused and misleadingly edited. Given my focus on intellectual property laws, I have also long been attentive to how someone might use my copyrighted material or use my name or likeness.” But in the last several years, technological advances in AI-generated voice clones and deepfake videos have upped the stakes. “Everyone should be paying greater attention to what steps they can take to limit exposure,” Rothman said. “I was recently advised, for example, to remove outgoing voicemail messages that use your own voice.”
Rothman has lately turned her scholarly attention to “the ways intellectual property law is employed to turn people into a form of property.” In November 2024, she delivered a prestigious lecture hosted by the Copyright Society titled Copyrighting People. “Many of you may be on the verge of jumping out of your seats to proclaim, ‘You can’t copyright people!’” she began. “And at one level this is true—people, in and of ourselves, as human beings, are not copyrightable subject matter. But the interplay between copyright law and people is far more complicated than this truism reveals.” As AI tools supercharge the creation of deepfake imagery and audio, laws addressing digital replicas of human beings are on a “collision course with publicity and privacy rights.”
A seminal skirmish on this emerging battlefront took place in April 2023, when a song titled "Heart on My Sleeve" went viral on streaming services including Spotify, Apple Music, and YouTube. Featuring vocals that sounded just like the megastar singers Drake and the Weeknd, the track had actually been created with AI tools by a TikTok user known as Ghostwriter977. "I was a ghostwriter for years and got paid close to nothing just for major labels to profit," this person (or entity) proclaimed on the social media platform X, later declaring themselves "open for business" to create AI records for the likes of rappers Travis Scott and 21 Savage, or any other artist interested in earning royalties "without lifting a finger."
Universal Music Group swung its lawyers into action, and recording industry lobbyists descended on Capitol Hill, where two bills were soon introduced in Congress. In February 2024 Rothman testified before the US House of Representatives Subcommittee on Courts, Intellectual Property, and the Internet, where she took aim at both bills for failing to honor what she considered the existential imperative in this dawning era of digital deepfakes. "No one should own another person," she emphasized, using boldface in the written copy of her remarks. "Unfortunately, each of the two draft bills to address the problems of AI and performance rights essentially do exactly this—they allow another person, or most likely a company, to own or control another person's name, voice, and likeness forever and in any context." Permitting such ownership "in perpetuity," she added, "violates our fundamental and constitutional right to liberty and should be prohibited."
Rothman has continued to be outspoken on this issue. This past August she authored a commentary in The Regulatory Review arguing that a revised draft of Congress’s NO FAKES Act retains some of the most “chilling” implications of the original version. “A person’s replica could appear,” she observed, “in pornographic contexts and doing things that the person had no real awareness were authorized”—all with the blessing of a legal licensing regime. “Such an outcome would work against the bill’s stated objectives of protecting individuals from being exploited by AI technology and would worsen the deception of the public.”
In November she talked with me about the perilous waters our society is entering and how the law can best help us navigate them.
In September OpenAI released Sora 2, a text-to-video model and social media app that allows users to upload their own likeness and create AI-generated videos incorporating their own face and voice—and grant other users the ability to create videos featuring them. So, have you uploaded your face yet?
Definitely not.
Why not? OpenAI declared that users will have “end-to-end” control over these synthetic likenesses—including the ability to view, remove, or revoke access to any drafts of videos that contain them. What’s to worry about?
I don’t have the highest confidence that these will be able to be locked down. One of the videos that I’ve shown in recent lectures is a Sora 2 creation where the actress Jenna Ortega is replicated as her Wednesday character in conversation with an animated character from Family Guy, with a pretty close voice rendition of her in addition to her likeness. I’m sure she didn’t give permission for that. And then it’s posted by someone else who then could create their own. So I wouldn’t personally upload myself.
I have it on good authority that there are Philadelphia area students who have surreptitiously uploaded pictures of their teachers to Sora 2 to create mocking videos that the teachers would not like at all, if they discovered them. What legal recourse does such a teacher have? And would it only be against a minor child, or would there be an actionable claim against OpenAI itself?
Good questions. There are a variety of laws, and they differ from state to state. Pennsylvania has a statutory right of publicity, which is a right that limits unauthorized uses of a person's identity. It may be somewhat limited to people with commercial value, or to uses in a commercial context. But Pennsylvania also has what's called a common law right, which is not as bounded as the statute and might well apply here: We have an unauthorized use of a person's identity—maybe their name, likeness, or voice—and it's for the defendant's advantage. It doesn't have to be a commercial advantage; the student could be getting, you know, "cred," or followers online. So there might be an actionable claim there, and it's not a separate law if a minor does it. Obviously, if it had sexual content, there are whole other laws that would also apply. It also could rise to the level of intentional infliction of emotional distress. It could potentially be defamatory. So there are a lot of laws that could potentially apply.
The more difficult question is: How do you get it taken down? Are the teachers going to actually sue the students, and go through litigation, and hire a lawyer? There’s some friction in the process. So it’s not just about law.
Regarding OpenAI and its liability: OpenAI is not hosting things—it is helping people create things. So their argument is: We're not the bad actor here; somebody else is. That's their defense in defamation claims. It's early days for this litigation. There are some interesting negligence claims against OpenAI for ChatGPT creating defamatory speech and even fake documents about people—whether OpenAI can be liable for that, whether we count them as a speaker in that instance, and whether they could have any knowledge that their platform created something that was defamatory. There may also be negligence claims that the whole system is negligently designed, in that it fails to protect against defamation and against the use of people's identities without permission.
When Sora 2 went live, OpenAI announced that users would be able to create videos featuring copyrighted characters unless rights holders explicitly opted out. Three days later, the company backtracked, announcing an opt-in model in which rights holders would have to grant permission before characters could be used in these videos—which sounds similar to how they treat ordinary users who upload their faces. But even after the switch to an opt-in framework, some Sora 2 users were nevertheless able to prompt the tool into reproducing copyrighted characters on which it had been trained—like the Wednesday example you mentioned. What's going on here? Is there a single legal framework that's well suited to address this bundle of issues, or does it really depend on the circumstance—like whether we're talking about a commercially valuable persona versus an ordinary person?
Some of the laws appear to limit claims to those who are commercially valuable—or limit them to claims involving uses in merchandise, advertising, or products. But in most jurisdictions, there are other laws that would apply. It's obviously easier to make a claim if you're a famous person who regularly licenses your identity for a lot of money. But that's not necessarily because of the law so much as economics. It's very worthwhile for you to pay a lawyer to sue—and you'll recover. Whereas many of these laws have no statutory damages, and even the ones that do may not have enough to make it worthwhile to bring claims.
Then, of course, you have the whack-a-mole problem: one person posts it, then someone reposts it. So in a scenario where you have students who are putting up mocking videos of their teachers, the teacher may even succeed in getting it down from one location—but then it will appear in a whole bunch of other locations. So then you have to have partnerships with the platforms, who may not want to take things down, or may not have an automated system. There’s also some uncertainty in the law about whether there’s liability for those third-party posts on various platforms.
The line between likeness and character can be murky. James Bond is a character embodied by particular actors. If the rights holder to the Bond franchise wants to permit AI videos modeled on the character, but the Bond-portraying actor Daniel Craig doesn’t, who gets to decide?
That's a very complicated issue, and it's actually been litigated not in the context of AI, but in the context of other uses of actors' identities. There's a conflict between the state publicity rights that an actor has to control their identity, and the copyright holder's right to reuse the work it owns. One of the most famous cases involves the actors from the Cheers sitcom. George Wendt and John Ratzenberger sued Paramount and the airport company Host International, which had set up Cheers bars in airports featuring animatronic robots of the characters Norm and Cliff. The actors argued that these robots were based on them, even though they were manifestations of characters that could be owned and licensed by the copyright holders of Cheers.
The district court said these robots—which were initially called Norm and Cliff but then changed to Hank and Bob—looked nothing like George Wendt and John Ratzenberger. I actually have a picture of them in my book, and they really do not look like the actors—but they evoke the characters. So the audience would be like, That reminds me of Norm and Cliff, and then they might conjure up the image of the actors. But the Ninth Circuit reversed the district court and said, Whether the robots look like them or not is a matter for a jury to decide—you can’t decide as a matter of law. It said that the actors’ right of publicity claims were not preempted by copyright law, because the objection was not to the use of the copyrighted work, but the use of their faces—their likenesses—and so if the robots looked like them, [which was for a jury to determine], it would be a right of publicity claim.
Let’s back up a little. Flesh-and-blood human beings are not copyrightable. Why not?
That’s right. The starting point is that things that are protected by copyright need to be in a “fixed form,” and we are not fixed. And copyrightable material needs to be authored—and although we think of ourselves as authoring our own lives, that’s not what is meant. It’s supposed to be about human external creations. Those are the sorts of things that we consider to be copyrightable.
But you write that “there is rising pressure to copyright the attributes of people, including the code for our digital selves.” And you also write that “copyright plays a much larger role in propertizing and controlling people than is often thought, and this function is likely to grow in the era of AI.” What’s a recent example of the pressure to copyright attributes of people?
So, isolated attributes are not going to be copyrightable, just as people themselves are not. But if someone's voice is captured in a recording, or their image is captured in a photograph or a motion picture, all of that is copyrightable. And copyright law says you can reuse this copyrighted footage. So if you have one photograph of someone, and you're the copyright holder, you should be able to make copies of that photo. And you should also be able to make a derivative work from that photo—so, use the photo in a new context, or alter it in some way. That means that the copyright holder can wield some control over how the person initially captured in the photo appears in the world.
As we get more sophisticated with our technology, you could create a voice clone or a digital replica of someone. Is that copyrightable? This is an open question right now, but if that is copyrightable, or the computer code that creates the digital replica is, then under copyright law that copyrighted work could be reproduced many different times in different contexts, without the underlying person really having a say in how that copyrighted work is used. These are emerging questions, and there are also new laws that are addressing digital replicas, some of which are modeled on how we understand copyright law—which may not be the right way we want to think about property rights in a human being.
In a recent paper you wrote that the visual effects house Metaphysic has proposed registering digital replicas of its clients, which include Tom Hanks, with the US Copyright Office to protect against others using or creating digital replicas of them. If you were Tom Hanks and a company was proposing to copyright attributes indelibly connected to your persona as a performer, what worries would you have about the ways that might backfire on you?
I would be very worried. Metaphysic is so far claiming that they'll only allow the actual person depicted to seek registration. So they're trying to be a good actor in this space. And obviously this is something the US Copyright Office will decide. But I would worry very much about the Tom Hankses of the world signing over the rights to their digital replica to a studio as a condition of being in a movie—or perhaps to a manager or an agent—because then that person would have the rights to the performer's digital self and, given the technology, could replicate their performances in ways the performer wouldn't be able to control. And if it were you, you might be deeply troubled by what your digital replica is shown saying and doing.
If you were a young actor who aspired to the kind of success Tom Hanks has achieved, what worries would you have about some company copyrighting Tom Hanks’ digital clone?
So I would be worried as a young performer that all of these replicated performances by already established people will disrupt access to new jobs. There are also efforts to create new performances by famous dead people, both in stadium tours and in movies. ElevenLabs has apparently acquired the rights to create voice clones of a host of famous dead people. And even the [digital] de-aging of Tom Hanks means that rather than casting a new actor to play a younger version of that character, you could just have one performer de-aged to play themselves over time. So that's disruptive of potential jobs. Beyond that, there's going to be pressure that, to get a job, you'll be asked to sign away rights to your voice and your likeness and performance, unless there's much more robust legal protection.
Do you mean in the way that we all sort of agree to fine print that we don’t read whenever we click on an online service?
Very much so. Even companies like X and Instagram, if you look at their terms of service, they say anything you upload to the site, we have a non-exclusive license to use in any way we want. So they could actually create digital replicas of you, and you’ve agreed to that in writing by agreeing to the terms of service. That’s also very troubling.
I think actors have a little bit of protection from their union, SAG-AFTRA, to the extent that they're engaging in union work; the movie studios and producers have agreed not to use digital replicas of people without permission. But that's not true in other spheres. So I'm particularly worried where people don't have those sorts of union protections, or if before they ever get to the union they've already signed their rights away. I think there's particular pressure on student-athletes, who are now able to commercialize their identities. And this starts all the way in middle school, with kids being recruited as minors; their parents could sign away the rights to an agent who comes knocking with what seems like a lot of money at the time, a deal that would give this other person perpetual rights over the child's likeness and voice and name, even when they're grown.
When someone produces a video by prompting a generative-AI tool, who owns the copyright to that creation? Under US law, if I take a photograph, or record a video with a video camera, a copyright vests in me as the human author immediately upon the creation of that work. Does that change if I produce an image or video by typing some textual prompts into a chatbot interface?
This is another cutting-edge question that’s currently being litigated. The Copyright Office is taking the position, and I think it’s an appropriate one, that we need to have human authorship for copyright protection. But that doesn’t mean you can’t have an AI assist. For instance, the movie industry has been using AI in visual effects for decades, and you still get a copyright in your movie. So the sorts of questions that the Copyright Office is engaging with, and courts are just starting to tease out, involve how much contribution to the AI output the human has to give. So, how much did you put in your prompt? How much did you adjust the output? They’re really difficult issues to tease out. So I think the line that requires human authorship is going to remain, but there’s going to be allowance of some copyright protection for AI contributions that are shaped by human beings. But where we’ll draw the line on that is still a moving target.
Photographers control the copyright to images they’ve taken of another person who has given their consent to be photographed. Are there aspects of AI-powered digital replicas that call out to be treated differently from the way we’ve treated photography?
With digital replicas, the impact on the person who’s captured—if we say that that’s a copyrightable work—is so much greater than just a still image that I think we need to treat it differently.
Do you think digital replicas and voice clones should be considered copyrightable subject matter?
I do not think copyright as we currently understand it in the United States is a good fit for rights over a person’s identity. And if we use this framework, we need some special rules to protect both the people depicted and the public.
Because of uncertainty around this question, some states have already passed digital replica rights, and Congress has also proposed a new digital replica right. This has led, in a broader sense, to what I call the “identity thicket,” where we have a whole bunch of laws that extend rights over the same attributes of the same person. Some of them stay with the person. Some of them appear to be capable of being owned or controlled by others. And this creates legal chaos, market chaos, and also jeopardizes a person’s ability to control their own identity.
I think that whether we fold digital replicas and voice clones into the copyright system, or we create a separate statutory regime that is like copyright for digital replicas and voice clones, either way we need to have some special rules. And the reason we need special rules is to protect the underlying person who can then be reanimated, seemingly doing and saying things they didn’t do. This is both to protect the person who’s being depicted, but also to protect the public—who may not be able to discern whether it’s an authentic performance by the person, or whether the person authorized it, or even whether it’s AI-generated or not. And I think that is a very, very dangerous world for us to be living in, in which people can’t distinguish between authentic recordings and fake ones.
You just touched on an issue you’ve also written a lot about: the transferability of rights over a person’s identity—the ability of somebody to transfer ownership of a person’s name, likeness, or voice from one party to another. Tell me why that’s such a critical issue in this context.
Normally we’re not very concerned about transfers in copyright. If you write a book, and someone wants to buy it from you and distribute it, you would transfer the copyright to them and then they could make copies of it, sell it, market it. You’d be thrilled. If someone wants to make a movie out of your book, you can transfer the rights and they can do that. But it’s very different when you’re dealing with digital replicas or voice clones. Because if you transfer those rights, then someone else, the copyright holder or digital replica holder, could now be legally entitled to reproduce your digital replica. And that’s a very different matter. The same question arises in state publicity rights, which is whether we want to allow them to be transferable or not. I have strongly advocated that when it comes to a living person’s name, likeness, or voice, that we need to keep those rights vested in the person themselves, and that no one else should be able to own someone’s likeness or voice.
So you think that should be an inalienable right—one that we simply don’t allow someone to sell or give away, even if they want to.
Exactly. The counterargument is, But if someone wants to sell their voice or their likeness and they get money for it, why wouldn’t we let them? I mean, it may not be very smart, but if they want to do that, shouldn’t they get to do that? And the answer is that we actually don’t let people do anything they want. We don’t let people sell their votes. We don’t let people sell themselves into slavery, or commit to permanent servitude, even if somebody was giving them a lot of money. And we also sometimes prohibit things that are detrimental to society—and we’re all harmed when all of a sudden people don’t own their own likenesses or voices and are being replicated with no one having ongoing control over their identity.
Congress introduced a bill called the NO FAKES Act in 2023, which was reintroduced this year in a revised version. With 10 cosponsors split evenly between Democratic and Republican senators, its stated aim is to establish nationwide protections for artists, public figures, and private individuals against unauthorized use of their likenesses or voices in deepfakes and other synthetic media. But you’ve argued that the bill risks making “things worse than the status quo by erecting a federal law that would legitimize deceptive uses of digital replicas rather than appropriately regulate them.” In your view, where does this bill go wrong?
I will say that this version of the bill is much better than prior versions. And it’s better than the No AI Fraud bill, which allowed absolute transfers of digital replica rights. Here there is a licensing regime—but it’s 10 years in duration. And there are some protections for minors, which is good. But ultimately it fails on the two most important metrics that we should be thinking about for regulating digital replicas and deepfakes.
The first is: Is the particular manifestation of the digital replica authorized by the person depicted? NO FAKES fails in that regard, because it allows for these long-term licenses without the person necessarily approving what’s done with the digital replicas during that duration. And once that license expires, it’s not clear that the digital replica couldn’t continue to be used thereafter, because they have rights to anything created during the licensing period! The law also allows authorized representatives to enter these contracts without any supervision by the depicted individual. So that’s the first metric: The bill allows digital replicas and voice clones to be made without the specific authorization of the person depicted—what I call “deceptive authorization.”
The second problem is that this law doesn't address whether the uses themselves are deceptive. So even an authorized use of a voice clone or digital replica by the person depicted could still deceive the public into thinking it's an authentic performance. And that's something we as a society should be very concerned about, both to protect against disruption in elections or political discourse, and so that people can knowingly have a sense of what really happened and what didn't.
Essentially, this law would create a huge market in digital replicas, including ones that aren't specifically authorized by those depicted—and including a federal right in digital replicas of dead people—without any protections for the public against being deceived. So it could actually turbocharge the generation of more digital replicas by creating this very high-value federal IP right, rather than mitigate that concern.
So if this bill fails so comprehensively on these metrics, what do you think is the better course: declining to pass new legislation, and instead just relying on existing law and the courts to work out some of the thorny issues? Or making specific changes to the NO FAKES bill?
There is a lot of law already on the books. And we have recent litigation in New York involving voice clones of actors. Two voice actors brought a suit under New York’s right of publicity and privacy laws, and they succeeded at the initial stage of the litigation. I think they will ultimately prevail. So there’s a lot of law that applies.
I very much start with the principle "do no harm." So if a federal law would make it worse, we shouldn't pass it. Now, there are reasons why we might want some clarity with federal law. It's very difficult to navigate the hundreds of different state laws that have been passed in this area. But another challenge is that the NO FAKES bill, as written, has a very complicated preemption provision that leaves in place many of these state laws. So it doesn't really solve the problem of not having one clear standard across the country, and instead creates a new layer of conflicting rights. That is not ideal, and if we're going to go down that path, we should at least be focused on the key things in the legislation: first, that the person depicted has actually meaningfully authorized the use, and not just through a bit of legal hocus pocus; and second, that we're targeting concerns over deceiving the public.
We've mostly talked about the interest people have in controlling their likenesses amid an increasingly fraught technological context. But there are other concerns too. US law contains provisions that can weaken people's ability to lodge copyright claims, or privacy or right of publicity claims—and the First Amendment is one. So I wonder: How much does the First Amendment itself threaten our ability to control our own likenesses in this era of AI deepfakes, and how much control do you think we should be willing to give up in order to preserve values like freedom of speech?
I'm really happy you raised that. Because there's a countervailing interest here that we can lose sight of. We're talking about questions like: Should someone who wants to make money off of some other human being get to control that other person's identity? But then, of course, there's our ability to comment on people around us, and to use their image in doing so. If we make more robust a system of protecting people's identities, you can imagine politicians who don't like a recording of something, or police who have engaged in some potential misconduct and were recorded, saying, "That's a fake. Take it down." And if that's our starting point, well, for the private platforms controlling these choices, it's much easier for them to just take stuff down. And even authentic recordings could fall into the bucket of being taken down automatically.
So if we set up legal regimes that make that the pressure point—just take it down and fight about it later—then we have some real First Amendment concerns: it could prevent commentary, particularly on public figures including politicians, that would be robustly protected by the First Amendment.
We need to make sure that, in highlighting the importance of being able to protect your own identity from exploitation, we don't forget that we also want breathing room for creative caricature of public figures and politicians, and for commentary. So we can't lose sight of the potential dark side of overzealous enforcement of identity rights: their being weaponized to shut down very important public conversations and documentation of real-world events.
We’ve discussed some pretty thorny challenges. How sanguine are you about our ability as a society, circa 2025, to work our way through them?
I guess I’m somewhat hopeful, but it is a daunting path we’re on. I think we’re going to have to work in partnership with technology companies to develop tools to help detect when things are AI-generated. But there’s a bit of an arms race there. I think, hopefully, we can develop tools of authentication, as well as certification that something is not AI-generated. But it’s just much too early to tell what tools are going to work—and also what the marketplace is going to want.
I have hopes that maybe this will lead to a resurgence in live theater—that people will be, like, You know, the way you know it’s really happening is that we’re in the room. Or that people will want to hear more live music.
But at the same time, I think we need to recognize that some people may be perfectly happy with AI-generated background music on Spotify that helps them focus and is cheaper for everyone, even though maybe not so great for the ecosystem of recording artists. And the acting community may need to tolerate some AI-generated performances in the same way that they tolerate reality shows and animated series that also replace work for actors. So I think it’s still too early to know what the public’s preferences are going to be. And the technology is still evolving in terms of what tools we’re going to have that will help us navigate this space.
So I think the law needs to focus on making it better and not worse. And I think the key metrics, which are often lost in legislation, are again that people have knowingly agreed to be depicted in the exact way in which they are; and that the public is not intentionally deceived. And if we keep our eye on that ball—rather than just figuring out how we can harness a lot of market value from this moment—we will be on a better path.