What a new state law means for sex-related deepfakes and elections
You've heard the old saying: “seeing is believing.” But with the rise of deepfakes and synthetic video and audio, you may soon doubt your eyes and ears.
Governments are trying to figure out what to do about technologies that generate or manipulate digital media, especially video and audio, in ways that are hard for viewers to distinguish from authentic material. Concern is growing over deepfakes' ability to spread misinformation and damage reputations.
Earlier this week, a new Minnesota law made it a crime to create sex-related deepfakes or to use deepfakes to influence an election; disseminating them carries penalties of up to five years in prison and $10,000 in fines.
University of Minnesota Associate Professor of Media Law Chris Terry was a guest on Morning Edition with Cathy Wurzer to talk about what this means for Minnesotans.
The following is a transcript lightly edited for clarity. Listen to the full conversation by clicking on the player above.
How easy is it for someone to make a deepfake?
The technology is advancing rapidly, even as we speak. The best comparison is this: what it used to take to Photoshop a single photo, one can now do with video, and it's relatively easy.
And how good are they?
Well, like most things, there are some really good ones, and there are some obviously fake ones. And, of course, the concern behind the bill is the ones in the first category: the ones that are pretty good and would pass a cursory examination by a reasonable person, which is the standard the provision uses.
Do you think the legislation raises constitutional questions, like First Amendment conflicts?
Well, the interesting thing about this bill is that it's like basically two bills about deepfakes that they crammed together. So there's the deepfake pornography part of the bill, and then there's the deepfake political advertising-related part of the bill.
Minnesota has a revenge porn statute, which is very closely related to the pornography part of the deepfake bill here. And that's probably constitutionally OK, given that the Minnesota Supreme Court upheld the state's revenge porn provision in the Casillas case a couple of years ago. The political advertising part, or the political speech part, of the deepfake bill, I think, is a little more questionable in terms of the First Amendment.
Is there an immunity clause for internet service providers in this?
There is a Section 230 immunity provision. That's an important addition to the bill that removes the liability for dissemination across a platform like X, Bluesky or Facebook. And it focuses more, or at least it tries to focus more, on the people who are creating and disseminating these videos.
It does not, however, contain a provision that provides the same liability protections to broadcasters. And one of my immediate First Amendment concerns is this: if a political campaign were to employ a deepfake and send it to a [commercial] station, and the candidate were legally qualified, the station doesn't have a choice. It has to run that ad as is, and it would potentially not have protection for having done so.
Can you explain more about the part of the law that deals with elections and why it is important? And do you have concerns about using deepfakes in political ads?
The law is basically a provision that prevents deepfakes from being used to influence an election, specifically, to denigrate a candidate within 90 days of an election. Now, that's a really specific timeline, but it's closely tied to many political advertising provisions. So it's not out of left field.
But it does mean that on day 91 before the election, you can do this freely and without concern. And that, of course, is going to cause all kinds of problems, because if a deepfake is disseminated on the internet before then, it's still going to be up. But at that point, only the people sharing it, not necessarily the people who created it, are potentially going to have some legal liability.
As an academic, how concerned are you about where we're going with deepfakes and AI?
Well, my concern with AI is an entirely separate one. My concern with deepfakes is that they are fundamentally a deception issue. And in commercial speech and political advertising speech, deception has a very specific legal definition that's kind of old at this point, and it doesn't necessarily cover these new technological means.
Do I think it's a problem that people will use face technology in a political ad? That's going to happen. But is it radically different from things we already see? One of the provisions in the bill still allows you to use a real person to imitate someone. Take the ad attacking Angie Craig in the last election, run by the Congressional Leadership Fund: they had an Angie Craig lookalike driving around in a golf cart. This law would not stop that kind of thing, only the altered use of Angie Craig's actual face over that imitator's body.
Do you have any suggestions for listeners that might help them spot deepfakes?
There's usually a pretty obvious line in a deepfake. One thing the technology hasn't quite gotten past yet is that it's often hard to match video to video in a way that is completely seamless. Now, I don't know that that will always be the case, but it certainly is now. Still, the best way to make sure something's real is to try to verify its authenticity through another source.