💾 Archived View for nicholasjohnson.ch › 2022 › 04 › 24 › implications-of-synthetic-media captured on 2024-03-21 at 15:26:28. Gemini links have been rewritten to link to archived content


 _  _ _    _        _              _     _                      
| \| (_)__| |_  ___| |__ _ ___  _ | |___| |_  _ _  ___ ___ _ _  
| .` | / _| ' \/ _ \ / _` (_-< | || / _ \ ' \| ' \(_-</ _ \ ' \ 
|_|\_|_\__|_||_\___/_\__,_/__/  \__/\___/_||_|_||_/__/\___/_||_|

🔗 Return to homepage

📆 April 24, 2022 | ⏱️ 6 minutes read | 🏷️ computing

Implications of Synthetic Media

A few months ago, I wrote an entry titled "The Privacy Implications of Weak AI¹". This entry is a continuation of my thoughts about AI, specifically synthetic media.

AI and automation are subjects people avoid thinking about because they're scary. I can't fault anybody for that, because they're right to be scared. The way weak AI is already being used is extremely worrying and doesn't bode well for the future, but we can't find solutions without discussing the problem. So today, I thought I'd explore another way weak AI might disrupt society.

In case you're not familiar with the term "deepfake", it refers to AI-generated media² (synthetic media) where a person in a picture or video is digitally replaced with somebody else. The goal is for the replacement to be so seamless that it's impossible to tell the difference. Right now, deepfakes³ are pretty good and they're getting better all the time. This has huge implications.

Plausible Deniability

Blackmail

You might initially think, as I did, that blackmail will get a lot easier. You won't even need real incriminating photos or videos of someone any more. You can just generate them as needed. But here's the problem: every semi-computer-literate person will be able to generate convincing deepfakes. As deepfakes become more common and the public becomes more aware of them, blackmail using photos, videos, audio, and so on will become impossible, because the victim can always plausibly deny it.

Even if you have real blackmail material on someone, all the victim needs to do is claim it's deepfaked, and it will be impossible for a third party to be sure one way or the other without more context. So blackmail will become harder, not easier.

Law

I suspect deepfakes will cause photo, video, and audio evidence to be taken less seriously in a court of law. As creating deepfakes becomes easier and more accessible to everyone, courts will increasingly have to rely on contextual information, without taking the authenticity of the media itself for granted.

Sure, video, image, and audio editing tools have been around for a while. But it takes resources for humans to fake evidence: skill and time, or at least some money to pay someone else to do it and keep quiet. Courts have to ask, "Does the claimant have the resources to fabricate evidence?" Traditionally, fabricating evidence is not trivial, but with deepfakes, it is. Anybody can effortlessly create convincing fakes.

Deepfakes change the game by reducing the cost of creating fakes. In the future, only motive will be required to fake evidence, not resources.

Nudes

This one's just a hunch, but I predict sending nudes will become more common given that the nudes will be deniable if they end up in the wrong hands. The original recipient may know that the nudes are real, but will anybody else believe them? So I think the deniability will increase people's willingness to send intimate media.

The software for faking nudes already exists.⁴

Social Engineering

But there's more than just increased plausible deniability. Deepfakes will change the social engineering⁵ game.

I imagine it like that scene in the first Terminator movie where the terminator mimics people's voices after hearing them once. You can just record someone's voice, then train an AI to replicate it. Unless there's a law against it, police might use this to trick suspects and extract information from them.

On the other side of the law, black hat hackers will certainly use deepfakes to social engineer corporations and institutions. In fact, it already happened when a voice deepfake was used to scam a CEO out of $243,000⁶.

The Infopocalypse

The central subject we seem to be orbiting is the infopocalypse: the point when sock puppets and deepfakes become pervasive across the internet. And I have to mention sock puppets because they go hand in hand with deepfakes in an important way.

Right now, what prevents bots from overtaking the internet is mainly CAPTCHA⁷, phone registration, and bot detection systems. CAPTCHA is a technique to tell humans and computers apart. As AI improves, bots will eventually be able to do all the things that humans can do, including passing CAPTCHA. They'll also be able to bypass bot detection and, with some money, buy phone numbers.

We have to assume that as time passes, it will take fewer and fewer resources for anyone to create their own personal army of convincing bots. Combining this with deepfakes will make it nearly impossible to tell human from machine. Unless new techniques for bot prevention are developed, online platforms may run rampant with spam, disinformation, and sock puppets.

So new techniques will have to be developed to tell humans and machines apart, and hopefully those techniques will still allow for online anonymity. Internet protocols and applications will have to be adapted to defend against this new threat model.

I don't want to overstate the problem. Assuming online protocols and platforms find ways to deal with bots, people with good sources will continue seeing reliable information, and people with bad sources will continue being brainwashed by nonsense. Bots or no bots, people who check their sources will always be better informed than those who don't. I don't think that aspect is going to change, although it may become more difficult to judge whether newly encountered sources are trustworthy.

Human-Bot Relationships

Now, broadening the subject even more to synthetic media as a whole, not just deepfakes, there's another way I believe the social landscape will be radically changed.

Maintaining relationships with real people takes effort. With synthetic media and convincing chat bots, a lot of people will probably opt for relationships with synthetic, digital AI systems instead of other human beings. This could be really destructive to the social fabric. The word "loner" will take on a whole new meaning.

What worries me the most is how addictive these AI chat bots could be. We've already seen how bad social media and smartphone addiction are. Maybe it's too early to worry about this, but if AI chat bots pass the Turing test⁸ and become capable of real-time audio and video calls, there will probably be less human connection in society.

If you're looking for some inspiration, two good films depicting human-bot relationships are Her⁹ and Ex Machina¹⁰. Both films depict AI taking human form, which goes a bit outside the scope of synthetic media, but synthetic media by itself probably wouldn't make for a good film.

Art and Self-Expression

Synthetic media will also revolutionize art and self-expression. Imagine online gaming where your face, body, and mannerisms are superimposed onto your avatar. Imagine going to see a movie starring you and your friends. Imagine more interactive art.

I don't think synthetic media used for self-expression is necessarily a net good, though. Giving people new ways to express themselves is good, but not if they use it as a means of escaping the world like in the movie Ready Player One¹¹. We don't want to give people yet another way to be bought off by extreme capitalists and distracted from the problems happening in the real world.

Conclusion

Predicting the future is somewhat of a fool's errand. Only time will tell how synthetic media is going to transform society. But I believe I've made some good predictions, and I hope I've at least gotten more people thinking about it. Thanks again for reading.

References

🔗 [1]: The Privacy Implications of Weak AI

🔗 [2]: AI-generated media

🔗 [3]: deepfakes

🔗 [4]: The software for faking nudes already exists.

🔗 [5]: social engineering

🔗 [6]: a voice deepfake was used to scam a CEO out of $243,000

🔗 [7]: CAPTCHA

🔗 [8]: Turing test

🔗 [9]: Her

🔗 [10]: Ex Machina

🔗 [11]: Ready Player One

Copyright 2020-2024 Nicholas Johnson. CC BY-SA 4.0.