Deep Fakes

Deep fakes are videos or images manipulated with artificial intelligence, typically to swap one person's face or voice for another's. They are used maliciously for political manipulation, fraud, and misinformation, raising legal and ethical concerns and prompting measures to prevent harm.


Technical Summary

Deep fakes refer to artificially manipulated videos or images in which an individual's face is replaced with someone else's face. The term "deep fake" comes from the use of deep learning techniques, specifically generative adversarial networks (GANs), to generate realistic-looking images or videos that are difficult to distinguish from real ones. In recent years, deep fakes have become a growing concern, as they have been used for various malicious purposes, such as spreading false information, manipulating political discourse, and conducting fraud.

Deep fakes are created using artificial intelligence (AI) algorithms, specifically GANs. GANs consist of two neural networks, a generator and a discriminator, that are trained together to generate realistic images or videos. The generator creates the fake images or videos, while the discriminator evaluates how realistic they are. Over time, the generator learns to create images or videos that are increasingly difficult for the discriminator to distinguish from real ones.
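To make this concrete, here is a minimal sketch of a GAN training loop, written in PyTorch. Everything in it is illustrative: the two networks are toy multilayer perceptrons and the "real" images are random tensors standing in for a face dataset, so it shows the adversarial training pattern rather than a production deepfake model.

```python
# Toy GAN training loop (PyTorch). Illustrative only: tiny MLPs, and the
# "real" batch is random data standing in for a dataset of face images.
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 64, 28 * 28

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),        # fake "image" in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),                         # real/fake logit
)

loss_fn = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.rand(32, IMG_DIM) * 2 - 1     # stand-in for real face images
    fake = generator(torch.randn(32, LATENT_DIM))

    # Discriminator step: label real images 1, generated images 0.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator call the fakes real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Over many such steps the generator's outputs become harder for the discriminator to reject, which is exactly the dynamic that makes mature deepfakes difficult to distinguish from real footage.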

To create a deep fake, the generator is trained on a dataset of images or videos of the person whose face is going to be replaced. The generator learns to create a digital model of the person's face and then replaces the face in the original video or image with the generated face. The result is a video or image that looks like it was originally filmed with the replaced person's face.

Potential uses of deep fakes: While the technology behind deep fakes can serve legitimate purposes, such as film production or education, there are serious concerns about its malicious uses. These include:

  1. Political manipulation: Deep fakes can be used to manipulate political discourse by creating fake videos of political leaders or candidates saying or doing things that they did not actually say or do.

  2. Fraud: Deep fakes can be used to conduct fraud by creating fake videos of individuals claiming to be someone they are not, such as a bank CEO or a government official.

  3. Revenge pornography: Deep fakes can be used to create fake videos or images of individuals engaging in sexual activities, which can be used to shame or blackmail them.

  4. Misinformation: Deep fakes can be used to spread false information or conspiracy theories by creating fake videos or images that support a particular narrative.

Legal and ethical implications: The use of deep fakes raises several legal and ethical concerns. One concern is the potential for deep fakes to be used to harm individuals or organizations. For example, deep fakes could be used to damage the reputation of a political candidate or to conduct fraud against a business. This could result in significant financial or reputational damage.

Another concern is the potential for deep fakes to be used to spread false information or propaganda. Deep fakes could be used to manipulate public opinion, sow discord, or create confusion about what is real and what is not.

In response to these concerns, several countries have passed or are considering passing laws to regulate the use of deep fakes. For example, in the United States, several states have passed laws that make it illegal to create or distribute deep fakes for malicious purposes. Additionally, several social media platforms have implemented policies to detect and remove deep fakes from their platforms.

Deep fakes are a growing concern as the technology behind them becomes more sophisticated. Their potential to harm individuals and organizations, spread false information, and manipulate public opinion raises serious legal and ethical questions. It is therefore important to continue monitoring the development and use of deep fakes and to implement measures to prevent their malicious use.

What is a deepfake?

Have you seen Barack Obama call Donald Trump a “complete dipshit”, or Mark Zuckerberg brag about having “total control of billions of people’s stolen data”, or witnessed Jon Snow’s moving apology for the dismal ending to Game of Thrones? Answer yes and you’ve seen a deepfake. The 21st century’s answer to Photoshopping, deepfakes use a form of artificial intelligence called deep learning to make images of fake events, hence the name deepfake. Want to put new words in a politician’s mouth, star in your favourite movie, or dance like a pro? Then it’s time to make a deepfake.

What are they for?

Many are pornographic. The AI firm Deeptrace found 15,000 deepfake videos online in September 2019, a near doubling over nine months. A staggering 96% were pornographic and 99% of those mapped faces from female celebrities on to porn stars. As new techniques allow unskilled people to make deepfakes with a handful of photos, fake videos are likely to spread beyond the celebrity world to fuel revenge porn. As Danielle Citron, a professor of law at Boston University, puts it: “Deepfake technology is being weaponised against women.” Beyond the porn there’s plenty of spoof, satire and mischief.

Is it just about videos?

No. Deepfake technology can create convincing but entirely fictional photos from scratch. A non-existent Bloomberg journalist, “Maisy Kinsley”, who had a profile on LinkedIn and Twitter, was probably a deepfake. Another LinkedIn fake, “Katie Jones”, claimed to work at the Center for Strategic and International Studies, but is thought to be a deepfake created for a foreign spying operation.

Audio can be deepfaked too, to create “voice skins” or “voice clones” of public figures. Last March, the chief of a UK subsidiary of a German energy firm paid nearly £200,000 into a Hungarian bank account after being phoned by a fraudster who mimicked the German CEO’s voice. The company’s insurers believe the voice was a deepfake, but the evidence is unclear. Similar scams have reportedly used recorded WhatsApp voice messages.

How are they made?

University researchers and special effects studios have long pushed the boundaries of what’s possible with video and image manipulation. But deepfakes themselves were born in 2017 when a Reddit user of the same name posted doctored porn clips on the site. The videos swapped the faces of celebrities – Gal Gadot, Taylor Swift, Scarlett Johansson and others – on to porn performers.

It takes a few steps to make a face-swap video. First, you run thousands of face shots of the two people through an AI algorithm called an encoder. The encoder finds and learns similarities between the two faces, and reduces them to their shared common features, compressing the images in the process. A second AI algorithm called a decoder is then taught to recover the faces from the compressed images. Because the faces are different, you train one decoder to recover the first person’s face, and another decoder to recover the second person’s face. To perform the face swap, you simply feed encoded images into the “wrong” decoder. For example, a compressed image of person A’s face is fed into the decoder trained on person B. The decoder then reconstructs the face of person B with the expressions and orientation of face A. For a convincing video, this has to be done on every frame.

Another way to make deepfakes uses what’s called a generative adversarial network, or Gan. A Gan pits two artificial intelligence algorithms against each other. The first algorithm, known as the generator, is fed random noise and turns it into an image. This synthetic image is then added to a stream of real images – of celebrities, say – that are fed into the second algorithm, known as the discriminator. At first, the synthetic images will look nothing like faces. But repeat the process countless times, with feedback on performance, and the discriminator and generator both improve. Given enough cycles and feedback, the generator will start producing utterly realistic faces of completely nonexistent celebrities.
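The shared-encoder, twin-decoder design described above can be sketched in a few lines. The following PyTorch snippet is a toy illustration: random tensors stand in for aligned face crops of persons A and B, and the networks are far smaller than anything that would produce a convincing swap. The final line shows the core trick of encoding a face of A and decoding it with B’s decoder.

```python
# Toy shared-encoder / twin-decoder face-swap trainer (PyTorch).
# Random tensors stand in for aligned 64x64 face crops of A and B.
import torch
import torch.nn as nn

def make_decoder():
    return nn.Sequential(nn.Linear(128, 512), nn.ReLU(),
                         nn.Linear(512, 64 * 64 * 3), nn.Sigmoid())

encoder = nn.Sequential(nn.Flatten(),
                        nn.Linear(64 * 64 * 3, 512), nn.ReLU(),
                        nn.Linear(512, 128))            # shared compressed code
decoder_a, decoder_b = make_decoder(), make_decoder()  # one decoder per face

opt = torch.optim.Adam([*encoder.parameters(),
                        *decoder_a.parameters(),
                        *decoder_b.parameters()], lr=1e-4)
loss_fn = nn.MSELoss()

for step in range(100):
    faces_a = torch.rand(8, 3, 64, 64)   # stand-ins for person A's photos
    faces_b = torch.rand(8, 3, 64, 64)   # stand-ins for person B's photos
    # Each decoder learns to reconstruct its own person from the shared code.
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a.flatten(1)) + \
           loss_fn(decoder_b(encoder(faces_b)), faces_b.flatten(1))
    opt.zero_grad(); loss.backward(); opt.step()

# The swap: encode a face of person A, decode it with person B's decoder.
swapped = decoder_b(encoder(torch.rand(1, 3, 64, 64))).view(3, 64, 64)
```

Because both decoders read from the same compressed representation, the code captures pose and expression while each decoder supplies its own person’s identity, which is why feeding the “wrong” decoder performs the swap.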

Who is making deepfakes?

Everyone from academic and industrial researchers to amateur enthusiasts, visual effects studios and porn producers. Governments might be dabbling in the technology, too, as part of their online strategies to discredit and disrupt extremist groups, or to make contact with targeted individuals, for example.

What technology do you need?

It is hard to make a good deepfake on a standard computer. Most are created on high-end desktops with powerful graphics cards or better still with computing power in the cloud. This reduces the processing time from days and weeks to hours. But it takes expertise, too, not least to touch up completed videos to reduce flicker and other visual defects. That said, plenty of tools are now available to help people make deepfakes. Several companies will make them for you and do all the processing in the cloud. There’s even a mobile phone app, Zao, that lets users add their faces to a list of TV and movie characters on which the system has trained.

How do you spot a deepfake?

Poor-quality deepfakes are easier to spot. The lip synching might be bad, or the skin tone patchy. There can be flickering around the edges of transposed faces. And fine details, such as hair, are particularly hard for deepfakes to render well, especially where strands are visible on the fringe. Badly rendered jewellery and teeth can also be a giveaway, as can strange lighting effects, such as inconsistent illumination and reflections on the iris.

It gets harder as the technology improves. In 2018, US researchers discovered that deepfake faces don’t blink normally. No surprise there: the majority of images show people with their eyes open, so the algorithms never really learn about blinking. At first, it seemed like a silver bullet for the detection problem. But no sooner had the research been published than deepfakes appeared with blinking. Such is the nature of the game: as soon as a weakness is revealed, it is fixed.

Governments, universities and tech firms are all funding research to detect deepfakes. Last month, the first Deepfake Detection Challenge kicked off, backed by Microsoft, Facebook and Amazon. It will include research teams around the globe competing for supremacy in the deepfake detection game.

Facebook last week banned deepfake videos that are likely to mislead viewers into thinking someone “said words that they did not actually say”, in the run-up to the 2020 US election. However, the policy covers only misinformation produced using AI, meaning “shallowfakes” (see below) are still allowed on the platform.
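The blinking cue mentioned above is easy to illustrate. The sketch below assumes you already have six (x, y) landmarks per eye for each frame from some face-landmark detector (landmark extraction is out of scope here); it computes the eye aspect ratio from Soukupová and Čech’s 2016 blink-detection method and counts blinks, so a long clip with implausibly few blinks can be flagged for closer inspection. The thresholds are illustrative, not tuned values.

```python
# Toy blink check based on the eye-aspect-ratio (EAR) method.
# Input: six (x, y) eye landmarks per frame from a hypothetical detector.
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) points ordered corner, top, top, corner, bottom, bottom."""
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)   # drops sharply when the eye closes

def blink_count(ear_per_frame, threshold=0.2, min_frames=2):
    """Count blinks as runs of consecutive frames with EAR below threshold."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    return blinks

# A minute of video with near-zero blinks is worth a closer look,
# though, as noted above, newer deepfakes have learned to blink.
```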

Will deepfakes wreak havoc?

We can expect more deepfakes that harass, intimidate, demean, undermine and destabilise. But will deepfakes spark major international incidents? Here the situation is less clear. A deepfake of a world leader pressing the big red button should not cause armageddon. Nor will deepfake satellite images of troops massing on a border cause much trouble: most nations have their own reliable security imaging systems.

There is still ample room for mischief-making, though. Last year, Tesla stock crashed when Elon Musk smoked a joint on a live web show. In December, Donald Trump flew home early from a Nato meeting when genuine footage emerged of other world leaders apparently mocking him. Will plausible deepfakes shift stock prices, influence voters and provoke religious tension? It seems a safe bet.

Will they undermine trust?

The more insidious impact of deepfakes, along with other synthetic media and fake news, is to create a zero-trust society, where people cannot, or no longer bother to, distinguish truth from falsehood. And when trust is eroded, it is easier to raise doubts about specific events.

Last year, Cameroon’s minister of communication dismissed as fake news a video that Amnesty International believes shows the country’s soldiers executing civilians.

Donald Trump, who admitted to boasting about grabbing women’s genitals in a recorded conversation, later suggested the tape was not real. In Prince Andrew’s BBC interview with Emily Maitlis, the prince cast doubt on the authenticity of a photo taken with Virginia Giuffre, a shot her attorney insists is genuine and unaltered.

“The problem may not be so much the faked reality as the fact that real reality becomes plausibly deniable,” says Prof Lilian Edwards, a leading expert in internet law at Newcastle University.

As the technology becomes more accessible, deepfakes could mean trouble for the courts, particularly in child custody battles and employment tribunals, where faked events could be entered as evidence. But they also pose a personal security risk: deepfakes can mimic biometric data, and can potentially trick systems that rely on face, voice, vein or gait recognition. The potential for scams is clear. Phone someone out of the blue and they are unlikely to transfer money to an unknown bank account. But what if your “mother” or “sister” sets up a video call on WhatsApp and makes the same request?

What’s the solution?

Ironically, AI may be the answer. Artificial intelligence already helps to spot fake videos, but many existing detection systems have a serious weakness: they work best for celebrities, because they can train on hours of freely available footage. Tech firms are now working on detection systems that aim to flag up fakes whenever they appear. Another strategy focuses on the provenance of the media. Digital watermarks are not foolproof, but a blockchain online ledger system could hold a tamper-proof record of videos, pictures and audio so their origins and any manipulations can always be checked.
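The provenance idea can be reduced to a small sketch: record a cryptographic fingerprint of a clip when it is published, then check any later copy against it. The in-memory dictionary below is a stand-in for whatever tamper-evident store (a blockchain ledger, say) a real system would use; the function names are hypothetical.

```python
# Minimal media-provenance sketch: register a SHA-256 fingerprint at
# publish time, then verify later copies against the recorded hash.
import hashlib

ledger = {}  # stand-in for a tamper-evident record store

def fingerprint(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # stream in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

def register(path, source):
    ledger[fingerprint(path)] = source    # publish-time record

def verify(path):
    # Any edit to the file, however small, changes the hash entirely.
    return ledger.get(fingerprint(path), "no matching record: unverified")
```

Note this proves only that a copy is bit-identical to something registered; it says nothing about content that was manipulated before registration, which is why provenance is a complement to detection, not a replacement for it.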

Are deepfakes always malicious?

Not at all. Many are entertaining and some are helpful. Voice-cloning deepfakes can restore people’s voices when they lose them to disease. Deepfake videos can enliven galleries and museums. In Florida, the Dalí museum has a deepfake of the surrealist painter who introduces his art and takes selfies with visitors. For the entertainment industry, the technology can be used to improve the dubbing on foreign-language films and, more controversially, resurrect dead actors. For example, the late James Dean is due to star in Finding Jack, a Vietnam war movie.

What about shallowfakes?

Coined by Sam Gregory at the human rights organisation Witness, shallowfakes are videos that are either presented out of context or are doctored with simple editing tools. They are crude but undoubtedly impactful. A shallowfake video that slowed down Nancy Pelosi’s speech and made the US Speaker of the House sound slurred reached millions of people on social media.

In another incident, Jim Acosta, a CNN correspondent, was temporarily banned from White House press briefings during a heated exchange with the president. A shallowfake video released afterwards appeared to show him making contact with an intern who tried to take the microphone off him. It later emerged that the video had been sped up at the crucial moment, making the move look aggressive. Acosta’s press pass was later reinstated.

The UK’s Conservative party used similar shallowfake tactics. In the run-up to the recent election, the Conservatives doctored a TV interview with the Labour MP Keir Starmer to make it seem that he was unable to answer a question about the party’s Brexit stance. With deepfakes, the mischief-making is only likely to increase. As Henry Ajder, head of threat intelligence at Deeptrace, puts it: “The world is becoming increasingly more synthetic. This technology is not going away.”
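Retiming of the kind used against Pelosi and Acosta needs no AI at all; it is plain frame resampling. In the toy sketch below, a list of integers stands in for decoded video frames.

```python
# Shallowfake-style retiming as simple frame resampling, no AI involved.
def retime(frames, speed):
    """speed < 1 slows down (repeats frames), speed > 1 speeds up (drops them)."""
    n_out = int(len(frames) / speed)
    return [frames[min(int(i * speed), len(frames) - 1)] for i in range(n_out)]

frames = list(range(100))        # stand-in for 100 decoded frames
slowed = retime(frames, 0.75)    # ~133 frames: a Pelosi-style slowdown
sped_up = retime(frames, 1.5)    # ~66 frames: an Acosta-style speed-up
```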
