Fake videos are getting easier, and governments are in on the game

In a video released by Xinhua, China’s state-run press agency, a young man with a shock of brown hair and rimless glasses made his debut Thursday as the newest member of the agency’s news team.

“Hello, everyone,” the anchor said, before introducing himself as an English-speaking digital composite modeled on the looks and voice of a real Xinhua host.

He is the world’s first artificial intelligence anchorman, developed by Xinhua and Sogou, a Chinese search engine. The edited video looks authentic; the content, benign.

Halfway around the world, the White House press secretary shared a video showing CNN’s Jim Acosta struggling with a White House intern to hold onto a microphone during a tense exchange with President Trump.

Fact-checkers and other experts say the video, which was first shared by Paul Joseph Watson, a conspiracy theorist associated with the far-right website InfoWars, was sped up to make it look like Acosta chopped the woman’s arm with his hand. Other versions of the video, believed to be authentic, showed him slowly raising his hand, appearing to gesture to the president. The White House pulled Acosta’s press pass Wednesday.

Governments have long manipulated images and released propaganda films — think of Joseph Stalin’s habit of “disappearing” political opponents from Soviet photographs. But the week’s events highlight how Silicon Valley technology is accelerating the blurring of reality and fiction. “Deepfakes,” highly realistic altered videos created by artificial intelligence, originated in the world of porn and could soon spread to other realms.

Some observers worry that such videos present a real danger for business, if a short seller released a clip of a CEO saying outrageous things; for democracy, if politicians published fictitious videos about their opponents; and for society at large.

“When you see video, you still think that you are peering into reality,” David Ryan Polgar, a tech ethicist, said. “The struggle now is that we are blurring the lines between reality and fiction. That’s extremely dangerous for our notions of truth, what happened and what didn’t.”

It used to be that creating realistic fake videos required a lot of software knowledge and computer hardware. Then came the democratization of fake video.

In 2017, an anonymous Reddit user, who went by the screen name “deepfakeapp,” created a program that could scan videos and still photos of one person and paint that person’s features onto another person in a separate video. The tool was free, readily available and accompanied by instructions for people without computer science degrees.
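
The core trick, as later open-source face-swap projects described it, is an autoencoder with one shared encoder and a separate decoder for each person. The sketch below, written in Python with PyTorch, is a minimal illustration of that design; the layer sizes, input resolution and loss are assumptions for the example, not the Reddit tool’s actual code.

# A minimal sketch of the shared-encoder, two-decoder autoencoder
# commonly used by open-source face-swap tools. Layer sizes, the
# 64x64 input resolution and the L1 loss are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(                       # shared by both people
    nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
    nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
    nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(128 * 8 * 8, 512),               # 64x64 face crop -> 512-dim code
)

def make_decoder():
    return nn.Sequential(
        nn.Linear(512, 128 * 8 * 8),
        nn.Unflatten(1, (128, 8, 8)),
        nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
    )

decoder_a = make_decoder()  # trained only on person A's face crops
decoder_b = make_decoder()  # trained only on person B's face crops

faces_a = torch.rand(4, 3, 64, 64)             # placeholder batch of A's faces
loss_a = F.l1_loss(decoder_a(encoder(faces_a)), faces_a)  # reconstruction loss

# The swap: encode A's pose and expression, decode with B's identity.
swapped = decoder_b(encoder(faces_a))

Because both decoders learn to reconstruct faces from the same compressed encoding, feeding person A’s face through person B’s decoder renders B’s features with A’s pose and expression.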

Now social media spreads fake videos at warp speed. One video appears to show “Wonder Woman” star Gal Gadot performing in a pornographic scene. Another depicts what a love child of Trump and German Chancellor Angela Merkel might look like. The Xinhua broadcasts use more or less the same techniques.

The ability to produce such realistic videos represents a triumph of computer science. It demonstrates the leaps researchers have made in deep neural networks, a set of algorithms modeled loosely on the human brain and taught to recognize patterns.

The videos have become increasingly convincing. Fighting them has required its own sophisticated computer work.

This year, three computer science researchers from the State University of New York at Albany found a flaw in many of these videos. Deepfake algorithms don’t typically use photos or videos where people have their eyes closed, so the videos they generate don’t feature people blinking. Siwei Lyu, an associate professor of computer science, said he and his team designed an artificial intelligence that detected the absence of blinking in faked videos with 95 percent accuracy.
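
Lyu’s team trained a deep network for the job; a much simpler way to see the underlying cue is the eye aspect ratio, a standard blink-detection measure computed from six landmark points around each eye. The Python sketch below illustrates that signal only, not the Albany detector, and assumes per-frame eye landmarks come from an external face-landmark library.

# A minimal sketch of the blinking cue using the eye aspect ratio
# (EAR), a standard blink measure -- not the Albany team's actual
# detector, which was a deep neural network. Per-frame eye landmarks
# are assumed to come from an external face-landmark library.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmark points around one eye, shape (6, 2)."""
    a = np.linalg.norm(eye[1] - eye[5])   # first vertical span
    b = np.linalg.norm(eye[2] - eye[4])   # second vertical span
    c = np.linalg.norm(eye[0] - eye[3])   # horizontal span
    return (a + b) / (2.0 * c)            # drops sharply when the eye closes

def count_blinks(ear_per_frame, closed_thresh=0.2):
    """Count open-to-closed transitions in a per-frame EAR series."""
    closed = np.asarray(ear_per_frame) < closed_thresh
    return int(closed[0]) + int(np.sum(closed[1:] & ~closed[:-1]))

# Toy EAR trace: one dip (frames 2-3) means one blink. A real face
# blinks every few seconds; a long clip with zero blinks is suspect.
print(count_blinks([0.31, 0.30, 0.12, 0.10, 0.29, 0.30, 0.31]))  # -> 1

A long clip in which that counter never increments is the kind of anomaly the researchers’ detector learned to flag.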

The team published its findings in June. Less than three weeks later, a group of anonymous software developers wrote to Lyu saying his tactic had backfired. They now understood that they needed to train their fakes on photos of people with their eyes both open and shut.

“Once they notice that you have a technique to detect the fake video, they will improve their methods to circumvent that detection,” Lyu said.

Ultimately, technology must fight the very problem it created, according to Polgar. Big tech companies should offer their vast stores of imagery and the algorithms they use to help detect fakes, and doctored videos should be banned and taken down when identified, he said.

Y Combinator, a San Francisco tech investment group that offers money and mentorship to early-stage startups, announced in March that it is looking to fund startups that could solve the problem of fake video.

“The tech to create doctored videos that are indistinguishable from reality now exists, and soon it will be widely available to anyone with a smartphone,” the startup incubator said. “We are interested in funding tech that will equip the public with the tools they need to identify fake video and audio.”

The month before, Sam Altman, the organization’s president, had tweeted about being fooled by a fake video:

“Today was the first day I fell for an AI-generated fake video with major geopolitical implications. Luckily the people who showed it to (me) held my phone while I was watching it. But whoa. The world is gonna get weird.”

Melia Russell is a San Francisco Chronicle staff writer. Email: melia.russell@sfchronicle.com Twitter: @meliarobin