Facebook Partners With Microsoft And Several Universities To Improve Deepfake Detection

By Amit Chowdhry ● Sep 10, 2019
  • Facebook recently announced it is partnering with Microsoft and academic institutions to set up a Deepfake Detection Challenge (DFDC)

Facebook recently announced a partnership with the Partnership on AI, Microsoft, and academics at Cornell Tech, MIT, the University of Oxford, UC Berkeley, the University of Maryland, College Park, and the University at Albany-SUNY to set up the Deepfake Detection Challenge (DFDC). The DFDC will award up to $10 million in grants and awards to inspire innovation and make it easier to detect fake content.

“Data sets and benchmarks have been some of the most effective tools to speed progress in AI. Our current renaissance in deep learning has been fueled in part by the ImageNet benchmark. Recent advances in natural language processing have been hastened by the GLUE and SuperGLUE benchmarks,” said Facebook chief technology officer Mike Schroepfer. “‘Deepfake’ techniques, which present realistic AI-generated videos of real people doing and saying fictional things, have significant implications for determining the legitimacy of information presented online. Yet the industry doesn’t have a great data set or benchmark for detecting them. We want to catalyze more research and development in this area and ensure that there are better open-source tools to detect deepfakes.”

Schroepfer also pointed out that the goal of the challenge is to produce technology that everyone can use to better detect when AI has been used to alter a video in order to mislead the viewer. The Deepfake Detection Challenge will include a data set and leaderboard, as well as grants and awards, to spur the industry to create new ways of detecting AI-manipulated media and preventing it from being used to mislead others.

“In order to move from the information age to the knowledge age, we must do better in distinguishing the real from the fake, reward trusted content over untrusted content, and educate the next generation to be better digital citizens. This will require investments across the board, including in industry/university/NGO research efforts to develop and operationalize technology that can quickly and accurately determine which content is authentic,” added UC Berkeley professor Hany Farid.

The governance of this challenge will be facilitated and overseen by the Partnership on AI's new Steering Committee on AI and Media Integrity, a broad cross-sector coalition of organizations including Facebook, WITNESS, Microsoft, and others from civil society and the technology, media, and academic communities. No Facebook user data will be used in this data set.

“People have manipulated images for almost as long as photography has existed. But it’s now possible for almost anyone to create and pass off fakes to a mass audience. The goal of this competition is to build AI systems that can detect the slight imperfections in a doctored image and expose its fraudulent representation of reality,” noted MIT professor Antonio Torralba.

“To effectively drive change and solve problems, we believe it’s critical for academia and industry to come together in an open and collaborative environment. At Cornell Tech, our research is centered around bridging that gap and addressing technology’s societal impact in the digital age, and the Deepfake Detection Challenge is a perfect example of this. Working with tech industry leaders and academic colleagues, we are developing a comprehensive data source that will enable us to identify fake media and ultimately lead to building tools and solutions to combat it. We’re proud to be a part of this group and to share the data source with the public, allowing anyone to learn from and expand upon this research,” explained Cornell Tech associate dean and professor Serge Belongie.

University at Albany-SUNY professor Siwei Lyu noted that because deepfakes are generated by algorithms rather than cameras, they can still be "detected and their provenance verified." Lyu added that there are several promising new methods for spotting deepfakes and mitigating their harmful effects, including procedures for adding "digital fingerprints" to video footage to help verify its authenticity.
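The article does not describe how such digital fingerprinting would work, but the general idea can be sketched with a cryptographic hash chain: each frame of the footage is hashed, and the per-frame digests are folded into a single fingerprint that changes if any frame is altered. The function names below are hypothetical, and real provenance systems (which typically also involve signing and metadata standards) are far more involved; this is only a minimal illustration of the concept.

```python
import hashlib

def fingerprint_frames(frames):
    """Fold a sequence of raw frame byte buffers into one fingerprint.

    Each frame is hashed individually, and the per-frame digests are
    chained into an overall SHA-256 digest, so altering, reordering,
    or dropping any frame changes the final fingerprint.
    """
    overall = hashlib.sha256()
    for frame in frames:
        overall.update(hashlib.sha256(frame).digest())
    return overall.hexdigest()

def verify_footage(frames, expected_fingerprint):
    """Check footage against a previously recorded fingerprint."""
    return fingerprint_frames(frames) == expected_fingerprint

# Example: fingerprint original footage, then detect a tampered copy.
original = [b"frame-0-pixels", b"frame-1-pixels", b"frame-2-pixels"]
fp = fingerprint_frames(original)

tampered = [b"frame-0-pixels", b"DOCTORED-frame", b"frame-2-pixels"]
print(verify_footage(original, fp))   # matches the recorded fingerprint
print(verify_footage(tampered, fp))   # any altered frame fails the check
```

In practice the fingerprint would be recorded at capture time (ideally signed by the camera) so that downstream viewers can verify the footage has not been manipulated since.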