Facebook Moves to Detect and Remove Deepfake Videos
Facebook has announced plans to ban deepfake videos.
In a blog post, Monika Bickert, the company’s vice-president for global policy management, acknowledged that “while these videos are still rare on the internet, they present a significant challenge for our industry and society as their use increases.”
Bickert said that “misleading manipulated media” will be removed if it has been edited or synthesized – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say. Videos will also be removed if they are the product of AI or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.
“This policy does not extend to content that is parody or satire, or video that has been edited solely to omit or change the order of words,” Bickert said. “This approach is critical to our strategy and one we heard specifically from our conversations with experts.
“If we simply removed all manipulated videos flagged by fact-checkers as false, the videos would still be available elsewhere on the internet or social media ecosystem. By leaving them up and labelling them as false, we’re providing people with important information and context.”
Jake Moore, cybersecurity specialist at ESET, said that deepfakes are becoming increasingly difficult to spot and that AI is needed to help detect them. “Fake videos of famous or powerful people can be extremely manipulative, causing extremely damaging effects in some cases. It is a bold claim from Facebook to ban all such false videos from their platform, as the software used to recognize them is still in its immature phase and requires more research to be effective.
“Most videos are altered in some way before they land on social media so there is the potential of teething problems with false positives – or even letting a number of genuine deepfakes slip through the net. Not only do we need better software to recognize these digitally manipulated videos, we also need to make people aware that we are moving towards a time where we shouldn’t always believe what we see.”
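Moore’s point about false positives reflects how frame-level deepfake detectors are typically built: a classifier assigns each video frame a probability of being synthetic, and the threshold chosen for flagging trades missed fakes against wrongly flagged genuine edits. The sketch below is purely illustrative and is not Facebook’s system; it assumes PyTorch and torchvision, and the fake_probability helper, the ResNet-18 backbone and the frame_0001.jpg input are hypothetical stand-ins for a model that would need to be fine-tuned on labelled real and fake footage (such as the Deepfake Detection Challenge dataset) before its scores meant anything.

    import torch
    import torch.nn as nn
    from torchvision import models, transforms
    from PIL import Image

    # Illustrative only: a ResNet-18 backbone with its classification head
    # replaced by a single real-vs-fake output. In practice the head (and
    # usually the whole network) would be fine-tuned on labelled frames;
    # without that training, the scores below are meaningless.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 1)
    model.eval()

    # Standard ImageNet-style preprocessing for a single extracted frame.
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    def fake_probability(frame_path: str) -> float:
        """Return the model's estimated probability that a frame is synthetic."""
        frame = Image.open(frame_path).convert("RGB")
        batch = preprocess(frame).unsqueeze(0)  # shape: (1, 3, 224, 224)
        with torch.no_grad():
            logit = model(batch)
        return torch.sigmoid(logit).item()

    if __name__ == "__main__":
        # Hypothetical frame extracted from a video under review.
        score = fake_probability("frame_0001.jpg")
        # The flagging threshold is a policy choice: lowering it catches more
        # fakes but mislabels more genuine, routinely edited footage.
        print(f"Estimated probability frame is manipulated: {score:.2f}")

In a deployed pipeline, many frames per video would be scored and aggregated, and the decision threshold would be tuned against exactly the false-positive risk Moore describes.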
Facebook has also been active in deepfake detection, launching the Deepfake Detection Challenge last year and partnering with Reuters on a free online training course to help newsrooms identify deepfakes and manipulated media.

Source: Information Security Magazine