Microsoft launches new deepfake-detection software ahead of US vote
06 September, 2020
Microsoft has unveiled software that can help spot “deepfake” images or videos, adding to the arsenal of programs designed to fight the hard-to-detect fakes ahead of the US presidential election.
The Video Authenticator software analyses a photo or each frame of a video, looking for evidence of manipulation that may be invisible to the naked eye.
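Microsoft has not published the detector’s internals. As a hedged illustration only, the sketch below shows how a frame-by-frame analysis of that kind might be driven in Python: OpenCV is assumed for frame extraction, and `score_frame` is a hypothetical placeholder standing in for whatever manipulation classifier a real tool would use.

```python
# Illustrative sketch only: not Microsoft's Video Authenticator.
# Assumes OpenCV (cv2) for frame extraction; score_frame() is a
# hypothetical placeholder for a per-frame manipulation classifier.
import cv2


def score_frame(frame) -> float:
    """Placeholder: a real detector would return the probability
    that this frame has been synthetically manipulated."""
    return 0.0


def analyse_video(path: str) -> float:
    """Return the highest per-frame manipulation score found in the video."""
    capture = cv2.VideoCapture(path)
    worst = 0.0
    while True:
        ok, frame = capture.read()
        if not ok:
            break  # end of video reached
        worst = max(worst, score_frame(frame))
    capture.release()
    return worst


if __name__ == "__main__":
    print(f"manipulation confidence: {analyse_video('clip.mp4'):.2f}")
```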
Deepfakes are photos, videos or audio clips altered using artificial intelligence to appear authentic, and they are already the target of initiatives at Facebook and Twitter.
“They could appear to make people say things they didn’t or to be places they weren’t,” a company blog post said on Tuesday.
Microsoft said it has partnered with the AI Foundation in San Francisco to help make the video authentication tool available to political campaigns, news outlets and others involved in the democratic process.
Deepfakes are part of the wider world of online disinformation, which authorities have warned can spread misleading or completely false messages.
Fake posts that appear to be real are of particular concern ahead of the US presidential election in November, especially after false social media posts proliferated during the 2016 vote that brought Donald Trump to power.
Microsoft also announced it has built technology into its Azure cloud computing platform that lets creators of photos or videos add data in the background that can be used to check whether imagery has been altered.
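Microsoft has not detailed the format of that Azure feature. The general idea, recording a cryptographic fingerprint of a file when it is published so that later copies can be checked against it, can be illustrated with a hedged sketch; the SHA-256 hash below stands in for whatever signing scheme the platform actually uses.

```python
# Illustrative sketch only: not the actual Azure provenance technology.
# Shows the general idea of recording a content hash at publication time
# and later checking a copy against it to detect alteration.
import hashlib
from pathlib import Path


def fingerprint(path: str) -> str:
    """Return a SHA-256 hex digest of the file's bytes."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()


def is_unaltered(path: str, published_digest: str) -> bool:
    """True if the file still matches the digest recorded at publication."""
    return fingerprint(path) == published_digest


if __name__ == "__main__":
    original = fingerprint("photo.jpg")              # recorded by the creator
    print(is_unaltered("photo_copy.jpg", original))  # False if the copy was modified
```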
The technology giant said it plans to test the program with media organisations including the BBC and the New York Times.
Microsoft is also working with the University of Washington and others to help people become better at distinguishing misinformation from reliable information.
“Practical media literacy can enable us all to think critically about the context of media and become more engaged citizens, while still appreciating satire and parody,” the Microsoft post said.