Microsoft, Google, Amazon and Other Tech Companies Join Forces to Combat Election-Related Deepfakes

A group of 20 leading tech companies has signed an accord to combat AI misinformation ahead of the 2024 elections, focusing specifically on deepfakes.


Tech Companies Sign Pact to Combat Deepfakes

A group of 20 tech companies, including Microsoft, Google, Amazon, and IBM, has come together to combat AI misinformation in the upcoming elections. Their efforts focus on deepfake technology, which can produce fabricated audio, video, and images designed to deceive voters or spread false voting information.

Deepfakes have become a serious concern in election campaigns around the world, with their use increasing by 900% year over year. The rise of AI-generated content has raised fears of election-related misinformation and the potential impact it can have on the democratic process.

By signing the accord, these tech companies are taking a proactive stance against the spread of deepfakes and committing to measures that detect and address the distribution of such content on their platforms.

Challenges in Combating Deepfakes

While the accord is a step in the right direction, effectively combating deepfakes poses many challenges. The technologies for identifying and watermarking deepfakes have not advanced quickly enough, and protective measures can still be bypassed.

Platforms also face the challenge of addressing AI-generated text, images, and videos in a fair and unbiased manner. Detecting AI-generated content is difficult, and existing detection tools often exhibit bias against non-native English speakers.

Furthermore, the invisible watermarking signals used in AI-generated images have not yet been widely adopted by audio and video generators, making deepfakes in those formats even harder to identify and combat.

Commitments Made by Participating Companies

The 20 tech companies that signed the accord have made eight high-level commitments to address the issue of deepfakes. These include assessing model risks, actively detecting and addressing the distribution of deepfake content on their platforms, and providing the public with transparency about their processes.

While the commitments are voluntary, they reflect the industry's recognition that secure and trustworthy elections matter. The participating companies are taking steps to tackle AI-generated election misinformation and protect the integrity of democratic processes.