Software
Meta: AI Content in Election Misinfo < 1% on Its Apps
2024-12-03
At the beginning of the year, significant concern emerged that generative AI could be misused in global elections to spread propaganda and disinformation. As the year ends, Meta offers a different perspective: it says those fears did not materialize on its platforms. The company's findings are based on an extensive analysis of content related to major elections in the U.S., Bangladesh, Indonesia, India, Pakistan, France, the U.K., South Africa, Mexico, and Brazil, as well as the EU Parliament elections.
Meta's Policy Effectiveness
During the election period there were confirmed or suspected instances of improper AI use, but the volumes were relatively low, and Meta's existing policies and processes proved sufficient to reduce the risks associated with generative AI content. In fact, ratings on AI-related content during these major elections represented less than 1% of all fact-checked misinformation, indicating that Meta's measures were effective in curbing the spread of such content.
Meta's Imagine AI image generator also took a proactive stance, rejecting 590,000 requests to create images of key figures, including President-elect Trump, Vice President-elect Vance, Vice President Harris, Governor Walz, and President Biden, in the lead-up to election day. This was a crucial step in preventing the creation of election-related deepfakes.
Impact on Coordinated Influence Campaigns
Meta found that coordinated networks of accounts aiming to spread propaganda or disinformation gained only incremental productivity and content-generation benefits from using generative AI. Because the company focuses on the behavior of these accounts rather than the content they post, it was able to take down these covert influence campaigns effectively. Even with the use of AI, Meta's ability to detect and act against such activity remained intact.
Preventing Foreign Interference
To safeguard against foreign interference, Meta took down around 20 new covert influence operations worldwide. It noted that the majority of the disrupted networks lacked authentic audiences, and that some used fake likes and followers to appear more popular than they were. This underscores Meta's commitment to maintaining the integrity of its platforms and protecting electoral processes.
Meta also pointed a finger at other platforms, stating that false videos about the U.S. election linked to Russian-based influence operations were frequently posted on X and Telegram. This highlights the need for a collective effort to address the challenges generative AI poses to elections.
As it takes stock of the year, Meta says it will continue to review its policies and make any necessary changes in the coming months to keep pace with the evolving issues around generative AI and elections.