Meta is being called out for its complacency in preventing the spread of misinformation, hate speech and incitement content around the world on Facebook.
The social media giant has been criticized for fueling and amplifying racial, religious and ethnic violence in countries including Bangladesh, Indonesia, South Sudan and Sri Lanka.
Senate Select Committee on Intelligence Chair Mark R. Warner of Virginia sent a letter to Meta CEO Mark Zuckerberg pressing the company on its efforts to combat the spread of misinformation, hate speech and incitement content. According to reports, 84 percent of Facebook’s misinformation budget is allocated to the United States, where only 10 percent of its users live.
“In its pursuit of growth and dominance in new markets, I worry that Meta has not adequately invested in the technical, organizational, and human safeguards necessary to ensuring that your platform is not used to incite violence and real-world harm,” Warner wrote, pointing to evidence, acknowledged by Meta, that the platform was used to foment genocide in Myanmar. “I am concerned that Meta is not taking seriously the responsibility it has to ensure that Facebook and its other platforms do not inspire similar events in other nations around the world.”
According to Warner’s letter, Facebook supported more than 110 languages as of October 2021, while users and advertisers posted in more than 160 languages. Despite this, Facebook’s community standards are available in fewer than half of those languages. Facebook has said it uses artificial intelligence to identify hate speech in more than 50 languages and that native speakers review content in more than 70 languages.
“Setting aside the efficacy of Facebook’s AI solutions to detect hate speech and violent rhetoric in all of the languages that it offers, the fact that Facebook does not employ native speakers in dozens of languages officially welcomed on its platform is troubling — indicating that Facebook has prioritized growth over the safety of its users and the communities Facebook operates in,” Warner wrote, citing documents provided by Facebook whistleblower Frances Haugen. “Of particular concern is the lack of resources dedicated to what Facebook itself calls ‘at-risk countries’ — nations that are especially vulnerable to misinformation, hate speech, and incitement to violence.”
In Ethiopia, Facebook reportedly did not have systems to flag harmful posts in the country’s two most spoken languages. An internal report from March 2021 documented that armed groups in Ethiopia used Facebook to incite violence against ethnic minorities, recruit members and raise funds.
“In the wake of Facebook’s role in the genocide of the Rohingya in Myanmar — where UN investigators explicitly described Facebook as playing a ‘determining role’ in the atrocities — one would imagine more resources would be dedicated to places like Ethiopia. Even in languages where Meta does have experience, the systems in place appear woefully inadequate at preventing violent hate speech from appearing on Facebook,” Warner wrote, citing an investigation conducted by the non-profit Global Witness, which was able to post ads in Swahili and English ahead of the 2022 general elections in Kenya that violated Facebook’s stated Community Standards for hate speech and ethnic-based calls to violence.
Warner wrote that this incident is not isolated, and that Facebook’s impact “on fragile societies across the globe” poses regional and possibly global risks.