Facebook, Kenya, and the Threat of Political Violence

FILE - Civilians flee as security forces aim their weapons at a hotel complex attacked by al-Shabab extremists, in Nairobi, Kenya on Jan. 15, 2019. Facebook has failed to catch Islamic State group and al-Shabab extremist content in posts aimed at East Africa as the region remains under threat from violent attacks and Kenya prepares to vote in a closely contested national election, according to a new study released Wednesday, June 15, 2022. (AP Photo/Khalil Senosi, File)

TWO WEEKS AGO, Mercy Ndegwa—a director of public policy at Meta, the parent company of Facebook—described in a blog post the steps the company was taking to “help ensure a safe and secure general election in Kenya” on August 9, amid concerns of election-related violence. Ndegwa wrote that Meta had been preparing for the vote for the past year and had been working hard to “reduce the spread of misinformation, detect and remove hate speech, improve digital literacy and help make political advertising more transparent,” among other steps. In the six months leading up to April 30, Ndegwa said, Facebook “took action on more than 37,000 pieces of content for violating our hate speech policies on Facebook and Instagram in Kenya,” and blocked or removed more than 42,000 items that contravened Meta’s policies against inciting violence.

Facebook, Ndegwa noted, would be paying particular attention to abuse involving female public figures, and was working with a team of people with experience in Kenya to understand and remove gender-based slurs in a number of local languages. She added that Facebook had a strict policy of requiring advertisers who wanted to run political ads in Kenya to “undergo a verification process to verify their identity and that they live in the country,” as well as other checks to ensure that their ads complied with Facebook’s policies. In the six months leading up to April 30, Ndegwa wrote, about 36,000 ad submissions targeted to Kenya were rejected before they ran because advertisers did not complete the authorization process or failed to include a disclaimer identifying the ad buyer.

A week later, researchers at Global Witness, a human-rights group, and Foxglove Legal, a British nonprofit that scrutinizes the relationships between tech companies and governments, released a report describing their attempts to buy ads targeting Kenya that included hate speech. The ads included dehumanizing language directed at specific tribal groups, as well as calls for violence—including rape and genocide—in both English and Swahili. All of the ads were eventually approved to run. “This follows a similar pattern we uncovered in Myanmar and Ethiopia,” the researchers wrote, “but for the first time also raises serious questions about Facebook’s content moderation capabilities in English.” In the past, Facebook has praised the “super-efficient AI models” the company uses to detect hate speech, Global Witness noted, adding that its report on hate-speech ads is “a stark reminder of the risk of hate and incitement to violence on their platform.” Days after the Global Witness report, Neha Wadekar reported for the Washington Post that “the prevalence of violent and inflammatory content on the platforms poses real risks in this East African nation, as it prepares for a bitterly contested presidential election.” Nanjala Nyabola, a Kenyan technology researcher, told the Post that content-moderation failures suggest “a deliberate choice to maximize labor and profit extraction, because they view the societies in the Global South primarily as markets, not as societies.”

In response to its findings, Global Witness reported, Meta acknowledged that “there will be instances where they miss things and take down content in error, as both machines and people make mistakes.” (Global Witness also suggested that Ndegwa’s blog post was written in response to its own report, and published early to preempt it.) Despite the detailed list of precautions Ndegwa described, Global Witness said it successfully submitted more ads calling for violence even after the blog post was published. Following the release of the Global Witness report, Danvas Makori, head of Kenya’s National Cohesion and Integration Commission, told a press conference that Facebook “is in violation of the laws of our country. They have allowed themselves to be a vector of hate speech and incitement, misinformation, and disinformation.” Makori gave Meta a week to comply with speech laws, or else the service would be blocked from operating in Kenya.

The day after Makori made this pronouncement, however, Joseph Mucheru, a cabinet secretary in the Kenyan government responsible for internet and communications technologies, posted a message on Twitter that seemed to contradict Makori’s warning. “Media, including social media, will continue to enjoy PRESS FREEDOM in Kenya,” he wrote. “Not clear what legal framework NCIC plans to use to suspend Facebook. Govt is on record. We are NOT shutting down the Internet.” Other government ministers made similar statements. Bridget Andere, African policy analyst at Access Now, a nonprofit human-rights group, told Wired magazine that the country lacked a legal framework by which NCIC might suspend Facebook in the country—and that extra-legal methods risked playing into the hands of authoritarian regimes.

“Platforms like Meta have failed completely in their handling of misinformation, disinformation, and hate speech in Tigray and Myanmar,” Andere told Wired. “The danger is that governments will use that as an excuse for internet shutdowns and app blocking, when it should instead spur companies toward greater investment in human content moderation.” Global Witness, meanwhile, said its research “points to a broken system. For one of the world’s wealthiest companies, with staggering reach and a responsibility not to facilitate division and harm, Facebook can and should do better.” The group added that in 2020, following pressure from advertisers to stop profiting from hate speech, Mark Zuckerberg, Meta’s CEO, said that the company was going to do more to tackle the problem. However, Global Witness noted that “our repeated findings — in Myanmar, Ethiopia and now Kenya — raise serious questions about whether these commitments were followed through.”

Here’s more on Facebook and hate speech:

  • Failure to remove: Documents leaked last year by Frances Haugen, a former Facebook employee, showed that staffers at the social media network repeatedly sounded the alarm on the company’s failure to remove or down-rank posts inciting violence in countries like Ethiopia, CNN reported. The documents show workers warned managers about how Facebook was being used by “problematic actors,” including states and foreign organizations, to spread hate speech and incite violence. CNN said the documents “also indicate that the company has, in many cases, failed to adequately scale up staff or add local language resources to protect people in these places.”
  • Complicity: In 2018, the United Nations categorized the beating, displacement, and killing of tens of thousands of Rohingya Muslims in Myanmar as a genocide. In a separate report, the agency concluded that Facebook “played a determining role” in the violence, by allowing members of the army and other anti-Rohingya elements to spread messages of hate and calls for violence. A group of six civil-society organizations in Myanmar wrote an open letter to Zuckerberg, saying the social network relied too much on third parties, failed to engage with local human-rights workers on important issues, and exhibited “a lack of transparency.”
  • Blame game: Global Witness pointed out in its report on Kenya that the acceptance of ads with hate speech and calls for violence “is not the fault of the individual content moderators, who all too often are asked to undertake deeply traumatising work—including in Kenya—with scant regard for their mental health and decent working conditions.” The group also noted that, earlier this year, a former moderator at Facebook filed a lawsuit in Kenya against Meta and its local partner, Sama, alleging moderators suffer from poor working conditions including “irregular pay, inadequate mental health support, union-busting, and violations of their privacy and dignity.”

By Mathew Ingram

Columbia Journalism Review
