18 December 2018

Women across political spectrum receive Twitter abuse – study

The sample of women selected included all UK parliamentarians and members of the US Congress and Senate, as well as journalists from the Daily Mail, Gal Dem, The Guardian, Pink News, The Sun, Breitbart and the New York Times. (Kacper Pempel/Reuters)

Women of colour bear the brunt of Twitter abuse, according to a joint study conducted by Amnesty International and global artificial intelligence software company Element AI.

The study, the largest of its kind into online abuse against women on Twitter, was released on Tuesday and found that women of colour were 34% more likely to be mentioned in abusive or problematic tweets than white women.

According to the study, black women were disproportionately targeted — being 84% more likely than white women to be mentioned in abusive or problematic tweets.

During the course of the study, researchers surveyed millions of tweets received by 778 journalists and politicians from the United Kingdom and the United States throughout 2017 “representing a variety of political views, and media spanning the ideological spectrum”.


The sample of women selected included all UK parliamentarians and members of the US Congress and Senate, as well as journalists from the Daily Mail, Gal Dem, The Guardian, Pink News, The Sun, Breitbart and the New York Times.

The study found that online abuse targets women from across the political spectrum, with politicians and journalists facing similar levels of online abuse. Both liberals and conservatives were targeted, researchers found.

Just over 7% of tweets sent to the women in the study were deemed problematic or abusive. According to the study, this amounts to just over one million problematic or abusive mentions of the 778 women across the year — or one every 30 seconds on average.
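The "one every 30 seconds" figure follows directly from the study's annual total. A minimal sanity check of that arithmetic (treating "just over one million" as roughly 1.1 million, an approximation for illustration):

```python
# Sanity check of the study's headline arithmetic.
# Figures are from the article; 1.1 million is an assumed reading of
# "just over one million" abusive or problematic mentions across 2017.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60        # 31,536,000 seconds

abusive_mentions = 1_100_000                  # assumed approximation

# Average gap between abusive/problematic mentions, in seconds
interval = SECONDS_PER_YEAR / abusive_mentions
print(round(interval))                        # roughly 29 -- i.e. one every ~30 seconds
```

With any total in the region of one million mentions per year, the average interval works out to around 30 seconds, consistent with the study's framing.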

Amnesty International said it has “repeatedly asked Twitter to make available meaningful and comprehensive data regarding the scale and nature of abuse on their platform, as well as how they are addressing it”.

“Such data will be invaluable for anyone seeking to understand and combat this barrier to women’s human rights online. In light of Twitter’s refusal to do so, it is our hope that this project will shed some insight into the scale of the problem.”

The organisation referred to the current “watershed moment”, during which women around the world have spoken out against abuse using social media platforms. Amnesty International lambasted Twitter for failing to effectively tackle violence and abuse on the platform.

This has “a chilling effect on freedom of expression online and undermines women’s mobilisation for equality and justice — particularly groups of women who already face discrimination and marginalisation,” the organisation said.

The study follows on from Amnesty International’s Toxic Twitter report, which was launched in March. The report outlined the human rights harms facing women on the social media platform and proposed steps that Twitter could take to address this.

The new study also contains a series of recommendations to Twitter, including that the company should share comprehensive information about the nature and levels of violence and abuse against women on the platform, and how it responds to them.


Amnesty International further recommended that Twitter should improve its reporting mechanisms, provide more clarity about how it identifies violence and abuse, and undertake “far more proactive measures” in educating users about security and privacy features on the platform.

The study points to the pitfalls of platforms like Twitter using automated systems to help manage online abuse.

“As it stands, automation may have a useful role to play in assessing trends or flagging content for human review, but it should, at best, be used to assist trained moderators, and certainly should not replace them,” the study found.

“Human judgment by trained moderators remains crucial for contextual interpretation, such as examination of the intent, content and form of a piece of content, as well as assessing compliance with policies.”