Abusive Tweets Hurled at Women Every 30 Seconds: Report

Abusive tweets are sent to women on Twitter every 30 seconds on average, and minority women are harassed more often, according to an Amnesty International report released Tuesday.

Volunteers for Troll Patrol, a crowdsourcing project set up by Amnesty International to process large-scale data about online abuse, sorted through 288,000 tweets sent to 778 women politicians and journalists in the UK and the United States last year.

More than 6,500 volunteers from 150 countries signed up to participate in Troll Patrol. They spent 2,500 hours analyzing the tweets.

The subjects, who spanned the political spectrum from liberal to conservative, all had active, unprotected Twitter accounts with fewer than a million followers.

Software firm Element AI used advanced data science and machine learning techniques to extrapolate data about the scale of abuse women face on Twitter.
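Element AI has not published its methods, so the sketch below is only a rough illustration of the general approach the report describes: train a text classifier on the volunteer-labeled tweets, score the remaining corpus, and scale the predicted rate up to the full volume of tweets. The sample tweets, labels, corpus and total volume here are all hypothetical.

```python
# Hypothetical illustration only; Element AI's actual pipeline is not public.
# Idea: learn from volunteer-labeled tweets, then extrapolate to the full corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny made-up sample standing in for the 288,000 volunteer-labeled tweets
# (1 = abusive or problematic, 0 = neither).
labeled_tweets = [
    ("You should be attacked for saying that", 1),
    ("Great reporting on the election results", 0),
    ("Go back to where you came from", 1),
    ("Looking forward to your next column", 0),
]
texts, labels = zip(*labeled_tweets)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Score a (much larger) unlabeled corpus and extrapolate the abuse rate.
unlabeled_corpus = ["Thanks for covering this story", "You deserve to be hurt"]
rate = model.predict(unlabeled_corpus).mean()

total_tweets_received = 15_000_000  # placeholder; the article gives no total
print(f"Estimated abusive/problematic tweets: {rate * total_tweets_received:,.0f}")
```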

Key findings:

  • Black women were 84 percent more likely than white women to be mentioned in abusive or problematic tweets;
  • In general, women of color were 34 percent more likely to be mentioned in abusive or problematic tweets than white women;
  • The abuse was targeted at both liberal and conservative women politicians, as well as women in both left- and right-leaning media; and
  • Problematic or abusive tweets constituted 7.1 percent of the tweets sent to the study participants, which, when extrapolated, works out to 1.1 million tweets sent to the 778 participants over the year, or one every 30 seconds (a quick arithmetic check follows this list).
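The “one every 30 seconds” figure follows directly from the report’s numbers; here is the back-of-the-envelope arithmetic:

```python
# Sanity check of the "one every 30 seconds" figure using the report's numbers.
abusive_or_problematic = 1_100_000      # extrapolated tweets sent to the 778 women in a year
seconds_in_a_year = 365 * 24 * 60 * 60  # 31,536,000

interval = seconds_in_a_year / abusive_or_problematic
print(f"One abusive or problematic tweet every {interval:.1f} seconds")  # ~28.7
```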

“Troll Patrol means we have the data to back up what women have long been telling us — that Twitter is a place where racism, misogyny and homophobia are allowed to flourish basically unchecked,” said Milena Marin, the project’s senior advisor for tactical research.

“Women of color were much more likely to be impacted, and black women are disproportionately targeted,” Marin noted. “Twitter’s failure to crack down on this problem means it is contributing to the silencing of already marginalized voices.”

The report “is very disturbing,” Rob Enderle, principal analyst at the Enderle Group, told the E-Commerce Times.

Abusive and Problematic Tweets

Here is how Amnesty International defines abusive and problematic tweets:

  • “Abusive tweets include content that promotes violence against or threats of people based on their race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease. Examples include physical or sexual threats, wishes for the physical harm or death, reference to violent events, behaviour that incites fear or repeated slurs, epithets, racist and sexist tropes, or other content that degrades someone”; and
  • “Problematic tweets contain hurtful or hostile content, especially if repeated to an individual on multiple occasions, but do not necessarily meet the threshold of abuse. Problematic tweets can reinforce negative or harmful stereotypes against a group of individuals (e.g. negative stereotypes about a race or people who follow a certain religion).”

Abusive content violates Twitter’s rules, Amnesty International pointed out.

“Policing social media is a thankless and near-impossible task,” noted Laura DiDio, principal at ITIC.

“Where do you even begin to police and track millions of users and billions of daily tweets? Some things are bound to fall through the cracks even if you have a veritable army of ‘good taste police’ monitoring the tweets,” she told the E-Commerce Times.

It is possible to have “basic policies and guidelines around foul, offensive, racist, or threatening language,” said DiDio, “but there are subtleties, differences in languages, idioms and emojis which muddy the waters and make for gray areas.”

Amnesty International “has repeatedly asked Twitter to publish data regarding the scale and nature of abuse on their platform, but so far the company has failed to do so,” said Troll Patrol’s Marin.

“Troll Patrol isn’t about policing Twitter or forcing it to remove content,” she pointed out. “We are asking it to be more transparent, and we hope that the findings from Troll Patrol will compel it to make that change. Crucially, Twitter must start being transparent about how exactly they are using machine learning to detect abuse, and publish technical information about the algorithm.”

Twitter has been developing machine learning tools that identify and take action on networks of spammy or automated accounts automatically.

Twitter’s Response

“With regard to your forthcoming report, I would note that the concept of ‘problematic’ content for the purposes of classifying content is one that warrants further discussion,” Vijaya Gadde, Twitter’s legal, policy and trust and safety global lead, told Amnesty International when the organization shared its findings with the company.

“It is unclear how you have defined or categorized such content, or if you are suggesting it should be removed from Twitter,” she continued. “We work hard to build globally enforceable rules and have begun consulting the public as part of the process — a new approach within the industry.”

That response does not sit well with Enderle.

“When presented with evidence of abuse at massive scale, Twitter wants to define terms rather than focusing on ending the abuse,” he fumed. “The firm looks like it’s being run by misogynistic idiots.”

How Twitter Fights Abuse

Twitter “uses a combination of machine learning and human review to adjudicate abuse reports and whether they violate our rules,” Gadde said. “Context matters when evaluating abusive behavior and determining appropriate enforcement actions.”

Some factors Twitter takes into consideration (see the sketch after this list for one way such signals might combine):

  • Whether the target is an individual or a group of people;
  • Whether the report was filed by the target or a bystander; and
  • Whether the behavior is newsworthy and in the legitimate public interest.
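Twitter has not said how it weighs these signals; purely as an illustration, the sketch below shows one way an abuse-report triage step could combine a machine learning score with such context factors. Every field name, weight and threshold here is hypothetical.

```python
# Hypothetical triage sketch; Twitter's actual system, weights and
# thresholds are not public.
from dataclasses import dataclass

@dataclass
class AbuseReport:
    ml_abuse_score: float       # classifier confidence that the tweet is abusive
    target_is_individual: bool  # individual targets weigh heavier than groups
    reported_by_target: bool    # first-person reports weigh heavier than bystander ones
    public_interest: bool       # newsworthy content gets human review instead

def triage(report: AbuseReport) -> str:
    score = report.ml_abuse_score
    if report.target_is_individual:
        score += 0.1
    if report.reported_by_target:
        score += 0.1
    if report.public_interest:
        return "human review"   # context matters: never auto-enforce here
    if score >= 0.9:
        return "enforce"
    if score >= 0.6:
        return "human review"
    return "no action; suggest block or mute to the reporter"

print(triage(AbuseReport(0.85, True, True, False)))  # -> enforce
```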

“Abuse, malicious automation and manipulation detract from the health of Twitter,” Gadde noted. “We are committed to holding ourselves publicly accountable towards progress in this regard.”

The social media site provides follow-up notifications to individuals who report abuse, Gadde said. It also provides recommendations for additional actions that individuals can take to improve their Twitter experience, such as using the block or mute feature.

“I’d set up a deep learning system with oversight and be transparent about the rules and actions,” Enderle suggested. Twitter also should “provide an escalation path for those who feel they are unfairly blocked, and modify as needed.”

Twitter “is well aware that it is always a tweet away from being caught in the crossfire between the opposing views of conservatives and liberals, who clash loudly and often,” DiDio said, “so it must take a measured and cautious approach.”

Richard Adhikari

Richard Adhikari has been an ECT News Network reporter since 2008. His areas of focus include cybersecurity, mobile technologies, CRM, databases, software development, mainframe and mid-range computing, and application development. He has written and edited for numerous publications, including InformationWeek and Computerworld. He is the author of two books on client/server technology.
