On 21 February 2018, Twitter suspended thousands of accounts on the social media and microblogging platform amid an outcry over alleged political censorship and rumors of a crackdown on automated bots masquerading as real people.
The company said in a statement:
Twitter’s tools are apolitical, and we enforce our rules without political bias. As part of our ongoing work in safety, we identify suspicious account behaviors that indicate automated activity or violations of our policies around having multiple accounts, or abuse. We also take action on any accounts we find that violate our terms of service, including asking account owners to confirm a phone number so we can confirm a human is behind it. That’s why some people may be experiencing suspensions or locks. This is part of our ongoing, comprehensive efforts to make Twitter safer and healthier for everyone.
Twitter also referred us to its list of enforcement options, including requiring account holders to provide a phone number or email address to prove they are legitimate.
Speculation mounted online that the suspended accounts were “bots” connected to Russian social media influence operations. The company did not say how many accounts were suspended or how many legitimate account holders were caught up in the increased scrutiny. However, Twitter did announce that it would no longer allow users to put up identical posts on multiple accounts, or to perform functions like retweeting or “liking” simultaneously from more than one account. A spokesperson, Yoel Roth, said:
These changes are an important step in ensuring we stay ahead of malicious activity targeting the crucial conversations taking place on Twitter — including elections in the United States and around the world.
That same day, the blogging platform Medium reportedly suspended the accounts of several relatively high-profile conspiracy theorists, one of whom released a video saying he planned to sue Medium for “civil rights violations” because it discriminated against him for being a white male.
However, Medium had already announced several changes to its community rules in a 7 February 2018 post:
We have all seen an increase and evolution of online hate, abuse, harassment, and disinformation, along with ever-evolving campaigns of fraud and spam. To continue to be good citizens of the internet, and provide our users with a trusted and safe environment to read, write, and share new ideas, we have strengthened our policies around this type of behavior.
The site’s guidelines state outright that content promoting violence or hatred is not allowed:
We do not allow content that promotes violence or hatred against people based on characteristics like race, ethnicity, national origin, religion, disability, disease, age, sexual orientation, gender, or gender identity.
We do not allow posts or accounts that glorify, celebrate, downplay, or trivialize violence, suffering, abuse, or deaths of individuals or groups. This includes the use of scientific or pseudoscientific claims to pathologize, dehumanize, or disempower others. We do not allow calls for intolerance, exclusion, or segregation based on protected characteristics, nor do we allow the glorification of groups which do any of the above.
We do not allow hateful text, images, symbols, or other content in your username, profile, or bio.
Novak, Matt. “Conservative Twitter Users Lose Thousands of Followers, Mass Purge of Bots Suspected [Updated].” Gizmodo. 21 February 2018.
Martineau, Paris. “Alt-Right Leaders Can No Longer Spread Disinformation on Medium.” The Outline. 21 February 2018.
Medium. “Updating Our Rules.” 7 February 2018.
Medium. “Medium Rules.” 7 February 2018.
Ingram, David. “Twitter Bars Tactics Used by ‘Bots’ to Spread False Stories.” Reuters. 21 February 2018.