Social media bots are fake accounts that use automated or semi-automated programming to infiltrate platforms and shape how humans behave online.
Using artificial intelligence to mimic and simulate human behavior, bots sow chaos across social media platforms. They include Twitter, where Elon Musk has called for more transparency in a highly publicized data dispute, and Facebook, where posts by bots have influenced people’s perception of politics (that type of conduct is also called coordinated inauthentic behavior).
Why Are Social Media Bots Used?
Groups or individuals create social media bots for a number of purposes. Companies may sell the fake accounts to other users for money, or political groups may use them to share content that aims to polarize and troll viewers.
By and large, there are two kinds of bots: automated ones, like those that automatically retweet a specific hashtag every time it is posted, and semi-automated bots, which are typically fake accounts that humans operate. Both types of bots can be used to propagate hate speech, spread propaganda, sway public opinion, and sell goods or services.
Some bots are programmed to boost engagement or follower count, while others are meant to spur insidious speech and prompt nefarious action. Either way, social media bots can be a serious problem — particularly because social media users are often unable to distinguish between them and accounts operated by real people.
A peer-reviewed study from Stony Brook University published in 2021 analyzed more than 3 million tweets authored by 3,000 bot accounts and compared their language with that of tweets from 3,000 genuine accounts. Examined individually, the bot accounts appeared to be operated by humans. But when the researchers analyzed the accounts as a whole, they realized the accounts were seemingly clones of one another.
In recent years, bot usage has increased, and with that increase, cybersecurity experts have tried to sound the alarm about the threat bots pose to our digital ecosystem. The European Commission launched its Action Plan Against Disinformation in 2018 specifically to address social media bots as a technique to “spread and amplify divisive content and debates on social media” and to disseminate dis- and misinformation. Similarly, the U.S. Department of Homeland Security (DHS) has launched efforts to combat disinformation on social media, including tips for identifying bot accounts.
Why Identify Bots?
Identifying social media bots is important not only to inhibit the spread of false information; routinely removing bot followers from your profile could also boost your account’s ranking on a platform.
With few or no bot followers, your profile’s content is more likely to appear at the top of feeds, creating opportunities for you to receive “likes”, “retweets”, “shares” or “comments” (i.e., “engagement”), depending on each site’s algorithm.
In other words, though removing bot accounts from your follower lists may decrease your total number of followers, such actions will ensure that those who do follow you are human and engage with your content in a meaningful manner.
What Are Some Common Bot Behaviors?
The DHS Office of Cyber and Infrastructure Analysis described common methods by which social media bots influence or engage with humans online, also known as “attacks,” such as:
- Click or like farming. Bots inflate the popularity of a website or account by liking or reposting its content. Operators of these bots also sell similar fake accounts to people who want to boost their follower numbers.
- Hashtag hijacking. This method uses hashtags to focus an attack on a specific audience using the same hashtag.
- Repost storms. This happens when a parent social media bot account initiates an attack, and then a group of bots instantly reposts that attacking post.
- Sleeper bots. These are bots that remain dormant and then wake up to do a series of posts or retweets over a short period of time.
- Trend jacking. This method uses trending topics to focus an attack on an intended audience.
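Several of the patterns above, such as repost storms and sleeper bots waking up, share a signature: a burst of near-simultaneous activity. A minimal sketch of how a platform or researcher might flag such a burst follows. The function name, the timestamp input format, and the thresholds are all illustrative assumptions, not any platform’s actual detection logic.

```python
from datetime import datetime, timedelta

def detect_repost_storm(repost_times, window_seconds=60, threshold=20):
    """Flag a repost storm: `threshold` or more reposts of the same parent
    post falling inside a sliding window of `window_seconds`.
    All names and threshold values here are illustrative, not a standard."""
    times = sorted(repost_times)
    window = timedelta(seconds=window_seconds)
    start = 0
    for end in range(len(times)):
        # Shrink the window from the left until it spans <= window_seconds.
        while times[end] - times[start] > window:
            start += 1
        if end - start + 1 >= threshold:
            return True
    return False
```

With these assumed defaults, 25 reposts arriving within 10 seconds of each other would be flagged, while the same 25 reposts spread across a couple of hours would not; real systems would tune such thresholds against observed human behavior.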
If you come across any of the above-mentioned patterns, we recommend reporting the suspicious activity to administrators of the hosting social media platform.
What Do Bot Accounts Look Like?
Spotting a bot can be a tedious task.
You could try a bot-detection tool like Botometer, which describes itself as a “machine learning algorithm trained to calculate a score where low scores indicate likely human accounts and high scores indicate likely bot accounts.” But there are limitations to those types of services; some things only a human eye can catch.
Here are a few questions that the Snopes engagement team considers when weeding through our lists of followers on Instagram, Twitter and Facebook, and removing bot accounts:
- Does the account have a profile pic? Sometimes a bot account will lack a profile picture. Spotting an account without one is often our first indication that it may be inauthentic.
- Does the account use a generic username? Often with bot accounts, we’ll see a generic username that was likely created as part of an automated system. These often combine names and then end in a series of numbers.
- What is the follower count? Bot accounts tend to have lower follower counts than authentic accounts, often just a couple dozen followers.
- What are the account’s privacy settings? Some bot accounts will have privacy settings on high, giving no or limited public access to their profile.
- How many posts have they shared? For instance, on Instagram, bot accounts often only have a few grid images.
- What is the quality of the content shared? On Twitter, for example, bot accounts may be programmed to automatically re-share posts with a certain hashtag. If an account appears to only share a certain type of content, we are suspicious of it.
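The checklist above is essentially a rule-based scoring system, much like the numeric scale Botometer describes. A minimal sketch of that idea follows; the account field names, weights, and thresholds are hypothetical assumptions for illustration and do not correspond to any platform’s real API or to Botometer’s actual model.

```python
def bot_suspicion_score(account):
    """Score an account dict against bot-checklist heuristics.
    Field names and cutoffs are illustrative assumptions, not a
    real platform API. Higher scores suggest a more bot-like account."""
    score = 0
    if not account.get("has_profile_picture", True):
        score += 1  # missing profile photo
    username = account.get("username", "")
    # Generic auto-generated handles often end in a long run of digits.
    trailing_digits = len(username) - len(username.rstrip("0123456789"))
    if trailing_digits >= 4:
        score += 1
    if account.get("follower_count", 0) < 30:
        score += 1  # only a couple dozen followers
    if account.get("is_private", False):
        score += 1  # locked-down profile with little public access
    if account.get("post_count", 0) < 5:
        score += 1  # only a handful of posts shared
    return score
```

In practice, a score at or above some cutoff (say, 3 of the 5 signals) might prompt a closer manual look rather than an automatic removal, since any one signal alone is common among genuine accounts too.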
This page is part of an ongoing effort by the Snopes newsroom to teach the public the ins and outs of online fact-checking and, as a result, strengthen people’s media literacy skills. Misinformation is everyone’s problem. The more we can all get involved, the better job we can do combating it. Have a question about how we do what we do? Let us know.
“A Collection of Tips for Combating Online Misinformation Like a Pro.” Snopes.Com, https://www.snopes.com/collections/international-fact-checking/. Accessed 23 July 2022.
Botometer by OSoMe. https://botometer.iuni.iu.edu. Accessed 23 July 2022.
DHS’ Coordination Efforts to Combat Disinformation on Social Media | Office of Inspector General. https://www.oig.dhs.gov/node/6297. Accessed 23 July 2022.
“In Reversal, Twitter Plans to Comply with Musk’s Demands for Data.” Washington Post, https://www.washingtonpost.com/technology/2022/06/08/elon-musk-twitter-bot-data/. Accessed 23 July 2022.
Martini, Franziska, et al. “Bot, or Not? Comparing Three Methods for Detecting Social Bots in Five Political Discourses.” Big Data & Society, vol. 8, no. 2, July 2021, p. 205395172110335. DOI.org (Crossref), https://doi.org/10.1177/20539517211033566.
“Snopes Tips: How to Spot Coordinated Inauthentic Behavior.” Snopes.Com, https://www.snopes.com/articles/385721/coordinated-inauthentic-behavior-2/. Accessed 23 July 2022.
“Snopestionary: Misinformation vs. Disinformation.” Snopes.Com, https://www.snopes.com/articles/386830/misinformation-vs-disinformation/. Accessed 23 July 2022.
“Snopestionary: What Is ‘Coordinated Inauthentic Behavior’?” Snopes.Com, https://www.snopes.com/articles/366947/coordinated-inauthentic-behavior/. Accessed 23 July 2022.
Social Media ‘bots’ Tried to Influence the U.S. Election. Germany May Be Next. https://www.science.org/content/article/social-media-bots-tried-influence-us-election-germany-may-be-next. Accessed 23 July 2022.
“Study Suggests New Strategy to Detect Social Bots.” SBU News, 30 Nov. 2021, https://news.stonybrook.edu/homespotlight/study-suggests-new-strategy-to-detect-social-bots/.
Uyheng, Joshua, and Kathleen M. Carley. “Bots and Online Hate during the COVID-19 Pandemic: Case Studies in the United States and the Philippines.” Journal of Computational Social Science, vol. 3, no. 2, 2020, pp. 445–68. PubMed Central, https://doi.org/10.1007/s42001-020-00087-4.