How Many Fake Accounts Are on X (Twitter)?
If you’ve spent any time on X (formerly known as Twitter), you’ve likely encountered bots. These are accounts that aren’t run by real people – instead, they’re automated by software. Some bots are fairly harmless or even useful (like weather alert or news bots), but many fake accounts are created for less noble reasons. They might spam you with unwanted ads, try to scam users with fake offers, spread misleading information, or artificially boost follower counts and trending topics. In short, bots can make the experience on X worse by polluting real conversations and distorting what’s popular or true.
Why do bots matter? For one, they can skew our perception of what’s happening online. If thousands of automated accounts are pushing a certain topic or repeatedly liking a post, it might seem like a trend or a widely held opinion, even if it’s just a coordinated campaign. Bots also matter to anyone using X for business, marketing, or networking, because fake engagement isn’t just benign fluff; it can influence real decisions (like which news to believe or where ad money goes). Elon Musk famously called spam bots the “single most annoying problem” on the platform and made eliminating them a rallying cry when he moved to buy Twitter. (He even joked that if he got a Dogecoin for every crypto scam bot he saw, he’d have 100 billion Dogecoin!) Clearly, fake accounts have drawn enough attention to spark debate at the highest levels.
So, how many accounts on X are actually fake? The honest answer: it depends who you ask and how you define “fake.” Let’s dig into what various experts and studies have estimated, and why the numbers vary so much.
How Many Accounts on X Are Fake? Estimates Vary
X’s official stance (back when it was Twitter) was that fake or spam accounts make up less than 5% of its user base. Twitter repeatedly reported in regulatory filings that under 5% of its “monetizable” daily active users were false or spam accounts. In other words, of all the active users who see ads, at least 19 out of 20 would be real people. However, many people, including researchers and Twitter’s new owner, have challenged that number as unrealistically low.
Independent researchers have long suspected a higher bot presence. For example, a 2017 academic study estimated that roughly 9% to 15% of Twitter profiles were bots. In early 2022, an analysis by Cyabra (a tech company that detects inauthentic behavior) put the figure at around 11–14% of Twitter accounts. Those estimates already double or triple the official 5% claim, and they still might be conservative.
When Elon Musk was considering buying Twitter in 2022, he publicly questioned the 5% figure. Musk asserted that fake and spam accounts could make up at least 20% of the user base – possibly more. This wasn’t just idle talk; it became a sticking point in the acquisition deal. Musk’s camp argued that Twitter’s value (and ad reach) would be lower if a significant chunk of users were bots. In other words, if one in five accounts (or more) is fake, that’s a big problem.
Around the same time, two social media analysis firms – SparkToro and Followerwonk – teamed up for a deep dive into Twitter’s active users. Their May 2022 study examined over 44,000 random active accounts and found that approximately 19.4% of active Twitter accounts were likely fake or spam. That’s nearly one in five accounts, lining up with Musk’s ballpark figure and about four times the official claim. Importantly, SparkToro noted they used a “conservative” definition of fake (focusing on truly inactive or bot-like accounts), so the real percentage might even be a bit higher.
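To get a feel for why a sample of that size is statistically meaningful, here is a back-of-the-envelope sketch in Python. It assumes a simple random sample and perfect bot/human classification, which is cruder than the study’s actual methodology; only the roughly 44,000-account sample size and the 19.4% result come from the study itself.

```python
import math

# Figures reported in the SparkToro/Followerwonk write-up (May 2022)
sample_size = 44_000       # active accounts examined (approximate)
observed_rate = 0.194      # share judged likely fake or spam

# Normal-approximation 95% confidence interval for a proportion.
# Assumes a simple random sample and perfect classification, so treat
# this as a rough sanity check, not the study's own math.
standard_error = math.sqrt(observed_rate * (1 - observed_rate) / sample_size)
margin = 1.96 * standard_error

low, high = observed_rate - margin, observed_rate + margin
print(f"Sampling error alone: roughly {low:.1%} to {high:.1%}")
# -> roughly 19.0% to 19.8%, far above the official sub-5% figure
```

The point of the arithmetic is that pure sampling noise cannot explain the gap between roughly 19% and the official sub-5% claim; the disagreement comes from definitions and detection methods, which the next paragraphs dig into.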
To round out the range of opinions, there have been some extreme estimates too. In mid-2022, a former FBI security specialist named Dan Woods argued that Twitter’s bot problem was massively underestimated. After analyzing the platform’s traffic, he speculated that over 80% of Twitter accounts could be bots. That would mean four out of five accounts aren’t genuine – an eye-popping claim that far exceeds most other estimates. Most experts view that 80% figure with skepticism, as it likely uses a very broad definition of “bot.” Still, it shows just how little consensus there is about the true scale of fake accounts.
Why do these estimates differ so much? One big reason is definitions. Twitter’s 5% figure counted accounts that engage in spam or platform manipulation, a narrow slice of bad actors the company could detect. Outside researchers often cast a wider net, counting accounts that post inorganic content, exhibit bot-like patterns, or are simply inactive. For instance, SparkToro’s analysis defined “fake” accounts as those not operated by an actual human, including spam bots, propagandists, and dormant accounts that never see your posts. Include all of those categories and you naturally get a higher percentage than Twitter’s limited definition.
Another reason is methodology. Different studies use different data samples and techniques. Some look at specific subsets of users (like active English-speaking accounts, or accounts tweeting about certain topics), which can skew results. Bot activity isn’t evenly spread across X; it concentrates in certain areas, like cryptocurrency discussions, politics, or hijacked trending hashtags. In fact, one analyst found that while about 10% of Twitter’s overall users were fake, the proportion jumped to nearly 20% for tweets about crypto. In short, there’s no single agreed-upon number, but a reasonable takeaway is that well above 5% of X accounts are likely fake. Many experts put the figure in the 10–20% range, and some believe it’s even higher.
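To see how sampling choices alone can move the number, here is a small illustrative calculation. The topic shares and per-topic bot rates below are invented assumptions; only the rough contrast of about 10% overall versus about 20% in crypto conversations echoes the analysis mentioned above.

```python
# Illustrative only: the topic shares and per-topic bot rates are invented.
topics = {
    # topic: (share of sampled tweets, assumed bot rate within that topic)
    "crypto": (0.10, 0.20),
    "politics": (0.15, 0.15),
    "everything else": (0.75, 0.0767),
}

platform_wide = sum(share * rate for share, rate in topics.values())
print(f"Blended platform-wide bot rate: {platform_wide:.1%}")            # about 10%

crypto_only = topics["crypto"][1]
print(f"Bot rate if you only sample crypto tweets: {crypto_only:.1%}")   # 20.0%
```

Two honest studies can therefore land on very different numbers simply because they sampled different slices of the conversation.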
And what about now, under Elon Musk’s ownership? Musk vowed to defeat “spam bots” and even claimed success early on, but the jury is still out. By late 2023, researchers were observing that bot activity remained rampant, possibly worse than before. One analysis of roughly a million posts made during the first Republican primary debate of the 2024 U.S. election cycle found networks of fake accounts as active as ever, amplifying false narratives. In other words, despite new policies and mass suspensions, fake accounts continue to thrive on X. The battle against bots is ongoing, and their true numbers remain a moving target.
Real-World Impacts of Fake Accounts on X
Why should we care about a bunch of bots on a social network? Unfortunately, fake accounts can have very real effects on the user experience and even on society at large. Here are a few ways bots affect X in daily use:
Spam and Scams: Many bots exist purely to bombard people with promotions or fraudulent schemes. If you post about popular topics (or if you’re a high-profile account), you might notice weird replies with links to sketchy products, crypto giveaways, or phishing sites. These are classic spam bots. They clutter up conversations and can trick unsuspecting users into clicking malicious links. Crypto scam bots have been especially notorious on Twitter, impersonating celebrities and promising giveaways in order to steal cryptocurrency from users. (These are the very bots behind Musk’s quip about getting a Dogecoin for every crypto scam he saw.)
Misinformation and Fake News: Bots are often used as force-multipliers for spreading false or misleading information. A single person (or a small group) can control thousands of bot accounts to amplify a lie or conspiracy theory, making it trend and seem more credible. We’ve seen this with election disinformation, public health rumors, and more. For example, during the 2016 U.S. election, tens of thousands of automated accounts linked to Russia pumped out divisive content and “fake news” to influence voters. More recently, during election debates and big political moments, researchers have found swarms of bots pushing hashtags like “#RiggedElection” or praising specific candidates. By the time the platform removes these accounts (if they remove them at all), the false narratives have already reached millions of people. In short, bots can warp public discourse and make fringe ideas appear mainstream.
Inflated Trends and Engagement: Have you ever wondered how certain obscure hashtags suddenly trend, or why a low-quality tweet sometimes racks up an unnatural number of likes and retweets? It’s often bot activity at work. Bot networks can manipulate engagement metrics – they follow, like, and retweet in coordinated ways to create a false impression of popularity. Some shady marketers and influencers even purchase bot followers or bot engagement to boost their numbers. The result: the trending section or your feed might show content that’s not organically popular at all. This not only misleads users, but it also pressures creators or businesses to chase fake virality. Legitimate users and brands have to compete with an army of fake accounts hyping each other up, which can be frustrating.
Harassment and Replies: Beyond big-picture trends, bots can directly affect your personal experience on X. Bot accounts have been used to harass or troll individuals by bombarding them with negative replies. Let’s say a customer posts a complaint about a product – a bot network might flood the replies defending the company or attacking the user, making it look like lots of people hold that view. Similarly, bots can mass-report an account they dislike in an attempt to get it suspended. These tactics can create a hostile environment and silence real voices.
It’s worth noting that not all automated accounts are bad. Some bots serve useful purposes, like posting updates from news sites, tracking weather emergencies, or even posting fun content (there are bots that share art or random trivia, for example). These kinds of accounts operate automatically but transparently, and many users enjoy or rely on them. The real problem is with fake accounts designed to deceive, manipulate, or abuse. Those are the bots that give the whole bot ecosystem a bad name and make X a worse place to be.
A Note on Political Influence
Bots and fake accounts have a particularly troubling impact on politics and civic discourse. We touched on misinformation, but it goes beyond just false stories. Political bot networks can create illusory popularity for candidates or policies. For instance, a politician might seem to have an enthusiastic online fan base – but if a chunk of that “support” is from bots, it’s smoke and mirrors. There have been cases where bot armies, numbering in the thousands, were unleashed to heap praise on one candidate and smear opponents, all while looking like ordinary folks chiming in. This can sway public perception, dominate conversations, and even intimidate real users who feel outnumbered by what appears to be a crowd of zealots.
Authoritarian governments and well-funded groups have also been known to deploy bots for propaganda. In various countries, fake accounts have been caught spreading government messaging or drowning out dissenting voices. During international conflicts or elections, the volume of bot-driven tweets often surges. All of this matters because it can subtly influence undecided voters or shift the range of what ideas people think are popular. While bots alone probably don’t change someone’s deep-held beliefs, they do shape the online environment in which we form opinions. For a small business owner or everyday user, the key takeaway is to be aware: on social media, the loudest “grassroots” waves of support or outrage might not be as organic as they look.
Why Bots Matter for Small Businesses on X
If you run a small business or manage a brand’s presence on X, you might wonder: “Okay, bots are out there – but do they really affect me or my marketing?” The answer is yes, in several important ways:
Wasted Ad Spend: X, like other social platforms, makes money from advertising. As a small business paying for ads or promoted posts, you want real humans (potential customers) to see and click your content. But if a chunk of those impressions or clicks comes from bots, that’s money down the drain: fake accounts don’t buy products or services, yet bots do end up “viewing” and even clicking ads. Studies of digital advertising have found that a significant percentage of online ad traffic comes from non-humans, and billions of dollars are lost every year to ad fraud, much of it driven by bots generating fake clicks. For a small business with a tight ad budget, this invalid traffic hurts your marketing ROI. A campaign might show decent click numbers, but if many of those clicks came from bots, you’ll be left wondering why nobody converts to a sale; those “people” were never real in the first place. (A rough arithmetic sketch of this effect follows this list.)
Skewed Analytics: Even if you’re not running ads, you probably track your X engagement metrics closely (followers, likes, retweets, link clicks, etc.). Bots can mess with those analytics. Say you gain 500 new followers after a campaign; that looks great, but if 100 of them are bot accounts, your follower growth is artificially inflated. Similarly, a post might rack up an impressive number of retweets, yet some could come from bot accounts that auto-share content. This makes it harder to gauge what actually resonated with real customers, and it can lead to misguided decisions: you might pour effort into replicating a promotion that a lot of accounts engaged with, not realizing half the engagement was bots and the real audience wasn’t that excited. In short, bots are noise in your social media data, and you need to account for that (the sketch after this list shows one simple way to discount for it).
Follower Quality and Trust: On social media, the quality of your followers often matters more than the quantity. Having 10,000 followers is nice – but not if 5,000 of them are fake profiles with cartoon avatars and zero purchasing power. A high bot follower count can even hurt your brand’s credibility. Savvy users can tell if an account’s followers look fishy (for instance, if a business with a local customer base somehow has thousands of followers from random countries or lots of obviously fake profiles, it raises eyebrows). Potential customers or partners might question if you’ve artificially inflated your influence. Plus, if you ever try to do a targeted campaign (say, offering a promo to your followers), bot followers are wasted reach; they won’t engage or convert.
Customer Experience: Fake accounts can interfere with how real customers interact with your brand on X. For example, a common tactic is for scam bots to reply to customer complaints by impersonating customer support. A bot might pretend to represent your company and ask the upset customer to “click here for help,” leading them to a phishing site. This not only puts your customers at risk, it also damages your reputation if people can’t tell the difference between your real account and imposter bots. Additionally, if your brand’s tweets get swarmed by spam replies, it can be off-putting. Genuine customers might not bother wading through a thread full of junk to find useful info or join the conversation.
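To put rough numbers on the wasted-ad-spend and skewed-analytics points above, here is a minimal sketch. Every figure in it is a made-up assumption for illustration; swap in your own campaign numbers and whatever bot share you consider plausible.

```python
def discount_for_bots(ad_spend, clicks, followers_gained, est_bot_share):
    """Rough adjustment of campaign numbers for an assumed bot share.

    est_bot_share is a guess (e.g. 0.20 for 20%). Nothing here detects
    bots; it only shows how much an assumed share moves your numbers.
    """
    real_clicks = clicks * (1 - est_bot_share)
    real_followers = followers_gained * (1 - est_bot_share)
    return {
        "nominal cost per click": round(ad_spend / clicks, 2),
        "cost per likely-real click": round(ad_spend / real_clicks, 2),
        "likely-real new followers": round(real_followers),
    }

# Made-up example: $500 spend, 1,000 clicks, 500 new followers, and a
# guessed 20% bot share (in line with the estimates discussed earlier).
print(discount_for_bots(ad_spend=500, clicks=1000,
                        followers_gained=500, est_bot_share=0.20))
# {'nominal cost per click': 0.5, 'cost per likely-real click': 0.62,
#  'likely-real new followers': 400}
```

Even a modest assumed bot share noticeably raises your effective cost per genuine click, which is why comparing campaigns on raw click counts alone can mislead.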
In summary, bots matter to small businesses because they distort the reality of your social media performance and can directly or indirectly cost you money. Brands need to stay vigilant that the engagement they’re seeing (and paying for) is genuine. It’s not about witch-hunting every new follower, but rather having a healthy skepticism of glittery numbers and being prepared to dig a little deeper into your metrics.
Tips for Spotting and Dealing with Bots
The bot problem on X can feel overwhelming; after all, you can’t personally purge every fake account on the platform. But you can take steps to identify and avoid bots in your own interactions. Here are some practical tips and tell-tale signs to help you spot fake accounts (a rough scoring sketch that combines these signals follows the list):
Check the Profile Details: Bot accounts often have incomplete or suspicious profiles. Red flags include no profile photo (or a generic one), empty bio sections, and oddly formatted usernames (especially those with a random string of numbers or weird characters). If an account following you is named @JohnSmith123456 with no bio and a default avatar, there’s a good chance it’s not a real person.
Look at Activity Patterns: One hallmark of bots is unnaturally high activity. If an account is tweeting, liking, or following people at a pace that no human could sustain (say, hundreds of posts in a day, or posts every few minutes around the clock), it’s probably automated. You can click on a profile and scroll through their tweets – if you see dozens of posts within the same hour, repeating the same content or links, that’s a big warning sign.
Generic or Copy-Paste Content: Many bots aren’t very original. They might post the same comment to many accounts or use very generic phrases. For example, you might notice different accounts replying to various tweets with the exact same weird compliment or the same promotional message. That kind of cookie-cutter behavior usually means it’s a bot network. Authentic users tend to have more varied and context-specific interactions.
Account Age and Follower Ratio: Check when the account was created. Brand-new accounts that jump into discussions or start tagging you out of the blue can be suspect, especially if they were created just days ago. Also look at their follower-to-following ratio. Bots often follow thousands of accounts but have only a handful of followers (because real users don’t organically follow them back). A profile that follows 2,000 people but has 10 followers might well be a bot farm account.
Too Good (or Bad) to Be True Offers: Be wary of accounts that message or reply to you with offers that sound too good to be true (“Congratulations, you won a free iPhone! Click here!”) or urgent requests (“Your account is in danger, log in here ASAP!”). Scammers use bots to blast these out widely. The accounts behind these messages are usually recently created and have a lot of spammy posts. If you get a strange DM or reply, check the profile before you even think of clicking anything.
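Here is the rough scoring sketch mentioned above, written in Python. The thresholds and weights are arbitrary assumptions (X publishes no such formula), and a crude checklist like this will misfire on plenty of legitimate accounts, so treat a high score as a prompt to look closer rather than a verdict.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Profile:
    handle: str
    has_photo: bool
    bio: str
    followers: int
    following: int
    tweets_per_day: float
    created: datetime

def bot_score(profile: Profile) -> int:
    """Crude 0-6 suspicion score built from the signals listed above.
    Every threshold is an arbitrary, illustrative assumption."""
    score = 0
    if not profile.has_photo:
        score += 1          # default or missing avatar
    if not profile.bio.strip():
        score += 1          # empty bio
    if sum(ch.isdigit() for ch in profile.handle) >= 5:
        score += 1          # digit-heavy handle like JohnSmith123456
    if profile.following > 1_000 and profile.followers < profile.following / 50:
        score += 1          # lopsided follower-to-following ratio
    if profile.tweets_per_day > 100:
        score += 1          # posting pace no human sustains
    if datetime.now(timezone.utc) - profile.created < timedelta(days=30):
        score += 1          # brand-new account
    return score

suspect = Profile(handle="JohnSmith123456", has_photo=False, bio="",
                  followers=10, following=2_000, tweets_per_day=150,
                  created=datetime.now(timezone.utc) - timedelta(days=3))
print(bot_score(suspect))   # 6 out of 6: report and block rather than engage
```

A high score does not prove automation, and serious bot research uses far richer signals than these six, but a quick checklist like this captures the spirit of a manual profile review.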
So, what do you do when you think an account is a bot? Don’t engage with it. Arguing or debating with a bot is like shouting into the void – it won’t change anything, and it might even expose your account to more spam. Instead, you can report the account to X for spam or platform manipulation (there’s usually an option to report profiles in the app/website). Blocking the account is also a good idea; it prevents that bot from interacting with you in the future. If the bot is impersonating someone (maybe pretending to be your business or another public figure), reporting is especially important so that X’s team can investigate and remove it if confirmed fake.
For those managing brand accounts, it’s wise to keep an eye on your follower list and mentions. You don’t need to manually vet every follower, but if you see a sudden influx of new followers with sketchy profiles, it could be a bot follow-wave. Occasionally pruning obvious fake followers can improve the quality of your audience metrics. Additionally, consider using bot-detection tools or services if you have a serious bot problem – there are software solutions that analyze your account’s followers and engagement for inauthentic activity. While you can’t eliminate bots from X altogether, staying vigilant will help you minimize their impact on your social media efforts.
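If you keep a simple log of daily new-follower counts (from whatever analytics export you use), even a basic spike check can flag a possible bot follow-wave. A minimal sketch, assuming you already have those counts as a plain list; the threshold of three times the recent daily average is an arbitrary choice.

```python
def flag_follow_waves(daily_new_followers, window=7, multiplier=3.0):
    """Flag days whose new-follower count exceeds `multiplier` times the
    average of the previous `window` days. Thresholds are arbitrary."""
    flagged = []
    for i in range(window, len(daily_new_followers)):
        baseline = sum(daily_new_followers[i - window:i]) / window
        if baseline > 0 and daily_new_followers[i] > multiplier * baseline:
            flagged.append((i, daily_new_followers[i], round(baseline, 1)))
    return flagged

# Made-up example: a steady trickle of followers, then a burst on day 10.
counts = [4, 6, 5, 7, 5, 6, 4, 5, 6, 5, 48, 7, 5]
for day, count, baseline in flag_follow_waves(counts):
    print(f"Day {day}: {count} new followers vs about {baseline}/day lately; check who they are")
```

A flagged day is not proof of bots; a genuinely viral post will trip the same check. It simply tells you when it is worth scrolling through the new followers and applying the profile checks above.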
Staying Savvy on Social Media
Fake accounts on X are not likely to disappear overnight. As long as there are incentives to use bots (be it profit, propaganda, or pranks), we can expect this cat-and-mouse game to continue. However, you aren’t powerless. The best defense is awareness. By understanding that a portion of social media activity comes from bots, you can approach what you see on X with a critical eye. Don’t automatically trust that trending hashtag or that “popular” account’s follower count; do a little digging when it matters.
For everyday users, this might mean double-checking news before sharing (was it posted by a reputable source or an anonymous account with a bunch of numbers in its handle?). For small business owners and marketers, it means looking beyond the vanity metrics – focus on meaningful engagement and be skeptical of sudden spikes or results that look too good to be true.
In the grand scheme, staying savvy on social media will protect you and your business. It will help you make better decisions about what content to trust, which influencers or trends to pay attention to, and how to allocate your marketing resources. X remains a powerful platform to connect with customers and communities, but it’s one where not everything is as it seems. By keeping the presence of fake accounts in mind, you can navigate X (and other networks) with a bit more caution and clarity.
At the end of the day, social media is about connecting with real people. Bots may never be fully banished, but if you know how to spot them and not fall for their tricks, you’ll be ahead of the game. Stay informed, keep your wits about you, and you’ll continue to reap the benefits of online platforms without being easily fooled by the fakes.
Sources
Reuters – “Do spam bots really comprise under 5% of Twitter users? Elon Musk wants to know” (May 13, 2022). URL: https://www.reuters.com/technology/do-spam-bots-really-comprise-under-5-twitter-users-elon-musk-wants-know-2022-05-13/
SparkToro & Followerwonk – “Joint Twitter Analysis: 19.42% of Active Accounts Are Fake or Spam” (May 15, 2022). URL: https://sparktoro.com/blog/sparktoro-followerwonk-joint-twitter-analysis-19-42-of-active-accounts-are-fake-or-spam/
Teslarati – “Over 80% of Twitter accounts are likely bots: Former FBI security specialist” (Aug 31, 2022). URL: https://www.teslarati.com/twitter-accounts-80-percent-bots-former-fbi-security-specialist/
TIME – “Elon Musk Wants to Rid Twitter of ‘Spam Bots.’ Nearly Half His Followers Are Fake” (Apr 28, 2022). URL: https://time.com/6171726/elon-musk-fake-followers/
The Guardian – “Bots on X worse than ever according to analysis of 1m tweets during first Republican primary debate” (Sep 9, 2023). URL: https://www.theguardian.com/technology/2023/sep/09/x-twitter-bots-republican-primary-debate-tweets-increase