How to Spot a Russian Bot
Here are five ways to identify phony Twitter accounts and ditch them.
DANIEL COSTA-ROBERTS
In late July, Twitter’s stock took a beating as the company ramped up efforts to improve “conversational health” on the platform, including purging millions of fake accounts. That was followed by news that members of Congress led by Sen. Mark Warner (D-Va.) are studying potential ways to regulate Big Tech, in part to battle the kind of disinformation on social media that tainted the 2016 elections.
The Russian government and other malicious actors continue to spread false information and divisive political content on Twitter, Facebook, and other platforms—efforts that rely in no small part on the unwitting participation of everyday people, according to Bret Schafer, an analyst at the Alliance for Securing Democracy. “Social-media users need to be aware of their role in information laundering. If a user retweets, emails, or posts information taken from a less-than-credible point of origin, they now have become the new ‘source’ of that information for friends, family, [and] followers,” Schafer told Mother Jones. “This is how false information really spreads.”
So-called bots and bot networks are a big part of the problem. Bots are generally accounts that operate according to programmed instructions, although human users may control them some of the time. Not all bots are malicious, of course; automated accounts can have benign or even whimsical purposes, like learning how to “see” or “talk.”
Various experts have offered guides to spotting Twitter bots, though the specifics can get pretty obscure and technical. We’ve culled the tips below from some of the best online guides and expert advice—with the caveat that malicious actors continue to grow more sophisticated and none of these indicators is surefire. (When the Atlantic Council’s Ben Nimmo, author of an excellent guide to spotting bots, recently fingered a pro-Kremlin troll, British media erroneously labeled the account “a bot,” prompting a retired British IT worker to come forward as the account’s owner.) Here are five key ways to identify possible bots in your Twitter feed:
Hyperactivity
Accounts that are programmed to push content often post every couple of minutes. Numerical benchmarks vary, but in general, “if an account has more than 50 to 60 tweets a day, that suggests automation,” explains Ahmer Arif, a researcher at the University of Washington who studies online disinformation. “For instance, if an account consistently seems to produce hundreds of tweets per day, every day, we are looking at abnormal patterns of behavior.”
How to look for it: Each Twitter profile displays the month and year the user joined Twitter. (This information appears just under the user’s avatar.) Hover your cursor over this field to view the exact date the account was created. You can find the total number of tweets the account has produced by looking just to the right of the avatar. To get the average daily tweet count, divide the total number of tweets by the number of days that have passed since the account was started. (You can use a days-between-dates calculator to make that easier.) If the number you find is over 50, that should raise a red flag; if it’s over 100, even more so.
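If you would rather not do the division by hand, the same arithmetic takes only a few lines of Python. The join date and tweet count below are made-up numbers for illustration, not a real account:

```python
from datetime import date

def tweets_per_day(total_tweets, joined, today=None):
    """Average number of tweets per day since the account was created."""
    today = today or date.today()
    days_active = max((today - joined).days, 1)  # guard against brand-new accounts
    return total_tweets / days_active

# Hypothetical account: joined June 3, 2017, with 41,000 tweets as of August 15, 2018
rate = tweets_per_day(41_000, date(2017, 6, 3), date(2018, 8, 15))
print(f"{rate:.0f} tweets per day")  # ~94, well past the 50-a-day red flag
```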
Suspicious images
If the avatar for an account uses a generic silhouette—the default image for every Twitter account—that’s one possible indicator of a bot. Many users simply don’t bother to upload an avatar image. The silhouette can also be paired with a gibberish alphanumeric handle; programs that mass-produce bots often assign them handles composed of randomly generated letters and numbers. A variation on this theme is the account that uses a fake or stolen avatar picture. Arif adds, “It is worth noting that bot-makers tend to use stolen images of women”—often young and conventionally attractive—“in order to target male audiences.”
How to look for it: You can tell whether an avatar image has been pilfered from someone else by doing a reverse image search through Google: Right-click on the avatar and choose “copy image address,” then paste the URL into Google and select “search by image.” If many different results appear, especially multiple social-media profiles with the same image, there’s a good chance you’re looking at a bot account.
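The reverse image search has to be done by hand, but the gibberish-handle pattern described above can be roughed out in code. The thresholds here are arbitrary illustrations rather than anything the researchers quoted in this piece prescribe:

```python
import re

def looks_autogenerated(handle):
    """Very rough heuristic: a long run of digits, or a handle that is mostly
    digits, resembles the randomly generated handles assigned to mass-produced bots."""
    handle = handle.lstrip("@")
    digit_ratio = sum(c.isdigit() for c in handle) / max(len(handle), 1)
    has_long_digit_run = re.search(r"\d{8,}", handle) is not None
    return has_long_digit_run or digit_ratio > 0.4

print(looks_autogenerated("@margaret_reads"))   # False
print(looks_autogenerated("@patriot88162094"))  # True
```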
URL shorteners
“Automated accounts often track traffic on a particular link by using URL shortening services,” Arif explains, “so the frequency with which these services are used can be an indicator of automation.”
How to look for it: Watch out for accounts whose timelines are packed with tweets that consistently use one or two URL-shortening services to link to news stories or political content. These services—like trib.al, bit.ly, or tinyurl.com—take a typical, full-length URL and shorten it to just a few characters. Sometimes bot-makers use these to measure traffic to a given link, but shorteners can also be built into software packages for automating Twitter activity.
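If you can copy a batch of an account’s tweets into a script, a sketch like the one below can estimate how often it leans on shorteners. The domain list and sample tweets are illustrative assumptions, not an exhaustive catalogue:

```python
import re

# A few common shortening domains (illustrative, not exhaustive). Twitter's own
# t.co wrapper is left out because it is applied to every link automatically.
SHORTENERS = {"bit.ly", "tinyurl.com", "trib.al", "ow.ly", "goo.gl", "is.gd"}

def shortener_share(tweet_texts):
    """Fraction of tweets whose links route through a known URL shortener."""
    def uses_shortener(text):
        domains = re.findall(r"https?://(?:www\.)?([^/\s]+)", text)
        return any(domain.lower() in SHORTENERS for domain in domains)
    flagged = sum(uses_shortener(text) for text in tweet_texts)
    return flagged / max(len(tweet_texts), 1)

sample = [
    "You won't believe what they're hiding https://bit.ly/2xYzAbC",
    "Read the full story https://trib.al/d3FgHiJ",
    "Lovely weather in Helsinki today",
]
print(f"{shortener_share(sample):.0%} of these tweets use a shortener")  # 67%
```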
Multiple languages
Bots created for commercial advertising have also been hired to “advertise” political positions. As a Daily Beast investigation revealed, nearly anyone can purchase the services of legions of Twitter bots at little cost. These bots often post content in various languages, depending on who’s paying to use them.
How to look for it: If you see an account posting or retweeting content in English, Dutch, Arabic, Russian, and Mandarin, it’s possible you’ve encountered a talented polyglot—but it’s more likely you’ve spotted a bot.
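With the same kind of copied-out tweet batch, an off-the-shelf language detector can tally how many languages an account posts in. This sketch assumes the third-party langdetect Python package and uses invented sample tweets:

```python
# Requires the third-party langdetect package: pip install langdetect
from langdetect import detect

def languages_used(tweet_texts, min_length=20):
    """Return the set of languages detected across a batch of tweet texts."""
    langs = set()
    for text in tweet_texts:
        if len(text) < min_length:
            continue  # very short tweets are unreliable to classify
        try:
            langs.add(detect(text))
        except Exception:
            pass  # links-only or emoji-only tweets can't be classified
    return langs

sample = [
    "Big news out of Washington tonight, stay tuned for more.",
    "Последние новости из Вашингтона сегодня вечером.",
    "Groot nieuws uit Washington vanavond, blijf kijken.",
]
print(languages_used(sample))  # e.g. {'en', 'ru', 'nl'}
```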
Unlikely popularity
If you come across a post from an account with 100 followers that somehow has posts with 5,000 likes and retweets, you’ve likely spotted a bot—especially if the phenomenon shows up in multiple posts from that account. Botnets—groups of bots that operate in coordination—can amplify social-media content by working together to make one post appear highly popular.
How to look for it: When you see a tweet that has gone big, check how many followers the account has. If it has just a handful of followers but many popular tweets, you’re probably not looking at content generated directly by a person. Tweets from accounts that rely on botnets to push their content may also have nearly the same number of likes and retweets, since each bot may take both actions to amplify the tweet. As Schafer points out, “the popularity of a post or account (measured by follows, likes, retweets, etc.) should not be viewed as a credibility ranking. These rankings are very easy to manipulate through automation.”
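Both signals are easy to express in code. The thresholds below are arbitrary illustrations, not established benchmarks:

```python
def amplification_signals(followers, likes, retweets):
    """Two rough red flags: engagement that dwarfs the follower count, and
    like and retweet counts that are nearly identical."""
    engagement = likes + retweets
    outsized = engagement > 10 * max(followers, 1)
    near_identical = abs(likes - retweets) <= 0.1 * max(likes, retweets, 1)
    return outsized, near_identical

# The hypothetical account above: 100 followers, roughly 5,000 likes and retweets on one post
print(amplification_signals(100, 5_000, 4_900))  # (True, True)
```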
Twitter still has a significant problem with phony accounts and content on its hands. In the meantime, you can block and report any suspicious activity you see in your feed.