Expert Explains: How To Spot A Propaganda Bot on Twitter

Ben Nimmo has spoken with Hromadske to explain how to recognize bots on Twitter.

“Attack of the Bots” – it sounds like something out of a 1950s Hollywood horror film. But that was the digital reality last week for Ben Nimmo.

Suddenly, Nimmo found his Twitter account overrun by pro-Russia robot accounts. The anonymous, faceless automated accounts flooded his feed with retweets. Then they made fake copies of his profile, including one which Russified his name as “Veniamin Nimovich.” Finally, one account – this one clearly not automated – even declared him dead.

The Twitter attack was “a form of harassment,” Nimmo says.

But the reason the bots targeted Nimmo also meant he was in a good position to learn from the attack. Nimmo is a senior fellow in information defense at the Atlantic Council’s Digital Forensic Research Lab.

Now Nimmo has spoken with Hromadske to explain how to recognize bots on Twitter.

According to Nimmo, identifying a bot is not hard, and you don't have to be a computer programmer to do it.

His advice is simple: look for the “three A’s” – activity, anonymity, and amplification. Bots tweet extremely often, they usually have no identifying details, and they largely focus on amplifying other accounts’ messages.

“If an account is posting that often, it doesn’t give any personal information, and all it’s doing is retweeting other accounts, then it’s obviously a bot, which has been created to retweet people,” Nimmo concludes.
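
Those three signals are easy to check by hand, and just as easy to sketch in code. Here is a minimal, hypothetical Python sketch; the account fields and thresholds are illustrative assumptions, not Twitter’s API or Nimmo’s actual tooling:

```python
from datetime import datetime, timezone

def three_as_flags(account, now):
    """Flag an account on the three A's: activity, anonymity, amplification.
    `account` is an illustrative dict; field names and thresholds are
    assumptions, not Twitter's API."""
    age_days = max((now - account["created_at"]).days, 1)
    posts_per_day = account["tweet_count"] / age_days                  # activity
    retweet_share = account["retweet_count"] / max(account["tweet_count"], 1)  # amplification

    flags = []
    if posts_per_day > 72:                  # ~one post every 20 minutes, nonstop
        flags.append("hyperactive")
    if not account["bio"] and not account["has_photo"]:                # anonymity
        flags.append("anonymous")
    if retweet_share > 0.9:                 # almost everything is a retweet
        flags.append("pure amplifier")
    return flags

# A made-up account: 90 days old, 80,000 tweets, nearly all retweets, no bio.
suspect = {
    "created_at": datetime(2017, 6, 1, tzinfo=timezone.utc),
    "tweet_count": 80_000,
    "retweet_count": 78_000,
    "bio": "",
    "has_photo": False,
}
print(three_as_flags(suspect, datetime(2017, 8, 30, tzinfo=timezone.utc)))
# ['hyperactive', 'anonymous', 'pure amplifier']
```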

He says that people who create “botnets” – large groups of bots – use extensive networks of automated accounts to spam the people they’re targeting.

“It’s the people who make the botnets saying: ‘Look how many of us there are. Look how powerful we are,’ which is something you often hear from people who run this kind of network.”

Nimmo’s tangle with the bots began after he and his fellow researchers uncovered how pro-Russia bots and botnets were reacting to the protests and subsequent terrorist attack in Charlottesville, Virginia, last month.

After identifying several fake Twitter accounts and realizing that the bots were amplifying a far-right narrative, Nimmo published a report for the Atlantic Council. That report was subsequently picked up by the American news outlet ProPublica, which examined it alongside a research report on pro-Russia bots from the Hamilton 68 group.

“And then, ProPublica, on Twitter, got attacked by a big botnet, and because they had been quoting our research, they contacted us and asked us to look at what we could identify about the botnet,” Nimmo says. “We recognized that this was a botnet which, in the past, had been used to attack somebody that was tweeting about Russia.”

Nimmo and his colleagues were then able to put both incidents together and trace them to a single botnet, one that will likely be shut down at some point due to its newfound public exposure.

The attack on Nimmo last week also allowed him to learn more about the botnets. For example, he worked out the trigger words that prompted the bots to retweet. That let him get a single tweet retweeted tens of thousands of times, with every retweeting bot thereby identifying itself to Twitter’s support team for deletion.

Earlier this week, you and your colleagues were attacked online by what can be described as an ‘army of bots’. A basic question for you - in this context, what is a bot? And what is a botnet?

In this context, a bot is a Twitter account which appears to be run by a computer program rather than a human being, so it will post stuff without anybody being there to press the keys. It’s a bit like an airplane flying on autopilot: it just posts stuff without human intervention. A botnet is a network of hundreds, or thousands, or tens of thousands of accounts which all do the same thing and can be programmed to do the same thing at the same time. What we saw on this occasion was that the botnet - and it probably had about 90,000 accounts in it, so it was a large network - appeared to have been programmed to react to certain words: if the words were posted in a particular combination, then everybody in the botnet would retweet that post. Now, because all of these accounts were only followed by other bots, the retweets themselves wouldn’t get seen by human beings. But the person who had made the post in the first place would feel it. I was targeted in this, and as soon as I made the post, my notifications on my Twitter page were flooded by literally hundreds of new retweets every second. Effectively, I couldn’t use my notifications the way I normally would. So it’s a form of harassment, using very large networks of automated accounts to basically just spam the people they’re targeting.
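
The trigger behaviour Nimmo describes boils down to a very simple rule. A hypothetical sketch of the bot-side logic in Python - the trigger words here are invented, not the botnet’s real ones:

```python
import re

# Invented trigger set standing in for whatever the botnet actually watched.
TRIGGER_WORDS = {"russia", "bots", "report"}

def should_retweet(tweet_text):
    """Retweet only if every trigger word appears in the tweet."""
    words = set(re.findall(r"[a-z]+", tweet_text.lower()))
    return TRIGGER_WORDS <= words

print(should_retweet("New report: Russia runs huge bots network"))  # True
print(should_retweet("A report on the weather in Kyiv"))            # False
```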

Let’s back up here a little bit to get some context on why this happened. Last week, you and your fellow researchers discovered how pro-Russian bots and botnets were reacting to the protests and subsequent terrorist attack in Charlottesville, Virginia. What did you see there?

What we saw to start with was a particular narrative used by far-right commentators and, what seemed to be, far-right botnets in the US: the narrative that American politicians were condemning the neo-Nazis in Charlottesville but, at the same time - and I quote - were “supporting the neo-Nazis in Kyiv.” Now, that was a strange kind of thing to be hearing from the far-right in the United States, but it’s something we’ve been hearing ever since March 2014 from the Kremlin and from the Kremlin propaganda machine. So what we started looking at was the way this narrative of Ukrainian Nazis and American Nazis had spread from the Kremlin into far-right commentary in the United States. As part of that, we identified a few fake accounts, a few bots, which were amplifying the message. Our report was then picked up by an American news outlet called ProPublica, and they pulled together our report and a separate report by a group called Hamilton 68, which looked more at the bot angle - Hamilton 68 did a lot more on pro-Russian amplifiers, including bots. ProPublica combined our research and the Hamilton 68 research and did a big article on the pro-Russian bots which were amplifying the Charlottesville neo-Nazi messaging. And then ProPublica, on Twitter, got attacked by a big botnet, and because they had been quoting our research, they contacted us and asked us to look at what we could identify about the botnet. We looked at it and realised that the botnet was the same one which had been used at the end of July this year to target a journalist called Julia Davis, who had been tweeting about the Russian shelling of Ukraine. So we recognised that this was a botnet which, in the past, had been used to attack somebody that was tweeting about Russia.

What do you think the greater purpose of these bots was? And how effective were they at achieving that goal?

It looks like the purpose was intimidation, because if you’re the person who holds the Twitter account they are targeting, suddenly what you see is your own notifications going mad - literally, I would be getting an update every minute that two hundred more accounts had retweeted me. The first time that happens to you, it’s quite startling, and it can be quite alarming. It also makes it quite difficult for you to work, just because the notifications are coming in so fast. So it looks like it was a combination. It was intimidation - the people who make the botnets saying: “Look how many of us there are. Look how powerful we are,” which is something you often hear from people who run this kind of network. And it was also harassment; it was making it harder for us to do our jobs. It wasn’t actually particularly effective, because we managed to work out exactly what words were triggering these botnets to strike. They started off targeting ProPublica, then they moved on to us; we wrote another post on that, and then the people who tweeted about our post started getting some of the same treatment - not on the same scale, but they got it. Looking at all the tweets that got attacked, we looked at what the key words were, and we worked out that if you tweet a certain combination of words, that would trigger this botnet, and the botnet would start retweeting you. What we then did was write another tweet with those keywords, but including the address of the Twitter support team, and the botnet started retweeting that tweet. So, effectively, every single bot account which retweeted that tweet was also identifying itself to the Twitter support team. The result has been that, yesterday evening, the tweet we posted had 86 thousand retweets. By this morning, it was down to 35 thousand, because more than 50 thousand bots had been identified and taken off air. So actually, the result has been that, by using the botnet in this way, the people who run the botnet have lost quite a lot of their bots.
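
The keyword-hunting step Nimmo describes can be approximated without any special tooling: take the tweets that each set off a mass-retweet response and intersect their word sets. A minimal sketch with invented sample tweets - a real trigger set would be buried among common words, so in practice you would also filter out stop words:

```python
import re

def trigger_candidates(triggering_tweets):
    """Intersect the word sets of tweets that each drew a mass-retweet
    response; the shared words are candidate triggers."""
    word_sets = [set(re.findall(r"[a-z]+", t.lower())) for t in triggering_tweets]
    return set.intersection(*word_sets)

# Invented examples standing in for the real triggering tweets.
tweets = [
    "New report on Russia bots attacking ProPublica",
    "The Russia bots are at it again, amazing report",
    "Must-read report: how Russia runs its bots",
]
print(trigger_candidates(tweets))  # {'russia', 'bots', 'report'} in some order
```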

So that’s very effective. Let’s just look quickly at the attack on you and the Atlantic Council earlier this week. There were some very colourful - I would say - events in the course of that attack. Could you describe a few of them for us?

The main events came about half-way through this cycle of intimidation. Somebody - and we don’t know who the user was - created a couple of fake Twitter accounts. One of them stole an image from my Twitter account, but it wasn’t trying to impersonate me; it was trying to mock me. It was claiming that I’m paid by the FSB, that I work for the Kremlin - it was silly. That was simply somebody playing a game. But then they also created an account which was an exact mirror of my boss’s account, and they got that account to tweet the news that I was dead. That was quite a lot darker, and something they’d clearly taken more time over. Because it was an impersonation account, it didn’t last very long; it was very obviously an account trying to fool people, trying to pretend to be my boss. So that had a darker tone to it, but again, that tweet didn’t last for long - the account was taken down within a couple of hours - so it was a short-term impact.

That’s definitely a dark turn. You and the Atlantic Council are not the only ones writing about disinformation online, and not necessarily the only ones writing about bots - do you have any idea why you and your organisation specifically were targeted? Do you think it had something to do with your past work, further back than the Charlottesville bot attacks?

It looks like it was a response to what ProPublica had written about Charlottesville and then to the first article we did exposing this botnet. I suspect - you can only ever make an informed deduction on this one - that what had happened was this: because we recognised that the botnet which was targeting ProPublica was the same botnet that had targeted the journalist Julia Davis before, we exposed both the way they were working and something about why they were working. If you think about the ProPublica article, it was about pro-Russian bots and the far-right in the United States, so effectively there were two different groups of people who might have an interest in attacking that story. But the tweet that Julia Davis posted back in July, which was attacked back in July, was about Russia shelling Ukraine. Now, there’s no reason why the far-right in the US would be interested in attacking that, but there very clearly is a reason why pro-Russian accounts would be. Putting those two incidents together, we were able to say that this is a single botnet operating - and we went through all the identifiers of why we could tell it was a single botnet - and that the likelihood is that it is run by pro-Russian people. Putting that together identifies the botnet, it exposes it, and it therefore lays it open to the likelihood that it will be shut down at some point. In that sense, it threatens what the people who manage the botnet are doing, and therefore they will try and strike back. Now, the way they did it was, in fact, to give us loads more data on their botnet and probably get quite a lot more of their accounts shut down.

You’ve described how this was, in some ways, not effective for them because it allowed you to learn more about them and get them shut down. Are there any other takeaways from this bot attack? Is there anything that we, or you, could learn from it?

Yeah, it was an interesting one, because it exposed the scale of some of these botnets. It looks like there were a number of nets of slightly different styles working together. One group of accounts had hundreds, if not thousands, of members. You could see that the accounts had been quite carefully made: they each had a plausible kind of name, they each had a photo attached to them - somebody had spent some time on this. But the tell-tale was that you probably had a couple of thousand accounts sharing only about a hundred photos, so you had a lot of different accounts using the same photos, which is a giveaway. And then there was a separate, much larger network, which had over 100 thousand members, where almost none of the accounts had any avatar pictures and there was very little in the way of personal detail; it didn’t look like anybody had put any effort into creating the network. So it looked like there were two different networks going on, but they were working together. If you add them up, you are probably talking about over 125 thousand accounts, so it shows us the size of these networks, and it shows the level of aggression which is out there. But it also shows that there are a lot of ways you can identify a botnet without using specialised equipment. For example, looking at the smaller of the two nets, we identified that all of these accounts were liking or retweeting content, so if you just go down the list of accounts which have liked a post, you see the same profile picture coming up time and time again, but with different account names and different account handles. You don’t need to be any kind of programmer to know that that’s fake. So actually, it has been a very useful lesson in all the different ways that you can identify a fake account. Another example: looking at some of these accounts which are active, each one of them was posting retweets in English, Russian, Arabic, Malay, Indonesian, French, Spanish and Turkish - there is no credible way that is a human being doing that. Again, all you need to do is look at that account and just go down the list, and within a minute you can be absolutely confident that it’s a fake account. What it has done is show us how many different methods there are of identifying bots and identifying fake accounts.
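
The shared-photo giveaway is straightforward to check at small scale: hash each account’s avatar and count repeats. A hypothetical sketch - the handles and image bytes are invented, and exact hashing only catches byte-identical copies, so re-encoded images would need perceptual hashing instead:

```python
import hashlib
from collections import Counter

def shared_avatars(accounts, min_reuse=3):
    """Count identical avatar images across accounts. `accounts` maps a handle
    to the raw bytes of its profile picture (downloaded beforehand)."""
    counts = Counter(hashlib.sha256(img).hexdigest() for img in accounts.values())
    return {h: n for h, n in counts.items() if n >= min_reuse}

# Invented accounts: three share one photo, one is unique.
photo = b"...jpeg bytes of a reused profile picture..."
accounts = {
    "@user0001": photo,
    "@user0002": photo,
    "@user0003": photo,
    "@user0004": b"...different jpeg bytes...",
}
print(shared_avatars(accounts))  # one hash with count 3 -> suspicious cluster
```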

So essentially any Twitter user, knowing this information, can look and see that these are bots?

That’s right. Not all bots are the same style - some are better disguised than others - but certainly with the simpler ones, any Twitter user can have a look, and if they know the tricks, they can identify a bot. For me, the three classic indicators are activity, anonymity and amplification - I think of them as the three A’s. For activity, all you need to do is look at the date the account was created and the number of tweets it has posted, and from those two numbers you can work out how many times a day it is posting. There are bot accounts out there which I know are posting 800 or a thousand times a day - that’s just not a human behaviour pattern - so simply comparing those two numbers will let you spot some of the bots. For anonymity, you can look at whether they’ve got any kind of identifying features; if they haven’t, you might well be looking at a fake account. And for amplification, you can look at how many of their posts are retweets. Again, there are bots out there which I know post a thousand times a day, and all of those posts are retweets. If an account is posting that often, it doesn’t give any personal information, and all it’s doing is retweeting other accounts, then it’s obviously a bot, which has been created to retweet people.
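
The activity check in particular is simple arithmetic: divide the account’s total tweets by its age in days. A tiny worked example with invented numbers:

```python
from datetime import date

# Invented account metadata: created 90 days before the check, 80,000 tweets.
created = date(2017, 6, 1)
checked = date(2017, 8, 30)
tweet_count = 80_000

age_days = (checked - created).days        # 90
posts_per_day = tweet_count / age_days     # ~889 posts per day
print(round(posts_per_day))                # 889 - far beyond any human pace
```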

READ MORE: Fake News Becomes Roboticized, Researchers Warn