The social network Twitter has apparently been suffering from fraudulent infiltration by ‘socialbots’ – automated accounts that mimic real users. Understanding the infiltration strategies of socialbots can help to identify and eliminate them.

Twitter: Identifying and Neutralising Fraudulent Infiltration

Twitter has 200 million active users posting around 500 million tweets every day, making the microblogging platform a very useful source of information for companies that wish to exploit this data and develop applications to analyse tweets in real time. Twitter can be used to predict certain types of behaviour that are unlawful or criminal, but the network may equally well be used to abuse the trust of more naive users in order to influence public opinion. False accounts, known as ‘socialbots’, can be created to mimic the behaviour of a real user in order to infiltrate the social network. Twitter’s surveillance team regularly identifies and takes down false accounts, removing 20 million of them from the network in 2013 alone. Now a team of researchers at the Federal University of Minas Gerais in Brazil and the Indian Institute of Engineering Science and Technology in Shibpur has been investigating how socialbots manage to infiltrate the network. They created 120 socialbots in order to explore infiltration strategies and assess the extent to which such bots are able to infiltrate Twitter.

Studying infiltration strategies

The 120 false accounts were created in great detail; their profiles included a name, a biography, a profile picture and a background. The study monitored their interactions with other Twitter users over a period of 30 days. During this period only 38 of the 120 accounts – barely a third – were detected by Twitter as socialbots and suspended. To analyse the socialbots’ infiltration capacity, the researchers measured, over the one-month period, the number of ‘followers’ each bot attracted, its Klout score – a popular metric of a user’s online social influence – and its number of message-based interactions with other users. The experiment revealed that the ability of the 120 socialbots to infiltrate and become influencers was not affected by characteristics such as the supposed gender of the false user, or by whether the messages they posted were written by the bots themselves or re-tweeted from real people. However, when an account is highly active, as measured by the number of tweets sent and interactions within a one-hour period, or targets a specific group of users who are expressing views on a given topic, the bot can infiltrate the network more easily and gain popularity more rapidly.
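To make the measurement concrete, the three metrics tracked in the study – follower count, Klout score and message-based interactions – can be combined into a single composite score per bot. The sketch below is purely illustrative: the function name, normalisation ceilings and equal weighting are assumptions for the example, not part of the researchers’ actual methodology.

```python
def infiltration_score(followers, klout, interactions,
                       max_followers=500, max_klout=100, max_interactions=200):
    """Average of the three metrics, each normalised to the range 0-1.

    The ceilings (max_followers etc.) are hypothetical caps chosen for
    illustration; values above a ceiling are clipped to 1.0.
    """
    parts = [
        min(followers / max_followers, 1.0),
        min(klout / max_klout, 1.0),
        min(interactions / max_interactions, 1.0),
    ]
    return sum(parts) / len(parts)

# A moderately successful bot after 30 days (hypothetical figures):
score = infiltration_score(followers=120, klout=25, interactions=40)
print(round(score, 3))  # 0.23
```

A score like this would let the researchers rank the 120 bots and compare infiltration strategies on a common scale, although the published study reports the three metrics separately.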

Spotting fraudulent behaviour

Other research has been done on the Twitter network, including studies of the kind of language used in tweets. However, this is the first time researchers have created their own socialbots in order to study how such infiltration actually works. Taking the viewpoint of the ‘spammer’ – i.e. the person who seeks to infiltrate – enabled the researchers to anticipate attempts to influence opinion. For instance, during the presidential elections in Mexico, the ‘trending topics’, i.e. the subjects most frequently cited on the network using a hashtag, were manipulated by the political team of one of the candidates in order to make the candidate look more popular. The researchers have published their results so that others in the scientific community can explore different aspects of socialbot infiltration. Future studies focusing on how bots may be used to encourage people to purchase a given product could help drive the development of techniques for spotting dishonest behaviour on popular social networks.

By Eliane HONG