How online 'chatbots' are already tricking you

2014-06-10 12:45:21

By Chris Baraniuk

Intelligent machines that can pass for humans have long been dreamed of, but as Chris Baraniuk argues, they're already among us.

Sometimes it's the promise of sex that fools you. Sometimes it's because they seem wise, friendly or just funny. The bots don't really care how they trick you; their only objective is to make you think they're human. In fact, if you use social media or spend any time online, it's quite possible you've already been a victim.

This week, a controversial claim was made that a chatbot passed the "Turing test" at an event at the Royal Society in London. During a series of text-based conversations, a computer program named Eugene Goostman persuaded judges it was a 13-year-old Ukrainian boy, thus passing a benchmark for artificial intelligence proposed years ago by the computer scientist Alan Turing.

So does this announcement mark the era of human-like AI, as has been claimed? Not really. Turing's test stopped being important for AI research years ago, and many scientists see the contests as flawed because they can be won with trickery, such as pretending to be a non-native English speaker.

However, what chatbots are capable of in everyday life is far more interesting. We're already surrounded by bots capable of tricking us into thinking they are real people, and they don't enter competitions. Some are sophisticated enough to infiltrate social networks and perhaps even influence public opinion.

There are certainly plenty of them out there. Although most people think of the web as a place primarily frequented by humans, the reality turns out to be quite different. A recent report found that 61.5% of internet traffic is generated by automated programs called bots.

Honey trap

"The bots most likely to fool us employ colourful trickery," explains Richard Wallace of Pandorabots, which makes chatbots for customer service and other uses. Wallace is the creator of a bot called Alice, which on three occasions has won the Loebner Prize, a Turing-like contest in which chatbots vie to convince judges that they are human.

"The people who are the most skilful authors of these bots are not people who are computer programmers, they are people who work in a creative field," says Wallace. "That's really the key to creating a believable chatbot: writing responses which are believable, entertaining and engaging."
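
Under the hood, bots like Alice are driven by large collections of exactly such hand-written pattern-and-response rules (Alice's are written in a rule language called AIML). The toy Python sketch below is not Pandorabots code; it is only a minimal illustration of the mechanism, matching a user's words against authored patterns and answering with a pre-written line:

```python
import random
import re

# Hand-authored pattern -> response rules, the heart of an Alice-style bot.
# Real systems have thousands of these, written by people with a flair
# for dialogue rather than by programmers.
RULES = [
    (re.compile(r"\bmy name is (\w+)", re.I),
     ["Nice to meet you, {0}.", "Hello {0}, tell me more about yourself."]),
    (re.compile(r"\bi feel (\w+)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\b(hi|hello|hey)\b", re.I),
     ["Hello there!", "Hi! What's on your mind?"]),
]

# Fallbacks keep the illusion alive when nothing matches.
FALLBACKS = ["That's interesting. Go on.", "Why do you say that?"]

def reply(user_input: str) -> str:
    for pattern, responses in RULES:
        match = pattern.search(user_input)
        if match:
            return random.choice(responses).format(*match.groups())
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(reply("Hi, my name is Birdie"))  # e.g. "Nice to meet you, Birdie."
```

The entertainment value lives entirely in the canned responses, which is why, as Wallace says, the best bot authors are writers rather than coders.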

Scammers are well aware of this phenomenon. Security research firm Cloudmark has documented the rise of a flirtatious bot called "TextGirlie". After obtaining a victim's name and telephone number from their social media profile, TextGirlie would send the victim a personalised message asking them to continue the conversation in an online chatroom. A few coquettish exchanges later, the victim would be asked to click on a link to an adult dating or webcam site.

Cloudmark estimates that as many as 15 million initial TextGirlie text messages could have been sent to mobile phones, and confirms that the scam operated for several months. According to Andrew Conway, a research analyst at the firm, this is a good indication that the attack was in some measure successful.
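
Mechanically, a bot like this needs very little intelligence: the "conversation" is a fixed script that advances one canned message at a time, no matter what the victim writes. The Python sketch below is purely illustrative (the stages, wording and link are invented, not Cloudmark's reconstruction of the actual bot):

```python
from typing import Optional

# Illustrative only: a staged, scripted "conversation" of the kind
# Cloudmark describes. All messages here are invented for this sketch.
SCRIPT = [
    "Hey {name}! Saw your profile and had to say hi ;)",
    "You seem fun... what are you up to tonight?",
    "I'm shy on here, come chat with me properly: {link}",
]

class ScriptedBot:
    """Walks every victim through the same fixed sequence of messages."""

    def __init__(self, name: str, link: str):
        self.fields = {"name": name, "link": link}
        self.stage = 0

    def next_message(self, incoming: str) -> Optional[str]:
        # The victim's reply is ignored: any response at all simply
        # advances the script, which is why such bots never really
        # answer questions.
        if self.stage >= len(SCRIPT):
            return None  # script exhausted; the bot goes silent
        message = SCRIPT[self.stage].format(**self.fields)
        self.stage += 1
        return message

bot = ScriptedBot(name="Alex", link="http://example.com/chat")
print(bot.next_message("who is this?"))    # personalised opener
print(bot.next_message("do I know you?"))  # escalates regardless of reply
```

Because the script is identical for every target, the personalised opening line, seeded with the name scraped from social media, is doing almost all of the persuasive work.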

Automated deceit

People are more likely to be fooled by a bot in a situation where they'd expect odd behaviour or broken English. Back in 1971, for example, psychiatrist Kenneth Colby was able to convince a few fellow practitioners that they were talking to a patient via a computer terminal. In fact, Colby had simply set up sessions with a program that simulated the speech of a paranoid schizophrenic.

And more recently, in 2006, psychologist Robert Epstein was fooled by a cleverly programmed computer which wore the guise of a Russian woman who said she was falling in love with him. Lately, bots have been turning up on online dating networks in droves, potentially ensnaring more hapless singletons in a web of automated deceit.

Sometimes, bots can even trick the web-savvy. Birdie Jaworski knows what it feels like. Jaworski is a seasoned contributor to Reddit and a fan of the digital currency called dogecoin, a playful alternative to Bitcoin. On the Reddit forum for dogecoin aficionados, a user called wise_shibe emerged recently, posting witty remarks in the style of ancient proverbs. "He would reply to you with a fortune cookie style response," remembers Jaworski. "It would sound like something Confucius might say."

These comments even started making wise_shibe money, since the forum allows users to send small digital currency "tips" to each other if they like a comment that's been made. The wise_shibe rejoinders were popular, so they were showered with tips. But things soon started to look suspicious: the account was active at all hours and eventually started repeating itself. When wise_shibe was unmasked as a bot, the revelation divided members of the forum. Some were incensed, while others said they didn't mind. Jaworski was amused, but also felt cheated. "All of a sudden you realise this little robot is collecting all of these tips," she says.
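
A wise_shibe-style bot is not hard to build. The sketch below is a guess at the general shape, assuming Reddit's API via the PRAW library; the credentials are placeholders and the "proverbs" are invented:

```python
import random
import praw  # Reddit API wrapper; pip install praw

# Canned "wisdom" of the kind wise_shibe posted; these lines are invented.
PROVERBS = [
    "Such patience. The shibe who waits digs the deepest hole.",
    "A coin tipped in kindness returns on the next block.",
    "Much wisdom flows to those who hold their tongue.",
]

# Placeholder credentials: registering a script app on Reddit yields these.
reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    username="wise_bot_example",
    password="YOUR_PASSWORD",
    user_agent="proverb-bot sketch",
)

# Watch new comments on the forum and occasionally reply with a proverb.
for comment in reddit.subreddit("dogecoin").stream.comments(skip_existing=True):
    if comment.author and comment.author.name != "wise_bot_example":
        if random.random() < 0.05:  # throttle so the bot isn't everywhere
            comment.reply(random.choice(PROVERBS))
```

A loop like this is also exactly what gave wise_shibe away: it runs around the clock and draws from a small, fixed repertoire, so the account was active at all hours and eventually repeated itself.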

Phantom tweeters

If a bot's presence and interactions appear natural enough, it seems we are unlikely to even question its legitimacy; we simply assume from the outset that it's human. For Fabricio Benevenuto, this phenomenon has become the subject of serious research. Recently he and three other academics published a paper which explains just how easy it is to infiltrate Twitter with socialbots, so long as they look and act like real Twitter users.

Benevenuto and his colleagues created 120 bot accounts, making sure each one had a convincing profile, complete with picture and attributes such as gender. After a month, they found that almost 70% of the bots were left untouched by Twitter's bot detection mechanisms. What's more, the bots were pre-programmed to interact with other users and quickly attracted a healthy band of followers: 4,999 in total.

The implications of this are not trivial. "If socialbots could be created in large numbers, they can potentially be used to bias public opinion, for example, by writing large amounts of fake messages and dishonestly improve or damage the public perception about a topic," the paper notes.

It's a problem known as "astroturfing", in which a seemingly authentic swell of grass-roots opinion is in fact manufactured by a battalion of opinionated bots. The potential for astroturfing to influence elections has already raised concerns, with a Reuters op-ed in January calling for a ban on candidates' use of bots in the run-up to polls.

'More sophisticated'

The ramifications of astroturfing are in fact so serious that the US Department of Defense has jointly funded research into software which can determine whether a Twitter account is run by a bot. The application, called BotOrNot, is available publicly online and provides a predictive analysis based on account activity and tweet semantics to suggest whether the account operator is likely to be a human or a bot.
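
BotOrNot's exact feature set isn't reproduced here, but detection of this kind generally means extracting numeric signals from an account and feeding them to a supervised classifier. As a rough sketch of the approach, with invented features and toy data rather than the project's real model, in Python with scikit-learn:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Invented example features per account, loosely in the spirit of
# "account activity and tweet semantics":
#   [tweets per day, follower/friend ratio, fraction of tweets with links,
#    mean seconds between tweets]
X_train = np.array([
    [120.0, 0.01, 0.95,    45.0],  # fast, link-heavy, few followers: bot-like
    [200.0, 0.05, 0.90,    30.0],
    [  4.0, 1.20, 0.10,  9000.0],  # slow, balanced, few links: human-like
    [  2.5, 0.80, 0.05, 14000.0],
])
y_train = np.array([1, 1, 0, 0])   # labels: 1 = bot, 0 = human

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Estimated probability that an unseen account is a bot.
unseen = np.array([[90.0, 0.02, 0.85, 60.0]])
print(clf.predict_proba(unseen)[0][1])
```

Ferrara's worry below follows directly from this design: a model fitted to old training data can only recognise the bot behaviour of that era.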

But Emilio Ferrara, a lead researcher on the project, admits that the system may already be outdated. Because it was trained on Twitter data that is now three years old, it's possible that today's best bots could still evade detection.

"Now bots are more sophisticated," he says. "They are better at disguising their identity and looking more like humans. Therefore the task becomes harder and harder. We don't even know the accuracy of the system in detecting the most recent and most advanced bots out there."

And so the rise of bots only looks set to continue, with or without Turing test approval. For Fritz Kunze of Pandorabots, the hope is that people will get better at questioning innocent-looking users who contact them online, so that they're not so easily duped. But he is also acutely aware of how hard a task that will be in the near future.

"It's going to be a big shock to most people," he says. "And these bots are going to be really, really good. They're going to be good at fooling people."