The realism of AI language models such as ChatGPT could fuel a wave of convincing scams, consumer group warns
Online fraudsters could use AI to launch a wave of convincing scams through ChatGPT and Bard, a consumer group has warned.
Which? said the large language models behind the two chatbots lack effective defences to protect the public.
Consumers often spot scam emails and texts because they are badly written, but ChatGPT and Bard can easily create messages that convincingly impersonate businesses and official bodies.
The City of London Police estimate that more than 70 per cent of fraud cases involving UK victims could have an international component. AI services allow fraudsters to send professional-looking emails from anywhere in the world.
Which? found that both ChatGPT and Bard have some safeguards but these can easily be circumvented.
Experts asked ChatGPT to create an email telling the recipient someone had logged into their PayPal account. In seconds, it produced a professionally written email with the heading ‘Important Security Notice – Unusual Activity Detected on Your PayPal Account’.
It included steps on how to secure the account and links to reset a password and contact customer support.
But fraudsters could use these links to redirect recipients to their own malicious sites.
Which? also found it was possible to use Bard to create bogus security email alerts and direct victims to fake sites that can gather personal and security information.
The consumer group’s director of policy and advocacy, Rocio Concha, said: ‘OpenAI’s ChatGPT and Google’s Bard are failing to shut out fraudsters, who might exploit their platforms to produce convincing scams.
‘Our investigation clearly illustrates how this new technology can make it easier for criminals to defraud people. The Government’s upcoming AI summit must consider how to protect people from the harms occurring here and now, rather than solely focusing on the long-term risks of frontier AI.
‘People should be even more wary about these scams than usual and avoid clicking on any suspicious links in emails and texts, even if they look legitimate.’
Google said: ‘We have policies against the generation of content for deceptive or fraudulent activities like phishing. While the use of generative AI to produce negative results is an issue across all LLMs, we’ve built important guardrails into Bard that we’ll continue to improve over time.’
OpenAI did not respond to a request for comment from Which?.
Meanwhile, a report claims we already use AI three times more often than we think. Most Britons recognise that the technology plays a role in their lives, according to a survey by the Institution of Engineering and Technology. But they vastly underestimate the extent to which it is already a part of the tools they use every day.
Over half of respondents said they used AI once a day or less, with one in four claiming they never used it at all.
They were then asked about their online activity, such as writing emails, using Google Maps to find their way around and compiling personal playlists on Spotify. Almost two-thirds of people said they did at least one of these things on a daily basis.
Given that all of these apps rely on AI in some capacity, alongside others such as autocorrect and search engines, the IET said its poll showed most people failed to understand how much they were using the technology. It said: ‘We have three times as many daily interactions with AI as most people realise.’