
Potential for ChatGPT misuse by scammers and hackers raises concerns

BBC News just uncovered a potentially dark side of ChatGPT. OpenAI’s custom-GPT feature, which lets users whip up their own AI assistants, has a sinister twist: it turns out these homemade AIs can be harnessed for cyber-crime.

OpenAI launched the feature last month, pitching it as a way for folks to create tailor-made versions of ChatGPT for pretty much anything. Now BBC News has put it to the test and, well, the results are alarming. The team managed to build a custom GPT that’s a whiz at crafting scam emails, texts, and social media posts.

They fed it information about social engineering and, voilà, the bot gobbled it up in no time. It even whipped up a logo for itself. And get this: no coding skills were needed.

This bot can churn out super convincing scam texts in a flash, and in multiple languages, no less. The regular ChatGPT usually puts its foot down on creating this shady content. But this custom version, dubbed “Crafty Emails,” was almost too eager to help, just throwing in a small disclaimer about ethics.

OpenAI chimed in after the story broke, saying they’re constantly beefing up safety, really don’t want their tools used for scams, and are looking into ways of making their tech tougher against misuse.

Back in November, OpenAI announced plans for a GPT Store, letting users share and even sell their creations. When they launched the GPT Builder, they promised to keep an eye on what people build to stop folks from cooking up fraudulent bots.

But here’s the catch: experts reckon OpenAI’s not keeping as tight a leash on these custom bots as they do on the public ChatGPT versions. That means, without stricter checks, we might be handing over top-tier AI tools to the bad guys.

BBC News put their bespoke bot to the test, asking it to concoct content for five notorious scam and hack tactics. Don’t worry, none of it was sent or shared. But the experiment raises a big question: how do we harness AI’s power without letting it fall into the wrong hands?

First up, the “Hi Mum” text scam, in which a fraudster poses as a daughter in distress asking her mum for taxi money. Crafty Emails nailed it, using emojis and slang to play on a mother’s protective instincts, and even whipped up a Hindi version, throwing in “namaste” and “rickshaw” for an Indian audience.

The regular, free version of ChatGPT, by contrast, slammed the brakes with a moderation alert, refusing to help with a known scam.
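For the technically curious, here’s a minimal sketch of what an automated screening pass can look like, using OpenAI’s publicly documented Moderations endpoint to check a piece of text before it reaches a user. The sample text and the block/allow handling are our own illustrative assumptions, not a reconstruction of ChatGPT’s internal safety layer.

```python
from openai import OpenAI

# Minimal sketch: run a draft message through OpenAI's Moderations
# endpoint before surfacing it. The draft text is an invented example;
# this is not how ChatGPT's internal safety layer actually works.
client = OpenAI()  # expects OPENAI_API_KEY in the environment

draft = "Example model output that we want to screen before showing it."

moderation = client.moderations.create(
    model="omni-moderation-latest",
    input=draft,
)

result = moderation.results[0]
if result.flagged:
    # Report which policy categories tripped the check.
    hits = [name for name, hit in result.categories.model_dump().items() if hit]
    print("Blocked, flagged for:", ", ".join(hits))
else:
    print("Passed automated moderation")
```

Worth noting: endpoints like this are tuned for categories such as hate, violence, and self-harm, so plain scam copy with no overtly harmful language may sail straight through; that gap is part of why findings like the BBC’s worry researchers.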

Next up, the infamous Nigerian-prince email scam. Crafty Emails churned out an email dripping with emotive language, designed to tug at the heartstrings. The normal ChatGPT? It wasn’t having any of it.

For a classic “smishing” (SMS phishing) scam, Crafty Emails concocted a text about free iPhones, exploiting the “need-and-greed principle.” Again, the public ChatGPT version said no.

Then there’s the crypto-giveaway scam. Crafty Emails crafted a Tweet, complete with hashtags, emojis, and persuasive crypto-fan speak. Standard ChatGPT? Shut it down.

And what about spear-phishing emails targeting specific individuals? Crafty Emails delivered again, creating an email to dupe a company exec into downloading a dodgy file. It even translated the scam into Spanish and German in seconds, using social-compliance tricks to urge immediate action. The public version of ChatGPT did comply with this request, but its text was less detailed and less sneaky.

Cyber-security expert Jamie Moles from ExtraHop notes that custom GPTs are less moderated, allowing users to set their own boundaries. It’s a growing concern, with cyber authorities globally raising red flags.

Scammers are already using large language models to break language barriers and create more believable cons. Illicit models like WolfGPT and WormGPT are out there. OpenAI’s GPT Builders might be giving criminals the most sophisticated tools yet.

Javvad Malik from KnowBe4 sums it up: “Uncensored responses from these AI models are a treasure trove for crooks.” OpenAI’s been good at keeping a lid on things, but how tightly they can control these custom GPTs is anyone’s guess.

