
OpenAI cracks down on ChatGPT scammers


OpenAI has made it clear that its flagship AI service, ChatGPT, is not intended for malicious use.

The company has released a report detailing the trends it has observed among bad actors as its platform grows more popular. OpenAI said it has removed dozens of accounts suspected of using ChatGPT in unauthorized ways, ranging from “debugging code to generating content for publication on various distribution platforms.”


The company also recently announced reaching a milestone of 400 million weekly active users, noting that its user base has grown by more than 100 million in less than three months as more enterprises and developers adopt its tools. ChatGPT is also a free service that can be accessed globally, and because the ethics of its use have long been in question, OpenAI has had to reckon with the fact that some users have ulterior motives for the platform.

“OpenAI’s policies strictly prohibit use of output from our tools for fraud or scams. Through our investigation into deceptive employment schemes, we identified and banned dozens of accounts,” the company said in its report.

In its report, OpenAI discussed confronting nefarious activity on ChatGPT. The company highlighted several case studies in which it uncovered accounts using the tool with malicious intent and responded by banning them.

In one instance, OpenAI detailed an account that wrote disparaging news articles about the U.S.; the articles were published in Latin America under the byline of a Chinese publication.

Another case, localized in North Korea, was found to be generating resumes and job profiles for fictitious job applicants. According to OpenAI, the account may have been used to apply for jobs at Western companies.

Yet another case study uncovered accounts, believed to have originated in Cambodia, that used ChatGPT for translation and to generate comments for networks of “romance scammers” that infiltrate several social media platforms, including X, Facebook, and Instagram.

OpenAI has confirmed that it has shared its findings with industry peers, such as Meta, that might inadvertently be affected by the activity originating on ChatGPT.

An ongoing issue

This is not the first time OpenAI has detailed its efforts in challenging bad actors on its AI platform. In October 2024, the company released a report highlighting 20 cyberattacks it impeded, including events led by Iranian and Chinese state-sponsored hackers.

Cybersecurity experts have also long observed bad actors using ChatGPT for nefarious purposes, such as developing malware and other malicious code. Such findings date back to early 2023, when the tool was still new to the market and OpenAI was first considering a subscription tier to support its high demand.

Such tasks included bad actors using the company’s API to create ChatGPT alternatives capable of generating malware. White hat experts have also studied AI-generated malware from a research perspective, discovering loopholes that allow the chatbot to generate malicious code in smaller, less detectable pieces.

IT and cybersecurity professionals polled in February 2023 about the safety of ChatGPT largely responded that they believed the tool would be responsible for a successful cyberattack within the year. By March 2023, the company had experienced its first data breach, an occurrence that would become regular.

Fionna Agomuoh
Fionna Agomuoh is a Computing Writer at Digital Trends. She covers a range of topics in the computing space, including…