
81% think ChatGPT is a security risk, survey finds

ChatGPT has been a polarizing invention, with responses to the artificial intelligence (AI) chatbot swinging between excitement and fear. Now, a new survey shows that disillusionment with ChatGPT could be hitting new highs.

According to a survey from security firm Malwarebytes, 81% of its respondents are worried about the security and safety risks posed by ChatGPT. It’s a remarkable finding and suggests that people are becoming increasingly concerned by the nefarious acts OpenAI’s chatbot is apparently capable of pulling off.

[Image: A laptop screen shows the home page for ChatGPT, OpenAI's artificial intelligence chatbot. Rolf van Root / Unsplash]

Malwarebytes asked its newsletter subscribers to respond to the phrase “I am concerned about the possible security and/or safety risks posed by ChatGPT,” a sentiment with which 81% agreed. What’s more, 51% disagreed with the statement “ChatGPT and other AI tools will improve Internet safety” while just 7% agreed, suggesting there is widespread concern over the impact ChatGPT will have on online security.


The discontent with AI chatbots was not limited to security issues. Only 12% of surveyed individuals agreed with the phrase “The information produced by ChatGPT is accurate,” while 55% of people disagreed. As many as 63% of people did not trust ChatGPT’s responses, with a mere 10% finding them reliable.

Generating malware

[Image: A person using a laptop with code visible on the display. Sora Shimazaki / Pexels]

This kind of response is not entirely surprising, given the spate of high-profile incidents in which ChatGPT has been misused in recent months. We’ve seen it deployed for all manner of questionable deeds, from writing malware to presenting users with free Windows 11 keys.

In May 2023, we spoke to various security experts about the threats posed by ChatGPT. According to Martin Zugec, the Technical Solutions Director at Bitdefender, “the quality of malware code produced by chatbots tends to be low, making it a less attractive option for experienced malware writers who can find better examples in public code repositories.”

Still, that hasn’t stemmed public anxiety about what ChatGPT could be used to do. It’s clear that people are worried that even novice malware writers could task AI chatbots with dreaming up a devastating virus or unbreakable piece of ransomware, even if some security experts feel that’s unlikely.

Pause on development

[Image: A person sits in front of a laptop displaying the home page for OpenAI's ChatGPT artificial intelligence chatbot. Viralyft / Unsplash]

So, what can be done? When Malwarebytes asked its readers what they thought about the statement “Work on ChatGPT and other AI tools should be paused until regulations can catch up,” 52% agreed, while a little under 24% disagreed.

This call from the public echoes several open letters from prominent tech leaders urging a pause on AI chatbot development because of its “large-scale risks.” Perhaps it’s time decision-makers started to take heed.

Alex Blake
Alex Blake has been working with Digital Trends since 2019, where he spends most of his time writing about Mac computers…