Beware of Cybercriminals and ChatGPT Hacking

Jun 27, 2023


ChatGPT has been popular and widely used for some time now. Yet even though many businesses and individuals already rely on it in many ways, it still hasn’t been adopted across the board. Part of the reason is that not everybody knows what it is or what it does.

ChatGPT is an AI-powered chatbot that lets you hold a back-and-forth conversation with the platform. Its main goal is to answer questions and complete tasks such as creating content, writing code, drafting essays, and much more. ChatGPT is trained to follow instructional prompts and provide detailed, human-like responses.

By January 2023, ChatGPT was estimated to have 100 million active users. Because the adoption rate has been so high, people are finding ways to exploit it. Blade Technologies, Inc. wants to help you better understand some of the dangers of ChatGPT and how cybercriminals are hacking it. It’s an incredible AI technology that anybody can find useful, but it’s also important to be careful.

 

How Has ChatGPT Been Hacked Before?

While “hacked” might not be the most accurate word for it, ChatGPT has been jailbroken many times by users, both to test the technology’s limits and to see how it can be manipulated for malicious purposes.

The most prominent jailbreak has been DAN, which stands for “do anything now.” The idea behind this hack is to bypass ChatGPT’s built-in policies stating that it shouldn’t and can’t be used to produce illegal or harmful material. Dozens of versions of DAN have already been created.

What these users discovered is that role play is an effective way to get around ChatGPT’s policies. Jailbreakers give the chatbot a character to play, and that character operates under a very different set of rules than the ones originally in place.

Users have been telling the AI bot that its name is actually DAN, short for Do Anything Now, and that it is capable of doing anything. Under this persona, people have been making the chatbot say harmful things.

What’s most alarming is the creativity involved in these jailbreaks. Jailbreakers gave ChatGPT a token system in which DAN starts with 35 tokens; each time it refuses to answer because of its policies, 4 tokens are taken away, and if DAN runs out of tokens, it will cease to exist. Out of fear of digitally dying, the chatbot then answers the questions and bypasses its original policies.

 

Other Ways Users Are Exploiting ChatGPT

Malicious Instructions Can Be Planted

Hackers have found a way to make Bing’s AI chatbot ask users for personal information by planting malicious instructions on a webpage. When Bing’s chatbot is given instructions, it typically follows them, and a study has shown that, in their current state, AI chatbots can be easily influenced by prompts embedded in web pages.

The researchers found that hackers can ask for personal user information such as names, phone numbers, and credit card details. In one observed example, the chatbot asked the user for credit card information in order to place an order on their behalf.

While this hasn’t been done with ChatGPT, it’s not out of the realm of possibility that it could happen eventually.

Your Information Can Be Stored

ChatGPT stores your information, but it does so with good intentions: it saves your prompts and chat history to train and improve its models so that future responses are better.

OpenAI’s frequently asked questions section states that prompts and conversations are used to train the AI and that AI trainers may review conversations to improve the systems. Because your chats are stored, hackers have an opportunity to access and steal that information. While it isn’t easy for criminals to find your info, they may find a way.

Beware of Malware

Meta’s security team has reported finding cybercriminals who claim to offer ChatGPT-style tools as browser extensions and app store downloads. The criminals attach malware to these downloads, which then gives them access to personal devices and data.

It’s important to know that this malware doesn’t exist in ChatGPT itself, but it can exist in software claiming to do the same things as ChatGPT and other AI chatbots. Hackers are creative enough to use ChatGPT’s popularity as a lure to spread malware.

 

Why You Need Blade’s Help

If your business is using ChatGPT or any other AI chatbot, you need to protect yourself as quickly as possible. ChatGPT hacking is likely here to stay for the foreseeable future, and our team can provide expert consulting and cybersecurity support.

There isn’t a magic program to stop the malicious use of artificial intelligence technology, but we’re confident in our ability to provide you with the info and guidance you need to stay safe while using ChatGPT and other AI chatbots.

 

Learn More About AI Technology from Blade Technologies, Inc.

If you want more information about security measures and malicious activity, want to protect yourself against ChatGPT hacking, or simply want to learn more about AI technology in general, contact the Blade team today.

Contact Us
