Embracing ChatGPT? Pay Attention To These Cybersecurity Concerns

The very qualities that make generative AI attractive also make it ‘a major cybersecurity concern.’

ChatGPT is all the rage these days, with people using the program to generate everything from blog posts to school essays and marketing materials. Because ChatGPT learns continuously, training itself on the information users feed into it, every interaction helps improve the program and its outputs. Still, cybersecurity experts warn that the very quality that allows ChatGPT to improve itself has also made it a major cybersecurity concern.

Recent months have seen the popularity of AI chatbots such as ChatGPT skyrocket. Greater use means more data flowing into these programs, and more data demands greater care to protect it. Unfortunately, OpenAI does not have the proper measures in place.

The pros and cons of open-source coding

One reason ChatGPT poses such a massive risk to privacy and data security is its vulnerability to data breaches, particularly because it is built on open-source code. As a result, anyone with the proper tools and know-how can inspect, modify and enhance that code.

The merits of open-source code have been debated since its advent. At its best, open source creates a digital ecosystem that is not only more equitable but also fosters collaboration and innovation. With an open-source program, programmers from all over the world can identify problems in the code and collaborate on solutions. The code’s wide availability also means individuals can adapt the program to their own needs.

Another boon of open-source software is that its transparency can improve security. Anyone can review the code to identify and fix vulnerabilities or bugs, leading to more robust and secure software. Open-source projects also often benefit from a large community of developers and contributors who collaborate to improve and enhance the software, which can mean faster development cycles, quicker bug fixes and the introduction of new features.

However, the flip side of this argument is that code which can be adapted to unique use cases can also be adapted for malicious purposes, something we are seeing increasingly often with ChatGPT. Rather than using the open-source code to identify and fix problems, malicious actors will continue to find those problems and exploit them for their own gain.

These security concerns culminated in a data breach that OpenAI confirmed in March 2023, when a bug in an open-source library used by the service exposed portions of other users’ chat histories. OpenAI also acknowledged that payment-related information belonging to a small percentage of paying subscribers may have been exposed in the incident. Although the OpenAI team responded quickly by taking the service offline, the episode exposes the harsh reality that this program might not be as safe as we would like to assume.

How ChatGPT’s use of data presents a security risk

Data security is an important consideration for ChatGPT because the AI inherently stores massive amounts of data; learning and evolving from the data it is fed is the very nature of how the model is trained. Critics have focused primarily on how this learning can introduce biases and inaccuracies when users feed the model bad information, but more attention should be paid to how this massive store of data makes ChatGPT a valuable target for hackers.

Some organizations that deal with sensitive data, such as banks and hospitals, have already instituted policies restricting their employees’ use of AI programs like ChatGPT because of how exposed the information fed into them becomes. Other companies should be equally wary of the data they share with ChatGPT. Customers’ personal information, for example, should never be entered into the program, given its proven vulnerability to breaches. In one case, a doctor entered a patient’s name and medical condition and asked ChatGPT to craft a letter to the patient’s insurance company.
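For organizations that still want the productivity benefits, one common safeguard is to strip obvious identifiers from text before it ever reaches the model. The sketch below is a minimal, illustrative example in Python; the patterns, placeholder tokens and `redact` helper are assumptions for demonstration only, not part of ChatGPT or any OpenAI interface, and real deployments typically rely on dedicated PII-detection tooling.

```python
import re

# Illustrative redaction pass run *before* text is sent to any external
# AI service. These patterns only catch a few common identifiers.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace common identifiers with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Draft an appeal letter for Jane Doe, jane.doe@example.com, 555-867-5309."
print(redact(prompt))
# -> Draft an appeal letter for Jane Doe, [EMAIL REDACTED], [PHONE REDACTED].
```

Note that simple pattern matching still misses identifiers like names, which is exactly why policies such as the hospital and bank restrictions above exist in the first place.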

Another type of data that should be kept out of ChatGPT is intellectual property (IP). Because the AI learns from and reuses material in the inputs it is given, it could inadvertently surface that information in an unrelated response. In one recent IP breach, an executive copied and pasted the firm’s 2023 strategy document into ChatGPT and asked it to create a PowerPoint deck, effectively giving away the company’s competitive advantage for the year.

In April 2023, it came to light that Samsung employees had inadvertently leaked sensitive company data on three separate occasions. The leaked information reportedly included the source code of software used to measure semiconductor equipment.

Take code, for example. If one user puts proprietary code into ChatGPT to test it, and another prompts the program to write new code, ChatGPT may use elements of the first user’s code to synthesize the second’s. This “accidental plagiarism” has been reported as a concern in many industries and communities.

Although using ChatGPT and other LLMs in the workplace poses security risks and increases the possibility of a breach, companies should resist outright banning AI tools. Employers and employees who do not leverage AI will be left behind as this technology rapidly advances. A more effective approach is to equip employees with an understanding of how AI can improve their efficiency, but also leak information when used inappropriately. In the near future, we are going to see AI awareness training bundled into every company’s employee cybersecurity awareness training.

Protecting against wrongdoers exploiting ChatGPT

Additionally, some wrongdoers have simply found ways to exploit ChatGPT for malicious purposes. Although ChatGPT’s creators built in safeguards to prevent abuse, the open-source nature of the underlying code has allowed hackers to find ways around those restrictions. With these “jailbreak” methods, scammers and other wrongdoers have been able to craft prompts that push the program into generating potentially dangerous text.

Perhaps one of the most obvious (and dangerous) uses of ChatGPT is the creation of phishing emails. After all, ChatGPT’s purpose is to produce convincingly human text, which makes it a perfect tool for scammers seeking to impersonate real people. The problem for legitimate users is that as they train the AI to draft emails for them, the AI can turn around and use that training to draft fraudulent emails for illegitimate users.

Some scammers have even found ways around ChatGPT’s restrictions to help them improve their malware. Of all the program’s capabilities, its ability to write code is among the most rudimentary, which also means the protections against coding for malicious purposes are bare bones. Some hackers have reported success in using ChatGPT to improve their malware programs by disguising them as legitimate.

It is also worth noting that these are only the potentially dangerous applications of ChatGPT itself. Remember: ChatGPT is just one user interface to OpenAI’s models, and those models can be integrated into other applications through an API without the same restrictions. As a result, anyone with the technical skill to integrate OpenAI’s models into their own programs can benefit from, and more concerningly abuse, the data fed into the ChatGPT program.
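To illustrate how low that barrier is, the sketch below calls an OpenAI model directly through the official Python SDK rather than through the ChatGPT interface. The model name and SDK details shown are assumptions that may differ by account and library version; the point is simply that a handful of lines is enough to put the same underlying model inside any program, with whatever prompts and guardrails the integrator chooses.

```python
# A minimal sketch of calling an OpenAI model directly via API,
# bypassing the ChatGPT web interface entirely. Requires the official
# `openai` Python package and an API key in the OPENAI_API_KEY
# environment variable. The model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Draft a short follow-up email to a customer."}
    ],
)

print(response.choices[0].message.content)
```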

ChatGPT lacks human oversight

Another concerning fact about ChatGPT is that, as an AI program, it lacks human oversight. Although ChatGPT was developed by a human team and trained on data curated in part by humans, the ongoing learning and training of its algorithm is not conducted by a person but by the algorithm itself, which refines its responses based on feedback from users.

Although OpenAI has created several channels through which users can report issues with the program, such as the OpenAI Forum, Reddit and GitHub, the massive backlog of reported issues cannot be continuously monitored. Similarly, OpenAI’s “bug bounty program,” which offers users as much as $20,000 for reporting bugs in the system, is inherently flawed. Suppose a wrongdoer finds a loophole with the potential to earn them more than the reward for reporting it. What incentive would they have to report it at all?

Although ChatGPT and other chatbots are valuable tools, as with any technological advancement, users must be careful about the data they feed into them. The more data users provide to these advanced learning models, the more of it is exposed to wrongdoers hoping to exploit the models’ capabilities. Users should take care not only when putting personal data into the ChatGPT model, but also with any other data that could potentially be abused.
