Cyber Alert From NCSC Regarding AI Chatbots

August 31, 2023

The UK’s National Potential Security Centre (NCSC) has warned organizations about the potential hazards posed by large language models (LLMs), including OpenAI’s ChatGPT.

The UK government body cautioned prudence in a new post when integrating LLMs into products or services. The global IT community “doesn’t yet fully understand LLM’s capabilities, weaknesses and (critically) vulnerabilities,” according to the NCSC, adding that AI chatbots occupy a “blind spot” in human understanding.

Although LLMs are primarily machine learning technologies, the NCSC stated that they are also exhibiting general AI capabilities, which both academics and business are still striving to fully comprehend.

Prompt injection attacks, in which attackers modify LLM output to launch frauds or other cyber-attacks, were a significant danger mentioned in the blog. This is because, according to studies, LLMs are unable to tell the difference between a command and the data used to support it, according to the NCSC.

Prompt injection attacks may also result in more hazardous consequences. The NCSC provided a hypothetical assault against an LLM assistance that a bank uses to facilitate customer inquiries. Here, an attacker may employ a prompt injection attack to trick the chatbot into sending the user’s money to the attacker’s account.

The NCSC stated that although there “are no surefire mitigations” at this time, research on potential defenses against these kinds of attacks is still in progress. It stated that in order to test applications built on LLMs, we might need to use various strategies, such as social engineering-style methods to persuade models to ignore their instructions or uncover flaws in those instructions.

Be Wary of Recent AI Trends

The NCSC also emphasized the dangers of utilizing LLMs in the quickly developing AI business. As a result, businesses who develop services using LLM APIs “need to account for the fact that models might change behind the API you’re using (breaking existing prompts), or that a key part of your integrations might cease to exist.”

“The emergence of LLMs is unquestionably a very exciting time in technology,” the blog concluded. Numerous individuals and organizations, including the NCSC, are interested in exploring and taking use of this novel concept that has arrived almost entirely unexpectedly.

“However, just as they would be if they were utilizing a product or code library that was in beta, enterprises developing services that use LLMs need to exercise caution. They may not allow that product to participate in transactions made on the client’s behalf and, hopefully, they do not yet fully trust it. 

Oseloka Obiora, chief technology officer at RiverSafe, responded to the NCSC’s warning by arguing that if companies don’t carry out the most fundamental due diligence processes, the race to embrace AI could have terrible results.

Senior leaders should reconsider before adopting the newest AI trends, weigh the benefits and risks, and put in place the appropriate cyber protection to keep the firm safe, said Obiora.

Share Us