British authorities are advising firms against incorporating artificial intelligence chatbots into their operations, saying a growing body of research has revealed they can be tricked into carrying out harmful tasks.
On Wednesday, Britain’s National Cyber Security Center (NCSC) said experts have not yet come to grips with the potential security problems associated with algorithms that can create human voice interactions. are — dubbed major language models, or LLMs.
An early use of AI-powered tools is being seen as chatbots, which some envision not only for Internet searches but also for customer service tasks and sales calls.
The NCSC said this could carry risks, particularly if such models were incorporated into the business processes of other elements of the organisation. Academics and researchers have repeatedly found ways to subvert chatbots by giving them rogue commands or fooling them into bypassing their own built-in guardrails.
For example, an AI-powered chatbot deployed by a bank can be tricked into making unauthorized transactions if a hacker gets their query right.
NCSC said in a blog post referring to the experimental software releases, “Building services organizations that use LLMs need to be careful, as well, if they use such products or code libraries. I’m doing what I want, son.”
“They may not allow the product to engage in transactions on behalf of the customer, and hopefully not fully trust it. The same caution should apply to LLMs.”
Authorities around the world are dealing with the rise of LLMs, such as OpenAI’s ChatGPT, which businesses are incorporating into a wide range of services, including sales and customer care. The security implications of AI are also still coming into focus, with officials in the US and Canada saying they’ve seen hackers embrace the technology.
A recent Reuters/Ipsos survey found that many corporate employees are using tools like ChatGPT to help with basic tasks, such as drafting emails, summarizing documents and conducting preliminary research.
Some 10% of those polled said their bosses explicitly banned external AI tools, while a quarter didn’t know if their company allowed the use of the technology. .
The race to integrate AI into business practices will have “disastrous consequences” if business leaders fail to introduce the necessary checks, said Osloka Obiora, chief technology officer at cybersecurity firm RiverSafe.
“Instead of jumping into bed with the latest AI trends, senior executives should think again,” he said. “Be sure to assess the benefits and risks as well as implement the necessary cyber protection so that the organization is protected from harm.”