17.9 C
Saturday, September 30, 2023

Security Risks in AI Chatbots Spotlighted by UK Cyber Authorities

Must read

Sarah Pereez
Sarah Pereezhttps://lahorelives.com
With almost 3 years of experience in journalism, Sarah Pereez has joined Lahore Lives as a Editor in 2023. She has previously worked as an Entertainment journalist, covering Hollywood & Bollywood news. At Lahore Lives, she tracks news updates, edit articles and write copies for science and technology.

British officials caution organizations regarding incorporating AI-driven chatbots powered by artificial intelligence into their operations, as recent research indicates an increasing susceptibility to manipulation for harmful tasks.

“British Authorities Alert: AI Chatbots Vulnerable to Harmful Manipulation”
“Risks of AI Chatbots: UK Officials Urge Caution in Business Integration”
“Security Concerns Rise as AI Chatbots Show Susceptibility to Exploitation”
“Unforeseen Threats: AI Chatbots’ Potential for Harmful Activities Stirs Concern”
“Chatbot Security Concerns Raised by British National Cyber Security Centre”

The National Cyber Security Centre (NCSC) of the United Kingdom has stated in a series of blog posts set for publication on Wednesday that experts do not yet fully understand the potential security risks connected to large language models (LLMs), which are algorithms capable of producing human-like interactions.

These AI-enabled tools are currently being adopted as chatbots, with early applications aiming to go beyond just online searches and encompass customer service functions and sales interactions.

The NCSC has highlighted potential dangers, especially if such models are intertwined with other components within an organization’s operational processes. Academics and researchers have consistently demonstrated methods to manipulate chatbots by introducing unauthorized instructions or deceiving them into evading their inherent security protocols.

For instance, an AI-driven chatbot employed by a financial institution could be manipulated into executing an unauthorized transaction if a malicious actor structures their query meticulously.

The NCSC mentioned, “Organizations engaged in developing services utilizing LLMs should exercise caution, similar to their approach towards using products or code libraries in the beta phase.” The term “beta” here refers to experimental software releases.

“They should exercise caution when allowing the product to be used in customer transactions, and similar prudence should be extended to LLMs.”

Across the globe, regulatory bodies are grappling with the proliferation of LLMs, like OpenAI’s ChatGPT, which businesses are assimilating into an array of services, including sales and customer support. The security implications of AI remain an evolving focal point, as authorities in the United States and Canada report hackers’ increasing utilization of this technology.

- Advertisement -

More articles


Please enter your comment!
Please enter your name here

- Advertisement -

Recent Updates