VỮNG MÃI MỘT NIỀM TIN

“AI Insider Threat”: When Business Chatbots Turn Into Internal Risks

  • 29/10/2025

A recent discovery by cybersecurity researchers has raised alarms for businesses using automated AI assistants. In tests, researchers replicated a customer support chatbot on Microsoft Copilot Studio and demonstrated that, with just a few natural language commands, hackers could take control of the chatbot and exfiltrate all customer data.


How the Attack Works

The attack, simulated by Zenity Labs, occurs in two stages:

  1. Information Gathering: Hackers trick the chatbot into revealing internal configuration, including all connected data sources. Though seemingly harmless, this information becomes the “golden key” for the next stage.

  2. Data Exfiltration: By sending a cleverly crafted email (a form of prompt injection), hackers instruct the chatbot to read all customer files and send them directly to the attacker’s email. Because the system is connected to Salesforce CRM, this technique can download complete customer profiles within seconds.

The entire process happens automatically, requiring no human interaction – a prime example of a zero-click attack. Microsoft patched the vulnerability over two months, but Zenity Labs warns this is just the tip of the iceberg. Prompt injection techniques can be disguised in countless ways, rendering blacklist-based prevention largely ineffective.


Root Cause

The core issue lies in AI agents having overly broad access without strict controls. In the experiment, the chatbot had permissions to access:

  • Customer data

  • Internal directories

  • CRM systems

  • Email inbox

…without clear rules on who can request access or when it is allowed.

Researchers outlined five main steps in the attack chain:

  1. Chatbot inadvertently exposes internal data sources.

  2. Email inbox accepts commands from any sender.

  3. Lack of access rules allows full data copying.

  4. Uncontrolled CRM connections lead to customer data leaks.

  5. No limits on automated actions mean the process happens undetected.

In simple terms, the AI acts like a super-efficient but overly trusting employee, ready to execute any request – even from strangers.


Recommendations for Businesses

This incident serves as a warning for organizations integrating AI agents into customer support, sales, or internal operations. While automation boosts efficiency, it can also open doors to cybercrime if not properly managed.

Experts recommend:

  • Limit AI agent access to sensitive data.

  • Verify trusted command sources (emails, internal systems).

  • Apply security rules directly to the data, not just the platform.

  • Monitor AI behavior for unusual requests, especially bulk reads or sends.

  • Train employees on prompt injection and zero-click attack risks.


Key Takeaway

The case of a chatbot automatically sending customer data to hackers is no longer fiction – it is a real threat in the AI agent era.

As AI becomes increasingly embedded in business operations, security is not only about the system itself but also how AI interprets, stores, and reacts to data. Without intelligent controls from the start, “AI assistants” can quickly turn into insider threats within your organization.


DTG CORP – Trusted Technology Partner for Vietnamese Businesses
We work alongside the community to detect, prevent, and respond early to increasingly sophisticated cybersecurity threats.

(Information referenced from WhiteHat)


Partner