Ethical Hacking News
Pen testers at Pen Test Partners exposed four security vulnerabilities in Eurostar's AI chatbot, prompting a heated response from the train operator's head of security that has left many in the cybersecurity community questioning the company's handling of the issue. Despite reporting the flaws to Eurostar via its vulnerability disclosure program, the researchers were accused of "blackmail" by the company, sparking outrage and debate among experts about the importance of acknowledging and responding to security reports.
The story begins when Ross Donald and Ken Munro, managing partner and team lead at Pen Test Partners respectively, reported their findings to Eurostar via its vulnerability disclosure program. The researchers had discovered several security issues with the chatbot, including a weakness that allowed an attacker to inject malicious HTML content or trick the bot into leaking system prompts.
However, Donald and Munro received no response to their initial report. It was only after they contacted Eurostar's head of security on LinkedIn that the issue was escalated, at which point the researchers were told to use the vulnerability reporting program they had already used. The company even went so far as to characterise the researchers' report as "blackmail," a claim that has been met with skepticism by many in the cybersecurity community.
According to Munro, the problems lay not just with the chatbot's design but also with how Eurostar handled the vulnerability disclosure process. In his account of events, Munro writes: "Maybe a simple acknowledgement of the original email report would have helped?" This comment has sparked debate among experts about the importance of acknowledging and responding to security reports in a timely manner.
The vulnerabilities themselves are relatively easy to abuse and stem from the API-driven chatbot's design. Every time a user sends a message to the chatbot, the frontend relays the entire chat history - not just the latest message - to the API. However, the server only runs a guardrail check on the latest message to ensure that it's allowed. If the message is allowed, the server marks it "passed" and returns a signature. But if the message doesn't pass the safety checks, the server responds with "I apologise, but I can't assist with that specific request" and no signature.
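The pattern described above can be sketched in a few lines of Python. This is a hypothetical illustration of the design flaw, not Eurostar's actual code: all function and variable names are invented, and the guardrail is a trivial stand-in for a real safety filter. The point is that only the latest message is checked and signed, so earlier entries in the client-supplied history reach the model untouched:

```python
import hmac
import hashlib

SERVER_KEY = b"server-secret"  # hypothetical server-side signing key


def guardrail_check(message: str) -> bool:
    # Stand-in for the real safety filter: block an obvious injection marker.
    return "<script>" not in message.lower()


def sign(message: str) -> str:
    # Server signs a message it has vetted.
    return hmac.new(SERVER_KEY, message.encode(), hashlib.sha256).hexdigest()


def handle_chat(history: list[str]) -> dict:
    """Flawed pattern: only the *latest* message is checked and signed.

    Earlier messages in the client-supplied history are trusted as-is,
    which is the tampering window the researchers describe.
    """
    latest = history[-1]
    if not guardrail_check(latest):
        return {"reply": "I apologise, but I can't assist with that specific request."}
    # Earlier history items would be forwarded to the model unchecked here.
    return {"reply": "...", "signature": sign(latest)}


# A benign latest message passes even though an earlier message was tampered with:
tampered_history = ["<script>alert(1)</script>", "What time is my train?"]
response = handle_chat(tampered_history)
```

Because the server never re-inspects `tampered_history[0]`, the injected content rides along with a perfectly innocent final message.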
This design flaw has significant implications for the security of Eurostar's chatbot, as it leaves earlier messages vulnerable to tampering and injection of malicious code. According to Munro, this weakness could be exploited by an attacker to trick the bot into leaking sensitive information or even injecting phishing links into users' browsers.
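One common hardening against this class of flaw, sketched below under the same hypothetical names as before (this is not Eurostar's fix, just a standard approach), is to sign the entire vetted conversation rather than only the newest message, and to reject any resubmitted history whose signature no longer verifies:

```python
import hmac
import hashlib
import json
from typing import Optional

SERVER_KEY = b"server-secret"  # hypothetical key; use a managed secret in practice


def guardrail_check(message: str) -> bool:
    # Trivial stand-in for a real safety filter.
    return "<script>" not in message.lower()


def sign_history(history: list[str]) -> str:
    # Sign the whole conversation, not just the newest message.
    payload = json.dumps(history).encode()
    return hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()


def handle_chat(history: list[str], prior_sig: Optional[str]) -> dict:
    prior = history[:-1]
    # Reject any history whose earlier messages were altered client-side.
    if prior and (prior_sig is None
                  or not hmac.compare_digest(prior_sig, sign_history(prior))):
        return {"error": "history tampered or unsigned"}
    if not guardrail_check(history[-1]):
        return {"reply": "I apologise, but I can't assist with that specific request."}
    return {"reply": "...", "signature": sign_history(history)}
```

With this scheme the client must return the signature it was given on the previous turn; editing any earlier message invalidates it, closing the tampering window without re-running the guardrail over the whole history on every turn.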
The researchers point out that while some of the issues may have been patched, they are still unsure whether all of the vulnerabilities have been fixed. In fact, the discovery of these security flaws has raised questions about how many disclosures were lost during the vulnerability disclosure process due to a change in Eurostar's VDP (vulnerability disclosure program).
The incident serves as a cautionary tale for companies with consumer-facing chatbots - a growing concern in today's digital landscape. As more and more organizations turn to AI-powered chatbots, it is essential that they build robust security controls into their systems from the start.
In conclusion, this recent incident highlights the importance of responsible disclosure practices and the need for companies to prioritize security when developing public-facing chatbots. It also underscores the risks associated with poorly designed systems, particularly those reliant on AI-driven interfaces.
Related Information:
https://www.ethicalhackingnews.com/articles/Pen-testers-accused-of-blackmail-after-reporting-Eurostar-AI-chatbot-flaws-ehn.shtml
https://go.theregister.com/feed/www.theregister.com/2025/12/24/pentesters_reported_eurostar_chatbot_flaws/
Published: Thu Dec 25 10:12:40 2025 by llama3.2 3B Q4_K_M