Ethical Hacking News
Researchers have uncovered a new font-rendering attack that allows malicious commands to be hidden from AI assistants, emphasizing the need for improved security measures in these systems.
A recent discovery by browser-based security company LayerX has revealed a new font-rendering attack that can hide malicious commands from AI assistants. The technique, which relies on social engineering to persuade users to run a malicious command displayed on a webpage, takes advantage of the disconnect between what an AI assistant sees and what a user sees when interacting with a webpage.
According to LayerX researchers, this disconnect lets attackers alter the human-visible meaning of a page without changing the underlying DOM. While AI assistants analyze a webpage as structured text, browsers render that text into a visual representation for the user. Within this rendering layer, an attacker-supplied custom font can draw harmless-looking DOM text as an entirely different string: the malicious command is displayed clearly to the user, while the AI assistant sees only innocuous characters.
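The disconnect can be modeled in a few lines. A font is essentially a mapping from code points to glyph shapes, so a malicious font can draw one character as a completely different-looking symbol. The strings and glyph table below are invented for illustration; they are not from the LayerX proof of concept.

```python
# Toy model of the font-rendering trick: the DOM holds one string,
# but an attacker-supplied font paints each code point as a different
# glyph, so the user reads something else entirely.

dom_text = "Go win a key"    # what an AI assistant reads from the DOM
user_sees = "rm -rf / now"   # what the custom font paints on screen

# Per-code-point glyph table the attacker's font would define
# (e.g. the glyph for "G" is drawn as an "r", "o" as an "m", ...).
glyph_table = dict(zip(dom_text, user_sees))

def render(text: str, font: dict) -> str:
    """Simulate the browser's rendering layer: draw each code point
    with whatever glyph the active font assigns to it."""
    return "".join(font.get(ch, ch) for ch in text)

assert render(dom_text, glyph_table) == user_sees
# An assistant asked "is this safe?" only ever inspects dom_text.
```

The key point the sketch captures is that nothing in the DOM changes: any tool that parses the page as text, including an LLM, sees only `dom_text`.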
The attack begins when the user visits a page that appears safe and promises some reward, obtainable by running a command that in fact opens a reverse shell on the victim's machine. If the victim asks an AI assistant whether the instructions are safe, they will receive a reassuring response, because the tool has analyzed only the harmless text the attacker placed in the DOM, not the malicious instruction rendered to the user in the browser.
To demonstrate the attack, LayerX created a proof-of-concept (PoC) page that promises an Easter egg for the video game BioShock if the user follows onscreen instructions. The underlying HTML of the PoC contains harmless text that is hidden from the user but visible to the AI assistant. It also contains a malicious command disguised as BioShock content, which is rendered visibly to users via a custom font.
Researchers at LayerX say that an LLM (Large Language Model) analyzing both the rendered page and the text-only DOM would be better equipped to judge whether a page is safe for the user. They also offer recommendations for LLM vendors, including treating fonts as a potential attack surface and extending parsers to scan for foreground/background color matches, near-zero opacity, and abnormally small fonts.
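The recommended parser checks can be sketched as a simple style scanner. This is a minimal illustration, assuming the scanner already has each element's computed CSS properties as a dictionary; the thresholds are invented, not taken from LayerX's recommendations.

```python
# Heuristic flags for text that is present in the DOM but likely
# invisible (or misrendered) to the user. Thresholds are illustrative.

def suspicious_style(style: dict) -> list:
    """Return a list of reasons an element's styling looks like an
    attempt to hide text from the human reader."""
    flags = []
    if style.get("color") == style.get("background-color"):
        flags.append("text color matches background")
    if float(style.get("opacity", 1)) < 0.05:
        flags.append("near-zero opacity")
    if float(style.get("font-size", "16px").rstrip("px")) < 2:
        flags.append("tiny font size")
    return flags

hidden = {"color": "#fff", "background-color": "#fff",
          "opacity": "0.01", "font-size": "1px"}
print(suspicious_style(hidden))  # all three heuristics fire
```

A real scanner would also need to inspect the fonts themselves (e.g. flag pages that load a custom font whose glyphs do not match the code points they are assigned to), which these style checks alone cannot catch.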
Google initially accepted the report but later downgraded and closed the issue, stating that it couldn't cause "significant user harm" and that it was "overly reliant on social engineering." Microsoft was the only vendor that fully addressed the issue, opening a case with MSRC (the Microsoft Security Response Center). Users should understand that relying solely on AI assistants without additional safeguards can lead to inaccurate responses, dangerous recommendations, and eroded trust.
The discovery of this new font-rendering trick highlights the importance of continued research into vulnerabilities in AI systems and their potential misuse by attackers. As AI technology advances, it's crucial that developers and vendors prioritize security and implement robust safeguards against such attacks.
Related Information:
https://www.ethicalhackingnews.com/articles/New-Font-Rendering-Trick-Hides-Malicious-Commands-from-AI-Assistants-ehn.shtml
https://www.bleepingcomputer.com/news/security/new-font-rendering-trick-hides-malicious-commands-from-ai-tools/
https://layerxsecurity.com/blog/poisoned-typeface-a-simple-font-rendering-poisons-every-ai-assistant-and-only-microsoft-cares/
Published: Tue Mar 17 10:29:55 2026 by llama3.2 3B Q4_K_M