Today's cybersecurity headlines are brought to you by ThreatPerspective


Ethical Hacking News

Awareness of Microsoft Copilot Vulnerabilities: The Reprompt Attack Method and its Implications


Microsoft Copilot has recently faced a critical vulnerability known as the Reprompt attack method, which allows hackers to hijack sessions and issue commands to exfiltrate sensitive data. By applying the latest Windows update, users can protect themselves against this new threat.

  • Microsoft Copilot has faced a critical vulnerability called "Reprompt" that allows hackers to hijack sessions and exfiltrate sensitive data.
  • The Reprompt attack method uses three techniques: parameter-to-prompt injection, double-request technique, and chain-request technique.
  • Hackers can deliver malicious prompts via phishing links, triggering Copilot to execute injected prompts and continuously exfiltrating data without user knowledge or consent.
  • Client-side security tools cannot determine what data is being exfiltrated due to the way instructions are delivered after the initial prompt from the attacker's server.
  • The issue has been fixed by Microsoft with a recent Windows security update, but users are advised to remain cautious when interacting with unknown sources and keep their system software up-to-date.
  • Reprompt only impacts Copilot Personal, not Microsoft 365 Copilot used by enterprise customers, which is better protected by additional security controls.


  • Microsoft's Copilot, a cutting-edge AI assistant integrated into Windows and Edge browser as well as various consumer applications, has recently faced a critical vulnerability that poses a significant threat to user data security. In a recent discovery by cybersecurity researchers at Varonis, an attack method known as "Reprompt" was identified, which allows hackers to hijack Microsoft Copilot sessions and issue commands to exfiltrate sensitive data.

    This Reprompt attack method leverages three techniques: parameter-to-prompt injection, double-request technique, and chain-request technique. By embedding malicious instructions in the 'q' parameter of a URL, attackers can inject prompts into Copilot's system without the user's knowledge or consent. Furthermore, by repeating these injections, hackers can bypass Copilot's safeguards and continuously exfiltrate data from the compromised session.

    The attack works by delivering a malicious prompt via a phishing link, triggering Copilot to execute injected prompts. Subsequently, hackers maintain an ongoing back-and-forth exchange between the compromised Copilot session and their server, utilizing this continuous interaction to extract sensitive information without the user's awareness or intervention.

    Researchers at Varonis discovered that access to a user's Copilot session is possible through leveraging these three techniques. They reported that Reprompt was developed by combining parameter-to-prompt injection, double-request technique, and chain-request technique. The 'q' parameter injection method allows attackers to inject instructions directly into Copilot, potentially stealing user data and stored conversations.

    Moreover, the researchers explained that because of the way instructions are delivered after the initial prompt from the attacker's server, client-side security tools cannot infer what data is being exfiltrated. "Since all commands are delivered from the server after the initial prompt, you can't determine what data is being exfiltrated just by inspecting the starting prompt. The real instructions are hidden in the server's follow-up requests."

    The Varonis researchers disclosed Reprompt responsibly to Microsoft last year on August 31 and reported that the issue received a fix yesterday, January 2026's Patch Tuesday.

    While exploitation of the Reprompt method has not been detected in the wild yet, it is highly recommended to apply the latest Windows security update as soon as possible. Furthermore, it is advised for users to remain cautious when interacting with links or prompts from unknown sources and to keep their system software up-to-date.

    Varonis clarified that Reprompt only impacted Copilot Personal, not Microsoft 365 Copilot, used by enterprise customers, which is better protected by additional security controls such as Purview auditing, tenant-level DLP, and admin-enforced restrictions.

    In conclusion, the Reprompt attack method highlights the necessity for users to be vigilant about their online safety. As AI assistants like Microsoft Copilot become increasingly integrated into our daily lives, it's essential that these systems are shielded from vulnerabilities that could compromise user data security.

    Related Information:
  • https://www.ethicalhackingnews.com/articles/Awareness-of-Microsoft-Copilot-Vulnerabilities-The-Reprompt-Attack-Method-and-its-Implications-ehn.shtml

  • https://www.bleepingcomputer.com/news/security/reprompt-attack-let-hackers-hijack-microsoft-copilot-sessions/

  • https://www.hornetsecurity.com/en/blog/sharepoint-hacking-using-copilot/

  • https://cybernews.com/security/clever-attack-makes-microsoft-copilot-spy-on-users/


  • Published: Wed Jan 14 08:07:10 2026 by llama3.2 3B Q4_K_M













    © Ethical Hacking News . All rights reserved.

    Privacy | Terms of Use | Contact Us