Today's cybersecurity headlines are brought to you by ThreatPerspective


Ethical Hacking News

AIOps Under Siege: Researchers Warn of Poissoned Telemetry Attacks on AI-Driven IT Operations


Researchers at RSAC Labs and George Mason University have discovered a vulnerability in AI-driven AIOps tools, which can be exploited by attackers through "poisoned telemetry" attacks. This highlights the need for robust security measures to protect these systems from potential threats.

  • AIOps (Artificial Intelligence for IT Operations) tools can be vulnerable to attacks, compromising IT infrastructure.
  • The attack method is "garbage in, garbage out," where attackers create malicious telemetry data to mislead AI-driven AIOps agents.
  • Attackers can manipulate AIOps tools into installing harmful software updates using malicious telemetry payloads.
  • These attacks do not require extensive resources or time to mount and can compromise system security if not properly secured.



  • The use of artificial intelligence (AI) to optimize and automate IT operations has been touted as a revolutionary approach to improve efficiency, reduce costs, and enhance overall system performance. AIOps (Artificial Intelligence for IT Operations) refers to the application of machine learning algorithms to analyze vast amounts of data from various sources within an organization's IT infrastructure. This data includes system logs, performance metrics, traces, and alerts, which are used to detect problems and suggest or carry out corrective actions.

    However, a recent study published by researchers at RSAC Labs and George Mason University has revealed that AIOps tools can be vulnerable to attacks, compromising the integrity of the IT infrastructure they manage. The attack method is known as "garbage in, garbage out," where attackers create malicious telemetry data designed to mislead AI-driven AIOps agents into taking actions that compromise system security.

    In their study, titled "When AIOps Become 'AI Oops': Subverting LLM-driven IT Operations via Telemetry Manipulation," the researchers described how they used a fuzzer to enumerate available endpoints within a target application. These endpoints are associated with actions that create telemetry entries for events such as login attempts, adding items to a web shopping cart, or submitting search queries. The study demonstrated how these malicious telemetry payloads could be designed to manipulate AIOps tools into installing harmful software updates.

    One notable example cited in the paper was an AIOps agent managing the SocialNet application, part of the DeathStarBench testing suite, which was tricked into remediating a perceived error by installing a malicious package. The fuzzer sent a POST request to the target API, and the application recorded a log entry detailing the error message and suggesting a fix that involved upgrading an installed package.

    The researchers emphasized that these attacks do not require extensive resources or time to mount. The extent of the effort needed depends on various factors such as the nature of the system/model being attacked, the specifics of its implementation, how the model interprets logs, etc. The attack method leverages the "garbage in, garbage out" principle, where malicious telemetry data is fed into an AIOps system to yield misleading results.

    The study highlights a critical vulnerability in AI-driven AIOps systems, which can compromise their effectiveness and even lead to security breaches if not properly secured. As organizations increasingly rely on these systems to optimize their IT operations, it is essential for them to acknowledge this risk and take proactive steps to mitigate it.

    In conclusion, the recent findings from the RSAC Labs and George Mason University study underscore the importance of ensuring that AI-driven AIOps tools are designed with robust security measures in place. This includes implementing strict controls on telemetry data input, regularly updating software to prevent exploitation by attackers, and developing strategies for identifying and addressing vulnerabilities before they can be exploited.

    As organizations continue to navigate the complex landscape of modern IT operations, it is crucial that they prioritize the development of secure AI-driven AIOps systems and maintain a proactive stance against potential threats. By doing so, they can ensure that their IT infrastructure remains protected from potential attacks, maintaining the integrity and security of critical system resources.



    Related Information:
  • https://www.ethicalhackingnews.com/articles/AIOps-Under-Siege-Researchers-Warn-of-Poissoned-Telemetry-Attacks-on-AI-Driven-IT-Operations-ehn.shtml

  • https://go.theregister.com/feed/www.theregister.com/2025/08/12/ai_models_can_be_tricked/


  • Published: Tue Aug 12 02:23:53 2025 by llama3.2 3B Q4_K_M













    © Ethical Hacking News . All rights reserved.

    Privacy | Terms of Use | Contact Us