Ethical Hacking News
IBM's Cost of a Data Breach Report 2025 highlights growing concern over lax AI security among enterprises, with nearly one-third of breached organizations experiencing operational disruption from an AI-related breach. Most organizations lack adequate governance to mitigate AI risk, leaving them vulnerable to attacks and data breaches.
Key points:
- Adequate security measures and governance are lacking for AI systems in many organizations.
- AI-related breaches can cause operational disruption, financial losses, and reputational damage.
- Most organizations (87%) have no governance in place to mitigate AI risk.
- Unsanctioned or "shadow" AI poses a significant security risk.
- Most AI-related security incidents were attributed to third-party SaaS vendors, raising questions about the trustworthiness of those providers.
- AI security must be treated as foundational to prevent attacks and protect sensitive data.
Artificial intelligence (AI) has revolutionized various industries, and its adoption is becoming increasingly widespread across enterprises worldwide. However, as AI systems become more embedded in business operations, a pressing concern is emerging: the lack of adequate security measures and governance surrounding these technologies.
According to IBM's Cost of a Data Breach Report 2025, which analyzed data from 600 organizations globally between March 2024 and February 2025, AI-related exposures currently make up only a small proportion of the total, but these are anticipated to grow in line with greater adoption of AI in enterprise systems. The report reveals that nearly one-third (34%) of the breached organizations experienced operational disruption as a result of an AI-related breach, while 23% incurred financial losses and 17% suffered reputational damage.
Moreover, the survey underpinning the report found that most organizations (87%) have no governance in place to mitigate AI risk. Two-thirds of breached organizations did not perform regular audits to evaluate risk, and more than three-quarters did not conduct adversarial testing on their AI models. This lack of oversight leaves sensitive data and AI systems exposed.
Furthermore, the report highlights the danger of unsanctioned or "shadow" AI, which refers to the unofficial use of these tools within an organization, without the knowledge or approval of the IT or data governance teams. Because shadow AI may go undetected by the organization, there is an increased risk that attackers will exploit its vulnerabilities.
The majority of organizations that reported a security incident involving AI said the source was a third-party vendor providing software as a service (SaaS). This raises concerns about the trustworthiness of SaaS providers and their ability to ensure the security of their AI-based services.
In light of these findings, IBM's VP of Security and Runtime Products, Suja Viswesan, emphasizes the importance of treating AI security as foundational. "The report reveals a lack of basic access controls for AI systems, leaving highly sensitive data exposed and models vulnerable to manipulation," she said. "As AI becomes more deeply embedded across business operations, AI security must be treated as foundational. The cost of inaction isn't just financial; it's the loss of trust, transparency, and control."
The report draws particular attention to the need for organizations to adopt a proactive approach to AI security, including implementing regular audits, adversarial testing, and governance frameworks to mitigate AI risk.
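To make the idea of adversarial testing concrete: the report does not prescribe a specific method, but one common technique is the Fast Gradient Sign Method (FGSM), which nudges an input along the sign of the loss gradient to see whether a small perturbation flips the model's decision. The sketch below is purely illustrative, using a hypothetical toy logistic-regression "classifier" with made-up weights, not anything from the IBM report:

```python
import math

# Hypothetical toy model: a logistic-regression classifier with
# illustrative weights (not taken from any real system).
W = [1.2, -0.8]
B = -0.1

def predict(x):
    """Return P(label = 1) for input x under the toy model."""
    z = sum(w * xi for w, xi in zip(W, x)) + B
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, eps):
    """Fast Gradient Sign Method: perturb x along the sign of the
    cross-entropy loss gradient to probe model robustness."""
    p = predict(x)
    grad = [(p - y) * w for w in W]  # d(loss)/dx for logistic regression
    return [xi + eps * (1 if g > 0 else -1 if g < 0 else 0)
            for xi, g in zip(x, grad)]

x = [1.0, 0.5]                 # benign input, true label y = 1
x_adv = fgsm(x, 1.0, eps=0.6)  # adversarially perturbed input
print(predict(x), predict(x_adv))  # clean score vs. perturbed score
```

With these particular weights, the clean input scores above 0.5 while the perturbed input scores below it, i.e. a small, targeted change flips the classification. Regular adversarial testing of this kind is one way to surface the model vulnerabilities the report says most organizations never check for.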
In conclusion, while AI has immense potential to drive business innovation, its widespread adoption also raises significant security concerns. As enterprises continue to invest in AI technologies, it is essential that they prioritize AI security and governance to protect sensitive data and prevent attacks on their systems.
Related Information:
https://www.ethicalhackingnews.com/articles/The-Lax-Security-of-Artificial-Intelligence-A-Growing-Concern-for-Enterprises-ehn.shtml
https://go.theregister.com/feed/www.theregister.com/2025/07/30/firms_are_neglecting_ai_security/
Published: Wed Jul 30 15:02:14 2025 by llama3.2 3B Q4_K_M