Ethical Hacking News
AI systems are being adopted rapidly across industries, yet the tools and skills needed to secure them are lagging behind. A recent study finds that many CISOs are defending these systems with yesterday's skills and tools, underscoring the need for proactive measures against the growing threat landscape associated with AI.
The majority of US CISOs struggle to defend AI systems with current tools and skills. Only 11% have security tools specifically designed for securing AI infrastructure, while 75% rely on legacy security controls. The primary obstacle is the lack of internal expertise, identified by 50% of respondents as their top challenge. Limited visibility into AI usage and insufficient security tools are also significant concerns. The gap in expertise has far-reaching consequences, including ineffective centralized oversight and difficulty assessing risk. AI introduces new behaviors that security teams must assess, such as autonomous decision-making and indirect access paths. The legacy control approach can be problematic, as it may not account for AI's changing access patterns and expanded attack paths. Organizations must prioritize developing robust AI security practices and expertise to mitigate the risks inherent in this field.
The advent of Artificial Intelligence (AI) has transformed numerous aspects of our lives, from routine tasks like data analysis to complex processes such as decision-making, problem-solving, and automation. However, as AI is integrated into ever more facets of modern life, including enterprise infrastructure, a pressing concern emerges: the lack of adequate security measures to safeguard these AI systems.
A recent study by Pentera, a cybersecurity company, sheds light on this issue. The survey behind its AI and Adversarial Testing Benchmark Report 2026, which polled 300 US CISOs and senior security leaders, reveals a stark reality: the majority of these security experts are struggling to defend AI systems with tools and skills that were not designed for the challenge.
The study found that only 11 percent of respondents reported having security tools specifically designed for securing AI infrastructure. Moreover, 75 percent of CISOs rely on legacy security controls, such as endpoint, application, cloud, or API security tools, to protect AI systems. This approach, while providing basic coverage, can prove inadequate in the face of emerging threats.
The primary obstacle hindering the development of effective AI security measures is the lack of internal expertise. A staggering 50 percent of respondents identified this as their top challenge, followed closely by limited visibility into AI usage (48 percent) and insufficient security tools designed specifically for AI systems (36 percent). The study also revealed that only 17 percent cited budget constraints as a primary concern, suggesting that many organizations are willing to invest in AI security but do not possess the necessary specialized skills.
The consequences of this gap in expertise are far-reaching. As AI systems become more complex and more deeply embedded in corporate infrastructure, effective centralized oversight has eroded. This makes it difficult for security teams to assess risk, because basic questions often remain unanswered: which identities do AI systems rely on, what data can they access, and how do they behave when controls fail?
Furthermore, AI introduces behaviors that security teams are still learning to assess, including autonomous decision-making, indirect access paths, and privileged interaction between systems. Without the right expertise and active testing, it becomes difficult to evaluate whether existing controls are effective as intended.
The legacy control approach, where most enterprises extend existing security controls to cover AI infrastructure, can also be problematic. This method reflects a familiar pattern seen during previous technology shifts, where organizations initially adapt existing defenses before more tailored security practices emerge. While this can provide basic coverage, controls built for traditional systems may not account for how AI changes access patterns and expands potential attack paths.
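To make the "indirect access path" problem concrete, here is a minimal, hypothetical sketch (the identities, resources, and ACLs are invented for illustration, not taken from the report). A legacy control that validates only the human caller's permissions never sees that an AI agent, running under its own broader service identity, can fetch data the user could not reach directly:

```python
# Hypothetical sketch: why per-user ACL checks can miss AI-mediated access.
# All names here (users, resources, service accounts) are illustrative.

USER_ACL = {"alice": {"public_docs"}}  # what human users may read directly
AGENT_ACL = {"ai-agent-svc": {"public_docs", "hr_records"}}  # agent's broader grant

def legacy_check(user: str, resource: str) -> bool:
    """Legacy control: validates only the requesting human's permissions."""
    return resource in USER_ACL.get(user, set())

def agent_fetch(resource: str) -> bool:
    """The agent fetches with its own service identity, not the user's."""
    return resource in AGENT_ACL["ai-agent-svc"]

# Alice cannot read hr_records directly...
assert not legacy_check("alice", "hr_records")
# ...but a prompt that steers the agent into fetching it succeeds:
# an indirect access path the legacy control never evaluated.
assert agent_fetch("hr_records")
```

The fix implied by the report's findings is not more of the same controls but AI-aware ones, for example propagating the end user's identity through the agent's tool calls rather than relying on a single privileged service account.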
As AI becomes an integral part of enterprise infrastructure, it is imperative that organizations focus on building expertise and improving their ability to validate security controls across environments where AI is already operating. This will require significant investments in training and upskilling programs, as well as the development of specialized tools designed specifically for securing AI systems.
The findings of this study serve as a stark reminder of the pressing need for proactive measures to address the growing threat landscape associated with AI. As we continue to navigate the complexities of an increasingly AI-driven world, it is essential that organizations prioritize the development of robust AI security practices and expertise to mitigate the risks inherent in this rapidly evolving field.
Related Information:
https://www.ethicalhackingnews.com/articles/The-AI-Security-Gap-How-CISOs-are-Struggling-to-Keep-Pace-with-the-Rapidly-Evolving-Threat-Landscape-ehn.shtml
https://thehackernews.com/2026/03/ai-is-everywhere-but-cisos-are-still.html
https://www.scworld.com/perspective/most-cisos-now-own-ai-security-heres-what-that-means-for-your-business
Published: Tue Mar 17 07:05:04 2026 by llama3.2 3B Q4_K_M