Ethical Hacking News
The debate over whether AI favors defense or offense sparked significant discussion among security experts at Black Hat 2025. With AI now a critical component of both cybersecurity strategies and attack methodologies, understanding the nuances of its application is crucial for organizations trying to stay ahead of a rapidly evolving threat landscape.
In short: defenders currently hold the AI advantage, but that lead may not last, and the technology's value on either side depends on careful implementation and human oversight.
The 2025 edition of Black Hat, one of the largest security shindigs in the industry, has just concluded its keynote sessions, bringing to light a critical discussion on the role of artificial intelligence (AI) in cybersecurity. The central theme that emerged from these sessions is that AI is slowly but surely becoming a more significant force in both defense and attack, forcing organizations to rethink their approach to security.
Mikko Hyppönen, outgoing chief research officer for Finnish security firm WithSecure, kicked off the discussion by stating that AI has become a key field in security, with defenders currently ahead of attackers. He pointed out that no zero-day vulnerabilities were discovered by AI systems in 2024, but that researchers have since found around two dozen using large language models (LLMs), all of which were later fixed. Hyppönen warned, however, that hackers are increasingly leveraging AI for research and will inevitably find more ways to exploit it.
In stark contrast, Nicole Perlroth, a former New York Times security correspondent and now a partner at venture capital biz Silver Buckshot Ventures, presented an opposing view during her keynote. She argued that by next year, offensive capabilities built around AI will have the upper hand, pointing to the roughly 500,000 unfilled positions in the US security industry as a reason defenders will struggle to keep pace.
These divergent perspectives are not surprising given the rapid evolution of AI tools and their integration into the cybersecurity landscape. Both Hyppönen and Perlroth highlighted that while AI may currently favor defense, its utility on both sides will only continue to grow.
AI's role in red teaming for penetration testing was also a focal point at Black Hat 2025. Charles Henderson, an executive veep at cybersecurity firm Coalfire, noted that his organization was using AI tools, but stressed the need for human oversight and direction, as the technology's utility depends heavily on proper implementation. He emphasized that while AI can do about 60 percent of the job when properly directed, it is fundamentally flawed if left to operate without human guidance.
Chris Yule, director of threat research at the Sophos Cyber Threat Unit, echoed similar sentiments, suggesting that the optimal approach for red teams is to use AI as an augmentative tool rather than a replacement. He advocated setting clear, limited goals and then guiding machine learning systems with careful human oversight for best results.
The disparity in views on AI's utility reflects the complex landscape of AI adoption in cybersecurity: while some organizations are already harnessing its capabilities, others are still grappling with how to integrate it into their security strategies.
As it becomes increasingly apparent that both defense and attack will rely heavily on AI in the coming years, one thing is clear: this technology demands a thoughtful and well-informed approach from those tasked with safeguarding our digital world. Without careful consideration of its benefits and pitfalls, organizations risk falling prey to the rapidly evolving threat landscape.
Related Information:
https://www.ethicalhackingnews.com/articles/The-AI-Imperative-Balancing-Defense-and-Attack-ehn.shtml
https://go.theregister.com/feed/www.theregister.com/2025/08/11/ai_security_offense_defense/
https://www.microsoft.com/en-us/microsoft-cloud/blog/2025/01/14/enhancing-ai-safety-insights-and-lessons-from-red-teaming/
https://www.csoonline.com/article/4029862/how-ai-red-teams-find-hidden-flaws-before-attackers-do.html
Published: Mon Aug 11 11:41:10 2025 by llama3.2 3B Q4_K_M