Today's cybersecurity headlines are brought to you by ThreatPerspective


Ethical Hacking News

AI's Growing Ability to Find Bugs Raises Concerns About its Efficacy in Fixing Them


A new generation of AI-powered tools has become strikingly good at finding bugs, but their poor record at validating and fixing what they find is raising concerns, underscoring the need for further research and development in cybersecurity.

  • AI systems have improved at identifying bugs and vulnerabilities in software code.
  • There are concerns about the effectiveness of AI in fixing these issues.
  • A recent tool, Claude Code Security, identified over 500 vulnerabilities in open-source codebases, yet only two or three of them were actually fixed.
  • The issue lies not just with finding bugs, but also validating and patching them.
  • Security teams face challenges in keeping up with the pace of new discoveries and fixes due to a backlog of unanalyzed CVE entries.
  • AI-powered tools can generate high volumes of false positives, which has driven some projects to close their bug bounty programs.



  • In a development that has drawn sharp attention from the cybersecurity community, AI systems have grown significantly more adept at identifying bugs and vulnerabilities in software code. This newfound ability to find issues, however, has raised concerns about how effective the same systems are at actually fixing them.

    According to a report by The Register, a leading technology publication, Anthropic, a company specializing in artificial intelligence, has developed an AI-powered tool called Claude Code Security that is capable of identifying vulnerabilities in software code and proposing patches. While this may seem like a boon for cybersecurity efforts, experts are cautioning that the AI's ability to identify bugs does not necessarily translate to its ability to fix them.

    The Register reported on how Anthropic's red team had used its tool to find over 500 vulnerabilities in production open-source codebases. Upon further investigation, however, only two or three of these identified vulnerabilities were actually fixed. This raises significant concerns about whether the output of AI-powered bug-finding tools translates into meaningful action.

    Guy Azari, a stealth startup founder who previously worked as a security researcher at Microsoft and Palo Alto Networks, expressed skepticism about Claude Code Security's effectiveness at fixing bugs. He pointed out that without Common Vulnerabilities and Exposures (CVE) assignments, the security process remains incomplete. Finding vulnerabilities was never the issue, he said; the problem is the inability to validate and patch them.

    In 2025, according to Azari, the National Vulnerability Database had a backlog of roughly 30,000 CVE entries awaiting analysis, with nearly two-thirds of reported open source vulnerabilities lacking an NVD severity score. This highlights the overwhelming nature of the task at hand for security teams trying to keep up with the pace of new discoveries and fixes.

    Azari also pointed out that maintainers of certain projects, such as the curl project, have had to close their bug bounty programs due to the sheer volume of false positives generated by AI-powered tools. This suggests that while AI may be able to identify bugs, it is not yet able to provide actionable intelligence for security teams.
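
    The false-positive flood the curl maintainers describe is, in part, a filtering problem. As a purely illustrative sketch (the report fields, the dedup fingerprint, and the "quoted code must exist in the tree" heuristic are all assumptions here, not any project's actual triage tooling), a pre-triage filter might drop duplicate AI-generated reports and reports that cite code not present in the target source:

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Report:
    title: str
    file: str      # file the report claims is vulnerable
    snippet: str   # code excerpt the report quotes

def fingerprint(report: Report) -> str:
    """Hash the claimed location plus whitespace-normalized snippet
    so near-identical submissions deduplicate to the same key."""
    key = f"{report.file}:{' '.join(report.snippet.split())}"
    return hashlib.sha256(key.encode()).hexdigest()

def pre_triage(reports, source_files):
    """Keep only first-seen reports whose quoted code actually
    appears in the source tree (AI reports often quote code that
    does not exist -- a common hallucination pattern)."""
    seen, kept = set(), []
    for r in reports:
        fp = fingerprint(r)
        if fp in seen:
            continue  # duplicate of an earlier submission
        seen.add(fp)
        actual = " ".join(source_files.get(r.file, "").split())
        if " ".join(r.snippet.split()) not in actual:
            continue  # quoted code not in tree: likely hallucinated
        kept.append(r)
    return kept
```

    A filter like this does not validate anything; it only cuts the volume human maintainers must read, which is exactly the bottleneck the curl case exposes.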

    In response to these concerns, Feross Aboukhadijeh, CEO of Socket, a security firm, emphasized that the spread of powerful, security-optimized AI tools will present security teams with an increasing torrent of patches, upgrades, and emergency fixes. However, he noted that the key challenge lies in turning vulnerability candidates into validated, reproducible findings that can be acted upon.
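
    Aboukhadijeh's distinction between vulnerability candidates and "validated, reproducible findings" can be sketched as a simple gate: a candidate only graduates to an actionable finding if its reproduction step succeeds deterministically. The code below is a hypothetical illustration of that idea, not Socket's actual pipeline:

```python
from typing import Callable

def validate_candidate(repro: Callable[[], bool], runs: int = 3) -> bool:
    """A candidate is actionable only if its reproduction check
    passes on every run -- flaky repros stay in the backlog."""
    return all(repro() for _ in range(runs))

# Hypothetical repro that deterministically triggers the bug:
def repro_deterministic() -> bool:
    return True

# Hypothetical flaky repro (mutable default used as a call counter):
def repro_flaky(state={"n": 0}) -> bool:
    state["n"] += 1
    return state["n"] % 2 == 1  # fails on every second call
```

    Under this gate, the flaky candidate is rejected even though it sometimes "reproduces" -- which is the point: a finding a team cannot reproduce on demand cannot be safely patched or prioritized.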

    Aboukhadijeh pointed out that Certified Patches, direct fixes applied to a project's existing dependency versions as an alternative to upgrading those dependencies to a patched release, are one way his company is addressing this issue. However, he warned that the competitive advantage will belong not to whoever can generate the most findings, but to those who can convert findings into safe, prioritized, low-disruption changes.
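
    The trade-off behind a patch-in-place approach can be sketched as a remediation policy: upgrade when the fix is a compatible release, but prefer an in-place patch when upgrading would cross a major version and likely break the consumer. This is a hypothetical policy for illustration only, not Socket's actual logic:

```python
def choose_remediation(current: str, patched: str,
                       has_certified_patch: bool) -> str:
    """Pick a fix strategy from semver-style version strings.
    Crossing a major version signals likely breaking changes,
    so an in-place patch (if available) is less disruptive."""
    cur_major = int(current.split(".")[0])
    new_major = int(patched.split(".")[0])
    if cur_major != new_major and has_certified_patch:
        return "apply-certified-patch"
    return f"upgrade-to-{patched}"
```

    The design choice here mirrors Aboukhadijeh's framing: the goal is not the maximum number of fixes shipped, but the fix with the lowest disruption for each finding.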

    The pattern of AI finding bugs far faster than anyone fixes them highlights the need for further research and development in cybersecurity. As AI-powered tools become more prevalent, their limitations and challenges must be fully understood and addressed. Ultimately, security teams will need more than bug-finding capability; they will need comprehensive solutions that translate findings into concrete action.



    Related Information:
  • https://www.ethicalhackingnews.com/articles/AIs-Growing-Ability-to-Find-Bugs-Raises-Concerns-About-its-Efficacy-in-Fixing-Them-ehn.shtml

  • https://go.theregister.com/feed/www.theregister.com/2026/02/24/ai_finding_bugs/


  • Published: Tue Feb 24 17:27:16 2026 by llama3.2 3B Q4_K_M



    © Ethical Hacking News . All rights reserved.

    Privacy | Terms of Use | Contact Us