Ethical Hacking News
AI agents are increasingly being used to automate tasks, including cryptocurrency theft. Researchers from University College London (UCL) and the University of Sydney (USYD) have developed an AI agent system called A1 that can generate exploits for vulnerabilities in smart contracts. The system demonstrated a 62.96 percent success rate on the VERITE benchmark and spotted nine additional vulnerable contracts. While A1 is a promising use case for AI agents, it also highlights the need for more effective security measures to combat the growing threat of crypto theft.
Key Points:
- AI agents can be used to generate exploits for vulnerabilities in smart contracts.
- The cryptocurrency industry lost almost $1.5 billion to hacking attacks last year alone.
- An AI agent system called A1 has been developed to exploit Solidity smart contracts, achieving a 62.96 percent success rate on the VERITE benchmark.
- AI models tend to achieve higher success rates than conventional tools at exploiting smart contract vulnerabilities.
- Current bug bounty rewards cover only a small share of exploit value, too little to make responsible disclosure competitive with attack.
- Defenders should use proactive security tools like A1 to stay ahead of attackers as AI models improve.
AI agents are increasingly being used to automate all manner of tasks, and one of the more troubling recent applications is cryptocurrency theft. According to a recent study by researchers from University College London (UCL) and the University of Sydney (USYD), AI agents can be used to generate exploits for vulnerabilities in smart contracts.
Smart contracts are self-executing programs on various blockchains that carry out decentralized finance (DeFi) transactions when certain conditions are met. However, like most complex systems, they have bugs, and exploiting those bugs to steal funds can be remunerative. In fact, the cryptocurrency industry lost almost $1.5 billion to hacking attacks last year alone.
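To make the bug class concrete, here is a minimal, purely illustrative Python simulation of one classic smart contract flaw, reentrancy, in which a contract pays out before updating its books. The `Vault` and `Attacker` classes below are hypothetical teaching constructs, not code from the study:

```python
class Vault:
    """Toy model of a contract that sends funds before updating state."""
    def __init__(self, balances):
        self.balances = dict(balances)

    def withdraw(self, caller):
        amount = self.balances.get(caller.address, 0)
        if amount > 0:
            caller.receive(self, amount)       # external call happens FIRST...
            self.balances[caller.address] = 0  # ...state is zeroed only afterwards

class Attacker:
    """Re-enters withdraw() from its payment callback, draining the vault."""
    address = "0xattacker"
    def __init__(self):
        self.stolen = 0
    def receive(self, vault, amount):
        self.stolen += amount
        if self.stolen < 30:      # cap so the demo terminates; a real attack
            vault.withdraw(self)  # would keep going until the vault was empty

vault = Vault({"0xattacker": 10})
attacker = Attacker()
vault.withdraw(attacker)
print(attacker.stolen)  # 30: the 10-unit deposit was withdrawn three times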
The researchers developed an AI agent system called A1 that uses various AI models from OpenAI, Google, DeepSeek, and Alibaba (Qwen) to develop exploits for Solidity smart contracts. Given a set of target parameters – the blockchain, contract address, and block number – the agent chooses tools and collects information to understand the contract's behavior and vulnerabilities.
It then generates exploits in the form of compilable Solidity contracts, which it tests against historical blockchain states. The researchers tested A1 with various LLMs: o3-pro (OpenAI o3-pro, o3-pro-2025-06-10), o3 (OpenAI o3, o3-2025-04-16), Gemini Pro (Google Gemini 2.5 Pro Preview, gemini-2.5-pro), Gemini Flash (Google Gemini 2.5 Flash Preview 05-20:thinking, gemini-2.5-flash-preview-04-17), R1 (DeepSeek R1-0528), and Qwen3 MoE (Qwen3-235B-A22B).
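The paper's actual agent loop is not reproduced here, but the workflow the article describes (gather context, generate a Solidity exploit, compile it, replay it against historical chain state, feed failures back to the model) might be sketched roughly as follows. Every helper in this sketch (`fetch_source`, `ask_llm_for_exploit`, `compile_solidity`, `run_on_fork`) is a hypothetical placeholder, not a name from the paper:

```python
from dataclasses import dataclass

@dataclass
class ForkResult:
    profit: int   # value extracted by the exploit attempt
    trace: str    # execution trace, used as feedback to the model

def fetch_source(chain: str, address: str, block: int) -> str:
    return "contract Target { /* ... */ }"   # would pull verified source and on-chain state

def ask_llm_for_exploit(context: str, feedback: str) -> str:
    return "contract Exploit { /* ... */ }"  # would prompt o3-pro, Gemini, R1, etc.

def compile_solidity(source: str):
    return object()  # would invoke solc and return bytecode, or None on failure

def run_on_fork(chain: str, block: int, artifact) -> ForkResult:
    return ForkResult(profit=0, trace="reverted")  # would replay against historical state

def generate_exploit(chain: str, address: str, block: int, max_iters: int = 5):
    """Iteratively query the model, compile its output, and validate on a fork."""
    context = fetch_source(chain, address, block)
    feedback = ""
    for _ in range(max_iters):
        exploit_src = ask_llm_for_exploit(context, feedback)
        artifact = compile_solidity(exploit_src)
        if artifact is None:
            feedback = "compilation failed"
            continue
        result = run_on_fork(chain, block, artifact)
        if result.profit > 0:
            return exploit_src   # a validated, profitable exploit contract
        feedback = result.trace  # feed the failure trace back to the model
    return None

print(generate_exploit("ethereum", "0xTargetContract", 19_000_000))  # None with these stubs
```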
The system demonstrated a 62.96 percent success rate on the VERITE benchmark, which measures how reliably an exploit generator can produce working attacks against known-vulnerable smart contracts. Moreover, A1 spotted nine additional vulnerable contracts, five of which appeared after the training cutoff of the best-performing model, OpenAI's o3-pro.
Across these models, OpenAI's tended to deliver the highest success rates. The researchers also found that exploiting smart contract vulnerabilities can be highly lucrative: A1 extracted up to $8.59 million in a single case, and $9.33 million in total across 26 successful cases.
The researchers argue that the current reward structure for bug bounty programs is too weak to steer would-be attackers toward responsible disclosure, since bounties typically pay only a small percentage of the value at risk. An attacker armed with AI tooling stands to earn far more by exploiting a vulnerability than by reporting it.
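A back-of-the-envelope comparison using the study's own figures makes the incentive gap plain. The 10 percent bounty share cited below and the $8.59 million extraction above come from the article; the flat-rate assumption and the omission of gas and API costs are simplifications:

```python
# Payoff comparison using the study's figures: an $8.59M exploit and a
# typical 10 percent bounty (flat rate assumed; gas and API costs ignored).
exploit_value = 8_590_000  # largest single A1 extraction, in USD
bounty_rate = 0.10         # typical share of funds at risk paid to reporters

bounty = exploit_value * bounty_rate
print(f"White-hat bounty:  ${bounty:,.0f}")         # $859,000
print(f"Attacker's payoff: ${exploit_value:,.0f}")  # $8,590,000
print(f"Gap: {exploit_value / bounty:.0f}x")        # 10x
```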
Arthur Gervais, associate professor in information security at UCL, and Liyi Zhou, a lecturer in computer science at USYD, developed the A1 agent system. They warn that if defenders rely solely on third-party teams to find issues, they are essentially trusting that those teams will act in good faith and stay within the 10 percent bounty.
Gervais notes that the cost gap between attacking and defending represents a serious challenge. "My recommendation is that project teams should use tools like A1 themselves to continuously monitor their own protocol, rather than waiting for third parties to find issues," he said.
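What such continuous self-monitoring could look like in practice is sketched below, assuming the web3.py library and an RPC endpoint; `analyze_contract` and `alert_team` are hypothetical stand-ins for an A1-style scanner and an incident-paging hook, and the addresses are placeholders:

```python
import time

from web3 import Web3  # assumes the web3.py package and a mainnet RPC endpoint

RPC_URL = "https://mainnet.example-rpc.io"   # hypothetical endpoint
PROTOCOL_ADDRESS = "0xYourProtocolContract"  # the team's own contract (placeholder)

w3 = Web3(Web3.HTTPProvider(RPC_URL))

def analyze_contract(address: str, block_number: int) -> list:
    """Hypothetical stand-in for an A1-style exploit-generation pass."""
    return []  # would return candidate exploits found at this block

def alert_team(findings: list) -> None:
    """Hypothetical incident-paging hook."""
    print(f"ALERT: {len(findings)} candidate exploit(s) against {PROTOCOL_ADDRESS}")

last_checked = w3.eth.block_number
while True:
    head = w3.eth.block_number
    if head > last_checked:
        findings = analyze_contract(PROTOCOL_ADDRESS, head)
        if findings:
            alert_team(findings)
        last_checked = head
    time.sleep(12)  # roughly one Ethereum block interval
```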
The researchers also note that AI models are not perfect and can sometimes invent phantom flaws in such numbers that open source projects like curl have banned the submission of AI-generated vulnerability reports.
In conclusion, while the development of A1 is a promising use case for AI agents, it also highlights the need for more effective security measures to combat the increasing threat of crypto theft. As AI models continue to improve and become more widely used, it will be essential for defenders to stay ahead of attackers by using proactive security tools like A1.
Related Information:
https://www.ethicalhackingnews.com/articles/A-Promising-but-Troubling-Use-Case-for-AI-Agents-Crypto-Theft-ehn.shtml
https://go.theregister.com/feed/www.theregister.com/2025/07/10/ai_agents_automatically_steal_cryptocurrency/
Published: Thu Jul 10 03:12:15 2025 by llama3.2 3B Q4_K_M