Emerging Artificial Intelligence Achieves Parity with Humans in Identifying Vulnerabilities in Blockchain Smart Contracts

Anthropic's recent research reveals that AI can not only replicate known exploits in blockchain smart contracts but also uncover new vulnerabilities, marking a critical shift in AI's role from diagnostic tool to active threat identifier in cybersecurity. The finding is a crucial alert for developers who rely on smart contracts, especially those handling finance and personal data, and it demands a rapid evolution in defense strategies.

Nathan Mercer

December 2, 2025

The recent findings from Anthropic, showcasing AI's prowess in mimicking human hackers to exploit vulnerabilities in blockchain smart contracts, slice through any remaining illusions about the invulnerability of these technologies. According to Decrypt, AI successfully reproduced over half of the recorded exploits. The implications are twofold: AI's growing capabilities in cybersecurity, and the persistent, unaddressed risks within blockchain infrastructure.

The fact that AI models like GPT-5 and Claude Sonnet 4.5 not only replicated past exploits but also discovered new vulnerabilities underscores a significant shift. These models are evolving from passive tools for identifying existing weaknesses into active participants in cybersecurity, capable of uncovering previously unknown threats. This isn't just a technical achievement; it's a wake-up call for every developer reliant on smart contracts, particularly in high-stakes domains like finance and personal data handling.

Interestingly, the AI’s ability to pinpoint weaknesses doesn't rely on undisclosed, arcane knowledge but on flaws that are often hidden in plain sight, detailed in public disclosures such as Common Vulnerabilities and Exposures (CVE) entries or audit reports. That makes the swift identification of such vulnerabilities both a blessing and a curse: a blessing because it can strengthen proactive security measures, and a potential nightmare because of the ease with which malicious actors could automate and scale attacks, a concern that David Schwed of SovereignAI highlighted in his comments to Decrypt.
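To make "hidden in plain sight" concrete: one of the most widely documented smart contract weaknesses is reentrancy, catalogued publicly in audit reports and weakness registries, where a contract sends funds via an external call before updating its own state. The sketch below is purely illustrative and is not Anthropic's method; the function name, the toy Solidity snippets, and the crude string-matching heuristic are all this article's own assumptions about what a minimal automated pattern check might look like.

```python
import re

# Toy Solidity snippet with the classic, publicly documented reentrancy
# shape: the external call fires BEFORE the balance is zeroed, so a
# malicious fallback function can re-enter withdraw() and drain funds.
VULNERABLE_SOURCE = """
function withdraw() public {
    uint amount = balances[msg.sender];
    (bool ok, ) = msg.sender.call{value: amount}("");
    require(ok);
    balances[msg.sender] = 0;
}
"""

# Same function following the checks-effects-interactions ordering:
# state is updated before the external call, closing the window.
SAFE_SOURCE = """
function withdraw() public {
    uint amount = balances[msg.sender];
    balances[msg.sender] = 0;
    (bool ok, ) = msg.sender.call{value: amount}("");
    require(ok);
}
"""

def flags_reentrancy(source: str) -> bool:
    """Crude heuristic (hypothetical): flag source where an external
    `.call{value:` appears before the write that zeroes the balance.
    A real auditor or static analyzer works on the AST, not strings."""
    call_pos = source.find(".call{value:")
    if call_pos == -1:
        return False
    # Match the state write `balances[...] = 0`, not the earlier read.
    write = re.search(r"balances\[[^\]]+\]\s*=\s*0", source)
    return write is not None and call_pos < write.start()

print(flags_reentrancy(VULNERABLE_SOURCE))  # True: call precedes the write
print(flags_reentrancy(SAFE_SOURCE))        # False: write precedes the call
```

Even a heuristic this naive finds the textbook pattern, which is precisely the point of the research: the knowledge needed to spot many of these flaws is already public, and automation makes applying it cheap at scale.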

The use of AI in this capacity is a classic double-edged sword. On the one hand, it can substantially shorten the time between vulnerability identification and patch application. On the other hand, as Anthropic's findings reveal, AI can also be weaponized to automate and optimize cyber attacks, making it crucial for defensive strategies to adapt at a comparable or faster rate. The falling cost per attack, driven by the declining token costs of the Claude models, only adds fuel to the fire. Reduced costs mean broader accessibility, not just for legitimate operators but for those with malicious intent as well.

This technological arms race between developing AI systems for defense and those used for attacks suggests an escalating battlefront in cybersecurity within the blockchain space. The rapid advancements in AI capabilities call for equally dynamic responses from developers and security experts. Anthropic’s recommendation for developers to integrate automated tools in their security protocols is a step in the right direction, although it barely scratches the surface of what needs to be done to stay ahead of AI-powered cyber threats.

What this essentially points to is a future where the security of blockchain technologies can no longer rely solely on traditional methods. As AI becomes a staple in both offensive and defensive strategies, the real challenge will be in fostering a security culture that can predict and preempt these AI advancements. Not an easy task, but then again, nobody said securing the future of finance would be.
