KelpDAO Audit Passed. $292M Left Anyway.

Posted on April 20th, 2026 by Runtime Verification
Last updated on May 20th, 2026
Posted in News


On formal modeling as a systematic defense in a world where well-resourced attackers have AI.


On April 18, 2026, an attacker drained $292M from KelpDAO's Ethereum escrow in a single transaction. The OFTAdapter contract that released 116,500 rsETH did exactly what it was designed to do. No bug was exploited, no zero-day in LayerZero's on-chain code. The entire on-chain system was, by conventional security standards, clean.

The exploit required compromising one off-chain signer, a single Decentralized Verifier Network (DVN), which was the only DVN configured to authenticate cross-chain messages on that pathway. One signer. No redundancy. $292M in escrow behind a single key.

A full technical breakdown of the exploit is covered in Blockaid's post-mortem. It's well worth reading in full, and we share Blockaid's assessment of the current state of DeFi security. This post is about what that incident reveals about the state of security practice, and what a different approach, such as formal modeling, would look like.


Blackhats with AI only need to find one thing

The threat environment has changed in a way that doesn't yet feel fully internalized across the industry.

Attackers targeting high-value DeFi protocols in 2026 are not opportunistic. The Drift Protocol exploit in early April, a $285M loss through a multi-week social engineering campaign against multisig signers, required sustained planning across multiple targets. The KelpDAO attack involved deliberate pre-positioning, a precise understanding of LayerZero's message verification architecture, and an extraction strategy that used Aave's collateral mechanics to convert stolen assets into a position that was difficult to unwind before any response could reach it.

These are well-resourced operations, and they now have access to AI tooling that dramatically compresses the cost of the analytical work that precedes an attack. Mapping protocol architectures, tracing trust dependencies, identifying single points of failure, modeling the composability graph to find the fastest and most irreversible off-ramp once funds are in hand: all of this is substantially cheaper and faster than it used to be.

An attacker scanning for targets only needs to find one exploitable pattern anywhere in your system. Once they have a specific target and a viable strategy, they can invest heavily to execute it. They have time, resources, and increasingly powerful tools for the reconnaissance phase. The question isn't whether someone will do a thorough adversarial analysis of your protocol. The question is whether they'll do it before you do.


What could KelpDAO have done?

It's worth sitting with the difficulty of the situation before reaching for solutions.

KelpDAO was not cutting corners. They audited their smart contracts. They deployed on a battle-tested cross-chain messaging protocol. They operated one of the largest liquid restaking positions in the EigenLayer ecosystem using standard industry tooling and practice.

The vulnerability wasn't in any code that a smart contract audit would read. It was in a configuration parameter, requiredDVNCount = 1 with optionalDVNCount = 0, that expressed a trust assumption about off-chain infrastructure. The OFTAdapter trusted unconditionally that its single configured DVN would only sign valid messages. That assumption lived nowhere in the smart contract audit scope. It wasn't surfaced as a risk in any pre-launch review. It was just a setting.
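The danger of that setting is easier to see written out. Here is a minimal sketch of the verification rule such a configuration encodes, using hypothetical names and a simplified shape rather than LayerZero's actual interfaces:

```python
from dataclasses import dataclass

@dataclass
class DVNConfig:
    required_dvns: list[str]   # ALL of these must attest
    optional_dvns: list[str]   # at least `optional_threshold` of these must attest
    optional_threshold: int

def message_verified(config: DVNConfig, attestations: set[str]) -> bool:
    """A cross-chain message executes iff every required DVN has signed
    and enough optional DVNs have signed."""
    required_ok = all(d in attestations for d in config.required_dvns)
    optional_ok = len(attestations & set(config.optional_dvns)) >= config.optional_threshold
    return required_ok and optional_ok

# The pathway in question: one required DVN, zero optional DVNs.
kelp = DVNConfig(required_dvns=["dvn-A"], optional_dvns=[], optional_threshold=0)

# A single compromised signer is sufficient to execute any message.
assert message_verified(kelp, {"dvn-A"})
assert not message_verified(kelp, set())
```

Nothing in this rule is a bug. The quorum is simply 1-of-1, so the signature of one off-chain key is the entire security boundary.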

As Blockaid states: “the attack surface in modern DeFi is no longer primarily the smart contract code. It is the off-chain infrastructure, key management, and trust assumptions that smart contracts delegate to.” A conventional audit covers one slice of that surface. An attacker can work anywhere on it.

The tools and practices that exist today leave large parts of that surface unexamined, and that's an industry-wide gap rather than a failure specific to any one team.


Making assumptions explicit

The core problem is that the assumptions holding a system together are mostly hidden. Blockaid’s post identifies the 1-of-1 DVN configuration as an “unacceptable security risk”, but how could the KelpDAO team have reached the same conclusion on their own, before deployment?

Every DeFi protocol operates on a set of trust assumptions: about who controls the signing keys, about what third-party infrastructure will and won't do, about what happens to your token's value if a composability dependency fails. These assumptions exist whether or not they're written down. When they're implicit, buried in configuration choices and undocumented dependencies, they can't be examined, challenged, or stress-tested.

Formal modeling is the practice of making those assumptions explicit. It requires you to write down every principal your system trusts, every property that system is supposed to guarantee, and every way those properties could be violated if a principal is compromised. The model has to characterize how the system behaves when things go wrong, not just when everything goes right.

The output is a map: here are the things your system trusts, here is what each trust assumption is worth, here is what fails if any given assumption is violated. That map is the thing that's been missing from the standard DeFi security toolkit.
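A toy version of that map, with purely illustrative principal, capability, and property names (not drawn from any real KelpDAO model), might look like:

```python
# Principals the system trusts, and the capability each one holds.
PRINCIPALS = {
    "dvn-signer": {"forge_cross_chain_message"},
    "multisig":   {"upgrade_contracts"},
    "oracle":     {"set_exchange_rate"},
}

# Which capability, if misused, violates which guaranteed property.
VIOLATES = {
    "forge_cross_chain_message": {"escrow_only_released_for_real_burns"},
    "upgrade_contracts":         {"code_immutability",
                                  "escrow_only_released_for_real_burns"},
    "set_exchange_rate":         {"fair_redemption_price"},
}

def properties_lost(compromised: set[str]) -> set[str]:
    """Properties that no longer hold once the given principals are compromised."""
    lost: set[str] = set()
    for principal in compromised:
        for capability in PRINCIPALS[principal]:
            lost |= VIOLATES.get(capability, set())
    return lost

# One compromised signer is enough to break the escrow-release guarantee.
assert "escrow_only_released_for_real_burns" in properties_lost({"dvn-signer"})
```

The real work of formal modeling is filling in these tables honestly and completely; once they exist, questions like "what breaks if this one entity is compromised?" become mechanical to answer.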


How formal modeling would have helped here

A formal model of KelpDAO's bridge architecture would have required the team to include the DVN infrastructure explicitly as a component. Not as background plumbing, but as a named principal in the trust model with defined capabilities and defined failure modes.

Once the DVN is in the model, the natural question follows: what is the risk profile if a DVN is compromised? The model traces through the consequences. A compromised DVN can forge an arbitrary cross-chain message. With a 1-of-1 configuration, that message is sufficient to release the full escrow, which at the time of the exploit held $1.07 billion in backing reserves. The model would also capture the exfiltration side: rsETH was whitelisted as collateral on Aave and other major lending protocols without circuit-breakers on deposit size, so an attacker could immediately borrow real assets against the stolen collateral. The position settles within a single block, hindering attempts to halt it.

The picture that emerges is: DVN compromise is extremely high consequence, and there are no meaningful guards anywhere between the initial compromise and a completed extraction.
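That absence of guards is itself something a model can state explicitly. A sketch of the exploit path as a sequence of steps, each paired with the guard (if any) that could have interrupted it; step names are our paraphrase of the incident, not formal artifacts:

```python
# (step, guard) pairs; a guard of None means nothing stood in the way.
ATTACK_PATH = [
    ("compromise the single DVN signer",   None),  # 1-of-1: no redundancy
    ("forge a cross-chain unlock message", None),  # forged signature verifies
    ("OFTAdapter releases escrowed rsETH", None),  # no rate limit or pause
    ("deposit rsETH as lending collateral", None), # no deposit-size breaker
    ("borrow real assets against it",      None),  # settles in one block
]

def unguarded_steps(path):
    """Return the steps with no defensive control between them and completion."""
    return [step for step, guard in path if guard is None]

# Every step is unguarded: the model makes the absence of defenses explicit.
assert len(unguarded_steps(ATTACK_PATH)) == len(ATTACK_PATH)
```

Seeing the whole path with an empty guard column is exactly the kind of finding that should trigger a pre-deployment design conversation.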

That analysis, done before deployment, puts the team in a position to make an informed and targeted decision about where to invest in mitigation. Two levers become clear. One is validator redundancy on the DVN side, requiring multiple independent DVNs to attest before any message executes. The other is circuit-breakers where applicable, whether on the OFTAdapter itself, on rsETH's collateral acceptance in lending protocols, or both. The team can weigh the cost and complexity of each against the risk the model has quantified. That's a materially different conversation than discovering the exposure at incident time.
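Both levers are straightforward to sketch. Below, a quorum check replaces the single required signer, and a rolling-window circuit breaker caps how much value a release pathway can move before tripping; parameter values and names are illustrative, not a recommendation:

```python
def quorum_verified(attestations: set[str], dvns: set[str], threshold: int) -> bool:
    """M-of-N independent DVNs must attest, instead of a single required signer."""
    return len(attestations & dvns) >= threshold

class ReleaseBreaker:
    """Caps value released per rolling window; anything above trips the breaker
    and forces out-of-band review before funds move."""
    def __init__(self, limit: float, window: float):
        self.limit, self.window = limit, window
        self.releases: list[tuple[float, float]] = []  # (timestamp, amount)

    def allow(self, amount: float, now: float) -> bool:
        self.releases = [(t, a) for t, a in self.releases if now - t < self.window]
        if sum(a for _, a in self.releases) + amount > self.limit:
            return False  # tripped
        self.releases.append((now, amount))
        return True

# One compromised signer no longer suffices under a 2-of-3 quorum...
assert not quorum_verified({"dvn-A"}, {"dvn-A", "dvn-B", "dvn-C"}, 2)
# ...and an oversized release trips the breaker even if verification passes.
breaker = ReleaseBreaker(limit=10_000, window=3_600)
assert breaker.allow(5_000, now=0)
assert not breaker.allow(116_500, now=1)
```

Neither mechanism is novel; the point is that the model tells you which pathway is worth the added latency and operational complexity they cost.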

Formal modeling doesn't guarantee safety. What it provides is visibility: the ability to make real tradeoffs with full information rather than finding out which tradeoffs were made implicitly after the fact.


A systematic defense

Smart contract audits still matter. Real-time monitoring still matters. Incident response still matters. But the threat environment has moved to a place where implicit assumptions have become the primary attack surface, and the adversaries finding and exploiting those assumptions are systematically better-resourced than they used to be. A well-resourced attacker with AI assistance can map your protocol's trust dependencies, identify the assumptions that break under single-entity compromise, and model the composability graph for fast off-ramps. That analysis is increasingly cheap to run.

The deeper problem is that even teams doing everything right are still assembling their security posture ad-hoc. They run a smart contract audit because audits are table stakes. They bring in an opsec firm to review their deployment setup. They set up a monitoring service. Each of these is a genuine contribution to security, and the firms selling them are not wrong that they're necessary. But necessary isn't the same as sufficient, and when security is assembled from individual marketed services rather than derived from a model of the system, there's no principled way to know what's missing. You spend the budget on what you've been told to buy, and you find out at incident time what you forgot to cover.

A formal model changes the basis for those decisions. It tells you which components carry the most risk, where the trust boundaries are that actually need scrutiny, and what monitoring needs to exist for the failure modes that matter. Audits, opsec reviews, and monitoring are still the tools. The model is what tells you how to apply them. And the earlier it exists, the more value it generates: engineers make configuration decisions knowing which components are load-bearing, code reviewers know which trust boundaries need scrutiny, agents have an explicit specification to reason from, and auditors arrive with a map of the properties that actually need to hold.

If you're building or operating a DeFi protocol and want to think through whole-system formal modeling, we'd be glad to talk. Reach out at contact@runtimeverification.com.

Against the class of adversary that's now doing the attacking, it's the most systematic defense available.


Post-mortem source: Blockaid Post