The efficiencies of validity proofs

Imagine you could let Darth Vader run a supercomputer with a gazillion TPS and he still couldn't corrupt the chain, while you can verify on a budget smartphone.

This is the superpower of validity proofs.

A monolithic execution layer requires thousands of block producers and non-producing full nodes, with a majority needing to be honest. As a result, even if you don’t care about ease-of-verification, you need conservative hardware requirements - let’s say something like a 16-core CPU, 256 GB RAM, and 1 GB/s bandwidth.

Darth Vader can run a 256-core, 4 TB RAM, 100 GB/s supercomputer instead, and all the user needs to verify is a succinct proof on their budget smartphone.

Let’s say the monolithic execution layer takes 500ms to execute and reprocess a block. Assuming the same VM and clients are used, the Darth Vader validity-proven system will complete execution in less than 100ms.

Proving is improving dramatically over time, to the point where something as complex as a zkEVM is already trivial. Furthermore, proving is embarrassingly parallelizable: just assign enough provers and aggregate their sub-proofs, so your proving process is over in, let’s say, 150ms.
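To make the parallelism concrete, here’s a toy wall-clock model of splitting a block’s proving work across many provers and then aggregating. The numbers are illustrative assumptions loosely matched to the figures above, not real benchmarks:

```python
# Toy model of embarrassingly parallel proving. All numbers are
# illustrative assumptions, not measurements of any real prover.

def proving_time_ms(block_exec_ms, prover_slowdown, num_provers, aggregation_ms):
    """Wall-clock proving time when a block's proving work is split evenly.

    prover_slowdown: how much slower proving a chunk is than executing it.
    aggregation_ms: time to combine the sub-proofs into one proof.
    """
    per_prover = (block_exec_ms * prover_slowdown) / num_provers
    return per_prover + aggregation_ms

# e.g. 100ms of execution, proving 100x slower than execution,
# split across 100 provers, plus 50ms to aggregate the sub-proofs:
print(proving_time_ms(100, 100, 100, 50))  # 150.0
```

The point of the sketch: because the work divides across provers, you can buy down latency by adding machines, with only the aggregation step left on the critical path.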

Net result: the validity-proven execution layer completes processing more transactions in half the time.

But surely the proving makes it more expensive? Nope. A monolithic execution layer requires thousands of nodes reprocessing all transactions - let’s say this costs $1,000/month * 5,000 nodes = $5 million/month. But of course, the biggest cost for a blockchain is economic security. For example, to attain Ethereum-level security, you need an issuance of $125 million per month, or for Bitcoin-level security, $750 million per month.

The validity-proven execution layer only needs to execute and prove once, so instead of a $5M computational network, you only need $X,000 for execution (somewhat more than $1,000, because it’s a faster system). Even if the proving cost is 1,000x execution (it isn’t), it’s still cheaper. With proving costs plummeting over time, total compute costs are going to be a very small fraction of monolithic execution in the long term. Let me reiterate - of course, all of this is a long-term vision.
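Here’s the back-of-envelope version of that comparison, using the rough figures above. The single-executor cost and the 1,000x proving multiplier are the deliberately pessimistic assumptions from the text, not measurements:

```python
# Back-of-envelope monthly compute cost comparison.
# All figures are illustrative assumptions from the surrounding text.

NODES = 5_000
NODE_COST = 1_000  # $/month per reprocessing full node
monolithic_compute = NODES * NODE_COST  # $5,000,000/month

execution_cost = 1_000      # one fast executor (assumed, rounded)
proving_multiplier = 1_000  # pessimistic: proving costs 1,000x execution
validity_compute = execution_cost * (1 + proving_multiplier)

print(f"monolithic:      ${monolithic_compute:,}/month")
print(f"validity-proven: ${validity_compute:,}/month")
print(f"cheaper by ~{monolithic_compute / validity_compute:.1f}x")
```

Even under the pessimistic 1,000x assumption, the validity-proven side comes out roughly 5x cheaper on compute alone - and that’s before counting the economic-security savings, which dwarf compute.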

That aside, a validity-proven execution layer need only pay an economically secure L1 per transaction. With EIP-4844, danksharding and validium-like constructions, this will be quite negligible for Ethereum rollups/validiums, instead of hundreds of millions.

Now, of course, Darth Vader may not be able to corrupt your transactions, but he could refuse to accept your transaction, or be unable to. There can be inclusion mechanisms to force Darth Vader’s hand - or else Darth Maul, or a variety of other methods, can do so. This will be more than adequate for a bunch of use cases, especially non-financial use cases which may require a ton of throughput.

But what if you don’t want to wait? No problem, just assemble a Jedi Council, and only one member needs to be honest. You don’t need thousands of nodes with 67% needing to be honest - you just need a handful with only 1 honest. Utilizing governance penalties and incentives, you can pretty much guarantee real-time inclusion. Of course, there are many ways to achieve this - I describe PBS+crList above - but in every case it’s superior to monolithic execution layers, as you only need a mechanism for censorship-resistance and liveness, not safety.

Let me be abundantly clear about this - the unique proposition of blockchains is to securely come to consensus. Otherwise, it’s just a database. Ask yourself - why do blockchains have consensus protocols? Yeah, exactly. The “only need one honest replica” stuff comes after - once the consensus, the only unique thing blockchains do, has already been achieved. (To be clear - consensus protocols are the first pass, consensus is ultimately achieved by full nodes. Also to be clear, not light clients.)

Remember, validity proofs are malleable - they can be recursed, folded, etc. You can have real-time proofs, longer-term proofs, thousands of execution layers recursing to a single succinct proof, and so on. Sky’s the limit to what validity proofs can do for scaling blockchains. But what if you’d rather trust thousands of randos than the validity proof? Sure, and this is why validity-proven execution layers will take time to mature. But the technical superpowers are undeniable, and why every L1 and L2 worth their salt is researching ways to implement them. Once again, this is a long-term vision. And of course, looking deeper into the future, further innovations like IO may come.
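The recursion point can be sketched structurally: folding proofs pairwise (or in wider batches) means the number of rounds needed to collapse thousands of proofs into one grows only logarithmically. This is a purely structural toy - real recursion and folding schemes are far more involved:

```python
# Toy sketch of recursive proof aggregation: N proofs folded into 1.
# Purely structural - counts rounds, says nothing about real circuits.

import math

def aggregation_rounds(num_proofs, arity=2):
    """Rounds of recursion needed to fold num_proofs into one proof,
    where each round combines up to `arity` proofs into one."""
    rounds = 0
    while num_proofs > 1:
        num_proofs = math.ceil(num_proofs / arity)
        rounds += 1
    return rounds

# 1,000 execution layers, folding proofs pairwise:
print(aggregation_rounds(1_000))      # 10 rounds
# with a hypothetical 16-ary aggregation circuit instead:
print(aggregation_rounds(1_000, 16))  # 3 rounds
```

So a thousand execution layers are only a handful of aggregation rounds away from a single succinct proof - which is why "thousands of execution layers recursing to one proof" isn’t hyperbole.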

Optimistic execution layers - yes, indeed, they get a lot of these efficiency benefits, but you also need to run a lot more recomputing nodes - a lot fewer than monolithic execution layers, to be sure. Likewise, you have to keep hardware requirements in check, so you can’t do a supercomputer.

Validity proofs at L1 - this also gets you most of the benefits - much higher throughput, much lower cost-of-verification, much more cost-efficient network-wide. But it won’t get you the economic security benefits, that still needs to be earned the old-fashioned honest-majority way. Needless to say, monolithic execution layers have a path forward by making their execution layers validity provable. Indeed, this is one of the biggest upgrades on the Ethereum roadmap - L1 zkEVM.

Then there’s the zero-knowledge properties of ZKPs, of course, but here I just wanted to focus on succinctness and validity proving properties.

Tl;dr: validity proofs make execution layers faster, cheaper and easier to verify.
