The Aggregation Mode runs once every 24 hours and performs the following steps:

1. **Fetch Proofs from the Verification Layer**
   Queries `NewBatchV3` events from the `AlignedLayerServiceManager` and downloads the batches from S3, starting from the last processed block of the previous run.
2. **Filter Proofs**
   Filters proofs by supported verifiers and proof types.
3. **Aggregate Proofs in the zkVM**
   Selected proofs are aggregated using a zkVM.
4. **Construct the Blob**
   A blob is built containing the commitments of the aggregated proofs.
5. **Send Aggregated Proof**
   The final aggregated proof and its blob are sent to the `AlignedProofAggregationService` contract for verification.
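The run above can be sketched as a small pipeline. All helper names and the sample data below are illustrative stubs, not the real Aligned APIs:

```python
# Sketch of one Aggregation Mode run. Helpers are illustrative stubs,
# not the actual Aligned implementation.

def fetch_batches(last_processed_block: int) -> list[list[dict]]:
    # Would query NewBatchV3 events and download the batches from S3.
    return [[{"verifier": "SP1", "type": "Compressed", "commitment": b"\x01" * 32}]]

def is_supported(proof: dict) -> bool:
    # Filter by supported verifiers and proof types.
    return (proof["verifier"], proof["type"]) in {
        ("SP1", "Compressed"),
        ("Risc0", "Composite"),
        ("Risc0", "Succinct"),
    }

def run_aggregation(last_processed_block: int) -> list[bytes]:
    batches = fetch_batches(last_processed_block)
    proofs = [p for batch in batches for p in batch if is_supported(p)]
    # The selected proofs would be aggregated in the zkVM here; the blob
    # then carries the commitments of the aggregated proofs.
    return [p["commitment"] for p in proofs]
```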
[Note] Currently, if you want your proof to be verified in the Aggregation Mode, you need to submit it via the Verification Layer (a.k.a. Fast Mode). As explained above, on its next run the Aggregation Mode will fetch your proof from the Verification Layer batches regardless of its verification status.
Two separate aggregators are run every 24 hours:

- Risc0: Aggregates proofs of types `Composite` and `Succinct`.
- SP1: Aggregates proofs of type `Compressed`.
The proof commitment is a hash that uniquely identifies a proof. It is defined as the keccak of the proof's public inputs concatenated with its program ID:

- For SP1, the commitment is computed as `keccak(proof_public_inputs_bytes || vk_hash_bytes)`.
- For Risc0, the commitment is computed as `keccak(receipt_public_inputs_bytes || image_id_bytes)`.
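As a sketch, the commitment is just a hash over the concatenated public inputs and program ID. Python's standard library has no keccak-256, so `sha3_256` (which uses different padding) stands in for it here:

```python
import hashlib

def proof_commitment(public_inputs: bytes, program_id: bytes) -> bytes:
    """Sketch of keccak(public_inputs || program_id).

    For SP1, program_id is the vk hash; for Risc0, the image ID.
    hashlib.sha3_256 is only a stand-in: Ethereum's keccak-256 uses
    different padding than NIST SHA3-256.
    """
    return hashlib.sha3_256(public_inputs + program_id).digest()
```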
To scale aggregation without exhausting zkVM memory, aggregation is split into two programs:

1. **User Proof Aggregator**
   Processes batches of `n` user proofs. Each run creates an aggregated proof that commits to a Merkle root of the user proof commitments. This step is repeated for as many chunks as needed. Each chunk usually contains `256` proofs, but this can be lowered based on the machine specs.
2. **Chunk Aggregator**
   Aggregates all chunk-level proofs into a single final proof. It receives:
   - The chunk proofs
   - The original proof commitments included in each chunk
   During verification, it checks that each chunk's committed Merkle root matches the root reconstructed from those commitments, ensuring input correctness. The final Merkle root, representing all user proof commitments, is then committed as a public input.
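The two-stage scheme can be illustrated with plain Merkle trees over the proof commitments. The chunk size and helper names are illustrative, and `sha256` stands in for the hash actually used inside the zkVM:

```python
import hashlib

def merkle_root(leaves: list[bytes]) -> bytes:
    # Binary Merkle root; an odd last node is paired with itself.
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

def aggregate(commitments: list[bytes], chunk_size: int = 256) -> bytes:
    # Stage 1: each User Proof Aggregator run commits to one chunk's root.
    chunk_roots = [merkle_root(commitments[i:i + chunk_size])
                   for i in range(0, len(commitments), chunk_size)]
    # Stage 2: the Chunk Aggregator checks each chunk's committed root
    # against the root re-derived from the original commitments, then
    # commits a final root over all chunks as a public input.
    return merkle_root(chunk_roots)
```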
Once aggregated, the proof is sent to Ethereum and verified via the `AlignedProofAggregationService` contract. Depending on the proving system, the contract invokes:

- `verifySP1` for SP1 proofs
- `verifyRisc0` for Risc0 proofs
Each function receives:
- The public inputs
- The proof binary
The program ID is hardcoded in the contract to ensure only trusted aggregation programs (`chunk_aggregator`) are accepted.

If verification succeeds, the new proof is added to the `aggregatedProofs` map in contract storage.
To verify a user’s proof on-chain, the following must be provided:
- The proof bytes
- The proof public inputs
- The program ID
- A Merkle proof
The Merkle root is computed and checked for existence in the contract using the `verifyProofInclusion` function of the `AlignedProofAggregationService` contract, which:

- Computes the Merkle root
- Returns `true` or `false` depending on whether an `aggregatedProof` with the computed root exists.
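On the client side, the inclusion check boils down to recomputing the root from the leaf and the Merkle proof. A minimal sketch, again with `sha256` standing in for the actual hash and index-based sibling ordering as an assumption:

```python
import hashlib

def verify_inclusion(leaf: bytes, index: int, proof: list[bytes],
                     expected_root: bytes) -> bool:
    # Recompute the Merkle root by walking from the leaf to the root,
    # using the index's low bit to decide sibling ordering at each level.
    node = hashlib.sha256(leaf).digest()
    for sibling in proof:
        if index % 2 == 0:
            node = hashlib.sha256(node + sibling).digest()
        else:
            node = hashlib.sha256(sibling + node).digest()
        index //= 2
    # On-chain, verifyProofInclusion would then check that an
    # aggregatedProof with this root exists in contract storage.
    return node == expected_root
```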
When submitting the aggregated proof to Ethereum, we include a blob that contains the commitments of all the individual proofs that were aggregated. This blob serves two main purposes:
- It makes the proof commitments publicly available for 18 days.
- It allows users to:
- Inspect which proofs were aggregated
- Get a Merkle proof to verify that their proof is included in the aggregated proof
As dictated by EIP-4844, each blob can hold:

- `FIELD_ELEMENTS_PER_BLOB = 4096`
- `BYTES_PER_FIELD_ELEMENT = 32`

This results in a total theoretical capacity of:

`FIELD_ELEMENTS_PER_BLOB * BYTES_PER_FIELD_ELEMENT = 4096 * 32 = 131,072 bytes`
However, this full capacity can't be used because of how bytes are encoded into field elements for the KZG commitment. Specifically:

- Ethereum uses the BLS12-381 curve, whose scalar field modulus is slightly less than `2^256` (in fact, closer to `2^255`).
- That means the 32-byte field elements can't represent arbitrary 256-bit values.
- To stay within the field modulus, each value is padded with a leading `0x00` byte, ensuring it's below the modulus.
- This reduces the usable payload to 31 bytes per field element.
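The padding rule can be sketched as follows (an illustrative helper, not the actual encoder):

```python
def pack_into_field_elements(data: bytes) -> list[bytes]:
    # Pack raw bytes into 32-byte field elements carrying 31 payload
    # bytes each; the leading 0x00 keeps every element below the
    # BLS12-381 scalar field modulus (slightly under 2^255).
    elements = []
    for i in range(0, len(data), 31):
        chunk = data[i:i + 31].ljust(31, b"\x00")
        elements.append(b"\x00" + chunk)
    return elements
```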
So the actual usable capacity per blob becomes:

`4096 * 31 = 126,976 bytes`

Since each proof commitment is exactly 32 bytes, the maximum number of proof commitments that can fit in a single blob is:

`126,976 / 32 = 3,968 proofs`
This is the current upper limit on how many proofs we can include in a single aggregation run.
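The capacity numbers above follow directly from the EIP-4844 constants:

```python
FIELD_ELEMENTS_PER_BLOB = 4096
BYTES_PER_FIELD_ELEMENT = 32
USABLE_BYTES_PER_ELEMENT = 31  # one byte lost to the leading 0x00 pad
COMMITMENT_SIZE = 32           # each proof commitment is 32 bytes

theoretical = FIELD_ELEMENTS_PER_BLOB * BYTES_PER_FIELD_ELEMENT   # 131,072 bytes
usable = FIELD_ELEMENTS_PER_BLOB * USABLE_BYTES_PER_ELEMENT       # 126,976 bytes
max_proofs = usable // COMMITMENT_SIZE                            # 3,968 proofs
max_per_tx = 6 * max_proofs                                       # 23,808 with 6 blobs
```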
To increase throughput we can:

1. **Send Multiple Blobs per Transaction**
   Up to 6 blobs can be included per transaction, supporting up to 23,808 proofs per run, which is more than we can aggregate in one day.
2. **Run Aggregation More Frequently**
   Reduce the interval between aggregation runs.