The paper has a lot of goodies inside, like new ZK sumchecks for constrained codes, ZK code-switching, dispersers to boost distance, private zero-evaders and more.
Read more here: ia.cr/2026/391
In the polylog regime, we achieve zero-knowledge analogues of WHIR, FRI, and Ligerito, with list-decoding soundness, again for general linear codes, thanks to a new zero-knowledge code-switch.
In the sublinear regime (O(sqrt{N}) argument size), we achieve a ZK, list-decoding-sound analogue of Ligero and Brakedown, with straight-line round-by-round knowledge soundness, that works with general linear codes.
ZO0K is a family of zkIOPs matching the performance of the state of the art while being *zero-overhead*: the cost of the ZK protocol is (1 + o(1)) times the cost of the original protocol.
This means there is *no performance reason* to implement a non-ZK version of the protocol anymore.
ia.cr/2026/391
While hash-based SNARKs have seen an explosion of progress recently (Brakedown, BaseFold, STIR, Blaze, WHIR, Tensorswitch, Bolt, Lightning…), most of them are *not* zero-knowledge, and the cost of adding ZK is at least 2x on most metrics of interest.
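Much of that overhead traditionally comes from masking: a classic way to make openings of a linear-code commitment hiding is to open the sum with a random codeword instead of the codeword itself. A toy Python sketch of that general idea (stand-in field and code, not ZO0K's actual construction):

```python
import secrets

P = 2**31 - 1  # toy prime field

def encode(msg, n):
    """Toy linear code: evaluate the polynomial with coefficients
    `msg` at points 1..n (a Reed-Solomon-style encoding)."""
    return [sum(c * pow(x, i, P) for i, c in enumerate(msg)) % P
            for x in range(1, n + 1)]

# To open codeword positions without leaking the message, the prover
# also commits to a random codeword and opens positions of the *sum*.
# By linearity the sum is itself a codeword, and any up-to-k opened
# positions of it are uniformly random (Vandermonde invertibility).
msg = [3, 1, 4, 1]
mask = [secrets.randbelow(P) for _ in msg]
masked = [(a + b) % P for a, b in zip(encode(msg, 8), encode(mask, 8))]
```

The catch is that the mask itself must be committed and accounted for, which is where the usual 2x-style overheads creep in.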
Screenshot of the paper title: Zero-Knowledge IOPPs for Constrained Interleaved Codes
New work with Ale and Guy!
We present ZO0K: a family of zero-overhead zero-knowledge hash-based SNARKs, built from interleaved linear codes.
The construction is plausibly post-quantum secure, and it is the first result showing a non-interactive version of the Kilian-Micali transformation secure in the plain model!
This transformation (and successors such as BCS) underlie every efficient hash-based SNARG/K known.
Previously, SNARGs for NP in the plain model were only known from *heavy* assumptions such as indistinguishability obfuscation. This work shows that a clever combination of a PCP, an FHE-based vector commitment, and CI hash functions (all instantiable from LWE) in fact suffices!
Super exciting work from Ziyi and Eylon! They construct the first SNARG for NP in the *plain* model (no random oracle) using *only* (subexponential) LWE!
Perhaps most surprisingly, the SNARG is one (very clever) instantiation of the classical Kilian-Micali construction!
Go read the paper!
eprint.iacr.org/2025/2065.pdf
Special thanks to @succinctlabs.bsky.social and @simonsinstitute.bsky.social where this work was mainly done!
Towards the result, we formalize a few notions previously implicit in the literature that might be of independent interest, such as: (i) interactive oracle commitment schemes; and (ii) the right-extension of a linear code, which captures how "code-switchable" a code is.
Tensorswitch is very flexible, being both field-agnostic and code-agnostic.
It yields PCD matching WARP's theoretical efficiency while also achieving a sublinear accumulator, making it suitable for distributed proving.
Plus, it should be great for committing to sparse data.
Further, our scheme is promising for concrete efficiency:
- Commit time is roughly one encoding and 3n multiplications
- Prover time is roughly 6n multiplications (and requires no commitment to large extension-field messages)
- Query complexity is optimal for a tensor code
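For intuition, the commit step of a tensor-code polynomial commitment can be sketched in a few lines. This is a toy Python illustration with a stand-in Reed-Solomon-style code and insecure parameters; `rs_encode`, `tensor_encode`, and `commit` are hypothetical names, not Tensorswitch's actual implementation:

```python
import hashlib

P = 2**31 - 1  # toy prime field

def rs_encode(row, n):
    """Stand-in linear code: evaluate the polynomial with
    coefficients `row` at points 0..n-1."""
    return [sum(c * pow(x, i, P) for i, c in enumerate(row)) % P
            for x in range(n)]

def tensor_encode(msg, n):
    """Tensor-code encoding: encode every row, then every column."""
    rows = [rs_encode(r, n) for r in msg]               # k x n
    cols = [rs_encode(list(c), n) for c in zip(*rows)]  # n cols, length n each
    return [list(r) for r in zip(*cols)]                # n x n codeword

def commit(codeword):
    """Merkle-style column commitment: hash each column, then the root."""
    col_hashes = [hashlib.sha256(repr(col).encode()).digest()
                  for col in zip(*codeword)]
    return hashlib.sha256(b"".join(col_hashes)).hexdigest()
```

Because both encodings are linear, row-then-column and column-then-row encoding agree, which is what makes column openings of the tensor codeword meaningful.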
We construct a hash-based polynomial commitment scheme that is asymptotically optimal in most parameters: linear prover time, O(lambda) query complexity and logarithmic verifier time.
We present Tensorswitch 🧮:
a new nearly optimal hash-based polynomial commitment scheme from tensor codes!
Joint work with Benedikt Bünz, Ron Rothblum and @defund.bsky.social
ia.cr/2025/2065
Finally, we show how modifications of the sumcheck protocol can achieve better time-space tradeoffs, and present (log^* N + k)-round protocols that run in time O(N * (log^* N + k)) and space O(N^1/k) (and achieve linear time in the O(N)-space setting).
Perhaps surprisingly, we show that the log log N factor is inherent unless the exponent of matrix multiplication is 2 (in which case we can recover the single-multilinear tradeoff).
Further, we show that the tradeoff in Blendy was optimal.
We fix this, and show a sumcheck algorithm that runs in time O(N * (log log N + k)) and uses space O(N^1/k). This algorithm is concretely efficient, using up to 120x less memory at a ~2x prover slowdown compared to non-space-efficient alternatives.
In previous work, we showed that, for sumcheck claims about a single multilinear and any parameter k, there is a sumcheck algorithm that runs in time O(k * N) and space O(N^1/k). Annoyingly, we couldn't find an equivalent for products.
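For reference, the textbook sumcheck that these space-efficient algorithms refine fits in a few lines. A toy Python sketch over a stand-in prime field, with prover and verifier interleaved in one loop (names are mine, not from the paper):

```python
import random

P = 2**61 - 1  # toy field modulus (a Mersenne prime)

def round_poly(evals):
    """The round polynomial g(X) = sum over the remaining hypercube
    is degree 1, so it is determined by (g(0), g(1))."""
    half = len(evals) // 2
    return sum(evals[:half]) % P, sum(evals[half:]) % P

def fold(evals, r):
    """Fix the current variable to the challenge r (multilinear folding)."""
    half = len(evals) // 2
    return [(a + r * (b - a)) % P for a, b in zip(evals[:half], evals[half:])]

def sumcheck(evals):
    """Sumcheck for a single multilinear given by its hypercube
    evaluations (len must be a power of two). Linear total time,
    but keeps all N evaluations in memory."""
    claim = sum(evals) % P
    while len(evals) > 1:
        g0, g1 = round_poly(evals)
        if (g0 + g1) % P != claim:        # verifier's round check
            return False
        r = random.randrange(P)           # verifier's challenge
        claim = (g0 + r * (g1 - g0)) % P  # claim becomes g(r)
        evals = fold(evals, r)
    return claim == evals[0]              # final oracle check
```

The space cost comes from `fold` materializing all N evaluations; the tradeoffs above replace this with streaming passes over the input.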
Back to actual researchโฆ
We present a family of space-efficient sumcheck algorithms, and show that they are optimal!
Joint work with Anubhav, Ale, Elisabetta, @zkproofs.bsky.social, Tushar and Andrew
ia.cr/2025/1473
🧑🏻‍💻: github.com/compsec-epfl...
Easy optimizations such as path pruning should improve proof size significantly. Sometimes the proofs also just contain some random trash (see below), which we could, um, avoid sending. Read the paper for more!
For the serious part of this post: this in fact should not happen, since hash-based proofs should be incompressible. The fact that they are compressible hints at suboptimal serialization, which we (as a community) should improve to achieve proof-size reduction at minimal cost.
Surprisingly, this leads to a reduction in proof size *across the board*, on every proof system that we tested (including ones I had written, and except for Ligerito, damn you Julia!), which is why we recommend always zipping your proofs.
We, um, just ran zip on them.
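Concretely, that is a one-liner with the standard library. A minimal sketch (the `proof` bytes here are a stand-in, not a real transcript):

```python
import zlib

# Stand-in for a serialized hash-based proof: real transcripts repeat
# many Merkle sibling hashes and padding bytes, so DEFLATE finds slack.
proof = bytes.fromhex("deadbeef" * 64) + bytes(256)

compressed = zlib.compress(proof, level=9)  # DEFLATE, the core of zip
print(f"{len(proof)} -> {len(compressed)} bytes")
```

Decompression with `zlib.decompress` is lossless, so this is a purely black-box wrapper around any existing serializer.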
This has caused some headaches, to which we found a great solution using some relatively unknown techniques from information theory, buried in papers published in the 70s, reducing proofs to 60% of their original size!
Hash-based proofs tend to be larger than their elliptic-curve counterparts, and a focus of the ethproofs.org initiative is to compress them to minimize the bandwidth requirements of validators.
Excited to share the new frontier of reducing hash-based SNARK proof size: a post-quantum-secure, lightweight, black-box technique that reduces proofs to 60% of their original size!
w/ my wonderful coauthor Yuwen Zhang.
ia.cr/2025/1446
Attack 1 (session replay): An adversary in physical proximity to the lock (without ever having a valid account on it) can record the Bluetooth Low Energy (BLE) communication of a whole session and replay it to repeat all executed commands, including unlocking the lock.
Attack 2 (exceeding access): Former guests can continue unlocking the lock after their access has been revoked.
Attack 3 (clock tampering): Malicious guests can arbitrarily adjust the smart lock's clock, extending their own access past expiration or locking out all legitimate users.
Attack 4 (audit log tampering): An adversary who only knows the lock's identifier (which is advertised over BLE) can upload arbitrary audit events to the telemetry server and prevent legitimate audit events from being uploaded. Hence, the adversary can hide their own activities.
Attack 5 (malformed messages): Without valid access, an adversary can send malformed BLE messages that make the lock unresponsive or corrupt its memory, resulting in a Denial of Service (DoS) for authorized users. A malicious authorized user can even leak the memory of the smart lock.
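Attack 1 is possible because sessions contain no freshness. A standard countermeasure is a per-session random challenge bound into a MAC; a toy Python illustration with hypothetical key management, not the lock's actual protocol:

```python
import hmac, hashlib, secrets

KEY = secrets.token_bytes(32)  # shared lock/app key (hypothetical)

def lock_challenge():
    """Lock advertises a fresh random nonce at the start of every session."""
    return secrets.token_bytes(16)

def app_respond(nonce, command):
    """App authenticates the command, bound to this session's nonce."""
    tag = hmac.new(KEY, nonce + command, hashlib.sha256).digest()
    return command, tag

def lock_verify(nonce, command, tag):
    """Lock recomputes the MAC under the current session's nonce."""
    expected = hmac.new(KEY, nonce + command, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

# A (command, tag) pair recorded in one session fails under a new nonce:
n1, n2 = lock_challenge(), lock_challenge()
cmd, tag = app_respond(n1, b"UNLOCK")
```

Because the tag binds the command to the session nonce, a replayed transcript only verifies in the session it was recorded in.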
Our WOOT paper went out of disclosure today. We found 5 attacks on the Master Lock D1000 which allow unauthorized unlocking, bypassing access revocation, forging log entries, and causing DoS.
If you're in Seattle, come to our talk given by Chengsong, one of the students I mentored for this paper.
Thank you!!!
thank you!!!