Investigation: Formal proofs & alternate "verifier" implementation

With the recent rangeproof inflation bug, there has been increased talk about ensuring the integrity of the chain in the face of possible undetected implementation errors.

I’m very interested in developing a principled approach to this problem.

We have a distinct advantage over many other formal proof / verification applications — we can have multiple implementations that are optimized for different aspects of the problem, and the ecosystem as a whole benefits from the superset of all of them. In particular:

  • Our existing implementations are computationally performant, but not proven correct.

  • We could create an additional “verifier” implementation where performance is (relatively) unimportant, but that has some level of correctness proof.

The idea is that the “verifier” implementation would be significantly slower, but could be optionally run by anyone who wishes to devote extra resources to helping ensure chain integrity. It would not necessarily be a meaningful part of consensus, but would act as a “canary” early warning that something may be wrong with the chain before serious damage could be done.

We could even start small by “rusting out” parts of the implementation: small parts of the existing code could be replaced by small pieces of verified / auto-generated code that is less performant but more likely to be correct.


  • What language(s) would be easiest to formalize our proofs in? I’m familiar with Coq and Lean; there may be others. Are there any particular language or library requirements for the type of proofs we would use? Good support for elliptic curves? (Though we’re not necessarily trying to formally prove elliptic curve cryptography, just our additions on top of it. We can make some fairly broad axiomatic assumptions and then prove on top of those.)

  • Do we have any connections to mathematicians / mathy-programmer-types who could help us with the formalization? (Kevin Buzzard of Lean’s amazing mathlib “math standard library” might have some pointers here.)

  • What options are available for code generation from these languages? Note that we can combine multiple target languages and be as esoteric as we like, as the resulting code will ideally be largely auto-generated. (Example: Coq to Haskell)

  • What are the most promising initial components of our constructions that can be piecemeal replaced?

  • What are the “scariest” / least proven parts of the implementations?

  • Are there any other approaches that could increase confidence in chain integrity that take advantage of additional processor cycles or probabilistic effects?

  • What is the prior art of formal methods as applied to cryptography and blockchains in particular?

Thanks to @oryhp for inspiration and talking through some of this with me!


I would like to start with a Haskell program that queries the local grin node for the latest header as well as the kernel, output and rangeproof MMRs, and then proceeds to check all UTXO rangeproofs and all kernels, and verifies the expected emission. It would use only other Haskell libraries, ones geared toward clarity rather than efficiency or the prevention of side channels (like timing attacks), which we don’t care about here. This doesn’t constitute a formal proof, but it is probably a useful first step on the way to formal proofs, and is also a lot easier to realize in practice.
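As a sketch of what the top level of such a program might look like (every type and function name here is hypothetical, and both the node query and the proof checks are stubbed out):

```haskell
-- Hypothetical skeleton of the Haskell verifier. The real program would
-- fetch the latest header and the kernel, output and rangeproof MMRs
-- from a local grin node; here the snapshot is a stub.
module Verifier where

data Kernel = Kernel { kernelExcess :: Integer } deriving Show
data Utxo   = Utxo { utxoCommitment :: Integer, utxoRangeproof :: [Integer] } deriving Show

data Snapshot = Snapshot
  { snapHeight  :: Integer
  , snapKernels :: [Kernel]
  , snapUtxos   :: [Utxo]
  } deriving Show

-- Stub: the real version would query the node's API.
fetchSnapshot :: IO Snapshot
fetchSnapshot = pure (Snapshot 0 [] [])

-- Stubs: the real checks verify the kernel signature against its excess,
-- and the bulletproof against the output commitment.
isValidKernel :: Kernel -> Bool
isValidKernel _ = True

isValidRangeproof :: Utxo -> Bool
isValidRangeproof _ = True

-- A snapshot is accepted only if every object checks out.
verifySnapshot :: Snapshot -> Bool
verifySnapshot s = all isValidKernel (snapKernels s)
                && all isValidRangeproof (snapUtxos s)
```

Each stub could then be replaced one at a time with real checks, which fits nicely with the “rusting out” idea above.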

It could also serve as a foundation for an eventual full node implementation in Haskell. As a demonstration of Haskell’s suitability for implementing formal specifications, see this implementation of the rules of Go [1].

That would have to be the bulletproofs.



I don’t have experience with formal verification, but I agree that taking a snapshot of the state and importing it into a Haskell program that does the verification is a good starting point.

I see the (main) verification in 2 parts:

  1. object validity
forall kernel => is_valid_kernel(kernel)
forall utxo   => is_valid_rangeproof(utxo.commitment, utxo.rangeproof)
  2. equation validity
Σ utxo = Σ kernel + offset * G + height * 60 * H

Taking a snapshot of the objects and importing them into Haskell would be a good first move. Then we can gradually add the object validity checks. Object validity (the bulletproofs) will be the hard part; once that is done, the equation validity check should be simple. In the end we might also want to check that the utxo set is actually a set, check the MMR positions, block headers, etc., but starting with the import and the object validity seems right. Whether it is possible to auto-generate code for this is unclear to me.
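To make the equation-validity step concrete, here is a toy model of the balance check in Haskell, using integers modulo a small prime in place of curve points. `g` and `h` stand in for the generators G and H; this illustrates only the arithmetic of the equation above, not a secure construction.

```haskell
module Balance where

-- Toy additive group: integers modulo a small prime. Not secure; it only
-- mirrors the additive homomorphism of Pedersen commitments.
p :: Integer
p = 1000003

g, h :: Integer  -- stand-ins for the curve generators G and H
g = 7
h = 11

-- commit r v ~ r*G + v*H
commit :: Integer -> Integer -> Integer
commit r v = (r * g + v * h) `mod` p

-- The equation from above: Σ utxo = Σ kernel + offset * G + height * 60 * H
balances :: [Integer] -> [Integer] -> Integer -> Integer -> Bool
balances utxos kernels offset height =
  sum utxos `mod` p
    == (sum kernels + offset * g + height * 60 * h) `mod` p
```

For example, a single output committing to the full emission at height 2 balances against one kernel: `balances [commit 8 120] [commit 5 0] 3 2` holds, because the blinding factors satisfy 8 = 5 + 3 and the value 120 = 2 × 60.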


The third part:

  3. commitment validity (latest header commits to all objects)

is just as important, otherwise you can make up arbitrary objects that satisfy parts 1 and 2.
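A toy sketch of that third check, with a stand-in fold hash replacing the real MMR construction and hash function:

```haskell
module Commitment where

-- Stand-in combining function; the real chain uses MMRs over
-- cryptographic hashes.
toyHash :: Integer -> Integer -> Integer
toyHash a b = (a * 31 + b) `mod` 1000003

-- Stand-in for recomputing an MMR root from the full object list.
toyRoot :: [Integer] -> Integer
toyRoot = foldl toyHash 0

-- Part 3: the root recomputed from the imported objects must equal the
-- root the latest header commits to; otherwise the objects are not the
-- ones the chain actually contains.
headerCommitsTo :: Integer -> [Integer] -> Bool
headerCommitsTo committedRoot objs = toyRoot objs == committedRoot
```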


Who’s going to write a bulletproof verification in Haskell? AFAIK Adjoint has a bulletproof implementation, but at the end of the readme file it states:
This is experimental code meant for research-grade projects only. Please do not use this code in production until it has matured significantly.

I’m crazy enough to start poking at it. Not expecting to move mountains in a day, but it seems like a path worth starting down. The nice part is that it is useful even without ever being “production ready”, because it’s a secondary implementation.

There are several useful Haskell libraries, although they all appear to be wrappers around C libraries, e.g.


Found this paper on IPDL that seems designed for the job. Paper published this year.

Their language looks like it gets encoded into Coq (from a skim read of the paper).

Their code is on GitHub, too:


There’s also Cryptol, from the wizards at Galois:

discussed on Hacker News at Cryptol DSL, a tool for writing correct crypto algorithms, is now open-source | Hacker News


Cross-post from keybase:

They seem pretty fond of { log } and EasyCrypt github:

Looks like a lot of options to choose from. Wonder what the best way to reach out to the researchers to get them involved in this discussion?


@tromp So just to confirm, if we could have a formally specified and verified implementation of some part of the Grin code fall magically from the sky, the bulletproof verification code would be a good place to start?

Quickly looking around, it seems like the entry point to that currently lives in secp256k1_bulletproof_rangeproof_verify (or the multi variant):

Is this at all accurate? :slight_smile:

We have Transaction::validate call TransactionBody::validate call Output::batch_verify_proofs call Secp256k1::verify_bullet_proof_multi, which I have trouble locating in GitHub - mimblewimble/secp256k1-zkp: Fork of secp256k1-zkp for the Grin/MimbleWimble project

Ah, looks like it’s:

which then calls secp256k1_bulletproof_rangeproof_verify_multi.

Would you consider, say, secp256k1_bulletproof_rangeproof_verify_multi and everything underneath useful to be formally specified?

No doubt it would be useful. Not only to Grin, but to all systems relying on the secp256k1 library. The single verification you pointed out above would be similarly useful (would not be hard to use for alternative if slower verification).

The big question is: what language to use for formal specification that will allow for proving the library correct, and how big an effort will that be?
And would the effort be smaller if we write an alternate implementation in a language like Haskell?


Yep! I’m investigating some possibilities, wanted to make sure I’m looking at the right part. :slight_smile:


The paper “A Formal Analysis of the MimbleWimble Cryptocurrency Protocol” ends with this final remark:

“We plan to continue working on the lines presented in Section 4, namely, considering tools oriented towards the verification of cryptographic protocols and implementations, such as EasyCrypt [2], ProVerif [16], CryptoVerif [15] and Tamarin [56]. In particular, we are especially interested in using EasyCrypt, an interactive framework for verifying the security of cryptographic constructions in the computational model.”

They only briefly mention rangeproofs:

“Range proofs Grin and Beam implement range proofs using Bulletproofs [19]. Bulletproofs are a non-interactive zero-knowledge proof protocol. They are short proofs (logarithmic in the witness size) with the aim of proving that the committed value is in a certain (positive) range without revealing it. Proof generation and verification times are linear in the length of the range. Regarding our model, it is the first property a transaction should satisfy to be valid (Property 1). Furthermore, for every transaction in a block, the range proofs of all the outputs should be valid too (Property 2).”


My hunch would be: yes. There seems to be a lot of tooling to compile/transpile Haskell to Coq, which could then be formally verified.

Other alternatives are the higher-level cryptography frameworks (EasyCrypt, Cryptol, etc.). I’m new to formal verification, so it would take me a while to spin up on any of the alternatives, with Haskell probably being the least time-intensive.

Think it would come down to the level of expertise we have available in the Grin community.

@tromp @trevyn how experienced are you with formal verification? If you have experience, do you have a language/framework preference? Are the authors from the arxiv paper still around / active in the community?

I’m definitely willing to help, even if just review/brain-storming. Getting my hands dirty with some impl could be fun, too :smile:

Personally, don’t have a preference for language/framework, since I’m new to formal verification.

If we’re all similarly experienced, what looks like the easiest to pick up?

Does EasyCrypt stay true to its name?

Are the tools for Haskell mature, and easy enough to pick up?