This guy believes it's possible to make Grin quantum resistant.
Skip to 29 mins
This guy believes it's possible to make Grin quantum resistant.
Sounds like mystical mumbo jumbo.
Quantum-resistant cryptographic primitives are still a very early and active area of research. We have included some protections we can soft-fork in, in case of a completely unexpected breakthrough. But quantum resistance is still a field in cryptography that needs to be fleshed out before it can be leveraged by any public system. Otherwise you risk replacing a secure system with a new insecure one for fear of the bogeyman.
Not sure how legit this is:
Hash-based signatures offer a better alternative in terms of quantum resistance, but they come with the overhead of much larger signatures (e.g. Lamport signatures). Could ZK-SNARKs, MimbleWimble, or similar cryptographic protocols offer a way to compress these quantum-resistant signatures while maintaining their security, or would the security still fall back on the underlying ZK-SNARK / MW security assumptions? Or is my question fundamentally flawed?
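To make the size overhead concrete, here is a minimal one-time Lamport signature sketch in Python (illustration only, not production crypto; the scheme reveals one of two secret preimages per message bit, so each key must be used exactly once):

```python
import hashlib, os

def keygen(bits=256):
    # Private key: two random 32-byte preimages per message bit.
    sk = [[os.urandom(32), os.urandom(32)] for _ in range(bits)]
    # Public key: the SHA-256 hash of every preimage.
    pk = [[hashlib.sha256(x).digest() for x in pair] for pair in sk]
    return sk, pk

def _msg_bits(msg):
    digest = hashlib.sha256(msg).digest()
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(msg, sk):
    # Reveal one preimage per bit of the message hash. One-time use only!
    return [sk[i][b] for i, b in enumerate(_msg_bits(msg))]

def verify(msg, sig, pk):
    return all(hashlib.sha256(sig[i]).digest() == pk[i][b]
               for i, b in enumerate(_msg_bits(msg)))
```

The signature is 256 preimages of 32 bytes each, i.e. 8 KiB, versus roughly 64 bytes for a Schnorr signature — that is the overhead being asked about.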
MW relies on algebraic properties of its underlying crypto to achieve cut-through, which appears to be fundamentally incompatible with hash-based signatures that provide no such algebraic properties…
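The algebraic property in question is the additive homomorphism of Pedersen commitments. A toy sketch over a multiplicative group of integers (illustration only — real MW uses secp256k1 elliptic-curve points, and the generators below are made-up values, not from any standard):

```python
# Toy Pedersen commitment: C(v, r) = g^v * h^r mod p.
p = 2**127 - 1          # a Mersenne prime (assumed parameters for demo)
g, h = 3, 5             # pretend nobody knows log_g(h)

def commit(value, blind):
    return pow(g, value, p) * pow(h, blind, p) % p

# Homomorphic property: commitments to v1 and v2 combine into a
# commitment to v1 + v2. This is what lets MW verify that inputs
# minus outputs sum to zero, and cut through spent outputs, without
# ever seeing the amounts.
c1 = commit(10, 111)
c2 = commit(32, 222)
assert c1 * c2 % p == commit(42, 333)
```

A hash-based signature offers no operation like this — you cannot combine two hashes into a hash of the sum — which is the incompatibility tromp is pointing at.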
Thanks @tromp, I'm trying to understand ZK & bulletproofs… not exactly easy. From what I've read so far, MW uses switch commitments for quantum resistance. Their everlastingly binding property sounds very interesting.
How can we know the current capabilities of a quantum computer at any time? If a malicious actor built one strong enough to completely break asymmetric encryption, we wouldn't know until it's too late. They would just compute keys of interest, one by one, and people would still assume they got hacked or leaked the key somehow. Public blockchains could give us visibility into this… but only if we can recognize it.
It's exceedingly unlikely that a malicious actor can advance the field of quantum computers much faster than the well-funded public groups at Google, IBM, Microsoft, Rigetti, etc., that are currently pushing the boundaries.
It's merely unlikely.
I'm playing devil's advocate here, not trying to stir the pot. I know creating a quantum computer is next-level. But who's to say we won't find more efficient ways of producing and using quantum circuits, and/or of integrating them to do computations we've never thought of before? Quantum physics has a knack for flipping our views of reality upside down.
I'm offended you're not insulting anyone.
There is lots of talk about the error rate scaling poorly; given that large-scale quantum systems don't yet exist, there may be a reason the error rate grows. It's not like spooky action at a distance is the cleanest theory, it just happens to be the best math.
Could you enlighten me on the subject of the "quantum error rate"? I have heard and read very little about quantum error-correcting codes… maybe I'm thinking that at a certain scale, or after a certain amount of time, the probability of a calculation being inaccurate is about the same as a photon being disrupted by noise in the process?
Nothing to do with computer science; math and physics people have been saying that as the number of qubits increases, the error rate of the machines also increases.
Since it's on my mind, and presumably people here are knowledgeable: I believe the most likely outcome is that the quantum dream will prove false, but that quantum supremacy will be achieved for specific NP-complete problems, with a "ln" improvement, as another "jank" computer system.
Gil Kalai and friends' argument is roughly that quantum systems are probably exponentially unstable through size and time, which sure sounds correct to me; he talks way too much to say it. But probably, if the mechanism for spooky action at a distance is a "message" between two particles rather than the whole system itself, then the number of links in a fully connected graph grows quadratically: 1 node has 0 edges, 2 have 1, 3 have 3, etc. Gamma rays, strings, blah blah, whatever. So every microsecond the state is going through gates, these "edges" may be hit by whatever causes random collapses.
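The edge-count claim is easy to check: a complete (fully connected) graph on n nodes has n(n-1)/2 edges, which grows quadratically rather than exponentially, matching the 0, 1, 3 sequence in the post:

```python
def edges(n):
    # Edges in a complete graph on n nodes: each of the n nodes pairs
    # with the n-1 others, and each pair is counted twice.
    return n * (n - 1) // 2

assert [edges(n) for n in (1, 2, 3, 4, 5)] == [0, 1, 3, 6, 10]
```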
But this doesn't break all quantum machines. Imagine a "tree" of programmable entanglement gates with a single "quantum instruction" at the end. Since the N-qubit quantum state would only exist for a microsecond, 2x N/2 states exist for 2 microseconds, 4x N/4 for 3 microseconds, etc.; this should generate n ln n-ish errors, and it would be solving a version of SAT at extreme cost. And since the math people play it fast and loose with building that NP-complete tree, converting that into solving real problems would likely still be extremely expensive. I.e., people can say Brainfuck is Turing-complete, but it takes n*n time to compute multiplication when a normal computer does it in one specialized instruction; jank has a cost. It's useful as all fuck, but that cost in complexity is always there: converting in and out of a special version of SAT, and from that SAT tree to more general NP-complete problems, will likely follow such a pattern.
Long-term storage means the quantum state exists for longer than 1 microsecond and is therefore a bad idea, as is error correction over larger quantum states, or "non-deterministic computations" (i.e. ones that read superpositions, rather than "rotating" a superposition to a normal position), which would add to the error rates even more.
Anyone want to offer corrections? That's roughly my take on what's possible with the future of quantum.
The problems that quantum computers can "crack" (solve exponentially faster than what we think classical computers can do) are not NP-complete problems. In particular, integer factoring and discrete log are not believed to be NP-complete.
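For context on why discrete log matters here: the best known classical attacks are (sub)exponential in the bit length of the group, while Shor's algorithm is polynomial. A naive brute-force sketch (toy parameters only, for illustration):

```python
def discrete_log(g, target, p):
    # Naive search for x with g^x = target (mod p): up to ~p group
    # operations, i.e. exponential in the bit length of p. This is the
    # problem Schnorr/ECDSA security rests on, and the one Shor's
    # algorithm would solve in polynomial time on a quantum computer.
    x, acc = 0, 1
    while acc != target:
        acc = acc * g % p
        x += 1
    return x

# Tiny example: recover the exponent 29 in the group mod 101.
assert discrete_log(3, pow(3, 29, 101), 101) == 29
```

At toy sizes this is instant; at 256-bit sizes the loop would not finish before the heat death of the universe, which is the gap Shor's algorithm closes.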
I disagree with the quantum dream resting on error correction working out; I'm aware of the standard conclusion, but it's built on axioms I disagree with, so I don't care about the stated conclusions as a rule.
Shor's algorithm uses direct collapsing of quantum state, not rotation. I know it's got everyone all excited, but I'm more skeptical: it adds error to the system and hopes to fix its mistakes later. Maybe adding error to unstable systems is not the best idea in the world.
It seems to me there must be something happening under the hood of quantum. The field doesn't have a real product, so they don't yet know what they're talking about, and controlled collapsing of superpositions may not be possible at scale. The only "supremacy" that exists is the Deutsch–Jozsa algorithm, in that it literally exists, while Shor's hasn't really; so the pessimistic view is that the hype around Shor is misguided, and the paradigm should be built around quantum circuits, on the foundation of the hard-to-spell-and-pronounce one.
Adiabatic quantum computation may provide the answer.
"Deutsch–Jozsa algorithm"… black-box quantum computer… now we're talkin, haha.
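For anyone curious what Deutsch–Jozsa actually computes, it can be simulated classically in a few lines. The circuit is H on all n qubits, a phase oracle applying (-1)^f(x), then H again; the amplitude of |0…0⟩ ends up as the average of (-1)^f(x), so it is ±1 when f is constant and 0 when f is balanced. (A classical simulation of course evaluates f at every input — the quantum speedup is that the real circuit needs only one oracle call, versus up to 2^(n-1)+1 deterministic classical queries.)

```python
def deutsch_jozsa(f, n):
    # Final amplitude of |0...0>: (1/2^n) * sum over x of (-1)^f(x).
    # Constant f gives amplitude +/-1; balanced f gives exactly 0.
    amp = sum((-1) ** f(x) for x in range(2 ** n)) / 2 ** n
    return "constant" if abs(amp) == 1 else "balanced"
```

For example, `f(x) = 0` is constant, while the parity of the low bit, `f(x) = x & 1`, is balanced (half the inputs map to 0, half to 1).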
I kind of agree; I think there is a brick wall we're all hitting here. There is definitely a set of problems and calculations we can expect to do well if we can set up a quantum system correctly. But that's the thing: there is no universal quantum "lab" we can set up to do any type of calculation more efficiently than a classical system. If you want to solve a specific problem faster than a classical computer can, you must have a specific setup pointed at the exact problem at hand. If you want to factor integers quickly, you must set up a specific quantum system, in contrast to the classical computers we all use, which can solve many arbitrary calculations at our imaginations' expense.
Two lattice-based commitment schemes as possible Pedersen replacements:
Carsten Baum, Ivan Damgård, Sabine Oechsner, Chris Peikert:
Efficient Commitments and Zero-Knowledge Protocols from Ring-SIS with Applications to Lattice-based Threshold Cryptosystems.
Akinori Kawachi, Keisuke Tanaka, Keita Xagawa:
Concurrently Secure Identification Schemes Based on the Worst-Case Hardness of Lattice Problems.
@septimavector do you know if there’s anything else apart from quantum resistance that could be gained by switching the commitment scheme?