Please help evaluate: Grin's security process

Review of Grin’s Vulnerability disclosure and security process

Background

In the latest governance meeting, the recent vulnerability was discussed, along with what lessons could be drawn from it. A number of questions were raised as part of the discussion, and it was decided that a review of our current vulnerability disclosure and security process could be in order. This post outlines the proposed process for this review and also covers the questions raised. Participation from the entire Grin community is welcomed. If you have suggestions, feedback, questions, or other thoughts, feel free to raise them in-thread here.

Proposed process

  1. (This) Forum post where questions are outlined. Feedback and comments are solicited from the community.
  2. The resulting discussion is distilled into a wiki document.
  3. This document is discussed in a future governance meeting. Any changes proposed to the process and document are adopted in the meeting.
  4. The policy is updated as required.

Initial questions

  1. Add additional point of contact. Currently there are two: Ignotus Peverell and Daniel Lehnberg. If one is away and the other is for whatever reason unreachable, there is no one to get hold of. Should a third contact be added?
  2. Compartmentalization. During the recent vulnerability, we handled information and knowledge sharing through compartmentalization: only those directly involved, i.e. those who had been notified in the first place, were working on a fix, or were testing and validating the fix, were made aware of the details behind the vulnerability. The entire council was not briefed. Is that the correct process to handle this? If so, should it be clearly outlined? If not, how should it instead be handled, and why?
  3. Severity scoring and handling of issue. What do we do if it’s a critical vulnerability? What do we do if it’s a low severity vulnerability? Can we agree on different approaches based on the severity? Can these be mapped out and announced in advance?
  4. How do we handle disclosure to Grin forks? A vulnerability can affect downstream projects. We want to be responsible, but also ensure that the risk of premature leaking of vulnerabilities (i.e. before a fix is out and adopted by the network) is contained. How should we behave towards these projects in this process?
  5. How do we handle disclosure to other Mimblewimble projects? Similarly, what do we do if a vulnerability affects other projects that we do not share a codebase with?
  6. Are there timelines/milestones where we enforce certain rules? I.e., must a vulnerability be disclosed within a certain timeframe, no matter what? And if it isn’t, what should happen?

Feel free to raise your own questions.

I don’t think there should be any additional responsibility towards the forked projects. They will find out at the same time as the general public, just like with the recent Zcash vulnerability.

Same should apply here. Assuming there is compartmentalization, people are either part of a “privileged” group or not, and this is more or less based on their trustworthiness and contribution, regardless of whether they are part of another MW project or not.


Should a third contact be added?

Yes, I am in favor of one more contact.

Compartmentalization.

Since vulnerabilities are handled internally in the council, you guys should come up with a process that works for you.

Thanks for the comments so far - just a quick bump to see if we can get more thoughts ahead of tomorrow’s meeting.

I agree with this. Once you create a privileged group of forks, you must maintain it, and be judged for its contents (and lack thereof).

Add additional point of contact. Currently there are two

Why not make a hierarchical list? It works for the presidential line of succession, so it may just be a good system; aim for a dozen even if you only ever want the first 3 contacted.

How do we handle disclosure to Grin forks?

I still believe a prefabbed message saying “something’s wrong, consider not taking risks today, here are links to our policies” would be nice.

This is more of a problem if the market has futures. Finding an attack, poking the dev team, then launching the attack and collecting the bug bounty while shorting the market would be double dipping into both black- and white-hatting. If you have a stated policy, the attacker can merely short off your message instead, making the action more grey hat: they wouldn’t need to cause an attack to happen, and even if they do launch one, the harm is reduced if exchanges slow down withdrawals.

How do we handle disclosure to other Mimblewimble projects? Similarly, what do we do if a vulnerability affects other projects that we do not share a codebase with?

Greedily: fix Grin first, then if it’s Beam, attack it (is being exposed to corporate doublespeak twice enough to secure my undying hate? Well, yes.), otherwise contact their dev teams after the threat is gone.

I disagree. The shitcoin wave is likely to be big, but it’s not so big that it becomes unmanageable to know the few MW projects with actual development behind them.

Requiring 100 lines of code to exist on GitHub before adding a project to the “privileged” list would likely cut the MW clone coins down to a dozen.

Also, I’m suggesting Grin first.

Add additional point of contact.

Yes, definitely. We don’t want a situation where Igno and lehnberg are afk and a disgruntled bug hunter just logs a severe bug on Twitter.

Compartmentalization

Generally I think compartmentalization is a good solution; mining pools and exchanges have higher stakes than a single user. I don’t think “running a pool/exchange” is a valid qualifier though: clearly Poloniex has even higher stakes than my small pool, and I don’t think every small exchange or pool should be notified immediately. The more people who are in the loop, the higher the risk of a leak.

I trust the designated points of contact to make a better case-by-case decision than us coming up with an arbitrarily long list of rules. IMHO, for bugs like the last one, notify established and “trusted” exchanges with “large” volume, as well as established and “trusted” pools with a “large” number of found blocks.

The entire council was not briefed. Is that the correct process to handle this? If so, should it be clearly outlined?

Suggestion:

  • Inform the council in a general way that does not risk a leak, e.g. “we’ve got a report of a bug that allows remote (code execution | double spending | stealing of funds) and are working on a fix”.
  • Inform those directly involved/affected (see compartmentalization) the same way
  • Inform them of the “hidden” fix (e.g. last time it was buried in 1.0.2)
  • Give a clear time window for when the bug is made known to the wider public
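To make the staged flow above concrete, here is a minimal sketch. Everything in it is hypothetical: the audience names, the day offsets, and the message text are placeholders for whatever the policy ends up saying (Grin itself is written in Rust, so the sketch is too).

```rust
// A minimal sketch of the staged notification flow suggested above, with
// hypothetical audience names, day offsets, and messages; the real
// recipients and timings would be set by policy, not hard-coded.

#[derive(Debug)]
enum Audience {
    Council,          // general heads-up, no exploit details
    DirectlyAffected, // e.g. selected pools/exchanges, per compartmentalization
    Public,           // full disclosure
}

struct Notification {
    audience: Audience,
    day: u32, // days after the initial report (illustrative values only)
    message: &'static str,
}

fn staged_plan() -> Vec<Notification> {
    vec![
        Notification {
            audience: Audience::Council,
            day: 0,
            message: "Report received: bug allows double spending; fix in progress.",
        },
        Notification {
            audience: Audience::DirectlyAffected,
            day: 0,
            message: "Same general heads-up; upgrade guidance to follow.",
        },
        Notification {
            audience: Audience::DirectlyAffected,
            day: 7,
            message: "Fix shipped quietly in the next point release; please upgrade.",
        },
        Notification {
            audience: Audience::Public,
            day: 30,
            message: "Full write-up of the vulnerability and the fix.",
        },
    ]
}

fn main() {
    for n in staged_plan() {
        println!("day {:>2} -> {:?}: {}", n.day, n.audience, n.message);
    }
}
```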

How do we handle disclosure to Grin forks?

We’re not responsible for forks.

“Trusted” forks could be informed in advance, but IMHO none of the current forks are. [I personally see the risk of guys like bitgrin leaking the issue in order to shill their fork]

How do we handle disclosure to other Mimblewimble projects?

Same as with forks.

Personally, I’d inform e.g. the BEAM team of the bug, i.e. I’d consider them “trusted” for this specific case.

Are there timelines/milestones where we enforce certain rules?

Yes, it should be disclosed publicly. IMHO the way Project Zero deals with this is an improvement. I’m not sure if e.g. 30 days is enough, but disclosing after N days should be the default.
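To make “disclosing after N days should be the default” concrete, here is a rough sketch in the same spirit as Project Zero’s deadline policy. The severity tiers and the 30/60/90-day windows are made-up placeholders, not anything that has been agreed.

```rust
// A rough sketch of a default-deadline rule. The severity tiers and the
// 30/60/90-day windows below are placeholder assumptions, not Grin policy.

#[derive(Debug, Clone, Copy)]
enum Severity {
    Low,
    Medium,
    Critical,
}

/// Hypothetical default disclosure windows, in days after the report.
fn default_window_days(severity: Severity) -> u32 {
    match severity {
        Severity::Critical => 30, // tighter clock: higher risk of independent rediscovery
        Severity::Medium => 60,
        Severity::Low => 90,
    }
}

/// Disclose at the default deadline, or earlier if a fix has already been
/// released and adopted by the network.
fn disclosure_day(severity: Severity, fix_adopted_on_day: Option<u32>) -> u32 {
    let deadline = default_window_days(severity);
    match fix_adopted_on_day {
        Some(day) if day < deadline => day,
        _ => deadline,
    }
}

fn main() {
    // A critical bug whose fix was quietly shipped and adopted on day 12:
    assert_eq!(disclosure_day(Severity::Critical, Some(12)), 12);
    // A low-severity bug with no early fix falls back to the 90-day default:
    assert_eq!(disclosure_day(Severity::Low, None), 90);
}
```

The actual numbers would fall out of the severity-scoring discussion above; the point is only that the default deadline, and the “earlier if the fix is already adopted” exception, can be written down in advance.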

  • Add additional point of contact. … Should a third contact be added?

3 is better :slight_smile:

  • Compartmentalization. During the recent vulnerability, we handled information and knowledge sharing through compartmentalization: only those directly involved, i.e. those who had been notified in the first place, were working on a fix, or were testing and validating the fix, were made aware of the details behind the vulnerability. The entire council was not briefed. Is that the correct process to handle this? If so, should it be clearly outlined? If not, how should it instead be handled, and why?

We did well with the recent vulnerability :slight_smile: but some proposals here could make it even better in the future:

  1. Currently the Grin council has 9 members; it’s not a big group, and I believe each member can be trusted, based on their long-term contributions and good track records.
  2. Fixing a complex vulnerability definitely needs teamwork, and I don’t think a 9-member team is overqualified for handling a security vulnerability.
  3. Compartmentalization increases working complexity; we shouldn’t need to consider, each time, who should be invited to work on a vulnerability.

So, I propose to keep it simple: just use the current 9-member council to handle any future vulnerability.

  • Severity scoring and handling of issue. What do we do if it’s a critical vulnerability? What do we do if it’s a low severity vulnerability? Can we agree on different approaches based on the severity? Can these be mapped out and announced in advance?

Regarding handling, it’s much better for this to be written down and announced in advance.

For example:

  1. Publicly announce that we have a dedicated Keybase group for pools and another dedicated group for exchanges, and that both Keybase groups are used ONLY for vulnerability upgrade notifications. Add a notice page on the website to let pool and exchange people know how to join these groups and what their responsibilities are once they’re in. (Btw, the participating pools and exchanges should be listed somewhere: website? wiki?)

  2. Define a clear period between pool/exchange notification and full public notification: 3 days? 1 month? 3 months?

Related grin-pm issue: https://github.com/mimblewimble/grin-pm/issues/149

I’ve also suggested we add a third point of contact, to be discussed in next week’s governance meeting.

It appears that now, with Igno’s leave-of-absence, this may need to be increased further.

Good thinking @Jordy - added to the agenda of the next governance meeting.

Please be aware of an RFC that covers a lot of these topics:

It’s expected to go into Final Comment Period shortly. Feedback is welcomed in the pull request.
