Review of Grin’s vulnerability disclosure and security process
Background
In the latest governance meeting, the recent vulnerability was discussed, along with what lessons could be drawn from it. A number of questions were raised as part of the discussion, and it was decided that a review of our current vulnerability disclosure and security process could be in order. This post outlines the proposed process for this review and covers the questions raised. Participation from the entire Grin community is welcomed. If you have suggestions, feedback, questions, or other thoughts, feel free to raise them in this thread.
Proposed process
- (This) Forum post outlining the questions. Feedback and comments are solicited from the community.
- The resulting discussion is distilled into a wiki document.
- The document is discussed in a future governance meeting, and any changes proposed to the process and the document are adopted there.
- The policy is updated as required.
Initial questions
- Additional point of contact. Currently there are two: Ignotus Peverell and Daniel Lehnberg. If one is away and the other is for whatever reason unreachable, there is no one to get hold of. Should a third contact be added?
- Compartmentalization. During the recent vulnerability, we handled information and knowledge sharing through compartmentalization: only those directly involved, i.e. those who were notified in the first place, worked on a fix, or tested and validated the fix, were made aware of the details behind the vulnerability. The entire council was not briefed. Is that the correct way to handle this? If so, should it be clearly outlined? If not, how should it be handled instead, and why?
- Severity scoring and handling of issues. What do we do if it’s a critical vulnerability? What do we do if it’s a low-severity vulnerability? Can we agree on different approaches based on severity? Can these be mapped out and announced in advance? (See the sketch after this list for one illustrative shape such a mapping could take.)
- How do we handle disclosure to Grin forks? A vulnerability can affect downstream projects. We want to act responsibly towards them, but also ensure that the risk of a vulnerability leaking prematurely (i.e. before a fix is out and adopted by the network) is contained. How should we handle disclosure to these projects?
- How do we handle disclosure to other Mimblewimble projects? Similarly, what do we do if a vulnerability affects other projects that we do not share a codebase with?
- Are there timelines/milestones where we enforce certain rules? E.g. must a vulnerability be disclosed within a certain timeframe, no matter what? And if it is not, what should happen?
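
To make the severity question above more concrete, here is a minimal sketch of how severity tiers could be mapped to response expectations, assuming CVSS-style base scores. All tier names, score cut-offs, and disclosure windows below are hypothetical placeholders for discussion, not an agreed policy; Rust is used simply because it is Grin’s implementation language.

```rust
// Hypothetical sketch only: none of these names, cut-offs, or timelines
// are agreed Grin policy; they are placeholders for discussion.

#[derive(Debug)]
enum Severity {
    Critical, // e.g. inflation or other consensus-breaking bugs
    High,     // e.g. remote crash / denial of service of nodes
    Medium,   // e.g. issues requiring local access or user interaction
    Low,      // e.g. minor information leaks
}

/// Map a CVSS-style base score (0.0..=10.0) to a severity tier.
fn classify(score: f32) -> Severity {
    match score {
        s if s >= 9.0 => Severity::Critical,
        s if s >= 7.0 => Severity::High,
        s if s >= 4.0 => Severity::Medium,
        _ => Severity::Low,
    }
}

/// Maximum days before full public disclosure, per tier (placeholder values).
fn disclosure_window_days(severity: &Severity) -> u32 {
    match severity {
        Severity::Critical => 90, // fix released and adopted before any details
        Severity::High => 60,
        Severity::Medium => 30,
        Severity::Low => 14,
    }
}

fn main() {
    let score = 9.3;
    let severity = classify(score);
    println!(
        "score {:.1} -> {:?}, disclose within {} days",
        score,
        severity,
        disclosure_window_days(&severity)
    );
}
```

Announcing a mapping like this in advance would let reporters and downstream projects know what to expect without each case having to be negotiated from scratch.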
Feel free to raise your own questions.