diff --git a/README.md b/README.md index cc79e1f..7628e6a 100644 --- a/README.md +++ b/README.md @@ -27,8 +27,8 @@ Here are some of the most-visited sections: - [How to get started as a Code4rena warden](roles/wardens#joining-an-audit) - [Submission policy](roles/wardens/submission-policy.md) and [reporting guidelines](roles/wardens/submission-guidelines.md) - [Becoming Certified (KYC’d): benefits and process](roles/certified-contributors) - - [+Backstage warden role: overview, criteria and process](roles/certified-contributors/backstage-wardens.md) - - [Lookout role: overview, criteria and process](roles/certified-contributors/lookouts.md) + - [Certified Security Researchers: overview, criteria and process](roles/certified-contributors/sr-backstage-wardens.md) + - [Validator role: overview, criteria and process](roles/certified-contributors/validators.md) - [Scout role: overview and selection process](roles/certified-contributors/scouts.md) - Awarding [model](awarding/incentive-model-and-awards) and [process](awarding/incentive-model-and-awards/awarding-process.md) - [Judging criteria](awarding/judging-criteria) and [severity categorization](awarding/judging-criteria/severity-categorization.md) diff --git a/awarding/incentive-model-and-awards/README.md b/awarding/incentive-model-and-awards/README.md index c5a8442..64e7bb2 100644 --- a/awarding/incentive-model-and-awards/README.md +++ b/awarding/incentive-model-and-awards/README.md @@ -1,6 +1,6 @@ # Incentive model and awards -To incentivize **wardens**, C4 uses a unique scoring system with two primary goals: reward contestants for finding unique bugs and also to make the audit resistant to Sybil attack. A secondary goal of the scoring system is to encourage contestants to form teams and collaborate. +To incentivize **wardens**, C4 uses a unique scoring system with two primary goals: rewarding participants for finding unique bugs and making the audit resistant to Sybil attacks. A secondary goal of the scoring system is to encourage participants to form teams and collaborate. **Judges** are incentivized to review findings and decide their severity, validity, and quality by receiving a share of the prize pool themselves. @@ -11,7 +11,7 @@ To incentivize **wardens**, C4 uses a unique scoring system with two primary goa ## High and Medium Risk bugs -Contestants are given shares for bugs discovered based on severity, and those shares give the owner a pro rata piece of the pie: +Wardens are given shares for bugs discovered based on severity, and those shares give the owner a pro rata piece of the pie: `Med Risk Slice: 3 * (0.9 ^ (split - 1)) / split`\ `High Risk Slice: 10 * (0.9 ^ (split - 1)) / split` @@ -40,6 +40,14 @@ The resulting awards are: | 'Warden B' | 'H-02' | '3' | 8.91 | 3 | 2.70 | 1000 | | 'Warden C' | 'H-02' | '3' | 8.91 | 3 | 2.70 | 1000 | +### Bonuses for top competitors +For audits starting on or after April 30, 2024, there are two bonuses for top-performing wardens: + +1. **Hunter bonus:** 10% of the HM pool will be awarded to the warden or team who identifies the greatest number of unique HMs. +2. **Gatherer bonus:** 10% of the HM pool will be awarded to the warden or team who identifies the greatest number of valid HMs. + +Both bonuses weigh Highs more heavily than Mediums, similarly to Code4rena's standard awarding mechanism. + ### Duplicates getting partial credit All issues which identify the same functional vulnerability will be considered duplicates regardless of effective rationalization of severity or exploit path.
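For illustration, here is a minimal Python sketch of the slice formulas above, with a hypothetical pro rata payout step (the warden names and pool size are illustrative, not taken from the table):

```python
def hm_slice(severity: str, split: int) -> float:
    # Shares per duplicate, per the documented formulas:
    #   Med Risk Slice:  3  * (0.9 ^ (split - 1)) / split
    #   High Risk Slice: 10 * (0.9 ^ (split - 1)) / split
    base = {"M": 3.0, "H": 10.0}[severity]
    return base * 0.9 ** (split - 1) / split

# A High finding with 3 duplicates yields 10 * 0.9^2 / 3 = 2.7 shares per warden,
# matching the 2.70 slice shown for H-02 above. Awards are then pro rata by shares.
shares = {"warden_a": hm_slice("H", 1), "warden_b": hm_slice("H", 3)}
pool = 50_000  # hypothetical HM pool
total = sum(shares.values())
awards = {w: pool * s / total for w, s in shares.items()}
```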
@@ -103,66 +111,46 @@ We can see here that the logic behind the `partial-` labels only impacts the awa Only the award amounts for "partial" findings have been reduced, in line with expectations. The aim of this adjustment is to recalibrate the rewards allocated for these specific findings. Meanwhile, the awards for full-credit findings remain unchanged. -## Bot races - -The first hour of each Code4rena audit is devoted to a bot race, to incentivize high quality automated findings as the first wave of the audit. - -- The winning bot report is selected and shared with all wardens within 24 hours of the audit start time. -- The full set of issues identified by the best automated tools are considered out of scope for the audit and ineligible for awards. - -Doing this eliminates the enormous overlapping effort of all wardens needing to document common low-hanging issues And because the best bot report is shared with auditors at the start of the audit, these findings serve as a thorough starting place for understanding the codebase and where weaknesses may exist. - -**Ultimately, the bot race ensures human auditors are focused on things humans can do.** - -By designating a portion of the pool in this direction, Code4rena creates a separate lane for the significant investment of effort that many auditors already make in automated tooling -- and rather than awarding 100 people for identifying the same issue, we award the best automated tools. - -## Analyses - -Each warden is encouraged to submit an Analysis alongside their findings for each audit, to share high-level advice and insights from their review of the code. - -Where individual findings are the "trees" in an audit, the Analysis is a "forest"-level view. +### Validator-improved submissions -Advanced-level Analyses compete for a portion of each audit's award pool, and are graded and awarded similarly to QA and Gas Optimization reports. +[Validators](https://docs.code4rena.com/roles/certified-contributors/validators.md) may enhance submissions (add PoC, increase quality of report, etc.) in exchange for a % of the finding’s payout. +For Validator-improved submissions: if the judge believes the validator added a measurable enhancement, they get a split of the value of the issue: +- 25% cut → small enhancement +- 50% cut → med enhancement +- 75% cut → large enhancement ## QA and Gas Optimization Reports -In order to incentivize wardens to focus efforts on high and medium severity findings while also ensuring quality coverage, the pool’s allocation is capped for low severity, non-critical, and gas optimization findings. +In order to incentivize wardens to focus efforts on high and medium severity findings while also ensuring quality coverage, the pool’s allocation is capped for low severity, governance/centralization risk, and gas optimization findings. -Low and non-critical findings are submitted as a **single** QA report. Similarly, gas optimizations are submitted as a single gas report. For more on reports, see [Judging criteria](/awarding/judging-criteria/README.md). +Low severity and governance/centralization risk findings are submitted as a **single** QA report. Similarly, gas optimizations are submitted as a single gas report. For more on reports, see [Judging criteria](/awarding/judging-criteria/README.md). QA and gas optimization reports are awarded on a curve based on the judge’s score. -- QA reports compete for a share of 2.5% of the prize pool (e.g. 
$1,250 for a $50,000 audit); -- The gas optimization pool varies from audit to audit, but is typically 2.5% of the total prize pool (e.g. $1,250 for a $50,000 audit); -- QA and Gas optimization reports are scored by judges using A/B/C grades (with C = unsatisfactory), and awarded on a curve. +- QA reports compete for a share of 4% of the prize pool (e.g. $2,000 for a $50,000 audit); +- The gas optimization pool varies from audit to audit; +- QA and Gas optimization reports are awarded on a curve. There is a very high burden of quality and value provided for QA and gas optimization reports. Only submissions that demonstrate full effort worthy of consideration for inclusion in the report will be eligible for rewards. -It is highly recommended to clearly spell out the impact of proposed gas optimizations. - -Historically, Code4rena valued non-critical findings at 0; the intent of the QA report is not to increase the value of non-criticals, but rather to allow them to be consolidated in reports alongside low severity issues. - **Note:** Audits pre-dating February 3, 2022 awarded low risk and gas optimization shares as: `Low Risk Shares: 1 * (0.9 ^ (findingCount - 1)) / findingCount` -In the unlikely event that zero high- or medium-risk vulnerabilities are found, the HM award pool will be divided based on the QA Report curve. - -## Grades for Analyses, QA and Gas reports +### Ranks for QA and Gas reports -Analyses, QA reports and Gas reports are graded A, B, or C. +_These guidelines apply to all audits starting on or after April 30, 2024._ -C scores are unsatisfactory and ineligible for awards. +After post-judging QA is complete, the Judge and Validators vote to select the top 3 QA reports and Gas reports. (In the case of a tie vote, there may be a 4th place report.) -All A-grade reports receive a score of 2; All B-grade reports get a 1. Awarding for QA and Gas reports is on a curve that's described [here](https://docs.code4rena.com/awarding/incentive-model-and-awards/curve-logic). +The 1st, 2nd, and 3rd place winners are awarded using a curve model that will be documented here ASAP. -### Bonus for best / selected for report -Judges choose the best report in each category (Analysis, QA report, and Gas report), each of which earns the same 30% share bonus described under "High and Medium Risk bugs." +Satisfactory reports not among the winning reports will not be awarded -- but will count towards wardens' accuracy scores. -**Note:** if the `selected for report` submission has a B-grade label, it will still be treated as A-grade and given proportionally more than B-grade, plus the 30% bonus for being `selected for report`. +In the unlikely event that zero high- or medium-risk vulnerabilities are found, the HM award pool will be divided among all satisfactory QA reports based on the QA Report curve, **unless otherwise stated in the audit repo.** ## Satisfactory / unsatisfactory submissions -Any submissions deemed unsatisfactory are ineligible for awards. +Any submissions deemed unsatisfactory are ineligible for awards, and count against wardens' accuracy scores. The bar for satisfactory submissions is that they are roughly at a level that could be found in a draft report by a professional auditor: specifically on the merits of technical substance, with writing quality considered only where it interferes with comprehension of the technical message. 
@@ -176,3 +164,39 @@ It is possible for a submission to be *technically* valid and still unsatisfacto - approach is disrespectful of sponsors’ and judges’ time in some way Any submissions that appear to be direct copies of other reports in the current audit will be collectively deemed unsatisfactory. + +## Other submission types + +As of April 30, 2024, the following submission types are paused: + +### Bot reports + +The first hour of each Code4rena audit is devoted to a bot race, to incentivize high-quality automated findings as the first wave of the audit. + +- The winning bot report is selected and shared with all wardens within 24 hours of the audit start time. +- The full set of issues identified by the best automated tools are considered out of scope for the audit and ineligible for awards. + +Doing this eliminates the enormous overlapping effort of all wardens needing to document common low-hanging issues. And because the best bot report is shared with auditors at the start of the audit, these findings serve as a thorough starting place for understanding the codebase and where weaknesses may exist. + +**Ultimately, the bot race ensures human auditors are focused on things humans can do.** + +By designating a portion of the pool in this direction, Code4rena creates a separate lane for the significant investment of effort that many auditors already make in automated tooling -- and rather than awarding 100 people for identifying the same issue, we award the best automated tools. + +### Analyses + +Analyses share high-level advice and insights from wardens' review of the code. + +Where individual findings are the "trees" in an audit, the Analysis is a "forest"-level view. + +Analyses compete for a portion of each audit's award pool, and are graded and awarded similarly to QA and Gas Optimization reports. + +### Understanding historical grading for QA, Gas, and Analysis reports + +For audits that started before April 30, 2024: + +- Analyses, QA reports and Gas reports in this time period were graded A, B, or C. +- C scores were unsatisfactory and ineligible for awards. +- All A-grade reports received a score of 2; all B-grade reports got a 1. Awarding for QA and Gas reports was on a curve that's described [here](https://docs.code4rena.com/awarding/incentive-model-and-awards/curve-logic). +- Judges chose the best report in each category (Analysis, QA report, and Gas report), each of which earned the same 30% share bonus described under "High and Medium Risk bugs." + +**Note:** if the `selected for report` submission had a B-grade label, it was still treated as A-grade and given proportionally more than B-grade, plus the 30% bonus for being `selected for report`. diff --git a/awarding/incentive-model-and-awards/awarding-process.md b/awarding/incentive-model-and-awards/awarding-process.md index cd2877c..4ea22ba 100644 --- a/awarding/incentive-model-and-awards/awarding-process.md +++ b/awarding/incentive-model-and-awards/awarding-process.md @@ -5,18 +5,18 @@ description: >- # Awarding process -At the conclusion of an audit, sponsors review wardens’ findings and express their opinions with regard to severity of issues. 
Judges evaluate input from both and make the ultimate decision in terms of severity and validity of issues. (See [How to judge an audit](../../roles/judges/how-to-judge-a-contest.md) for more detail.) +At the conclusion of an audit, sponsors review wardens’ findings and express their opinions with regard to severity of issues. Judges evaluate input from both and make the ultimate decision in terms of severity and validity of issues. (See [How to judge an audit](https://docs.code4rena.com/roles/judges/how-to-judge-a-contest.md) for more detail.) In making their determination, judges add labels to Github issues, while the original submission data (including the warden's proposed severity rating) is preserved via a JSON data file. -The judge's decisions are reviewed by the sponsoring project team and by [+backstage wardens](https://docs.code4rena.com/roles/certified-contributors/backstage-wardens) via a 48-hour QA process, to ensure fairness and quality. +The judge's decisions are reviewed by the sponsoring project team and by [Certified Security Researchers](https://docs.code4rena.com/roles/certified-contributors/sr-backstage-wardens) via a 48-hour QA process, to ensure fairness and quality. Judging data is used to generate the awards using Code4rena's award calculation script, which factors in: - Risk level - Validity - Number of duplicates -- Grade (A, B, C; Satisfactory/Unsatisfactory) +- Rank (1st, 2nd, 3rd; Satisfactory/Unsatisfactory) - In some cases, "partial duplicate" status It should be possible to reverse engineer awards using a combination of two CSV files: @@ -40,7 +40,7 @@ If you still don’t see the award in your wallet, please [open a help desk tick We are occasionally asked how wardens should declare Code4rena earnings for tax (or other financial/legal) purposes. Due to the nascent nature of DAOs, we are unable to provide reliable information in this area. You must assess and determine your own best course of action. -Audit contest rewards are distributed by the DAO, which does not have a legal personality. +Audit rewards are distributed by the DAO, which does not have a legal personality. The DAO has designated Code4rena Foundation as its agent via [a governance action](https://github.com/code-423n4/org/discussions/13) [approved by DAO members](https://polygonscan.com/tx/0x8fbe178e34a7ae03a5e0d1f49f23e38f3a1c0d1186a67920d33196a89f79da98) for purposes of entering into contractual agreements. However, wardens are not in any contractual agreement with the Foundation [unless they are certified](https://code4rena.com/certified-contributor-summary/). diff --git a/awarding/incentive-model-and-awards/qa-gas-report-faq.md b/awarding/incentive-model-and-awards/qa-gas-report-faq.md index 4e93d4d..be6bed9 100644 --- a/awarding/incentive-model-and-awards/qa-gas-report-faq.md +++ b/awarding/incentive-model-and-awards/qa-gas-report-faq.md @@ -1,32 +1,28 @@ # FAQ about QA and Gas Reports -This FAQ pertains to the award mechanism update that takes effect February 3, 2022, which changes the submission guidelines for low-risk, non-critical, and gas optimization reports. For more details, see [Judging Criteria](https://docs.code4rena.com/roles/wardens/judging-criteria). +This FAQ pertains to the award mechanism update that takes effect April 30, 2024, which changes the submission guidelines for low-risk, governance/centralization risk, and gas optimization reports. For more details, see [Judging Criteria](https://docs.code4rena.com/roles/wardens/judging-criteria). ### What happens to the award pool if no Med/High vulns are found? Unless otherwise stipulated in the audit repo, the full pool would then be divided based on the QA Report curve. -### Will non-critical findings hold some weight? Just want to know if it's worth spending a considerable amount of time writing this part of the report. 
+### Can I still include non-critical findings in my QA report? -The full QA report will be graded on a curve against the other reports. We'll be experimenting together as a community with this, but we think we'll learn a lot and it will be interesting to see the best practices emerge. +Non-critical findings are discouraged for QA reports. -We are intentionally not providing an "example," as we are eager to see what approaches folks take and to be able to learn from a variety of approaches. - -### What if a low-impact QA report turns out to be a high-impact report? How does that work with the 10% prize pool? Would the report be upgraded? +### What if a low-impact QA report turns out to be a high-impact report? Would the report be upgraded? It's conceivable it could be upgraded, though it's important to consider that part of auditing is demonstrating proper theory of how an issue could be exploited. If a warden notices something is "off" but is unable to articulate why it could lead to loss of funds, for example, the job is only half-done; without understanding the implications, a developer could very well overlook or deprioritize the issue. -The tl;dr for determining severity is relatively clear with regard to separating by impact. +The tl;dr for [determining severity](https://docs.code4rena.com/awarding/judging-criteria/severity-categorization.md) is relatively clear with regard to separating by impact. -### What happens when an issue submitted by the warden as part of their QA report (an L or N) *DOES* get bumped up to Med/High by the judge after review? +### What happens when an issue submitted by the warden as part of their QA report (an L or C) *DOES* get bumped up to Med/High by the judge after review? If it seemed appropriate to do so based on a judge's assessment of the issue, they could certainly choose to do this. -The judge could create a new separate Github issue in the findings repo that contains the relevant portions of the warden's QA report, and add that to the respective H or M level bucket. - However, QA items may be marked as a duplicate of another finding *without* being granted an upgrade, since making the case for *how* an issue can be exploited, and providing a thorough description and proof of concept, is part of what merits a finding properly earning medium or high severity. ### Conversely, in the reverse situation where an issue submitted by wardens as H/M level, is subsequently downgraded to QA level by the judge during their review, would the penalty just be excluding the overrated warden submission from consideration in regards to the QA rewards? -We'll need to see how it works in reality, but our current assumption is that (a) low severity findings attempted to get pushed into med/high would essentially get zero (just logically so since they wouldn't be high or med), and then (b) their QA report would be lower quality as a result, and so they wouldn't score as highly as they could have. Judges could also decide to mark off points in someone's QA report if they saw behavior that seemed like it might be trying to game for higher rewards by inflating severity, so it could have a negative consequence as well. +In theory, findings downgraded to QA are grouped with the warden's QA report (if one exists). In practice, however, we have found that downgraded issues do not have a significant impact on wardens' overall QA score. 
Judges can also decide to mark off points in someone's QA report if they see behavior that seems like it might be trying to game for higher rewards by inflating severity, so it can have a negative consequence as well. diff --git a/awarding/judging-criteria/README.md b/awarding/judging-criteria/README.md index 79f4055..b59fa2e 100644 --- a/awarding/judging-criteria/README.md +++ b/awarding/judging-criteria/README.md @@ -69,8 +69,35 @@ The scoring system has three primary goals: * Hardening C4 code audits to Sybil attacks * Encouraging coordination by incentivizing Wardens to form teams. +### QA reports (Low risk and Governance/Centralization risk) + +Low risk and Governance/Centralization risk findings must be submitted as a _single_ QA report per warden. We allocate a **fixed 4% of prize pools toward QA reports.** + +QA reports should include: + +* all low severity findings; and +* all Governance/Centralization risk findings. + +Each QA report should be assessed based on report quality and thoroughness as compared with other reports, with awards distributed on a curve. + +Judges have discretion to assign a lower grade to wardens overstating the severity of QA issues (submitting low/non-critical issues as med/high in order to angle for higher payouts). Judges may also raise the severity of a QA finding at their discretion. + +### Gas reports + +Gas reports should be submitted using the **same approach as the QA reports:** a single submission per warden which includes all identified optimizations. + +Gas pools are optional, but for audits that include Gas optimizations, the precise award pool can be found in that audit's repo. + +## Estimating Risk + +See [Severity Categorization](https://docs.code4rena.com/awarding/judging-criteria/severity-categorization). + +## Other report types + ### Analysis +_This report type is currently paused, and is not accepted for audits starting on or after April 30, 2024._ + Analyses are judged A, B, or C, with C being unsatisfactory and ineligible for awards. The judge selects the best Analysis for inclusion in the audit report. An analysis is a written submission outlining: @@ -99,30 +126,3 @@ Areas of interest include: - Weakspots and any single points of failure Merely repeating the code functionality in pseudo-documentation is not considered valuable information. - -### QA reports (low/non-critical) - -QA reports are graded A, B, or C, with C being unsatisfactory and ineligible for awards. The judge selects the best QA report for inclusion in the audit report. - -Low and non-critical findings must be submitted as a _single_ QA report per warden. We allocate a **fixed 2.5% of prize pools toward QA reports.** - -QA reports should include: - -* all low severity findings; and -* all non-critical findings. - -Each QA report should be assessed based on report quality and thoroughness as compared with other reports, with awards distributed on a curve. - -Judges have discretion to assign a lower grade to wardens overstating the severity of QA issues (submitting low/non-critical issues as med/high in order to angle for higher payouts). Judges may also raise the severity of a QA finding at their discretion. - -### Gas reports - -Gas reports are graded A, B, or C, with C being unsatisfactory and ineligible for awards. The judge selects the best Gas report for inclusion in the audit report. - -Gas reports should be submitted using the **same approach as the QA reports:** a single submission per warden which includes all identified optimizations. 
The gas pool is allocated on a curve. - -The gas pool varies from audit to audit, but typically it consists of 2.5% of the total prize pool. The precise gas pool for each audit can be found in that audit's repo. - -## Estimating Risk - -See [Severity Categorization](https://docs.code4rena.com/awarding/judging-criteria/severity-categorization). diff --git a/awarding/judging-criteria/severity-categorization.md b/awarding/judging-criteria/severity-categorization.md index 7d3a820..6887a8b 100644 --- a/awarding/judging-criteria/severity-categorization.md +++ b/awarding/judging-criteria/severity-categorization.md @@ -2,7 +2,7 @@ Where **assets** refer to funds, NFTs, data, authorization, and any information intended to be private or confidential: -* **QA (Quality Assurance)** Includes both **Non-critical** (code style, clarity, syntax, versioning, off-chain monitoring (events, etc) and **Low risk** (e.g. assets are not at risk: state handling, function incorrect as to spec, issues with comments). Excludes Gas optimizations, which are submitted and judged separately. +* **QA (Quality Assurance)** Includes **Low risk** (e.g. assets are not at risk: state handling, function incorrect as to spec, issues with comments) and **Governance/Centralization risk** (including admin privileges). Excludes Gas optimizations, which are submitted and judged separately. Non-critical issues (code style, clarity, syntax, versioning, off-chain monitoring (events), etc.) are discouraged. * **2 — Med:** Assets not at direct risk, but the function of the protocol or its availability could be impacted, or leak value with a hypothetical attack path with stated assumptions, but external requirements. * **3 — High:** Assets can be stolen/lost/compromised directly (or indirectly if there is a valid attack path that does not have hand-wavy hypotheticals). @@ -10,7 +10,7 @@ Where **assets** refer to funds, NFTs, data, authorization, and any information Submissions describing centralization risks should be submitted as follows: -- Direct misuse of privileges shall be submitted in the Analysis report. +- Direct misuse of privileges shall be submitted in the QA report. - Reckless admin mistakes are invalid. Assume calls are previewed. - Mistakes in code only unblocked through admin mistakes should be submitted within a QA Report. - Privilege escalation issues are judged by likelihood and impact and their severity is uncapped. 
diff --git a/roles/certified-contributors/README.md b/roles/certified-contributors/README.md index 95cf986..994d354 100644 --- a/roles/certified-contributors/README.md +++ b/roles/certified-contributors/README.md @@ -5,15 +5,15 @@ description: In order to create opportunities for contributions which rely on es Contributors who have provided ID verification and a signed agreement may be eligible to participate in: -- Private or invite-only contests -- Scout role (focused on scoping and pre-contest code intel) +- Private or invite-only audits +- Scout role (focused on scoping and pre-audit code intel) - [Judging](/roles/judges/README.md) -- ["Backstage" warden opportunities](backstage-wardens.md) (post-contest triage and post-judging QA) +- [Certified Security Researcher opportunities](sr-backstage-wardens.md) (post-audit findings access and post-judging QA) - Providing mitigation review services - Offering solo audit and consulting services through C4 Additional opportunities we are considering include: -- Certain contest bonus token awards which may be restricted from US persons due to regulations or token grant agreements +- Certain bonus token awards which may be restricted from US persons due to regulations or token grant agreements - May be a factor in maxing out awards in the future ## **Certification process and constraints** diff --git a/roles/certified-contributors/backstage-wardens.md b/roles/certified-contributors/sr-backstage-wardens.md similarity index 70% rename from roles/certified-contributors/backstage-wardens.md rename to roles/certified-contributors/sr-backstage-wardens.md index 4b47ea8..868273f 100644 --- a/roles/certified-contributors/backstage-wardens.md +++ b/roles/certified-contributors/sr-backstage-wardens.md @@ -1,24 +1,24 @@ -# +Backstage wardens +# Certified Security Researchers (formerly +Backstage wardens) -Certified contributors who meet certain performance criteria within C4 gain "+Backstage" access to C4 audits, which includes: +Certified contributors who meet certain performance criteria within C4 gain the Certified Security Researcher (SR) role, which provides access to: -- Immediate access to findings repos after audits conclude +- Findings repos, immediately after audits conclude - Post-judging QA -The minimum criteria to become +Backstage are as follows: +The minimum criteria to become a Certified SR are as follows: 1. Be approved as a Certified C4 contributor; 1. Submit valid findings to at least 3 Code4rena audits (i.e. valid findings on the [Code4rena leaderboard](https://code4rena.com/leaderboard/)); 1. Have at least 1 high severity finding OR 3 medium severity findings on the [Code4rena leaderboard](https://code4rena.com/leaderboard/), OR score A on a QA report, Gas report, or Analysis (formerly "Advanced Analysis"; basic Analysis grades are not eligible); 1. Abide by the Certified Contributor Terms and Conditions (see [application form](https://code4rena.com/certified-contributor-application/)). -## To request +Backstage access +## To request the SR role -Once you meet the eligibility criteria, submit a [Help Desk Request](https://code4rena.com/help/) to request +Backstage access, and C4 staff will get you set up. +Once you meet the eligibility criteria, submit a [Help Desk Request](https://code4rena.com/help/) to request the SR role, and C4 staff will get you set up. 
## Certified contributor professional conduct guidelines -Contributors may lose their +Backstage role by violating the code of professional conduct as outlined in the certified contributor agreement. This code asks wardens to: +Contributors may lose their SR role by violating the code of professional conduct as outlined in the certified contributor agreement. This code asks wardens to: - take an objective, collegial, and intellectually open tone in considering and discussing all findings - treat wardens and sponsors, and all other Code4rena community members with respect and an assumption of positive intent diff --git a/roles/certified-contributors/validators.md b/roles/certified-contributors/validators.md index d5085f5..1594557 100644 --- a/roles/certified-contributors/validators.md +++ b/roles/certified-contributors/validators.md @@ -3,14 +3,14 @@ > Validators decentralize triage by reviewing submissions from wardens with accuracy rates below the qualifying threshold (eg 50%). > -All open competitive audits at Code4rena that begin on or after May 1, 2024 will include Validators. +All open competitive audits at Code4rena that begin on or after April 30, 2024 will include Validators. ## Validator tl;dr -- Each competition has a **qualifying threshold** that allows wardens to bypass validators. This threshold is based on your submission accuracy rate and being established as a quality contributor and as non-sybil. (For those familiar with ‘backstage warden’ criteria, it is essentially that plus an acceptable accuracy rate.) +- Each competition has a **qualifying threshold** that allows wardens to bypass validators. This threshold is based on your submission accuracy rate, as well as being established as a quality contributor and as non-sybil. (For those familiar with ‘backstage warden’ criteria, it is essentially that plus an acceptable accuracy rate.) - Qualified wardens’ submissions go directly to the usual findings repo. - All other wardens’ submissions are routed to a Validation repo. -- 3-5 **Validators** (✨ new role) review submissions in the Validation repo immediately after the audit closes +- 3-5 **Validators** review submissions in the Validation repo immediately after the audit closes - Satisfactory submissions are forwarded to the findings repo - Unsatisfactory submissions are closed - Validators may also enhance submissions (add PoC, increase quality of report, etc.) in exchange for a percentage of the finding’s payout. (See “Awarding” section below.) @@ -38,6 +38,8 @@ The new **Validator** role replaces the Lookout role, so the Lookout pool will b - Validator can edit (improve) an issue and submit it (see below for more detail) - After completing 5 issues, another set of 5 will be assigned to you. - Reviewing 1 submission (adding any label other than `unknown`) from within the `unknown` queue will assign you another 5 issues. This incentivizes picking up issues that other validators passed on in order to capture more of the pool since you can only have a limited number of issues assigned to you at a given time. +- In addition to the initial set of 5 HM issues, each Validator is assigned a share of QA and Gas reports to review: `total_reports / total_validators` + - Each validator should forward all satisfactory QA/Gas reports to the findings repo for judging. - ⏰ **Timeline:** goal is for Validators to complete work within 48h after the audit closes. 
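The assignment arithmetic above can be sketched as follows; this is a hedged illustration (the rounding rule and batching helper are assumptions — the docs only specify each Validator's QA/Gas share as `total_reports / total_validators` and the batches of 5 HM issues):

```python
import math

def qa_gas_share(total_reports: int, total_validators: int) -> int:
    # Each Validator's documented share of QA and Gas reports to review:
    # total_reports / total_validators (rounded up here so no report is left
    # unassigned -- the rounding behavior itself is an assumption).
    return math.ceil(total_reports / total_validators)

def next_hm_batch(queue: list[str], batch_size: int = 5) -> list[str]:
    # Validators work through HM issues in sets of 5; completing a set (or
    # reviewing one issue from the `unknown` queue) triggers another 5.
    return queue[:batch_size]
```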
## Limbo Round (for any Judges/Validators) — after 48 hours @@ -53,7 +55,7 @@ Once the audit is finalized: Each round's validations have different values. -(`*`) Note: After May 1, 2024, the Lookout role will be retired in favour of the new Validator role. All current judges and lookouts will be granted the Validator role by default. Going forward, the Validator role will be granted to community members who meet the eligibility criteria. +(`*`) Note: After April 30, 2024, the Lookout role will be retired in favour of the new Validator role. All current judges and lookouts will be granted the Validator role by default. Going forward, the Validator role will be granted to community members who meet the eligibility criteria. --- @@ -70,12 +72,17 @@ Each round's validations have different values. - The findings repo (for high-performing wardens’ submissions + Validated submissions) will also have duplicate submissions. - The Judge is responsible for reviewing and finalizing dupe sets, and assessing quality. -## Validators can improve submissions +## Validators can improve HM submissions + +Validators may improve High and Medium-risk submissions (HMs); they may not improve QA or Gas reports. If a validator chooses to improve a submission: - The original submission is preserved for the judge to see - The judge evaluates validators’ enhancements and whether they validated, proved, or enhanced them. (See “Awarding” section below.) +- Improved submissions must share the same root cause as the original submission. + +*N.B. If the finding is already present in the findings repo, then improved submissions will be judged in their original versions, and Validator improvements will be disregarded.* Validators should check the findings repo for duplicates prior to improving a submission. ## Awarding @@ -85,12 +92,12 @@ If a validator chooses to improve a submission: - The phase during which the issues were triaged, and - Final accuracy. - For Validator-improved submissions: if the judge believes the validator added a measurable enhancement, they get a split of the value of the issue: - - 25% cut → small enhancement = moved submission from unsatisfactory to satisfactory - - 50% cut → med enhancement = moved submission from invalid to valid - - 75% cut → large enhancement = identified a more severe vulnerability + - 25% cut → small enhancement + - 50% cut → med enhancement + - 75% cut → large enhancement - Phases and points: - **Round 1** validations are worth 1 point each - - **Round 2 (”limbo”)** validations validations are assigned a value on a curve based on the order of the submission (the "nonce") such that the later the submission is validated the more it is worth. The function for this is `steepness ^ (totalNonces - nonceOrder)` where steepness is set to `1.015` + - **Round 2 (”limbo”)** validations are assigned a value on a curve based on the order of the submission (the "nonce") such that the later the submission is validated the more it is worth. The function for this is `steepness ^ (totalNonces - nonceOrder)` where steepness is set to `1.015` - Passing costs `0.5` (ie passing twice neutralizes the value of 1 successful validation) - The validator's accuracy in this competition has an impact on their overall point total. `points = (roundOneTotal + limboTotal - (pass * y)) * (accuracy ^ x)` where `y` is the `pass cost` (`0.5` by default) and `x` is the accuracy decrementer (`3` by default). - For Round 2 (”limbo”) submissions, the value of the accuracy decrementer is 50% of the nonce value. 
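To make the point arithmetic concrete, here is a minimal Python sketch of the documented formulas (function and parameter names are illustrative, not taken from Code4rena's tooling):

```python
def limbo_value(total_nonces: int, nonce_order: int, steepness: float = 1.015) -> float:
    # Round 2 ("limbo") validation value, as documented:
    # steepness ^ (totalNonces - nonceOrder), with steepness = 1.015
    return steepness ** (total_nonces - nonce_order)

def enhancement_cut(size: str) -> float:
    # Validator's share of an improved issue's value: 25% / 50% / 75%
    return {"small": 0.25, "med": 0.50, "large": 0.75}[size]

def validator_points(round_one_total: float, limbo_total: float, passes: int,
                     accuracy: float, pass_cost: float = 0.5,
                     accuracy_decrementer: float = 3.0) -> float:
    # points = (roundOneTotal + limboTotal - pass * y) * accuracy ^ x,
    # where y is the pass cost (0.5 by default) and x is the accuracy
    # decrementer (3 by default)
    return (round_one_total + limbo_total - passes * pass_cost) * accuracy ** accuracy_decrementer
```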
@@ -98,10 +105,16 @@ If a validator chooses to improve a submission: ## Miscellaneous -- Judges can play the Judge and Validator role on the same audit. +- Judges can play the Judge and Validator role on the same audit, but are not eligible for any HM pool payouts on audits they judge — even if they enhance an issue. - Both the Validation repo and the Findings repo will be open to wardens with the SR role, for the purposes of post-judging QA. - All PJQA requests must be posted in the Github Discussion in the findings repo. + - QA and Gas reports closed by Validators (i.e. *not* added to the findings repo) are NOT eligible for PJQA - Both repos will be made public when the audit report is published. - Validators’ accuracy score as a warden is impacted by their accuracy as Validators: - 50% of Validators’ *round 1* validator accuracy is applied to their personal accuracy score. - Limbo phase accuracy only has a 25% impact on their accuracy. + - All report types (High, Medium, QA, and Gas) are counted towards accuracy scores. + - False positives count 50% toward Validators' submission accuracy + - False negatives count 100% toward Validators' submission accuracy + - In other words, each false negative is double the negative value of false positives in the ‘incorrect’ count. +- [Certified Security Researchers](../certified-contributors/sr-backstage-wardens.md) may appeal improvements made by Validators, and request that the judge review their original submission during post-judging QA. diff --git a/roles/judges/how-to-judge-a-contest.md b/roles/judges/how-to-judge-a-contest.md index 5aaf2b7..03a3d73 100644 --- a/roles/judges/how-to-judge-a-contest.md +++ b/roles/judges/how-to-judge-a-contest.md @@ -7,13 +7,15 @@ We ask that you try to complete the judging process quickly so that we can distr ## Here’s how the process works leading up to judging -C4 kicks off the code competition and establishes a private repo to receive incoming issues. Typically, most findings come in on the last day of the audit. When the audit ends, a Lookout will presort the repo and then it will be handed to the sponsor. Sponsors will have the chance to review the findings, comment, and provide feedback on issues. +C4 kicks off the code competition and establishes a private repo to receive incoming issues. Typically, most findings come in on the last day of the audit. When the audit ends, you will get access to both the validation repo and the findings repo. A group of [Validators](https://docs.code4rena.com/roles/certified-contributors/validators.md) will triage submissions from all wardens below a set accuracy threshold, and submissions they deem satisfactory will be added to the findings repo. -Sponsor input is non-binding, and do note that sponsors are heavily biased against having a report that includes very many vulnerabilities. Focus your work as a judge on protecting users and providing feedback to wardens. +Sponsors are invited to review the findings, comment, and provide feedback on issues within the findings repo. Sponsor input is non-binding, and do note that sponsors are heavily biased against having a report that includes very many vulnerabilities. Focus your work as a judge on protecting users and providing feedback to wardens. + +Judges may begin work anytime after the submission period ends. ## Before you get started -Read the [Judging Criteria](https://docs.code4rena.com/roles/wardens/judging-criteria), [Submission Policy](../wardens/submission-policy.md), and review the audit readme as provided by the sponsor. 
+Read the [Judging Criteria](https://docs.code4rena.com/roles/wardens/judging-criteria), [Submission Policy](https://docs.code4rena.com/roles/wardens/submission-policy.md), and review the audit readme as provided by the sponsor. You may also be interested in browsing past audits, and [reviewing open issues in the Rulebook repo](https://github.com/code-423n4/rulebook/issues), in order to see how other judges have handled issues. @@ -32,7 +34,7 @@ Those documents also includes all information regarding de-duping, grading QA/Ga > “Sandwich attacks are inherent to AMMs, so this isn’t a unique issue presented by the MarginSwap implementation. With this in mind, I’m downgrading the risk from a proposed medium severity to QA.” -One important caveat to all of the above: _**unless otherwise specified by the audit sponsor or intended to be handled by the code**_**.** For example, flash loans are generally unavoidable, but since MarginSwap had a safeguard against them, we considered these findings relevant in their contest. +One important caveat to all of the above: _**unless otherwise specified by the audit sponsor or intended to be handled by the code**_**.** For example, flash loans are generally unavoidable, but since MarginSwap had a safeguard against them, we considered these findings relevant in their audit. ## Dealing with spam / repeated low-quality submissions Note: this policy was instated after [this proposal](https://docs.code4rena.com/ ## Discussing issues with the sponsor -Ultimately the judge has the final word, but we want your decisions to be well-informed. In a typical C4 audit, there will be a few issues that benefit from discussion with the sponsor; the judge may find that their understanding of the system is incomplete and you need to ask for clarification, or where there is room for misunderstanding. Don’t hesitate to connect directly with the sponsor, either in the Github comments (where you can tag them in if needed), or via Discord. +Ultimately the judge has the final word, but we want your decisions to be well-informed. In a typical C4 audit, there will be a few issues that benefit from discussion with the sponsor; you may find that your understanding of the system is incomplete and that you need to ask for clarification, or that there is room for misunderstanding. Don’t hesitate to connect directly with the sponsor, either in the Github comments (where you can tag them in if needed), or via Discord. ## If you have questions -Do not hesitate to post in the #judges Discord channel, or DM a Contest Administrator with questions as you're working on judging. Any questions or feedback you can add to this documentation, or comments/questions on items above are highly welcome and essential for us improving our process. Thank you! 🙏 +Do not hesitate to post in the #judges Discord channel, or DM a Civics Administrator with questions as you're working on judging. Any questions or feedback you can add to this documentation, or comments/questions on items above are highly welcome and essential for us improving our process. Thank you! 🙏 ## Final step before handing off Please add a comment to your top scoring QA report noting where there are any it ## When you’re done reviewing -Ping a C4 Contest Administrator and let us know you’re ready to hand off the results for post-judge QA and then award distribution. +Ping a C4 Civics Administrator and let us know you’re ready to hand off the results for post-judging QA and then award distribution. 
diff --git a/roles/sponsors/README.md b/roles/sponsors/README.md index f420311..668ba5f 100644 --- a/roles/sponsors/README.md +++ b/roles/sponsors/README.md @@ -51,23 +51,13 @@ We use a 120-character line length standard for scoping, and our default `.prett To attract warden participation in the highly competitive engineering market, we work with standard award pool sizes based on the scope of the audit. We regularly evaluate and adjust audit pricing to ensure incentive alignment with wardens. Sponsors always have the option of boosting their award pool, which tends to attract more warden talent and attention. -#### Analysis pool - -5% of each audit's award pool is typically allocated to Analyses. These reports contain high-level advice and review of the code: the "forest" to individual findings' "trees." They augment and contextualize the bug reports that are incentivized by the remaining 95% of the pool. - -For a long time, wardens have wanted a better place to contribute value via the high-level / overview / advice that isn't necessarily covered by specific bugs. The Analysis pool provides them with a method to get credit for this advisory-level work. - -Projects have discretion to adjust the default allocation for the Analysis pool up or down; this should be clarified during the pre-audit booking and setup phase. - #### Gas optimization pool -By default, 2.5% of the award pool is allocated to valid gas optimizations. We encourage all sponsors to keep this in place, as we can help each other be conscious of ways to minimize gas fees for users -- and indeed some sponsors may wish to allocate a higher percentage of the award pool to this purpose. - -Some projects may not wish to create a separate incentive for gas optimizations, and removing it should be discussed with Code4rena staff during the pre-audit setup phase. +You may opt to allocate a portion of your award pool to gas optimizations; typically we recommend 2.5% of the award pool, but the amount is discretionary. ### Org fee -There is a fee on top of the determined audit pool, which goes to the Code4rena DAO to cover the costs associated with organizing, promoting, and reporting on audits. +There is a fee on top of the determined audit pool, which goes to Code4rena to cover the costs associated with organizing, promoting, and reporting on audits. ### Audit scheduling diff --git a/roles/sponsors/contest-process.md b/roles/sponsors/contest-process.md index 14b2f9a..5d61053 100644 --- a/roles/sponsors/contest-process.md +++ b/roles/sponsors/contest-process.md @@ -23,7 +23,7 @@ Your work will play a role in developing a public report of the audit. ### How Code4rena mitigation reviews work -- While judging for your audit contest is underway, your team works through whatever mitigations you choose to pursue. For each mitigation, you link them back to the findings in the repo. +- While judging for your audit is underway, your team works through whatever mitigations you choose to pursue. For each mitigation, you link it back to the findings in the repo. - After judging is finalized, the valid findings in the repo will be assigned a set of IDs containing a risk prefix + number (e.g. H-01 for a high-risk issue, M-03 for a medium). Mitigations of all High and Medium issues (we call them "HMs" for short) will be considered in-scope. We don't expect you to mitigate every QA / Gas issue, so we exclude those from mitigation reviews. 
- Most mitigation reviews are invitational competitions between 3-5 of the top-performing wardens from your audit. Code4rena staff will post the opportunity for RSVP as soon as judging is finalized for your initial audit. - Usually we can kick off the mitigation review within a few days of judging (assuming your mitigations have been completed), and they typically run for 5 days. diff --git a/roles/wardens/README.md b/roles/wardens/README.md index 4637587..d835366 100644 --- a/roles/wardens/README.md +++ b/roles/wardens/README.md @@ -30,7 +30,7 @@ All team registrations and updates will create pull requests that are flagged fo ### Audit timeline * **Most audits run for 3-7 days,** and typically start and end at 20:00 UTC. -* The rest of our audit timeline is documented on the [Audit timeline](../../structure/our-process/) page. +* The rest of our audit timeline is documented on the [Audit timeline](https://docs.code4rena.com/structure/our-process/) page. ### Questions? @@ -44,7 +44,7 @@ When a sponsor designates a team member who is available for questions, that per * Turn in your reports before the audit end time. * For each audit, submit your Medium and High risk findings individually. -* Bundle all of your low-risk and non-critical findings into a *single* QA report. +* Bundle all of your low-risk and governance / centralization risk findings into a single QA report. * Similarly, list *all* of your gas optimizations together in a single Gas report. -* Be sure to [register your handle and Polygon address](https://code4rena.com/login/) to receive your share. +* Be sure to [register your handle and Polygon address](https://code4rena.com/register/account) to receive your share. * Publicly disclosing (e.g. publishing or discussing) any discovered bugs or vulnerabilities before the audit report has been published is grounds for disqualification from all C4 events. diff --git a/roles/wardens/submission-guidelines.md b/roles/wardens/submission-guidelines.md index ef5036b..808232e 100644 --- a/roles/wardens/submission-guidelines.md +++ b/roles/wardens/submission-guidelines.md @@ -23,57 +23,38 @@ It is also recommended to ensure you receive email confirmation of each submissi - **High, Medium, and QA reports:** - Wardens should [review Code4rena's severity categorization](https://docs.code4rena.com/awarding/judging-criteria/severity-categorization) prior to submitting vulnerabilities, and select the appropriate risk when submitting. - Medium or High severity findings should be submitted individually. - - All QA findings (Low risk or Non-critical) must be submitted as a single QA report per warden (or team). - - Centralization risks, systemic risks, and architecture recommendations should be submitted as part of an Analysis (see below). -- **Analyses:** An analysis is a written submission outlining: - - Wardens' analysis of the codebase as a whole and any observations or advice they have about architecture, mechanism, or approach - - Broader concerns like systemic risks or centralization risks - - The approach taken in reviewing the code - - New insights and learnings from the audit -- **Gas optimizations:** All identified gas optimizations should be submitted as a separate report. Note: the gas award pool is set according to the sponsor's preference, and some audits do not include gas optimizations awards. - -### Report formats - -- Medium or High severity findings should be submitted individually. -- Analyses should be submitted via the "Submit Analysis report" form. 
-- All QA findings (Low risk or Non-critical) must be submitted within a single QA report per warden (or team). -- All Gas optimizations must be submitted within a single Gas report per warden (or team). + - All QA findings (Low risk or Governance / Centralization risk) must be submitted as a single QA report per warden (or team). + - Centralization and systemic risks should be submitted as part of the QA report. +- **Gas optimizations:** For audits that include a Gas optimization pool, all identified gas optimizations should be submitted within a single Gas report per warden (or team). Note: the gas award pool is set according to the sponsor's preference. Wardens who submit multiple QA and/or Gas findings to a single audit without following the required format will have all QA/Gas submissions invalidated for that audit. -### Analyses +### QA reports (low/governance) -An analysis allows wardens to provide a high-level architectural review and codebase analysis as well as recommendations — and to win a slice of the pool based on their insights and advice. +Low severity and governance/centralization risk findings must be submitted as a single QA report per warden. We allocate a **fixed 4% of prize pools toward QA reports.** -The Analysis submission form includes a set of questions for you to answer to the best of your ability: - -1. Analysis of the codebase (What's unique? What's using existing patterns?) -2. Architecture feedback -3. Centralization risks -4. Systemic risks -5. Other recommendations -6. How much time did you spend? - -### QA reports (low/non-critical) +Your QA report should include: -Low and non-critical findings must be submitted as a single QA report per warden. We allocate a **fixed 2.5% of prize pools toward QA reports.** +- all low severity findings +- all Governance / Centralization risk findings (including centralization risks and admin privileged functions) + +Non-critical findings are discouraged. -Your QA report should include: +Formatting: -- all low severity findings; and -- all non-critical findings. +- Wardens are encouraged to use a standard format to label findings, e.g. `L-01`, `L-02`, etc. for low-risk findings, and `C-01`, `C-02`, etc. for centralization/governance findings. +- Please do not use `G-` prefixes as those are typically used to identify Gas optimization findings. +- Non-standard labels such as `R-` (refactor), `I-` (informational), or `S-` (suggestion) will be considered non-critical and are therefore discouraged. -Each QA report will be assessed based on report quality and thoroughness as compared with other reports, with awards distributed on a curve. The top QA report author will receive the top prize from the category. +Each QA report is assessed based on report quality and thoroughness as compared with other reports. Wardens overstating the severity of QA issues (submitting low/non-critical issues as med/high in order to angle for higher payouts) will have their scores reduced by judges. -In the unlikely event that zero high- or medium-risk vulnerabilities are found, the full pool will be divided based on the QA Report curve. - ### Gas reports -Gas reports should be submitted using the same approach as the QA reports: a single submission per warden which includes all identified optimizations. The gas pool will be allocated on a curve, and the top reporter will receive the top prize in the category. - -The gas pool varies from audit to audit, but typically it consists of 2.5% of the total prize pool. 
The precise gas pool for each audit can be found in that audit's repo. +- Not all audits include a Gas optimization pool; please check the audit repo before submitting a Gas report. +- Gas reports should be submitted using the same approach as the QA reports: a single submission per warden which includes all identified optimizations. +- It is highly recommended to clearly spell out the impact of proposed gas optimizations. +- Submissions that claim gas optimization when the optimizer is inactive will be considered invalid. For more details on QA and Gas reports, and estimating risk, please see [Judging Criteria](https://docs.code4rena.com/roles/wardens/judging-criteria#qa-reports-low-non-critical). diff --git a/structure/frequently-asked-questions.md b/structure/frequently-asked-questions.md index e05488d..6d4dc9d 100644 --- a/structure/frequently-asked-questions.md +++ b/structure/frequently-asked-questions.md @@ -14,13 +14,9 @@ Our platform is designed to incentivize everyone to participate in finding vulne In short, yes! Anyone can become a Code4rena Warden, and plenty of resources are available to learn more and earn rewards. You can find out more about this in our [Discord](https://discord.gg/code4rena). -### What’s the difference between Wardens and Masons? - -The simplest way to define the difference between Wardens and Masons is this: Wardens contribute to the ecosystem by auditing code and identifying vulnerabilities, while Masons leverage unique skills outside of auditing to contribute. Examples of Mason contributions could include things like explainer videos, blogs, mentorship programs etc. - ### How do I sign up to be a Warden? -Jump into our [Discord](https://discord.gg/code4rena) and get started! From there, you’ll need to [register](https://code4rena.com/register/account) as a Warden. +[Head over here](https://code4rena.com/register/account) to register as a Warden. ### Can I change my username? @@ -40,7 +36,7 @@ It’s really simple! Just visit [this link](https://code4rena.typeform.com/i-wa ### Do you have a blog? -We do indeed, [here](https://medium.com/code-423n4). We post product updates, sponsor interviews and more. +We do indeed, [here](https://code4rena.com/blog). We post product updates, sponsor interviews and more. ### What’s the best way to stay up to date with Code4rena? @@ -68,7 +64,7 @@ Code4rena works with an amazing team of artists, led by [Jaime Robles](https://b ### What does "HM" stand for? -"HM" is Code4rena shorthand for "High and Medium risk findings." C4 audits typically have an HM award pool that is distributed according to our [incentive model](../awarding/incentive-model-and-awards/README.md). +"HM" is Code4rena shorthand for "High and Medium risk findings." C4 audits typically have an HM award pool that is distributed according to our [incentive model](https://docs.code4rena.com/awarding/incentive-model-and-awards/README.md). ## Warden FAQ @@ -84,7 +80,7 @@ You should also receive an email confirmation from submissions@code4rena.com. (I ### I submitted a finding but then realized it was invalid. Do I need to contact Code4rena? -You can go to the `Your Findings` tab (located to the right of `Details`) on the specific audit page and open the finding. There you will see an option to `Withdraw` the finding. +While the audit is still active, you can go to the `Your Findings` tab (located to the right of `Details`) on the specific audit page and open the finding. There you will see an option to `Withdraw` the finding. 
### Can I edit my findings post-submission? @@ -115,65 +111,3 @@ We’re an organization that aims to refine our processes wherever and whenever ### If I’ve got questions about the severity I should assign to a finding, where should I go? In the C4 Discord, these types of questions are commonly asked in #questions and/or #wardens. - -## FAQ about Analyses - -## What is the difference between QA and analysis? - -QA reports include specific issues that are non-critical or low severity; Analysis is intended to give wardens the opportunity to share high level advice and review of the code. - -QA/HMs are "trees" to analyses' "forest." For a long time wardens have wanted a better place to contribute value on (and get credit and compensation for) high level overviews and advice that aren't necessarily covered by specific bugs. - -Over time, we expect the best analyses will result in a diverse set of "consultative" advice to augment "here's a set of bugs." - -## Can I see an example of an analysis? - -Sure - your best bet is to look at the Reports section of the Code4rena website, and read through reports for audits that ran on or after June 6, 2023. Here are three examples: - -- [Analysis of Angle Protocol by warden \_\_141345\_\_](https://code4rena.com/reports/2023-06-angle#audit-analysis) -- [Analysis of Llama by warden 0xnev](https://code4rena.com/reports/2023-06-llama#audit-analysis) -- [Analysis of Nouns DAO by warden 0xnev](https://code4rena.com/reports/2023-07-nounsdao#audit-analysis) - -## Where is the Analysis submission form? - -Every C4 audit that includes an Analysis pool will have a submission form for Analyses that is separate from the finding submission form. You can find the correct submission form by navigating to the audit on the Code4rena website while logged in. You should see a "Submit Analysis report" option when you hover over the “Make a submission” button. - -## I’m a non-native English speaker. Will I be penalized for language differences? - -As with all Code4rena submissions, judges are asked to assess analyses based on their content; we aren’t looking for perfect grammar. - -That being said, if you prefer, you may submit your analysis in another language, and C4’s judge and lookout will use translation tools to read it. When possible, we will pull in Judges and Lookouts who speak the language. - -## Are analyses a part of **all** Code4rena audits? - -Not quite yet, but we expect they will be a core feature of all Code4rena audits in the near future. - -For now, you can tell which audits have Analyses by looking at the award pool details in: - -- the #rsvp channel in the Code4rena Discord server, -- the audit repo, or -- the audit page on the [Code4rena.com](http://Code4rena.com) website. - -## How do judges assess Analyses? - -The judging rubric for Analyses is still emerging, but we asked our judges for advice and here’s what they told us: - -> It is mostly qualitative analysis based on how deep the report went and the value provided. -Here are a couple of suggestions: -> -> - **Architecture** - How does this codebase compare to others you're familiar with? What ideas can be incorporated? What are some architecture-level weak spots and how can they be mitigated? -> - **Centralization** - What are all the trust assumptions laid out in the contract? How can they be reduced with little friction? -> - **Systemic risks** - List out all the external conditions that could make contracts behave in an unsafe way. 
Consider analyzing their likelihood and suggest ways to reduce their impact, or check for them at runtime. -> - **Documentation / Mental model** - Lay out diagrams with all the different components in play and how they interact with each other. - -## What if I have a limited amount of time to spend on an audit, but still want to submit a finding? Is there still value in submitting an analysis? - -Yes. You can simply state in your analysis that you spent a limited amount of time with the code base. Even a brief analysis helps Lookouts and Judges understand your approach and perspective. - -## What happened to Basic Analyses? - -When Code4rena announced Analyses in June 2023, we hoped this new report category would incentivize and reward the many wardens whose expertise includes architecture and systemic risk advice, as well as providing judges and projects with insight into wardens' workflows and learning processes. - -The results so far tell us that Advanced Analyses provide precisely the value to projects that we hoped they would. 👏 Wardens’ observations and acuity within this report category have been met with deep appreciation. - -Basic Analyses have not quite delivered the value we had hoped for, so we have removed them from future audits, effective August 18, 2023. diff --git a/structure/our-process/README.md b/structure/our-process/README.md index ae513c1..d63a9f6 100644 --- a/structure/our-process/README.md +++ b/structure/our-process/README.md @@ -11,8 +11,8 @@ We are working on tightening up all of our processes in order to be able to dist | | Ideal | Actual (on average) | | --- | --- | --- | | Audit submissions close | Day 1 | Day 1 | -| Lookout pre-sorts findings (de-duping and triage) | Day 7 | Day 3-4 | -| Sponsors review and give feedback on findings | Day 9 | Day 9-10 | +| Validators triage findings | Day 3-4 | Day 3-4 | +| Sponsors review and give feedback on findings | Day 7 | Day 9-10 | | Judges determine final severity | Day 12 | Day 19-21 | | Judging QA complete; awards announced | Day 15 | Day 21-22 | | Awards are distributed; Sponsors complete mitigation of any issues | Day 15 | Day 25-39 |