Upon initial submission, the `ropensci-review-bot` performs a suite of tests
and checks, and pastes a report into the GitHub issue thread. This report
is the primary source of information used to inform initial editorial
decisions. The best way to understand how decisions are to be made in response
to these reports is through a concrete example. The remainder of this
initial sub-section contains the contents of such a report, generated for
a skeleton R package created with the `srr` (Software Review Roclets)
package's `srr_stats_pkg_skeleton()` function, with additional statistical
standards inserted into the code.
As for rOpenSci’s current peer-review system, packages are submitted directly
as issues on the `ropensci/software-review` repository on GitHub, with
submissions handled initially by the rotating Editor-in-Chief (EiC).
Statistical software is nevertheless handled differently from standard
(non-statistical) packages from the first moment of submission. Statistical
submissions use a different template, submission of which automatically
generates a detailed report from our `ropensci-review-bot`, as described in
the initial sub-section of this chapter.
The EiC need only peruse the summary checks within the initial section of the report. Most submissions should receive ticks for all items listed, and no crosses, in which case the initial checklist will conclude with the statement,

> This package may be submitted.
In response to that statement, the sole tasks for an EiC prior to delegating a handling editor are to check the following single item:

- The categories nominated by the submitting authors are appropriate for this package.

And to choose one of the following two items:

- The package does not fit within any additional categories of statistical software.
- The package could potentially be described by the following additional
  categories of statistical software:
    - <list categories here>
Additional effort by the EiC will only be required in “edge cases” where a package may be unable to pass one of the checks, as in the sample automated check, which concludes with the statement that,

> All failing checks above must be addressed prior to proceeding
In these cases, submitting authors must explain why these checks may fail, and
the EiC must then determine whether these failures are acceptable.
Such cases ought nevertheless to be rare, and in the majority of cases the
sole tasks of the EiC will be to confirm a positive bot response, to complete
the two checklist items given above, and to allocate a handling editor. This
latter step is done by calling:

`@ropensci-review-bot assign <name> as editor`
The Handling Editor should use the summary report generated by the opening of the issue (and exemplified here) to perform an initial assessment and to guide assignment of reviewers. The EiC need only consider the initial summary checklist, but handling editors should consider all details contained within the automated report.
The first section describes checks conducted by the `srr` (Software
Review Roclets) package.
This check confirms that all statistical standards have been documented within
the code, and all packages must pass this check. The report linked to in that
section is primarily intended to aid reviewers, and may be ignored by handling
editors.
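Handling editors or authors who wish to preview these standards-compliance results can also run the `srr` checks on a local clone of a submission. The following is a sketch only, assuming the `srr` package is installed; the path is a placeholder:

```r
# Sketch: reproduce the bot's standards-compliance check on a local clone.
library(srr)
pkg_path <- "/path/to/submission" # placeholder path to a local clone

# Generate the HTML report summarising where each statistical standard is
# documented within the code (the report linked in the bot's first section):
srr_report(path = pkg_path)

# Run pre-submission checks; any standards missing from the code are flagged:
srr_stats_pre_submit(path = pkg_path)
```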
The second section describes the “Statistical Properties” of the package being submitted, and should be considered by handling editors. In particular, this section identifies any statistically noteworthy properties of the package. The example report illustrates how this section immediately identifies that the package has very little code, very few functions, and very few tests. Handling editors should consider these statistical details, and particularly any noteworthy aspects (defined by default as lying within the upper or lower fifth percentiles in comparison with all current CRAN packages). Any aspects which seem concerning should be explicitly raised with submitting authors prior to proceeding. The measures currently considered include various metrics for:
- Size of code base, both overall and in sub-directories
- Numbers of files in various sub-directories
- Numbers of functions
- Numbers of documentation lines per function
- Numbers of parameters per function
- Numbers of blank lines
A final metric, `fn_call_network_size`, quantifies the number of
inter-relationships between different functions. In
`R/` directories, these are
function calls, while relationships may be more complex within
`src/` directories. Small network sizes indicate packages which either construct few
objects (functions), or in which internal objects have no direct relationships.
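These properties are derived with the `pkgstats` package, and can be reproduced locally. A sketch, assuming `pkgstats` is installed; the path is a placeholder, and the `network` element name reflects that package's documented output:

```r
# Sketch: generate the "Statistical Properties" metrics locally via pkgstats.
library(pkgstats)
pkg_path <- "/path/to/submission" # placeholder path to a local clone

s <- pkgstats(path = pkg_path)

# Single-row summary of metrics such as lines of code, and numbers of files,
# functions, documentation lines, and parameters:
pkgstats_summary(s)

# The function-call network underlying `fn_call_network_size`:
nrow(s$network)
```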
The third and final section of the automated report contains details of
`goodpractice` checks, including:

- Code coverage estimates for each file (from the `covr` package)
- Code style reports from the `lintr` package
- Cyclomatic complexity reports from the `cyclocomp` package
- Any errors, warnings, or notes raised by running `R CMD check` (from the
  `rcmdcheck` package)
Any aspects of these `goodpractice` reports which do
not pass the initial checklist (such as warnings or errors from
`R CMD check`, or test coverage < 75%) should be clarified with authors prior
to proceeding.
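The same checks can be run locally prior to, or between, reviews. A sketch, assuming the `goodpractice` and `covr` packages are installed; the path is a placeholder:

```r
# Sketch: run goodpractice checks locally on a cloned submission.
library(goodpractice)
pkg_path <- "/path/to/submission" # placeholder path to a local clone

# Run the default checks, which include R CMD check (via rcmdcheck),
# lintr, cyclocomp, and covr-based coverage:
g <- gp(path = pkg_path)
print(g)

# Coverage alone, to compare against the 75% threshold:
cov <- covr::package_coverage(pkg_path)
covr::percent_coverage(cov)
```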
Finally, the initial Statistical Description includes details of computer languages used in a package, and should be used to ensure reviewers have appropriate experience and abilities with the language(s) in which a package is written.
The handling editor may update the initial package check results at any time with the following command:
`@ropensci-review-bot check package`
This is likely to be necessary following each review, to ensure any issues identified from the initial checks have been satisfactorily addressed.
Having considered the automated package report, and addressed all issues raised within that report, Handling Editors should check the following items before assigning reviewers.
- Any issues raised during initial processing by EiC have been resolved (or there were none).
- Issues raised in the `goodpractice` checks, statistical anomalies, and other details of the report do not appear to be major problems and can be directed to reviewers for further scrutiny.
- Either (i) this package is aiming for a bronze badge, or, (ii) for packages aiming for silver or gold badges, the authors have clarified which of the four aspects listed in the “Guide for Authors” section on silver badges they intend to fulfil.
Issues flagged by the handling editor may require iteration with submitting authors. As stated in the Dev Guide:
- If authors believe changes might take time, apply the holding label to the submission.
- If the package raises a new issue for rOpenSci policy, start a conversation in Slack or open a discussion on the rOpenSci forum to discuss it with other editors (example of policy discussion).
Once all items have been checked, Handling Editors may assign reviewers using
`@ropensci-review-bot add <@GITHUB_USERNAME> to reviewers`, and generally
following the procedure given in the Dev Guide.
Handling editors may provide guidance to reviewers on issues that they think may be worth
looking into based on their initial review of the package and automated checks.
[Note that the Dev Guide is rapidly iterating as new capabilities are added to the review bot, and you may wish to refer to the draft version at <https://devdevguide.netlify.app/>.]
Handling editors are responsible for resolving any disagreements between authors’ stated or desired grade of badge and reviewers’ recommendations; the corresponding recommendations for reviewers are described below. The views of reviewers should generally be prioritized in such cases. Grades as declared by authors are contained in the opening comment of the issue. These may be extracted by calling:
`@ropensci-review-bot stats grade`
Editors may modify these grades by editing the opening comment of the issue,
and changing the value of the grade declared there.
After having completed the checklist and ensured agreement on badge grade, the handling editor may approve a submission with the following command:

`@ropensci-review-bot approve <package-name>`

The bot will identify that this is a statistical software issue, extract the appropriate grade, and attach a corresponding badge, which will also label the latest version of our statistical standards.