The following guidance is intended for reviewers to use both in assessing a submission and in drafting their review. Please be aware that review comments will be included in the publication of the submission. If upon assignment you recognize that you are not suited to perform the review (e.g., schedule conflicts, the topic is outside your area of expertise), please email the TPC as soon as possible so that another reviewer may be assigned.

 

Categories of Review

Methodology Review - This review is a read-through and verification that the concepts and methods expressed in the submission are sound. This is the lowest level of review and should be used only when it is the only method available to the reviewer or when the content is not suitable for a higher-level review.

Verified Review using Author-Provided Datasets - This review requires verifying the technical details of the submission against a dataset provided by the author. This category is the minimum expected when the author provides data.

Validated Review using Reviewer-Generated Datasets - This is the “gold standard” for DFIR Review and, when possible, the preferred method. When conducting this type of review, please include in your review the OS/app/hardware versions you used for validation. It is completely acceptable to extend beyond the scope of the initial review with your own findings.

 

Reviewer Confidence

This is where the reviewer states their confidence in the material under review and in their ability to provide a review. If you rate yourself a 1 or a 2, it is suggested that you email the TPC and let them know you are not comfortable performing the review, so that someone with more experience in that area of forensics can be assigned.

 

Review Guidelines

Reviews will be published along with the accepted article, together with the name of the reviewer and the category of review performed.

  • Reviews should be comprehensive and should highlight the reviewer's knowledge of the subject matter in the article.
  • Reviews should provide practitioners, lawyers, and judges with assurance that the article is reliable and complete.
  • Reviews can state how the article can be useful in a digital investigation, and any associated limitations or risks.
  • Reviews should state if an article is trivial or not useful for addressing questions in a digital investigation.
  • Reviews should state if the article misses important details.
  • Reviews should explain the basis of the review (testing performed, prior knowledge from a case, etc.), including details of any verification/validation using the author-provided datasets or datasets created by the reviewer. When validating with reviewer-generated data, please provide the OS, app, and hardware used for the testing, as applicable.
  • Reviews should highlight related questions that could be explored in future research.
  • Reviews should describe technical details of any testing performed, such as:

"Using Autopsy version 4.10, I confirmed that the digital trace described in this article was present on Windows 10. In addition, I confirmed that the trace was compatible with the activity/interpretation presented in this article."

 

If the steps presented in the article cannot be repeated, the review should state this, such as:

"After performing the specified action from the article on a Windows 10 system, I examined the XYZ.dat file using a hex viewer but did not find the digital traces described in this article. The traces I did observe showed that the specified action occurred, but did not include the additional details presented in the article (consider providing a screenshot of the observed trace)."

OR

"Although it was not possible to perform testing on the same type of system (vehicle), I tested the XYZ application independently on an Android ABC device and obtained results that were compatible with those presented in this article."


 

Review Rubric

Content and Organization (Weight 60%)

  • Reject: Substantial omission of details. Serious factual errors. Poorly organized.
  • Revise: Omission of minor pertinent details. Small factual errors. Overall organization is good, but individual sections lack coherence/depth, or good content is presented in a disorganized fashion. Excessive information not pertinent to the subject.
  • Accept: Follows the submission guidelines, providing all pertinent details. No factual errors. Well organized in pursuit of clearly defined objectives.

Use of Sources* (Weight 30%)

  • Reject: Little use of supporting sources; mostly generalities. Overuse of links to or quotations from other texts/websites.
  • Revise: Scanty use of supporting sources. Minor misuse of sources, or selection of inappropriate sources. Incomplete details.
  • Accept: Draws on specific details from sources (rather than general concepts) and/or multiple sources. Appropriate use and choice of sources.

Grammar and Language (Weight 10%)

  • Reject: Substantial grammatical and/or typographical errors that obscure clarity and sense. Misuse of vocabulary; excessive repetition in expression.
  • Revise: Some grammatical/typographical errors, but without effect on sense. Repetitious use of vocabulary or expression.
  • Accept: Few to no grammatical errors or typos.

* Submissions should contain sufficient detail to repeat the process and determine whether the results are compatible with those described in the submission. All testing or other sources should be described in the review. Sources include not just citations but also thoroughly described testing; author-created testing is a source.