How To Peer-Review A Paper

During graduate school, you may be asked to provide peer reviews for manuscripts submitted for publication. For example, your advisor may serve as academic editor on one or more journals, or serve on a conference panel for which potential papers are being evaluated.

If you’re asked to do a peer review, I’d encourage you to view it as a valuable learning experience and an important component of your graduate training. The perspective gained by being a reviewer will help you in your own publications, allowing you to realistically assess, “How would I respond to my own paper if I were the reviewer?”

Being a reviewer is also a valuable service contribution to your field. It is part of the “volunteer service” that holds professional organizations and societies (e.g., IEEE) together behind the scenes. In the current (post-AI) era, your human judgment on the criteria below is more important than ever.

Below I list five key aspects of a strong paper with additional commentary. Use these to evaluate any manuscript I ask you to peer review.

1) Proper Literature Review

Do the authors provide a literature review that clearly identifies: the current state-of-the-art in the problem domain, gaps in the state-of-the-art, how the proposed research is distinguished from existing approaches, and how specifically it addresses the identified gaps? Does the paper provide a clear statement such as, “Key contributions of this paper are:” that identifies and summarizes the latter?

2) Sufficient Novelty

Does the paper present research that is a novel contribution to the literature in some way? Novelty can be measured by an idea or approach that is completely unique, or a “non-trivial” extension of an existing approach. The latter is the type of novelty you’ll most often see as a reviewer; assessing novelty in this case is subjective and this is where your (human) expert opinion comes in.

The question is: “Would this research present a novel and valuable approach to someone knowledgeable of the application?” In brief, does the research have archival value to experts in the field?

3) Defensible Claims

The paper should include some kind of claim in terms of the value of the research (if not, it should be rejected). First, are these claims clearly stated? For example, in the conclusions and abstract is there a clear and quantitative summary such as: “We show that [the proposed method] results in [an increase in some measure of goodness] by x% compared to [some other method or class of methods].”

Second, what evidence is given to support the assertions made? Simulation results are a minimum bar, and they should be compared to existing methods. A stronger defense (and typically the minimum bar for a high-quality journal publication) is showing hardware measurements that confirm the simulation results.

Third, do plots or other displays of data confirm the claims in the paper, or show signs of disconfirming them? If there are potential disconfirming signs, it’s important to evaluate how the authors address the situation. Do they ignore it? Do they obscure it (e.g., in the way the data are displayed)? Or do they acknowledge and properly address these issues, e.g., give a coherent explanation of potential anomalies? Ignoring and (especially) obscuring anomalies are grounds for a recommendation to reject the paper. If they give an explanation, it is up to you to determine if you accept it.

Finally, since no paper is perfect, do the authors cite limitations of the research and suggest opportunities for improvement or extension (i.e., “future research”)? Note that any explanations of anomalies mentioned in the previous paragraph should be stated in the conclusions. For example, “We found that the XYZ method resulted in… However, some of the case studies indicated… Further investigation is required to…”

4) Logical Organization

Do the organization and text of the paper have a logical flow? That is, does the paper start with the high-level problem, motivate the current research, describe the approach, validate the approach, and then discuss the results?

5) Quality Writing

Does the paper read well? Are there grammatical or spelling mistakes? Are the figures legible? Are the references properly formatted, e.g., in IEEE format?

Summarizing Your Conclusions

First, when you summarize your peer review, keep in mind that the authors have typically put a great deal of work into their research and into writing the paper. They have a vested personal interest in having the paper published. Ensure that any comments you make are professional; never degrade or insult the authors, even if you think it’s a “bad” paper. Using the criteria above will help keep the evaluation objective.

Second, organize your review in terms of recommendations you consider “Major” and “Minor.” Major recommendations are those you consider required before the paper is deemed publishable. These are mostly concerned with failing to meet criteria 1-3 above. Minor recommendations are less important fixes that should be made to improve minor mistakes (e.g., typos) or certain aesthetic aspects of the paper. These are typically related to criteria 4-5 above.

A final summary may look similar to that shown below.


Review comments (visible to authors)

Dear Authors:

This paper describes an algorithmic approach for energy optimization in microgrids, including uncertainty awareness. This research is timely, given recent widespread interest in microgrid technologies.

The discussion of prior research is adequate, and key innovations of the paper are clearly stated. Performance of the algorithm is demonstrated through numerical case studies. Overall, the paper is well written; I list some minor recommended corrections below, which the authors may find useful for improving the paper. My main recommendation is to include a comparison of your algorithm’s performance with the well-known XYZ method. I consider this a required addition to the paper before publication.

Major Editorial Comments:

  1. The authors should provide a comparative analysis between their approach and the XYZ method.

Minor Editorial Comments:

  1. In the definition of the time set (line X), it is recommended to use a different symbol for the set itself, or create an index set, to avoid confusion.

  2. In equation (x), since the units of price are $/kWh, revenue should be computed as energy (kWh) over the time increment. Would the authors please check these equations?

  3. The constant c_i on line X should be capitalized.

  4. There is a typo on line X; the correct spelling is Markov.

Other Comments:

In the conclusions, the authors should state any limitations of the paper. For example, were there any simplifications or assumptions that could make the analyses described in the paper more realistic?


Review comments (visible to editors only)

Dear Editors:

This manuscript describes an algorithmic approach for energy optimization in microgrids, including uncertainty awareness. This research is timely, given recent widespread interest in microgrid technologies, and I believe it could have archival value and readership interest within the X Journal.

Overall, the paper is well-written (other than minor corrections noted in my comments to the authors). The discussion of prior research is adequate and key innovations of the paper are clearly stated. Performance of the algorithm is demonstrated through numerical case studies. My main criticism of the paper is that there is no comparison of the results to the well-known XYZ method. I consider this a major shortcoming that the authors should include before the paper is advanced to publication. I’m therefore recommending a decision of “accept with major corrections.”


Be careful here; it’s not your role to force the authors to change something that’s truly subjective. However, in some cases you can give a lightly stated (“minor”) recommendation if you feel the paper could be greatly enhanced by the modification. See example Minor Editorial Comment 1.