REVIEWER GUIDELINES

Last update: November 9, 2021

Updates:

  • 11/8: Added FAQ on handling papers using withdrawn datasets.
  • 11/9: Clarified policy on IRB approval for Personal data/Human subjects.

The CVPR 2022 Reviewer Guidelines

Thank you for volunteering your time to review for CVPR 2022! To maintain a high-quality technical program, we rely very much on the time and expertise of our reviewers. This document explains what is expected of all members of the Reviewing Committee for CVPR 2022.

Benefits for Reviewers: 100 of our outstanding reviewers will receive a CVPR Outstanding Reviewer certificate and a gift certificate of $100 USD. In addition, all reviewers who do a good job (submitting reviews on time, and not submitting reviews with very few words) will be guaranteed a registration ticket for a period of time after registration opens.

In addition to the guidelines below, you should read this CVPR2022 Reviewer Tutorial for a summary of tips to be a good CVPR reviewer along with tips on what to include and avoid when writing your reviews. Please note that we have introduced a number of new policies this year, which need your attention as a reviewer. 

Major Policy Changes in CVPR 2022

  • Check for Data Contribution
  • Check for Attribution of Data Assets
  • Check for Use of Personal Data and Human Subjects
  • Check for Discussion of Negative Societal Impact
  • Check for Discussion of Limitations
  • Social Media Ban

The CVPR 2022 Reviewing Timeline

Paper Submission Deadline                   November 16, 2021

Social Media Silence Period                 October 19, 2021 to March 2, 2022

Papers Assigned to Reviewers                December 16, 2021

Reviews Due                                 January 14, 2022

Start of Post-Rebuttal Discussion Period    February 1, 2022

Final Recommendations Due                   February 11, 2022

Decisions Released to Authors               March 2, 2022


Blind Reviews

Our Author Guidelines have instructed authors to make reasonable efforts to hide their identities, including omitting their names, affiliations, and acknowledgments. This information will of course be included in the final published version of the manuscript. Likewise, reviewers should make all efforts to keep their identity invisible to the authors.

With the increase in popularity of arXiv preprints, sometimes the authors of a paper may be known to the reviewer. Posting to arXiv is NOT considered a violation of anonymity on the part of the authors, and in most cases, reviewers who happen to know (or suspect) the authors’ identity can still review the paper as long as they feel that they can do an impartial job. An important general principle is to make every effort to treat papers fairly whether or not you know (or suspect) who wrote them. If you do not know the identity of the authors at the start of the process, DO NOT attempt to discover them by searching the Web for preprints.

Please read the FAQ at the end of this document for further guidelines on how arXiv prior work should be handled.

Check your papers

As soon as you get your reviewing assignment, please go through all the papers to make sure that (a) there is no obvious conflict with you (e.g., a paper authored by your recent collaborator from a different institution) and (b) you feel comfortable reviewing the papers assigned. If issues with either of these points arise, please let us know right away by emailing the Program Chairs (programchairs-cvpr2022@googlegroups.com).

Please read the [Author Guidelines] carefully to familiarize yourself with all official policies (such as dual submission and plagiarism). If you think a paper may be in violation of one of these policies, please contact the Area Chair and Program Chairs. In the meantime, proceed to review the paper assuming no violation has taken place.

What to Look For

Each paper that is accepted should be technically sound and make a contribution to the field. Look for what is good or stimulating in the paper. In particular, look for the new knowledge the paper contributes to the field. We recommend that you embrace novel, brave concepts, even if they have not been tested on many datasets. For example, the fact that a proposed method does not exceed the state-of-the-art accuracy on an existing benchmark dataset is not grounds for rejection by itself. Rather, it is important to weigh both the novelty and potential impact of the work alongside the reported performance. Minor flaws that can be easily corrected should not be a reason to reject a paper.

Check for Reproducibility

To improve reproducibility in AI research, we highly encourage authors to voluntarily submit their code as part of supplementary material, especially if they plan to release it upon acceptance. Reviewers may optionally check this code to ensure the paper’s results are reproducible and trustworthy, but are not required to. Reviewers are also encouraged to use the Reproducibility Checklist as a guide for assessing whether a paper is reproducible or not. All code/data should be reviewed confidentially and kept private, and deleted after the review process is complete. We expect (but do not require) that the accompanying code will be submitted with accepted papers. 

Check for Data Contribution

Datasets are a significant part of Computer Vision research. If a paper is claiming a dataset release as one of its scientific contributions, it is expected that the dataset will be made publicly available no later than the camera-ready deadline, should it be accepted. Please indicate in the corresponding field in the review form whether the paper made such claims and whether the corresponding field in the submission form has been marked.

Check for Attribution of Data Assets

Authors are advised that they need to cite data assets used (e.g., datasets or code) much like papers. As a reviewer, please carefully check if a paper has adequately cited data assets used in the paper, and comment in the corresponding field in the review form.

Check for Use of Personal Data and Human Subjects

If a paper is using personal data or data from human subjects, the authors must have an ethics clearance from an institutional review board (IRB, or equivalent) or clearly describe that ethical principles have been followed. If there is no description of how ethical principles were ensured, or there are GLARING violations of ethics (regardless of whether discussed or not), please inform the Area Chairs and the Program Chairs, who will follow up on each specific case. Reviewers shall avoid dealing with such issues by themselves directly.

An IRB review (in the US) or the appropriate local ethics approval is typically required for new datasets in most countries. It is the dataset creators' responsibility to obtain it. If the authors use an existing, published dataset, we encourage, but do not require, them to check how the data was collected and whether consent was obtained. Our goal is to raise awareness of possible issues that might be ingrained in our community. Thus we would like to encourage dataset creators to provide this information to the public.

In this regard, if a paper uses an existing public dataset that is released by other researchers/research organizations, we encourage, but do not require, the authors to include a discussion of IRB-related issues in the paper. Reviewers hence should not penalize a paper if such a discussion is NOT included.

Check for Discussion of Negative Societal Impact

The CVPR community has not put as much emphasis on the awareness of possible negative societal impact as other AI communities so far, but this is an important issue. We aim to raise awareness without introducing a formal policy (yet). As a result, authors are encouraged to include a discussion on potential negative societal impact. Reviewers shall weigh the inclusion of a meaningful discussion POSITIVELY. Reviewers shall NOT reject a paper solely because it has not included such a discussion, as we do not have a formal policy requiring one.

Check for Discussion of Limitations

Discussing limitations used to be commonplace in our community, but seems to be increasingly rare. We point out the importance of discussing limitations especially to new authors. Therefore, authors are encouraged to explicitly and honestly discuss limitations. Reviewers shall weigh the inclusion of an honest discussion POSITIVELY, instead of penalizing the papers for including it. We note that a paper is not required to have a separate section to discuss limitations, so the absence of one cannot be a sole factor for rejection.

Be Specific

Please be specific and detailed in your reviews. Your main critique of the paper should be written in terms of a list of strengths and weaknesses. You can use bullet points here, but also explain your arguments. A single short sentence or a few words do NOT suffice. Your discussion, more than your score, will help the authors, fellow reviewers, and Area Chairs understand the basis for your recommendation, so please be thorough. You should include specific feedback on ways the authors can improve their papers.

In the discussion of related work and references, simply saying “this is well known” or “this has been common practice in the industry for years” is not sufficient: You MUST cite specific publications, including books or public disclosures of techniques. If you do not provide references to support your claim, the Area Chairs are forced to discount it.

Social Media Ban

Per the motion passed in the CVPR 2021 PAMI-TC meeting, authors should NOT use social media to promote their paper submissions to CVPR during the review period. We are imposing a policy slightly stronger than what passed in the motion, in that we define a social media silence period. By our definition, the social media silence period starts four weeks before the paper submission deadline and lasts until the final paper decision notifications are sent to authors. Per the currently planned schedule, the social media silence period is from 10/19/2021 to 03/02/2022. Any social media promotion of a paper during this period that is proactively initiated by the authors is considered a policy violation.

Social media promotion of a paper submission not only runs a bigger risk of violating the double-blind review policy, but also introduces an even stronger bias into the process, which may influence reviewers’ technical judgment of a paper.

We are implementing this policy for CVPR 2022 as required. Reviewers shall report any such cases to the Area Chairs and Program Chairs for further investigation to determine whether a violation has occurred, which may result in the paper being desk rejected. However, after reporting the case and before a conclusion is drawn, reviewers should continue reviewing the paper as if there is no violation.

Please read the FAQ at the end of this document for further details on how to treat related work on arXiv, supplementary material, and rebuttals.
 

Ethics for Reviewing Papers

1. Protect Ideas

As a reviewer for CVPR, you have the responsibility to protect the confidentiality of the ideas represented in the papers you review. CVPR submissions are not published documents. The work is considered new or proprietary by the authors; otherwise they would not have submitted it. Of course, their intent is to ultimately publish to the world, but most of the submitted papers will not appear in the CVPR proceedings. Thus, it is likely that the paper you have in your hands will be refined further and submitted to some other journal or conference. Sometimes the work is still considered confidential by the authors' employers. These organizations do not consider sending a paper to CVPR for review to constitute a public disclosure. Protection of the ideas in the papers you receive means:

  • You should not show the paper to anyone else, including colleagues or students, unless you have asked them to write a review, or to help with your review.
  • You should not show any results, videos/images, code or any of the supplementary material to non-reviewers.
  • You should not use ideas/code from papers you review to develop your own ideas/code.
  • After the review process, you should destroy all copies of papers and supplementary material and erase any code that the authors submitted as part of the supplementary, and any implementations you have written to evaluate the ideas in the papers, as well as any results of those implementations.

2. Avoid Conflict of Interest

As a reviewer of a CVPR paper, it is important for you to avoid any conflict of interest. There should be absolutely no question about the impartiality of any review. Thus, if you are assigned a paper where your review would create a possible conflict of interest, you should return the paper and not submit a review. Conflicts of interest include (but are not limited to) situations in which:

  • You work at the same institution as one of the authors.
  • You have been directly involved in the work and will be receiving credit in some way. If you're a member of an author's thesis committee, and the paper is about their thesis work, then you were involved.
  • You suspect that others might perceive a conflict of interest in your involvement.
  • You have collaborated with one of the authors in the past three years (more or less). Collaboration is usually defined as having written a paper or grant proposal together, although you should use your judgment.
  • You were the MS/PhD advisor or advisee of one of the authors. Most funding agencies and publications typically consider advisees to represent a lifetime conflict of interest. CVPR has traditionally been more flexible than this, but you should think carefully before reviewing a paper you know to be written by a former advisor or advisee, especially a recent one.

While the organizers make every effort to avoid such conflicts in the review assignments, they may nonetheless occasionally arise. If you recognize the work or the author and feel it could present a conflict of interest, email the Program Chairs (programchairs-cvpr2022@googlegroups.com) as soon as possible so they can find someone else to review it.

3. Be Professional

Belittling or sarcastic comments have no place in the reviewing process. The most valuable comments in a review are those that help the authors understand the shortcomings of their work and how they might improve it. Write a courteous, informative, incisive, and helpful review that you would be proud to sign with your name (were it not anonymous).
 

Additional Tips for Writing Good Reviews

  • Take the time to write good reviews. Ideally, you should read a paper and then think about it over the course of several days before you write your review.
  • Short reviews are unhelpful to authors, other reviewers, and Area Chairs. If you have agreed to review a paper, you should take enough time to write a thoughtful and detailed review. Bullet lists with one short sentence per bullet are NOT a detailed review.
  • Be specific when you suggest that the writing needs to be improved. If there is a particular section that is unclear, point it out and give suggestions for how it can be clarified.
  • Be specific about novelty. Claims in a review that the submitted work “has been done before” MUST be backed up with specific references and an explanation of how closely they are related. At the same time, for a positive review, be sure to summarize what novel aspects are most interesting in the Strengths section.
  • Do not reject papers solely because they are missing citations or comparisons to prior work that has only been published without review (e.g., arXiv or technical reports). Refer to the FAQ below for more details on handling arXiv prior art.
  • Do not give away your identity by asking the authors to cite several of your own papers.
  • If you think the paper is out of scope for CVPR's subject areas, clearly explain why in the review. Then suggest other publication possibilities (journals, conferences, workshops) that would be a better match for the paper. However, unless the area mismatch is extreme, you should keep an open mind, because we want a diverse set of good papers at the conference.
  • The tone of your review is important. A harshly written review will be resented by the authors, regardless of whether your criticisms are true. If you take care, it is always possible to word your review constructively while staying true to your thoughts about the paper.
  • Avoid referring to the authors in the second person (“you”). It is best to avoid the term “the authors” as well, because you are reviewing their work and not the person. Instead, use the third person (“the paper”). Referring to the authors as “you” can be perceived as being confrontational, even though you may not mean it this way.
  • Be generous about giving the authors new ideas for how they can improve their work. You might suggest a new technical tool that could help, a dataset that could be tried, an application area that might benefit from their work, or a way to generalize their idea to increase its impact.

Finally, keep in mind that a thoughtful review not only benefits the authors, but also yourself. Your reviews are read by other reviewers and especially the Area Chairs, in addition to the authors. Unlike the authors, the Area Chairs know your identity. Being a helpful reviewer will generate good will towards you in the research community – and may even help you to win an Outstanding Reviewer award.
 

CMT Instructions

Once you've been notified by email that papers have been assigned to you, please log into the CMT site (https://cmt3.research.microsoft.com/CVPR2022), choose the “Reviewer” role on top, and follow the steps below.

1. Download your papers.

To download individual papers, you can click the links underneath individual paper titles. Or, you can click the “Actions” button in the top right corner and then choose “Download Files”. This allows you to download a ZIP file containing all the papers plus supplementary files (if available).

2. Check for possible conflict or submission rule violations.

Contact the Program Chairs (programchairs-cvpr2022@googlegroups.com) immediately if:

  1. You think you are conflicted with the paper (see the section entitled “Avoid Conflict of Interest” above).
  2. You think the paper violates submission rules regarding anonymity, double submission, or plagiarism (please refer to the Author Guidelines for precise definitions of what is and isn’t considered acceptable). In the meantime, go ahead and review the paper as if there is no violation. The Program Chairs will follow up, but this may take a bit of time.

3. Review papers and assign them a preliminary (pre-rebuttal) rating.

For a paper, under the review column, click "Edit Review" to get to the review form. You can hover the mouse over the “?” symbol next to each question for a more detailed explanation. Before you start writing your reviews, make sure you have read the Reviewer Guidelines above and the CVPR2022 Reviewer Tutorial.

4. (Optional) Review papers offline.

To enable offline reviewing, go to “Actions -> Import Reviews”. You can select papers and click “Download” to obtain XML review stubs, then update the files as needed. Once you are done updating, you can upload the files from the same page. We suggest that you use an XML editor to edit the files. You should always verify each review after uploading by inspecting it online.

5. Participate in discussions with Area Chairs and other reviewers.

After the rebuttal period, reviewers will work with Area Chairs to clear up any confusions and attempt to reach consensus on papers. The CMT site has an electronic bulletin board feature that allows Area Chairs to contact reviewers anonymously. Once the Area Chair posts a note, reviewers will be notified and asked to log in to see the post and respond. The identities of the reviewers will be hidden from each other (but not from the Area Chair).

6. Enter your final (post-rebuttal) recommendation.

After the rebuttal period you will enter your final recommendation on CMT. This may differ from your preliminary rating, and should reflect your judgment taking into account all the other reviews, the authors' rebuttal, and the discussion about the paper (if any).

Reviewer FAQs

Q. How should code submission be handled?

A. Please read the Author FAQ regarding code submission.

Q. Is there a minimum number of papers I should accept or reject?

A. No. Each paper should be evaluated in its own right. If you feel that most of the papers assigned to you have value, you should accept them. It is unlikely that most papers are bad enough to justify rejecting them all. However, if that is the case, provide clear and very specific comments in each review. Do NOT assume that your stack of papers necessarily should have the same acceptance rate as the entire conference ultimately will.

Q. Can I review a paper I already saw on arXiv and hence know who the authors are?

A. In general, yes, unless you are conflicted with one of the authors. See next question below for guidelines.

Q. How should I treat papers for which I know the authors?

A. Reviewers should make every effort to treat each paper impartially, whether or not they know who wrote the paper. For example: It is not OK for a reviewer to read a paper, think “I know who wrote this; it's on arXiv; they are usually quite good” and accept the paper based on that reasoning. Conversely, it is also not OK for a reviewer to read a paper, think “I know who wrote this; it's on arXiv; they are no good” and reject the paper based on that reasoning.

Q. Should authors be expected to cite related arXiv papers or compare to their results?

A. Consistent with good academic practice, the authors should cite all sources that inspired and informed their work. This said, asking authors to thoroughly compare their work with arXiv reports that appeared shortly before the submission deadline imposes an unreasonable burden. We also do not wish to discourage the publication of similar ideas that have been developed independently and concurrently. Reviewers should keep the following guidelines in mind:

  • Authors are not required to discuss and compare their work with recent arXiv reports, although they should properly acknowledge those that directly and obviously inspired them.
  • Failing to cite an arXiv paper or failing to beat its performance SHOULD NOT be SOLE grounds for rejection.
  • Reviewers SHOULD NOT reject a paper solely because another paper with a similar idea has already appeared on arXiv. If the reviewer suspects plagiarism or academic dishonesty, they are encouraged to bring these concerns to the attention of the Program Chairs.
  • It is acceptable for a reviewer to suggest that an author should acknowledge or be aware of something on arXiv.

Q. How should I treat the supplementary material?

A. The supplementary material is intended to provide details of derivations and results that do not fit within the paper format or space limit. Ideally, the paper should indicate when to refer to the supplementary material, and you need to consult the supplementary material only if you think it is helpful in understanding the paper and its contribution. According to the Author Guidelines, the supplementary material MAY NOT include results obtained with an improved version of the method (e.g., following additional parameter tuning or training), or an updated or corrected version of the submission PDF. If you find that the supplementary material violates these guidelines, please contact the Program Chairs.

Q. Can I request additional experiments in the author’s rebuttal? How should I treat additional experiments reported by authors in the rebuttal?

A. In your review, you may request clarifications or additional illustrations in the rebuttal. Per a passed 2018 PAMI-TC motion, reviewers should not request substantial additional experiments for the rebuttal, or penalize for lack of additional experiments. “Substantial” refers to what would be needed in a major revision of a paper. The rebuttal may include figures with illustrations or comparison tables of results reported in the submission/supplemental material or in other papers. However, papers should also not be penalized for supplying extra results; you can simply choose to ignore them.

Q. If a social media post shared information on a CVPR submission without the authors being involved, does that signal a violation?

A. No, it does not. A violation occurs only when the authors themselves proactively promote the submission.

Q. Why is social media promotion an issue?

A. Groups with large followings and the resources to mount visible social media promotions can receive significant attention for work that is under review. Reviewers are exposed to this work, and the attention it receives can bias their judgment: if so many people on social media are excited, mustn't it be good? Groups with fewer followers, or that refrain from such campaigns, are disadvantaged. This biases the peer review process and reduces trust in its fairness.

Peer review is the backbone of science. The process helps detect mistakes or false claims before the work appears in public. This reduces the chance that work needs to be retracted and, hence, increases public trust in science and the scientific process. Science depends on this trust both for funding and for its independence. Anything that undermines this trust can have long term negative consequences for basic research.

Q. What about arXiv papers?

A. The field has decided that dissemination on arXiv facilitates the rapid spread of information within the field. arXiv papers are not “published” but are understood to be “pre-publications.” This open pre-publication process provides a form of community review where problems can be detected (much like formal peer review). arXiv papers are often corrected and modified; the site is set up to support this scientific process of revision. Putting a paper on arXiv for early analysis by experts is very different from publicly promoting work on social media to a broad audience. For example, attention to a particular arXiv paper cannot be gathered from arXiv itself.

Q. arXiv tweets new papers. Is that a violation?

A. No. This is an automatic process and does not constitute the authors promoting their work. arXiv tweets are largely followed by experts in the field and not the general public. The work is presented in its entirety as a pre-publication and can be judged as such. This differs from, for example, promotional videos posted on social media.

Q. Doesn’t this slow down scientific progress?

A. No. Experts in the field make scientific progress, not the general public. The exemption for arXiv means that the research community still gets early access to research and can evaluate it as non-peer-reviewed.

Q. May the authors have a video link in their arXiv paper?

A. Yes, the authors may, and it is not considered a violation of the social media ban, as long as the video (or a link to the video) is not posted on any social media platform, and it does not contain any information that would otherwise link it to the CVPR submission. In addition, if the video is hosted on a video platform that behaves similarly to a social media platform (such as YouTube, Douban, or Bilibili), then the video post must have comments disabled and be unlisted.

Q. May the authors build a project website related to their arXiv paper?

A. Yes, the authors may, and it is not considered a violation of the social media ban, as long as the URL of the project website is not posted on any social media platform, and the project website itself does not contain any information that would otherwise link it to the CVPR submission.

Q. A paper is using a withdrawn dataset, such as DukeMTMC-ReID or MS-Celeb-1M. How should I handle this?

A. Reviewers are advised that the choice to use a withdrawn dataset, while not in itself grounds for rejection, should invite very close scrutiny. Reviewers should flag such cases in the review form for further consideration by ACs and PCs. Consider questions such as: Do the authors explain why they had to do this? Is this explanation compelling? Is there really no alternative dataset that could have been used? Remember, authors might simply not know the dataset had been withdrawn. If you believe the paper could be accepted without the authors’ use of a withdrawn dataset, then it is natural to advise the authors to remove the experiments associated with this dataset.

Q. If a paper did not evaluate on a withdrawn dataset, can I request authors that they do?

A. It is a violation of policy for a reviewer or area chair to require comparison on a dataset that has been withdrawn without a detailed consultation with the PCs, DEI chairs, or ombuds.