Call for Demos

Submissions Website
https://cmt3.research.microsoft.com/CVPRDemo2022

Online Demos Submission
https://huggingface.co/cvpr

 

The CVPR 2022 Demo Program invites submissions of demos for the CVPR Demo Track. 

Submissions may range from early research demos to mature production-ready systems. We particularly encourage publicly available open-source and open-access systems, as well as interactive industrial or individual systems that are innovative relative to the current state of the art in the CVPR community. Each submission must include the system (or a link to it) AND a paper describing the system. Papers are limited to eight pages, including figures and tables, in the CVPR style.

Areas of interest include all topics related to CVPR, including but not limited to the topics listed on the main conference website.

Submitted demo systems may be of the following types:

  • CVPR software/hardware systems or system components
  • Application systems/tools using CVPR components such as (but not limited to):
    • Multimodal/embodied systems
    • Creative image and video editing or generation
    • Biomedical
    • Earth Observation/Agriculture
    • Education
    • Transportation
    • E-commerce
    • Robotics and hardware technologies
  • Tools for model inspection, data annotation, visualization, and other development and research tools related to CVPR

Accepted demos will be accessible either through the virtual CVPR website or the physical CVPR event (or both if applicable). Papers describing accepted demonstrations will be published in the CVPR conference proceedings (Demo Track).

 

Note: Commercial sales and marketing activities are not appropriate for submissions in the CVPR Demo Track and should be arranged as part of the CVPR Exhibit Program.

 

Best Demo Awards

CVPR 2022’s demo track will feature Best Demo Awards. We hope to encourage researchers to create interactive systems, based on cutting-edge research, that are publicly available, fun, useful, and easy to use.

 

Important Dates

  • Paper registration deadline Round 1 (late registrations will require organizer permission): Jan 31, 2022
  • Paper submission deadline Round 1: Feb 7, 2022
  • Notification of acceptance Round 1: March 8, 2022
  • Paper registration deadline Round 2 (fast track for CVPR): March 18, 2022
  • Paper submission deadline Round 2: March 20, 2022
  • Notification of acceptance Round 2: April 10, 2022
  • Camera-ready submission: April 18, 2022

All deadlines are 11:59 pm Pacific Time.

The proposed system must be ready by the camera-ready deadline. Additional improvements are allowed, but should not diverge significantly from the published description in the paper.

Note: we encourage authors to submit to Round 1 if possible. Round 2 is focused on a fast track for accepted/rejected CVPR main conference papers. Each demo paper based on a previous CVPR submission should be submitted along with the CVPR paper submission, its reviews, and decisions.

 

Submission of papers describing demonstrations

A paper submitted to accompany a demonstration should outline the design of the system and provide sufficient details to allow the evaluation of its validity, quality, and relevance to CVPR. A paper can do this by addressing the following questions:

  • What problem does the proposed system address?
  • Why is the system important and what is its impact?
  • What is the novelty in the approach/technology on which this system is based?
  • Who is the target audience?
  • How does the system work?
  • How does it compare with existing systems?
  • How is the system licensed?
  • Any additional concerns about the proposed system? (e.g. ethical or environmental concerns)

Paper submission is electronic, using the CMT system (https://cmt3.research.microsoft.com/CVPRDemo2022).

Style files should meet the requirements of the CVPR main conference. Submissions may consist of 4-8 pages, plus unlimited references. Submissions must conform to the CVPR author guidelines and must be in PDF format. Submissions must be original, unpublished work, as publication in the CVPR Demo Track is archival.

 

Multiple Submission Policy

We follow the Dual/Double Submission Policy of the CVPR 2022 main conference CFP. The paper must be written specifically for this conference and cannot be submitted elsewhere. However, a demo accompanying a paper accepted at the CVPR 2022 main conference track is not considered a violation of the multiple submission policy. In this case, the demo paper should be submitted in Round 2 (the CVPR fast track) and include the acceptance information in the submission. The content of the submission should address the requirements of the demo track and differ from the main conference submission.


 

CVPR Ethical Guidelines

As computer vision demonstration systems have increasing societal impact, we would like to do our best collectively to increase the societal benefits and limit potential harm. We expect all demo submissions to observe the CVPR Ethical Guidelines as specified here: https://cvpr2022.thecvf.com/ethics-guidelines.

 

Reviewing Policy

Reviewing will be single-blind, so authors do not need to conceal their identity. The paper should include the authors’ names and affiliations. Self-references are also allowed.

 

Demo Details

As the conference may be hybrid, we strongly recommend that all demos be provided via one of the following formats: (1) a live demo website; or (2) a website with a downloadable installation package of the demo, unless this is impossible because special hardware is required or access is otherwise limited. We are partnering with Hugging Face Spaces to provide a dedicated place for you to upload your Gradio demo with free unlimited hosting. See the bottom of this page for detailed instructions on using Gradio and Spaces to upload your demo to the dedicated Hugging Face CVPR organization Space.

You are encouraged to submit a short (~5 minutes) screencast video demonstrating the system together with your paper submission. This screencast will be used to evaluate the paper, but won’t be published unless requested. We encourage the authors to include visual aids (e.g., screenshots, snapshots, or diagrams) in the paper. Authors will also be able to upload and submit additional material, if needed. If you choose to submit a screencast, please upload the video to some hosting site (YouTube, Vimeo, etc.) and include the link in your paper submission. To ensure accessibility for deaf or hard-of-hearing viewers, we encourage authors to caption videos prior to submission.

 

Demo chairs:

  • Humphrey Shi (U of Oregon, Picsart AI Research, and UIUC)
  • Maria Vakalopoulou (CentraleSupélec, University of Paris-Saclay)

 

For any questions, please contact the demo chairs
(shihonghui3@gmail.com and maria.vakalopoulou@centralesupelec.fr)


Hugging Face Spaces & Gradio for Showcasing your CVPR ‘22 Demo 

 

Thanks for submitting a demo to the CVPR ‘22 Demo Track! 

 

In this tutorial, we will demonstrate how to showcase your demo with an easy-to-use web interface using the Gradio Python library and host it on Hugging Face Spaces so that conference attendees can easily find and try out your demos. 

🚀 Create a Gradio Demo from your Model

 

The first step is to create a web demo from your model. As an example, we will be creating a demo from an image classification model (called model), which we will upload to Spaces. The full code for steps 1-4 can be found in this Colab notebook.

 

  1. Install the gradio library

 

All you need to do is to run this in the terminal:

 

pip install gradio

 

  2. Define a function in your Python code that performs inference with your model on a data point and returns the prediction

 

Here we define our image classification model’s prediction function in PyTorch (any framework, like TensorFlow, scikit-learn, JAX, or plain Python, will work as well):

 

import torch
from PIL import Image
from torchvision import transforms

def predict(inp):
  # Convert the input array to a PIL image, then to a tensor batch of size 1
  inp = Image.fromarray(inp.astype('uint8'), 'RGB')
  inp = transforms.ToTensor()(inp).unsqueeze(0)
  with torch.no_grad():
    prediction = torch.nn.functional.softmax(model(inp)[0], dim=0)
  # Map each of the 1000 class labels to its confidence
  return {labels[i]: float(prediction[i]) for i in range(1000)}
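The dictionary that predict returns simply maps each class label to a softmax confidence. As a framework-free sketch of that pattern (the logits and labels below are hypothetical, standing in for a real model's output), plain Python suffices:

```python
import math

# Hypothetical raw scores and labels; a real model would produce these
logits = [2.0, 1.0, 0.1]
labels = ["cat", "dog", "fox"]

# Numerically stable softmax: shift by the max before exponentiating
m = max(logits)
exps = [math.exp(x - m) for x in logits]
total = sum(exps)

# The same label-to-confidence mapping that predict() returns
confidences = {labels[i]: exps[i] / total for i in range(len(labels))}

# The confidences sum to 1.0 and the largest entry is the prediction
top = max(confidences, key=confidences.get)
```

Gradio's Label output component consumes exactly this kind of dictionary and displays the top classes with their confidences.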

 

  3. Then create a Gradio Interface using the function and the appropriate input and output types

 

For the image classification model from Step 2, it would look like this:

 

import gradio as gr

inputs = gr.inputs.Image()
outputs = gr.outputs.Label(num_top_classes=3)
io = gr.Interface(fn=predict, inputs=inputs, outputs=outputs)

 

If you need help creating a Gradio Interface for your model, check out the Gradio Getting Started guide.

 

  4. Then launch() your Interface to confirm that it runs correctly locally (or wherever you are running Python)

 

io.launch() 

 

You should see a web interface like the following where you can drag and drop your data points and see the predictions:

 

 

🤗 Host it on Hugging Face Spaces

 

  1. Create a Hugging Face account (https://huggingface.co/join) if you don’t already have one

 

This will allow you to create and share demos on Spaces, for free!

 

  2. Join the CVPR ‘22 organization by visiting https://huggingface.co/CVPR and clicking “Request to join this org” 

 

 

You may need to wait a little while for approval before you are part of the organization.

 

  3. Click your profile picture and then click on New Space 

 

 

  4. Change the “Owner” tab to “CVPR”, select Gradio as the choice for the “Space SDK”, and select Public to proceed to creating the Space

 

 

  5. When presented with the repo information, follow the instructions to clone the repo and add your code to generate and launch your Gradio demo (the code you wrote in steps 1-4). If needed, add a requirements.txt file listing all your Python packages and a packages.txt file listing Debian dependencies.
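For example, a requirements.txt for a PyTorch image-classification demo might look like the following (the package list is illustrative; list whatever your own code actually imports):

```text
torch
torchvision
Pillow
```

Pinning versions (e.g. torch==1.11.0) makes the Space build more reproducible, though unpinned names also work.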

 

 

  6. When you push your changes to the repo, the Gradio demo should automatically start building. When you refresh the page after a few moments, you should see a working Gradio demo hosted on Spaces!
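Since each Space is a plain git repository, the clone-and-push workflow in steps 5 and 6 looks roughly like the following transcript (the Space name my-demo is hypothetical; substitute the repo URL shown on your own Space's page):

```text
git clone https://huggingface.co/spaces/CVPR/my-demo
cd my-demo
# add app.py (your Gradio code from steps 1-4), requirements.txt, packages.txt ...
git add .
git commit -m "Add CVPR demo"
git push
```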

 

 

Questions? Ask on the Spaces forums

 


We have partnered with Hugging Face/Gradio to help authors of online demos prepare them, and we have prepared the tutorial above for this purpose. Following it is recommended but not enforced.