Call for Proposals: The Robotic Vision Challenges - Probabilistic Object Detection

Our first challenge requires participants to detect objects in video data produced from high-fidelity simulations. The novelty of this challenge is that participants are rewarded for providing accurate estimates of both spatial and semantic uncertainty for every detection using probabilistic bounding boxes. Accurate spatial and semantic uncertainty estimates are rewarded by our newly developed probability-based detection quality (PDQ) measure. Full details about this new measure are available in our arXiv paper.
Applications are closed

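As a rough sketch of how such a measure can work, a detection matched to a ground-truth object can be scored as the geometric mean of a spatial quality and a label quality. The snippet below is an illustrative simplification, not the official evaluation code: the function name `pairwise_pdq`, its arguments, and the omission of the background-penalty term are our assumptions here, and the exact definitions are in the arXiv paper.

```python
import math

def pairwise_pdq(pixel_probs_on_gt, label_prob_true_class):
    """Illustrative pairwise detection quality for one matched pair.

    pixel_probs_on_gt: probabilities the detection assigns to each
        ground-truth pixel of the object.
    label_prob_true_class: probability the detection assigns to the
        object's true class.
    """
    eps = 1e-14  # guard against log(0) for pixels given zero probability
    # Spatial quality: exponential of the mean log-probability over the
    # ground-truth pixels (the full PDQ also penalises probability mass
    # spilled onto background pixels, omitted here for brevity).
    mean_log = sum(math.log(max(p, eps)) for p in pixel_probs_on_gt)
    mean_log /= len(pixel_probs_on_gt)
    q_spatial = math.exp(mean_log)
    # Geometric mean of spatial and label quality.
    return math.sqrt(q_spatial * label_prob_true_class)
```

A perfectly placed, perfectly confident detection scores 1.0; because the two terms combine multiplicatively, a detection cannot compensate for poor spatial uncertainty with a confident label, or vice versa.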
We invite anyone interested in object detection who appreciates a good challenge to participate and compete, so that together we can push the state of the art in object detection in directions better suited to robotics applications. We also welcome any and all feedback about the challenge itself and look forward to hearing from you.

Participation and Presentation of Results

We maintain two evaluation servers on Codalab:

  • An ongoing evaluation server with a public leaderboard that remains open year-round and can be used to benchmark your algorithm, e.g. for paper submissions. It contains a validation dataset and a test-dev dataset (coming soon).
  • A competition evaluation server that is only available in the lead-up to competitions we organise at major computer vision and robotics conferences.

CVPR 2019 Competition Evaluation Server

We are organising a competition and workshop at CVPR 2019 in June. Participants can present their results, and we will announce the challenge winners and distribute $5000 AUD in prize money (sponsored by the Australian Centre for Robotic Vision). Please head to our competition evaluation server to participate, download the training, validation, and test datasets, and find further information about the dataset and submission format.

Ongoing Evaluation Server

We maintain an ongoing evaluation server with a public leaderboard that can be used year-round to benchmark your approach for probabilistic object detection.

How to Cite

When using the dataset and evaluation in your publications, please cite:

@article{hall2018probability,
  title={Probability-based Detection Quality (PDQ): A Probabilistic Approach to Detection Evaluation},
  author={Hall, David and Dayoub, Feras and Skinner, John and Corke, Peter and Carneiro, Gustavo and S{\"u}nderhauf, Niko},
  journal={arXiv preprint arXiv:1811.10800},
  year={2018}
}

Photo: Probabilistic object detections provide bounding box corners as Gaussians (corner point with covariance). (credit: Australian Centre for Robotic Vision (ACRV))

Photo: This results in a per-pixel probability of belonging to the detected object. Our evaluation takes this spatial uncertainty into account. (credit: Australian Centre for Robotic Vision (ACRV))

Photo: Example scenes from the dataset. (credit: The Robotic Vision Challenges)

What is Probabilistic Object Detection?

For robotics applications, detections must not only provide information about where and what an object is, but must also provide a measure of spatial and semantic uncertainty. Failing to do so can lead to catastrophic consequences from over- or under-confident detections.

Semantic uncertainty can be provided as a categorical distribution over class labels. Spatial uncertainty in the context of object detection can be expressed by augmenting the commonly used bounding box format with covariances for its corner points. That is, a bounding box is represented by two Gaussian distributions, one per corner, as illustrated in the images above.
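To make this concrete, here is a minimal sketch of how two corner Gaussians induce a per-pixel probability of belonging to the detected object. It assumes axis-aligned corners with diagonal covariance for simplicity (the probabilistic box format allows a full 2x2 covariance per corner), and the helper names `gaussian_cdf` and `pixel_probability` are ours, not part of any official toolkit.

```python
import math

def gaussian_cdf(x, mean, std):
    """CDF of N(mean, std^2) evaluated at x."""
    return 0.5 * (1.0 + math.erf((x - mean) / (std * math.sqrt(2.0))))

def pixel_probability(u, v, top_left, bottom_right):
    """Probability that pixel (u, v) lies inside a probabilistic box.

    top_left / bottom_right are (mean_x, mean_y, std_x, std_y) tuples
    describing the corner Gaussians.
    """
    tlx, tly, tsx, tsy = top_left
    brx, bry, bsx, bsy = bottom_right
    # Probability each corner falls on the correct side of the pixel,
    # treated independently per axis.
    p_left  = gaussian_cdf(u, tlx, tsx)        # top-left x <= u
    p_top   = gaussian_cdf(v, tly, tsy)        # top-left y <= v
    p_right = 1.0 - gaussian_cdf(u, brx, bsx)  # bottom-right x >= u
    p_bot   = 1.0 - gaussian_cdf(v, bry, bsy)  # bottom-right y >= v
    return p_left * p_top * p_right * p_bot
```

A pixel deep inside the mean box gets probability near 1, and the probability falls off smoothly across the uncertain corner regions; it is exactly this spatial uncertainty that the PDQ measure evaluates.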

Final Detection Submissions Due - 10th May 2019 Midnight UTC
Final Paper Submissions Due - 12th May 2019 Midnight UTC
 
Source: Australian Centre for Robotic Vision (ACRV)

 
