# IntPhys Challenge 2019¶

The 2019 edition of the challenge is hosted on codalab. It provides train, dev and test data for the three blocks (O1, O2 and O3). Ground truth for the test dataset is not provided; it is kept secret for evaluation.

Important

The codalab URL of the challenge is https://competitions.codalab.org/competitions/20771.

## Participation step-by-step¶

To participate in the challenge, follow these steps:

• Register on codalab,

• Download the test dataset and the starter kit from the Download and resources page. If you want to train your model on our data, download the train dataset as well. A dev set, structured like the test dataset but including ground truth labels, is also available.

• Extract the test.*.tar.gz archives; this produces the directory tree test/{O1, O2, O3} (i.e. one sub-directory per block).
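
The extraction step can be sketched with the standard library alone (the `test.*.tar.gz` naming comes from the download instructions above; the helper name is our own):

```python
# Sketch: extract every test.*.tar.gz archive found in a directory.
import glob
import tarfile


def extract_test_archives(directory="."):
    """Extract all test.*.tar.gz archives, producing test/O1, test/O2, test/O3."""
    for archive in sorted(glob.glob(f"{directory}/test.*.tar.gz")):
        with tarfile.open(archive, "r:gz") as tar:
            tar.extractall(directory)
```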

• In the starter kit you will find the file task.txt, listing all the movies of the test set for which you must compute a plausibility score. In the task file, movie paths are given relative to the test directory. For example:

O1/0001/1
O1/0001/2
O1/0001/3
O1/0001/4
O1/0002/1
...
O3/1080/3
O3/1080/4

• Submission format

• Participants must submit one plausibility score per movie, in a file named answer.txt, bundled in a .zip archive.

• Each score is a probability, so we must have 0 ≤ score ≤ 1 for every movie.

• The answer.txt file must have the following format: each line contains the movie path (as provided by the task.txt file) along with the plausibility score you computed, in the format <movie-path> <score>. For example:

O1/0001/1 0.9751
O1/0001/2 0.0614
O1/0001/3 0.0397
O1/0001/4 0.0874
O1/0002/1 0.8663
...
O3/1080/3 0.1986
O3/1080/4 0.5458


An example submission is provided for each task in the challenge’s starter kit.
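
Under the format above, building a submission can be sketched as follows. `score()` is a hypothetical placeholder standing in for your model; everything else follows the task.txt and answer.txt conventions described here:

```python
# Sketch: read task.txt, write one "<movie-path> <score>" line per movie
# to answer.txt, and bundle answer.txt (alone) into a zip archive.
import zipfile


def score(movie_path):
    """Placeholder plausibility score in [0, 1]; replace with your model."""
    return 0.5


def build_submission(task_file="task.txt", archive="submission.zip"):
    with open(task_file) as f:
        movies = [line.strip() for line in f if line.strip()]
    with open("answer.txt", "w") as out:
        for movie in movies:
            out.write(f"{movie} {score(movie):.4f}\n")
    # The archive must contain answer.txt and nothing else.
    with zipfile.ZipFile(archive, "w") as z:
        z.write("answer.txt")
```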

• Once your submission zip file is ready, you can use the script validate.py from the starter kit to confirm your file is in the expected format and will not be rejected.

Example of validation output:

validating ../build/challenge/submission_example.zip ...
check zip extension ...
check valid zip format ...
check answer.txt is in zip ...
check answer.txt is the only file in zip ...
check entries are valid for in answer.txt ...
submission is valid, ready to be submitted!
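
The checks listed above can also be mirrored in a few lines of Python, as a quick sanity check before running the official validate.py (this sketch is ours, not the starter kit's script):

```python
# Sketch: re-implement the validator's checks — zip extension, valid zip,
# answer.txt as the only entry, and a score in [0, 1] on every line.
import zipfile


def quick_check(archive="submission.zip"):
    assert archive.endswith(".zip"), "not a .zip extension"
    assert zipfile.is_zipfile(archive), "not a valid zip file"
    with zipfile.ZipFile(archive) as z:
        assert z.namelist() == ["answer.txt"], "answer.txt must be the only file"
        for line in z.read("answer.txt").decode().splitlines():
            path, prob = line.split()
            assert 0.0 <= float(prob) <= 1.0, f"score out of range: {line}"
    return True
```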

• Once your submission archive is valid, go to https://competitions.codalab.org/competitions/20771#participate and submit it to codalab.

• Each submission is evaluated on a codalab server and the detailed score is made available to the participant. A public leaderboard on this webpage will be updated frequently; participants who do not want to appear in the leaderboard should email mathieu.a.bernard _at_ inria.fr.

## Evaluation¶

• Evaluation metric: the absolute and relative error rates are detailed on the Evaluation metric page.
• Computed scores: both the relative and absolute error rates are computed for each movie, and the average over each block is reported as the final score. We distinguish three conditions: occluded, visible and all (i.e. mixing occluded and visible movies).
• The evaluation program score.py is provided in the starter kit.

Note

The leaderboard is not yet published; participants will be notified by email (using the address provided during codalab registration) when it becomes available.

## Credits¶

This Challenge is hosted by the CoML team (EHESS - ENS - CNRS - INRIA). It was funded by the European Research Council (ERC-2011-AdG-295810 BOOTPHON), the Agence Nationale pour la Recherche (ANR-10-LABX-0087 IEC, ANR-10-IDEX-0001-02 PSL* ), and a grant from Facebook AI Research.

Contributors:

• Organization: M. Bernard, R. Riochet, E. Dupoux.
• Design of the Blocks: E. Dupoux, R. Riochet, V. Izard.
• Datasets preparation (Unreal Engine/Python): M. Bernard, M. Ynocente Castro, E. Simon, M. Métais, V. Daul.
• Codalab/Website: M. Bernard, R. Riochet.
• Human Data: R. Riochet.