Introduction

Although recent advances in computer vision have substantially boosted the performance of multimedia systems, a core question still cannot be explicitly answered: does the machine understand what is happening in a video, and are the results of its analysis interpretable by human users? Another way to look at this limitation is to ask how many facts the machine can recognize from a video. In many AI and knowledge-based systems, a fact is represented by a relation between a subject entity and an object entity (a.k.a. <subject,predicate,object>), which forms the fundamental building block for complex inference and decision-making tasks.

As a key aspect of recognizing facts, Video Relation Understanding (VRU) is very challenging: it requires the system to understand the two entities from many perspectives, including their appearance, actions, speech, and the interactions between them. To detect and recognize relations in videos accurately, a system must handle not only the features along these perspectives but also the large variance in how relations are expressed. This year's VRU challenge encourages researchers to explore and develop innovative models and algorithms that detect object entities and the relationships between each pair of them in a given video.

Dataset: VidOR

This benchmark dataset contains 10,000 user-generated videos (98.6 hours) from YFCC100M. It is spatio-temporally annotated with 80 categories of objects (e.g., adult, dog, toy) and 50 categories of relationships (e.g., next to, watch, hold).
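To make the annotation structure concrete, here is a minimal Python sketch of what one per-video annotation record could look like. The field names (`subject/objects`, `trajectories`, `relation_instances`, etc.) are illustrative assumptions for exposition, not a guaranteed copy of the official VidOR schema.

```python
# Illustrative sketch of one per-video annotation record. The field names
# are assumptions for exposition, not the official VidOR schema.
annotation = {
    "video_id": "example_0001",
    "frame_count": 300,
    "fps": 30,
    # Object entities: each gets a trajectory id (tid) and one of the
    # 80 object categories.
    "subject/objects": [
        {"tid": 0, "category": "adult"},
        {"tid": 1, "category": "dog"},
    ],
    # One list of boxes per frame; together they form each entity's
    # bounding-box trajectory.
    "trajectories": [
        [
            {"tid": 0, "bbox": {"xmin": 10, "ymin": 20, "xmax": 110, "ymax": 220}},
            {"tid": 1, "bbox": {"xmin": 150, "ymin": 180, "xmax": 230, "ymax": 260}},
        ],
        # ... one entry per frame
    ],
    # Relation instances: <subject, predicate, object> plus the frame span
    # [begin_fid, end_fid) over which the relation holds; predicates come
    # from the 50 relationship categories.
    "relation_instances": [
        {"subject_tid": 0, "predicate": "next_to", "object_tid": 1,
         "begin_fid": 0, "end_fid": 120},
        {"subject_tid": 0, "predicate": "watch", "object_tid": 1,
         "begin_fid": 60, "end_fid": 150},
    ],
}
```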

Main Task: Video Relation Detection

This task is to detect relation triplets (i.e., <subject,predicate,object>) of interest and to spatio-temporally localize the subject and object of each detected triplet with bounding-box trajectories. For each test video, we compute Average Precision (AP) to evaluate detection performance and rank submissions by the mean AP over all test videos.
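For concreteness, the sketch below implements one matching rule that is common in the video relation detection literature: a predicted triplet counts as a true positive if its <subject,predicate,object> labels match an unmatched ground-truth instance and both its subject and object trajectories overlap the ground truth with voluminous IoU (vIoU) above a threshold (0.5 is a typical choice), after which a non-interpolated AP is computed from the ranked predictions. The threshold and function names here are assumptions, not the challenge's official evaluation code.

```python
from typing import Dict, List, Tuple

Box = Tuple[float, float, float, float]   # (xmin, ymin, xmax, ymax)
Traj = Dict[int, Box]                     # frame index -> bounding box
Triplet = Tuple[str, str, str]            # (subject, predicate, object)

def viou(p: Traj, g: Traj) -> float:
    """Voluminous IoU: summed per-frame intersection area over summed
    per-frame union area, across the temporal union of both trajectories."""
    area = lambda b: max(0.0, b[2] - b[0]) * max(0.0, b[3] - b[1])
    inter, union = 0.0, 0.0
    for f in set(p) | set(g):
        if f in p and f in g:
            a, b = p[f], g[f]
            iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
            ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
            i = iw * ih
            inter += i
            union += area(a) + area(b) - i
        else:                             # frame covered by only one trajectory
            union += area(p[f]) if f in p else area(g[f])
    return inter / union if union > 0 else 0.0

def average_precision(
    preds: List[Tuple[float, Triplet, Traj, Traj]],  # (score, triplet, subj, obj)
    gts: List[Tuple[Triplet, Traj, Traj]],
    thresh: float = 0.5,                             # assumed vIoU threshold
) -> float:
    """Non-interpolated AP for one video: greedily match ranked predictions
    to unmatched ground truths whose labels agree and whose subject and
    object trajectories both pass the vIoU threshold."""
    matched = [False] * len(gts)
    hits: List[int] = []
    for score, triplet, ps, po in sorted(preds, key=lambda x: -x[0]):
        hit = 0
        for j, (gt, gs, go) in enumerate(gts):
            if matched[j] or triplet != gt:
                continue
            if viou(ps, gs) >= thresh and viou(po, go) >= thresh:
                matched[j] = True
                hit = 1
                break
        hits.append(hit)
    # Sum precision at each true-positive rank, normalized by #ground truths.
    ap, tp = 0.0, 0
    for k, h in enumerate(hits, start=1):
        tp += h
        if h:
            ap += tp / k
    return ap / len(gts) if gts else 0.0
```

The challenge score then averages this per-video AP over all test videos to obtain the mean AP used for ranking.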

Participation

This challenge is a team-based competition. Each team can have one or more members, and an individual cannot be a member of multiple teams. To register, please first create an account and form a team on the submission website, then fill in the Google Form so that we can send you subsequent notifications. Please remember to submit the Google Form before registration closes (June 7). More guidance on using the server can be found in the FAQs. Note that each team must select a final submission on the server before the submission deadline; the final evaluation will be conducted on the selected submission.

At the end of the challenge, all teams will be ranked by the objective evaluation metrics, and the leaderboard of the main task will be published on this website. To be eligible for the ACM MM'21 grand challenge award competition, each team must additionally submit a 4-page overview paper (plus 1 page of references) to the conference's grand challenge track. The top three teams, judged on both the novelty of their solutions and their ranking in the main task, will receive award certificates.

Leaderboard


Main Task: Video Relation Detection

Rank  Team Name  mean AP*  Team Members (Affiliation)
1     Item-XiXi  0.0948    Kaifeng Gao, Long Chen, Yifeng Huang, Jun Xiao (Zhejiang University & Columbia University)
2     Planck     0.0669    Shunli Wang (Fudan University)
3     SayHi2U    0.0593    Xu Han (Southeast University)

* This year's challenge attracted 48 registered teams from around the world, with 11 valid submissions entering the final evaluation.

Timeline

Organizers


For general information about this challenge, please contact:

For information about the main task, please contact:

For reporting issues on the submission server, please contact:

Previous Years