Although recent advances in computer vision have effectively boosted the performance of multimedia systems, a core question still cannot be explicitly answered: Does the machine understand what is happening in a video, and are the results of its analysis interpretable by human users? Another way to look at this limitation is to evaluate how many facts the machine can recognize from a video. In many AI and knowledge-based systems, a fact is represented by a relation between a subject entity and an object entity (i.e. <subject,predicate,object>), which forms the fundamental building block for complex inference and decision-making tasks.
As a key aspect of recognizing facts, Video Relation Understanding (VRU) is very challenging because it requires the system to understand the two entities from many perspectives, including their appearance, actions, speech, and the interactions between them. To detect and recognize relations in videos accurately, a system must capture not only the features from these perspectives but also the large variance in how relations are expressed. This year’s VRU challenge encourages researchers to explore and develop innovative models and algorithms that detect object entities and the relationships between each pair of them in a given video.
This benchmark dataset contains 10,000 user-generated videos (98.6 hours) from YFCC100M. It is spatio-temporally annotated with 80 categories of objects (e.g. adult, dog, toy) and 50 categories of relationships (e.g. next to, watch, hold).
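To make the annotation concept concrete, the sketch below shows one hypothetical way such spatio-temporal annotations could be organized: an object is a category label attached to a per-frame bounding-box trajectory, and a relation instance links a subject trajectory and an object trajectory with a predicate over a frame range. All field names and values here are illustrative assumptions, not the dataset's official schema.

```python
# Hypothetical, simplified view of a spatio-temporal annotation.
# Field names and values are illustrative only, not the official schema.

object_trajectory = {
    "tid": 0,                          # trajectory id within the video
    "category": "adult",               # one of the 80 object categories
    "boxes": {                         # frame index -> (x1, y1, x2, y2)
        12: (110, 40, 260, 420),
        13: (112, 41, 263, 422),
    },
}

relation_instance = {
    "subject_tid": 0,                  # the "adult" trajectory above
    "predicate": "hold",               # one of the 50 relationship categories
    "object_tid": 3,                   # e.g. a "toy" trajectory
    "begin_frame": 12,                 # temporal extent of the relation
    "end_frame": 96,
}
```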
This task is to detect relation triplets (i.e. <subject,predicate,object>) of interest and to spatio-temporally localize the subject and object of each detected relation triplet using bounding-box trajectories. For each testing video, we compute Average Precision (AP) to evaluate the detection performance and rank submissions according to the mean AP over all testing videos.
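For intuition only, here is a minimal sketch of how a per-video AP and the overall mean AP could be computed from a ranked list of predicted relation instances, assuming each prediction has already been matched (or not) to a previously unmatched ground-truth instance, typically via an overlap criterion on the subject and object trajectories. The helper names are ours; this is not the official evaluation code.

```python
from typing import Dict, List

def average_precision(hits: List[bool], num_gt: int) -> float:
    """AP for one video: hits[i] indicates whether the i-th ranked predicted
    relation instance matched a previously unmatched ground-truth instance."""
    if num_gt == 0:
        return 0.0
    tp, score = 0, 0.0
    for rank, hit in enumerate(hits, start=1):
        if hit:
            tp += 1
            score += tp / rank          # precision at this recall point
    return score / num_gt

def mean_ap(ap_per_video: Dict[str, float]) -> float:
    """Mean AP over all testing videos (the ranking criterion for this task)."""
    return sum(ap_per_video.values()) / len(ap_per_video) if ap_per_video else 0.0

# Toy usage: ranked match flags and ground-truth counts for two videos.
aps = {"video_1": average_precision([True, False, True], num_gt=3),
       "video_2": average_precision([False, True], num_gt=2)}
print(mean_ap(aps))   # (0.556 + 0.25) / 2 ≈ 0.40
```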
As the first step in relation detection, this task is to detect objects of certain categories and to spatio-temporally localize each detected object in a video using a bounding-box trajectory. For each object category, we compute Average Precision (AP) to evaluate the detection performance and rank submissions according to the mean AP over all categories.
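A detected trajectory is usually matched to a ground-truth trajectory by thresholding a trajectory-level (volumetric) IoU. The sketch below shows one common definition of that overlap, as a simplified illustration rather than the official matching rule, and no particular threshold value is implied.

```python
from typing import Dict, Tuple

Box = Tuple[float, float, float, float]        # (x1, y1, x2, y2)

def _area(b: Box) -> float:
    return max(0.0, b[2] - b[0]) * max(0.0, b[3] - b[1])

def _intersection(a: Box, b: Box) -> float:
    return _area((max(a[0], b[0]), max(a[1], b[1]),
                  min(a[2], b[2]), min(a[3], b[3])))

def trajectory_iou(pred: Dict[int, Box], gt: Dict[int, Box]) -> float:
    """Volumetric IoU of two bounding-box trajectories (frame -> box):
    per-frame intersection area summed over all frames, divided by the
    per-frame union area summed over the union of the two frame spans."""
    inter = union = 0.0
    for f in set(pred) | set(gt):
        i = _intersection(pred[f], gt[f]) if f in pred and f in gt else 0.0
        inter += i
        union += _area(pred.get(f, (0, 0, 0, 0))) + _area(gt.get(f, (0, 0, 0, 0))) - i
    return inter / union if union > 0 else 0.0

# Toy usage: identical single-frame trajectories overlap perfectly.
print(trajectory_iou({5: (0, 0, 10, 10)}, {5: (0, 0, 10, 10)}))   # 1.0
```

The per-category AP and mean AP would then follow the same pattern as the sketch given for the relation detection task, with matches accumulated per category instead of per video.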
This challenge is a team-based competition. Each team can have one or more members, and an individual cannot be a member of multiple teams. To register, please create an account and form your team on the submission server. More guidance on using the server can be found in the FAQs. Note that each team must select a final submission on the server before the submission deadline; we will conduct the final evaluation based on that selection.
At the end of the challenge, all teams will be ranked by the objective evaluation metrics, and the leaderboards for both tasks will be made public on this website. To be eligible for the ACM MM'20 grand challenge award competition, each team must also submit a 4-page overview paper (plus one page of references) to the conference's grand challenge track. The top three teams, judged on both the novelty of their solutions and their ranking in the main task, will receive award certificates.
For general information about this challenge, please contact: