The Search and Hyperlinking Task envisions the following scenario: a user is searching for a known segment in a video collection. To expand the search experience, the user also wants links to other related video segments.
The Search and Hyperlinking task comprises two sub-tasks:
- Search: finding suitable video segments based on a short natural language query,
- Linking: defining links to other relevant video segments in the collection.
We run the task on the blip10000 collection, a crawl of the Internet video sharing platform Blip.tv. The collection is split into a development set and a test set of 5,288 and 9,550 videos respectively, and is accompanied by user-generated metadata that includes titles, tags, and short descriptions. Additionally, we provide the audio part of the data, at least one version of automatic speech recognition (ASR) transcripts (containing the 1-best output; possibly also lattices and confusion networks), shot definitions, and key frames.
We provide two sets of queries:
- Search queries: short natural language sentences (1-2 sentences)
- Linking queries: a set of anchor segments in the videos (roughly 10 to 30 seconds long) that correspond to the query set of the Search sub-task
All the videos in the dataset fall into different genre categories (art, politics, music and entertainment, etc.).
Participants will know the genre of the video each query is associated with and may use this information in their approaches.
The two sub-tasks are defined as follows:
- Search sub-task: The queries, which are defined by the MT workers, are used in the search. For each query, a submitted run contains a ranked list of video segments in decreasing order of likelihood that the segment is the required one. Participants are required to make one submission per ASR transcript version using its 1-best output and no genre information. Additionally, participants can deliver up to two other runs for each ASR transcript version.
- Linking sub-task: The video segments, which are defined by the MT workers, are used as anchors. Participants have to return a ranked list of video segments, which are relevant to the information in the anchor video segment (independent of the initial textual query).
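As an illustration, a run for either sub-task can be thought of as a ranked list of time-bounded segments. The sketch below shows one possible in-memory representation; the field names and the exact submission format are assumptions here, not the official specification, which will be provided by the organizers.

```python
# Illustrative sketch of a ranked run for one query.
# The Segment fields (video_id, start/end in seconds, retrieval score)
# are assumptions for illustration, not the official run format.
from dataclasses import dataclass

@dataclass
class Segment:
    video_id: str
    start: float  # segment start time in seconds
    end: float    # segment end time in seconds
    score: float  # system's relevance score

def rank_segments(segments):
    """Order candidate segments by decreasing retrieval score,
    as required for a submitted run."""
    return sorted(segments, key=lambda s: s.score, reverse=True)

run = rank_segments([
    Segment("blip_0001", 12.0, 45.0, 0.42),
    Segment("blip_0932", 130.5, 160.0, 0.87),
])
print([s.video_id for s in run])  # highest-scoring segment first
```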
The two sub-tasks will be evaluated as follows:
- Search sub-task evaluation: We evaluate the search sub-task using a variation of the mean reciprocal rank metric (mGAP) that takes the distance to the actual relevant jump-in point into account. We may additionally use further variations of mGAP. A script that computes the evaluation metrics will be provided by the task organizers.
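To make the idea of a distance-penalized reciprocal rank concrete, here is a minimal sketch. The linear penalty and the 60-second window are assumptions chosen for illustration; the official evaluation script defines the exact metric.

```python
def penalized_reciprocal_rank(result_starts, relevant_start, window=60.0):
    """Reciprocal rank of the first returned segment whose start time lies
    within `window` seconds of the relevant jump-in point, scaled down
    linearly by its distance to that point. Penalty shape and window size
    are illustrative assumptions, not the official mGAP definition."""
    for rank, start in enumerate(result_starts, 1):
        dist = abs(start - relevant_start)
        if dist <= window:
            return (1.0 / rank) * (1.0 - dist / window)
    return 0.0  # no segment close enough to the jump-in point

def mean_score(runs, relevant_starts, window=60.0):
    """Average the per-query scores over all queries."""
    scores = [penalized_reciprocal_rank(r, t, window)
              for r, t in zip(runs, relevant_starts)]
    return sum(scores) / len(scores)
```

For example, a run whose second-ranked segment starts exactly at the jump-in point scores 0.5, while an exact hit at rank 1 scores 1.0.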
- Linking sub-task evaluation: We use MT workers to define the ground truth of relevant video segments in the submitted runs. We evaluate the linking sub-task using precision-oriented metrics, such as precision at rank 10, computed with the trec_eval tool (http://trec.nist.gov/trec_eval/).
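Precision at rank 10 is simply the fraction of the top ten returned segments that the MT workers judged relevant. A small sketch (the segment identifiers are hypothetical):

```python
def precision_at_k(ranked_ids, relevant_ids, k=10):
    """Fraction of the top-k returned segments that are judged relevant."""
    top_k = ranked_ids[:k]
    hits = sum(1 for seg in top_k if seg in relevant_ids)
    return hits / k

# Hypothetical example: 3 of the top 10 linked segments are judged relevant.
ranked = [f"seg_{i}" for i in range(10)]
print(precision_at_k(ranked, {"seg_0", "seg_4", "seg_7"}))  # 0.3
```

In practice, runs are written in the standard TREC format (query_id, Q0, doc_id, rank, score, run_tag) so that trec_eval can compute such metrics directly against the qrels.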
MediaEval 09 Linking Task Overview
MediaEval 09 Linking Task papers
Wikify!: linking documents to encyclopedic knowledge
MediaEval 11 Rich Speech Retrieval Task Overview
Robin Aly, University of Twente, Netherlands
Maria Eskevich, Dublin City University, Ireland
Gareth Jones, Dublin City University, Ireland
Martha Larson, TU Delft, Netherlands
Roeland Ordelman, University of Twente and Netherlands Institute for Sound and Vision, Netherlands
For more information contact:
Maria Eskevich meskevich (at) computing.dcu.ie and Robin Aly r (dot) aly (at) ewi.utwente.nl
Special thanks to Shu Chen, Dublin City University
This task is a "Brave New Task", which means that it will run as a closed task in 2012, with an eye to becoming a larger, open task in 2013. Participation is by invitation only. If you are interested in receiving an invitation, please write an email to Robin Aly (first initial dot last name at ewi.utwente.nl) and Maria Eskevich (first initial no dot last name at computing.dcu.ie).
This task is made possible by a collaboration of projects including Axes and IISSCOS with support from CUbRIK.