GameStory: The 2019 Video Game Analytics Challenge

Task description
In this task, participants analyze multi-view multimedia data captured at a Counter-Strike: Global Offensive event. The data includes sources such as audio and video streams, commentaries, game data and statistics, interaction traces, and viewer-to-viewer communication. We ask participants to develop systems capable of multi-stream synchronization, replay detection, and, ultimately, summarization towards a GameStory.

Task motivation and background
That video and computer games have reached the masses is well known. Game streaming, i.e., watching other people play video games, is a phenomenon that has far outgrown its small beginnings: game streams, be they live or recorded, are today viewed by millions. E-sports is the result of organized leagues and tournaments in which players compete in controlled environments and viewers can experience the matches, discuss, and criticize, just as in physical sports. Already in 2013, concurrent viewers of a single event, a League of Legends championship, exceeded eight million. In 2018, approximately 222 million viewers frequently watched e-sports streams. E-sports broadcasting and streaming offers a rich bouquet of data, including audio and video streams, commentaries, game data and statistics, interaction traces, viewer-to-viewer communication, and many more channels. However, as with traditional sports, e-sports matches may be long and contain less interesting parts, which introduces the challenge of producing well-directed summaries and highlights.

One of the key challenges today is to render the results of e-sports events more promotable. It is notoriously difficult to search, e.g., for exciting highlights, because of the huge amounts of video recorded for each match and their homogeneity: for example, every match of League of Legends looks roughly the same. Much depends on the manual selection and curation of supplementary material for promotion on an event’s website. This way, individual matches of e-sports events can be accessed more easily by those attending on-site or by stream, however at the cost of significant manual overhead. At the same time, these efforts also enable the event and its highlights to be retrieved or recommended later on.

Replay detection & multi-stream synchronization: In this task, we ask participants to perform replay detection and multi-stream synchronization. This can be evaluated automatically as well as subjectively, and it provides participants with a particular, well-defined problem to solve. In the data set, we offer the players' views on the game (five per team) as well as the commentators' view, which draws on the players' views to comment on the game. In the commentators' stream, important scenes are replayed, and these replays are what we ask participants to find, in the following order:

1. Identification of replays in the commentators' stream.
2. Re-finding the particular scene in the players' streams.
3. Matching the time points and lengths in the videos.
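The three steps above can be sketched in code. The following is a minimal illustration, assuming frames have already been decoded and downsampled to small grayscale arrays; the perceptual-hashing scheme, sliding-window matching, and distance threshold are our own illustrative assumptions, not part of the task definition:

```python
# Hedged sketch: detect where a replayed scene occurs in a player stream.
# Assumes frames are small grayscale 2-D lists (e.g., an 8x8 downsample);
# a real system would decode and resize video with a library such as OpenCV.

def average_hash(frame):
    """Perceptual hash of a small grayscale frame: one bit per pixel,
    set when the pixel is brighter than the frame's mean intensity."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def find_scene(replay_hashes, player_hashes, max_dist=10):
    """Steps 2 and 3: slide the replay's hash sequence over one player
    stream; return (start_index, length) of the best-matching window,
    or None if even the best window is too dissimilar."""
    n, m = len(player_hashes), len(replay_hashes)
    best_start, best_cost = None, float("inf")
    for start in range(n - m + 1):
        cost = sum(hamming(r, player_hashes[start + i])
                   for i, r in enumerate(replay_hashes))
        if cost < best_cost:
            best_start, best_cost = start, cost
    if best_cost / m <= max_dist:  # average per-frame distance threshold
        return best_start, m
    return None
```

Identifying the replay segment in the commentators' stream (step 1) could, under the same assumptions, use near-duplicate hash sequences within that single stream; the matcher above then locates the scene in each player stream and yields the time points and length via the frame indices.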

Summarization as a story: As an optional submission, each team may provide a summary: an engaging and captivating story that boils down the thrill of the game to its mere essence. For this optional task, we encourage participants to think up and investigate ways of summarizing how e-sports matches ramp up, evolve, and play out over time. Instead of merely iterating highlights, the summary should tell such a story.


Training and test data for the task have been provided, as in 2018, in cooperation with ZNIPE.TV, a platform for e-sports streaming. The data will be enriched with a ground truth on stream synchronization and event detection. Evaluation is two-fold: on the one hand, we evaluate submissions against the ground truth with objective measures for synchronization and event detection; on the other hand, summary submissions will be judged by an expert panel. The panel will include professionals from the game industry, amateur players and frequent viewers, and researchers from the fields of game engineering, game studies, and narratives in video games. The exact criteria for evaluating submissions will be made available to participants in the in-depth task description.

Target group
The target group of researchers for this task is diverse and broad. It includes researchers from multimedia, web science, data science, machine learning, information retrieval, natural language processing, and game- and entertainment-related fields. We also encourage interdisciplinary research involving people from media science, cultural studies, and the humanities to discuss what should be part of such a story and what the expectations of the audience are. There are multiple angles from which one can approach this task, namely classical video and audio summaries, natural language generation, multimedia analysis, emotion recognition, non-linear stories, multimodal data analysis, user interaction analysis, retrieval and classification of relevant segments, multi-view videos, weighting and ordering of events and video segments, and many more. In any case, regardless of the research background, it will help to have a basic understanding of gaming culture or, ideally, to enlist dedicated gamers.

Training and test data for the task are provided in cooperation with ZNIPE.TV. The available video and event streams cover a Counter-Strike: Global Offensive (CS:GO) tournament of the ESL One league, held in Katowice in 2018. All matches come with video streams from the ten players (five per team), a spectator video stream with commentator audio, a video stream with the map overview, and structured data on game statistics and game highlights.

GameStory 2019 is split into a technical, easily approachable task and an open, creative one (which is optional). While the latter is more interesting to the organizers, the 2018 edition of GameStory showed that it poses a high entry barrier for young researchers, who fear choosing the wrong approach, although there is no “wrong” in this case.

Replay detection & multi-stream synchronization: We ask participants to identify replays in the commentator stream and re-find them in the players’ streams. A ground truth will be provided.
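Since a ground truth is provided, one plausible objective measure for this subtask is the temporal overlap between a predicted replay interval and the annotated one. The sketch below uses temporal intersection-over-union (IoU); this is our own illustration of the idea, not the task's official evaluation measure:

```python
# Hedged sketch: score a predicted replay interval against the ground
# truth via temporal intersection-over-union. Intervals are (start, end)
# times in seconds; the IoU threshold used downstream is an assumption.

def temporal_iou(pred, truth):
    """Return overlap / union of two time intervals, in [0, 1]."""
    (ps, pe), (ts, te) = pred, truth
    inter = max(0.0, min(pe, te) - max(ps, ts))
    union = (pe - ps) + (te - ts) - inter
    return inter / union if union > 0 else 0.0
```

A detection could then be counted as correct when the IoU exceeds some threshold, e.g., 0.5, analogous to evaluation practice in object detection and temporal action localization.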

Summarization as a story: Participants are asked to submit up to three stories for a match from the test data set. There is no constraint on the modalities of the story, so it can be video, audio, text, images or a combination thereof. An expert panel with professionals and researchers from the field of game development, game studies, e-sports, and non-linear narratives will then investigate the submissions and judge them for:
  • Conciseness and accuracy (i.e., does the story faithfully recount the match?)
  • Excitement and engagement (i.e., is the story compelling, with the flow and climax of a good story?)
  • Innovation (i.e., a surprisingly new approach, non-linearity of the story, creative use of cuts, etc.)
  • Simplicity and degree of automation of the approach
  • Cross-domain applicability

Recommended reading
[1] J. Bryce and J. Rutter. "Spectacle of the Deathmatch: Character and Narrative in First Person Shooters." In G. King and T. Krzywinska (Eds.), ScreenPlay: Cinema/Videogames/Interfaces, Wallflower Press, 2002, pp. 66-80.
[2] R. R. Shah et al. "EventBuilder: Real-time Multimedia Event Summarization by Visualizing Social Media." In Proceedings of the 23rd ACM International Conference on Multimedia, ACM, 2015.
[3] W.-Y. Lee, W. H. Hsu, and S. Satoh. "Learning From Cross-Domain Media Streams for Event-of-Interest Discovery." IEEE Transactions on Multimedia 20.1 (2018): 142-154.
[4] M. Seif El-Nasr, A. Drachen, and A. Canossa (Eds.). Game Analytics. Springer, 2013.
[5] M. Lux, M. Riegler, D.-T. Dang-Nguyen, M. Larson, M. Potthast, and P. Halvorsen. "GameStory Task at MediaEval 2018." In Working Notes Proceedings of the MediaEval 2018 Workshop, 2018.

Task organizers
Duc-Tien Dang-Nguyen, duc-tien.dang-nguyen at, Dublin City University
Johanna Pirker, TU Graz, Austria
Martin Potthast, Universität Leipzig
Mathias Lux, mlux at, Alpen-Adria-Universität Klagenfurt
Michael Riegler, SIMETRIC & University of Oslo, michael at
Pål Halvorsen, Simula Research Laboratory & University of Oslo

Task schedule
Development data release: 31 May
Test data release: 15 June
Runs due: 20 September
Results returned: 23 September
Working Notes paper due: 30 September
MediaEval 2019 Workshop (in France, near Nice): 27-29 October 2019