The 2017 Multimedia Satellite Task:
Emergency Response for Flooding Events
Register to participate in this challenge on the MediaEval 2017 registration site.

The Multimedia Satellite Task requires participants to retrieve multimedia content from social media streams (Flickr, Twitter, Wikipedia) about events that can be remotely sensed, such as flooding, fires, and land clearing, and to link that content to satellite imagery. The purpose of this task is to augment events that are present in satellite images with social media reports in order to provide a more comprehensive view. This is of vital importance in the context of situational awareness and emergency response for the coordination of rescue efforts.

To align with recent events, the challenge focuses on flooding events, which constitute a special kind of remotely sensed event. The multimedia satellite task is a combination of satellite image processing, social media retrieval and fusion of both modalities. The different challenges are addressed in the following subtasks.

[DIRSM] - Disaster Image Retrieval from Social Media
The goal of this task is to retrieve all images that show direct evidence of a flooding event from social media streams, independently of a particular event. The objective is to design a system/algorithm that, given any collection of multimedia images and their metadata (e.g., YFCC100M, Twitter, Wikipedia, news articles), is able to identify those images that are related to a flooding event. Please note that only images which convey evidence of a flooding event will be considered as True Positives. Specifically, we define images showing "unexpected high water levels in industrial, residential, commercial and agricultural areas" as images providing evidence of a flooding event. Examples of True Positives (green) and True Negatives (red) are shown in the sample images below.

[Sample images: True Positive (green) and True Negative (red) examples]

The main challenges of this task are the proper discrimination of the water level in different areas (e.g., images showing a lake vs. showing high water at a street) as well as the consideration of different types of flooding events (e.g., coastal flooding, river flooding, pluvial flooding).

Since this is the first time the task is offered, there will only be a small development set of about 2000 images. Participants are highly encouraged to also use images from the YFCC100M dataset and are allowed to incorporate additional sources (e.g., Twitter, Instagram, Wikipedia).

[FDSI] - Flood-Detection in Satellite Images:
The aim of this subtask is to extract changes in satellite images that are caused by a flooding event. For a list of flooding events, participants will get satellite images covering the affected area before and after each event. The challenge of this task is to develop a method/algorithm that is able to identify, from those images, the regions affected by the flooding event. The satellite image scenes will be provided by the organizers of the task. Participants report a segmentation mask for each flooding event, which will be evaluated by the percentage of pixels correctly labeled.

[SMSIL] - Social Media to Satellite Imagery Linking (Experimental):
The goal of this task is to link social media images that contain evidence of flooding to the corresponding event present in satellite images. Participants are given a list of past flooding events, each characterized by a start and end timestamp as well as a shapefile representing the affected areas. For every given flooding event, participants submit all relevant images (e.g., from YFCC100M, Twitter, Flickr) that contain direct evidence of flooding for that particular event.
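A minimal linking sketch under stated assumptions: candidate images are filtered to the event's time window and their geotags are tested against the affected-area polygon. All field names, the helper functions, and the polygon representation below are illustrative, not part of the task definition.

```python
from datetime import datetime

def point_in_polygon(lon, lat, polygon):
    """Ray-casting test: True if (lon, lat) lies inside the polygon,
    given as a list of (lon, lat) vertices."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does a horizontal ray from the point cross this edge?
        if (y1 > lat) != (y2 > lat):
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside

def link_images_to_event(images, event):
    """Return the ids of images whose timestamp falls within the event
    window and whose geotag lies inside the affected-area polygon."""
    return [img["id"] for img in images
            if event["start"] <= img["timestamp"] <= event["end"]
            and point_in_polygon(img["lon"], img["lat"], event["polygon"])]
```

In practice the affected area comes from the provided shapefile, so a geometry library (e.g., Shapely together with a shapefile reader) would replace the hand-rolled polygon test; the temporal-plus-spatial filtering logic stays the same.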

Target group
The task is of interest to researchers in the areas of computer vision, multimedia information retrieval, satellite image processing, remote sensing, social media, and media analysis. Due to the high task overlap and existing background knowledge, we highly encourage participants of MediaEval's Placing Task to take part in this new task.

Data
The images and corresponding metadata for the DIRSM task will be extracted from YFCC100M. These images are shared under Creative Commons licenses that allow their redistribution. Images will be labeled with two classes: (0) showing evidence of a flooding event and (1) showing no evidence of a flooding event. In addition to the images, we will also supply participants with additional metadata. We will release a development set of 2000 images; however, participants are also allowed to use further images from the YFCC100M dataset and to incorporate additional sources (e.g., Twitter, Instagram, Wikipedia).

Precomputed features will be provided along with the dataset to help teams from different communities participate in the task.

For the FDSI task, we will provide satellite images before and after a flooding event for multiple instances of flooding events from multiple sources. The dataset will contain modified Copernicus Sentinel data (2014/2015/2016) from ESA's Sentinel satellites. Additionally, we include satellite images from NASA's Landsat 7/8 satellites. We are also able to provide high-resolution satellite images of flooded areas gathered from Planet [7] as an underlying source of data.
For each flooding event, there will be one segmentation mask for the flooded areas. The segmentation masks will be used to assess the results and will not be provided to the participants.

Ground truth and evaluation
The images for the DIRSM task will be manually annotated with the two class labels (showing evidence / showing no evidence of a flooding event) by human assessors. The correctness of retrieved images will be evaluated with the metric Precision at X (P@X), which measures the proportion of relevant images among the top X results.
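As a concrete illustration, P@X over a ranked result list can be computed in a few lines; the function name and inputs are illustrative, not an official evaluation script.

```python
def precision_at_x(ranked_ids, relevant_ids, x):
    """Precision at cutoff X: the fraction of the top-X retrieved
    images that are relevant (i.e., show evidence of flooding)."""
    top = ranked_ids[:x]
    return sum(1 for img_id in top if img_id in relevant_ids) / x
```

For example, if the top 4 ranked images contain 2 relevant ones, P@4 = 0.5 regardless of where in the top 4 they appear.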

The segmentation masks of flooded areas in the satellite images for the FDSI task will be created by human assessors. The official evaluation metric of the task is the percentage of correctly labeled pixels for each submitted satellite image, computed as the intersection of the inferred segmentation and the ground truth divided by their union: ACC = TP/(TP+FN+FP). Pixels marked as "void" in the ground truth (e.g., due to cloud coverage) will be excluded from this measure.
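A sketch of this measure on flattened masks, assuming the illustrative label convention 1 = flooded, 0 = not flooded, and 255 as the void marker (the actual label encoding is defined by the released data, not here):

```python
def flood_iou(pred, truth, void_label=255):
    """Intersection over union of the flooded-area pixels:
    ACC = TP / (TP + FN + FP). Pixels marked void in the ground
    truth (e.g., cloud cover) are excluded from the measure."""
    tp = fp = fn = 0
    for p, t in zip(pred, truth):
        if t == void_label:
            continue  # void pixel: skipped entirely
        if p == 1 and t == 1:
            tp += 1
        elif p == 1 and t == 0:
            fp += 1
        elif p == 0 and t == 1:
            fn += 1
    denom = tp + fn + fp
    # If neither mask contains flooded pixels, score a perfect match.
    return tp / denom if denom else 1.0
```

Note that true negatives (correctly predicted non-flooded pixels) do not enter the formula, so the score is not inflated by large unflooded regions.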

Recommended reading
[1] Bischke, Benjamin, et al. "Contextual enrichment of remote-sensed events with social media streams." Proceedings of the 2016 ACM on Multimedia Conference. ACM, 2016.

[2] Chaouch, Naira, et al. "A synergetic use of satellite imagery from SAR and optical sensors to improve coastal flood mapping in the Gulf of Mexico." Hydrological Processes 26.11 (2012): 1617-1628.

[3] Klemas, Victor. "Remote sensing of floods and flood-prone areas: an overview." Journal of Coastal Research 31.4 (2014): 1005-1013.

[4] Lagerstrom, Ryan, et al. "Image classification to support emergency situation awareness." Frontiers in Robotics and AI 3 (2016): 54.

[5] Ogashawara, Igor, Marcelo Pedroso Curtarelli, and Celso M. Ferreira. "The use of optical remote sensing for mapping flooded areas." International Journal of Engineering Research and Application 3.5 (2013): 1-5.

[6] Peters, Robin, and J. P. D. Albuquerque. "Investigating images as indicators for relevant social media messages in disaster management." The 12th International Conference on Information Systems for Crisis Response and Management. 2015.

[7] Planet Team (2017). Planet Application Program Interface: In Space for Life on Earth. San Francisco, CA.

[8] Schnebele, Emily, et al. "Real time estimation of the Calgary floods using limited remote sensing data." Water 6.2 (2014): 381-398.

[9] Ticehurst, C. J., P. Dyce, and J. P. Guerschman. "Using passive microwave and optical remote sensing to monitor flood inundation in support of hydrologic modelling." Interfacing modelling and simulation with mathematical and computational sciences, 18th World IMACS/MODSIM Congress. 2009.

[10] Wedderburn-Bisshop, Gerard, et al. Methodology for mapping change in woody landcover over Queensland from 1999 to 2001 using Landsat ETM+. Department of Natural Resources and Mines, 2002.

[11] Woodley, Alan, et al. "Introducing the Sky and the Social Eye." Working Notes Proceedings of the MediaEval 2016 Workshop. Vol. 1739. CEUR Workshop Proceedings, 2016.

[12] Yang, Yimin, et al. "Hierarchical disaster image classification for situation report enhancement." Information Reuse and Integration (IRI), 2011 IEEE International Conference on. IEEE, 2011.


Task organizers
Benjamin Bischke, German Research Center for Artificial Intelligence (DFKI), Germany (first.last at dfki.de)
Damian Borth, German Research Center for Artificial Intelligence (DFKI), Germany (first.last at dfki.de)
Christian Schulze, German Research Center for Artificial Intelligence (DFKI), Germany (first.last at dfki.de)
Alan Woodley, Queensland University of Technology (QUT), Australia
Venkat Srinivasan, Virginia Tech (Blacksburg VA), US

Task schedule
1 May: Development data release
1 June: Test data release
17 August: Run submission
21 August: Results returned
27 August: Working notes paper: initial submission deadline
13-15 Sept: MediaEval Workshop in Dublin

Acknowledgments
We would like to thank Planet for providing us with high resolution satellite images for this task.