The 2014 Visual Privacy Task
For this task, a participant should implement a combination of several privacy filters to protect various personal information regions in videos. The filters should optimise privacy filtering so as to: 1. obscure personal visual information effectively, whilst 2. retaining as much as possible of the ‘useful’ information that would enable a human viewer to interpret the obscured video frame.
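As an illustration of this trade-off between obscuration and intelligibility, the sketch below pixelates an annotated region of a frame. The function name, bounding-box format, and block size are illustrative assumptions, not part of the task specification; pixelation is just one of many possible filters.

```python
import numpy as np

def pixelate_region(frame, box, block=16):
    """Obscure a rectangular region of a frame by pixelation.

    frame: H x W x 3 uint8 array; box: (x, y, w, h) in pixels.
    Each block x block tile inside the box is replaced by its mean
    colour, destroying fine identity-revealing detail while keeping
    coarse structure (silhouette, motion) interpretable.
    """
    x, y, w, h = box
    region = frame[y:y+h, x:x+w].astype(np.float32)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            tile = region[by:by+block, bx:bx+block]
            tile[...] = tile.mean(axis=(0, 1))  # flatten tile to its mean colour
    frame[y:y+h, x:x+w] = region.astype(np.uint8)
    return frame

# Example: obscure a hypothetical 64x64 face box in a synthetic frame.
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
out = pixelate_region(frame, (100, 50, 64, 64), block=16)
```

Stronger obscuration (a larger block size) raises privacy protection but lowers intelligibility, which is exactly the tension the two criteria above capture.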

The task emphasizes the human perspective on privacy. Videos are typically viewed by humans; therefore, de facto, only human viewers can determine whether privacy is protected or not. A shortcoming of today’s systems for protecting privacy in video is that the people who decide what to protect, and to what degree, are the developers of the privacy protection solutions themselves. New for this task in 2014 is that it introduces a range of perspectives on what constitutes effective privacy protection, including the views of the end-users of systems (for example, the staff who monitor a surveillance system) and naive viewers from the general public.

Personal visual information is defined as subjective (i.e., human-perceived) information that can expose the identity of a person in a video to a human viewer. Personal visual information can include distinctive facial features or personal jewellery, as well as skin tone or silhouette. The power of personal visual information to reveal identity is only now becoming better understood. It involves many complex dependencies. One example is that the ability of a particular characteristic of a person in a video (for example, posture) to reveal identity is dependent on how well the viewer of the video knows the person. Another example is that the ability of a particular characteristic to identify a person in a video depends on the context in which that person is depicted in the video. As in past years, the Visual Privacy Task offers researchers an opportunity to develop solutions for a particular problem. The problem represents a step towards gaining a better overall understanding of the relationship between personal visual information and privacy.

Target group
Those working in image/video processing and video-analytics for privacy protection applications.

Data
The same dataset as used by the Visual Privacy task in MediaEval 2013 will be used [7], but with additional annotations for:
i) Regions containing High or Low amounts of personally identifiable information in each video frame;
ii) Unusual events occurring in some frames within the dataset, such as persons fighting, stealing, etc.

In order to simulate use cases requiring context-aware privacy protection solutions [2], two types of annotations are provided with the dataset: unusual events (e.g., fighting, stealing, and dropping a bag), and High and Low information regions. A High information region is an image area containing rich detail, and a Low information region is an image area containing less detail.
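A context-aware solution might map these annotations to filter strength. The policy below is purely hypothetical (the label names, block sizes, and relaxation rule are illustrative assumptions, not part of the task), but it shows the kind of decision the annotations enable:

```python
# Hypothetical policy mapping annotation types to pixelation strength.
# All names and numeric values here are illustrative assumptions.
def choose_block_size(region_type, unusual_event):
    """Pick a pixelation block size from the annotation context.

    High information regions get coarser pixelation (stronger
    obscuration); during an annotated unusual event the filter is
    relaxed so that viewers can still interpret the action.
    """
    base = {"high": 24, "low": 8}[region_type]
    if unusual_event:          # e.g. fighting, stealing, bag drop
        base = max(4, base // 2)
    return base
```

The point of such a policy is that protection need not be uniform: it can adapt per region and per event, which is what the two annotation types make possible.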

Ground truth and evaluation
Participants submit privacy protected versions of the video clips in the test subset of the dataset. The submitted video clips will be evaluated based on:
i) The human-perceived level of privacy filtering, i.e., success in obscuring the High and Low regions of personal visual information.
ii) Human judgments of the level of retained information at a global level, i.e., the intelligibility and appropriateness (i.e., acceptability and attractiveness) of the privacy-filtered image as a whole [1].

Weightings reflecting consensus on the relative importance of criteria i) and ii) above will be agreed upon with the task participants.

The privacy protected video clips will be evaluated by each of three distinct communities of human evaluators:

a) naive viewers (accessed via a crowdsourcing platform)
b) surveillance monitoring staff
c) privacy filtering technology developers.

It is anticipated that a different weighting will be applied to each evaluator group.
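The combination of the two criteria and the three evaluator groups could be aggregated into a single weighted score along the following lines. All weights and group names below are placeholders, since the actual weightings will be agreed with the task participants:

```python
# Illustrative aggregation of per-group scores; weights are placeholders.
def overall_score(scores, criterion_w, group_w):
    """Combine per-group evaluations into one weighted score.

    scores[group] = (privacy, intelligibility), each in [0, 1];
    criterion_w = (w_privacy, w_intelligibility);
    group_w[group] = weight for that evaluator community.
    """
    total = 0.0
    for group, (privacy, intelligibility) in scores.items():
        per_group = criterion_w[0] * privacy + criterion_w[1] * intelligibility
        total += group_w[group] * per_group
    return total
```

For instance, equal criterion weights with group weights of 0.4/0.4/0.2 for naive viewers, surveillance staff, and developers would let the two practitioner-facing groups dominate the final score.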

References and recommended reading
[1] Badii, A., Al-Obaidi, A., Einig, M. MediaEval 2013 Visual Privacy Task: Holistic Evaluation Framework for Privacy by Co-Design Impact Assessment. In Working Notes Proceedings of the MediaEval 2013 Workshop, CEUR Workshop Proceedings, ISSN 1613-0073. Barcelona, Spain, 2013.

[2] Badii, A., Einig, M., Tiemann, M., Thiemert, D., Lallah, C. Visual Context Identification for Privacy-Respecting Video Analytics. In Proceedings of 14th IEEE MMSP International Workshop on Multimedia Signal Processing. Banff, Canada, 2012, 366-371.

[3] Dufaux, F., Ebrahimi, T. Scrambling for Privacy Protection in Video Surveillance Systems. IEEE Transactions on Circuits and Systems for Video Technology, 18(8), 2008, 1168-1174.

[4] Fradi, H., Eiselein, V., Keller, I., Dugelay, J.-L., Sikora, T. Crowd Context-Dependent Privacy Protection Filters. In Proceedings of 18th International Conference on Digital Signal Processing. Santorini, Greece, 2013.

[5] Friedland, G. Privacy Concerns When Sharing Multimedia in Social Networks. In Proceedings of ACM International Conference on Multimedia. ACM, Nara, Japan, 2012, 1121-1122.

[6] Korshunov, P., Cai, S., Ebrahimi, T. Crowdsourcing Approach for Evaluation of Privacy Filters in Video Surveillance. In Proceedings of International ACM Workshop on Crowdsourcing for Multimedia, CrowdMM’12. ACM, Nara, Japan, 2012.

[7] Korshunov, P., Ebrahimi, T. PEViD: Privacy Evaluation Video Dataset. In Applications of Digital Image Processing XXXVI, Proceedings of SPIE, International Society for Optics and Photonics. San Diego, California, USA, 2013.

[8] Piatrik, T., Fernandez, V., Izquierdo, E. The Privacy Challenges of In-depth Video Analytics. In Proceedings of IEEE MMSP International Workshop on Multimedia Signal Processing. Banff, Alberta, 2012, 383-386.

[9] Senior, A., Pankanti, S., Hampapur, A., Brown, L., Tian, Y.-L., Ekin, A., Connell, J., Shu, C. F., Lu, M. Enabling Video Privacy Through Computer Vision. IEEE Security & Privacy, 3(3), 2005, 50-57.

Task organizers
Atta Badii, University of Reading, UK
Pavel Korshunov, EPFL, Switzerland
Tomas Piatrik, Queen Mary University of London, UK
Volker Eiselein, Technische Universität Berlin, Germany
Ahmed Al-Obaidi, University of Reading, UK
Touradj Ebrahimi, EPFL, Switzerland
Christian Fedorczak, Thales Security Solutions & Services, France

Task auxiliary
Lucas Teixeira, University of Reading, UK

Task schedule
6 May: Development data release
2 June: Test data release
16 August: Run submission due (updated deadline)
15 September: Results returned
28 September: Working notes paper deadline