The 2012 Visual Privacy Task
For this task, participants must propose methods for obscuring human faces in digital imagery so as to render them unrecognisable. The aim is to ensure that a person appearing in a video frame cannot be visually identified from the obscured image. This serves as a privacy protection measure for any person whose picture is captured, knowingly or unknowingly, in a video frame.
Since the resulting partly obscured videos must nonetheless remain available for viewing, an optimal balance should be struck: however much of the facial identity is masked, the categorical identity of the masked data subject, i.e. as a human being, should still be recognisable to the viewer.
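One simple way to strike this balance, offered purely as an illustrative sketch and not as a method prescribed by the task, is pixelation: averaging the face region over coarse blocks so that the outline of a person remains visible while facial detail is destroyed. The bounding box below is a hypothetical input, assumed to come from an upstream face detector.

```python
import numpy as np

def pixelate_region(frame, box, block=8):
    """Obscure a rectangular region of `frame` by pixelation.

    `box` is (x, y, w, h) in pixels, assumed to come from a face
    detector. Each `block` x `block` tile inside the box is replaced
    by its per-channel mean, destroying facial detail while keeping
    the overall silhouette visible.
    """
    x, y, w, h = box
    region = frame[y:y + h, x:x + w].astype(float)
    out = frame.copy()
    for by in range(0, h, block):
        for bx in range(0, w, block):
            tile = region[by:by + block, bx:bx + block]
            out[y + by:y + by + tile.shape[0],
                x + bx:x + bx + tile.shape[1]] = tile.mean(axis=(0, 1))
    return out

# Usage on a synthetic frame (a real pipeline would iterate over
# video frames and detected face boxes):
frame = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
masked = pixelate_region(frame, (40, 30, 48, 48))
```

The block size controls the privacy/utility trade-off: larger blocks remove more identity information but also more of the visual context the viewer needs.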
This task is of interest to researchers in the areas of image processing, face detection and privacy protection techniques.
The data set will consist of about 100 high-resolution ASF video files, each about 1m30s long and containing at least one person walking in front of the camera in an indoor environment. People may also wear personal items (e.g. accessories) that could potentially reveal their identity. For this year, the list of such items considered in the task is limited to three: scarf, hat, and glasses. People may be near the camera or at a distance, so their faces vary considerably in pixel size and quality. Ambient lighting conditions vary across the scenarios. Participants will be provided with a number of video files for each scenario.
Ground truth and evaluation
The ground truth will be created manually by the task organisers and will consist of annotations of faces and of the personal accessories listed above. The obscuring of faces and of any personal items worn by a data subject (more than one such item can be present in each frame) will be evaluated using metrics based on human perception of salience in images (e.g. shape, brightness, density, colour) and on visual appropriateness. As a complement to these official metrics, a selection of the submitted runs will also be evaluated in a user study aimed at developing a deeper understanding of user perceptions of appropriateness in privacy protection. Insights from this user study should help refine the metrics and thereby inform the specification of the follow-on task to be set next year.
Schedule
1 June: Development data release
1 July: Test data release
3 September: Run submission due
15 September: Results returned
Task organisers
Tomas Piatrik, Queen Mary University of London, UK
Atta Badii, University of Reading, UK
Special thanks to the VideoSense team and in particular:
Mathieu Einig and Chattun Lallah, University of Reading, UK
This task is organised by the EU FP7 project VideoSense: Virtual Centre of Excellence for Ethically-guided and Privacy-respecting Video Analytics in Security