A decade ago, I did some work for the BBC on how they could disguise the faces of people in broadcasts (whilst still retaining, in each case, the impression of a live, moving face).
Since then, the field has moved on and the world is an even less safe place (with security cameras on almost every vertical surface). Recent research has found that we extract most recognition-related information from images of faces when they are around 30 x 30 pixels in size.
Rather than demanding ever more detail, it seems we recognise faces best when they are quite coarsely pixellated (but not too coarsely).
Today’s invention is therefore a new way for overloaded security observers to be presented with eg on-screen crowd scenes, when searching for individual terrorists (or suspects).
Knowing how far away members of a crowd are, it’s possible to pixellate the whole image so that an average-sized face occupies 30 x 30 pixels. This image would then be automatically blurred a little, to remove the distracting high spatial frequencies present at the edges of the pixels.
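As a rough illustration of the pixellate-then-blur idea, here is a minimal, stdlib-only Python sketch operating on a greyscale image represented as a 2D list of intensities. The function names, the block-averaging scheme, and the 3x3 box blur (standing in for "blurred a little") are my own assumptions, not a description of any actual system; in particular, face_px (the current pixel height of an average face, which would come from knowing crowd distances) is a hypothetical parameter.

```python
def pixellate(image, face_px, target=30):
    """Replace each square block of pixels with its mean intensity,
    choosing the block size so that a face currently face_px pixels
    tall ends up roughly `target` pixels tall."""
    block = max(1, round(face_px / target))  # e.g. a 90 px face -> 3 px blocks
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            cells = [(y, x)
                     for y in range(by, min(by + block, h))
                     for x in range(bx, min(bx + block, w))]
            mean = sum(image[y][x] for y, x in cells) / len(cells)
            for y, x in cells:
                out[y][x] = mean
    return out

def box_blur(image):
    """3x3 mean filter: a mild blur to damp the high spatial
    frequencies introduced at the edges of the pixel blocks."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            nbrs = [image[yy][xx]
                    for yy in range(max(0, y - 1), min(h, y + 2))
                    for xx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(nbrs) / len(nbrs)
    return out
```

In a real deployment one would use a proper image library and estimate face_px per image region from camera geometry, but the two steps would be the same: coarsen to ~30 pixels per face, then blur slightly.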
It would then be easier for observers to detect individuals quickly.