Almost every new digital camera now has an image stabilisation option, allowing the user to take handheld shots, with a giant telephoto lens, more or less in the dark. These seem to work using tiny gyroscopes that detect camera motion and send a signal, via a servomotor, to move the sensor plane in the opposite direction. If I’d suggested that level of complexity, people might well have quibbled.
One possible approach is to use postprocessing (deconvolution) techniques: if you know the path taken, you can mathematically ‘retrace the steps’ of the camera’s motion and subtract its effect at each point. But unless you are taking pictures of, e.g., stars, the path of the motion is usually unknown.
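To make the deconvolution idea concrete, here is a minimal sketch (hypothetical code, not from the original post) of the known-path case: an image blurred by a known motion kernel is recovered by Wiener deconvolution, i.e. dividing in the Fourier domain with a small noise term so frequencies the blur suppressed aren’t amplified into garbage. The kernel, image sizes and noise level are all illustrative assumptions.

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, noise_power=1e-3):
    """Recover an image blurred by a KNOWN motion kernel.

    Classic Wiener deconvolution: conj(K) / (|K|^2 + noise_power)
    in the Fourier domain. The noise_power term damps frequencies
    where the blur kernel is nearly zero.
    """
    K = np.fft.fft2(kernel, s=blurred.shape)
    B = np.fft.fft2(blurred)
    recovered = np.fft.ifft2(B * np.conj(K) / (np.abs(K) ** 2 + noise_power))
    return np.real(recovered)

# Demo: a small bright square, smeared by a 9-pixel horizontal
# 'camera shake' path (a crude stand-in for the real motion track).
sharp = np.zeros((64, 64))
sharp[30:34, 30:34] = 1.0

kernel = np.zeros((64, 64))
kernel[0, :9] = 1.0 / 9.0  # normalised horizontal motion path

# Circular convolution via FFT plays the role of the shaken exposure
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(kernel)))
restored = wiener_deconvolve(blurred, kernel)
```

The point of the demo is that `restored` is much closer to `sharp` than `blurred` is, but only because the kernel (the motion path) was handed to us, which is exactly the information a shaking photographer doesn’t have.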
So today’s invention is an alternative process which makes use of the fact that multiple digital shots are free. Even my Canon A700 can take about five 640×480 shots per second at 1/400 sec (with flash disabled).
-Take a sequence of such short-exposure images (each of which will look almost uniformly black)
-Locate the peak of intensity in each, using a very simple in-camera program
-Align the peaks and add the images together, so as to create a bright image, without motion blur
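The three steps above can be sketched in a few lines (a hypothetical illustration, not in-camera firmware): each near-black frame has its brightest pixel located, every frame is shifted so those peaks coincide, and the shifted frames are summed into one bright, unblurred image. The frame sizes, jitter offsets and brightness values are made-up test data.

```python
import numpy as np

def align_and_stack(frames):
    """Align short exposures on their intensity peaks, then sum.

    Each frame is nearly black, so the subject's brightest pixel
    stands above the noise floor. We shift every frame so its peak
    lands where the first frame's peak is, then add them. np.roll
    wraps at the edges, which is harmless if the subject is well
    inside the frame.
    """
    frames = [np.asarray(f, dtype=float) for f in frames]
    ref = np.unravel_index(np.argmax(frames[0]), frames[0].shape)
    stacked = np.zeros_like(frames[0])
    for f in frames:
        peak = np.unravel_index(np.argmax(f), f.shape)
        stacked += np.roll(f, (ref[0] - peak[0], ref[1] - peak[1]),
                           axis=(0, 1))
    return stacked

# Demo: five dim frames of one bright spot, jittered by camera shake
rng = np.random.default_rng(1)
frames = []
for dy, dx in [(0, 0), (2, -1), (-3, 4), (1, 2), (-2, -2)]:
    f = rng.uniform(0.0, 0.02, size=(48, 48))  # sensor noise floor
    f[24 + dy, 24 + dx] = 0.2                  # dim subject, displaced
    frames.append(f)

result = align_and_stack(frames)
```

After stacking, the peak ends up roughly five times brighter than in any single frame, with no motion smear, which is the whole trick: alignment happens per frame, so the shake never accumulates.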
(If you were prepared to live with lower-resolution images, you could of course just shoot video rather than multiple timed exposures.)